Hacker News | kn1ght's comments

Hey, I've also worked at a company that was doing exactly this for DK, the UK, etc. As far as I remember, one of the bigger issues was tissue density and masking effects, where things might not be as clear-cut for a radiologist. We saw a lot of opportunity in that area and in prognosis: the whole health-economics aspect of visit frequency.


> tissue density and masking effects

But those are also harder for software-based systems.


The whole protocol was designed very cleverly from the start to avoid the privacy concerns that might keep people from using it [1], because the main drawback is that it's completely useless unless a critical mass of users actually use it.

It is very difficult to explain to people who are not curious about the technology; all they hear is 'tracing = tracking = no privacy'.

I imagine this is why the app has been pushed silently, but to my mind just having it installed and active on phones doesn't help much if those same users aren't aware of it and don't actively report their infections. So you'll have a small group that consciously installs it and reports when infected, and a much larger group that merely gets a notification that they've been close to an infected individual. I suppose the hope is that people who see those notifications and subsequently test positive will be curious enough to find out how to report, etc. It's risky, especially given the backlash over the silent installations...

[1] https://covid19-static.cdn-apple.com/applications/covid19/cu...
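To give a concrete sense of the privacy design, here is a minimal sketch in the spirit of the Apple/Google exposure-notification scheme. The real spec derives Rolling Proximity Identifiers with HKDF/AES over 10-minute windows; the function names and parameters below are hypothetical simplifications, not the actual API.

```python
import hashlib
import hmac
import os

def daily_key() -> bytes:
    """Each phone generates a fresh random key per day; it never leaves
    the device unless the user reports an infection."""
    return os.urandom(16)

def rolling_ids(day_key: bytes, intervals: int = 144) -> list[bytes]:
    """Short-lived identifiers broadcast over BLE, derived so observers
    can't link them to each other or back to the phone."""
    return [hmac.new(day_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for i in range(intervals)]

def exposure_check(reported_keys: list[bytes], heard_ids: set[bytes]) -> bool:
    """After an infected user uploads their daily keys, every phone
    re-derives the IDs locally and checks for overlap with what it heard.
    No central server ever learns who met whom."""
    for k in reported_keys:
        if heard_ids & set(rolling_ids(k)):
            return True
    return False
```

The point of the design is that matching happens on the phone: the server only ever sees the daily keys of users who voluntarily report an infection.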


>It is very difficult to explain to people who are not curious about the technology; all they hear is 'tracing = tracking = no privacy'.

But this is literally true. This is an app pushed to people remotely without their consent or even their knowledge. People cannot trust the claim that there is no privacy gotcha involved, especially when previous attempts appear to have exposed the log of this information to all installed apps:

https://themarkup.org/privacy/2021/04/27/google-promised-its...

You cannot trust them when they say that the app respects your privacy.


Is there something special about it being an app? The contact-tracing framework the app uses was already pushed to people remotely without their consent or knowledge, as has the content of every update ever to the Google Services Framework. And in the big scheme of shady shit that Android does without the user's consent or knowledge, this is a pretty benign, privacy-respecting example.


That's fascinating, I wasn't aware that they actually did that to allow for semi private tracing.

That's going to be hell to explain though, as you've already mentioned.


I worked at a small research company that had a method for this (segmentation + CNNs, etc.) a few years back. We had some exciting results on masking effects too, but once we got into SaMD (Software as a Medical Device) territory and the main revenue stream (grants) dried up, engineering was shut down.


One of my university professors was working on proof-carrying code (https://en.wikipedia.org/wiki/Proof-carrying_code), and I was cursorily involved. Although I agree with you, the fact that there is a way forward, one grounded entirely in mathematics (on the formal side), also inclines me to agree with OP. I don't see a contradiction. As for the more informal side: the way something is used does not necessarily define its nature.


Indeed, I don't disagree with OP. I just felt compelled to point out major problems with Dijkstra's argument, because the memo is fairly well known while its very real shortcomings are not.

The general response seems to be "yeah, ideally we should be more mathematically rigorous with programs if we have the time and the mathematical expertise", but few have enough exposure to formal methods to understand that it's far from the panacea the memo makes it out to be, and that there are more reasons not to do it than just time and expertise.


There is a way forward, but it is tedious as all hell. I love formal methods; I spent years in grad school working on them. But the truth is that formal reasoning struggles to scale to interesting programs, especially if you have to consider open programs, and it is utterly incomprehensible to most software engineers. Compare symbolic execution, which seems amazingly elegant and powerful, with coverage-guided fuzzing, which is hacky and random: coverage-guided fuzzing eats symbolic execution for lunch.
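To make the comparison concrete, here is a toy coverage-guided fuzzing loop. The target self-reports which branches it executed, something a real fuzzer (AFL, libFuzzer) would obtain via instrumentation; everything here is a hypothetical illustration, not any particular fuzzer's API.

```python
import random

def target(data: bytes) -> set[str]:
    # Toy target that self-reports the branch labels it executed.
    cov = {"entry"}
    if len(data) > 3:
        cov.add("len>3")
        if data[0:1] == b"F":
            cov.add("F")
            if data[1:2] == b"U":
                cov.add("FU")
                if data[2:3] == b"Z":
                    cov.add("FUZ")
    return cov

def mutate(data: bytes, rng: random.Random) -> bytes:
    # Blind byte-level mutations: flip, append, or delete one byte.
    buf = bytearray(data or b"\x00")
    op = rng.randrange(3)
    if op == 0:
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    elif op == 1:
        buf.append(rng.randrange(256))
    elif len(buf) > 1:
        del buf[rng.randrange(len(buf))]
    return bytes(buf)

def fuzz(iterations: int = 50_000, seed: int = 0) -> set[str]:
    # The key feedback loop: an input that discovers new coverage is
    # kept in the corpus and mutated further, so progress compounds.
    rng = random.Random(seed)
    corpus = [b"seed"]
    global_cov = target(corpus[0])
    for _ in range(iterations):
        child = mutate(rng.choice(corpus), rng)
        cov = target(child)
        if not cov <= global_cov:
            global_cov |= cov
            corpus.append(child)
    return global_cov
```

The deep guards (`F`, then `FU`, then `FUZ`) are exactly the kind of constraint a symbolic executor solves in one query, yet the dumb random loop usually stumbles through them too, because each newly covered prefix is kept and mutated further.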


Programming languages are not full linguistics, at least not yet. We focus primarily on syntax, semantics, and pragmatics, all of which are firmly rooted in mathematics: grammars defined as expressions and mathematical relations. This in turn enables formal proofs with which we can reason about outcomes. I don't know whether the kind of search you mention exists, or whether it's more an optimization of language features to the problem space. Perhaps in a future where we program our machines by having a conversation with them...


I once read an article suggesting that reading computer code does not activate the regions of the brain that are involved in language processing:

https://news.mit.edu/2020/brain-reading-computer-code-1215

This may indicate that the sampled programmers did not program the way one can with, for example, declarative languages, i.e., primarily as a linguistic activity in which we describe what we know about the task.

I also like the description in Programming as Theory Building by Peter Naur:

https://pages.cs.wisc.edu/~remzi/Naur.pdf

The feasibility of this approach may depend on the programming languages one uses.


Thanks for the articles. I definitely don't do it the way I process language. To use the same example: I describe a task unambiguously, which makes it a translation from written text to memory regions or execution paths. It gets more concrete than just what I know about the task; a better parallel would be writing mathematical formulas. When I process language, I believe more of my brain is engaged in empathy, in matching context, in allowing ambiguity, in storing ideas and concepts to be disambiguated later.


Yeah, I remember that my first CS successes came once I knew enough of the mathematics and the properties of the hardware to "think like a machine", and I tried to explain to peers who were still struggling how to run a more step-by-step execution in their minds.

In hindsight it's clear that you don't tell a story when you program a computer, or at least not the way I'd describe my day or teach my kids about a phenomenon. It feels more like unrolling a tape and executing a recipe in the simplest way you can, after you've built a mental model of the machine. You mimic an actor, maybe, rather than imagining an event?


I have a very different experience; when I code, it feels similar to mapping out a conversation or constructing a narrative, and this is how I teach it to my students.

I've met people who view it as mathematics or philosophy or a bunch of other things; in the end, I think there is a wide range of valid mental models for how computers function and therefore how to communicate with them.


Did they compare identical information expressed as written language versus as a programming language, or did they compare messages containing logic (expressed in programming languages) with messages containing no logic (expressed in written language)?

