
I feel like they don't use those signals, just time spent...and you spend more time fishing for the 'not interested' button

How about complaining that brain waves get sent to a server? I'm a neuroscientist, so I'm not going to say that the EEG data is mind reading or anything, but as a precedent, non-privacy of brain data is very bad.

Non-privacy of "this person is currently sleeping" data is very bad as well, for different reasons.

You know, now that I'm thinking about it, I'm beginning to wonder if poor data privacy could have some negative effects.


It sounds like there was "presence in room" data as well, which could be very bad

This is the easiest signal though, on basically any account. You can see the time that communication happens, and the times when it doesn't.

For example, a while back I wanted to map out my sleep cycle, and I found a tool that charts your browser history over a 24-hour period; it mapped almost perfectly to my sleep/wake periods.
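If you want to try the same trick yourself, here's a toy Python sketch (assuming you've exported your visit timestamps, one ISO-8601 timestamp per line, into a hypothetical history.txt):

    # Bucket browsing timestamps by hour of day; the empty rows are the sleep window.
    from collections import Counter
    from datetime import datetime

    with open("history.txt") as f:  # hypothetical export of visit timestamps
        hours = [datetime.fromisoformat(line.strip()).hour for line in f if line.strip()]

    counts = Counter(hours)
    for hour in range(24):
        print(f"{hour:02d}:00  {'#' * counts.get(hour, 0)}")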


Unsecured fitness monitor data revealed military guard post (IIRC) activity a while back.


Not because you knew how much someone worked out, but because it had GPS.

True.

But keep in mind that other, less obvious data sources can often lead to similar issues. For example, phone accelerometer data can be used to precisely locate someone driving in a car in a city by comparing it with a street map.

In the context of the military even just inferring a comprehensive map of which people are on which shift and when they change might be considered a threat.
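To make the accelerometer point concrete, here is a deliberately toy sketch of the idea (not the actual published method): reduce the inertial trace to a sequence of turns and match it against turn sequences precomputed from a street map. Every number and route name below is made up.

    # Collapse a yaw-rate trace (rad/s, sign = turn direction) into a turn sequence,
    # then match it against candidate routes whose turn sequences come from a map.
    def turns_from_yaw_rate(yaw_rate, threshold=0.3):
        turns, in_turn = [], False
        for w in yaw_rate:
            if not in_turn and abs(w) > threshold:
                turns.append("L" if w > 0 else "R")
                in_turn = True
            elif in_turn and abs(w) < threshold / 2:
                in_turn = False
        return turns

    observed = turns_from_yaw_rate([0, 0.5, 0.6, 0, -0.4, -0.5, 0, 0.5, 0])  # L, R, L

    candidate_routes = {  # hypothetical routes with precomputed turn sequences
        "Main St -> 5th Ave -> Oak St": ["L", "R", "L"],
        "Main St -> 7th Ave": ["L", "L"],
    }
    print([name for name, seq in candidate_routes.items() if seq == observed])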


People will be lining up to have their brainwaves harvested because it'll be mildly easier to send emails or something similarly inane.

Corporations will be lining up to require their employees have their brainwaves harvested, so they can fire employees who aren't alert enough.

Will someone invent the equivalent of a mouse jiggler to get around this?

Porn?

They'll update the required brain state to "alert but not enjoying yourself".

You could read the alertness level from an EEG, which could be helpful to a burglar; a device reporting slow-wave sleep status seems ideal.

How useful could something like this be for research? I'm not a neuroscientist so I have no clue, but it's the only justification I can think of...

The general idea of an EEG system that posts data to a network?

Very, but there are already tons of them at lots of different price, quality, and openness levels. A lot of manufacturers have their own protocols; there are also quasi-standards like Lab Streaming Layer for connecting to a hodgepodge of devices.
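As an aside, reading from an LSL stream takes only a few lines with the pylsl package. A minimal sketch, assuming some acquisition software is already broadcasting an EEG stream on the local network:

    from pylsl import StreamInlet, resolve_byprop

    streams = resolve_byprop("type", "EEG", timeout=5.0)  # discover EEG streams on the LAN
    inlet = StreamInlet(streams[0])

    while True:
        sample, timestamp = inlet.pull_sample()  # one multichannel sample + LSL timestamp
        print(timestamp, sample)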

This particular data?

Probably not so useful. While it's easy to get something out of an EEG set, it takes some work to get good-quality data that's not riddled with noise (mains hum, muscle artifacts, blinks, etc.). Plus, brain waves on their own aren't particularly interesting; it's seeing how they change in response to some external or internal event that tells us about the brain.
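To give a flavor of the cleanup involved, here is a sketch of just one routine step: notching out 60 Hz mains hum with scipy, on synthetic data (real pipelines also handle blinks, muscle artifacts, drift, bad channels, and so on):

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 250.0                                  # assumed sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    eeg = np.random.randn(t.size) + 2.0 * np.sin(2 * np.pi * 60 * t)  # fake EEG + mains hum

    b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)     # narrow notch centered at 60 Hz
    clean = filtfilt(b, a, eeg)                 # zero-phase filtering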


Not a neuroscientist either but I would imagine that raw data without personal information would not be useful for much. I can imagine that it would be quite valuable if accompanied with personal data plus user reports about how they slept each night, what they dreamed about if anything, whether it was positive dreams or nightmares etc. And I think quite a few people wouldn’t mind sharing all of that in the name of science, but in this case they don’t seem to have even tried to ask.

What if you're gonna think about your social security number 30,000 times in your dreams, and someone knows the pattern? See the danger? That's evil.

I believe they use it for sleep tracking

If they're taking patient data for research without permission, they are not ethical researchers.

Is it really “without permission” if it’s from a server for which the access credentials have been deliberately published to the entire internet?

From the perspective of research ethics: it is very much without permission in that situation

If it's without the patient's permission, then yes, it is without the only permission that matters for medical ethics.

I would presume data privacy laws already have good precedent for health data?

> I would presume data privacy laws already have good precedent for health data?

Google for a list of all the exceptions to HIPAA. There are a lot of things that _seem_ like they should be covered by HIPAA but are not...


Interesting...

Only for "covered entities" under HIPAA (at least in the US)

"Broker" is right there in the title of the post.

Baby's gotta get some cash somewhere.


An MQTT "broker" just means "server"; that's MQTT terminology.
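For anyone unfamiliar, the broker is simply the pub/sub server that clients connect to. A minimal subscriber sketch with paho-mqtt; the broker host and the catch-all topic are placeholders:

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload)              # e.g. raw EEG / sleep-state payloads

    # paho-mqtt >= 2.0; with 1.x it's just mqtt.Client()
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_message = on_message
    client.connect("broker.example.com", 1883)     # hypothetical broker host
    client.subscribe("#")                          # '#' wildcard = every topic on the broker
    client.loop_forever()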

Dark humor is like food.

Not everybody gets it.


Here it's more Poe's law.

Millions of people voluntarily use Gmail which gives a lot more useful data than EEG output to DHS et al without a warrant under FAA702. What makes you think people who “have nothing to hide” would care about publishing their EEG data?

Sorry. No. I'm not going to get pushed around by a bunch of bootlickers.

Meh. This just sounds like all the interface theory stuff we users have to deal with, where useful things are removed in favor of a 'clean' and empty interface that makes you work harder to get your actual work done.

Then there's Adobe, who removes features only to add them back and justify the next version; or clones a feature into a separate product so they can justify the next subscription price rise; or moves it into a different product so they can justify a subscription expansion.

This has already been an area of research, both publicly and most likely in private/government defense research. In a targeted situation, i.e. surveillance of a household of 6, this would work easily enough...but I doubt there is enough information to provide reliable (high-AUC) tagging of IDs in a public scenario of hundreds to thousands of individuals.

https://www.theregister.com/2025/07/22/whofi_wifi_identifier...

> Researchers in Italy have developed a way to create a biometric identifier for people based on the way the human body interferes with Wi-Fi signal propagation.. can re-identify a person in other locations most of the time when a Wi-Fi signal can be measured. Observers could therefore track a person as they pass through signals sent by different Wi-Fi networks – even if they’re not carrying a phone.. their technique makes accurate matches on the public NTU-Fi dataset up to 95.5 percent.


That's 95.5% accuracy with a 16-person dataset in a highly constrained environment.

Wi-Fi uses long wavelengths; you can cancel out the noise with one person, but crowds are all distorting the same very weak signal here. 5 GHz is a 6 cm wavelength, while visible light is 380 to about 750 nanometers.


I bet there is.

Off the top of my head, I bet body composition combined with gait analysis would be enough to uniquely identify an individual.


Xfinity's is sensitive enough to configure for animals or humans under 40-70 lbs; I forget the exact number.

From my minimal research, it could be pushed a lot further.

What I'm particularly interested in is the edge-case scenario of duplexes and apartments, where neighbors are unwittingly subjected to surveillance. It takes little more than a firmware update to give these routers such capabilities. No reason to think it won't become common, and there are a handful of other companies basically offering just this as a service.

Strange times.

Edit: I should have mentioned the obvious, that pesky thing no one wants to address... When AI is added to this tech, it will get grotesque. Gait recognition, behavioral patterning, etc. Not something to sneeze at.

Possibly what was used to watch Maduro, along with synthetic aperture radar etc.


If it's a public scenario, you don't need that; they're using the Wi-Fi devices on people's persons to ID them. The concern is more gait analysis, and by some accounts even lip reading is possible with mm-wave 5G.

This is awesome stuff. I love pinball and I could almost see myself getting into building (a simple) one someday.

That benchmark is pretty saturated, tbh. A "regression" of such small magnitude could mean many different things or nothing at all.

Isn't SWE-Bench Verified pretty saturated by now?

Depends what you mean by saturated. It's still possible to score substantially higher, but there is a steep difficulty jump that makes climbing above 80%ish pretty hard (for now). If you look under the hood, it's also a surprisingly poor eval in some respects - it only tests Python (a ton of Django) and it can suffer from pretty bad contamination problems because most models, especially the big ones, remember these repos from their training. This is why OpenAI switched to reporting SWE-Bench Pro instead of SWE-bench Verified.

Dumb question: Can inference be done in a reverse pass? Outputs predicting inputs?

Strictly speaking: no. The "forward pass" terminology does not imply that there exists a "reverse pass" that does the same kind of computation. Rather, it's describing two different kinds of computation, and the direction they occur in.

The forward pass is propagating from inputs to outputs, computing the thing the model was trained for. The reverse/backward pass is propagating from outputs back to inputs, but it's calculating the gradients of the parameters for training (roughly: how much changing each parameter in isolation affects the output, and whether it makes the output closer to the desired training output). The result of the "reverse pass" isn't a set of inputs, but a set of annotations on the model's parameters that guide their adjustment.

The computations of the forward pass are not trivially reversible (e.g. they include additions, which destroy information about the operand values). As a sibling thread points out, you can still probabilistically explore what inputs _could_ produce a given output, and get some information back that way, but it's a lossy process.

And of course, you could train a "reverse" model, one that predicts the prefix of a sequence given a suffix (trivially: it's the same suffix prediction problem, but you train it on reversed sequences). But that would be a separate model trained from scratch on that task, and in that model the prefix prediction would be its forward pass.
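A tiny PyTorch sketch of the two directions (illustrative only, not tied to any particular model): the forward pass maps inputs to outputs, and loss.backward() fills in gradients on the parameters rather than reconstructing the inputs.

    import torch

    model = torch.nn.Linear(4, 1)
    x = torch.randn(8, 4)                 # inputs
    target = torch.randn(8, 1)

    output = model(x)                     # forward pass: inputs -> outputs
    loss = torch.nn.functional.mse_loss(output, target)
    loss.backward()                       # backward pass: gradients w.r.t. parameters

    print(model.weight.grad.shape)        # annotations on parameters, not recovered inputs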


I do want to see ChatGPT running upwards on my screen now, predicting earlier and earlier words in a futile attempt to explain a nonsense conclusion. We could call it ChatJeopardy.

Not as trivially as the forward direction; unsurprisingly, information is lost, but it works better than you might expect. See for example https://arxiv.org/pdf/2405.15012

Sounds like a great premise for a sci-fi short story.

Sci-fi? You mean historical fiction!

The power is in the tails
