What I find puzzling about these proposals is that it SEEMS like they could be designed to achieve 90% of the stated goals with almost 0% of the loss of privacy.
The idea would be that devices could "opt in" to safety rather than opt out. Allow parents to purchase a locked-down device that always includes a "kids" flag whenever it requests online information, and simply require online services to not provide kid-unfriendly information if that flag is included.
I know a lot of people believe that this is all just a secret ploy to destroy privacy. Personally, I don't think so. I think they genuinely want to protect kids, and that the privacy destruction is driven by a combination of not caring and not understanding.
I generally try to think of things like this in terms of the natural incentives of systems, politicians, and well-meaning voters smoking hopium. But now, with the revelations of how insidious the Epstein class is, I have to wonder if the reason all these digital lockdowns are being shamelessly pushed with simultaneous urgency really is just a giant fucking conspiracy. The common wisdom has always been that conspiracies naturally fall apart as they grow, succumbing to an increasing probability of a defector. But I think that calculus might change when the members all have mortal crimes hanging over their heads.
Never mind thinking about how legitimacy was laundered through scientific institutions, and extrapolating to wondering how much that same dynamic applies to "save the children" lobbying NGOs and whatnot.
Better yet, require online services to send a 'not for kids' flag along with any restricted content then let families configure their devices however they want.
Even better, make the flags granular: <recommended age>, <content flag>, <source>, <type>
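To make the idea concrete, here is a minimal sketch of how a device-side filter might consume such granular flags. The field names, flag values, and policy logic are all hypothetical, invented for illustration; no real standard is being described:

```python
from dataclasses import dataclass

@dataclass
class ContentLabel:
    """Hypothetical per-item label sent by a site, per the granular flags above."""
    recommended_age: int        # e.g. 0, 13, 18
    content_flags: set          # e.g. {"violence", "drugs", "profanity"}
    source: str                 # who supplied the label
    media_type: str             # e.g. "video", "text"

@dataclass
class FamilyPolicy:
    """Device-side policy configured by the family, never sent to the server."""
    max_age: int
    blocked_flags: set          # per-family overrides

    def allows(self, label: ContentLabel) -> bool:
        # Block if rated above the child's level, or if it carries
        # any flag this particular family has opted to block.
        if label.recommended_age > self.max_age:
            return False
        return not (label.content_flags & self.blocked_flags)

# A family that blocks drug references but doesn't mind profanity:
policy = FamilyPolicy(max_age=13, blocked_flags={"drugs"})
song = ContentLabel(13, {"profanity"}, "publisher", "audio")
ad = ContentLabel(13, {"drugs"}, "publisher", "video")
print(policy.allows(song))  # True
print(policy.allows(ad))    # False
```

The point of the sketch is that all the policy lives on the device: the server only publishes labels, and never learns the viewer's age or the family's settings.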
No - kid-friendly should be something sites attest to and claim they ARE. That becomes an FTC-enforceable market claim (or insert other enforcement mechanism here).
Foreign sites, or places that aren't trying to publish things for children? The default state should be unrated content, for consumers (adults) prepared to see the content they asked for.
Just say the whole internet is not for kids without adult supervision and leave it at that.
It doesn't even matter if you can get something that technically works. Half the "age appropriate" content targeted at children is horrifying brainrot. Hardcore pornography would be less damaging to them.
This gets complicated when you need to start giving your kids some degree of independence. I would also argue this could be implemented in a more accessibility-oriented approach.
Also, not all 13-year-olds are at the same level of maturity, so the same content isn't appropriate for all of them. I find it very annoying that I can't just set limits like: no drug references, but idgaf about my kid hearing swear words.
On other machines:
I do not want certain content to ever be displayed on my work machine. I’d like to have the ability to set that.
Someone with a specific background may not want to see things like children in danger. This could even be applied to their Netflix algorithm. The website Does the Dog Die does a good job of categorizing these kinds of content.
But, in essence, they want to strip parents of the ability to give their kids the responsibility you describe. No letting your kids use social media, look at adult content, or whatever else. It's simply banned.
- It's much easier for web sites to implement, potentially even on a page-by-page basis (e.g. using <meta> tags).
- It doesn't disclose whether the user is underage to service providers.
- As mentioned, it allows user agents to filter content "on their own terms" without the server's involvement, e.g. by voluntarily displaying a content warning and allowing the user to click through it.
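As a sketch of that last point, a user agent could read a page-level rating tag and decide locally whether to warn, block, or display. The `content-rating` meta name and its values here are invented for illustration (the actual RSACi/ICRA labels used a different, PICS-based syntax):

```python
from html.parser import HTMLParser

class RatingParser(HTMLParser):
    """Collects a hypothetical <meta name="content-rating" content="..."> tag."""
    def __init__(self):
        super().__init__()
        self.rating = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "content-rating":
            self.rating = a.get("content")

page = '<html><head><meta name="content-rating" content="adult"></head></html>'
parser = RatingParser()
parser.feed(page)

# Client-side policy: the server never learns anything about the user.
if parser.rating == "adult":
    print("show interstitial warning")  # user may click through
elif parser.rating is None:
    print("treat as unrated")           # family decides the default
```

Because the decision happens entirely in the client, the "unrated" default remains a family choice rather than a server-enforced one.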
This exact method was implemented back around the turn of the century by RSAC/ICRA. I think only MSIE ever looked at those tags. But it seems like they met the stated goal of today's age-verification proposals.
That's why I have a hard time crediting the theory that today's proposals are just harmlessly clueless and well intentioned (as dynm suggests). There are many possible ways to make a child-safe internet and it's been a concern for a long time. But, just in the last year there are simultaneous pushes in many regions to enact one specific technique which just happens to pipe a ton of money to a few shady companies, eliminate general purpose computing, be tailor made for social control and political oppression, and on top of that, it isn't even any better at keeping porn away from kids! I think Hanlon's razor has to give way to Occam's here; malice is the simpler explanation.
The "problem" back then was that nothing required sites to provide a rating, and most of them didn't. So you didn't have much of a content rating system; instead, you effectively had a choice of what to do with "unrated" sites: allow them, and you've allowed essentially the whole internet; block them, and you might as well save yourself some money by calling up your ISP to cancel.
This could pretty easily be solved by just giving sites some incentive to actually provide a rating.
As others have said, the goal is the surveillance. But this notion goes further than that. So many ills people face in life can be solved by just not doing something. Addicted to something? Just stop. Fat? Stop eating. Getting depressed about social media? Stop browsing.
Some people have enough self control to do that and quit cold turkey. Other people don't even consciously realize what they are doing as they perform that maladaptive action without any thought at all, akin to scratching a mosquito bite.
If someone could figure out why some people are more self-aware than others, a whole host of the world's problems would be better understood.
The Purpose Of a System Is What It Does. Whether it is stated (or even designed) to protect kids, if it does anything more or different from that goal, it will perform those actions regardless of what is said about what the System should be doing.
I have not once seen a proposal actually contain a zero-knowledge proof.
This isn't something exotic or difficult.
It is clear to me there are ulterior motives, and perhaps a few well-meaning folks have been co-opted.
A ZKP will work as a base, but the proof mechanism will have to be combined with anti-user measures like device attestation to prevent things like me offering an API to continually sign requests for strangers. You can rate-limit it, or you can add an identifier, both of which make it not zero-knowledge.
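The proxying problem is easy to demonstrate: any proof that isn't bound to a person or device can simply be re-served. A toy sketch, with HMAC standing in for a real ZKP response purely to illustrate the oracle problem:

```python
import hashlib
import hmac

SECRET = b"device-key"  # stand-in for credentials provisioned to an adult's device

def prove_adult(challenge: bytes) -> bytes:
    # Stands in for a ZKP response: reveals nothing about identity,
    # only that *some* enrolled adult credential answered the challenge.
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

# Nothing binds the proof to the person asking for it. An adult can
# expose prove_adult() as a web API and sign challenges for strangers:
def proxy_api(challenge: bytes) -> bytes:
    return prove_adult(challenge)  # indistinguishable to the verifier

challenge = b"site-nonce-123"
assert proxy_api(challenge) == prove_adult(challenge)
# Hence the push for device attestation, rate limits, or identifiers,
# each of which erodes the "zero knowledge" property.
```

Whatever the underlying proof system, the verifier sees identical bytes either way, which is why the countermeasures end up targeting the client device rather than the cryptography.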
Parent's proposal is better in that it would only take away general purpose computing from children rather than from everyone. A sympathetic parent can also allow it anyway, just like how a parent can legally provide a teen with alcohol in most places. As a society we generally consider that parents have a right to decide which things are appropriate for their children.
> A ZKP will work as a base, but the proof mechanism will have to be combined with anti-user measures like device attestation to prevent things like me offering an API to continually sign requests for strangers
Spot on! The "technical" proposal from Google of a ZKP system is best seen as technically-disingenuous marketing meant to encourage governments to mandate the use of Google's locked down devices and user-hostile ecosystem.
The only sane way to implement this is to confine the locked-down computing blast radius to the specific devices that need child protection, rather than forcing the restrictions onto everyone.
I'm not sure what I feel about depriving teens of general purpose computing devices, either, which is the logical consequence of both the pseudo-ZKP scheme and parent's "underage signal". I believe most of us here learned programming through being able to run arbitrary programs, and that would never have happened if we only had access to locked down devices. And that habit of viewing computers as appliances controlled by other people isn't going to go away on their 18th birthday either.
Overall, while there is a reasonable argument in favor of age verification for some types of sites, I think the harms of implementing it would so drastically outweigh any benefits that it should not be done.
Sure, I'm sympathetic to that idea. The point is that it leaves such a decision up to parents, putting non-locked-down computers in the same position as any other potentially-harmful thing you might want to keep away from your kids.
Keeping parents in control also lets them make decisions contrary to what the corporate surveillance industry can legally get away with. For example, we can easily imagine an equivalent of Facebook jumping through whatever hoops it needs to in order to target minors, perhaps outright banned in various places but not generally in the US. If age restrictions are the responsibility of websites, then parents will still have been given no additional tools to prevent their kids from becoming addicted to crap like that.
Shooting from the hip about the situation you describe, I'd be tempted to give a kid a locked-down phone with heavy filtering (or perhaps without even a web browser so they can't use Facebook and its ilk), and then an unrestricted desktop computer which carries more "social accountability".
I think banning facebook/instagram/etc is one of the special cases where it makes more sense to be enforced by the site, because people use these out of mainly peer pressure and network effect. If a majority is kept off, the rest have little use for it regardless of their personal wishes. Heck, I'd reckon most kids don't actually want to use them all that much. Regardless of technical details, giving parents this control will also cause a lot of resentment if most parents don't go along.
As opposed to censoring internet content in general, which does not work because there will always be sites not under your jurisdiction and things like VPNs. I don't support any such censorship measures as a result.
But why not both? I'm coming from a USian perspective here where I don't see much possibility of actual widespread bans of these types of products, rather just a retrenching to what can be supported by regulatory capture.
Also, we're getting the locked down computing devices anyway - mobile phones as they are right now are a sufficient root of trust for parental purposes. So it seems pointless to avoid using that capability (which corpos are happy to continue embracing regardless) but instead put an additional system of control front and center.
> don't see much possibility of actual widespread bans
Why do you think there would be regulation to honor the "underage signal", but not explicitly ban social media sites for "unverified" users?
> seems pointless to avoid using that capability
It's not pointless, because relying on it will soon make these locked down devices mandatory for everyone under 18, and they will keep using it past 18. Everyone will lose general purpose computing, along with adblocking and other mitigations that protect you from various harms. It also leads to widespread surveillance being possible as parents will want to be able to "audit" their teen's usage.
> put an additional system of control front and center
The problem should be controlled at the source, not the destination, if feasible.
This means any legislation should be aimed at directing device manufacturers to implement software that can respect content assertions sent by websites.
> relying on it will soon make these locked down devices mandatory for everyone under 18, and they will keep using it past 18
Okay, but in 2026 we're basically at this point. Show me a mobile phone that doesn't have a bootloader locked down with "secure boot." For this particular threat that we had worried about for a long time, we've already lost. Not in the total-sweeping way that analysis from first principles leads you to, but in the day to day practical way. It's everywhere.
The next control we're staring down is remote attestation, which is already being implemented for niches like banking. The scaffolding is there for it to be implemented on every website ("verifying your device's security" — I get that basically everywhere these days). As soon as 80% of browsers can be assumed to have remote attestation capabilities, we can be sure sites will start demanding these signals and slowly clamping down on libre browsers (as has been done with browser/IP fingerprinting over the past decade).
Any of these talks of getting the server involved intrinsically rely on shoring up "device security" through remote attestation. That is exactly what can end ad-blocking and every other client-represents-the-user freedom.
> The problem should be controlled at the source, not the destination, if feasible.
You've already acknowledged VPNs and foreign jurisdictions, which means "at the source" implies a national firewall, right?
Unless your goal is to undermine any solution on this topic? I'm sympathetic to this, I just don't see that being realistic in today's environment!
I agree with controls on addictive/exploitative platforms like Facebook or Instagram. These can be feasibly controlled at the source.
In principle I agree with keeping some content away from children, but I don't think any of the implementations will work without causing worse problems, so I disagree with implementing those.
> in the day to day practical way
There's a world of difference between practically required and it being illegal to use anything else, even if initially for a small set of population. You still have a choice to avoid those now. Moreover there is a fairly large subculture of gamers etc opposed to these movements, and open computing platforms will take a long time to fizzle out without intervention.
If you mandate locked down devices for kids, it will very quickly become locked down devices for everyone except for "licensed developers", because no one gets a bunch of new computers upon becoming an adult, and a new campaign from big tech will try to associate open computers with criminals.
> Moreover there is a fairly large subculture of gamers etc opposed to these movements, and open computing platforms will take a long time to fizzle out without intervention.
You kind of skipped over the distinction I made between "secure boot" and "remote attestation". Based on what you wrote here I'm not quite sure if you understand the difference between them. And in the context of locked down computing, the difference between them, and their specific implications, is highly important.
I'm not pointing this out to shoot down your point or something, rather I think you'd benefit from learning about this outside of this comment. But I'll be a little more explicit here to get you started:
The worry with secure boot was based around the possibility that all manufacturers would stop making non-locked-down devices. This has not really panned out - all phones basically have secure boot, there are many you can install your own OS image onto, and there are many escape hatches.
The worry with remote attestation is that website owners will be able to insist that you run specific software environment and/or hardware, and deny you access otherwise. On desktop web browsers, this is the WEI proposal that seems to have stalled. But on mobile, this is still going full speed ahead, both web and apps (SafetyNet).
The thing about remote attestation is that its restrictions take the same shape as current CAPTCHA nags, IP-block-based hassling, etc. When websites see that more and more visitors are compliant, they can crank up the pain. First it's invisible, then it's a warning, then it's a big hassle (e.g. lots of CAPTCHAs), and then finally it's a hard lockout. This can happen, led by specific industries (e.g. banking), regardless of any communities working to resist it. What you should picture is all of our old computers working just fine, but being unable to access modern websites, in a way that cannot be technically worked around.
It may be simple to sleuth out kid status over time, but I would be very uncomfortable with a tag that verifies kid status instantly, with no challenges, as it would provide a targeting vector and defeat safety.
> I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.
Advancing a case for a precedent-creating decision is a well-known tactic for creating the environment of success you want for a separate goal.
It's possible you can find a genuine belief in the people who advance the cause. Charitably, they're perhaps naive or coincidentally aligned, and uncharitably sometimes useful idiots who are brought in-line directly or indirectly with various powerful donors' causes.
It's tiring how legislation like this is becoming predictable and feels inevitable. This article even mentions the verification needing to be embedded in the operating system itself, spelling the death of open computing.
Some people have been saying for so long that you should need a license to use the internet, and now that we have it, it's a little different than we intended :(
I'd argue it's more like KYC for the internet. Something HN users have brutally and ruthlessly defended for banking every time I argue it's a 4A violation (in fact, it's one of the most fiercely defended things anytime I bring it up).
Give it 20+ years and you'll be called a kook for thinking otherwise.
The government requires the bank to search your identity documents to open an account, even when there is no individualized suspicion that you've broken the law to justify searching your papers, as part of the KYC regulations passed post-9/11. Technically the statute doesn't require that they actually search your documents; rather, it's enforced through a byzantine series of federal regulatory frameworks that effectively require something approximating "industry standard" KYC compliance, which ends up meaning verifying the customer by inspecting their identity documents and perhaps others. This is why, e.g., when I was homeless, even my passport couldn't open an account anywhere: they wanted my passport plus some document showing an address to satisfy KYC requirements.
Maybe I will have more energy for it tomorrow. I've been through this probably a couple dozen times on HN, and I don't have the energy to go through the whole rigmarole today, because it usually results in 2-3 days of someone fiercely disagreeing down some long chain; in the end I provide all the evidence, but by that point no one is paying attention, and it just becomes a pyrrhic victory where I get drained dry for no one to give a shit. I should probably consolidate it into a blog post or something.
It isn't a coincidence that we have two Palantir articles on the front page alongside this. It's in the cards, and Americans seem to be ignoring it, more than happy to accept the dystopian future this leads to.
It's incredibly sad as an optimistic person trying to find any silver lining here.
William Tong, Anne E. Lopez, Dave Yost, Jonathan Skrmetti, Gwen Tauiliili-Langkilde, Kris Mayes, Tim Griffin, Rob Bonta, Phil Weiser, Kathleen Jennings, Brian Schwalb, Christopher M. Carr, Kwame Raoul, Todd Rokita, Kris Kobach, Russell Coleman, Liz Murrill, Aaron M. Frey, Anthony G. Brown, Andrea Joy Campbell, Dana Nessel, Keith Ellison, Lynn Fitch, Catherine L. Hanaway, Aaron D. Ford, John M. Formella, Jennifer Davenport, Raúl Torrez, Letitia James, Drew H. Wrigley, Gentner Drummond, Dan Rayfield, Dave Sunday, Peter F. Neronha, Alan Wilson, Marty Jackley, Gordon C. Rhea, Derek Brown, Charity Clark, and Keith Kautz
--
Always operate under the assumption that the people serve the state, not the other way around. There are some names in that list that are outwardly infamous for this behavior, and none are surprising considering what type of person seeks to be an AG. Maybe fighting fire with fire is appropriate: no such thing as a private life for any of these people, all their communications open to the public 100% of the time, with precisely 0 exceptions. It's only fair, considering that is their goal for everyone not of the state.
Anonymous or pseudonymous publishing is an essential element of Constitutionally protected freedom of speech [1]. If the government knows who's posting what, it's a lot easier for them to harass, intimidate, punish, or otherwise suppress people who post uncomfortable or politically inconvenient things.
Recent New York Times headline: "Homeland Security Wants Social Media Sites to Expose Anti-ICE Accounts". I'm sure the administration would be very happy if there was a database of government issued photo ID's for every account on Facebook. And if the government gets the ID's of those accounts, I'm quite sure nothing good will come of it for either the individuals involved, or the ability of the people to understand whether the government department in question is going about its duties in a way that respects the law and the Constitution.
[1] My username refers to an anonymously published pamphlet that played a key role in US history.
"So they can act accordingly" is the variable: a simple headcount is one thing, but when it creeps toward a census, it becomes prone to abuse.
Putting the conspiracy hat on: the exploit is to direct as many installed AGs as possible to push for such bills, with no big letdown if they don't pass. Why? Because the demographics on dissension are valuable, and are passed to a hostile federal government.
Being active about KOSA won't get you put on a "list of dissenters". This is an issue being pushed by the States and your federal lawmakers, not the executive branch.
Instead of lobbying for taking away everyone's privacy, why isn't the government going after those they say are the actual culprits? From the article:
"The attorneys general argue that social media companies deliberately design products that draw in underage users and monetize their personal data through targeted advertising. They contend that companies have not adequately disclosed addictive features or mental health risks and point to evidence suggesting firms are aware of adverse consequences for minors."
Okay, so why aren't they going after the social media companies?
I guess HN leans strongly towards the pro-anonymity viewpoint, and sure, the "think of the children!" claim has worn very thin.
But: what methods could reduce the harm that anonymous internet discourse so often produces? People send death threats, threats of sexual assault, harassment of all kinds, unsolicited pics of their genitals, swatting attacks, just absolute nonsense all day every day, hiding behind the veil of anonymity and the asymmetry between the cost of sending such trash and the cost of tracking it down and doing something about it.
There is, quite unusually in politics, a bit of a bipartisan consensus that this is a real, real problem, and that steps like this, or repealing Section 230, would help. Would it, or would it not, and if not, what alternatives are there?
At a certain point the world just becomes less appealing to live in. Day by day death becomes more appealing; what do you have to lose when life just means living in a pigpen?
The strange thing is that all leaders end up doing the same. I wonder if direct democracy would be possible. Right now we're like in The Truman Show.
The Sinicization of the West continues, yet people still aren't pushing back; there are no indefinite general strikes, nor is there anyone foaming at the mouth demanding the arrest of the coup plotters in power...
RIP Internet. I don't agree with any of this, but I don't see the majority of people protesting it. If anything, they're promoting it, because: think of the children.
"Many social media platforms deliberately
target minors, fueling a nationwide youth mental health crisis."
". These
platforms are intentionally designed to be addictive, particularly for
underaged users, and generate substantial profits by monetizing
minors’ personal data through targeted advertising. These companies
fail to adequately disclose the addictive nature of their products or
the well-documented harms associated with excessive social media
use. Increasing evidence demonstrates that these companies are
aware of the adverse mental health consequences imposed on underage users, yet they
have chosen to persist in these practices. Accordingly, many of our Offices have initiated
investigations and filed lawsuits against Meta and TikTok for their role in harming minors. "
Yet the companies aren't being regulated, nor are the algorithms, the marketing, or even their existence. It's the users who are the problem, so everyone has to submit their identity to use the Internet if this passes.
"We rate Reclaim The Net as Right-Biased based on story selection and editorial positions that align with a conservative perspective. We also rate them Mixed for factual reporting due to poor sourcing, lack of transparency, and one-sided biased reporting."
I'm going to go against the pessimism here and say that this is the US, not Europe or the UK, and the First Amendment has teeth. There's ample Supreme Court precedent that anonymous speech is a protected right (Talley v. California, McIntyre v. Ohio Elections Commission, etc.), so I'd expect efforts like this to flounder in the courts if push came to shove.
What the US Supreme Court decides is much less relevant than it used to be, because the US executive can and will simply ignore the decision. If anyone in the US administration breaks the law, they can be pardoned by the US president; if the president breaks the law, he's immune from prosecution based on a previous Supreme Court decision; and no court can enforce anything if the executive doesn't comply with the court order.
I have mixed feelings about this website, reclaimthenet. In one breath it supports net neutrality and opposes ID laws, and in the next (not in this particular article) it mentions the Twitter Files and says the UK is a dictatorship for arresting Lucy Connolly.
I blame HN and Silicon Valley in general for consistently treating keeping children safe online as a parental responsibility only, rather than a government-parent team effort like every other regulation.
This loophole, "think of the children," would not exist if SV had gotten over itself and not called every solution unworkable while insisting that any solution parents receive, no matter how sloppy or confusing, is workable.
Yeah exactly, had it not been for Facebook and the rest of social media not taking children online seriously, The Simpsons wouldn't have had to mock the cultural meme of blaming everything on saving children back in 1996 https://www.youtube.com/watch?v=RybNI0KB1bg