alterom's comments (Hacker News)

Sorry, but this seems to be so off-base (as well as naively optimistic) that I am having difficulty responding to this.

But I'll try nevertheless.

- >Want to run a program as you wish? Great! It's easier than ever to build a replacement.

Non-sequitur. Building a replacement does nothing for being able to run a program as you wish.

Nobody else is able to run your program as they wish unless you release it with a Copyleft license.

- >Want to study how a program works and to modify it? This is now much more achievable.

Reverse engineering is more achievable.

Modifying a program when you have its source code, its documentation, and a legal right to do so guaranteed by the license is (and always will be) easier than modifying one without those things.

- >Want the freedom to redistribute copies to help others? Build your own version! It may not even be copyrightable if it's 100% generated (IANAL).

So, that's not about redistributing copies. That's about building an alternative option.

I can download an Ubuntu image and get Libre Office on it with a click.

Go vibe-code me a Microsoft Excel running on Windows 11, please, and tell me it's easier.

- >Want to distribute modified versions? yes! see previous.

You're not even trying here.

One can't legally modify and redistribute copyrighted works without explicit permission to do so.

You keep saying "...but vibe coding allows anyone to create something else entirely instead and do whatever with it!" as if that is a substitute for checking out a repo, or simply downloading FOSS software to use as you wish.

- >I dunno; seems like generative coding can be as much a liberator as any kind of problem.

Now, that statement I fully agree with.

Generative coding is a liberator as much as any kind of problem is.

Headache, for example, is generally a problem. It's not a great liberator.

Neither is generative coding.

Now, you probably didn't intend to say what you wrote. And that's exactly why generative coding is not a panacea: the only way to say things that you mean to say is to write precisely what you mean to say.

Vibe-coding (like any vibe-writing) simply can't accomplish that, by design.


>Legal liability is completely unchanged.

It's changed completely, as your own example shows.

If you commission art from an artist who paints a modified copy of Warhol's work, the artist is liable (even if you keep that work private, for personal use).

If you commission it from OpenAI (by sending a query to their ChatGPT API), by your argument, you are the person liable — and OpenAI is off the hook even if that work is distributed further.

I'm not going to argue about the merits of creativity here, or that someone putting a prompt into ChatGPT considers themselves an artist.

That's irrelevant. The work is created on OpenAI servers, by the LLMs hosted there, and is then distributed to whoever wrote the prompt.

Models run locally are distributed by whoever trained them.

If you train a model on whatever data you legally have access to, and produce something for yourself, it's one thing.

Distribution is where things start to get different.


> If you commission it from OpenAI (by sending a query to their ChatGPT API), by your argument, you are the person liable — and OpenAI is off the hook even if that work is distributed further.

Let's distinguish two different scenarios here:

1) Your prompt is copyright-free, but the LLM produces a significant amount of copyrighted content verbatim. Then the LLM is liable, and you too are liable if you redistribute it.

2) Your prompt contains copyrighted data, and the LLM transforms it, and you distribute it. Then if the transformation is not sufficient, you are liable for redistributing it.

The second example is what I'm referring to, since the commercial LLMs are now very good about not reproducing copyrighted content verbatim. And yes, from everything I understand legally, OpenAI is off the hook.

Your example of commissioning an artist is different from LLMs, because the artist is legally responsible for the product and is selling the result to you as a creative human work, whereas an LLM is a software tool and the company is selling access to it. So the better analogy is if you rent a Xerox copier to copy something by Warhol. Xerox is not liable if you try to redistribute that copy. But you are. So here, Xerox=OpenAI. They are not liable for your copyrighted inputs turning into copyrighted outputs.


>So the better analogy is if you rent a Xerox copier to copy something by Warhol

It isn't.

One analogy in that case would be going to a FedEx copy center and asking the technician to produce a bunch of copies of something.

They absolve themselves of liability by having you sign a waiver certifying that you have complete rights to the data that serves as input to the machine.

In case of LLMs, that includes the entire training set.


The most salient difference is that it's impossible to tell if an LLM is plagiarizing, whereas Xeroxing something implies specific intent to copy. It makes no sense to push liability onto LLM users.

Are you following the distinction between my scenarios (1) and (2)?

In scenario (1) the LLM is plagiarizing. But that's not the scenario we're discussing. And I already said, this is where the LLM is liable. Whether a user should be too is a different question.

But scenario (2) is what I'm discussing, as I already explained, and it's very possible to tell, because you yourself submitted the copyrighted content. All you need to do is look at whether the output is too similar to the input.

If there's some scenario where you input copyrighted material and it transforms it into different material that is also copyrighted by someone else... that is a pretty unlikely edge case.


>Our foreparents fought for the right to implement works-a-like to corporate software packages, even if the so-called owners did not like it

Our "foreparents" weren't competing with corporations with unlimited access to generative AI trained on their work. The times, they are a-changin'.

You're rehashing the argument made in one of the articles which this piece criticizes and directly addresses, while ignoring the entirety of what was written before the conclusion that you quoted.

If anyone finds themselves agreeing with the comment I'm responding to, please, do yourself a favor and read the linked article.

I would do no justice to it by reiterating its points here.


I believe the GP post is saying that if we react to the new AI-enabled environment by arbitrarily strengthening IP controls for IP owners, the greatest beneficiaries will almost certainly be lawyer-laden corporations, not communities, artists, or open source projects. That seems like a reasonable argument.

It seems like the answer is to adjust IP owner rights very carefully, if that's possible. It sounds very hard, though.


The article makes the same point; the quote was taken out of context.

The point the author was making was that the intent of GPL is to shift the balance of power from wealthy corporations to the commons, and that the spirit is to make contributing to the commons an activity where you feel safe in knowing that your contributions won't be exploited.

The corporations today have the resources to purchase AI compute and produce AI-laundered work (work that wouldn't be possible without the commons the AI got its training data from) while giving nothing back to the commons.

This state of things disincentivizes contributing to the FOSS ecosystem, as your work will be taken advantage of while the commons gets nothing.

The share-alike clause of the GPL was the price that was set for benefitting from the commons.

Using LLMs trained on GPL code to "reimplement" it creates a legal (but not a moral!) workaround to circumvent the GPL and avoid paying the price for participation.

This means that the current iteration of GPL isn't doing its intended job.

The GPL has had to grow and evolve before. Internet services using GPL code to provide access to software without, technically, distributing it were a similar legal (but not moral) workaround, and it was addressed with an update (the AGPL).

The author argues that we have reached another such point. They don't argue what exactly needs to be updated, or how.

They bring up a suggestion to make copyrightable the kind of input to the LLM that is sufficient to create a piece of software, because in the current legal landscape, creating the prompt is deemed equivalent to creating the output.

You can't have your cake and eat it too.

A vibe-coded API implementation created by an LLM trained on open source, GPL licensed code can only be considered one of two things:

— Derivative work, and therefore, subject to the requirement to be shared under the GPL license (something the legal system disagrees with)

— An original work of the person who entered the prompt into the LLM, which is a transformative fair use of the training set (the current position of the legal system).

In the latter case, the input to the LLM (which must include a reference to the API) is effectively deemed to be equivalent to the output.

The vibe-coded app, the reasoning goes, isn't a photocopy of the training data, but a rendition of the prompt (even though the transformativeness came entirely from the machine and not the "author").

Personally, I don't see a difference between making a photocopy by scanning and printing, and by "reimplementing" API by vibe coding. A photocopy looks different under a microscope too, and is clearly distinguishable from the original. It can be made better by turning the contrast up, and by shuffling the colors around. It can be printed on glossy paper.

But the courts see it differently.

Consequently, the legal system has decided, for now, that writing the prompt is where all the originality and creative value is.

Consequently, de facto, the API is the only part of an open source program that can be protected by copyright.

The author argues that perhaps it should be — to start a conversation.

As for who the beneficiaries of a change like that would be — that, too, is not clear-cut.

The entities that benefit the most from LLM use are the corporations which can afford the compute.

It isn't that cheap.

What has changed since the first days of GPL is precisely this: the cost of implementing an API has gone down asymmetrically.

The importance of having an open-source compiler was that it put corporations and contributors to the commons on equal footing when it came to implementation.

It would take an engineer the same amount of time to implement an API whether they do it for their employer or themselves. And whether they write a piece of code for work or for an open-source project, the expenses are the same.

Without an open compiler, that's not possible. The engineer having access to the compiler at work would have an infinite advantage over an engineer who doesn't have it at home.

The LLM-driven AI today takes the same spot. It's become the tool that software engineers can and do use to produce work.

And the LLMs are neither open nor cheap. Both creating them and using them at scale are privileges that only wealthy corporations can afford.

So we're back to the days before the GNU C compiler toolchain was written: the tools aren't free, and the corporations have effectively unlimited access to them compared to enthusiasts.

Consequently, locking down the implementation of public APIs will asymmetrically hurt the corporations more than it does the commons.

This asymmetry is at the core of GPL: being forced to share something for free doesn't at all hurt the developer who's doing it willingly in the first place.

Finally, looking back at the old days ignores the reality. Back in the day, the proprietary software established the APIs, and the commons grew by reimplementing them to produce viable substitutes.

The commons did not even have its own APIs worth talking about in the early 1990s. But the commons grew way, way past that point since then.

And the value of the open source software is currently not in the fact that you can hot-swap UNIX components with open source equivalents, but in the entire interoperable ecosystem existing.

The APIs of open source programs are where the design of this enormous ecosystem is encoded.

We can talk about possible negative outcomes from pricing it.

Meanwhile, the outcome that is already happening is that a large corporation like Microsoft can throw a billion dollars of compute at "creating" MSLinux and refabricating the entire FOSS ecosystem under a proprietary license, enacting the Embrace, Extend, Extinguish strategy they never quite abandoned.

It simply didn't make sense for a large corporation to do that earlier, because it's very hard to compete with free labor of open source contributors on cost. It would not be a justifiable expenditure.

What GPL had accomplished in the past was ensuring that Embracing the commons led to Extending it without Extinguishing, by a Midas touch clause. Once you embrace open source, you are it.

The author of the article asks us to think about how GPL needs to be modified so that today, embracing and extending open-source solutions wouldn't lead to commons being extinguished.

Which is exactly what happened in the case of the formerly-GPL library in question.


I think the article in fact reaches the exact opposite conclusion it should. I'm not really sure how useful it is to talk about sharing and commons and morals when the point raised was about what is possible. The prescription includes copyleft APIs. These are not possible under Oracle v Google. And you could point it out if I'm wrong but the article doesn't discuss what would happen if Congress acted to reverse Oracle v Google (IMO a cosmically bad idea).

Adding even more intellectual property nonsense isn't going to work. The real solution is to force AI companies to open up their models to all. We need free as in freedom LLMs that we can run locally on our own computers.

I agree. But IMHO that ship has sailed. This should have been stopped when OpenAI went for-profit.

If you want to build a new world without this, we can't do it while we are supporting the very companies that are creating the problem. The more power you give them, the stronger they get and the weaker we become.

I think focus needs to shift completely off of for-profit companies. Although, not sure how that is going to happen..lol


Force them to open (and host) all their training data. They stole it from the public to sell it back to us anyway.

>Adding even more intellectual property nonsense isn't going to work.

[citation needed]

Where does your confidence come from?

GPL itself was precisely the kind of "intellectual property nonsense" whose addition made FOSS (free as in freedom) software possible.

The copyright law was awfully broken in the 1980s too. Adding "nonsense" then was the only solution that proved viable.

Historically, nothing but adding "more IP nonsense" has ever worked.

>The real solution is to force AI companies to open up their models to all.

Sure. Pray tell how you would do that without some "intellectual property nonsense".

We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.

>We need free as in freedom LLMs that we can run locally on our own computers

Oh, on that note.

LLMs take a fuckton of compute to train and to even run.

Even if all models were open, we're not at the point where it would create an equal playing field.

My home computer and my dev machine at work have the same specs. But I don't have a compute farm to run a ChatGPT on.


> Where does your confidence come from?

From the fact that copyright infringement is trivial and done at massive scales by pretty much everyone on a daily basis without people even realizing it. You infringe copyright every time you download a picture off of a website. You infringe copyright every time you share it with a friend. Everybody does stuff like this every single day. Nobody cares. It is natural.

> GPL itself was precisely the "intellectual property nonsense"

Yes. In response to copyright protection being extended towards software. It's a legal hack, nothing more. The ideal situation would have been to have no copyright to begin with. The corporation can copy your code but you can copy theirs too. Fair.

> Pray tell how you would do that without some "intellectual property nonsense".

Intellectual property is irrelevant to AI companies.

Intellectual property is built on top of a fundamental delusion: the idea that you can publish information and simultaneously control what people do with it. It's quite simply delusional to believe you can control what people do with information once it's out there and circulating. The tyranny required to implement this amounts to a totalitarian dictatorship.

If you want to control information, then your only hope is to not publish it. Like cryptographic keys, the ideal situation is the one where only a single copy of the information exists in the entire universe.

AI companies are not publishing any information. They are keeping their models secret, under lock and key. They need exactly zero intellectual property protection. In fact such protections have negative value to them since it restricts the training of their models.

> We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.

Sure you do. The whole point of government is to do just that. Literally pass some kind of law that forces the corporations to publish the model weights. And if the government refuses to do it, people can always rise up.

> Even if all models were open, we're not at the point where it would create an equal playing field.

Hopefully we will be, in the future.


So... people are going to rise up? What makes you think most of them have enough slack in their finances to pack up and haul off to D.C.? Only the elites do, and they pay full-time lobbyists to do exactly that, to make sure laws like you mention never pass. Not saying it can't work. Just saying the game is rigged against the very people you want to rise up, and in favor of the ones who'd rather you stayed in bed.

If people don't rise up they will become soylent green. Over the long term, AI threatens to replace all human labor. It cannot remain locked away in corporate servers. This is an existential issue. The ultimate logic of capitalism is that unproductive people need not be kept alive since they add nothing but cost. So either we free AI, collapse the very idea of having an economy and transcend capitalism into a post-scarcity society, or we will be enslaved and genocided by those who control the AIs.

Hence why we see more and more pushes to control communication on the internet. It's going to be hard to free AI when a panopticon is turned against us to prevent exactly that.

> You infringe copyright every time you download a picture off of a website. You infringe copyright every time you share it with a friend.

Respectfully, you have no idea what you are talking about here.


Why don't they? There have been lawsuits over just these behaviors in the past. Hell, even the multiple representations of the picture in computer memory have needed explicit allowances.

Copyright is a gigantic fucking mess that the US has forced over a large chunk of the world.


> there have been lawsuits over just these behaviors in the past

How did they turn out?


It depends on whether you count the ones that were settled behind NDAs with large companies, with unknown amounts being paid out: ticking time bombs waiting to go off in the future.

You might be thinking of fair use, but that's an affirmative defence. Every time someone has copied someone else's artwork and modified it into a meme, that's copyright infringement, and it remains so even if it is eventually ruled fair use. If you make a fair use claim, you don't deny infringement; you claim that you were allowed to infringe.

Try replacing "picture" with "music file".

I agree with the comment and find the linked article motivated reasoning at best. It's easy to find something "morally good" when it aligns with what you wanted. But plenty of people at Oracle, at IBM, at Microsoft, at Nintendo, at Sony, and at plenty of other companies whose moats have been commoditized by open source knockoffs don't find such happenings to be "morally good". And even if in general you think that "more freedom" justifies these sorts of unauthorized clones, then Oracle v. Google was at best a lateral move, as Java was hardly a closed ecosystem. One also wonders how far the idea of "more freedom" = "good" goes. How does the author feel (or how did they, if Qualcomm's recent acquisition changes the position) about the various Chinese knockoff clones of the Arduino boards and systems? Undeniably they were a financial good for hobbyists and the maker world alike, they were well within the "legal" limits, and certainly they "opened" the ecosystem more. But were they "good"? Was the fact that they competed with and undersold Arduino's work without contributing anything back, making it harder financially for Arduino to continue their work, a "moral good"?

If "more freedom" is your goal, then this rewrite is inherently in that direction. It didn't "close" the old library down. The LGPL version remains under its license, for anyone to use and redistribute exactly as it always has. There is just now also an alternative that one can exercise different rights with. And that doesn't even get into the fact that "increased freedom" was never a condition of being allowed to clone a system from its interfaces in the first place. It might have been a fig leaf, but some major events in the legal landscape of all this came from closed reimplementations. Sony v. Connectix is arguably the defining case for dealing with cloning from public interfaces and behavior as it applies to emulators of all kinds, and the Connectix Virtual Game Station was very much NOT an open source or free product.

But to go a step further, the larger idea of AI-assisted rewrites being "good", even if the human developers may have seen the original code, seems to broadly increase freedoms overall. Imagine how much faster WINE development can go now that everyone who has seen any Microsoft source code can just direct Claude to implement an API. Retro gaming and the emulation scene are sure to see a boost from people pointing AIs at any tests in source leaks and letting them go to town. No, our "foreparents" weren't competing with corporations with unlimited access to AI trained on their work; they were competing with corporations with unlimited access to the real hardware and schematics and specifications. The playing field has always been un-level, which is why fighting for the right to re-implement what you can see with your own eyes and measure with your own instruments was so important. And with the right AI tools, scrappy and small teams of developers can compete on that playing field in a way that previous developers could only dream of.

So no, I agree with the comment that you're responding to. The incredible mad dash to suddenly find strong IP rights very very important now that it's the open source community's turn to see their work commoditized and used in ways they don't approve of is off-putting and in my opinion a dangerous road to tread that will hand back years of hard fought battles in an attempt to stop the tides. In the end it will leave all of us in a weaker position while solidifying the hold large corporations have on IP in ways we will regret in the years to come.


I mean. Yeah. GPL's genius was that it used Copyright, which proprietary enterprise wouldn't dare dismantle, to secure for the public a permanent public good.

Pretty sure no one (well, no one but me anyway) saw overt theft of IP, by ignoring IP law through redefinition, coming. Admittedly, I couldn't have articulated for you that capital would transfer the skill and commoditize it in the form of pay-to-play data centers, but give me a break, I was a teenager/twenty-something at the time.


>You're right, but people don't actually care about privacy

The entire point of a platform like Twitter / Bluesky is reach, not privacy.

Posts and discussions there are meant to be public, and highly visible.

It's not that people don't care. It's that this is not what the platform is for.

What's important for a platform like that is not even anonymity, but functional pseudonymity.

And that thing is on its way to being effectively outlawed with the push for "age verification".

People do notice it and leave [1], but at some point, there might be no place to go to.

[1] https://www.reddit.com/r/privacy/comments/1rmlzhy/welp_goodb...


I 100% agree, I always thought that even Private Messages were a bad idea.

But no, we're way past "if you don't want it public, don't post it" followed by washing our hands and being done. We need to think about this in a policy kind of way.

And again, things are already dangerous -- but ATProto makes them more dangerous. It's something like a chain-of-custody thing. I think the world is collectively safer where the gathering of data like this is less reliable and less verifiable.

ATProto's model makes the building of the proverbial evil Big Brother panopticon thing a LOT easier.


That's not what the comment you're responding to is about.

>hard to believe even Meta would do this intentionally).

Hahahahahahahaha

ZUCK: yea so if you ever need info about anyone at harvard

ZUCK: just ask

ZUCK: i have over 4000 emails, pictures, addresses, sns

FRIEND: what!? how’d you manage that one?

ZUCK: people just submitted it

ZUCK: i don’t know why

ZUCK: they “trust me”

ZUCK: dumb fucks

Actual quote, BTW [1].

[1] https://www.newyorker.com/magazine/2010/09/20/the-face-of-fa...


As much as this is a damning quote, it is perhaps also damning that any time someone wants to smear zuck they have to reach 20 years into the past.

It's not "smearing" to use Zuckerberg's own words in a discussion of his character, and this is far from the only example of things he's done or said in the past 20 years that would lead a reasonable person to call into question his moral fiber.

It remains, however, a popular point of reference because:

1. It's fast and easy to read and digest.

2. The blunt language leaves little room for speculation about his feelings and intent at the time.

3. A lot of people understand that as Zuckerberg's wealth exploded, he surrounded himself with people (coaches, stylists, PR professionals, etc.) who are paid handsomely to rehabilitate and manage his image. Therefore, his pre-wealth behavior gives insight into who he really is.


> his pre-wealth behavior gives insight into who he really is

"No man ever steps in the same river twice, for it's not the same river and he's not the same man."

Not defending Zuck but it reflects a rigid mindset to assume that people cannot change.


People can change but based on Facebook's actions vis-a-vis privacy, mental health, etc. there's little evidence that Zuckerberg has gone from treating his users like "dumb f...." to treating them like human beings.

If we're going to talk about quotes, here's one: "money amplifies who you are".


Whatsapp is one of the only instances I can think of in corporate acquisitions where the side being acquired lashes out at the acquiring side as much as this ("It's time. Delete Facebook")

You're talking about someone who changes privacy settings without consent; who was told that gay people were being automatically added to groups, with posts on their walls outing them, and who dismissed it. Or "graph search". He doesn't think people deserve any respect when it's not him?


When a man changes, it is on him to prove that he has changed. Has Zuck atoned in any way? Has Meta?

I'm a big believer in second chances and letting people rehabilitate, but there's no evidence the Meta or Zuck have changed for the better. Meanwhile, *there is plenty of evidence that suggests he has only become more uncaring and deceptive, as Meta has only become more invasive over time*, the article itself being one such example.

So I do believe Zuck has changed, but not in the direction that we should applaud and/or forgive him. I've only seen him change in the way that should make us more concerned and further justify the hatred. A man may change, but he does not always change for the better.


I think there's more than enough evidence that Zuck has not grown to see others as human beings.

It doesn't though; no one is the same person they were 20 years ago, and every young person makes a ton of mistakes.

You're suggesting a ton of money and power made Zuckerberg more empathetic?

No I didn’t suggest that, I’m stating a fact that kids say stupid stuff all the time.

No, you didn't suggest that. You suggested that the quote is not representative of who he is now.

We'd need a lot more context (and words) to understand that sentence as anything other than defending him. At best you're giving him the benefit of the doubt.


You're right, he's much worse now.

I think his actions speak for themselves. Facebook, effectively completely controlled by Zuckerberg, has consistently taken actions that erode privacy and degrade mental health.

And no, not every young person has the attitude that Zuckerberg demonstrated in his "dumb f...s" comment. If my son or daughter was behaving like that in their late teens/early twenties I would be ashamed and feel like a failure as a parent.


There's a big difference between "someone said something stupid as a kid"... "but now has changed and is a totally different person" and "is doing the same things but now knows how not to say the quiet part out loud"

He wasn't even a kid. He was like 20 years old at university.

Exactly.

Show us how Meta is a moral player in society.

All I can see are lots of evil behaviors.


>they have to reach 20 years into the past.

Well, they don't, but this is a particularly damning statement, and its age is more of a feature than a flaw because it shows a long history of anti-social disdain for humanity.


I hear this rebuttal a lot; here's why it doesn't work for me:

I'm the exact same age as Zuckerberg. When I first read this quote, it struck me as a really gross mindset and a point of view that I could neither relate to nor have sympathy for. I would not have said (or thought) those things when I was his age. Fundamentally, this is a demonstration of poor character.

Now, people do grow and change. We've all said or done things that we regret. Life can be really hard, at times, for most of us, and more often than not young arrogant guys eventually learn some humility and grace and empathy after they confront the real world and experience the inevitable ups and downs of life.

But Zuckerberg had no such experience. His life during and after the time when he said this was one of accelerating material success and validation. The scam he was so heartlessly bragging about in that statement actually worked, and he became one of the richest men in the world. So my expectation of the likelihood that he matured away from this mindset is much lower than it would be for someone like you or me.

(And, as others have said in this thread, there's ample evidence from his subsequent decisions to support this)


Learning to choose your words more wisely as you age does not necessarily indicate your underlying value system has evolved.

>it is perhaps also damning that any time someone wants to smear zuck they have to reach 20 years into the past.

It is perhaps not, and it is perhaps a bit disingenuous to claim so in good faith, as if it exceeds your abilities to search for the list of Facebook scandals in the decades following and see that the behavior is often consistent with this quote. Even if you choose to ignore all that, it's also not very reasonable to expect troves of juicier quotes after all the C-suites, lawyers, and HR departments showed up and locked everything down with corporate speak. I'm sure if Facebook were to be so kind as to leak all the messages and audio of Zuck's internal comms since that time, people would have many other juicy quotes to work with.

It is often referenced because it's the best quote that represents the trailblazing era of preying on users' undying thirst for convenience in order to package their private data as a product.


Thank you for saying this. I would not find a better way to word the response myself.

"It is perhaps not, and perhaps a bit disingenuous to claim so in good faith, as if it exceeds your abilities to search for the list of facebook scandals in the decades following and see that the behavior is often consistent with this quote.

It is often referenced because it's the best quote that represents the trailblazing era of preying on users' undying thirst for convenience in order to package their private data as a product.

These sentences are deliciously delightful to read in this era of writing whose blandness and sloppiness is only amplified by LLM-driven "assistance".

It is difficult to be pithy without being bitter, but your writing achieves it within the span of a single comment. If you have a blog, I hope you share it!


Okay, how about a settlement from just last year, about how Meta does nothing but violate privacy? [0]

[0] https://www.bbc.com/news/articles/cx2jmledvr3o


>As much as this is a damning quote, it is perhaps also damning that any time someone wants to smear zuck they have to reach 20 years into the past.

Smear is a word that's not applicable here. It implies that the allegations in the argument labeled thusly are wrong and unjust.

This is not the case here.


Not as self-damning as you trying to defend what he said 20 years ago, with full knowledge of how he's acted in those intervening 20 years.

Congratulations, you've just smeared yourself with your own contemporary words.


I'd say once someone reveals their true character, you should believe it.

You would have a good point if what Meta is doing now wasn’t far worse than what Zuck himself is describing in those comments, all while Zuck has remained at the helm the entire time.

Or just quote anything out of the much more recent book Careless People.

Character almost never changes.

or more recently the times he lied to Congress, all the layoffs, the "metaverse", etc

or just at any point in the last 20 years to the present works too


you are who you are

This is a very important window into how the industry, by and large, views users and the concept of privacy. It's not merely authoritarian and predatory, to them users are subhuman.

Now if only we could look up everything you said in chatrooms as a 19 year old and post the most inflammatory stuff on HN.

I’m sure you’ve never said anything callous or snarky, and were a bastion of morality as a teenager.


I've tried to learn and grow from the stupid comments of my youth. I haven't been involved in a long list of scandals directly related to the ideas those comments expressed, and if I had been, it would be pretty clear that I hadn't learned or grown at all.

You haven't been involved in a long list of public scandals because you've never done anything in your life that's publicly notable.

By tricking yourself into believing you sit on a higher moral pedestal you're simply easing the pain of comparison.

When high school girls spread gossip that the pretty, popular girl has loose morals, they aren't performing this service out of the goodness of their hearts. They're hoping to elevate themselves by tearing down the competition.


>You haven't been involved in a long list of public scandals because you've never done anything in your life that's publicly notable.

That's funny.

You genuinely think that doing something "publicly notable" is necessary and sufficient for being involved in multiple public scandals, as if notable people who aren't slimy asshats didn't exist.

It's a fine argument too. You can keep narrowing down what counts as "publicly notable" until it only includes "founding Meta" when counterexamples are pointed out to you.

That's how you can be so confident in saying "you've never done anything in your life that's publicly notable" without knowing who you're talking to.

>By tricking yourself into believing you sit on a higher moral pedestal you're simply easing the pain of comparison

What a beautiful example of moving the goalposts with a personal attack while saying absolutely nothing that has any discernible meaning.

Easing the pain of comparison, huh?

It's not painful to compare an asshat who brags about betraying the trust of people who thought he was a decent human being to anyone who finds that repulsive.

Particularly in the context of discussing how trustworthy that person is.

It's not about "morals", see.

It's that Mark Zuckerberg is the highest authority when it comes to talking about Mark Zuckerberg, and he explicitly said that you'd be a dumb fuck to trust him with your personal data, which is what you do when you wear Meta's AI glasses.

These are the concrete, specific facts, not contrived examples about high school girls (on whose behalf you can't speak either).


Yes, I posted some stupid stuff as teenager and later.

I have never in my life mocked or made fun of other people for trusting me, or anything equivalent.

I also never ran a company that knowingly ruined a multitude of lives, and social interactions in general.

> snarky

Snark is not a problem that people have with Mr. Zuckerberg.


>Now if only we could look up everything you said in chatrooms as a 19 year old and post the most inflammatory stuff on HN.

Sure. When I was in college, I didn't have the idea of snooping on other students and exploiting them as "dumb fucks" who were stupid enough to trust me.

Most of my public online history starts at around that time too.

And one of my first comments on Slashdot was expressing concern about Facebook violating people's privacy by introducing the feed back in 2006.

https://slashdot.org/comments.pl?sid=195861&cid=16054826

I was 19 then.


[flagged]


Hilarious.

Before you posted this I actually edited my comment to remove a sentence at the end where I said "Now please proceed to call me a bootlicker while not rebutting my point."

I thought it would be too flame-war-y. Guess it was actually needed, however! US politics getting hysterical has been like the eternal September for HN. This place is so braindead and predictable and uninteresting now.


The worst part isn't even that quote, it's that nothing structurally has changed one bit since then. The business model still requires users as the product. Glasses that upload video to Meta's servers are the entire point.

I mean, no shit Sherlock, Cyrillic letters being indistinguishable from English ones is what Russian speakers have been using to get around braindead keyword сеnsоrshір¹ forever, same way kids type "de@th" on TikTok to avoid automoderation.

Most of the added value in this article can be summed up by saying that the Cyrillic glyphs are identical to the similar English ones in the fonts the author looked at (which isn't true for all fonts), and the author didn't find many other such examples.

_______

¹ Try matching that word with "censorship" for fun
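A quick way to see why a naive filter misses the footnote's word: the Cyrillic look-alikes are entirely different code points, and even Unicode normalization won't fold them into Latin. Here's a small Python sketch (my own illustration; the mixed-script string is built from explicit escapes so it survives copy-paste):

```python
import unicodedata

# "сеnsоrshір": Cyrillic с, е, о, і, р mixed with Latin n, s, r, h
mixed = "\u0441\u0435ns\u043Ersh\u0456\u0440"
latin = "censorship"

print(mixed == latin)  # False: a plain keyword match never fires
# NFKC folds compatibility characters, but never maps across scripts:
print(unicodedata.normalize("NFKC", mixed) == latin)  # still False
```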


>> A domain using only Cyrillic characters that happen to spell a Latin word (like “аpple” in all-Cyrillic) may still render in the address bar’s font and look identical

Here you go:

https:// аррlе.соm

(using the English "l" and "m" here; the Russian м looks different)
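For anyone curious how a browser or registrar could flag this, here's a rough sketch (my own illustration, not any browser's actual algorithm) that tags each character's script from its Unicode character name; the "аррlе" label above comes back as mixed-script:

```python
import unicodedata

def scripts(label: str) -> set[str]:
    """Rough per-character script tagging via Unicode character names."""
    out = set()
    for ch in label:
        name = unicodedata.name(ch, "")
        if name.startswith("CYRILLIC"):
            out.add("Cyrillic")
        elif name.startswith("LATIN"):
            out.add("Latin")
        else:
            out.add("Other")
    return out

# The label above: Cyrillic а, р, р, е plus a Latin l
label = "\u0430\u0440\u0440l\u0435"   # renders as "аррlе"
print(scripts(label))   # contains both 'Cyrillic' and 'Latin' -> mixed script
print(scripts("apple"))  # only 'Latin'
```

Real implementations (e.g. the checks described in Unicode's security mechanisms) go further, also catching whole-script confusables where every character is Cyrillic, but this is enough to flag the example above.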


I can touch-type with flow typing in GBoard at this point.

That's to say, I'm writing this comment on my Android phone without looking at the keyboard.

QWERTY is in my muscle memory in such a way that words have become writable as single stroke characters.

I really, really doubt this Keybee thing can be an improvement over that in any way.


I really want to see the visualization of words as swipe-typing patterns. I tried doing it on paper and realized I couldn't understand it just by looking, but once I started visualizing it and swiping in my head I could start to get a feel for it. The tricky part is figuring out where on the keyboard the stroke begins.


I would also like to implement all those texting algorithms, but I need the help of the open source community at this moment. Cheers!


People riding horse buggies probably thought the same when powered vehicles first came about, and look at the world now. You won't know unless you give it an honest try.


QWERTY won't be replaced on phones until there is a full phase change in how people interact with their phones that absolves us of keyboards entirely. Anybody here who thinks otherwise is welcome to make an offer to buy my two decades of notes on the topic.


>People riding horse buggies probably thought the same when powered vehicles first came about, and look at the world now. You won't know unless you give it a honest try.

This is a ridiculous non-analogy.

I'm flying a jet airplane, and you're telling me to give the Ford Model T a try because you don't understand flight as a concept.

Or, in this case, Flow typing.

From Keybee's website:

>Some syllables and some words can be inserted through a simple combination of tap & swipe (we call it twipe) greatly reducing the number of touches for typing a text. For now the twipe is limited to the adjacent keys. Keybee Keyboard is swipe friendly.

I am typing an entire word with one "twipe" on GBoard.

Each word.

I'm done with touchscreen input methods that require me to think about tapping letters. I don't think in individual characters, and I don't type in them either.

Let me know if I can make it any clearer.


Also gboard is the best keyboard for that. Nothing else implements a prediction model over a number of words as far as I can tell. Or if they do, they fail really badly at it.


The swipe keyboard on Microsoft Lumias was better.

I think Android is only catching up to it in the past 2-3 years.

Sadly, Lumias went the way of the dodo, and I don't have a need for that sort of input on something that's not a phone.

Whatever Microsoft put out as a keyboard app for Android is different; they didn't implement the same UX.

Out of the swipe keyboards I tried for Android, GBoard worked the best.


Weird thing to say, why would a random port be peak efficiency that can’t be improved?


I hate that I understand what "pelican guy" refers to.


Weird for a random guy that I've met a half dozen times at conferences to be setting taste for effectively my whole professional world.


We never know...

