superposeur's comments | Hacker News

Can you believe that This is Spinal Tap, The Sure Thing, Stand by Me, Princess Bride, When Harry Met Sally, Misery, and A Few Good Men were all directed by the same man? What an eclectic set of masterpieces.


The Maple syntax may superficially seem easier but actually leads to more problems in practice. The point of the [ ] is that the argument of a function is logically distinct from algebraic grouping of terms in an equation. Also, Mathematica is a camel-case language, since the underscore is reserved for pattern matching, hence the capitalization of function names. Personally, I’ve found every little Mathematica design feature to be incredibly well thought out, logical, and consistently implemented over the whole of the language.
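To illustrate with a minimal sketch (myNorm is just a made-up example name, not a built-in):

    (* square brackets mark function application; parentheses only group algebra *)
    Sin[x] (1 + Cos[x])

    (* underscore is reserved for patterns, so user-defined names rely on camel case *)
    myNorm[v_List] := Sqrt[v . v]
    myNorm[{3, 4}]  (* evaluates to 5 *)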


The introduction to Vol 1 of Weinberg’s Quantum Theory of Fields does this really well, albeit briefly. It feels like getting an “insider’s view” of the historical developments.


Everyone seems to be unsurprised by this move, but I’m genuinely shocked. What a shoot-yourself-in-the-foot business decision. Google, evil though it be, doesn’t post the text of your Gmail messages in its search results, because who would consider using Gmail after that? This is the LLM equivalent. Am I missing something?


Gmail used to serve ads based on your emails for many years until 2017. https://www.npr.org/sections/thetwo-way/2017/06/26/534451513...


And in 2010 they made https the default. Different times :)


I don't think https is responsible for that. Google owns the data; it doesn't matter how it is transported. It does, however, matter how it is stored (which I hope is encrypted in a way that only you can retrieve it).


Google mines the bejeezus out of your email and uses it for any number of ends, including manipulating you into buying things and passing your correspondence on to the US government. While this is not the same as outright making your emails universally searchable, training Claude on your emails is also not the same as posting their contents.

And this behavior of Google's has not been penalized, I'm afraid.


> Am I missing something?

I think you are:

According to the article https://www.perplexity.ai/page/anthropic-reverses-privacy-st...

"Enterprise and educational customers will continue operating under their existing privacy protections, as the policy changes specifically exclude Claude for Work and Claude for Education services. These commercial accounts remain governed by separate contractual agreements that maintain stricter data handling standards.

Organizations using Claude through business partnerships or educational licenses can continue their operations without concern for the new training policies affecting their sensitive communications or proprietary information."

Thus, I think your claim

> What a shoot-yourself-in-the-foot business decision.

likely does not hold: the non-commercial accounts likely led to Anthropic losing money, so they are not liked by Anthropic anyway (but are an "inconvenient necessity" to get people to notice and try out the product offering). With this new decision, Anthropic makes this "free-riding" less attractive.

I bet that Anthropic will soon release a press statement (one that has been sitting in a drawer for quite a long time): "We are listening to your concerns, and will thus extend our 'privacy-conscious offering' to new groups of customers. Only $30 per month."


> With this new decision, Anthropic makes this "free-riding" less attractive

Certainly not for users like you and me; it takes two seconds and three clicks to review the new terms and decline chat training. This is more like Anthropic getting easy training data from people who are unaware or don't care.


Seems like the same thing. They're offering plausible deniability, while knowing they'll still scoop up a worthwhile amount of data/profit from some percentage of users.


Gmail is free. It would still be incredibly bad for Gmail to start publishing the content of free users' emails to Google search.

But also, Anthropic has said that this new policy also applies to their Pro ($20/mo) and Max ($200/mo) plans. So it's not a matter of free versus paid.


Well, it means that LLMs used for business use cases will be trained on input from non-business use cases of non-privacy-conscious users.


This data is useful for reinforcement learning. All the others do it.

And most importantly, you can just opt-out.


Just because all the others do it doesn’t make it right. Many users chose Anthropic exactly because they were not like the others.


> Many users chose Anthropic exactly because they were not like the others.

Oh the naivety.

Sooner or later they all become the same, soon after "investors" or "shareholders" arrive.


> Sooner or later they all become the same, soon after "investors" or "shareholders" arrive.

They have already arrived. Google was one of the main investors in Anthropic.


There's no reason to be shocked by the practice, however.


> Many users chose Anthropic exactly because they were not like the others.

Companies are less like people and more like bacteria. They are programmatic, like algorithms.

What they will do has already been decided for them, programmed into them, by the rules of capitalism. It is inevitable. There are no good guys, and there are no bad guys, there's just... microbes.

Those who do not engage in capitalism (perhaps they do not seek money at all) have no such hard limitations. But they are rare, because money is blood.


OK, to be clear, let’s say I’m dumb and accidentally go with the default (I get the color of the opt-out button wrong or something). As if there’s a "publish my private emails to the internet" default-on button in email. Then I use it to edit a rec letter for student X, with my signature Y. (Yes, I know this is dumb, and I try changing names when editing, but I’m sure some actual names may slip through.) A few months later, the next model is released, trained on the data. Student X asks Claude what Y would write in a rec letter about X. Such a button is a "wings stay on / wings fall off" button on a plane.


You're severely overestimating the ability of the model to recall a single, mostly uninteresting item from its billions of input documents.


You can't opt out of the data retention policy.


The data retention period is 30 days if you don't choose to improve model training. https://www.anthropic.com/news/updates-to-our-consumer-terms...


Oh, I didn't catch this—that's good news


The LLM equivalent is what Google does do, which is train its spam filters on the contents of your emails coupled to the signal of what human beings flag as spam.

(It was one of the first significant value-adds of Gmail: at its scale, Google could build a global understanding of the content and patterns of spam across hundreds of millions of users. That was the kind of Big Data that made it possible to build filters where one could confidently say, "This is tuned on all spam in the wild, because we've seen all spam in the wild.")


What a framing. As if there were really any surprise behind all these reactions.



My path crossed Nguyen many years ago and I can vouch that he is a very smart, nice, ethical, and solid dude who knows his stuff. I’m also a physicist and know enough about the relevant math and physics to evaluate Nguyen v. Weinstein, though I haven’t processed either of their papers deeply. But, fwiw, Tim’s critique is detailed and readable. In particular, what he says about a faulty complexification step makes perfect sense and would spell death for an approach to unification that hinges on detailed accidents of representation theory (as Weinstein’s seems to). To really judge this, I’d have to delve into Weinstein’s baroque-yet-vague theory, which I’m unwilling to do as I’m pretty sure it would be a waste of time.


The problem I’ve always had with over-weighting deathbed advice is that dying people rarely think through the counterfactuals involved. What would actually be the consequence of not working so hard and relentlessly prioritizing personal relationships (as all such advice seems to recommend)? How much worse a future would result from financial insecurity and lack of career fulfillment? Has the advice giver actually thought through the tradeoffs that lead you to work hard in the first place? Further, dying people’s worlds usually contract to personal relationships only, so it makes sense this is the only aspect of life they emphasize.


"Star Trek: The Next Generation" captured this so well with the "Tapestry" episode. It showed that if you do life differently, you will indeed get a different life - but maybe not the life you thought you wanted!

https://en.wikipedia.org/wiki/Tapestry_(Star_Trek:_The_Next_...


Great ep.


This is a good point. You have to strike a balance between immediate and delayed gratification.

I try to conduct myself in a way that future me could look back on present me and say "past me took advantage of life experiences that were only available at the time" (think: youthful adventures, travel, friendships, etc.) but also "past me did a good job of setting present me up for happiness and fulfillment" (think: working reasonably hard, being conscientious, financial responsibility, etc.)


Part of this bias is that the kind of people dying on a deathbed tend to have made less risky choices. You’re underrepresenting motorcycle riders, let alone BASE jumpers, etc. Long hours seem like the safe option; you’ll rarely get fired for working late. However, it’s easy to be pissed about how much extra time you put in when you get laid off, etc.

Thus, people looking back have more information to work with and were risk averse, so they likely worked more than they should have.


Working outside of normal hours is now a cause for suspicion. Especially in today's WFH environment. It's a prime time to convene with the handler who does the actual work. Or to exfiltrate proprietary information to your superiors in North Korea. Etc.

Whatever it is you need to do, get it done during normal business hours. If you can't manage that, find another job.


It's so bizarre for me to see this perspective in a tech space when my tech-adjacent academic R&D career exposed me to so many people who naturally wanted to pull periodic all-nighter efforts or just live in strange shift patterns that ranged anywhere from night owl to vampire...


I've been asked unpleasant questions about working into the night, and I've seen working outside regular hours listed as a red flag on "how to detect employee fraud" guides. So however bizarre you may think it is, it's real, and companies are well within their rights to behave this way. Remember, in the USA your employer has the right to fire you for any reason except the ones specifically enumerated in the Civil Rights Act, or if it violates your employment contract.

Most people working in "tech" are implementing business functions and processes, and are answerable to people on the business side of things. Academic R&D is a whole different animal.


It's also that you might have a better idea of events that couldn't have been foreseen at the time. Maybe working hard didn't pay off because you lost much of the savings in a bad investment or a bad divorce anyway. Maybe you could have done with fewer savings because of a larger-than-expected inheritance or stock reward. Or maybe the fruits of some efforts never materialized anyway. With the information available at the time, the decisions might still have been the correct ones.


How do you know whether dying people think through the counterfactuals?

Of all the people I can think of, my future self would absolutely be on the short list for who I would like advice from.

My older self can definitely advise my younger self not to work so much and so hard, without meaning that I should "relentlessly prioritize relationships". (Edit: I already prioritize relationships, but not relentlessly.)

In my eyes, this is nothing controversial at all. In this thread I am surprised that the concept of "deathbed advice" provokes so many people.


Agreed: if you look beyond the bro-ey tone of the presentation, it is smart and nontrivial advice he is delivering here. It is so easy to get distracted by complexity (especially with so much competing internet advice). Picking a couple of lifts, then making the numbers go up on them, is effective and underrated.


In any field, what it even means to be good morphs as you go up in skill level. Non-mathematicians know only about arithmetic, so they often imagine that mathematicians must be really, really good at arithmetic. But this isn't so. Likewise, non-musicians think what must make a great musician is perfect pitch. But some of the greatest musicians in history didn’t have it, while many mediocre ones do. Similarly, non-chess-players think GMs must be good at calculating zillions of moves in advance, but apparently they only calculate a small set of moves, which somehow are the right ones.

To take an example cited in the article, Einstein was so far up there that it’s nearly impossible for a non-physicist to even understand what he was so good at — crude measures like high school grades or “IQ” barely scratch the surface of the skill at which he was a genius.

Now, perfect pitch does modestly correlate with musical ability, mathematicians are better than average at arithmetic, GMs do calculate more moves than the average shmo, and Einstein got much better than average grades (after all he was accepted at ETH). But that’s all, modest correlations.

There is such a thing as talent in music, mathematics, etc. but it isn’t something a psychologist standing outside these domains would ever be able to devise a test for.


Ha ha, first time I’ve seen another reference to this! Back in the day, I got such a kick out of his description that I began imitating it myself, calling it my “David Lynch Special” dish.

