Hacker News | akprasad's comments

Maybe it's just the frequency illusion, but "X. Not Y." in particular is a pattern I strongly associate with LLM writing.

> That’s confabulation. Not a metaphor. The same phenomenon.

> Published. Replicated. Not fringe.

> Not to validate it. Not to refute it. Not to engage with its content at all.


It’s absolutely a signal, as is the constant repeating of points. It’s AI slop for sure.

Which is a shame, because the premise is interesting.


A similar idea from the Brihadaranyaka Upanishad, ~7th century BCE

> 'And here they say that a person consists of desires. And as is his desire, so is his will; and as is his will, so is his deed; and whatever deed he does, that he will reap.


What is the strategy, in your view? Maybe something like this? --

1. All government employees get access to ChatGPT

2. ChatGPT increasingly becomes a part of people's daily workflows and cognitive toolkit.

3. As the price increases, ChatGPT will be too embedded to roll back.

4. Over time, OpenAI becomes tightly integrated with government work and "too big to fail": since the government relies on OpenAI, OpenAI must succeed as a matter of national security.

5. The government pursues policy objectives that bolster OpenAI's market position.


6. OpenAI continues to train "for alignment" and gains significant influence over the federal government workers using the app and toolkit, and thus over their workflows and results. E.g., sama gets to decide who gets Social Security and who gets denied.


Or inject pro- or anti-sentiment toward some foreign adversary.

Recall the ridiculous attempt at astroturfing anti-Canadian sentiment in early 2025 in parts of the media.


Yes, but there was also a step 0 where DOGE intentionally sabotaged existing federal employee workflows, which makes step 2 far more likely to actually happen.


A couple of missing steps:

2.5. OpenAI gains a lot more training data, most of which was supposed to be confidential

4.5. Previously confidential training data leaks on a simple query, OpenAI says there's nothing they can do.

4.6. Government can't not use OpenAI now so a new normal becomes established.


Even simpler:

1) It becomes essential to workflows while it costs $1.

2) OpenAI can raise the price to any amount once workflows depend on it, as the cost of changing those workflows will be huge.

Giving it to them for free skews the cost/benefit analysis they would regularly do for procurement.


Also getting access to a huge amount of valuable information, or a nice margin for setting up anything sufficiently private


Do you view Microsoft as too big to fail because of the federal government's use of Office?


Yes, but the federal government uses far more than just Office.

Microsoft is very far from being at risk of failing, but if it did happen, I think it's very likely that the government keeps it alive. How much of a national security risk is it if every Windows (including Windows Server) system stopped getting patches?


Boeing will never crash. Intel neither. They are jewel assets.


I see what you did there.


Not sure if this is a real question but yes, I think Microsoft is too big to fail.


Honestly, I think if Microsoft were going to go bankrupt, they'd probably get treated like Boeing, yeah.


I can't find an exact quote either, but AFAICT he wrote extensively on extensions and amputations, though perhaps less concisely.


As a side project, I'm creating resources for learning Tamil, my parents' native language:

https://akprasad.github.io/tamil/

It's been a lot of fun getting the basic tools going: transliterators, morphological generators and analyzers, and some other things on top. But the main goal is to improve fluency as quickly and efficiently as possible.
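For anyone curious what a transliterator involves: below is a minimal sketch in Python, with a hypothetical and heavily simplified romanization scheme (not the one the site actually uses, which covers the full alphabet). The main wrinkle it illustrates is that Tamil is an abugida: a consonant letter carries an inherent "a", other vowels attach as signs, and a bare consonant takes the pulli (virama) mark.

```python
# Toy Latin-to-Tamil transliterator. The mapping tables are a small
# illustrative subset, not a complete or standard scheme.

CONSONANTS = {"k": "க", "m": "ம", "t": "த", "n": "ந", "l": "ல"}
VOWEL_SIGNS = {"a": "", "aa": "ா", "i": "ி", "u": "ு"}   # sign form, after a consonant
VOWELS = {"a": "அ", "aa": "ஆ", "i": "இ", "u": "உ"}       # independent form
PULLI = "\u0BCD"  # ் : strips the consonant's inherent "a"
# Try longer romanizations first so "aa" isn't read as "a" + "a".
VOWEL_KEYS = ("aa", "a", "i", "u")

def transliterate(text: str) -> str:
    out, i = [], 0
    while i < len(text):
        ch = text[i]
        if ch in CONSONANTS:
            out.append(CONSONANTS[ch])
            for v in VOWEL_KEYS:  # longest-match the vowel that follows, if any
                if text.startswith(v, i + 1):
                    out.append(VOWEL_SIGNS[v])
                    i += 1 + len(v)
                    break
            else:
                out.append(PULLI)  # no vowel follows: bare consonant
                i += 1
        else:
            for v in VOWEL_KEYS:  # word-initial / standalone vowel
                if text.startswith(v, i):
                    out.append(VOWELS[v])
                    i += len(v)
                    break
            else:
                i += 1  # skip anything the toy mapping doesn't cover
    return "".join(out)

print(transliterate("ammaa"))  # அம்மா ("mother")
```

A real implementation also has to handle digraph romanizations on the consonant side (e.g. "zh", "ng") and ambiguous segmentations, which is where longest-match tokenization starts to matter more.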


As another Tamilian, thank you for making this! I'm fluent in spoken Tamil from my parents and I've learned to read and write at a basic level, but I'd never formally learned the language.


I use something similar: .. , .2 , .3 , etc.


The author agrees with you:

> Not all requests for help are predatory. It’s part of your job to help out engineers on your team, and cross-org impact really does involve helping others sometimes, even if you get nothing in return. Predatory behavior is a consistent pattern of drawing on your time for nothing in return.


For context, Andreessen is talking here about the Biden administration, and his revulsion to this approach is why he endorsed Trump:

> I, you know, look, and I would say like when we endorse Trump, we, we only did so on the basis of like tech policy. [...] Number two was ai, where I became very scared earlier this year that they were gonna do the same thing to AI that they did to crypto.


HN audience is essentially pro-Trump as well. 1/3rd of Santa Clara county (heart of Silicon Valley) voted for him, and a lot of tech employees come from authoritarian countries (China) and really see nothing wrong with authoritarian rule.


First of all, that is factually incorrect -- Trump received 28.1% of the vote in Santa Clara county, which is significantly lower than 1/3rd. Source: https://results.enr.clarityelections.com/CA/Santa_Clara/1225....

Second of all, your bar for "essentially pro-Trump" being 1/3rd of the vote is ridiculous, by your standard almost every county in America is "essentially pro-Trump".


Give me a break, that’s fine for rounding and a huge gap between SCC and San Francisco and Santa Cruz. SCC is the outlier in the region, and a good percentage of HN is very pro-Trump because a good percentage of HN is pro-greed.

And the point about Chinese residents being among the most pro-Trump in the South Bay stands - I have had several conversations, and most people from the mainland (RIP HK) see zero problem with authoritarian rule.


Lots of voters see many things wrong with authoritarian rule but on the freedom versus authoritarianism spectrum it's not at all clear that Democrats are any better. With the recent national shift towards populism, both major political parties seem roughly equally authoritarian in different policy areas. Besides AI policy there are other major authoritarianism issues around online censorship, public health, gun control, recreational drugs, reproductive healthcare, etc. I'm not trying to start yet another fight over which side is right or wrong on those particular issues but rather using them as examples to show how both parties are authoritarian when it fits the ideology of their core voters and campaign contributors.

Overall, voters who identify as Asian mainly voted for Harris. So I am skeptical of your claim that Trump got a lot of votes from first-generation immigrants from China.

https://navigatorresearch.org/2024-post-election-survey-raci...


In Hindi and some other Indian languages, it is common to drop the last short “a” of a Sanskrit word, hence Ram for Rama, Vikram for Vikrama, etc.


You're thinking of koans [1], which are a specific kind of Zen speech used to shift the mind out of conceptual thinking. Outside of that context, Zen teachers often just talk in conventional language.

https://en.wikipedia.org/wiki/Koan

