Being lonely is difficult. A lot of people will try to tell you that it is not, or that you should be OK with it. Unfortunately, a lot of this advice is moralizing, the sort of "there must be something wrong with you if you can't be by yourself for long periods of time." This really annoys me. There is nothing wrong with wanting to spend time with people! Being social is one of our most fundamental needs. Just look at all the negative health effects loneliness brings about[1]. If you wouldn't be OK with a friend smoking 15 cigarettes a day, you probably shouldn't be OK with yourself being lonely most days.
Humans are social animals. Yes, some people are not like this. But if you feel unhappy when you are alone, there is nothing wrong with you. This only means that you are normal.
That aside -- what do you do when you are lonely? "Easy": Go to places where there are people doing stuff and join them. Eventually you will make weak connections. Ask these people to hang out in other contexts. You are done. There are no tricks. The hard part is that it takes effort and time - you need to show up over weeks or months, and following up with people outside the event to make plans is effortful.
If you want one really targeted tip: I love pickleball. Unlike almost any other sport, pickleball has a community where you can just wander over to a pickleball court and join in with virtually anyone. Also, it's great exercise, so even if you don't meet anyone you like, you still got healthier anyways - it's a win-win.
I firmly disagree with this advice as well; it strikes me as the sort of advice one comes up with when sitting around one's room wondering why one doesn't have any friends. The worst part about it is that it will get you doing all these activities that take up your time but don't really solve the friend problem.
Making friends isn't trivial, but it isn't a complex thing - just ask people you sort of vaguely know to hang out sometimes. Asking people to spend time together is about 10,000,000% more effective than any other strategy.
Do you firmly disagree with all of it, or just the clothes and gym part?
I don't have any objection to suggestions like "help people" or "be [a] good friend" or even "cook" and I think they're a core part of making friends. Today I cooked dinner for two friends and just got back from driving one of them home. They've been similarly kind to me in the past. Friendships are built on foundations like this.
It's absolutely correct that you need to invite people to do stuff before you worry about whether you're helpful enough, but you also need to go from being two or more people who kinda sorta know each other to actual friends.
This looks really nice, but I suppose I might ask the hard questions - how does this compare to Obsidian, which is my go-to "notes app which is just a bunch of markdown files stored on your computer"? I very much like Obsidian, and as I understand it they are your direct competitor, so some indication of how you want to distinguish your app from theirs would be great if you want to compel me to switch. :)
> Every feature we didn't build is time you spend writing.
Also, this kind of marketing language rubs me the wrong way (perhaps partly because it feels LLM-ish). How is you not adding features saving me time? Maybe it saves you time...
> This looks really nice, but I suppose I might ask the hard questions - how does this compare to Obsidian,
To be honest, other than both of them allowing you to write markdown, they're not comparable.
Obsidian is the current favorite of the "make a second brain" crowd, which is based on the concept of a Zettelkasten [1]. There are thousands of plugins to customize Obsidian and turn it into whatever you want. It just so happens to use Markdown files to store your notes. It's a very powerful tool, but it's overkill for most people who want to write a few notes in Markdown.
Ghost isn't about wiki links, plugins, hypertext, or Zettelkasten stuff.
It's just for writing, which I think is fine. Not everyone wants or needs all of the whiz-bang features of Obsidian or Notion or Microsoft Word.
As for previewing not being included: in reality, it's not a big deal.
The Notes app that comes with macOS can import markdown files and render them. There are hundreds of apps, utilities, plugins, and websites that enable a user to render a markdown file. For most people, that wouldn't be ideal; I get it.
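To illustrate how low the bar is: rendering a tiny subset of Markdown fits in a few lines of Python. This is purely an illustrative sketch (headings, bold, italics only), nowhere near a real CommonMark implementation:

```python
import re

def render_markdown(text: str) -> str:
    """Toy Markdown-to-HTML converter: ATX headings, bold, italics only."""
    out = []
    for line in text.splitlines():
        # ATX headings: "# Title" .. "###### Title" -> <h1>..</h6>
        m = re.match(r"^(#{1,6})\s+(.*)$", line)
        if m:
            level = len(m.group(1))
            out.append(f"<h{level}>{m.group(2)}</h{level}>")
            continue
        # Inline spans: **bold** first, then *italic*
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
        if line:
            out.append(f"<p>{line}</p>")
    return "\n".join(out)

print(render_markdown("# Hello\n\nSome **bold** and *light* emphasis."))
```

Real renderers handle the hard parts (nesting, escaping, edge cases), which is part of why there are so many of them.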
(Aside: at a user group meeting, I saw a developer coding something in Vim with no syntax highlighting. I had never seen that before. He said he liked it better that way. Not everyone likes the same things.)
There's a great app from indie developer Brett Terpstra called Marked [1] that was created to preview markdown files. It has tons of features, all centered around previewing markdown files. I've been a satisfied customer of Marked for years.
We all use certain apps for certain things even if we have other options; sometimes it's for aesthetic reasons or we just like how a particular app "feels" when we use it.
Obsidian doesn't "just happen" to use Markdown - the entire point of the app is that it writes to markdown. The URL is literally http://obsidian.md. The "notes save to your filesystem" is a concept directly lifted from Obsidian. Saying they're "not comparable" just doesn't make sense.
Time and time again I observe that it is the AI skeptic who is not reacting with curiosity. This is almost true by definition: to understand a new technology you need to be curious about it, and AI will naturally draw the people who are curious, because you have to be curious to learn something new.
When I engage with AI skeptics and "ask these people what they're really thinking, and listen," they say something totally absurd, like that GPT 3.5-turbo and Opus 4.6 are interchangeable, or they question my ability as an engineer, or call me a "liar" for claiming that an agent can work for an hour unprompted (something I do virtually every day). This isn't me cherry-picking the worst of it; this is pretty much a typical conversation I have on HN, and you can go through my comment history to verify I am not exaggerating.
AI will naturally draw people who are lazy and not interested in learning.
It's like flipping through a math book and nodding to yourself when you look at the answers and thinking you're learning. But really you aren't, because real learning requires actually doing it - solving and struggling through the problems yourself.
This is just completely inaccurate. There is more to learn now than ever before, and I find myself spending more and more time teaching myself things that I never before would have been able to find time to understand.
This is just completely inaccurate. There's the same amount of information available as before. It's not like LLMs provide you with information that isn't available anywhere else.
I agree that it can serve as a tool for a person who is interested in learning, but I bet that for every such person there are 10x as many who are happy to outsource all their thinking to the machine.
We already have reports from basically every school in the world struggling with this exact problem. Students are just copy-pasting LLM output and not really learning.
I'm sorry you've had that experience, and I agree there are a good share of "skeptics" who have latched on to anecdata or outdated experience or theorycrafting. I know it must feel like the goalposts are moving, too, when someone who was against AI on technical grounds last year has now discovered ethical qualms previously unevidenced. I spend a lot of time wondering if I've driven myself to my particular views exclusively out of motivated reasoning. (For what it's worth, I also think "motivated reasoning" is underrated - I am not obligated to kick my own ass out of obligation to "The Truth"!)
That said, I _did_ read your comment history (only because you asked!) and - well, I don't know, you seem very reasonable, but I notice you're upset with people talking about "hallucinations" in code generation from Opus 4.6. Now, I have actually spent some time trying to understand these models (as tool or threat), and that means using them in realistic circumstances. I don't like the "H word" very much, because I am an orthodox Dijkstraist and I hold that anthropomorphizing computers and algorithms is always a mistake. But I will say that, like you, I have found that in appropriate context (types, tests) I don't get calls to non-existent functions, etc. However, I have seen: incorrect descriptions of numerical algorithms or their parameters, gaslighting and "failed fix loops" due to missing a "copy the compiled artifact to the testing directory" step, and other things which I consider at least "hallucination-adjacent". I am personally much more concerned about "hallucinations" and bad assumptions smuggled into the explanations provided, choice of algorithms and modeling strategies, etc., because I deal with some fairly subtle domain-specific calculations and (mathematical) models. The should-be domain experts a) aren't always, and b) tend to be "enthusiasts" who will implicitly trust the talking genius computer.
For what it's worth, my personal concerns don't entirely overlap the questions I raised way above. I think there are a whole host of reasons people might be reluctant or skeptical, especially given the level of vitriol and FUD being thrown around and the fairly explicit push to automate jobs away. I have a lot of aesthetic objections to the entire LLM-generated corpus, but de gustibus...
Your response is definitely in the top 5% of reasonableness from AI skeptics, so I appreciate that :-)
But, if you don't mind me going on a rant: the hallucinations thing. It kind of drives me nuts, because every day someone trots out hallucinations as some epic dunk that proves that AI will never be used in the real world or whatever. I totally hear you and think you are being a lot more reasonable than most (and thank you for that) -- you are saying that AI can get detail-oriented and fiddly math stuff wrong. But as I, my co-workers, and anyone who seriously uses AI in the industry all know, hallucinations are utterly irrelevant to our day-to-day.
My point is that hallucinations are irrelevant because if you use AI seriously for a while, you quickly learn what it hallucinates on and what it does not. You build your mental model, and then you spend all your time on the stuff it doesn't hallucinate on, where it adds a fantastic amount of value, and you are happy, and you ignore the things it is bad at, because why would you use a tool on things it is bad at? Hearing people talk about hallucinations in 2026 sounds to me like someone saying "a hammer will never succeed - I used it to smack a few screws and it NEVER worked!" And then someone added Hammer-doesnt-work-itis to Wikipedia, and it got a few citations on arXiv, and now it's all people can say when they talk about hammers online, omfg.
So when you say that I should spend more time asking "what do they see that I don't" - I feel quite confident I already know exactly what you see? You see that AI doesn't work in some domains. I quite agree with you that AI doesn't work in some domains. Why is this a surprise? Until 2023 it worked in no domains at all! There is no tool out there that works in every single domain.
But when you see something new, the much more natural question than "what doesn't this work on?" is "what does this work on?" Because it does work in a lot of domains, and fabulously well at that. Continuously bringing up how it doesn't work in some domain, when everyone is talking about the domains where it does work, is just a non sequitur, like someone hopping into a conversation about Rust to talk about how it can't do your taxes, or a conversation about CSS to say that it isn't Turing-complete.
They didn't; no one asked Google to do it. It was Paul Buchheit's 20% project. Google saw a good thing, solved by someone who knew what they were doing and where they wanted it to go, and fostered it. Hell, it's what built AdWords and ultimately made Google the advertising behemoth it is today. I don't think this is the same thing...
I see what you are saying though: a business can expand beyond its initial constraints. But I'm not sure that chasing prospects like what is described in the OP is really all that successful.
Why does it seem like everyone is having trouble grasping an analogy? GP was saying that just as it doesn't make sense for a power company to solve trains (because it is out of their area of expertise), it doesn't make sense for Anthropic to solve Slack (because it is out of their area of expertise). My response is that a surprising number of things can fall into the area of expertise of a technology company, and Google has proven this in the past.
Getting hung up over the "asked" phrasing is irrelevant to the discussion.
People look for something to disagree with, and make posts that "engage". I agree with you and see this a lot, an analogy clearly makes point A but people get hung up on detail B.
Yep, and it was a complete fluke too, because within 5 years of that they'd butchered/tamed the whole concept of 20% time, and that kind of independent project wasn't something anybody at Google could do, even if 20% still nominally existed [it got re-routed into "you can add 20% to some project at Google that already exists and is approved by corporate, and btw you'll still be doing your normal work most of the time"].
When I was there from 2012-2022 it really wasn't a thing. Once Google found its money printing machine it swallowed everything.
> Once Google found its money printing machine it swallowed everything.
You know, I've never looked at Valve in that light before.
Once you have a money printing machine, of course any corporate hierarchy becomes antithetical to creativity, because there are huge financial rewards for climbing up. And the primary way you climb up is by getting your direct reports to complete tasks you get rewarded for.
I think everyone at the time was hoping that Google was going to take on their pet project; my friends and I certainly were. But I don't think that has to do with my comment, which is around a more metaphorical use of the word 'ask'.
This seems unlikely. My company is in competition with a number of other startups. If AI removes one of my co-workers, our competitors will keep the co-worker and out-compete us.
> If AI removes one of my co-workers, our competitors will keep the co-worker and out-compete us.
This assumes that the companies' business growth is a function of the amount of code written, but that would not make much sense for a software company.
Many companies (including mine) are building our product with an engineering team 1/4 the size of what would have been required a few years ago. The whole idea is that we can build the machine to scale our business with far fewer workers.
How many companies have you worked at in the past where the backlog dried up and the engineering team sat around doing nothing?
Even in companies that are no longer growing I've always seen the roadmap only ever get larger (at that point you get desperate to try to catch back up, or expand into new markets, while also laying people off to cut costs).
Will we finally out-write the backlog of ideas to try and of feature requests? Or will the market get more fragmented as smaller competitors carve out different niches in different markets, each with more complex offerings than they could've offered 5 years ago?
This is already happening. Fewer people are getting hired. Companies are quietly (sometimes not, like Block) letting people go. At a personal level all the leaders in my company are sounding the “catch up or you’ll be left behind” alarm. People are going to be let go at an accelerated pace in the future (1-3 years).
I don’t think that addresses my point. I understand a lot of companies are firing under the guise of AI, but it’s unclear to me whether AI is actually driving this - especially when the article we are both responding to says:
> We find no systematic increase in unemployment for highly exposed workers since late 2022
It depends on the "shape" of the company. Larger companies have a lot more of what I call "Conway Overhead", basically a mix of legit coordination overhead and bureaucracy. Startups by necessity have a lot less of that, and so are better "shaped" to fully harness AI.
That's not necessarily a result of AI, you also have to consider the broader economic environment. I mean, it was also difficult to get a job as a graduate in 2008, whereas it's typically been easier to get a job when credit is cheap.
Isn't it, for something like 70-80% of families? Just in slow-motion?
How long have we been hearing about crushing affordability problems for property? And how long ago did that start moving into essentials? The COVID-era bullwhip-effect inflation waves triggered a lot of price ratcheting that has slowed but never really reversed. Asset prices are doing great, as people with money continue to need somewhere to put it, and have been very effective at capturing greater and greater shares of productivity increases. But how's the average waiter, cleaning-business sole-proprietor, uber driver, schoolteacher, or pet supply shopowner doing? How's their debt load trending? How's their savings trending?
There’s a difference between a collapse and a slowdown. We don’t need a collapse for hiring to slow down [1,2]. I think we’re finally just seeing the maturation of software development. Software is increasingly a commodity, so maybe the era of crazy growth and hiring is over. I don’t think that we need AI to explain this either, although possibly AI will simply commodify more kinds of software.
FAANG realizing that they can't make infinite money by expanding into every possible market while paying FAANG salaries for low-scale-CRUD-prototyping roles has a lot to do with this, and that started a bit earlier than the AI wave.
Lots going on right now in the market, but IMO that retreat is the biggest one still.
Many companies were basically on a path of infinite hiring between ~2011 and ~2022 until the rapid COVID-era whiplash really drove home "maybe we've been overhiring" and caused the reaction and slowdown that many had been predicting annually since, oh, 2015.
Manager gigs at FAANG are pretty rough right now in my network: you can't be a manager when the higher-ups notice your group isn't a big revenue generator and so doesn't justify new hires and bigger org charts, and cutting the middlemen is the easiest way to juice the ROI numbers. If the ICs, now with 1/3 the managerial structure and more hats to wear, don't turn things around, oh well, it wasn't a critical area anyway, just nuke it.
You can be an exec with 10-20% fewer random products/departments in your company, and maybe 40% fewer middle managers in the rest of them. You might even get a nice bonus for cutting all that cost! Bonuses for growth, bonuses for "efficiency" when the macro vibe shifts. Trim sails and carry on.
Erm, it's been fucked for many years across many professions; it was just less so for software engineering in particular. Now entry into the SE profession is taking a hit.
Also don't forget there are only so many viable revenue-generating and cost-saving projects to take on. And as said above - overhiring during COVID.
There are definitely tone-deaf statements from managers/leaders like "AI will allow us to do more with less headcount!" As if the end worker is supposed to be excited about that, knuckleheads, lol.
Yeah I’ve been scratching my head about this too. Like, if my boss said this, I would basically start looking for a new job right then and there. Seems like a good way to drive off your own talent.
> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.
> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.
I just wish there was a stronger source on this. I am inclined to agree with you and the source you cited, but unfortunately:
> [1] This story requires some reading between the lines - the exact text of the contract isn’t available - but something like it is suggested by the way both sides have been presenting the negotiations.
I deal with far too many people who won't believe me without 10 bullet-proof sources but get very angry with me if I won't take their word without a source :(
> "Two such use cases have never been included in our contracts with the Department of War..."
While I agree with Anthropic's position on this regardless, the original contract wording does matter in terms of making either the government look even more unreasonable or Anthropic look a little less reasonable.
The issue is a subtle ambiguity in Dario's statement: "...have never been included in our contracts" because it leaves two possibilities: 1. those two conditions were explicitly mentioned and disallowed in the contract, or 2. they weren't in the contract itself - and are disallowed by Anthropic's Terms of Service and complying with the ToS is a condition in the contract (which would be typical).
If that's the case, then it matters if the ToS disallowed those two uses at the time the original contract was signed, or if the ToS was revised since signing. Anthropic is still 100% in the right if the ToS disallowed these uses at the time of signing and the ToS was an explicit condition of the contract, since contracts often loop in the ToS as a condition while not precluding the ToS being updated.
However, if the ToS was updated after contract signing and Anthropic added or expanded the wording of those two provisions, then the DoD, IMHO, has a tiny shred of justification to complain and stop using Anthropic. Of course, going much further and banning the entire US government (and contractors) from using Anthropic for any use, including all the ones where these two provisions don't matter - is egregiously punitive and shitty.
While the contract wording itself may be subject to NDA, it would be helpful if Anthropic's statements could be a bit more precise. For example, if Dario had said "have always been disallowed in our contracts" this ambiguity wouldn't exist.
It does not matter. If Anthropic had been precise in this narrow way, there would have been some other nitpick to raise.
You're trying desperately to find a way that things can be at least a little normal, and I really do get it. It would be great if such a way existed. But it doesn't. I recommend you take a social media break like I'm about to, take the time you need to mourn the era of normal politics, and come back with a full understanding that the US government is not pursuing normal policy objectives with bad decisions. They hate you and they hate me for not being on their side, and their primary goal is to ensure that we're as miserable as they can make us.
I'm in a weird spot where I do agree with your assessment of the core claim. But putting that aside, in the world where the DoW's claim _is_ correct -- I think you don't have any choice other than to designate them a supply chain risk.
Disregarding who is right or wrong for a moment, if the DoW are right (which I'm not personally inclined to believe, but we're ignoring that for the moment) -- how else can they avoid secondhand Claude poisoning?
Supposing they really want to use their software for things disallowed by Claude's (current or future) ToS, it seems like designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude (either indirectly as a wrapper, or at a further remove through use of generated code, etc.).
> designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude
I agree that if the DoW claim is correct (and I doubt it is), then, sure, the DoW dropping Anthropic and precluding the DoW's suppliers from using Anthropic for any DoW work would be expected. However, the "supply chain risk" designation they are deploying goes far beyond that to block Anthropic use by any supplier to any part of the entire U.S. government for anything.
For example, no one at Crayola can use Anthropic for anything because Crayola sells crayons to the Education Dept. The DoW already has much less draconian ways to restrict what their direct suppliers use to build things for military applications. But instead of addressing the actual risk in a normal, measured way, they are choosing to use a nuke against a grenade-sized problem. This "supply chain risk" designation is rarely used and has never been used against a U.S. company; it's used against Chinese or Russian companies in cases where there's credible risk of sabotage or espionage, which is why that particular designation always blocks all products from an entire company, for any application, by any part of the U.S. government, its contractors, and suppliers.
One positive thing I will say about this administration is that they have really drawn into focus the difference between de jure and de facto law.
My hope is that this gets us some real concern for things that have been defended with de facto arguments (i.e. privacy) going forward.
edit: Anthropic argues that your Crayola analogy is fundamentally incorrect.
> Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
> Anthropic argues that your Crayola analogy is fundamentally incorrect.
Yes, I just saw Dario's latest post with that more detailed info. My understanding was informed by news reporting in a couple of different outlets, but those reports may have been conflating the "supply chain risk" designation (under 10 USC 3252) with the net effect of statements from the Pentagon and White House, which go substantially further.
Even if it's not in the legal scope of 10 USC 3252, the administration has made clear they intend to ban Anthropic from use across the federal government. AFAICT doing that is probably within the discretionary remit of the executive branch, even though I believe it's unprecedented - to your point about de jure and de facto law.
To me, if there's a silver lining to all this, it's making a strong case for restricting executive branch power.
Edit to add: Per the Wall Street Journal's lead story (updated in the last hour): "The General Services Administration, which oversees federal procurement, said it is removing Anthropic from its product offerings to government agencies... Even absent the supply-chain risk designation, broadening the clash to include all federal agencies takes the Anthropic fight to a much larger scale than its spat with the Pentagon."
How would this risk be mitigated by signing a contract? It seems like "supply chain poisoning as treason" is probably not going to be stopped by a piece of paper. You either trust Anthropic or you don't, but the deal has nothing to do with it.
Isn't the point that they aren't entering into a contract with them, they are just ensuring that none of their still trusted suppliers repackage Anthropic without their knowledge?
I’m not sure, but I think you’re right. I was thinking through the logical implications of the designation: if they are a supply chain risk without a contract, how does the existence of a contract suddenly make them not a risk? Especially if the DoD strong-arms them into a deal.
Because the act that the SCR designation would “protect” against is treason, so I don’t think people would care too much whether there’s a contract.
Also, Trump's own words complaining about being forced to stick to Anthropic's terms of service:
> The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.
In this case, do you really believe that we should trust an EA less than this administration? EA as bad people is a stereotype; corruption, fraud, and breaking the law is the standard MO for this administration.
(Or maybe it’s catchier to respond glibly with “never trust a child rapist and convicted felon.”)
In this case, the choice is between the two apples, so I’d pick the one less obviously rotten. Sadly that is the current administration that operates in pure lawlessness.
I think a big question mark here is whether anything said on Anthropic's side is in the framing of "we have a thing going on that we are trying to communicate around" - the situation where a canary notice, if it existed, would no longer be updated.
It isn't about commercial agreements; it's about patriotism. The national industry is supposed to submit to the military's wishes to the extent that it gets compensated. Here it's a question of virtue.
The Pentagon feels it isn't Anthropic's place to set boundaries on how their tech is used (for defense). Since it can't force its will, it bans doing business with them instead.
If Anthropic is saying "you can use our models for anything other than domestic spying or autonomous weapons" and the Pentagon replies "we will use other models then", I'd say Anthropic are the patriots here...
I had the same thing happen to me when I posted about how unbridled capitalism relies on externalized costs in the form of pollution and whatnot. I didn't make it clear that I thought it was a terrible truth.
Once the hive decides you're being serious without checking, they turn the downvote button into an "I disagree with you" button.
This is actually one of the reasons I left Reddit. I hate to see it here.
It likely helps to take in the cultural moment or context around the statements, or the nature of the statements you're making. It's fine to state a fact, but it's also helpful to make it clear whether you are saying "it is what it is," "I wish things were different," or "I am doing X, Y, and Z to try to help and I recommend others do so." Jokes are an exception, and I think misunderstandings are fine there. But it's unreasonable to think that on the Internet, people will "check to see if you are serious".
The comment was serious. It didn't feel the need to take a side.
The DoD declaration reflects a certain context: we had the Patriot Act, a whistleblower exiled in Russia for defending the Constitution, etc. We didn't need to wait for a MAGA movement to expect such a comment from the DoD.
If Hacker News threads turn into mouthpieces for opinions, then there's no use posting anything here.
The comments are naively claiming commercial agreements make Anthropic right, as if contracts had more weight than the constitution.
I would rather call out "virtue signalling" by an entity in the valley simply standing for something aligned with civil liberties, and using it as a political stance, in what nobody would deny is an unfortunately polarized political climate.
What to make of OpenAI, then? Should I give my opinion that they took a falsely constitutional stance, or simply made a for-profit move to land a juicy government contract, while making the public think they kept the same red lines as their main competitor?
Or just stick to the fact: the DoD will, as always, get away with its liberticidal demands and get what it wants, because the other big tech companies will fall in line.
I fully acknowledge that it doesn't take much courage to bully people anonymously on HN. I don't claim to have any deep well of courage in real life either - many of my friends were already radicalized against OpenAI for other reasons, I don't expect to face professional consequences for being angry about this, and I might not be so willing to go scorched earth if either of those weren't true. Just wanted to explain where the world is at and why people should expect to see further incivility about this.
What's your definition of "patriotism" and why do private companies need to be "patriotic"? How do you reconcile this with the Constitutional guarantees of freedom of speech, freedom of association, and so on?
The US isn't Iran, North Korea, or even China, as much as some people, including the US president, seem to want to emulate those models.
No one cares if the Pentagon refuses to do business with Anthropic. But Hegseth has declared that, effective immediately, no one else working with the DoD can either - which includes the companies hosting Anthropic's models (Amazon, Microsoft, and Alphabet).
So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".
Which miiight impact the amount of inference the DoD would be able to get done in those six months.
> So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".
> Which miiight impact the amount of inference the DoD would be able to get done in those six months.
Which might not be an accident, looking at the Truth Social posts stating "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
I would not be surprised to see this being used as an excuse to nationalize Anthropic.
To attempt to nationalize Anthropic. I'm sure there would be court cases filed almost immediately, restraining orders, months of cases and then appeals and then appeals of the appeals.
I think you were downvoted due to your use of "patriotism" (specifically without scare quotes) because that word is usually used with an intended positive connotation. So the reader gets the impression that you think that submitting to the DoD’s wishes is how things ought to be.
[1]: https://www.psychologytoday.com/us/blog/the-human-beast/2023...