I think the point is that the banner is more likely to extend your search by sending a negative signal than it is to speed up your search. Fair or not, potential employers often have a negative bias toward people who are unemployed, so indicating that you’re likely unemployed is unhelpful.
It would be far more effective to spin up a one-person contracting firm and list it as your current employment, even if it's marginal work for pocket change (building websites for a friend or something).
I recently stood up a personal and a business site using Claude + Astro + GitHub + Cloudflare Pages, and apart from the Claude subscription, it’s all free.
Definitely not skills in the typical restaurant owner’s wheelhouse (not hard to learn, they’re just unlikely to care), so you’d need to figure out per-business hosting to avoid putting everything under one account and running over the free tier. But there’s very little management or payment needed until you get quite a bit of traffic, which is unlikely for your average suburban sandwich shop.
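For what it’s worth, the per-business part can be mostly automated. A minimal sketch of a GitHub Actions deploy, assuming you use Cloudflare’s official Wrangler action and give each client their own Cloudflare account and API token so each site stays inside its own free tier (project name and secrets below are placeholders):

```yaml
# .github/workflows/deploy.yml — hypothetical per-client deploy
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Astro builds to a static dist/ directory by default
      - run: npm ci && npm run build
      - uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}    # token for this client's account
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}  # each business's own account
          command: pages deploy dist --project-name=client-site
```

Once the secrets are set per repo, each shop’s site redeploys itself on every push with nothing to manage by hand.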
I have used em dashes and semicolons for decades. The LLM appropriation of my writing style has been almost as devastating as their impending vaporization of my career path.
Maybe this can be good for the few people who do want to get something out of their feeds: connect your agent, which would then browse for you and collect only the posts you whitelist/want to read (friends’ posts, a specific liked page or Marketplace listing, posts from a group). But we all know Zuck ain’t getting Moltbook for helping the users...
I do find it hilarious that after all the machine-learning optimization done on people’s feeds over the years, all the promos handed out for a 1% improvement on this metric, every E7 and E8 who can claim x% of this or that, after all of that work, we might genuinely, and not even as a joke, be in the situation of needing to throw _other_ AI agents at this selfsame feed in order to extract any real value from it. What a world we’ve built.
The process of figuring out what to build has always been harder and more drawn out than the process of building it. I’m not arguing one way or the other on the Jevons paradox claims, but the steelman argument you’ve missed is that job losses can happen very quickly (“last week’s version of Claude Code is good enough that we can fire Joe and have Sarah do twice the work”), while the recovery can take a long time as the tech slowly diffuses through the economy and slowly spurs new ideas.
That’s always the line you’re listening for. Everything before that is bullshit, everything after is trying to justify the new product for that one change.
In favor of preferable outcomes of operational excellence as part of our customer success. Barf.
I keep hearing this from the naysayers, but I just think that they haven’t fully integrated unilateral phase detractors into their work effectively. Maybe you’re using the free retro encabulator tier so you don’t see the full capabilities, but some of us are already twice as productive.
I’m not sure that “killer robot” is the actual concern outside of media hyperbole. I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.
In a world where LLMs produce very convincing but subtly wrong output, this makes me uncomfortable. I get that warfare without AI is in the past now, but war and rules of engagement and AI output etc etc etc all seem fuzzy enough that this is not yet a good call even if you agree with the end goals.
> I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.
I'm sorry, you've just literally described a "killer robot" in more words.
Yeah, I guess my point is that “killer robot” evokes a Terminator-like image for a lot of people: something that marches around and kills of its own accord. I don’t like either one, but I don’t think they’re the same thing.
The only saving grace is that the killbots had a pre-set kill limit which I exceeded by throwing wave after wave of my own men at them until they simply shut down.
Dario himself said he was against using Claude to build a fully automated weapon because the technology was far from perfect and he didn’t want to hurt our soldiers or innocent people. I think his description matched a killer robot, and I don’t agree with his reasoning: it’s not as if military researchers lack the agency to find out for themselves what works and what doesn’t.
My kids have (insanely shitty) chromebooks from school and we are absolutely responsible for the cost if they break. We have to sign a release at the beginning of the year. Whether or not they’d be able to collect from the vast majority of families is a different question, granted. But the responsibility is there.
In practice, there's a huge difference in responsibility between buying a laptop and sending your kid to school with it versus signing a paper saying you're responsible if the school's device breaks. I'd also guess it depends on where you go to school.
My child's school-provided Chromebook was broken from the beginning, so clearly they're not paying that much attention.
Absolutely incompetent, but I don’t think that’s the cause here. I think Anthropic’s sin was publicly challenging the administration. They’re huge on optics. You can get away with anything as long as you praise and bow in public.
I like this feature and rely on it too. I get that some people hate it and that it can make some pretty insidious mistakes when drawing on memory, but I’ve found it valuable for providing implicit context when I have multiple queries about the same project.
Worth noting that Claude also has a memory feature and uses it intelligently like this, sometimes more thoughtfully than ChatGPT does (fewer “out of left field” associations, smoother integration).