This isn't true in general, and not even true in many specific cases, because a great many writers have described the process of writing in detail, and all of that is in their training data. Claude and ChatGPT very much know how novels are written, and you can go into Claude Code and tell it you want to write a novel and it'll walk you through quite a lot of it -- worldbuilding, characters, plotting, timelines, etc.
It's very true that LLMs are not good at "ideas" to begin with, though.
Professional writer here. On our longer work, we go through multiple iterations, with lots of teardowns and recalibrations based on feedback from early, private readers, professional editors, pop culture -- and who knows. You won't find very clear explanations of how this happens, even in writers' attempts to explain their craft. We don't systematize it, and unless we keep detailed in-process logs (doubtful), we can't even reconstruct it.
It's certainly possible to mimic many aspects of a notable writer's published style. ("Bad Hemingway" contests have been a jokey delight for decades.) But on the sliding scale of ingenious-to-obnoxious uses for AI, this Grammarly/Superhuman idea feels uniquely misguided.
The distinction being made is the difference between intellectual knowledge and experience, not originality.
Imagine interviewing a particularly diligent new grad. They've memorized every textbook and best-practices book they can find. Will that alone make them a senior+ developer, or do they need a few years learning all the ways reality is more complicated than the curriculum?
You can have perfectly good code that is perfectly easy to understand but nevertheless _does not do what you intended it to do_. That is why tests exist, after all.
Ireland imports less than 10% of its electricity from the UK. The UK _already_ decommissioned its coal-based electricity production. The UK imports roughly 14% of its electricity, and most of those imports are from nuclear and hydroelectric power.
No, it isn't. Power in a grid is fungible, so grids operate on consumption-based accounting. Britain continues to import at times from countries still burning coal. As such, Ireland is not free of coal dependence. It's really that simple. It is accurate for Ireland to say she no longer directly burns coal, no longer operates coal power, but the common understanding of "coal-free" is, "we are no longer dependent on coal for our lights to turn on." That simply isn't the case.
The way to think about this is, "If the grid had zero reserves and coal cut off, who could POSSIBLY go down?" You may figure this is contrived, but during a multi-day dunkelflaute, Ireland needs her interconnects. Wind is then likely low across much of Europe, meaning the Netherlands and Germany ramp dispatchable capacity, including German lignite.
I saw nobody making those arguments. Most people were thinking that Ireland doesn't burn coal anymore. People who think or care about this stuff know that interconnects exist.
We've been experimenting with Claude Code handling Jira tickets and opening PRs -- we're starting with Opus. It costs about $1 per PR that gets merged -- how much does it cost to have a software engineer do that PR? That's your price sensitivity. It will only get cheaper as models get more efficient and people get better at using them, though.
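To make the price-sensitivity point concrete, here is a back-of-envelope comparison. The $1-per-merged-PR figure is from the comment above; the engineer rate and hours per PR are purely illustrative assumptions, not measured data.

```python
# Hypothetical cost comparison: agent-merged PR vs. engineer time.
# Only AGENT_COST_PER_MERGED_PR comes from the comment; the rest are assumptions.
AGENT_COST_PER_MERGED_PR = 1.00   # USD, as quoted above
ENGINEER_HOURLY_COST = 100.00     # USD, assumed fully loaded rate
HOURS_PER_SMALL_PR = 2.0          # assumed effort for a comparable ticket

engineer_cost = ENGINEER_HOURLY_COST * HOURS_PER_SMALL_PR
ratio = engineer_cost / AGENT_COST_PER_MERGED_PR

print(f"Engineer: ${engineer_cost:.2f} per PR vs. agent: ${AGENT_COST_PER_MERGED_PR:.2f}")
print(f"Agent is ~{ratio:.0f}x cheaper under these assumptions")
```

The exact multiplier is not the point; even with very conservative assumptions the gap is large enough that the binding constraint becomes review quality, not cost.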
Managers of firms care about impact on financials. They don't care about the metrics you are measuring / gaming. Ultimately all 'progress' has to show up in the cash flows.
Are you taking on more cost-reduction projects and more revenue-generating projects? Are you actually delivering? Do customers perceive you to be as trusted as before? Those are the only things that matter. 'Show me the money'.
To me this is akin to the discussion re. Scrum, agile etc. Who cares? Show me the money.
I had claude build me something similar for my own autonomous agent system, because I was irritated at how much friction Jira has. I suspect a lot of people will do this.
You're being downvoted, but if Anthropic is going to deploy Claude for decision making in target prosecution, it is clearly a "Caesar's wife must be above suspicion" moment. Association implies guilt unless proven otherwise.
I think in the long term, if an LLM can't use a tool, people won't stop using LLMs, they'll stop using the tool.
We are building everything right now with LLM agents as a primary user in mind and one of our principles is “hallucination driven development”. If LLMs hallucinate an interface to your product regularly, that is a desire path and you should create that interface.
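The desire-path idea above can be sketched in a few lines: log the nonexistent endpoints that agents keep requesting, and surface the ones hallucinated often enough to be worth building. All names here are hypothetical illustrations, not a real API.

```python
# Minimal sketch of "hallucination driven development": count 404s from
# LLM-agent traffic so frequently hallucinated routes surface as candidates.
from collections import Counter


class DesirePathLog:
    """Tracks agent requests to routes that don't exist yet."""

    def __init__(self):
        self.misses = Counter()

    def record_miss(self, method: str, path: str) -> None:
        """Call this from your 404 handler for agent-originated requests."""
        self.misses[(method, path)] += 1

    def candidates(self, threshold: int = 3):
        """Routes hallucinated at least `threshold` times, most frequent first."""
        return [route for route, n in self.misses.most_common() if n >= threshold]


log = DesirePathLog()
for _ in range(5):
    log.record_miss("GET", "/api/v1/tickets/search")  # agents keep guessing this
log.record_miss("POST", "/api/v1/frobnicate")         # one-off noise, filtered out

print(log.candidates())
```

The threshold filter is the key design choice: a single hallucinated route is noise, but the same route guessed by many independent agents is a desire path worth paving.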
If you a) know what you are doing and b) know what an llm is capable of doing, c) can manage multiple llm agents at a time, you can be unbelievably productive. Those skills I think are less common than people assume.
You need to be technical, have good communication skills, have big picture vision, be organized, etc. If you are a staff level engineer, you basically feel like you don’t need anyone else.
OTOH I have been seeing even fairly technical engineering managers struggle because they can't get the LLMs to execute, because they don't know how to ask them what to do.
it's like that '11 rules for showrunning' doc where you need to operate at a level where you understand the product being made, and the people making it, and their capabilities, in order to make things come out well without touching them directly.
If you can do every job + parallelize + read fast, and you are only limited by the time it takes to type, Claude is remarkable. I'm not superhuman in those ways, but in the small domains where I am it has helped a lot; in other domains it has ramped me to 'working prototype' 10x faster than I could have alone, but the quality of the output seems questionable and I'm not smart enough to improve it.
How is that supposed to work? Humans are notoriously poor at multi-tasking. If you spend all day context switching between agents you’re going to have a bad time.
I have always had ADHD and as a consequence have a decades long backlog of things that I want to do “some day”, and Claude just removes all the friction from going from idea to execution. I am also a software engineer, so basically for me it is like having a team of developers available 24 hours a day to build anything I want to design.
I have built and thrown away a half dozen projects ideas and gotten one into production at work in just the last few months.
I can build a POC for something in the time it would take me to explain to my coworkers what I even want. An MVP takes as long as what a POC used to take.
The thing that really unlocks stuff for me is how fast it is to make a cli/tui/web ui for things.
As a fellow ADHD'er who is also old and has been out of coding for a decade, after a decade and a half of coding, wholeheartedly agreed. It's great to just get shit done and abandon it if needed. Feels much better than spending 6 months and then abandoning.
If you want Anthropic to make a new Slack, just ask Claude to write it for you. It wrote me a Trello clone in 15 minutes. Why bother with a SaaS? You can build your own perfect chat system in a weekend.