Projects that deny AI contributions will simply disappear once an agent can reproduce their entire tech stack from a single prompt, likely within a couple of years. (Not there yet, but the writing is on the wall at this point.)
Whatever the right response to that future is, this feels like the way of the ostrich.
I fully support the right of maintainers to set standards and hold contributors to them, but this whole crusade against AI contributions feels performative at this point, almost pathetic. It is the final stand of yet another class of artisans watching their craft be taken over by machines, and we won't be the last.
I do not think LLMs optimize for 'engagement'; corporations do. LLMs optimize for statistical convergence, and I don't find that this results in an engagement focus, though your opinion may vary. It seems like LLM 'motivations' are whatever a given writer needs them to be to make a point.
I have probably pulled out Postgres 10 or more times for various projects at work. Each time I had to fight for it, each time I won, and each time it did absolutely everything I needed it to do and did it well.
The vast majority of tasks you use a job processing framework for are I/O-bound side effects: sending emails, interacting with a database, making HTTP calls, etc. Those are hardly impacted by the fact that it's a single thread. It works really well embedded in a small service.
You can also easily spawn as many processes running the CLI as you like to get multi-core parallelism. It's just a smidge more overhead than the process pool backend in Pro.
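A minimal sketch of what that looks like, assuming a hypothetical `worker` command as the CLI entry point (the actual command depends on the library you're using):

    # Hypothetical: launch one copy of the worker CLI per core.
    # "worker" is a placeholder for whatever the library's CLI is called.
    import multiprocessing
    import subprocess

    procs = [
        subprocess.Popen(["worker"])
        for _ in range(multiprocessing.cpu_count())
    ]
    for p in procs:
        p.wait()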
I use celery when I need to launch thousands of similar jobs in a batch across any number of available machines, each running multiple processes with multiple threads.
I also use celery when I have a process a user kicked off by clicking a button and they're watching the progress bar in the gui. One process might have 50 tasks, or one really long task.
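For the batch case, the usual pattern is a celery group, and the GroupResult it returns is also enough to drive that progress bar; a rough sketch, with the broker URL and task body as placeholders:

    # Rough sketch: fan out a batch of tasks and poll progress.
    # Broker/backend URLs and the task body are placeholders.
    import time
    from celery import Celery, group

    app = Celery("jobs",
                 broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/1")

    @app.task
    def process(item_id):
        # ... the actual I/O-bound work goes here ...
        return item_id

    def run_batch(item_ids):
        result = group(process.s(i) for i in item_ids)()
        total = len(item_ids)
        while not result.ready():
            # completed_count() is what a GUI progress bar would poll.
            print(f"{result.completed_count()}/{total} done")
            time.sleep(1)
        return result.get()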
Edit: I looked into it a bit more, and it seems we can launch multiple worker nodes, which doesn't seem as bad as I originally thought.
I don't understand these posts. Do people not understand how venture capital works?
The majority of these companies know they are burning money; more than that, they knew they would be losing money at this point and beyond. That is the play. The thesis is: AI will dominate nearly everything in the near future, and the play is to own a piece of that. Investors are willing to risk their investment for a chance at a piece of the pie.
Posts that flail around yelling about companies 'losing money', without addressing that central premise, are just wasting words.
In short: do you think AI is not going to dominate nearly everything? Great, talk about that. If you do believe it will, then talk about something other than the completely reasonable and expected state of investors and companies fighting for a piece of the pie.
As a somewhat related tangent, people seem to not understand the likely cost trajectory of model training and inference:
* Models will reach a 'good enough' point where further training is mostly focused on adding recent data. (For specific market segments; I'm not saying we'll have a universal model anytime soon, but we'll soon have one that is 'good enough' at C++, and we might already be there.)
* Model architecture and infrastructure will improve and adapt. I work for a company that was among the first to use deep learning to control real-time kinetic processes in production scenarios. Our first production hardware was an Nvidia Jetson, we had a 200ms time budget for inference, and our first model took over 2000ms! We released our product running under 200ms *on the same hardware*; the only differences were improvements in the cuDNN library, some driver updates, and some domain-specific improvements to our YOLO implementation. Long story short: yes, inference costs are huge, but they are also massively disruptable.
* Hardware will adapt. Nvidia's cash machine will continue; right now Nvidia hardware is optimized for a balance between training and inference, whereas the newer TPUs are tilted more towards inference. I would be surprised if other hardware companies don't force Nvidia to offer a more inference-focused solution with 2-3x cost savings at some point in the next 5 years. And for all I know, perhaps a hardware startup will disrupt Nvidia; it would be one of the most lucrative hardware plays on the planet.
Focusing on inference cost is a dead end for understanding the trajectory of AI; understanding the *capability* of AI is the answer to understanding its place in the future.
However, "make me a python script that generates a random password" works.
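For what it's worth, what you get back from that prompt looks roughly like this, a minimal sketch using the standard library's secrets module:

    # Minimal sketch: print a random password from the command line.
    import secrets
    import string

    def generate_password(length: int = 16) -> str:
        # Draw from letters, digits, and punctuation using a CSPRNG.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        print(generate_password())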
Skill issue.