That's an interesting take, but I'm not sure 'easy to write' is the only advantage.
There is also a really good ecosystem of libraries, especially for scientific computing. My experience has been that Claude can write good C++ code, but it's not great at optimization. So curated Python code can often be faster than an AI's reimplementation of an algorithm in C++.
What I love about OpenClaw is that I was able to send it a message on Discord with just this github URL and it started sending me voice messages using it within a few minutes. It also gave me a bunch of different benchmarks and sample audio.
I'm impressed with the quality given the size. I don't love the voices, but it's not bad. Running on an Intel 9700 CPU, it's about 1.5x realtime with the 80M model. It wasn't any faster on a 3080 GPU, though.
yeah, we'll add some more professional-sounding voices, and also support for DIY custom voices. We tried to add more anime/cartoon-ish voices to showcase the expressivity.
Regarding running on the 3080 GPU, can you share more details on GitHub issues, Discord, or email? It should be blazing fast on that. I'll add an example of running the model on GPU too.
Oh that is a good use case. Don't connect to email and all that insecure stuff. But as a sandbox for "try this out and deploy a demo". Got me thinking!
I'm jealous. It took me far longer and much more frustration to get it to run.
Had to get the right Python version and make sure it didn't break anything that depended on the previous Python version. A friend suggested using Docker, so I started down that path until I realized I'd probably have to set the whole thing up there myself anyway. Eventually I got it to run, and I don't think I broke anything else.
Nowadays these frustrations shouldn't be a thing anymore. If the author had used uv, the script would have been able to install its own dependencies and just work.
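For anyone curious what that looks like: uv understands PEP 723 inline script metadata, so a script can declare its own interpreter and dependency requirements in a comment header. A minimal sketch (the version pin and filename are just examples):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []   # third-party packages go here, e.g. ["requests>=2.31"]
# ///

# Run as `uv run script.py`: uv reads the header above, fetches a
# compatible interpreter if needed, and executes the script in its own
# isolated environment; no manual venv or pip juggling.
import sys

print("running on", ".".join(map(str, sys.version_info[:3])))
```

Plain `python script.py` still works too, since the header is just comments to a regular interpreter.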
Even the built-in venv would've solved most of his issues. But I agree with him that Python documentation could be better, or that there should be a more unified system in place. It feels like every other how-to doc I read on setting something up in Python uses a different environment-management tool.
Conda was fantastic up to some point last year, but since then I've had quite a few unresolvable version issues with it. It's really annoying, especially when you're tying multiple things together and each requires its own set of mutually exclusive library versions. The latest case was GNU Radio plus some out-of-tree modules at the same time as a Bluetooth library. High drama. I eventually gave up and rewrote the whole thing in a different language, and it took less time than I had spent trying to duct-tape the Python solution together.
Because I need a new version of Python so rarely (years go by between upgrades), I don't remember all the arcane incantations to set everything up.
I did eventually do that, though, and I'm pretty sure I had to mess about with installing and uninstalling torch.
I dread using anything made in Python because of this. If the version of Python is incompatible, it's always annoying and never just works (otherwise it's fine).
I'd love to use something other than ROS2, if for no other reason than to get rid of the dependency hell and the convoluted build system.
But there are a lot of nodes and drivers out there for ROS already. It's a chicken-and-egg thing: people aren't going to write drivers unless there are enough users, and it's hard to get users without drivers.
It looks like their business model is to give away the OS and make money with FoxGlove-like tools. It's not a bad idea, but adoption will be an uphill battle. And since they aren't open source yet, I certainly wouldn't start using it on a project until it is.
ROS is, in my opinion, dying on the industry front.
* It is a dependency hell
* It is resource-heavy on embedded systems
* It is too slow for real-time, high-speed control loops
* Huge chunks of it are maintained by hobbyists and far behind the state of the art (e.g. the entire navigation stack)
* As robotics moves toward end-to-end AI systems, stuff needs to stay on GPU memory, not shuttled back and forth across processes through a networking stack.
* Decentralized messaging was the wrong call. A bunch of nodes running on a robot doesn't need decentralized infrastructure. This isn't Bitcoin. Robots talking to each other, maybe, but not pieces of code on the same robot.
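To make the last two bullets concrete, here is a rough stdlib-only Python sketch (the name `lidar0_demo` and the 8-byte "frame" are made up) of two components on the same host exchanging data through shared memory instead of serializing it through a networking stack; zero-copy transports do essentially this, with far more machinery:

```python
# "Publisher": allocate a named shared-memory block and write a frame into it.
from multiprocessing import shared_memory

frame = bytes(range(8))  # stand-in for a sensor frame
shm = shared_memory.SharedMemory(create=True, size=len(frame), name="lidar0_demo")
shm.buf[:len(frame)] = frame

# "Subscriber": attach to the same block by name and read it with no copy
# through any socket; in a real system this would be a separate process.
view = shared_memory.SharedMemory(name="lidar0_demo")
assert bytes(view.buf[:8]) == frame
print("received", bytes(view.buf[:8]).hex())  # → received 0001020304050607

# Cleanup: close both handles and free the block.
view.close()
shm.close()
shm.unlink()
```

The GPU-residency point is the same idea one level up: hand over a device pointer rather than copying tensors out through host memory and back.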
Can you say more about the nav stack? I thought nav2 was considered one of the better, more mature packages in ROS2, but it's not my area of expertise.
| As robotics moves toward end-to-end AI systems, stuff needs to stay on GPU memory, not shuttled back and forth across processes through a networking stack.
Very interesting. Nothing would prevent PeppyOS nodes from running on the GPU. The messaging tech behind PeppyOS is Zenoh (it's swappable), and it can run on embedded systems (PeppyOS nodes will also be compatible with embedded targets in the future). That said, at the moment the messaging system runs exclusively on the CPU.
What alternatives exist that could replace ROS? I imagine not all companies are using ROS, but I'm not exactly in that field, so I don't know. I always thought the quality of that code was mediocre at best.
Most companies in production are building their own purpose-built systems and not open-sourcing them. High-speed control loops usually run on some form of real-time OS, and AI-forward robots are starting to use fused CUDA kernels.
Fun fact: we've been using pixi to compile everything Python-related internally. In fact, PeppyOS was even started with pixi as a base layer (but we pivoted away from it, since the project is in Rust and Cargo is the de facto toolchain). We support uv by default for Python (since it's the most used these days), but pixi is already supported; see the note on this page: https://docs.peppy.bot/guides/first_node/
Hey, good points. We have plans to create a ROS2 bridge in the near future. We definitely won't be able to catch up with the huge ecosystem ROS2 has built over the years, but we will rewrite the annoying parts, that's for sure.
I recently filed a lawsuit in federal court, but because of the nature of the suit (an adversary proceeding in a bankruptcy case, and I want to cut my losses, knowing collection is going to be the problem) I decided to do it pro se.
I've used a lot of AI for this, along with a lot of my own research: reading documents from similar cases, verifying citations, etc. So far things are going well; I've won all the motions to date. But I'm applying critical thinking and carefully reviewing everything.
The real failure with slop filings is procedural, not technological. A competent attorney should never submit a brief built on case law they haven't verified. Legal practice has always relied on reading the sources, confirming relevance, and taking responsibility for interpretation.
There is a way to trigger a script when a budget is hit, but they don't make it easy: you set up a billing notification that triggers a script, which can then disable resources (like APIs) automatically.
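As a sketch of the shape of such a handler (modeled loosely on GCP's budget notifications, which deliver a JSON payload with `costAmount` and `budgetAmount` fields via Pub/Sub; the actual resource-disabling call differs per cloud and is stubbed out here):

```python
import base64
import json

def handle_budget_alert(event):
    """Decide whether to shut things down based on a budget notification.

    `event["data"]` carries a base64-encoded JSON payload, as Pub/Sub
    push messages do; the field names mirror GCP's budget format.
    """
    payload = json.loads(base64.b64decode(event["data"]))
    if payload["costAmount"] >= payload["budgetAmount"]:
        # Here you'd call the billing or API-management API to detach
        # billing or disable services; stubbed for illustration.
        return "disable"
    return "ok"

# Simulated notification: spend has blown past the budget.
evt = {"data": base64.b64encode(
    json.dumps({"costAmount": 120.0, "budgetAmount": 100.0}).encode())}
print(handle_budget_alert(evt))  # → disable
```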
Those budget alerts usually aren't instant, though; they only fire when the cloud provider gets around to reconciling your usage, some number of hours or even days after the damage is done. It's better than nothing, but with runaway spending you can still blow way past your limit.
It works with cheap, generic IP cameras over RTSP. It's pretty easy to get it working with a Raspberry Pi too.
I was using the synology surveillance app, but after their recent shenanigans, I wanted something I could self host and modify on my own.
I'm using it at my property with 14 cameras right now and I'm really happy with it. There's still some work to do, but it's integrated with ML object detection, and it can even call a VLM to describe the scene when certain things are detected.
This was my first attempt at a large-scale, heavily AI-assisted application. I need to update the screenshots and feature list in the README, but if you have any questions or want to get involved, let me know.
>Even if it's not "artistically worthwhile", the process is rewarding to the participant at the very least
I think that's the point, though. What OP did was rewarding to them, and I found it more enjoyable than a lot of music I've heard that was made by humans. So don't be a gatekeeper on enjoyment.
How am I a gatekeeper? I offered my own opinions; you are free to enjoy what you want or disagree with me. If you want to get into an objective discussion of why you find it more enjoyable than human works, or of what art is, we can do that, but I don't like the personal slights.
I’m genuinely curious how you feel about LLMs being trained on pirated material. Not being snarky here.
Your comment reflects the old “information wants to be free” ideals that used to dominate places like HN, Slashdot, and Reddit. But since LLMs arrived, a lot of the loudest voices here argue the opposite position when it comes to training data.
I’ve been trying to understand whether people have actually changed their views, or whether it’s mostly a shift in who is speaking up now.
Personally, my opinion doesn't matter. I'm a nobody who doesn't work in AI fields.
But as a pirate, I specialize in finding hidden, hard-to-find, or otherwise lost sources. They're not making anybody any money, and I absolutely do not sell anything that's not mine (freely given).
And having every commercial work available for ingestion is an amazing way to train an LLM. However, if you're going to use piracy at scale to train, you shouldn't be able to sell the LLM, or access to it, either.
And yeah, that wrecks every corporate LLM strategy. Boo fucking hoo.
Do creators need to be paid for the content they create? Ideally, yes! Do they deserve iron-fisted control of your hardware (DRM) to enact their demands? Fuck no!
Ideally, LLMs would be FLOSS: full weights published, lists of the content used to reproduce them, etc. We could prune bad content and add more good content. But the problem, again, is that whoever does this must violate copyright, because copyright as it's implemented is terrible.
In reality, I like the RIAA's congressional solution: you send BMI/ASCAP a check for however many plays you did and you're good. That could be extended to books and shows. If that were done, you could have a New-Flix service that literally has every show and movie in existence; you'd just pay a reasonable cost per month to access the whole of video humanity.
Why would that change anything? Copyright is still a tax on the whole of society for the benefit of rich people and corporations. It opposes innovation, evolution, and progress.
Maybe a short copyright would be fine (a fixed 10 years?), but copyright as-is seems indefensible to me.
> Copyright is still a tax on the whole of society for the benefit of rich people and corporations. It opposes innovation, evolution, and progress.
The original reason for copyright, patents, and trademarks made sense.
We want people to create and share. And unlike the old guild solutions from Europe, copyright and patents were a tradeoff to encourage the arts and science.
But what's a good tradeoff? That's the big copyright question. 17 years? 34 years? Life of the author? 75 years? What about individual non-commercial use? Or abandoned works?
And patents aren't even in scope here, but we see similar abuses against their raison d'être. Patents were supposed to entail a full reproduction of the invention. Now it's a game of how incomplete a filing can be while still getting protection. Or worse, really dumb shit has been patented, like 1-Click, or the XOR patent, or that asshole Chakrabarty patenting living organisms.
There were good reasons for a fair copyright and patent law for furtherance of the art and sciences. That narrative was lost long ago. Now, only the violators can really push ahead. And they can't talk about it.
(Trademark law has never really drawn many complaints, aside from trademarking a color. If you buy from XYZ company, you want to buy from them, not a counterfeit. And it traces back to coats of arms, again representing a family, or a charge.)
We recognize slop because it's slop. Just because a bunch of people are submitting slop to open source projects doesn't mean that AI can only generate slop.
His argument is basically a tautology: "People who don't know how to code write bad code; therefore, tools that help people who don't know how to code produce bad code."
I would love to see the US drone industry thrive; it's a major gap in both the consumer and military markets.
At the same time, several businesses have tried and are still trying to compete in this space. The amount of capital required is enormous if anyone is going to compete with DJI and the like. I personally know someone in this situation: they have a great product and some traction, but going from low-quantity bespoke solutions to cost-competitive large-scale manufacturing costs hundreds of millions.
And the problem is, investors don't trust that the ban is going to last forever. The government could reverse the ban at any time, and that puts the US company back in a position where they can't compete with DJI, so the investors lose money. And they know that.