I have an even cheesier competitor, which randomly has a dragon on the lid (it would be a terrible choice for all but the wimpiest casual gaming... but it makes a good Home Assistant HAOS server!)
I can run my N100 NUC at 4W wall-socket power draw at idle. With turbo boost off it stays near that under normal load, topping out around 6W at full load, though it is then also terribly slow. With turbo boost enabled, power draw can reach 8-10W at full load.
Not sure how this compares to the Orange Pi in performance per watt, but for me it's already well into the territory of marginal gains, given the cost of dealing with ARM, a custom housing, adapters to keep the wall-socket draw efficient, etc. An efficient pico PSU to power a Pi or Orange Pi isn't cheap either.
Boost enabled.
WiFi disabled.
No changes to P-states or clock settings in the BIOS.
Fedora.
Applied all suggestions from powertop.
I don’t recall changing anything else.
Not the poster you're replying to, but I run an Acer laptop with an N305 CPU as a Plex server. Idle power draw with the lid closed is 4-5W and I keep the battery capped at 80% charge.
The N100/150/200/etc. can be clocked to use less power at idle (and capped for better thermals, especially in smaller or power-constrained devices).
A lot of the cheaper mini PCs seem to let the chip go wild, and don't implement sleep/low power states correctly, which is why the range is so wide. I've seen N100 boards idle at 6W, and others idle at 10-12W.
That's quite a lot to ask of the very heatsink that already causes the overheating problems I mentioned. A standard CPU cooler can't be mounted on this in any reasonable way; that's like parking a truck on a lawn chair.
I recommend Syncthing. It's very easy to self-host, but I actually use a SaaS for it: syncding.com. It gives me 100GB to 1TB of disk in which I can create folders and keep them synced across my two laptops, my phone, my server, etc. I have an Obsidian vault with Meld Encrypt to encrypt some files, a KeePassXC file I share across my devices, and my todo.txt.
It's simple to set up and will keep working forever, instead of paying for different providers that might shut down or raise their prices.
I originally built it for my own setup (multiple devices, encrypted files, etc.), and it kind of grew from there.
Not everyone has a NAS at home, so the idea behind syncding.com was to provide a simple, encrypted online Syncthing hub that just works without the usual setup — with built-in ZFS snapshots for versioning and recovery.
Always cool to see others using similar workflows.
Phoenix was a literal trap laid by the Conservative government just before leaving knowing it would be a shit show for the Liberals in the coming years.
I've been making skills from arXiv papers for a while. I have one for multi-object tracking, for example. It has a SKILL.md describing all the important papers (over 30) on the subject and a folder with each paper's full content as reStructuredText.
To feed arXiv papers to LLMs I've found that RST gives the best token-count/fidelity ratio: Markdown lacks precision, and LaTeX is too verbose. I have a script with each paper's URL, name, and date that downloads the LaTeX archives from arXiv, extracts them, converts them to RST, and adds them to the right folder. Then I ask an LLM to write a summary from the full text, after which I give other LLMs the full paper along with the summary and ask them to improve and proofread it. While this goes on I read the papers myself, and at the end I read the summaries; if I approve them, I add them to the skill. For each paper I also add notes on how well the algorithms described perform on common benchmarks.
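A rough sketch of the fetch-and-convert step described above (not the poster's actual script): arXiv serves each paper's LaTeX source at `/e-print/<id>`, and Pandoc can handle the LaTeX-to-RST conversion. The tarball assumption and the single-main-file guess are simplifications.

```python
import subprocess
import tarfile
import urllib.request
from pathlib import Path

def latex_source_url(arxiv_id: str) -> str:
    # arXiv serves the LaTeX source archive at /e-print/<id>
    return f"https://arxiv.org/e-print/{arxiv_id}"

def fetch_and_convert(arxiv_id: str, out_dir: Path) -> Path:
    """Download a paper's LaTeX source, extract it, and convert it to RST."""
    out_dir.mkdir(parents=True, exist_ok=True)
    archive = out_dir / f"{arxiv_id}.tar"
    urllib.request.urlretrieve(latex_source_url(arxiv_id), archive)
    src_dir = out_dir / arxiv_id
    with tarfile.open(archive) as tar:  # assumes a multi-file tar source
        tar.extractall(src_dir)
    main_tex = next(src_dir.glob("*.tex"))  # naive: assumes one top-level .tex
    rst_path = src_dir / f"{arxiv_id}.rst"
    subprocess.run(
        ["pandoc", "-f", "latex", "-t", "rst", str(main_tex), "-o", str(rst_path)],
        check=True,  # requires pandoc on PATH
    )
    return rst_path
```

Single-file submissions come back as a gzipped .tex rather than a tar, so a real script needs a fallback there.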
I highly recommend doing something similar if you're working in a cutting-edge domain. Also, I'd like to know if anyone has recommendations for improving what I do.
I've been working on ctoth/research-papers-plugin, the pipeline to actually get LLMs to extract the notes. I really like your insight re RST over Markdown! It sounds like we're working on similar stuff and I'll absolutely reach out :)
I'm gonna look at your plugin. My email is in my profile.
Honestly, I think Markdown with LaTeX code blocks would be the most efficient representation, but when doing the conversion with Pandoc I kept running into loss of information and sometimes even syntax errors.
This sounds like it would work, but honestly, if you've already read all 30 papers fully, what do you still need the LLM to do for you? Just the boilerplate?
I'm trying to make a Go library that implements a wide range of MOT algorithms and can gather metrics for all of them.
Reading all the papers once isn't the same as this. I find it very useful.
I can ask an LLM to do the basic implementations, then I can refine them (make the code better, faster, lighter on memory), then I can ask the LLM whether I'm still implementing the algorithms as they're described in the paper.
> then I can ask the LLM if I'm still implementing the algorithms as they're described in the paper.
Unit testing would save on tokens... unit testing is perfect for validating refactors, or when rewriting a project from one language to another: build the unit tests first.
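A minimal sketch of that idea in Python (a Go version would be analogous): pin the behavior of a building block against a reference implementation, then run the same checks after every refactor or port. The IoU function here is just a stand-in for whatever MOT component is being rewritten.

```python
def iou_reference(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def check_against_reference(candidate):
    """Fail if a rewritten IoU diverges from the pinned reference."""
    cases = [((0, 0, 2, 2), (1, 1, 3, 3)),   # partial overlap
             ((0, 0, 1, 1), (2, 2, 3, 3)),   # disjoint
             ((0, 0, 4, 4), (0, 0, 4, 4))]   # identical
    for a, b in cases:
        assert abs(candidate(a, b) - iou_reference(a, b)) < 1e-9
```

The reference stays frozen; every refactored or LLM-rewritten version just has to pass `check_against_reference` before it replaces the old code.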
For each paper, have your agent extract a three-sentence description into a description.md, then concat those with the paper names into an INDEX.md, which it should consult to find appropriate papers. Also: have your agent tag papers, then autogenerate the tagged collection on the filesystem. Then you get nice things like https://github.com/ctoth/Qlatt/tree/master/papers/tagged
Then add something to your {CLAUDE,AGENTS}.md that says: when working on something with relevant context supplied by papers, read the papers before doing the work. You can find all papers plus their descriptions in ./papers/INDEX.md, and papers by tag in ./papers/tagged.
Well, then something's wrong. I click on different pages in the documentation and the whole page gets rerendered. Seems like it's not delivering what's promised.