The N100 is way larger than an OrangePi 5 Max.

There are quite a few x86-64 machines in the 70mm x 70mm form factor[1], which is close?

1: https://www.ecs.com.tw/en/Product/Mini-PC/LIVA_Q2/


Lmao the hero background. They photoshopped the pc into the back pocket of that AI-generated woman. (or the entire thing is AI-generated)

I have an even cheesier competitor, which randomly has a dragon on the lid (it would be a terrible choice for all but the wimpiest casual gaming... but it makes a good Home Assistant HAOS server!)

I dunno, I hear it's easy to put in your pocket and take the computer everywhere

welcome to online shopping..

This only has 4 GB of LPDDR4 memory max, 1GbE, and seemingly no PCIe lanes at all. The Orange Pi has much better specs.

I have a Bosgame AG40 (low end Celeron N4020 - less powerful than the N100 - but runs fanless)[1].

It's 127 x 127 x 50.8 mm. I think most mini N100 PCs are around that size.

The OrangePi 5 Max board is 89 x 57 mm (the spec sheet says 1.6mm "thickness", but I think that's a typo - the Ethernet port alone is taller than that).

Add a few mm for a case and it's roughly 2/3 as long and half the width of the AG40.

[1] https://manuals.plus/asin/B0DG8P4DGV


Also about half as efficient, if that matters, and with 1.5-2x higher idle power consumption (again, if that matters).

Sometimes easier to acquire, but usually the same price or more expensive.


I can run my N100 NUC at 4W idle, measured at the wall socket. With turbo boost off it also stays low under normal load, topping out around 6W at full load - but then it's also terribly slow. With turbo boost enabled, power draw can go to 8-10W at full load.

Not sure how this compares to the OrangePi in terms of performance per watt, but it's already pretty far into marginal-gains territory for me, given the cost of dealing with ARM, a custom housing, adapters to keep the wall-socket draw efficient, etc. An efficient pico PSU to power a Pi or Orange Pi isn't cheap either.


Which NUC do you have? A lot of the nameless brands on AliExpress draw 10 watts at idle.

I have a minisforum.

Boost enabled. WiFi disabled. No changes to P-states or anything else in the BIOS. Fedora. Applied all suggestions from powertop. I don't recall changing anything else.


Not the poster you're replying to, but I run an Acer laptop with an N305 CPU as a Plex server. Idle power draw with the lid closed is 4-5W and I keep the battery capped at 80% charge.

The N100/150/200/etc. can be clocked to use less power at idle (and capped for better thermals, especially in smaller or power-constrained devices).

A lot of the cheaper mini PCs seem to let the chip go wild, and don't implement sleep/low power states correctly, which is why the range is so wide. I've seen N100 boards idle at 6W, and others idle at 10-12W.


On the other hand, the RPi doesn't support suspend, so which wins depends on whether your application is always-on.

My Ace Magician N100 is 190x115mm

Big by comparison, but still pretty small


The ZimaBoard 2 has an N150 and is smaller.

Well... https://radxa.com/products/x/x4/

It has major overheating issues though, the N100 was never meant to be put on such a tiny PCB.


They also sell a heatsink for a mere $21 (on AliExpress), just in case you don't know how to fit a spare PC cooler onto it.

That's quite a lot for the very heatsink that still results in those overheating problems I mentioned. A standard CPU cooler will not be mountable on this in any reasonable way, that's like parking a truck on a lawn chair.

I recently tested a Palmshell with an N100 from Radxa too [0].

It performs well, but there is definitely a thermal problem compared to other N100-based systems I have.

[0] https://palmshell.io/slim-x4l


I recommend using Syncthing. It's very easy to self-host, but I actually use a SaaS for it: syncding.com. It gets me 100 GB to 1 TB of disk in which I can create folders and keep them synced between my two laptops, my phone, my server, etc. I have an Obsidian vault with Meld Encrypt to encrypt some files, a KeePassXC file I share across my devices, and my todo.txt.

It's simple to set up and will work forever, instead of paying for different providers that might shut down or raise their prices.


I should probably mention — I’m one of the co-founders of https://www.syncding.com

I originally built it for my own setup (multiple devices, encrypted files, etc.), and it kind of grew from there.

Not everyone has a NAS at home, so the idea behind syncding.com was to provide a simple, encrypted online Syncthing hub that just works without the usual setup — with built-in ZFS snapshots for versioning and recovery.

Always cool to see others using similar workflows.


Haha now your comment might make it seem like I was trying to advertise for you, hence the downvotes.

As a disclaimer, no I wasn't paid and have never been contacted before by Syncding, just a happy customer.


Phoenix was a literal trap laid by the Conservative government just before leaving, knowing it would be a shit show for the Liberals in the coming years.

Empty? Why call it that? It's proactive.

Also, it's naive to think they'd announce their intention to move somewhere. They try to cover it up and never tell a soul until it's a done deal.


I've been making skills from arXiv papers for a while. I have one for multi-object tracking, for example. It has a SKILL.md describing all the important papers (over 30) on the subject and a folder with each paper's full content as reStructuredText.

To feed arXiv papers to LLMs, I found that RST gives the best token-count/fidelity ratio. Markdown lacks precision. LaTeX is too verbose. I have a script with the papers' URLs, names and dates that downloads the LaTeX zips from arXiv, extracts them, transforms them to RST and then adds them to the right folder. Then I ask an LLM to make a summary from the full text, then I give other LLMs the full paper again along with the summary and ask them to improve and proofread it. While this goes on I read the papers myself, and at the end I read the summaries; if I approve them, I add them to the skill. I also add, for each paper, info on how well the algorithms described do in common benchmarks.
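The download/convert step might look something like this (a sketch, not my actual script - all names are illustrative; it assumes pandoc is on the PATH and that the arXiv e-print is a tarball of LaTeX sources):

```python
import pathlib
import subprocess
import tarfile
import urllib.request


def eprint_url(arxiv_id: str) -> str:
    """Build the arXiv e-print (LaTeX source) URL for a paper ID."""
    return f"https://arxiv.org/e-print/{arxiv_id}"


def ingest(arxiv_id: str, skill_dir: pathlib.Path) -> pathlib.Path:
    """Download a paper's LaTeX source, extract it, and convert the
    main .tex file to reStructuredText with pandoc."""
    work = skill_dir / arxiv_id
    work.mkdir(parents=True, exist_ok=True)
    archive = work / "source.tar.gz"
    urllib.request.urlretrieve(eprint_url(arxiv_id), archive)
    with tarfile.open(archive) as tar:
        tar.extractall(work)
    # Heuristic: assume the largest .tex file is the main document.
    main_tex = max(work.glob("*.tex"), key=lambda p: p.stat().st_size)
    out = work / f"{arxiv_id}.rst"
    subprocess.run(
        ["pandoc", str(main_tex), "-f", "latex", "-t", "rst", "-o", str(out)],
        check=True,
    )
    return out
```

In practice you'd loop this over the list of paper IDs and catch the cases where the e-print is a single file or the sources use custom macros pandoc can't handle.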

I highly recommend doing something similar if you're working in a cutting-edge domain. Also I'd like to know if anyone has recommendations to improve what I do.


I've been working on ctoth/research-papers-plugin, the pipeline to actually get LLMs to extract the notes. I really like your insight re RST over Markdown! It sounds like we're working on similar stuff and I'll absolutely reach out :)

I'm gonna look at your plugin. My email is in my profile.

Honestly I think that Markdown with LaTeX code blocks would be the most efficient representation, but when doing it with Pandoc I kept having issues with loss of information and sometimes even syntax errors.


Another format that's worth investigating is Asciidoc. It supports the richness of Docbook XML but has fewer quirks than rST in my eyes.

would it make sense to just go for pandoc instead?

This sounds like it would work, but honestly, if you've already read all 30 papers fully, what do you still need the LLM to do for you? Just the boilerplate?

I'm trying to make a Go library that implements a wide range of MOT algorithms and can gather metrics for all of them.

Reading all the papers once isn't the same as this. I find it very useful.

I can ask an LLM to do the basic implementations, then I can refine them (make the code better, faster, cut on memory use), then I can ask the LLM if I'm still implementing the algorithms as they're described in the paper.


> then I can ask the LLM if I'm still implementing the algorithms as they're described in the paper.

Unit testing would save on tokens... unit tests are perfect for validating refactors, or when rewriting a project from one language to another - build the unit tests first.
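For example, a minimal sketch (mine, not from the thread) of the kind of test that pins down an algorithm's behavior before a refactor - here, the IoU (intersection-over-union) metric most MOT trackers rely on:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def test_iou():
    assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0          # identical boxes
    assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0          # disjoint boxes
    assert abs(iou((0, 0, 2, 2), (1, 1, 3, 3)) - 1/7) < 1e-9  # partial overlap
```

Once the paper's known edge cases are encoded like this, any LLM-assisted rewrite either passes them or it doesn't - no extra tokens needed to re-check against the paper.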


It lets you filter out interesting papers more quickly.

sounds similar to "LLM Knowledge Bases" https://xcancel.com/karpathy/status/2039805659525644595

I’ve been meaning to build something similar. I will report back once I have something to show.

Thanks for sharing!


I am surprised you found RST better than markdown.

Does that even fit in the context? It seems like 30 papers worth of content would just overflow it.

For each paper, have your agent extract a three-sentence description and create a description.md, then concat those with the paper names into an INDEX.md which it should consult to find appropriate papers. Also: have your agent tag papers, then autogenerate your tagged collection on the filesystem. Then you get nice things like https://github.com/ctoth/Qlatt/tree/master/papers/tagged

Then something in your {CLAUDE,AGENTS}.md that says: when working on something with relevant context supplied by papers, read the papers before doing the work. You can find all papers plus their descriptions in ./papers/INDEX.md and papers by tag in ./papers/tagged


What is RST?


I really don't like OpenCode. One thing that really irritated me is that on mouse hover it selects options when you're given a set of choices.

It shows issues now. Probably not when the person you're replying to wrote their comment.

11 minutes elapsed between the comments. There's going to be some lag between a problem being reported and the status page going live when a system is breaking.

You can use almost any model with Claude Code.


that doesn't make sense. how?


Here's how to use MiniMax v2.7 for example: https://platform.minimax.io/docs/token-plan/claude-code

You just add this to your ~/.claude/settings.json:

  {
    "env": {
      "DISABLE_AUTOUPDATER": "1",
      "ANTHROPIC_BASE_URL": "https://api.minimax.io/anthropic",
      "ANTHROPIC_AUTH_TOKEN": "YOUR_SECRET_KEY",
      "API_TIMEOUT_MS": "3000000",
      "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
      "ANTHROPIC_MODEL": "MiniMax-M2.7-highspeed",
      "ANTHROPIC_SMALL_FAST_MODEL": "MiniMax-M2.7-highspeed",
      "ANTHROPIC_DEFAULT_SONNET_MODEL": "MiniMax-M2.7-highspeed",
      "ANTHROPIC_DEFAULT_OPUS_MODEL": "MiniMax-M2.7-highspeed",
      "ANTHROPIC_DEFAULT_HAIKU_MODEL": "MiniMax-M2.7-highspeed"
    }
  }

ah, 'almost'. i want to use codex.

Well, then something's wrong. I click on different pages in the documentation and the whole page gets rerendered. Seems like it's not delivering what's promised.


I did it correctly with a mouse and was rejected. Then I drew a straight line and was rejected. Then I closed the tab.

