- you're actually listing a job
- the job pays money
- the job pays a sane amount of money for the region in which you're hiring
- you're actually trying to hire someone for the job
- you don't demand they jump through stupid hoops like write an essay about Stendhal's views of modernity or your favorite unrolled loop from the GNU C library
- you don't come back every month for the next three years advertising the same job without hiring anyone for it
- the business model isn't obviously delusional ("we're going to feed the global poor with AI, we just need to, uh, figure out some niggling little details")
- you reply to inquiries and applications
There are no incentive structures (besides possibly "posterity") to encourage anyone to see past their noses. In fact, hardly anyone at any level of any organization, public or private, is able to operate with a real long-term, sustainable outlook. They'd get shitcanned for trying to plan ahead, even if they were intellectually equipped for it.
I'm doing a vaguely similar thing - I have a 10" rack minilab [1] and I've vibe-coded an MCP server that runs in the cluster to introspect it, etc., but the main long-term goal is to set up some ML pipelines and maybe work toward formal verification via TLA+ or something. (_Not_ vibe-coding that... I'm thinking of moving into AI formal verification or compliance automation as a career move.)
I have a separate amd64 server with an RTX 2070 Super - which is obviously old and low-powered. Useful for some general ML stuff, but I don't think it's sufficient to run any non-trivial modern LLM.
I'm thinking about upgrading that GPU, but haven't committed to it or even really thought that hard about it.
The main server runs 3x RTX PRO 6000 (288 GB VRAM combined), power limited to 280 W each. Temps would allow cranking that up, but I'm adding some more fans first since the cards are stacked.
The second server is 2x Radeon RX 7900 XTX (48 GB VRAM combined). It's a fairly recent gaming PC that's being repurposed. The idea is to power-limit those cards too and run some overnight jobs with small/medium-sized models.
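Power-limiting a whole stack like this is easy to script. A minimal sketch, assuming `nvidia-smi` is on PATH and that you run it with root privileges; the helper names are made up for illustration (the AMD box would use `rocm-smi --setpoweroverdrive` instead, check your ROCm version):

```python
# Hypothetical helper for capping every GPU's power draw, as described above.
# nvidia-smi's -i (device index) and -pl (power limit in watts) flags are real;
# actually applying the limit requires root.
import subprocess

def power_limit_cmd(gpu_index: int, watts: int) -> list[str]:
    """Build the nvidia-smi invocation that caps one GPU's power draw."""
    return ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)]

def apply_limits(num_gpus: int, watts: int, dry_run: bool = True) -> list[list[str]]:
    """Cap every GPU to the same wattage; dry_run just returns the commands."""
    cmds = [power_limit_cmd(i, watts) for i in range(num_gpus)]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # needs root; fails loudly otherwise
    return cmds
```

For the setup above, `apply_limits(3, 280)` builds the three commands for the stacked cards; flip `dry_run=False` to actually apply them.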
Intel just released some 32 GB VRAM cards, but it sounds like support across AI tooling is still a bit rough at the moment.
@#$% that's impressive. A little above my budget. I appreciate your response. I read some of your other comments and will follow your career with interest.
My guess would be that they're in a profit vs. loss mindset and thus less inclined to be charitable, community-minded, etc. You're basically forcing them to reason purely selfishly and then wondering why they don't donate.
For instance, I wonder if (all other things aside) Nicky Case would have the same problem with Parable of the Polygons ( https://ncase.me/polygons/ ).
I think this is distinct from the cognitive vs. emotional empathy issue you raise.
Thanks for the link to the Parable of the Polygons - her work is amazing.
You might be right about the mindset. I have no idea about the demographic that plays the game; only that they seem to be repulsed by BuyMeACoffee. The heavy users (i.e. more than 12 hours on site within the last month) instantly stop playing once they see that banner for the first time in a return session. It hasn't been long enough yet to tell whether they come back. Maybe it's the equivalent of putting a "Remember to buy your mom a gift on her birthday" pop-up on a porn site ;-) It's a mojo killer for sure.
I wish I had a stronger background in behavioral economics/psychology or some good connections in academia. The "instantly stop playing" thing seems really extreme and thus _really_ interesting. I'd expect people to think "nah, screw you buddy" but not leave entirely.
Unfortunately, while I follow a number of people on LinkedIn who work in computational social science, I'm not aware of any of them who actually follow me. I'd love to get their opinions on it, but most of them are, I think, more in the area of economics than psychology or social psychology.
Oh man, that's super cool. I wonder if techniques like this will be used to investigate other games, possibly contributing to a sort of "family tree" of games, kind of like the ones we have in linguistics with conjectured languages like Proto-Indo-European, etc.
Superpowers + worktrees works really well for me. Superpowers is good at building comprehensive plans for implementing large features or refactors, and it asks its questions up front; worktrees then provide a safe place to actually perform that work.
It's not perfect; I've had some issues with Claude Code forgetting where it did things ("oh... it's not working because I'm not in the right directory"). I think it needs some architectural tweaks to function more reliably.