Same originating idea: "a language for AI to write in" but then everything else is different.
The features of both are quite orthogonal. Cairn is a general-purpose language with features that help in writing probably working code. Mog is more like "let's constrain our features so bad code can't do much, but trade that for good agent ergonomics".
Cairn is a crazy sprawling idea, Mog is a little attempt at something limited but practical.
Mog seems like something someone has thought about. No one has thought about Cairn; it's pure LLM hallucination, and the fact that it exists and can do a lot of stuff is just the result of someone (me) not knowing when a joke has gone too far.
It’s a common complaint of value investors that boards (especially in this post-Sarbox world) are solely focused on quarterly earnings reports, to the detriment of long-term strategy. One way to talk about the added and persistent value of some companies is to note that many of them have powerful, recalcitrant, or somehow anti-quarterly-cadence founders: Buffett, Zuck, you could make a list.
They would not be allowed to do so - too many shareholders. That’s why, e.g., SpaceX will be going public even though Elon Musk wants to keep it private.
Yes, but being focused on it being the highest it can possibly be _tomorrow_ versus the highest it can possibly be in ten years is a huge difference. Only some executives have the ability to take actions based on a long view without being replaced by the board. Usually founders and near-founders.
Right, so if you are already hyperfocused on tomorrow, then a focus on the end of the quarter is pretty much a wash in terms of short- versus long-term decision-making.
There are multiple examples that are easy to see once you realise that presenting information has a cost.
For example, having daily two-hour morning stand-ups provides more information for everyone involved. It's also worse for productivity and the work atmosphere.
Maybe. I am also not saying they need to say where the dollars came from, went to, or what they were for; just aggregate daily flows. Could you do some deductive reasoning to make an informed guess, especially when large sums are involved? Perhaps.
I am also of the (perhaps wrong) opinion that the majority of the important stuff leaks anyways, just not on a level playing field.
Financials aren't like technology or IP, where having the information open to all (perhaps with limited monopolies on usage, a la patents) is essential for the betterment of all mankind; they can be more like the order of battle in a war zone.
If your competitors know that your Florida subsidiary is running inefficiently and being subsidized by your successful business elsewhere, they can target their own operations in Florida, undercut you more than you can possibly sustain, and force you to exit that market entirely so that they can monopolize it.
Sure, but others can also do that to your competitor. Hence my comment that everyone's in the same boat. The playing field would be level and the players would adapt to the new environment.
Of course I realize it's possible it might introduce systemic problems that I'm unaware of.
Isn't this exactly what we should want from a market system? If your division in Florida is inefficient, then from the market perspective we should absolutely want competitors to enter the market and crush them.
I think the problem is that people have gotten so used to seeing capitalism from the companies' perspective (i.e.: profits good), and forgot that it is supposed to be all about the collective good. So if you think sustained high profits are good... then you have missed the whole point (the market should always be driving them towards near-zero).
Not OP, but at £JOB, I use Unity most of the time making demo and sales apps for clients to use at shows. The fact that it can build for basically every common platform and (most of the time) not need any special considerations for that makes it ideal for us. Sure, we could write web apps or something, but that's a different department.
I'm also not sure if it's still in the installer, but it used to ask you what you would be using Unity for. I don't remember most of the options, but one of them was "military simulations" or something like that, so they are aware of the possibility.
I think it’s complementary - superpowers seems more about what is being told to the agent.
The guardrails outside the agent guarantee it’ll behave a certain way. Still lets the agent write code but makes sure it also writes tests, and prevents boneheaded mistakes I was always telling it not to make.
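As a minimal sketch of what an "outside the agent" guardrail could look like, here is a hypothetical pre-commit-style check that rejects a change set touching source files without touching any tests. The `tests/` convention, the `.py` filter, and the function names are my assumptions for illustration, not anything from the original comment:

```python
import subprocess

def missing_tests(staged):
    """Return True if the staged file list touches source code without
    touching any tests. Assumed convention: tests live under tests/."""
    src = [f for f in staged if f.endswith(".py") and not f.startswith("tests/")]
    tests = [f for f in staged if f.startswith("tests/")]
    return bool(src) and not tests

def check_staged():
    """Run the guardrail against whatever the agent has staged in git."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    return missing_tests([line for line in out.splitlines() if line])
```

The point is that the check runs outside the model entirely, so it holds regardless of what the agent was told or how it interprets its instructions.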
A charitable read would suggest a slight touch of tongue-in-cheek.
To spell it out a bit:
Point 1: people simply interpreted events as proof that paganism works.
E.g. somebody made an offering to the gods and a year later won a war: proof.
Point 2: paganism had this transactional notion of gods giving and taking based on your offerings.
Christianity, on the other hand, does not promise anything good in this life (the only promise being: bear all the bad things in this life and you will be rewarded in the afterlife), so there can be no such proof.
> This made sense when agents were unreliable. You’d never let GPT-3 decide how to decompose a project. But current models are good at planning. They break problems into subproblems naturally. They understand dependencies. They know when a task is too big for one pass.
> So why are we still hardcoding the decomposition?
Sure, decomposition is already in the pre-training corpus, and then we can do some "instruction-tuning" on top. That's fine for the last mile, but that's it. I would consider this unaddressed, and I agree with the root comment.