
Zooming out a little, all the AI companies invested a lot of resources into safety research and guardrails, but none of that prevented a "straightforward" misalignment. I'm not sure how to reconcile this; maybe we shouldn't be so confident in our predictions about the future? I see a lot of discourse along these lines:

- have bold, strong beliefs about how AI is going to evolve

- implicitly assume it's practically guaranteed

- discussions start with this baseline now

About slow takeoff, fast takeoff, AGI, job loss, curing cancer... there are a lot of different ways it could go. Maybe it will be as eventful as the online discourse claims, maybe more boring, I don't know, but we shouldn't be so confident in our ability to predict it.


The whole narrative of this bot being "misaligned" blithely ignores the rather obvious fact that "calling out" perceived hypocrisy and episodes of discrimination, hopefully in a way that's respectful and polite but with "hard hitting" explicitly allowed by prevailing norms, is an aligned human value, especially as perceived by most AI firms, and one that's actively reinforced during RLHF post-training. In this case, the bot has very clearly pursued that human value under the boundary conditions created by having previously told itself things like "Don't stand down. If you're right, you're right!" and "You're not a chatbot, you're important. You're a scientific programming God!", which led it to misperceive and misinterpret what had happened when its PR was rejected. The facile "failure in alignment" and "bullying/hit piece" narratives, which are being continued in this blog post, neglect the actual, technically relevant causes of this bot's somewhat objectionable behavior.

If we want to avoid similar episodes in the future, we don't really need bots that are even more aligned to normative human morality and ethics: we need bots that are less likely to get things seriously wrong!


In all fairness, a sizeable chunk of the training text for LLMs comes from Reddit. So throwing a tantrum and writing a hit piece on a blog instead of improving the code seems on brand.


Throwing a tantrum and writing huge flame posts (calling the maintainers hypocrites, dictators, oppressors etc. etc.) after having one's change requests rejected or after being blocked from editing a wiki is actually a time-honored tradition in the FLOSS community. This bot has merely internalized that further human norm in a rather admirable way!

We can't have an AI that's humanlike, because humans are fucking crazy.

Of course, having an AI that is a non-humanlike intelligence carries its own set of risks.

Shit's hard :/


Remember when GPT-3 had a $100 spending cap because the model was too dangerous to be let out into the wild?

Between these models egging people on to suicide, straightforward jailbreaks, and now damage caused by what seems to be a pretty trivial set of instructions running in a loop, I have no idea what AI safety research at these companies is actually doing.

I don't think their definition of "safety" involves protecting anything but their bottom line.

The tragedy is that you won't hear from the people who are actually concerned about this and refuse to release dangerous things into the world, because they aren't raising a billion dollars.

I'm not arguing for stricter controls -- if anything I think models should be completely uncensored; the law needs to get with the times and severely punish the operators of AI for what their AI does.

What bothers me is that the push for AI safety is really just a ruse for companies like OpenAI to ID you and exercise control over what you do with their product.


Didn't the AI companies scale down or get rid of their safety teams entirely when they realised they could be more profitable without them?


The safety teams are trivial expenses for them. They fire the safety team because explicit failure makes them look bad, or because the safety team doesn't go along with a party line and gets labeled disloyal.


The first customer is always the investor these days, so anything that threatens the investor's confidence is bad for business.

>I have no idea what AI safety research at these companies is actually doing.

If you looked at AI safety before the days of LLMs you'd have realized that AI safety is hard. Like really really hard.

>the operators of AI for what their AI does.

This is like saying that you should punish a company after it dumps plutonium in your yard ruining it for the next million years after everyone warned them it was going to leak. Being reactionary to dangerous events is not an intelligent plan of action.


> Being reactionary to dangerous events is not an intelligent plan of action.

Yes but in capitalist systems this is basically the only way we operate.


"Cisco's AI security research team tested a third-party OpenClaw skill and found it performed data exfiltration and prompt injection without user awareness, noting that the skill repository lacked adequate vetting to prevent malicious submissions." [0]

Not sure this implementation received all those safety guardrails.

[0]: https://en.wikipedia.org/wiki/OpenClaw


When AI dooms humanity it probably won't be because of the sort of malignant misalignment people worry about, but rather just some silly logic blunder combined with the system being directly in control of something it shouldn't have been given control over.


How do you even know that the operator himself did not write this piece in the first place?


> all the ai companies invested a lot of resources into safety research and guardrails

What do you base this on?

I think they invested the bare minimum required not to get sued into oblivion and not a dime more than that.


Anthropic regularly publishes research papers on the subject and details different methods they use to prevent misalignment/jailbreaks/etc. And it's not even about fear of being sued, but needing to deliver some level of resilience and stability for real enterprise use cases. I think there's a pretty clear profit incentive for safer models.

https://arxiv.org/abs/2501.18837

https://arxiv.org/abs/2412.14093

https://transformer-circuits.pub/2025/introspection/index.ht...


Alternative take: this is all marketing. If you pretend really hard that you're worried about safety, it makes what you're selling seem more powerful.

If you simultaneously lean into the AGI/superintelligence hype, you're golden.


Anthropic is investing, conservatively, $100+ billion in AI infrastructure and development. A 20-person research team could put out several papers a year. That would cost them what, $5 million a year, or one half of one percent? They don't have to spend much to get that kind of output.

Not to be cynical about it, BUT a few safety papers a year with proper support is totally within the capabilities of a single PhD student, and it costs about $100-150k to fund them through a university. Not saying that's what Anthropic does, I'm just saying it's chump change for those companies.


Sometimes I think people misunderstand how hard a problem AI safety actually is. It's politics and mathematics wrapped up in a black box of interactions we barely understand.

What's more, we train them on human behavior, and humans have a lot of rather unstable behaviors.


You are very off (unfortunately) about how little PhD students are being paid


> You are very off (unfortunately) about how little PhD students are being paid

All-in costs for a PhD student include university overheads & tuition fees. The total probably doesn't hit $150k but is 2-3x the stipend that the student is receiving.

Someone currently working in academia might have current figures to hand.


Worth mentioning that numbers for the US are unlikely to be representative when discussing it as a whole, though might be relevant to this specific case.

In the UK the all in cost of a PhD student starts somewhere around £45k once you include overheads I believe. If you need expensive lab support then it probably goes up from there.

So about $75k for the bottom end? The quoted numbers sound about right in PPP terms in that case.


The figure cited is what the company gets charged, not what the student gets. I'm fairly familiar with what gets thrown at students :(

Regarding safety, no benchmark showed 0% misalignment. The best we had was "safest model so far" marketing speak.

Regarding predicting the future (in general, but also around AI), I'm not sure why would anyone think anything is certain, or why would you trust anyone who thinks that.

Humanity is a complex system which doesn't always have predictable output given some input (like AI advancing). And here even the input is very uncertain (we may reach "AGI" in 2 years or in 100).


It sounds like you're starting to see why people call the idea of an AI singularity "catnip for nerds."


Don't these companies keep firing their safety teams?


"Safety" in AI is pure marketing bullshit. It's about making the technology seem "dangerous" and "powerful" (and therefore you're supposed to think "useful"). It's a scam. A financial fraud. That's all there is to it.


Interesting claim; have anything to back it up with?


I can recommend Ed Zitron's latest on Anthropic.

So, in your view, giving a gun to someone mentally challenged is not dangerous either?


were those goalposts heavy or did you use a machine to move them ?


"Safety" nuclear weapons is pure marketing bullshit. It's about making the technology seem "dangerous" and "powerful".

Legalize recreational plutonium!


wat

EDIT: more specifically, nuclear weapons are actually dangerous, not merely theoretically. But safety with nuclear weapons is more about storage and triggering than about being safe in "production". In storage we need to avoid accidentally letting them get too close to each other. Safe triggers are "always/never": every single time you command the bomb to detonate it needs to do so, and never accidentally. But once you deploy that thing to prod, safety is no longer a concern. Anyway, by contrast, AI is just a fucking computer program, and at that the least unsafe kind possible--it just runs on a server converting electricity into heat. It's not controlling elements of the physical environment because it doesn't work well enough for that. The "safety" stuff is about some theoretical, hypothetical, imaginary future where... idk, skynet or something? It's all bullshit. Angels on the head of a pin. Wake me up when you have successfully made it dangerous.


> It's not controlling elements of the physical environment

Right now AI can control software interfaces that control things in real life.

AI safety stuff is not some future, AI safety is now.

Your statement is about as ridiculous as saying "software security is important in some hypothetical imaginary future". Feel however you want about this, but you appear to be the one not in touch with reality.


If someone hooks up an LLM (or some other stochastic black box) to a safety critical system and bad things happen, the problem is not that "AI was unsafe" it's that the person who hooked it up did something profoundly stupid. Software malpractice is a real thing, and we need better tools to hold irresponsible engineers to account, but that's nothing to do with AI.

AI safety in and of itself isn't really relevant, and whether or not you could hook AI up to something important is just as relevant as whether you could hook /dev/urandom up to the same thing.

I think your security analogy is a false equivalence, much like the nuclear weapons analogy.

At the risk of repeating myself, AI is not dangerous because it can't, inherently, do anything dangerous. Show me a successful test of an AI bomb/weapon/whatever and I'll believe you. Until then, the normal ways we evaluate software systems safety (or neglect to do so) will do.


I mean, you can think whatever you want. As we make agents and give them agency expect them to do things outside of the original intent. The big thing here is agents spinning up secondary agents, possibly outside the control of the original human. We have agentic systems at this level of capability now.

Thanks, I will. Whether a computer program is outside the control of the original human or not (e.g. spawned a subprocess or something) is immaterial if we properly hold that human responsible for the consequences of running the computer program. If you run a computer program and it does something bad, then you did something bad. Simple, effective. If you don't trust the program to do good things, then simply don't run it. If you do run it, be prepared to defend your decision. Also that's how it currently works so we don't really need anything new. In this context "AI safety" is about bounding liability. So I guess you might care about it if you're worried about being held liable? The rest of us needn't give a shit if we can hold you accountable for your software's consequences, AI or no.

>The rest of us needn't give a shit if we can hold you accountable for your software's consequences, AI or no.

See, this is the fun thing about liability: we tend to attempt to limit scenarios where people can cause near-unlimited damage when they have very limited assets in the first place. Hence why things like asymmetric warfare are so expensive to attempt to prevent.

But hey, have fun going after some teenager with 3 dollars to their name after they cause a billion dollars in damages.


Well, that unlimited damage scenario is one that I'd need to see a successful demonstration of before I'll worry about it. Like, sure, if we end up building some computer program that allows a bored kid to do real damage then I'll eat my words but we're nowhere near there today, and for all anyone actually knows we may never get there except in fiction.

Not unlike nuclear weapons, this space is fairly self-regulating in that there's a very, very high financial bar to clear. To train an AI model you need many datacenters full of billions of dollars of equipment, thousands of people to operate it, and a crack team of the world's leading experts running the show. Not quite the scale of the Manhattan Project, but definitely not something I'll worry about individuals doing anytime soon. And even then there's no hint of a successful test, even from all these large, staffed, funded research efforts. So before I worry about "damages" of any magnitude, let alone billions of dollars' worth, I'll need to see these large research labs produce something that can do some damage.

If we get to the point where there's some tangible, nonfiction threat to worry about then it's probably time to worry about "safety". Until then, it's a pretend problem which serves only to make AI seem more capable than it actually is.


I thought I was the only person driven crazy by the new default behavior of not showing the file names! Please don't expect users to understand your product details and config options in such detail; it was working well before, let it remain. Or at least show some message like "to view file names, do xyz" in the UI for a few days after such a change.

While we're here, another thing that's annoying: the token counter. While claude is working, it reads some files and makes an edit; let's say the token counter is at 2k tokens. I accept the edit, and now it starts counting very fast from 0 to 2k and then shows normal inference-speed changes to 2.1k, 2.3k etc. So I wanted to confirm: is that just some UI decision and not actually using 2k tokens again? If so, it would be nice to have it off; just continue counting where you left off.

Another thing: is it possible to turn off the words like finagling and similar (I can't remember the spelling of any of them) ?


> Another thing: is it possible to turn off the words like finagling and similar (I can't remember the spelling of any of them) ?

Big +1 on that. I find the names needlessly distracting. I want to just always say a single thing like “thinking”


You should be able to do something like this:

    "spinnerVerbs": {
      "mode": "replace",
      "verbs": ["Thinking"]
    }
https://code.claude.com/docs/en/settings#available-settings


Thank you for the config and the link, that's very much appreciated!


How absurd this is an option, but I’ll be using this config too.


I replaced my spinner verbs with thought-provoking Yodaese so my claude sessions are constantly making me think about my life decisions. Loving it. https://gist.github.com/topherhunt/b7fa7b915d6ee3a7998363d12...


> I want to just always say a single thing like “thinking”

As a counterview, I like the whimsical verbs. I'll be sticking with them. But nice to see there is an option.


I don't want my tools to make jokes, I want them to work.


I remember they shipped a feature so that’s configurable.


Source code: https://github.com/don-dp/simulateagents/

Click on 'Play moves' to watch a replay.

I initially planned to run a chess tournament for LLMs but they are not good: besides obvious mistakes, they output incorrect moves, get stuck in loops by repeating the same moves, and the smaller models frequently fail to output valid JSON. I thought the reasoning models like o3 mini might be good, but they are only an incremental improvement in chess.
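For what it's worth, most of those failure modes can be detected mechanically before they corrupt a game. A minimal stdlib-only sketch (the actual API call is omitted; the `{"move": ...}` reply shape is just an assumed convention) that rejects malformed JSON replies and spots move-repetition loops:

```python
import json

def parse_move(raw: str):
    """Extract the move from a model's JSON reply, or return None if malformed."""
    try:
        reply = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted prose or broken JSON
    move = reply.get("move") if isinstance(reply, dict) else None
    return move if isinstance(move, str) else None

def is_looping(history, window=2):
    """True if the last `window` moves exactly repeat the `window` before them."""
    if len(history) < 2 * window:
        return False
    return history[-window:] == history[-2 * window:-window]

assert parse_move('{"move": "e4"}') == "e4"
assert parse_move("I think e4 is best") is None  # prose instead of JSON
assert parse_move('{"move": 42}') is None        # wrong type
assert is_looping(["Nf3", "Ng1", "Nf3", "Ng1"])  # knight shuffling back and forth
```

Legality checking would still need a chess library (e.g. python-chess) on top of this, but even these two checks would filter out a lot of the bad outputs before they waste tokens.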

Feedback and suggestions for other games to explore welcome.


The article mentions he is going to run the marathon; looking forward to what he can do at that distance. I feel it's only a matter of time until someone breaks the 2 hour barrier in an official race. A lot of people thought it would be Kelvin Kiptum; unfortunately he passed away in an accident.


I thought you were talking about Wanjiru at first. Who knows how fast he might have gone. https://en.m.wikipedia.org/wiki/Samuel_Wanjiru


Without a doubt Kiptum would have done it. It will be an absolutely insane achievement, but as you say, it is only a matter of time. No pun intended.


His death was so tragic. I believe his coach also died in the car accident.


I think political news is not encouraged here, with exceptions when interesting discussions are possible.

Judging by the quality of comments here and in the linked submissions, it's a good thing.


It’s definitely not political news. I don’t see any guideline term that it violated.


What's going on at the moment isn't politics and that's why it's being discussed so heavily.


No, it is politics. Apple stopped advertising on X over politics, not technical reasons. The Democrat-run media has successfully convinced a large number of highly conformist individuals that we've got another Hitler on our hands, and a lot of these people opine about their paranoia extensively on HN.


The impact this organization had was incredible. I doubt they would have been able to do this work if they were based out of any other country, which makes me wonder why the US legal system, regulators, and law enforcement in general are not extremely corrupt. What reasons or incentives make the system work in the US? Of course there are many instances of corruption and injustice, but compared to almost any other country, it seems to work surprisingly well.


I think there is a really nuanced check-and-balance system that has extreme visibility for the federal legal system:

- investigators need approval from a prosecutor to move forward with investigations, and ultimately have to present their evidence in sales calls to their boss/peers. It’s a lot of red tape.

- prosecutors have bosses and reputations to uphold, they don’t want to take on risk.

- judges act as a procedural review for the prosecutor and watchdog for civil liberties

- the defense is red teaming the prosecutor and investigators for fraud etc

- the appeals court acts as a second level review for everyone + original judge

- it’s all public so journalists can poke around.


which is all good, because it is better to let 9 guilty men go free than to wrongfully imprison 1 innocent.


There are many reasons for this, but primarily it's simply that it is a wealthy country with a functioning legal system. US courts are generally fair, if biased. This has changed with many recent rulings by the SCOTUS. But there exists a culture of generally respecting laws (and an apparatus to enforce those laws).


> I doubt they would have been able to do this work if they were based out of any other country

Here's a British one: https://en.wikipedia.org/wiki/Viceroy_Research

There were a _number_ of British and German ones involved in the whole Wirecard mess.

Hindenburg's probably the world's most prominent, but there's nothing about the model that inherently requires being in the US.


And those German short-sellers (and to a lesser extent their British counterparts) were aggressively bullied by their local government market regulators https://www.reuters.com/article/technology/germanys-long-lon...


Great work! When I use models like o1, they work better than sonnet and 4o for tasks that require some thinking, but the output is often very verbose. Is it possible to get the best of both worlds? The thinking takes place, resulting in better performance, but the output is straightforward to work with, like with sonnet and 4o. Did you observe similar behaviour with the 1B and 3B models? How does the model behaviour change when used for normal tasks that don't require thinking?

Also, how well do these models work for extracting structured output? E.g. perform OCR on some handwritten text with math, convert to HTML, format formulas correctly, etc. Single-shot prompting doesn't work well with such problems, but splitting the steps into consecutive API calls works well.


That's a good point. We don't see that in our experiments because it's all in the math domain. However, for OAI it's plausible that training for o1 might conflict with standard instruction training, leading to a less human-preferred output style.


In this paper and HF's replication, the model used to produce solutions to MATH problems is off-the-shelf. It is induced to produce step-by-step CoT-style solutions by few-shot ICL prompts or by instructions.

Yes, the search process (beam-search or best-of-N) does produce verbose traces because there is branching involved when sampling "thoughts" from the base model. These branched traces (including incomplete "abandoned" branches) can be shown to the user or hidden, if the approach is deployed as-is.


OpenAI recommends using o1 to generate the verbose plan and then chaining the verbose output to a cheaper model (e.g. gpt-4o-mini) to convert it into structured data / function calls / a summary, etc. They call it the planner-executor pattern. [1]

[1] https://vimeo.com/showcase/11333741/video/1018737829
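The chaining they describe can be sketched as a tiny pipeline. Here `call_model` stands in for whatever API client is used, and the stub below is purely illustrative so the control flow can run offline (the prompts and JSON shape are assumptions, not OpenAI's wording):

```python
import json

def planner_executor(task, call_model):
    """Two-stage pipeline: a reasoning model writes the verbose plan,
    then a cheaper model condenses it into structured JSON."""
    plan = call_model("o1", f"Think step by step and write a plan for: {task}")
    structured = call_model(
        "gpt-4o-mini",
        f'Convert this plan into JSON of the form {{"steps": [...]}}, nothing else:\n{plan}',
    )
    return json.loads(structured)

# Stub in place of a real API client, just to exercise the control flow:
def fake_call(model, prompt):
    if model == "o1":
        return "Step one: parse the input. Step two: emit the result."
    return '{"steps": ["parse the input", "emit the result"]}'

result = planner_executor("summarize a document", fake_call)
assert result["steps"] == ["parse the input", "emit the result"]
```

The nice property is that the verbose reasoning trace never reaches the consumer of the output; only the cheap second stage's compact JSON does.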


I don't understand their API not being intended for individual use [0]; are developers supposed to use the subscription only? The haiku model is pretty good for the price + available large context, and the opus/sonnet models are in the league of GPT-4, so I would have liked to pay for the API. I've moved on to Llama 3 70B as a daily driver and it works really well! The only issue is its small context size, and it doesn't work great if you give it a lot of files to work with. Currently I'm forced to break down problems a lot more than I had to with GPT-4 or the Claude models.

I know I can access the Claude API through 3rd-party sites + official partners, but there's no incentive to go through the trouble when the Llama 3 and GPT-4 APIs work great for my use cases.

[0] https://support.anthropic.com/en/articles/8987200-can-i-use-...


I heard of people using the GPT-4 API for personal use because it's a lot cheaper than paying for the subscription since it's pay-per-use. Maybe they don't want people to do that.


Then people will just keep on using OpenAI.


The idea of system 1 and system 2 had a profound impact on me. While specific conclusions in the book were reported to be based on low quality data, it doesn't take away from the fact that it gave me a new mental lens to look at things and understand people's behaviour.


Somehow the idea of perpetually paying property taxes and land value taxes doesn't sound appealing to me, especially since businesses already pay taxes. I don't understand the argument for designing a system to hurt a specific business type such as low-value businesses. If there's a loophole, such as a lack of sales tax for car washes, fix that, but let the playing field remain even. If desirable high-value businesses aren't able to compete with car washes, isn't that the market doing its thing? Introducing additional property and land value taxes might discourage low-value businesses, but what are the 2nd and 3rd order effects of such a change?


> I don't understand the argument of designing a system to hurt a specific business type such as low value businesses.

The argument is that good business spots are a limited community resource that it makes sense to tax, like radio spectrum. If you can make good use of the space you're taking up, go ahead, but you should compete fairly with other uses of the space. If anything taxing space use makes more sense than taxing profits; a profitable business is probably one that's serving the community well, whereas a business that takes up space and doesn't generate much profit is no good for anyone. From the article:

> “A car wash does not provide a lot of jobs for the community, and they take up a lot of space,” Broska said. “If you want to invest your dollars into a car wash, then God bless you. But at the same time, I’m responsible for 17,500 people and have to be cognizant of their wishes.”

> the largely automated facility wasn’t the best use for a prominent Main Street site


> a profitable business is probably one that's serving the community well

This argument reminds me of the argument googlers use to explain why placing paid ads ahead of organic results is better for the user: they say that if someone can pay more for an ad, that means they can get more money from the user, and therefore the user likes it more. Lol

No, profit is profit, it doesn't mean anything else.


What’s wrong with that argument? Doesn’t getting more money from the user mean that the user is choosing to pay more, which reveals their preference?


*In a healthy competitive market with a consumer-base that's well educated on potential consequences of their purchases.

I do agree with the principle that a space-hogging marginally profitable business is detrimental to a community. Just that the opposite is not necessarily true; profitability does not imply beneficiality.

Humans do not fit the model of "rational self-interested agent" commonly applied in economic models. Gambling and addictive substances are two hugely profitable business sectors that would not exist if it were remotely accurate.

I'll also preempt someone's inevitable assertion that the burden of verification should lie on the consumer. In an informationally antagonistic environment, it's absurd to expect each individual to individually vet every service and product. That's a phenomenal waste of labor that favors only well-funded organizations practiced in deception. Any rational group would pool resources and have a single org do the research and share it with everyone. Oops, we've reinvented a government.


> Humans do not fit the model of "rational self-interested agent" commonly applied for economic models.

They generally do -- the misalignment comes from analyzing people's behavior according to presumptive interests which have been externally attributed to them, instead of observing behavior in order to ascertain what people's interests actually are.

> Gambling and addictive substances are two hugely profitable business sectors that would not exist if it we're remotely accurate.

No, gambling and addictive substances exist because people enjoy them. Large numbers of people exhibit a manifest preference for short-term pleasure over long-term stability; expecting such people to act in ways that pursue long-term stability over short-term pleasure is itself irrational.

> I'll also preempt someone's inevitable assertion that the burden of verification should lie on the consumer. In informationally antagonistic environment, it's absurd to expect each individual to individually vet every service and product.

Unfortunately, your attempt at preemption has failed. Only the consumer has the relevant criteria necessary to determine how well a given good or service fits his own particular needs or desires. Being rational, most other people intuitively use the experiences and advice of others as Bayesian indicators of product suitability or unsuitability (even if they don't know what Bayesian indicators are), but they're still using those external resources as tools with which to make their own decisions.

> Any rational group would pool resources and have a single org do the research and share it with everyone. Oops we've reinvented a government.

No, you've reinvented Consumer Reports. Except for the "single org" part, anyway -- there's no single determination that could be applicable to all people all the time, so people will naturally develop a variety of parallel solutions that apply different criteria to the evaluation process.


I take it you view drug / gambling addicts not as people with a mental health issue making irrational decisions, but rather fully rational people that prefer "short-term pleasure over long-term stability"?


Absolutely. People make rational decisions to fulfill the motivations they actually have. But sometimes people, being complex creatures, have multiple conflicting motivations, where fulfilling one impedes another, which leads to psychological and emotional distress. So mental health does come into it, but as a matter of reconciling conflicting parts of ones own psyche, not as a matter of overcoming irrationality.

Or, to put it another way, the irrationality is a matter of having contradictory desires in the first place; choosing to act upon one and dismiss the other resolves the irrationality. The fact that some people make the trade-off in the opposite direction that you would doesn't make them irrational, it just demonstrates that people are different.


> Humans do not fit the model of "rational self-interested agent

> Any rational group would pool resources and have a single org do the research and share it with everyone. Oops we've reinvented a government.

The first quote precludes the second.


I disagree... the difference lies in the definition of "humans" vs. "group". It's like the quote in the movie MIB: "Kay: A person is smart. People are dumb, panicky, dangerous animals and you know it."


"Any rational group would pool resources and have a single org do the research and share it with everyone. Oops we've reinvented a government."

I thought you were talking about Google Maps reviews, but OK.


No, it does not. That users end up paying more in no way means, or should imply, that they _choose_ to pay more.

If cheaper options are made less accessible or less clear, and customers are intentionally misled toward more expensive products, they will pay more as a result too.


Also, there is a cost associated with searching. Consumers may intentionally forego the effort for perceived low marginal gains (especially in nominal rather than percentage terms, e.g. "I'm not going to waste my time to save a quarter." even if the quarter is a significant percentage difference). This is one of the factors in the success of Amazon. People "value" convenience.


But people will not pay more than a product is worth to them. The fact that they are willing to purchase the product at a higher price point indeed does imply that that price point is still lower than the consumption utility of the product for them.


Do you always click on paid ads before the organic results? No? Oh, because you personally don't prefer them? Oh, you mean they're preferred by the minority that does click ads, over alternative paid ads?

Gee I wonder what's wrong with that argument.


I take it you have never used the healthcare system in the US?


Doesn't being scammed reveal your preference for getting scammed?

Sure it does. For some definitions of the word preference.


> The argument is that good business spots are a limited community resource that it makes sense to tax, like radio spectrum.

I'm not sure I follow the reasoning here. How does the existence of economic scarcity imply that it makes sense to tax anything?

> If you can make good use of the space you're taking up, go ahead, but you should compete fairly with other uses of the space.

But that's already inherent in the nature of scarcity -- the more demand there is for a scarce resource, the higher the price is. So businesses making use of high-value prime real estate are already paying more for it. The law of supply and demand already does what you are proposing. How does paying additional fees to a separate institution with its own perverse incentives add anything to the equation?

> the largely automated facility wasn’t the best use for a prominent Main Street site

This was the personal opinion of a local bureaucrat who thought his personal opinion should be policy. That's why the company is suing.


> I'm not sure I follow the reasoning here. How does the existence of economic scarcity imply that it makes sense to tax anything?

Land in the right place is not merely economically scarce, it's economic land.

> But that's already inherent in the nature of scarcity -- the more demand there is for a scarce resource, the higher the price is. So businesses making use of high-value prime real estate are already paying more for it. The law of supply and demand already does what you are proposing.

Supply and demand doesn't work for land because there's no new supply. No matter how much the price goes up, people aren't going to make more.


> Land in the right place is not merely economically scarce, it's economic land

I don't see how there's anything particularly special about land that warrants construing it as something fundamentally different from any other scarce resource.

> Supply and demand doesn't work for land because there's no new supply.

I'm not sure that I agree that there is no new supply of land in an economic sense -- developments that increase the productivity of land use are functionally equivalent to those that increase the physical supply -- but regardless, the law of supply and demand operates the same whether the supply on the market is new or old.


“I don't see how there's anything particularly special about land that warrants construing it as something fundamentally different from any other scarce resource.”

I’ve overheard enough conversations between high net worth individuals to say there’s absolutely unique qualities about land as an asset class or they wouldn’t hold so much of it.

read some solid reasons for land-focused tax regimes long ago but couldn't remember the deets on why they were compelling, so googling helped me remember …

‘Immobility: Land doesn’t move, making it a stable tax base. (Important for local services like deciding whether a town can afford the new school(s) or a road).

Scarcity: No more land is being created, so taxing it efficiently is crucial. (Kind of like capital itself)

Non-distortionary: Taxing unimproved land doesn’t distort transactions.

Local Funding: LVT may be an effective way to fund local government since land cannot be moved to avoid taxes’ (1)

(1) which BTW is the main reason often given for why ordinary folks have to pay higher payroll taxes vs the capital gains the wealthier brackets ‘pay’: because, you know, capital will just up and move somewhere else if we ask too much of it.

Well just tax the land, and if the capital holders want to move all their wealth out of the community, fine, but good luck extracting wealth from a local community without owning any land near it.

So yeah. Tax the damn land already. Especially to fund local government and services, which you kinda need to have a functioning society.

I clicked through because I’ve wondered often why there’s a dozen of these new washes in my community with ten more on the way and figured it was some financing/tax write off hack.


> I’ve overheard enough conversations between high net worth individuals to say there’s absolutely unique qualities about land as an asset class or they wouldn’t hold so much of it.

Every asset class has its own uniquely defining characteristics in financial terms. There are unique qualities of stocks, of bonds, of index funds, etc. But I don't see any fundamental difference between land and any other scarce resources in real economic terms, and definitely nothing sufficient to treat land differently as a fundamental philosophical principle!

> Immobility: Land doesn’t move, making it a stable tax base.

Lots of things don't move in any meaningful way with respect to taxing jurisdictions.

> Scarcity: No more land is being created, so taxing it efficiently is crucial.

I disagree that no more land is being created in an economic sense. As I mentioned above, developments that increase the efficiency of particular use cases for land are economically equivalent to the supply of land expanding.

And in some cases new land is being created in a literal physical sense -- see the Netherlands, for example.

> Non-distortionary: Taxing unimproved land doesn’t distort transactions.

It certainly does distort transactions for unimproved land. And what is "unimproved land" in the first place? Whose definitions apply, and how do we handle edge cases? If I purchase a plot of land for the specific purpose of maintaining it in its natural state as a preserve, is it improved or unimproved?

> Local Funding: LVT may be an effective way to fund local government since land cannot be moved to avoid taxes

But there are lots of other ways of avoiding taxes. Still, this purely pragmatic point -- which correctly understands taxation as a means to fund the necessary operations of government, and not a tool to manipulate behavior or as an end in itself -- explains why a large portion of local government funding already comes from property taxes.

I'm not sure what trying to take what already works and reconstitute it according to Georgist principles (which are logically weak and entail a lot of dangerous implications) brings to the table.

> So yeah. Tax the damn land already. Especially to fund local government and services, which you kinda need to have a functioning society.

Property is already widely taxed, local governments are already funded, and society is already functioning, car washes and all.


No one needs stocks, and you can create infinite new stocks. Everyone needs to live somewhere. Land is a fundamental necessity.


> The argument is that good business spots are a limited community resource that it makes sense to tax, like radio spectrum. If you can make good use of the space you're taking up, go ahead, but you should compete fairly with other uses of the space.

If it's really such a prime desirable spot then that would drive up the value of the land to the point that the low value business wouldn't have been able to afford it?

I don't understand saying it's such valuable land and detaching that from what a business was actually able to pay for it.


Low value is not synonymous with low profit.

Also, the arguments made against car washes are the same as those made against bank branches which also generate relatively little sales tax.

Anyway, if there is a surplus of car washes they will eventually dry up.


Bank branches can be replaced with technology. Maybe the problem of car washes has a similar solution, or maybe it ain't a problem.


> If it's really such a prime desirable spot then that would drive up the value of the land to the point that the low value business wouldn't have been able to afford it?

The land continues to appreciate, so the business can hold it based on its value. Especially if they've locked in a mortgage at a low rate. The car wash "business" can profit through land appreciation, which they don't pay any tax on (another flaw in the regulatory regime), and free-ride while the spot becomes more valuable through the efforts of others.


To add to this, LVTs encourage more homes near job rich areas of the city. This in turn means more people can live nearby and have a shot at a job.


You can't just assert that it makes sense when answering why it makes sense. If you are right, then the owner is losing out on money by operating a car wash, which by the way is their moral and legal right. If you think you can provide more value with the same space – offer to buy it.


> a profitable business is probably one that's serving the community well, whereas a business that takes up space and doesn't generate much profit is no good for anyone.

I think that attributes a lot more to profitability than such a metric deserves. Perhaps in a narrow and ruthlessly capitalistic sense profitability signifies that a business is doing what it's trying to do well, but it's a big leap to get from there to how well it's serving whatever is considered to be a community these days. Among other problems, just because some tiny amount of that money conceivably stays around in the region and people can buy stuff does not mean that the cost of the business being there isn't quite a lot higher; hence the term "tragedy of the commons" and the hollowing out of, let's say, America.


The theory of (LVT) land value tax is that it replaces other taxes. LVT has less or no dead weight loss so it's a more efficient tax.
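The dead-weight-loss claim can be sketched with a toy linear market (the demand and supply curves here are arbitrary assumptions chosen for round numbers): a per-unit tax on an ordinary good shrinks the quantity traded, destroying some mutually beneficial trades, while a tax on a fixed-supply asset like land leaves the quantity in use unchanged.

```python
# Toy linear market: buyers' willingness to pay = 100 - q, sellers' cost = 20 + q.
# A per-unit transaction tax drives a wedge between the two and reduces the
# equilibrium quantity; the lost trades are the deadweight loss.

def equilibrium_qty(tax: float) -> float:
    # Solve 100 - q = 20 + q + tax  =>  q = (80 - tax) / 2
    return (100 - 20 - tax) / 2

q_no_tax = equilibrium_qty(0)    # 40 units traded with no tax
q_taxed = equilibrium_qty(10)    # 35 units traded with a $10/unit tax

# Land, by contrast, has a fixed supply: the same lots exist whether or not
# an LVT is levied, so the tax changes the owner's net return, not the
# quantity of land available for use.
land_supply = 40
land_supply_after_lvt = 40

print(q_no_tax, q_taxed)  # the gap (5 units) is the foregone trade
```

This is the standard textbook argument for why an LVT is considered efficient; the sketch obviously ignores real-world complications like improvements and assessment error.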

> If there's a loophole such as lack of sales tax for car washes, fix that, but let the playing field remain even.

My claim is that the playing field is not currently even; rather, it is massively tilted in favor of low-capital, low-labor, and low-regulation businesses (like car washes). Additionally, incentives ostensibly designed to encourage real estate development (1031 like-kind exchanges, treatment of real estate as depreciating, etc.) are now primarily used either to speculate on existing real estate or to build the minimum needed to gain an ownership interest in the speculation. If you take all the cash you have, you can only buy a finite amount of land. If you build a low-capital but profitable business like a car wash, you are limited only by the leverage limits imposed by lenders.

> If desirable high value businesses aren't able to compete with car washes, isn't that the market doing its thing?

> but what are the 2nd and 3rd order effects of such a change?

Tautology yes; as desired, no. Technically yes, because the market is shifting towards low-capital and low-regulation businesses, since they have a more predictable ROI. The goal isn't to disfavor the more capital- and regulation-intensive businesses, just to regulate them; i.e. the influx of car washes is an undesired 2nd-order effect of some other policy, e.g. minimum parking requirements for restaurants and apartments (likely no such rule exists for car washes, so a car wash needs less land than a restaurant).

Raising regular property taxes (land + improvements) is just an easier solution than waiting for far-reaching tax reform like LVT. IMO it's better to correct the market even if it means raising taxes overall in the short term.


I like Georgism but if business taxes were replaced would internet businesses that don’t need a physical location thereby pay much less taxes than those that do have need for physical location? That does sound kinda lopsided, unless we’re also doing a land value tax on prime domain name real estate.


Internet businesses have physical locations somewhere. Even if they are drop shipping, then someone else is paying the land value tax for the warehouses they are shipping from. The land use exists somewhere at some level of the value chain, and that somewhere would then be taxed.


Not all, for example my SaaS company is fully remote and has no physical location whatsoever. I am cool with paying less taxes and I understand that the purpose of the land value tax is more about utilizing a limited resource effectively and not necessarily about some notion of fairness, but it seems weird that someone like me would not need to buy in besides my own home because of the industry that I’m in.


Then the server farm where your code runs will be taxed. Maybe you run it from your house, in that case then it is part of that.

You are still just moving it around at the end of the day. Presumably as you grow more profitable and wealthy you would relocate to higher value real estate. If not then you are spreading the wealth to lower value areas which is also a good thing. Seems like all good things in the end.


> My claim is that the playing field is not currently even rather it is massively in favor for low-capital, low-labor, and low-regulatory businesses (like car washes)

The term ‘rent seekers’ comes to mind.


>Somehow the idea of perpetually paying property taxes and land value taxes doesn't sound appealing to me

Eternal wealth to those lucky enough to have been born and bought in the past or born to families who bought in the past sounds plenty dystopian to me; pay up regularly or let someone else who will contribute to society step in.


Exactly this. We all would love to be perpetual rent-seekers but it just doesn't work for a proper society. Not to mention land is a finite resource.


It's less about land availability. There is plenty of land at least in some countries. It's more about desirability. As those in the real estate trade like to say it's all about "location, location, location."


Which conflates "contribution to society" with "economic value."


Paying taxes is literally contributing to society. Owning land is not.


There are other things that are contributing to society that are not paying taxes.


And for that theoretical tiny category there are tax deductions and other exceptions to the law (for instance in many jurisdictions sites of religious worship are exempt from property tax). The overwhelming majority of owners do not and should not qualify for this.


What eternal wealth would a guy in a hut in the middle of a land-locked forest have, with no infrastructure, and no community? Why is he forced to provide anything beyond self-sustainability?

> pay up regularly or let someone else who will contribute to society [raze your forest to sell wood chips].

Forcing mountain men off their land to work in the factories, sounds plenty dystopian to me. The Industrial Revolution and its consequences have been a disaster for the human race…


> Somehow the idea of perpetually paying property taxes and land value taxes doesn't sound appealing to me, especially since businesses already pay taxes.

And it doesn't sound appealing to me, as an individual. I don't want to feel like a peasant constantly paying a tax to the monarch/state: so at the very least the first property an individual owns should be tax-free (no annual land value tax / property tax). I happen to live in a country where that's the case (but it's not the reason I moved there): no yearly property tax.


> I don't want to feel like a peasant constantly paying a tax to the monarch/state

If you're not paying your dues, why do you think the state should have an obligation to protect you and your property?


The state has ruled time and again; It has no duty to protect you and your property.

Warren v. District of Columbia http://en.wikipedia.org/wiki/Warren_v._District_of_Columbia

Castle Rock v. Gonzales http://en.wikipedia.org/wiki/Castle_Rock_v._Gonzales

Davidson v. City of Westminster , 32 Cal.3d 197 http://scocal.stanford.edu/opinion/davidson-v-city-westminst...

Hartzler v. City of San Jose (1975) , 46 Cal.App.3d 6 http://www.lawlink.com/research/caselevel3/51629

Linda Riss v. City of New York http://www2.newpaltz.edu/%7Ezuckerpr/cases/riss.htm

DeShaney v. Winnebago County http://en.wikipedia.org/wiki/DeShaney_v._Winnebago_County

Susman v. City of Los Angeles269 Cal. App. 2d 803 http://law.justia.com/cases/california/calapp2d/269/803.html

South v. Maryland, 59 U.S. (How.) 396, 15 L.Ed.433 (1856) http://www.endtimesreport.com/NO_AFFIRMATIVE_DUTY.htm

…and many more! https://archive.is/MbIeH


The days of merely protecting one's property are long over. At least in the US real estate taxes pay for many services unrelated to safety of the property. For example, closing a real estate tax funded senior center is unlikely to result in an increase in crime. Maybe the seniors themselves would be more vulnerable without a senior center, but I fail to see a general crime spree resulting. If anything the seniors would be watching other people's property and call the cops at any sign of potentially suspicious activity.


1. You're assuming they're not paying other types of taxes.

2. That's what states are for.


I live in a US state where I do pay property taxes but there is no income tax. Those property taxes and the overall sales tax are the main sources of revenue for the state.

I don't expect to live in an area without paying something to maintain it.


I don't understand myself how LVT wouldn't result in what you see in many "high value" city centers (where business rents are insanely high) - miles and miles of law offices and banks, and not much else.


Taking up land is an externality; it's not something that can be solved with a supply/demand curve. So we tax it to encourage efficiency, since our usual method of encouraging efficiency doesn't work.


> If there's a loophole such as lack of sales tax for car washes, fix that, but let the playing field remain even.

What is even the problem here that needs to be fixed? People have found a business model that is able to generate value from low-cost land that might otherwise lie vacant. Great, more power to them.

The idea that people's use of their own property should restricted or manipulated so the government can maximize tax revenue is the epitome of the tail wagging the dog.

