SkyBelow's comments

Sometimes there really are two groups of people with different opinions who don't interact, but given the extent to which they occupy the same platform and still don't seem to see each other, I'm not sure it is really a fallacy even then.

First, it becomes possible for people who have a double standard to hide behind this. One can try to track an individual's stance, but a lot of internet etiquette seems to be based on the idea of not looking up a person's history to see if they are being contradictory. (And while being hypocritical doesn't necessarily invalidate an argument, it can help to indicate when someone is arguing in bad faith and is a waste of time, as they will simply switch between axioms to reach otherwise contradictory conclusions, whichever suits them at the moment.)

Second, I think it is possible to call out a group as hypocritical even when it is made of two subgroups. If one subgroup generally supports A and another generally supports B (assuming that A + B together is hypocritical), but each stops voicing that support when it would bring them into conflict with the other, the change in behavior indicates a level of mutual acceptance. Any single individual is too hard to measure this way (maybe they are tired today, or distracted, or didn't even see it), but as a group, we can still measure the overall direction.

So if a website ends up being very vocal in support of two contradictory positions, I think there is still a valid argument to be made about the contradiction, and the goomba fallacy is itself a fallacy.

Edit: Removed example, might be too distracting to bring up an otherwise off topic issue as an example.


I believe in A, I don't take a strong position on B, I am in coalition with people who believe in B and don't take a strong position on A, we both believe in C, D, E, and F, which some other people believe in with differing weights. Browbeating me about position B (or, the most useless kind of Internet banter, complaining about me and my hypocritical position on A+B to your friends who oppose both in a likewise contradictory way, in some venue I've never heard of) is not about making people reevaluate positions, it's about negative factionalism. The only reason it might not fit the familiar categorization of "fallacy" is that you would never use it in rational debate, either in arguing with another person or in reasoning out your own position.

>I believe in A, I don't take a strong position on B

But if A and B are opposed, then there is a question of why a strong position on A is allowed to coexist with a weak position on B, if the reason given for the strong position on A would also imply a strong position against B.

The underlying argument being implied (but rarely stated directly) is to question whether your reason for the strong position on A is really the reason you state, or whether that is just the reason that sounds good rather than the real basis for your belief.

In effect, the fact that you don't apply the stated reason to B, despite it fitting, is the counterargument: it suggests the stated reason isn't what actually supports A.

If there is an inconsistency in arguments being applied, any formal discussion falls apart and people effectively take up positions simply because they like them, contradictions irrelevant. This generally isn't a good outcome for public discourse.


That's just "why do you hate waffles" with more words.

This is the same logic as the "not a booby trap" booby traps, which sometimes do work out in favor of the one setting them if they weren't too open about it. If your commit message says you are talking about OpenClaw just to booby trap your repo, then I suspect it wouldn't fly, whereas if you gave it some plausible deniability, a lawyer would be able to get any suit or charges dismissed.

This is all under the assumption we eventually live in a world where booby trapping repositories becomes a legal issue. On one hand that feels silly. On the other hand, we have had far less sensible cases make it to court and there is a small kernel of similarity which the legal system might latch onto.


If someone doesn't want you to use AI on their repository, they state it. And if they want to "booby trap" it (Anthropic's logic), then it's their right; you have been warned.

I can't see how your right to use AI prevails over the right of anybody to write the string "OpenClaw", or any string forbidden by your AI provider.

Seriously, if the author hides it and tricks your AI agent into reading it, well, maybe. But otherwise, it's not even a question.


What's the chance that it is market motivated? That the companies most likely to succeed are those willing to break the rules (this isn't to say that breaking the rules makes one likely to succeed; you have to break the right rules and not the wrong ones, and that distinction is often unknown until after the fact).

This might mean that the companies that we see explode in popularity are those whose cultures are already biased in ways that don't consider negative outcomes, as the companies that did consider them already excluded themselves from exploding in the market (they might still be entirely successful startups, but at a vastly smaller scale of success).


It is absolutely market motivated, by the investor market. You can raise a great deal of capital simply by making exaggerated promises, then putting in the minimum effort to just about achieve them.

Technically LLMs can be run in deterministic mode as well, but I don't think that is enough. Even a deterministic LLM is too chaotic: small changes in the prompt or the general context can result in vastly different outputs. This makes me think we need some other factor that is stronger than (or maybe orthogonal to) determinism, a notion of being well-behaved or some other non-chaotic property, so that slightly different inputs don't lead to vastly unexpected results.
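
To make the point concrete, here is a toy sketch of what "deterministic mode" means (logits_fn stands in for a hypothetical model; real APIs expose this as temperature 0 or greedy decoding):

    import numpy as np

    def greedy_decode(logits_fn, prompt_ids, steps):
        # Always take the argmax token: the same prompt always
        # produces the same output, i.e. temperature-0 decoding.
        ids = list(prompt_ids)
        for _ in range(steps):
            ids.append(int(np.argmax(logits_fn(ids))))
        return ids

Fully deterministic, yet one changed token in prompt_ids can flip an early argmax and send the entire continuation down a different path. That divergence is the chaos I mean.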

Even that doesn't feel quite correct, because a compiler does seem quite chaotic: forget a semicolon and an otherwise 99.99% identical code base results in a vastly different output. But it is still a very understandable output, very predictable. So treating both compilers and LLMs as functions that map massive input strings to massive output strings, there is some property I can't quite define that compilers have and LLMs still lack (and the question is whether they'll always lack it). Whatever it is, it is something more than determinism.
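
One stab at making that concrete, in Python terms: a parser's reaction to a small bad change is drastic but localized and attributable:

    import ast

    ok = "x = 1\ny = 2\n"
    bad = "x = 1\ny = = 2\n"  # a one-character slip

    ast.parse(ok)  # fine
    try:
        ast.parse(bad)
    except SyntaxError as e:
        print(e.lineno, e.offset)  # the failure points at the exact spot

An LLM given two prompts differing by one token offers no such pinpointed account of why its outputs diverged.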


As much as I use AI, even for coding, I really do not like the argument. LLMs are too chaotic to be compilers. The descent from prompt to code has far too many branches, and even small requests begin to build up bad patterns.

There is some fun to consider in sufficiently advanced AI allowing this in areas where we are okay with things going wrong, but that seems a very limited domain, fit for fun and games and not for serious software that needs to be as correct as possible.

I can see vibe coding building very simple systems, and it will likely get better at one-off throwaways where edge cases don't matter because we have a one-time need to turn input X into output Y. But for systems where correctness matters, long-term support must be provided, and ease of adding new functionality is a serious consideration, it seems we are as far from having prompt-as-code as we are from AGI.


Even with all training data provided, won't it still be a black box? Unless one trains it exactly the same way, on the same data in the exact same order, potentially on the exact same hardware with specific optimizations disabled due to race conditions, etc., the final weights will be different. So there is no way to verify whether the original weights contain anything extra, and any released weights remain a black box, no? There isn't an equivalent of reproducible builds for LLM weights, even if all of this were provided, right?
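
A toy demonstration of one reason why (floating-point addition isn't associative, so a different reduction order, as you'd get from different hardware or thread scheduling, gives different results):

    import numpy as np

    rng = np.random.default_rng(0)
    xs = rng.standard_normal(10_000_000).astype(np.float32)

    # Same data, same seed; only the order of summation differs,
    # as it would across GPUs, thread counts, or kernel versions.
    print(xs.sum() == xs[::-1].sum())  # almost certainly False

Scale those rounding differences across billions of parameters and millions of optimizer steps, and the "same" training run lands on different weights, so released weights can't be checked against the data the way a reproducible build can be checked against its source.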


One issue with factual reporting is which facts get reported, given that public attention is a very limited resource. People consistently extrapolate from data without knowing if that data is representative. So if I show you 100 stories of people doing awful things on channel A and 100 stories of people doing awesome things on channel B, both will be factual, but one will have you living in fear of everyone while the other will inspire you. These are still biases.

One of the least political examples (to the extent possible given the topic) is stranger danger. Kids are safer than ever before, but due to the way stories are reported when bad things do happen to kids, parents are less trusting of strangers than ever before (and this despite the evidence that it isn't strangers who are the risk to kids). The sum total experience that media provides now leads to parents being far more fearful and restrictive of their children than past generations, all without any lies being told.

If all the police reports and research into stranger danger being a false narrative can't combat it, how will ideas with far less evidence to the contrary be countered? Should parents trust the news when it comes to the topic of stranger danger?


Wait, I thought we had moved on to raccoons on e-scooters to avoid (some of) the issues with Goodhart's Law coming into play.


I fall back to possums on e-scooters if the pelican looks too good to be true. These aren't good enough for me to suspect any fowl play.


It seems most takes on this are that the ends either always or never justify the means, but there is rarely discussion of the option that they sometimes can, and of developing a system for when they do and don't. At least in the general public discourse I've seen involving means and ends.


Can't one get randomness and determinism at the same time? Randomly generate the data, but do so when building the test, not when running the test. This way something that fails will consistently fail, but you also have better chances of finding the missed edge cases that humans would overlook. Seeded randomness might also be great, as it is far cleaner to generate and expand/update/redo, but still deterministic when it comes time to debug an issue.
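
A minimal sketch of that in Python, where my_sort stands in for a hypothetical function under test:

    import random
    import unittest

    def my_sort(xs):
        return sorted(xs)  # stand-in for the real implementation

    class TestMySort(unittest.TestCase):
        def test_against_builtin(self):
            rng = random.Random(12345)  # fixed seed: varied cases, repeatable runs
            for _ in range(100):
                data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
                self.assertEqual(my_sort(list(data)), sorted(data))

    if __name__ == "__main__":
        unittest.main()

Rotate the seed when you want fresh cases; pin it while chasing a failure.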


Most test frameworks I have seen that support non-determinism in some way print the random seed at the start of the run, and let you specify the seed when you run the tests yourself. It's a good practice for precisely the reasons you wrote.
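
Even without framework support the pattern is easy to hand-roll; a rough sketch (TEST_SEED is a made-up variable name):

    import os
    import random
    import time

    # Replay a previous run by exporting TEST_SEED; otherwise pick a fresh seed.
    seed = int(os.environ.get("TEST_SEED", time.time_ns() % 2**32))
    print(f"TEST_SEED={seed}")  # logged so a failing run can be reproduced
    rng = random.Random(seed)

A failing CI run then tells you exactly which seed to export to reproduce it locally.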


Absolutely for things like (pseudo) random-number streams.

Some tests can be at the mercy of details that are hard to control, e.g. thread scheduling, thermal-based CPU throttling, or memory pressure from other activity on the system.

