Hacker News | heisenbit's comments

When Nena was singing about 99 balloons we thought it was hyperbole. Few understood she was a traveler from a future where soldiers were literally shooting down birthday balloons before progressing to drones. Scary to think about the next level of escalation.

Geez, I even recently watched a video about the making of that song, and completely forgot it was exactly about that. Now I'm even more depressed, but at least I have an upbeat riff stuck in my head.

Society's legal double standard:

- people can create new standards that will be applied retroactively

- lawmakers can create new laws which can not be applied retroactively


This is easy. Have your own standards based on your own reason, and navigate whatever arbitrary lowest-common-denominator standards the majority of society cooks up from time to time.

> lawmakers can create new laws which can not be applied retroactively

Still a courtesy:

Background: Mary Anne Gehris was born in Germany and came to the United States around age 1, growing up entirely in the U.S. as a lawful permanent resident (green card holder).

The Incident: In 1988, during a quarrel over a man, Gehris pulled another woman's hair. She was charged with misdemeanor battery. No witnesses appeared in court, and on the advice of a public defender, she pleaded guilty. She received a one-year suspended sentence with one year of probation.

Immigration Consequences: Years later, under the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 (IIRIRA), enacted during the Clinton administration but actively enforced during the Bush Jr. administration, her misdemeanor battery conviction was classified as an "aggravated felony" under federal immigration law. This made her deportable despite having no subsequent criminal record, being married to a U.S. citizen, and having a U.S. citizen child.

Outcome: Gehris avoided deportation when the Georgia Board of Pardons and Paroles granted her a pardon in March 2000, which removed the immigration ground for her removal.

Source Coverage: The story was detailed in Anthony Lewis's New York Times columns:

    "Abroad at Home: 'This Has Got Me in Some Kind of Whirlwind'" (January 8, 2000)
https://www.nytimes.com/2000/01/08/opinion/abroad-at-home-th...

These columns highlighted how IIRIRA's broad definition of "aggravated felony" swept up many long-term permanent residents with minor, often decades-old convictions, separating families and deporting people who had lived nearly their entire lives in the United States.

The Gehris case became a frequently cited example in immigration advocacy and legal scholarship about the harsh consequences of mandatory deportation provisions for lawful permanent residents. If you'd like, I can search for the original NYT articles or additional reporting on her case.


"If you'd like, I can search for the original NYT articles or additional reporting on her case."

No need but thanks for offering


There is demand for non-scalable code with no maintenance commitment, where smaller issues can be tolerated. This demand is currently underserved, as coding is somewhat expensive and focused on critical functions.

What are some examples of when buggy code can be tolerated?

From recent personal examples

We have somewhat complicated OpenSearch reindexing logic, and we had an issue where reindexing happened more regularly than it should. I vibecoded a dashboard visualizing in a graph exactly which index gets reindexed when, and into what. The code works, a little rough around the edges, but it serves the purpose and saved me a ton of time.
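
A minimal sketch of how such a dashboard might collect its data, assuming the cluster exposes the standard `_tasks` API (`GET /_tasks?detailed=true&actions=*reindex`); the task description format below matches what Elasticsearch/OpenSearch typically returns, but `parse_reindex_edges` and the sample payload are invented for illustration:

```python
# Sketch: turn an OpenSearch _tasks API response into (source, dest, nanos)
# edges suitable for graphing which index is being reindexed into which.
def parse_reindex_edges(tasks_response):
    edges = []
    for node in tasks_response.get("nodes", {}).values():
        for task in node.get("tasks", {}).values():
            # Reindex tasks carry a human-readable description like:
            #   "reindex from [old-index] to [new-index]"
            desc = task.get("description", "")
            if desc.startswith("reindex from ["):
                src = desc.split("[")[1].split("]")[0]
                dest = desc.split("[")[2].split("]")[0]
                edges.append((src, dest, task.get("running_time_in_nanos", 0)))
    return edges

# Invented sample payload in the _tasks response shape:
sample = {
    "nodes": {
        "n1": {
            "tasks": {
                "n1:123": {
                    "action": "indices:data/write/reindex",
                    "description": "reindex from [logs-2024] to [logs-2024-v2]",
                    "running_time_in_nanos": 5_000_000,
                }
            }
        }
    }
}
print(parse_reindex_edges(sample))  # [('logs-2024', 'logs-2024-v2', 5000000)]
```

Polling this periodically and feeding the edges into any graph widget gives the "what reindexes into what, and when" view described above.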

Another example: in an internal project we made a recent change where we need to send specific headers depending on the environment. Mostly GET endpoints, where my workflow is checking the API through the browser. The list of headers is long, but predetermined. I vibecoded an extension that lets you pick the headers, so I can keep my regular workflow rather than using Postman or cURL or whatever. A little buggy UI, but good enough. The whole team uses it.
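
The core of such an extension is just a lookup from environment to a predetermined header set; a minimal sketch of that logic (the environment names, header names, and values are all invented for illustration, not taken from the project described):

```python
# Hypothetical environment -> headers mapping, the heart of the extension.
HEADERS_BY_ENV = {
    "dev":     {"X-Env": "dev", "X-Debug": "1"},
    "staging": {"X-Env": "staging", "X-Feature-Flags": "all"},
    "prod":    {"X-Env": "prod"},
}

def headers_for(env: str) -> dict:
    """Return the headers to attach for the chosen environment."""
    try:
        return HEADERS_BY_ENV[env]
    except KeyError:
        raise ValueError(f"unknown environment: {env}") from None

print(headers_for("staging"))  # {'X-Env': 'staging', 'X-Feature-Flags': 'all'}
```

In a real browser extension the same table would feed header-rewriting rules (e.g. Chrome's declarativeNetRequest) rather than a Python function, but the predetermined-list design is the same.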

I'm not a frontend developer, and either of these would have taken me a lot of time to do by hand.


Leaf code. Anything that you won't have to build upon long-term and is not super mission critical. Data visualizers, dashboards, internal tools.

Pretty much anywhere an 80% working tool is better than no tool, and where without AI the opportunity cost of writing the tool would be too high.


If the code is being used by a small group of people who are willing to figure out and share workarounds for those bugs - internal staff, for example.

Aren’t you also paying internal staff for their time? Wasting their time is wasting your money.

I've been in these situations before. If there's a known bug in an internal tool that would take the development team a day to investigate and fix - aka $10,000s - it's often smarter to send around an email saying "don't click the Froople button more than once, and if you do tell Benjamin and he'll fix it in the database for you".

Of course LLMs change that equation now because the fix might take a few minutes instead.


> If there's a known bug in an internal tool that would take the development team a day to investigate and fix - aka $10,000s - it's often smarter to send around an email saying "don't click the Froople button more than once, and if you do tell Benjamin and he'll fix it in the database for you".

How much will Benjamin's time responding to those calls cost in the long run?


Hopefully none, because your staff will read the email and not click the button more than once.

Or one of them will do it, Benjamin will glare at them and they'll learn not to do it again and warn their coworkers about it.

Or... Benjamin will spend a ton of time on this and use that to successfully argue for the bug to get fixed.

(Or your organization is dysfunctional and ends up wasting a ton of money on time that could have been saved if the development team had fixed the bug.)
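
The tradeoff being argued here is just break-even arithmetic; a sketch, where every number is an assumption for illustration rather than anything stated in the thread:

```python
# Back-of-envelope break-even: fix the bug now, or keep the email workaround?
# All figures are illustrative assumptions.
fix_cost = 10_000             # one fully loaded dev-day of investigation + fix
incident_cost = 50            # Benjamin's time per accidental "Froople" click
incidents_per_month = 4       # how often someone forgets the email

months_to_break_even = fix_cost / (incident_cost * incidents_per_month)
print(months_to_break_even)   # 50.0 -- the workaround wins for years
```

Under these assumptions the workaround is cheaper for over four years; the calculation flips if incidents are frequent, if each one is expensive, or (per the comment above) if an LLM drops the fix cost to a few minutes of review.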


> development team a day to investigate and fix - aka $10,000s

What about the non-fictional 99.999999999% of the world that doesn't make $1000/hour?


Large companies are often very bad at organizing work, to the tune of increasing the cost of everything by a large multiple over what you'd think it should be. Most of that cost wouldn't be productive developer time.

It costs them single digit thousands instead.

The alternative is the staff having no software at all to help with their task which wastes even more of their time.

You are setting up to say "I wouldn't tolerate that" for any example given, but if you look at the market and what makes people actually leave, instead of what makes people complain, then basically anything that isn't life-and-death, safety-critical, big-money-losing, or data-corrupting is tolerable. There are plenty of complaints about Microsoft, Apple, Gmail, Android, and all kinds of third-party niche business systems.

[Edit: Dan Luu's "one week of bugs": https://danluu.com/everything-is-broken/ ]

All the decades people tolerated blue screens on Windows. All the software that regularly segfaulted years ago. The permeation of "have you tried turning it off and on again" into everyday life. The "ship sooner, patch later" culture. The refusal to adopt garbage-collected or memory-managed languages or formal verification over C/C++/etc., because some bugs are more tolerable than the cost, effort, and performance hit of changing. Display and formatting bugs, e.g. glitches in video games. Unhandled error conditions, like code that crashes if you enter blank parameters. Bugs in utility code that doesn't run often, like the installer.

One piece of software I installed yesterday told me to disable some Windows services before the install; then the installer tried to start the services at the end and couldn't, so it failed and exited without finishing the install. This reminded me that I already knew about that, because that buggy behaviour has been there for years, across at least two major versions, and I've tripped over it before.

Another one I regularly update tells me to close its running processes before proceeding with the install, but once it's in that state, it won't let me proceed, and it has no way to refresh or rescan to detect that the running process has finished. That's been there for years and several major versions as well.

One more famous example is from Rasmus Lerdorf, creator of PHP: "I'm not a real programmer. I throw together things until it works then I move on. The real programmers will say 'Yeah it works but you're leaking memory everywhere. Perhaps we should fix that.' I'll just restart Apache every 10 requests." I have a feeling something similar was admitted about 37signals and Basecamp (that it was common to restart Ruby on Rails processes frequently), but I can't find a source to back that up.
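
Restarting to mask leaks is a real, widely shipped pattern: Apache exposes it as MaxRequestsPerChild and gunicorn as max_requests. A toy sketch of the idea (the class, the fake "leak", and the numbers are invented for illustration):

```python
# Toy sketch of request-count-based worker recycling: instead of fixing a
# slow memory leak, throw the worker away every N requests, like Apache's
# MaxRequestsPerChild or gunicorn's max_requests.
class RecyclingWorker:
    def __init__(self, max_requests=10):
        self.max_requests = max_requests
        self.restarts = 0
        self.handled = 0
        self.leaked = []              # stands in for memory the handler leaks

    def handle(self, request):
        self.leaked.append(request)   # the "leak" we never bother to fix
        self.handled += 1
        if self.handled >= self.max_requests:
            self.restart()
        return f"ok: {request}"

    def restart(self):
        self.leaked.clear()           # a fresh process has no leaked memory
        self.handled = 0
        self.restarts += 1

worker = RecyclingWorker(max_requests=10)
for i in range(25):
    worker.handle(i)
print(worker.restarts, len(worker.leaked))  # 2 restarts, 5 leaked since last
```

The leak is never fixed; it just never gets the chance to accumulate past one worker lifetime, which is exactly the Lerdorf tradeoff.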


Points at the public sector

Any large enterprise. The software organisations write for themselves is pretty dire in most cases even without AI.

Accidents are not normal driving situations but edge cases.

Sort of; accidents are the absolute core of the product. They are rare, but they are the focus of the design.

By edge cases I mean scenarios like the lights going out in an underground garage; low vision due to colourful smoke or dust, or things like optical illusions or occlusion that a human would just need to remember.

Lidar can help, but not really enough to be worth it.


Lidar is by far the most accurate source of range data. You need to explain why Waymo and Zoox use lidar in direct contradiction to what you claim.

Urban operating domain combined with legacy approaches.

If I were designing a robotaxi 10 years ago I would use lidar; designing consumer vehicles for near-future L3, it's no longer the best use of resources. I prefer more compute and cameras for the money.

Our current issues are now scene understanding and navigation, followed by parking. We get very little value from LIDAR in the driving cases, so much so that we don't even use it for active nav on cars that have it, only for training and parking.


Are you claiming that the detailed 3D point clouds LIDAR provides aren't useful in scene understanding?

Yeah, not compared with the extra money being spent on compute directly. $200 gets you a fair amount of extra processing power, and that's if one LIDAR is even enough; with the solid-state style currently available we need several.

Things like when to change lanes, do I need to yield for that ambulance, or what is that pedestrian going to do, are not really improved by point clouds.

I still want the massive point clouds for validation and ground truth, but not for driving.


The Swiss cheese model would like to disagree.

That is a meme, and it does not matter whether it happened or not; it would be too rare to matter.

What matters is the lack of discipline and respect for boundaries (beyond traditional teen behavior), possibly caused by social deprivation in our social-app age. It is brought to the surface in the classroom, where teachers have considerably less power than before. Physical attacks, once unthinkable, are no longer rare.


Classroom 'management' and teens cannot be observed at the same time and space.

The problem is that these teens have not had meaningful interactions with technology at home, where there is roughly a 1:1 parent-to-kid ratio. Now try to get meaningfulness into kids where the ratio is 1:20+ in a classroom.


I suspect European courts would take a dim view of preventing export by switching off an existing mechanism that provided portability.

This almost sounds as if Waymo is pivoting from taxi service to automated-driving OEM?


Data centers in space are the logical progression from the multi-trillion-dollar business of M2M and edge computing. It removes all physical limits to investment.


You mean physical reality

