I took my introductory college writing classes at a college I can’t name-drop without sounding like a jerk, which also did a bunch of LLM research over the years. We used a TON of em dashes in our writing. It’s no mystery, to me, where that stylistically prevalent quirk comes from. I’ve definitely been accused of being an LLM bot.
> I tried talking to my children about leaving as clean of a footprint on the internet as one can in anticipation of future people/systems taking that into consideration.
I don’t think you’re wrong, but the fact that people consider it inevitable we’ll all have an immutable social acceptance grade that includes everything from teenage shitposts to things you said after a loved one died, or getting diagnosed with cancer, makes me regret putting even a moment of my professional energies towards advancing tech in the US.
I think he's wrong, and I'm willing to say that. The fundamental attribution error is well documented, and it takes real effort to move beyond it. For anyone who posts a comment, easy attribution later means you must future-proof your words. That is not possible, and it is extremely suppressive of self-expression.
For example:
"Ellen Page is fantastic in the Umbrella Academy TV show"
Innocent, accurate, supportive, and positive in 2019.
The same comment read after 1 Dec 2020 (when Elliot Page came out as trans): insensitive, demeaning, inaccurate.
> That is not possible and it is extremely suppressive to express yourself.
There's also the fact that you cannot predict how future powers will view past comments - for instance, benign political views from 20 years ago could become "terroristic speech" tomorrow.
I operate by a simple, general rule - I don't often say anything online I wouldn't say directly to someone's face in real life.
I can be a rude prick online sometimes, but I can be in real life too. Basically, the reason I do this is that I never want it to be some huge surprise IRL if someone sees what I write online and thinks, "wow, I didn't know that about him." I'm pretty much the same online and IRL. For some reason this seems to matter, at least in the past when people have tried to, like, send employers stuff I may have written online. The reaction is, "oh, yea, we knew that already about him."
Nothing terrible, maybe slightly embarrassing, but you know how online spaces can be. Just be yourself, basically; at least I try to be.
Your framing is interesting. You may feel that you can’t change who you are in real life, but people have a choice on how they behave online (or choose not to engage at all). So you could choose to be nice (or at least not a jerk); I’m pretty sure you wouldn’t get people writing to your employer complaining. I’d argue that if you know you’re sometimes a jerk, it’d be less stressful for you and others if you didn’t bring that energy online.
Sure, there is a choice. It's rarely, if ever, been stressful for me though, and I value being who I am, for my own reasons, as a strength and not a weakness. I always try to play by the moderation rules as best I realistically can. Some of what I've written online has gotten me opportunities it wouldn't have if I'd been more hesitant.
My point is that if you have a good track record, the gap between what you maintain online vs. IRL matters less to people than you'd maybe think, as long as you are being true to yourself. I'm an elder millennial though, so that's always been the case online for me, and I don't think I often get out of pocket online anyway.
Maybe that won't be the case in the future. I could write a lot more than I'd care to publicly about personal and implied threats I've received based on my writings, but caving to that would betray my own values, and I choose to consume the web how I choose, knowing the possible consequences - plus moderation standards and what counts as "rude" drastically differ among platforms.
This really strikes a chord with me. Adding on to this: I believe much the same way, but I'd argue I might be nicer online than offline, because I'm better able to control my emotions when I give things more thought.
Because I don't really appreciate flame wars, and when one looms, I like to take some time to find common ground and just have a respectful discussion when possible.
This approach is harder to pull off IRL because those moments are spontaneous, and it requires significantly more discipline to control one's emotions within seconds rather than minutes, but it's something I think I can work on as well.
But I would say that, aside from that, most of my comments are pretty spontaneously written. I frame it as a question of being honest with myself; I think I am mostly the same IRL and online.
Another point: such forums also act as a journal for my future self to read. I try to write comments in such a way that, in the future, I can read them and accurately remember what I was thinking during the days I wrote them, for self-retrospection.
Edit: Although now that I think about it, there are definitely some subtle differences between how I am online vs. IRL. I'd still say my accounts are pretty authentic, fwiw, and I'm happy with my authenticity online, but a part of me definitely worries about every comment being permanently available.
As someone who gets dopamine hits from downvotes on HN, I approve of your behavior!
>just be yourself basically
Yea, it is boring when everyone is the same. I would rather have a rude but interesting world (even if I might not survive long in one) than a nice, boring one.
This is very important: you don't know what cancel culture will look like in 20 years.
I like to use the example of a guy who wore blackface at a party back in the 2000s. Although reprehensible, it wasn't treated as common-sense racism back then. Today society sees it as completely unacceptable.
Eventually that guy became prime minister of Canada, and things went pretty badly when that photo surfaced decades later.
Is it fair to judge someone's actions through the lens of a different culture? When popular opinion turns, people won't care about historical context.
I think people forget that before about the 2010s (plus or minus, depending on who and where), those sorts of overt bigotry were considered a "solved" problem. Things were looking up, and you and your buddies dressing up as Klansmen for Halloween was mocking the Klansmen more than anything else.
Depends on what you want to say. It can be safer to say something directly to someone's face than online because it is transient and generally does not involve random passers-by.
I am not going to give examples, because I don't want them to be pinned on me as my views, but I'm sure most of us have enough imagination to come up with them.
I think the problem with this, especially amongst younger people, is that having spent so much time online, they don't know where to draw this line anymore.
> I operate by a simple, general rule - I don't often say anything online I wouldn't say directly to someone's face in real life.
I think this isn't enough for the digital age, simply because "comments you'd say to someone's face" can compromise you on the internet.
Some dirty joke, gossip or whatever you tell a friend, if posted online, could come back to bite you in the ass in the dystopian future, lose you your job, or worse.
I think it’s naive to assume the private companies selling these services will know, let alone care, let alone disclose when their black box models botch things like this. The companies currently purporting to provide this exact service to HR departments for hiring decisions clearly didn’t let that stop them.
Not even the most extreme LGBT activist would accuse people who used the name Ellen Page in 2019 of having somehow been insensitive for failing to have a crystal ball. That is as absurd as it sounds. At most someone might be asked to change the name if they’re actively republishing the material in question.
Your point may be more valid when it comes to political attitudes, in cases where the issues were known at the time but the Overton window has shifted since.
Well, how about "abortion is legal" to "abortion is murder"? Possible to see this coming, but I know doctors in NY who are now afraid to travel to Texas.
How about DEI initiatives as good things in 2024 and a mark of evil in 2025? Lots of people were fired because in 2024 their boss told them to work on DEI and they did what their boss told them to do. Turns out this was a capital offense.
I am not commenting on your specific example of DEI, but I want to make the general point that you are always responsible for what you do, regardless of whether you were told to do it by your boss, or commanding officer, or whoever.
So again, I don't care about the specific example you used but if something is 'in fashion' and you go along with it, including at work, then you are ultimately responsible for that choice. Because it is always a choice, including being a hard choice that results in you losing your job.
But working on DEI on your boss' orders in 2024 wasn't reprehensible, any more than bringing your boss a cup of coffee was.
The point is that the shift in what is considered "a capital crime" is arbitrary, this is not the Nuremberg trials. You cannot protect yourself by being a decent person, whatever you do today can be a crime tomorrow, and AI can assist those looking for your flaws.
This term itself is an example of what this thread is talking about. Are you aware that some people now consider this to be a racist term? It’s a reference to the disenfranchisement of black voters in America.
This is easy. Have your own standards based on your own reason, and navigate whatever arbitrary standards the lowest-common-denominator majority of society cooks up from time to time.
>lawmakers can create new laws which can not be applied retroactively
Still a courtesy:
Background: Mary Anne Gehris was born in Germany and came to the United States around age 1, growing up entirely in the U.S. as a lawful permanent resident (green card holder).
The Incident: In 1988, during a quarrel over a man, Gehris pulled another woman's hair. She was charged with misdemeanor battery. No witnesses appeared in court, and on the advice of a public defender, she pleaded guilty. She received a one-year suspended sentence with one year of probation.
Immigration Consequences: Years later, under the **Illegal Immigration Reform and Immigrant Responsibility Act of 1996** (IIRIRA)—enacted during the Clinton administration but actively enforced during the Bush Jr. administration—her misdemeanor battery conviction was classified as an "aggravated felony" under federal immigration law. This made her deportable despite having no subsequent criminal record, being married to a U.S. citizen, and having a U.S. citizen child.
Outcome: Gehris avoided deportation when the Georgia Board of Pardons and Paroles granted her a pardon in March 2000, which removed the immigration ground for her removal.
Source Coverage:
The story was detailed in Anthony Lewis's New York Times columns:
"Abroad at Home: 'This Has Got Me in Some Kind of Whirlwind'" (January 8, 2000)
These columns highlighted how IIRIRA's broad definition of "aggravated felony" swept up many long-term permanent residents with minor, often decades-old convictions, separating families and deporting people who had lived nearly their entire lives in the United States.
The Gehris case became a frequently cited example in immigration advocacy and legal scholarship about the harsh consequences of mandatory deportation provisions for lawful permanent residents.
That we identify social media as "tech" is very strange.
Yes, they have a lot of servers. But that isn't their core innovation. Their core innovations are the constant expansion of unpermissioned surveillance; the integration of dossiers correlating people's circumstances, behavior, and psychology; and the incentivizing of addictive content creation (good, bad, and dreck) with the massive profits they obtain when they can use it as the delivery vector for intrusively "personalized" manipulation, at the behest of the highest bidder, no matter how sketchy, grifty, or dishonest.
Unpermissioned (or dark-patterned, deceptive, surreptitious, or coercively permissioned) surveillance should be illegal. It is digital stalking, used as leverage against us, and to manipulate us, via major systems spread across the internet.
And the fact that this funds infinite pages of addictive content (an extremely convenient substitute for boredom), doing neither individuals nor society any good, is a mental health and societal health concern.
Tech scaling up conflicts of interest is not really tech. It's personal information warfare.
I didn’t say I hated technology, generally— I said I hate what the industry has morphed into in the US. What is or isn’t tech is immaterial. All of the odious things you listed are things that the ‘tech industry’ does, largely unquestioned, these days. Frankly, it’s sickening.
Except noting that it is crazy that we accept the framing of "tech firm" for what are really "psychology engineering" firms, simply because they use tech.
Their use of tech is only perceived as more glamorous than that of companies addressing far greater technical challenges because they are making crazy profits. Meanwhile, the only problem they alleviate with any tech ambition is making more money for themselves, through centralizing ad venues (maximum ad revenue extraction, with a blind eye to scams and other dark marketers) and externalizing social damage (maximum psychological manipulation).
The negative downstream impacts of all this value extraction are many, including the vast sums of money being paid to attention-hacking social influencers. This destructive army is directly funded by social media, whose alibi is they don't want to be censors. But they are not neutral, as that framing would imply. They are very actively financing the dreck!
A huge amount of Western society, and the way we run institutions, is based on pretending everything meets some quasi-Victorian moral standard: everything is proper, everyone consents to and supports how everything runs, and everything is fine and dandy. That is very much not the case, and people put up with a lot of it because they have no better option.
In light of that, what I see happening in the short term is that every institution will start screwing people based on information that basically doesn't matter. That's more or less what they're already set up to do with such information, except they only act in exceptional cases, since those are the cases in which the information makes it back to them.
Imagine some business owner opening a new location, some social worker renewing their license, some civil engineer creating plans on someone's behalf. All those people need to deal with institutions that in the "normal" case pretend to not have large discretionary components in order to get the public to put up with them, but do in practice have such ability. Now say those institutions pay for some LLM based "who am I dealing with" service that finds everyone's pseudonymous posts and whatnot.
Well, all of these people wind up getting the runaround, because even though they do fine work that meets the rules, knowing how the sausage is made has left them jaded and given them opinions that make the institutions they have to deal with want to screw them. The business owner gets the runaround because it turns out he believes the institutions he's seeking permission from are a corrupt racket whose members ought to be hanged from the overpass. The social worker gets denied because their career has turned them into a "defund it all, and when faced with real consequences most of these people will shape up" type. The civil engineer's plans get rejected, and he has to go around in circles, because he's been posting about how, in light of what well-funded corporations can get approved and the impact thereof, it's unconscionable what they try to enforce upon individuals, and engineers ought to pencil-whip anything that isn't clearly F-ed up.
And so, all these people have to waste time and probably a low five-digit sum of money fighting the BS. This would perhaps be fine if their conduct were so egregious it made it back to the institutions on its own (say, a doctor preaching quackery on YouTube may get his license yanked if he amasses such a following that the board hears about it; that's the kind of thing institutional discretion was set up for), but no real social good is served by having an LLM dig up petty dirt on everyone. However, the LLM-service peddlers stand to make a buck. The institutions stand to make a buck while washing their hands of responsibility. The lawyers who'll fight on wronged parties' behalf stand to make a buck. And in the process they can all pretend society somehow benefits from this enhanced scrutiny when in fact they're just making mountains out of molehills.
Nobody gives a damn about the dying of a trade. People don’t want their house foreclosed on when they lose their income, or their cancer to kill them when they lose their health insurance, to move an elderly parent into a cheap shitty old folks home because they can’t afford home health care, or not be able to pay for their kid to go on that school field trip.
This would all be pretty fucking swell if the fundamental problems this could cause were even considered before hitting the gas. Instead, you’re going to have a shitload of people with ruined lives, but as a consolation prize, they can vibe code stuff! Wowee!
This very forum was founded by a VC who had great success recruiting 22 year olds with fancy diplomas to automate away the job of the guy who copied the numbers from the TPS report pdf attachment into excel.
I didn't see people on here ranting and taking up the flag of revolution for the TPS report excel paster guy's job that they were automating away with their web2 SaaS startup.
But wait- that guy himself was automating away the job of the lady who used to physically Xerox the TPS report and put it in the filing cabinet down the hall, but that lady was automating the job of the secretary who used to re-type all those TPS reports.
It's automatic filing cabinets all the way down, and ranting because your little slice of the filing cabinet automation machine has been made redundant is a bit silly.
You act as if this is the first time in history technology has wiped out a trade and made people scramble to sort out their lives. No, this has been happening over and over again throughout history and at a rapidly accelerating pace since the industrial revolution. Why should we ask it to slow down on behalf of programmers, when it never did for anybody else? Don't pretend you didn't know this was a possibility when you got into tech in the first place. You might have to downsize your life but humanity as a whole will be better off.
> You might have to downsize your life but humanity as a whole will be better off
This assumes that there will be other jobs to get. If AI replaces a large enough segment of office jobs then huge portions of the population will be unable to afford essentials like food and healthcare.
Walk into a staffing agency, ask for a job. They'll give you a list, pick the one that sounds the least disagreeable. Show up on time, every day, for at least two or three months and you'll convert the temp position into a full time job.
It's literally that easy; showing up reliably is a superpower that puts you in the 90th percentile of workers these days. The job probably won't be as comfortable as sitting in a comfortable chair in an air-conditioned office wiggling your fingers at a computer, but so what? Other people make it work, so can you. Man up.
Sorry, no jobs at the staffing agency; those are AI now. Feel free to walk into a Burger King, show up every day, and flip those fries for minimum wage until you die. Man up, brother, other people make it work. Sleeping on the street? Well, half the year it's not even snowing.
The last study I know of that measured the conversion rate from temp to perm employees showed about 15%-30% success… and that was well before the gig economy really took hold. So you’re looking at 4 or 5 temp placements to reliably get a probably underpaying job when very few white collar workers could survive long enough to make the end of a lease, or sell their house, while on a temp job salary. It’s a viable option for a 25 year old that could couch surf for a few months, but not for a mid-late career professional, or anyone with a family.
You can give any complex problem a simple answer if you ignore enough factors.
The industrial revolution resulted in children being worked to literal death, in people toiling 16 hours a day and living in cramped spaces without any windows and barely any hygiene. It brought suffering on a scale never seen before.
Organized labor movements managed to fight back and improve conditions somewhat but will we be able to do it this time?
Humanity will not profit from generative AI; tech billionaires will. It is based on the theft of the labor of millions of programmers, artists, and writers without any compensation. If left unchecked it will destroy the environment, any form of democracy, and our mental health. It will cause mass unemployment on a grand scale.
Could it be in theory used for good? Maybe. As the current political situation stands it will cause massive suffering for the majority of people.
Your glib dismissal of the real effects of those technological upheavals shows you haven’t actually looked into this. You should probably tamp down that smugness until you find out.
Did you think about it before you got into the tech industry? You should have, technology has been wiping out jobs since forever but you got into tech anyway. Live by the sword, die by the sword. Except you needn't actually die, just walk into a staffing agency and ask for a new job. I have done so before, and will do so again. I have prepared for what is to come, saw it coming 20 years ago and saw the imminence of it when GPT-2 was released. I have sympathy for other kinds of white collar professionals who never could have anticipated these kind of developments, but technologists? Give me a break. You knew, or should have known, that technological developments in this domain were likely.
Please stop pretending that this is only going to replace "tech workers". Do accountants "live by the sword"? Whose jobs did they replace? What about analysts, journalists, radiologists (one day, if not quite yet)?
And even within the realm of "tech", it's kinda bonkers to expect e.g. a firmware engineer to have some deep understanding of trends in ML/AI.
Altogether your #1 priority seems to be "bashing workers", the justification just being a matter of convenience.
Please stop pretending to read comments before replying to them:
> I have sympathy for other kinds of white collar professionals who never could have anticipated these kind of developments, but technologists? Give me a break.
I’m not in the tech industry anymore because in the battle of people who wanted to solve problems with software and money grubbing MBAs, the money grubbing MBAs have won. Now I’m a union machinist, and believe it or not, I’m concerned about the wellbeing of others. In manufacturing, companies are starting to face the consequences of shortsightedly selling out their workforce and are frantically clamoring to use the agonal breaths of its existing manufacturing industry knowledge base to breathe life into a new generation of workers. China becoming a manufacturing powerhouse wasn’t a foregone conclusion: we gave it to them in exchange for short-term profits. Our economy, national security, and the financial viability of a robust middle class is paying the price for their greed and arrogance.
The people running the tech industry can’t see the world past the end of this quarter, so they’ll never learn the lessons our society has learned many times over. Good luck. Unless you’re running a company, you’re going to need it. The soft, arrogant, whiny, maladroit white collar workers coming into the trades are pathetically ill-equipped to do actual work.
You've already done all that I can advise others here do, so congrats, I have nothing to criticize there. You've done it better than me actually, since you're unionized. As for soft white collar wimps washing out, people at the first job I had out of tech were taking bets if I'd show up for the second day, so don't think I don't know what you're talking about. I know it, I did it, and other people can do it too.
The problem with exporting manufacturing to China was that this country lost the ability to make shit. I don't think this maps at all to white collar jobs getting gutted by AI; the people who actually make things aren't the white collar workers who should be sweating. Society's paper pushers would effectively be a parasite class leeching off the hard labor of people who actually work, if not for the fact that white collar workers are (or have been) necessary to organize the logistics of everything that allows the people who actually do the work to do it. We are on the precipice of dramatic change, and I think we're going to see a radical revaluing across society.
None of this is even new. Computers and other business machines already came for the clerks and secretary pools before most people ITT were born. The loss of these careers was not even remotely a problem for society at large, completely unlike offshoring manufacturing.
Seems like there are a lot of resources being dumped into those data centers that will not be very useful. Saying it will all be worthwhile because we'll have the buildings and the modest power grid updates (which are largely paid for by taxpayers, anyway) feels like saying a PS5 is a good long-term investment because the cords and box will still be good long after the PS5 has outlived its usefulness.
The "PS5" analogy fails to account for how "useless" hardware often triggers the next paradigm shift. For decades, traditionalists dismissed high-end GPUs as expensive toys for gamers, yet that specific architecture became the accidental engine of the AI revolution.
And you imagine these incredibly expensive-to-operate, environmentally damaging, highly specialized, years-outdated GPUs will trigger some sort of technological revolution that won’t be infinitely better served by the shiny new GPUs of the day that will not only be dramatically more powerful, but offer a ton more compute for the amount of electricity used?
The AI use of GPUs didn’t stem from a glut of outdated, discarded units with nearly no market value. All of those old discarded GPUs were, and still are, worthless digital refuse.
The closest analog i can think of to what you’re referring to is cluster computing with old commodity PCs that got companies like Google and Hotmail off the ground… for a few years until they could afford big boy servers and now all of those, and most current PCs on the verge of obsolescence, are also worthless digital refuse.
The big difference is that Google et al. chose those PC clusters because they were cheap commodity parts right off the bat, not because they were narrowly scoped specialty hardware that collectively cost hundreds of billions of dollars.
Your supposition fails to account for our history with hardware in any reasonable way.
Focusing exclusively on the physical decay and replacement cycle of hardware is a classic case of tunnel vision. It ignores the fact that the semiconductor industry’s true value lies in the evolution of manufacturing processes and architectural design rather than the lifespan of a specific unit. While individual chips eventually become obsolete, the compounding breakthroughs in logic and efficiency are what actually drive the technological revolution you are discounting.
Tunnel vision is ignoring the astonishing amount of money and environmental resources our society is dumping into these very physical, very temporarily useful chips and their housing because of... what we learn by doing it. We could have dumped 1/100th of that money into research and been further along.
This isn't a normal tech expenditure; the scale of it threatens the economy in a serious way if they get it wrong. That's 401ks, IRAs, pension plans, houses foreclosed on, jobs lost, surgeries skipped... If we took a tiny fraction of this race to hypeland and put it towards childhood food insecurity, we could be living in a fundamentally different-looking society. The big takeaway from this whole ordeal has nothing to do with semiconductors: it's that rich guys playing with other people's money, singularly focused on becoming king of the hill, are still terrible stewards of our financial system.
Dismissing massive capital expenditure as "hypeland" ignores the historical reality that speculative bubbles often build the physical foundation for the next century. The Panic of 1873 saw a catastrophic evaporation of debt-driven capital, yet the "worthless" railroads built during that frenzy remained in the ground. That redundant, overbuilt infrastructure became the literal backbone of American industrialization, providing the logistics required for a global economic shift that far outlasted the initial financial ruin.
Divorcing research from "learning by doing" is a recipe for a bureaucratic ivory tower. If you only funnel money into pure research without the messy, expensive, and often "wasteful" reality of large-scale deployment, you end up with an economy of academic metrics rather than industrial power.
The most damning evidence against the "research-only" model is the birth of the Transformer architecture. It did not emerge from an ivory tower funded by bureaucratic grants or academic peer-review cycles; it was forged in the fires of industrial practice.
History shows that a fixation on immediate social utility or "rational" cost analysis can be a strategic trap. During the same era, Qing Dynasty bureaucrats employed your exact logic, arguing that the astronomical costs of industrialization and rail were a waste of resources better spent elsewhere. By prioritizing short-term stability over "expensive" technological leaps, they missed the industrial window entirely. Two decades later, they faced an industrialized Japan in 1894 and suffered a total collapse. The "waste" of one generation is frequently the essential infrastructure of the next.
> We're still 6-12+ months away from a "killer" AI product. OpenClaw showed what's possible-ish, but it breaks half the time, eats tokens like crazy, and can leak all kinds of secrets.
If you replace OpenClaw with any number of other hot LLM products/projects, I’ve been hearing that same exact sentiment for numerous 6-to-12-month periods. I’d argue we have no idea how long it’s going to be, but it’s probably not very soon.
Argghhh! When all ye got is a 300 baud connection and an ASR-33, then ye be thanking your lucky stars for ed! And pray that the ribbon ain't worn out, and that the paper tape don't jam!
A pox o' chads on your house, ya mewlin' landlubber!
Considering anybody with a noggin is going to separate the SQL into its own module or whatever, rather than throwing straight inline SQL at the database wherever you use it, you’re hardly less likely to have things like accidental writes anyway. This is clearly someone who fell in love with Postgres, felt that ORM abstractions diluting the Postgres goodness were bad, and then ran some mental experiments to enumerate all the theoretical ways ORMs suck.
I’m glad you’re head-over-heels in love with Postgres. It’s really cool, and I’ve occasionally had projects that really benefited from it, but most of those incredible features just aren’t useful for run-of-the-mill projects. Learning how to profile your ORM queries is a lot easier than maintaining a bunch of code from a different language embedded in your code base. If you’re writing articles about Postgres, you probably have no idea how much of a PITA that context switch is in practice. It’s funny how gaining expertise in something can make it harder to understand why it’s useful to other people, and how they use it.
There are projects, like SQLC, that cover most of the perceived advantages of ORMs, without the downsides.
One of those downsides, in my opinion, is that they hide the very implementation details one needs to understand in order to debug them.
Let me just say that I wrote my first (professional) SQL queries about 25 years ago and at various points since have worked extensively with Postgres, a bit less so with Oracle, and occasionally with MySQL and MSSQL. (And also some of the JSON object store databases before switching to Postgres for that stuff.) The only ones I’ve used ORMs with are Postgres and MySQL.
SQLC does not address most of the perceived advantages of ORMs. Sure, it addresses some of the concerns of hand-writing and sending SQL to databases from various languages, but that’s not what most people I’ve spoken to in the past couple of decades most valued about ORMs. What most projects really need databases for is some place to essentially store context-sensitive variable values. Like what email address to send something to if the user ID is 12345. I’ve never, ever had to debug an ORM’s SQL when doing things like that. Rarely have I needed to with more complex chains of filters or whatnot, and that usually involved taking a slightly different approach with the given ORM tools rather than modifying them or writing my own SQL. When I’ve had more complex needs that required using some of the more exotic Postgres features, writing my own queries has been trivial. It’s of paramount importance for developers to understand the frameworks and libraries they’re using, ORMs included, because those implementation details touch everything in your code. Once you understand that, the code your ORM composes to make your queries is an IDE-click away.
Not having to context switch between writing SQL and whatever native language you’re working in, especially for simple tasks, has given back so, so much more of my time and mental space than being exactly 100% sure that my code is using that left join in exactly the way I want it to.
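To make the "trivial lookup" case above concrete, here's a minimal sketch in Python using the stdlib `sqlite3`. The `Users` class is a toy stand-in for an ORM-style accessor (it's not any real ORM's API), and the table/column names are made up; the point is just that the common case is one method call, no hand-written SQL at the call site:

```python
import sqlite3

class Users:
    """Toy stand-in for an ORM-style accessor; illustrative only,
    not SQLAlchemy's or Django's actual API."""

    def __init__(self, conn):
        self.conn = conn

    def get(self, user_id):
        # The SQL lives here, in one place, instead of at every call site.
        row = self.conn.execute(
            "SELECT id, email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return {"id": row[0], "email": row[1]} if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES (12345, 'a@example.com')")

users = Users(conn)
print(users.get(12345)["email"])  # the "what email for user 12345" case
```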
Second, an ORM is just a translation layer: it does not compile to some binary format the database understands; it gets translated to SQL, which is the standard (minus vendor extensions). SQL is ubiquitous. It’s the closest thing to a lingua franca that we have in software engineering. And I’m going to be blunt here and say that purposefully avoiding learning and understanding SQL when it is part of the job should disqualify anyone from that job.
I’ve been around for some decades too, and to me, ORMs haven’t worked out. They are vastly different from one another, and they often create issues that are clear as day when the query is written as plain SQL. If I go from a TypeScript codebase to Python to Java, then, according to you, I should learn the intricacies of Sequelize, SQLAlchemy, and JPA/Hibernate, instead of just SQL. And granted, different SQL dialects have different quirks, but more often than not, the pitfalls are more apparent than when switching between ORMs.
And I can guarantee that someone equipped with a good foundation in SQL will be more successful debugging a Sequelize based application, than someone who has relied on SQLAlchemy.
What most people I know and have worked with need is types. Types help glue SQL and any other language together. If I can run any SQL query and the result comes back as an object, I’m good.
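That "query in, typed object out" idea can be sketched in a few lines of stdlib Python. This is an assumption about what the commenter means, not any particular library's API; the `UserRow` dataclass and table names are hypothetical:

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserRow:
    id: int
    email: str

def query_one(conn, sql, params=()) -> Optional[UserRow]:
    """Run arbitrary SQL; map the first row into a typed object."""
    row = conn.execute(sql, params).fetchone()
    return UserRow(*row) if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'x@example.com')")

# The SQL stays SQL; the result is a typed object, not a bare tuple.
user = query_one(conn, "SELECT id, email FROM users WHERE id = ?", (1,))
```

Tools like sqlc generate this mapping (and the column types) from the schema instead of leaving it hand-rolled.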
Now, if your point is that ORMs are OK for toying around, I may agree, but still, why would I go through that trouble when I already know SQL?
SQLC, for me, has been able to replace most of the cases I’d use an ORM for. It made most of the boilerplate of using plain SQL go away, I get type-safe responses, and it forces me to be more mindful of the queries I write.
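For anyone who hasn't used it: sqlc's input is plain SQL annotated with comments, from which it generates typed functions (in Go, its primary target). A minimal sketch of a query file; the table and query names here are made up:

```sql
-- name: GetUserEmail :one
SELECT email FROM users WHERE id = $1;

-- name: ListUsers :many
SELECT id, email FROM users ORDER BY id;
```

From this, sqlc emits functions like `GetUserEmail(ctx, id)` with parameter and return types derived from the schema, so you write SQL but call it like native code.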
In an app where we do use an ORM (Prisma), we sometimes have weird database spikes and it’s almost always an unintended heavy ORM query.
The only two things I miss in solutions like sqlc are dynamic queries (filters, partial inserts) and a way to add something to every query by default (e.g., always filtering by tenant_id).
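The dynamic-filter-plus-mandatory-tenant case can be papered over with a small query builder. A toy sketch in Python (values stay bound as parameters; in real code the column names would also need whitelisting, since they're interpolated into the SQL string):

```python
def build_select(table, tenant_id, filters=None):
    """Compose a SELECT that always includes the tenant predicate,
    plus optional equality filters. Column names are assumed trusted."""
    clauses = ["tenant_id = ?"]
    params = [tenant_id]
    for col, val in (filters or {}).items():
        clauses.append(f"{col} = ?")
        params.append(val)
    sql = f"SELECT * FROM {table} WHERE " + " AND ".join(clauses)
    return sql, params

sql, params = build_select("orders", 7, {"status": "open"})
```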