No, it's worse than that. The city council very much implemented an anti-car (harassment) policy, to the point that car owners felt hounded by their own council's policies. It seriously wasn't a matter of "marginally less privileged".
Motorists are an easy scapegoat but without alternatives it's just political handwaving. And most people are motorists.
Take my city for example. I work in an office block around a 15 minute walk from the centre, which has free parking for employees. Monday this week the city announced that the land is now paid parking to the city effective immediately. When it was pointed out that they hadn't provided any of the necessary signage or machines for this, they decided it was illegal to park there at all, with fines and tow trucks for non-compliance. An email from them suggested "cycling or using public transport as the weather is nicer".
I cannot stress this enough. No warning, no compromise, no other use for this land, just an immediate draconian announcement.
It's very easy to call another group entitled if you're not one of them
> the city announced that the land is now paid parking to the city
what a strange way to put it... why didn't they just say that they are not using any more taxpayer money to finance your parking space? Land in a city is not "for free".
> It's very easy to call another group entitled if you're not one of them
I'll be totally honest in that I don't know what the arrangement was before, but that free parking was previously enforced by permits, so it's a reasonable assumption that it was not at the taxpayers' expense.
Your job in any political office is not to leave everything as-is and to cement yourself into that position, but to make marginal improvements, even if doing so costs you the next elections or inconveniences people (hopefully only temporarily).
Most of those marginal improvements can only be seen as something positive in retrospect, not while they're being made. While they're being made, they'll always be unpopular, as the voter base is usually not keen on defending the people that are currently in charge. That doesn't mean they won't show up in the next elections, just that they are quieter in the meantime.
in the ideal world maybe - but we don't live in the ideal world: most are trying to get re-elected, or elected to a higher office now that they have experience.
and even in the ideal world a great leader can do more in the next term if they get re-elected.
I don’t know that it’s a helpful distinction. A lot of people do it all - drive, walk, bike, and take public transit. Only in this kind of discussion do I see people declaring it a team you have to choose.
The starting point is anti-anything-but-a-car, so it's understandable that in the process of getting to any sort of parity you'd feel like it's "harassment".
It's like claiming getting rid of slavery is "harassment", because your unfair privileges are being taken back.
The rule of 3 is awful because it focuses on the wrong thing. If two instances of the same logic represent the same concept, they should be shared. If 10 instances of the same logic represent unrelated concepts, they should be duplicated.
The goal is to have code that corresponds to a coherent conceptual model for whatever you are doing, and the resulting codebase should clearly reflect the design of the system. Once I started thinking about code in these terms, I realized that questions like "DRY vs YAGNI" were not meaningful.
Of course, the rule of 3 is saying that you often _can't tell_ what the shared concept between different instances is until you have at least 3 examples.
It's not about copying identical code twice, it's about refactoring similar code into a shared function once you have enough examples to be able to see what the shared core is.
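As a hypothetical sketch of this (the functions and field names are invented for illustration): with only two similar pieces of logic, it's tempting to guess at the wrong abstraction; a third instance makes the shared core visible.

```python
# Three report helpers that accumulate the same "filter by status,
# sum a field" shape. After the third example, the shared core
# (a status to filter on plus a field to total) is clear.

def total_refunds(orders):
    return sum(o["amount"] for o in orders if o["status"] == "refunded")

def total_shipped_weight(orders):
    return sum(o["weight"] for o in orders if o["status"] == "shipped")

def total_pending_amount(orders):
    return sum(o["amount"] for o in orders if o["status"] == "pending")

# The shared concept, extracted once three examples exist:
def total_field(orders, status, field):
    return sum(o[field] for o in orders if o["status"] == status)
```

With only the first two functions, you might have abstracted over the wrong axis (say, hard-coding `"amount"` and parameterizing only the status); the third example shows both the status and the field vary.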
But don’t let the rule of 3 be an excuse for you to not critically assess the abstract concepts that your program is operating upon and within.
I too often see junior engineers (and senior data scientists…) write code procedurally, with giant functions and many, many if statements, presumably because in their brain they’re thinking about “1st I do this if this, 2nd I do that if that, etc”.
I agree. And I think this also distills down to Rob Pike’s rule 5, or something quite like it. If your design prioritizes modeling the domain’s data and shaping algorithms around that model, it’s usually easy to tell whether some “duplication” is operating on shared concepts or merely following a similar pattern. It may even help you refine the data model itself when confronted with the question.
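A minimal, invented illustration of that data-first style (the domain and rates are made up): shaping the code around the data often turns a chain of ifs into a lookup over a model.

```python
# Procedural style: the domain knowledge is buried in control flow.
def rate_procedural(region):
    if region == "domestic":
        return 5.0
    elif region == "eu":
        return 9.0
    elif region == "overseas":
        return 15.0
    else:
        raise ValueError(f"unknown region: {region}")

# Data-model-first style: the rates *are* the model, and the
# algorithm collapses to a lookup. Adding a region becomes a
# data change rather than a code change.
RATES = {"domestic": 5.0, "eu": 9.0, "overseas": 15.0}

def rate_from_model(region):
    try:
        return RATES[region]
    except KeyError:
        raise ValueError(f"unknown region: {region}")
```

Once the data model exists, "duplication" questions often answer themselves: two if-chains that consult the same table are clearly the same concept.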
> If two instances of the same logic represent the same concept, they should be shared. If 10 instances of the same logic represent unrelated concepts, they should be duplicated.
Worth pointing out a success story: all ACM publications have gone open access starting this year[1]. Papers are now going to be CC licensed, with either the very open CC-BY[2] license or the pretty restrictive (but still better than nothing!) CC-BY-NC-ND[3] license.
Computer science as a discipline has always been relatively open and has had its own norms on publication that are different from most other fields (the top venues are almost always conferences rather than journals, and turn-around times on publications are relatively short), so it isn't a surprise that CS is one of the first areas to embrace open access.
Still, having a single example of how this approach works and how grass-roots efforts by CS researchers led to change in the community is useful to demonstrate that this idea is viable, and to motivate other research communities to follow suit.
That works nicely if your institution participates in ACM Open (no such institution in my country, and no, my country is not in the list of lower-middle income countries).
The combination of 'publish or perish' with 'pay for publication' and 'miserly grant money' is deadly.
While in theory the idea is nice, in practice this is a problem (maybe not in most rich countries, but here definitely).
Under the old model, you could always get the article you were interested in, even if it was behind a paywall. Hence, perversely, the old model (which I hate, for reasons well explained in the original post) worked better for me. :-(
> you can't spec out something you have no clue how to build
Ideally—and at least somewhat in practice—a specification language is as much a tool for design as it is for correctness. Writing the specification lets you explore the design space of your problem quickly with feedback from the specification language itself, even before you get to implementing anything. A high-level spec lets you pin down which properties of the system actually matter, automatically finds inconsistencies and forces you to resolve them explicitly. (This is especially important for using AI because an AI model will silently resolve inconsistencies in ways that don't always make sense but are also easy to miss!)
Then, when you do start implementing the system and inevitably find issues you missed, the specification language gives you a clear place to update your design to match your understanding. You get a concrete artifact that captures your understanding of the problem and the solution, and you can use that to keep the overall complexity of the system from getting beyond practical human comprehension.
A key insight is that formal specification absolutely does not have to be a totally up-front tool. If anything, it's a tool that makes iterating on the design of the system easier.
Traditionally, formal specifications have been hard to use as design tools partly because of incidental complexity in the spec systems themselves, but mostly because of the overhead needed to not only implement the spec but also maintain a connection between the spec and the implementation. The tools that have been practical outside of specific niches are the ones that solve this connection problem. Type systems are a lightweight sort of formal verification, and the reason they took off more than other approaches is that typechecking automatically maintains the connection between the types and the rest of the code.
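One lightweight way to keep that spec-implementation connection alive, sketched here in plain Python with invented names: treat the spec as an executable model and replay the same operations against both it and the implementation. (This is a property-testing-style sketch, not a full specification language.)

```python
import random

# Implementation under test: a fixed-capacity ring buffer that
# drops the oldest element when full.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0
        self.size = 0
        self.capacity = capacity

    def push(self, x):
        tail = (self.head + self.size) % self.capacity
        self.buf[tail] = x
        if self.size == self.capacity:
            # Buffer full: overwrite and advance past the oldest item.
            self.head = (self.head + 1) % self.capacity
        else:
            self.size += 1

    def items(self):
        return [self.buf[(self.head + i) % self.capacity]
                for i in range(self.size)]

# Executable "spec": the buffer should always hold the last
# `capacity` items pushed, in order. A plain list states this
# directly, with none of the ring arithmetic.
def spec_push(model, x, capacity):
    model.append(x)
    return model[-capacity:]

def check(seed, capacity=4, steps=200):
    rng = random.Random(seed)
    impl, model = RingBuffer(capacity), []
    for _ in range(steps):
        x = rng.randrange(100)
        model = spec_push(model, x, capacity)
        impl.push(x)
        assert impl.items() == model, (impl.items(), model)
    return True
```

The model is trivially correct by inspection; the randomized replay keeps the tricky implementation honest against it, which is the same connection a typechecker maintains between types and code, just at a coarser grain.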
LLMs help smooth out the learning curve for using specification languages, and make it much easier to generate and check that implementations match the spec. There are still a lot of rough edges to work out but, to me, this absolutely seems to be the most promising direction for AI-supported system design and development in the future.
I mean, look at all the startups that succeeded despite being complete shitshows behind the scenes... the baseline for leadership, organization, coordination or, hell, execution for a startup to succeed isn't exactly high either.
Living with ADHD also increases your chances of getting into a car accident substantially. I can't find the numbers now, but the increase is non-trivial and there are some clear mechanisms (inattention, impulsivity and risk-seeking behaviors).
ADHD is a big part of the reason I don't drive. I'm lucky enough to live in Berkeley which is very walkable with decent transit, and I would hesitate to move anywhere more car-oriented exactly because I have ADHD.
Yeah, ADHD does affect one's ability to drive safely. On the other hand, I've been driving for over 50 years. I've had one accident that I was responsible for. I've been involved in five other accidents, each of which was another driver backing into my parked car.
I think the reason I've been hypervigilant about safe driving practices is that my father owned a rigging company, and I was driving forklifts and stake trucks in the yard from about 13. I understood the impact a vehicle could have on other things, people included. Living in that world from about age nine on teaches you to be obsessive about properly securing a load (molding machines, air handling units, lathes, etc.).
I've often thought people would be better drivers if they started their driving experience with the motorcycle safety training course curriculum and drove for a year on motorized two wheels, taking up the lane and keeping up with traffic.
When I was younger I was lucky enough to live somewhere rural where I got into a couple of single car accidents that I walked away from. Now my ADHD hyper focus is super attentive when driving.
There's a lot of area on the spectrum between where we are today and "sugary beverages are all banned".
For example, Starbucks could limit the sizes it sells and advertises—you'd still be able to have as much sugar as you would like by buying multiple drinks, but it would raise the activation energy needed to do that. Making the healthier choice the path of least resistance works wonders.
It's not inevitable, it's just poor leadership. I've seen changes at large organizations take hold without being crudely pushed top-down, and you'd better believe I've seen top-down initiatives fail, so "performance management" is neither necessary nor sufficient.
> crystal clear software development plan and the exact know-how to implement it
This is simply not how expert programmers work. Programming is planning, and programming languages are surprisingly good planning tools. But, of course, becoming an expert is hard and requires not only some general aptitude but also substantial time, experience and effort.
My theory is that this is a source of diverging views on LLMs for programming: people who see programming languages as tools for thought compared to people who see programming languages as, exclusively, tools to make computers do stuff. It's no surprise that the former would see more value in programming qua programming, while the latter are happy to sweep code under the rug.
The fundamental problem, of course, is that anything worth doing in code is going to involve pinning down a massive amount of small details. Programming languages are formal systems with nice (well, somewhat nice) UX, and formal systems are great for, well, specifying details. Human text is not.
Then again, while there might be a lot of details, there are also a lot of applications where the details barely matter. So why not let a black box make all the little decisions?
The question boils down to where you want to spend more energy: developing and articulating a good conceptual model up-front, or debugging a messy system later on. And here, too, I've found programmers fall into two distinct camps, probably for the same reasons they differ in their views on LLMs.
In principle, LLM capabilities could be reasonably well-suited to the up-front thinking-oriented programming paradigm. But, in practice, none of the tools or approaches popular today—certainly none of the ones I'm familiar with—are oriented in that direction. We have a real tooling gap.
> My theory is that this is a source of diverging views on LLMs for programming: people who see programming languages as tools for thought compared to people who see programming languages as, exclusively, tools to make computers do stuff. It's no surprise that the former would see more value in programming qua programming, while the latter are happy to sweep code under the rug.
i'd postulate this: most people see llms as tools for thought. programmers also see llms as tools for programming. some programmers, right now, are getting very good at both, and are binding the two together.
Most people? I'd suggest few people see LLMs as tools for thought and more that they're slop machines being cynically forced upon workers by capitalists with dollar signs in their eyes. Over and over and over again we see real-world studies showing that the people far more excited about genAI are managers than the people doing the actual work.