The first person who walked what became the final path made the initial decision. Everyone who agreed with that person and walked the same path compounded it. Everyone who crossed the field should have thought about what they were doing at the time.
Far too many people, and developers especially, are far more focused on absolving themselves of responsibility for anything bad than on putting the thought in at the beginning, and that's the root cause of a lot of problems.
But that person could not possibly have predicted the future actions of everyone else, and therefore it's a mockery of moral reasoning to hold them responsible.
> it's a mockery of moral reasoning to hold them responsible.
What are we holding them responsible for? Was something immoral done? They clearly made the first decision, for better or worse. Now, if there was a sign that said "No walking on the grass", they clearly violated something.
You need to read it in the context of onion2k's original post and how onion2k's post relates to the article. There is a question of responsibility in that context, whether we like it or not.
Yes, this is a metaphor for people building algorithms (or other systems) they don't fully understand that could have poor consequences for others / society. At least, that's how I understood the relevance of the metaphor.
Criticizing "people building algorithms (or other systems) they don't fully understand that could have poor consequences for others / society" implies there is a conceivable alternative. That is, that it is possible to build algorithms with a full understanding of all the negative consequences they could ever have for society. That seems obviously absurd to me, so the criticism is vacuous.
Er, no. The alternative is to not build those systems. The clear suggestion here is not to let algorithms make all the decisions, but to let people actively manage those funds. That might be generally less efficient (in terms of volume of trades and profit), but the assumption is that a total disaster (e.g. a flash crash) wouldn't happen with "slow" humans in the loop.
How is a flash crash a "total disaster"? It seems to me that just describing it that way indicates your mind is addled by technology, since before modern times nobody would expect continuous pricing of everything every millisecond of every day or think that everything was worthless because it wasn't being quoted appropriately for an instant.
Someone has to take it upon themselves to put a sign there saying, "STOP! Habitat Restoration in Progress. Please choose another path."
And that's just it. Humanity has carved a path that says, "Make as much money as possible with the least amount of effort."
These machines have taken something we already know is bad, the 90-day quarterly earnings cycle that had already obliterated our long-term thinking, and compressed it down to nanoseconds. And we want it even shorter.
We have machines moving virtual money around corporations that use money to move real material around the earth -- and beyond.
That is the path we are on right now in this very moment.
Who is going to put up a sign saying, "STOP! Humanity Restoration in Progress. Please choose another path."?
That's a stretch. I know you're enamoured with your synopsis, but realistically emergent systems aren't designed or thought about. They might be tweaked.
Dirt paths are created by people making the same choice when there is not enough difference in grass levels for someone to notice. It’s only after a significant number of people make the same choice that feedback occurs.
Other systems may or may not be created in a similar fashion.
realistically emergent systems aren't designed or thought about
I'm aware of how they work. I'm arguing that someone who builds a system is responsible for it even if they don't know how it works. Ignorance, even in the face of a system so complex that no person could ever understand the underlying causes of what it does, is not an excuse. No one should be able to hide behind complexity.
Developers must either build in protections against their systems going wrong or they shouldn't deploy them.
I want to agree with your thesis, but it's impossible to foresee every possible outcome. Leaky abstractions aside, bugs aside, misaligned incentives (the new "I was just following orders", as someone here immortally put it) aside, it is impossible to imagine a priori all the ways a certain outcome that is desirable now will be undesirable in the future.
Every step of the development ladder is fraught with possibilities for error and catastrophe. To quote James Goldman:
"It is too simple. Life, if it's like anything at all, is like an avalanche. To blame the little ball of snow that starts it all, to say it is the cause, is just as true as it is meaningless."
It's useful to distinguish between responsibility and accountability here. The algorithms may be responsible for a particular outcome but the people who commissioned them should be accountable.
What about turnover? If the buck stops at the CEO, what happens when that person moves on? Is their successor responsible for everything that went on prior? I'm not saying either way, just asking how that should work.
Is their successor responsible for everything that went on prior?
Yes.
That wouldn't even be a change to the current system. That's how it works now. If you take on the role of CEO and it turns out that years earlier the company did something terrible you will be expected to resign. It's one of the reasons they demand so much money.
Pretty much always if the incoming CEO can competitively negotiate and is a good hire.
Contrary to popular belief, the vast majority of career CEOs are good hardworking people. Like any high profile position, the outliers skew perception for everyone.
Let's take, for example, red-light camera software that is trained to recognize plate numbers and issue traffic tickets. The decision to design and deploy the software was made by a human. The decision to send you a ticket was not. This has nothing to do with complexity or being wrong; there is simply no human anywhere who made that specific decision.
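A minimal sketch of that scenario, with entirely hypothetical names (`recognize_plate`, `ran_red_light`, and `process_frame` are illustrative stand-ins, not a real API): the human decision is deploying the loop; each individual ticket just falls out of it.

```python
# Hypothetical sketch: no human decides any individual ticket.
# recognize_plate and ran_red_light stand in for a trained model
# and a signal-phase check; they are illustrative, not a real API.

def recognize_plate(image):
    """Stand-in for the trained plate-recognition model."""
    return "ABC-1234"  # placeholder result

def ran_red_light(frame_metadata):
    """Stand-in for the signal-phase check."""
    return frame_metadata.get("light") == "red"

def process_frame(image, frame_metadata):
    # A human decided to deploy this loop; no human decides this ticket.
    if ran_red_light(frame_metadata):
        return f"Ticket issued to {recognize_plate(image)}"
    return None
```

Once `process_frame` runs on every camera frame, "who decided to ticket this driver" has no answer at the level of individual decisions, which is the commenter's point.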
Right, so we just need to be smart enough (and have enough data) to centrally (and a priori) manage the collective and emergent action of millions of humans. Seems like a reasonable expectation.
I read in a self-help book that a doctor, as a child, used to like waking up really early on the first day it snowed and making a wild path in the snow, just for giggles.
Everybody else was just following his path through the snow.
Isn't that the equivalent of blaming the first single-celled organism that evolved a rudimentary flagellum for, as an example, our current climate crisis?
I was imagining desire paths in a park or something. It takes a lot of walks to wear down the grass, and probably hundreds before a path becomes visible.
Having walked in both long and short grass, I can tell you it largely depends on a number of factors, the largest seeming to be the amount of time passing between each person.
The path is optimized for A) where people enter and B) where they are going. If there is no purpose, a path is simply a path, and ascribing meaning to it is fruitless.
- Whoever gave cachet to certain characteristics that are present at specific points of the grass
- Whoever made it desirable or necessary that passersby reach a certain location at the periphery of the field
- Whoever set up restrictions to entry at certain locations around the periphery of the field
- Whoever made any changes to the field (including non-human intervention)
More generally, culture at large and historical precedent.
—
Imagine a green field of grass.
There are minor differences across the field, but some of these differences can be noticed: a slightly darker patch of grass, a slightly yellowed patch of grass, some stones of varying sizes, a section that is slightly raised from the rest of the field.
Around the field, other things can catch the eye: there's a road to one side, some trees of varying sizes, one currently in bloom.
Over the seasons, the profile of this grass changes. At some point there was a park bench to one side of the field; later, it was moved to another side. There was once a storefront near one side of the field.
You going to hold the ice cream man responsible when the path has a pothole in it? He'll claim a flow of people made it and it's not his responsibility. The fairy tale is useful because it focuses on one broad idea to the exclusion of the others.
Who is the man holding the smoking gun when algorithms collapse? The business owner is legally left with the responsibility, but finding out who programmed the algo to make the mistake is useful and vital.
Yes - agree. If you build a bridge (in many places) you have to sign off on the design - warranting that it is your professional opinion that it will do the job. To do this you normally need to be a chartered or certified engineer and you need professional insurance.
If the bridge collapses you get sued and your insurance pays out. You probably don't get more insurance.
This is not a perfect system, but we need several elements of it for algorithms. We need the ideas of tolerances, confidence intervals and analysis; certification of fitness for purpose must be limited and constrained to have value. We need professional certification that enables employers and the public to recognize experience and capability. People have confidence in pilots because of their uniforms, qualifications and the checks that they undertake (training, retraining, blood testing, medicals) on a regular basis. As we have seen with the 737 MAX, when these things break down there's trouble.
Except that paths are usually made because they're the shortest route. Taking the shortest path is not random; even if there is a curved paved path, people will cut across the grass to get the shortest route. So it's basically a pre-programmed algorithm in our brains saying we want to spend the least amount of time.
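The corner-cutting intuition is just geometry: the diagonal across the grass always beats the two legs of an L-shaped pavement. A quick sketch (the distances are assumed for illustration):

```python
# sqrt(a^2 + b^2) < a + b whenever both legs are non-zero, so the
# diagonal across the grass is always shorter than walking the corner.
import math

side = 50.0                        # metres per leg, an assumed figure
paved = 2 * side                   # walking around the paved corner
shortcut = math.hypot(side, side)  # cutting straight across the grass
print(f"paved: {paved:.1f} m, shortcut: {shortcut:.1f} m")
# paved: 100.0 m, shortcut: 70.7 m
```

Saving roughly 30% per crossing is a strong, consistent incentive, which is why the same shortcut gets chosen by walker after walker.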
A decision to follow the herd and a decision to take the easiest road are still decisions.
You might say that a person made a decision to break down blades of grass to create a path and that others chose to take a path already created, but they are still making choices that they are responsible for.
Imagine a small box where every day at 6 am, an instruction on paper is written and the groundskeeper checks it.
Imagine a network of voters who vote on the instruction to be placed in the box. Their wide variety of votes will be automatically distilled into the winning instruction.
The groundskeeper dutifully clears a path, shouts at people to stay on the path, fills in errant paths.
Finally there is one path.
Who made the decision?
I only bring this up to suggest that “emergent decisions” is a vapid framework for analyzing this. Every phenomenon involving the aggregation of different agents is an emergent one, whether it’s an emergent system of politics leading to regulated actions or an emergent system of market participants leading to a price.
This doesn’t distinguish a situation of high regulation from a situation of low regulation.
Clearly the most absolute answer is to have every individual track and identify their own commits and changes to the computer system in an authoritative manner. Then you must balance each catastrophic failure against the reasonable expectation that the programmer could have predicted the outcome. Then you effectively strip the license to program (in that company/industry at least) from the offending 'rogue traders' when they try to run away with billions of dollars built on exploiting said catastrophe and get caught.
Yes, but you could create the path just as you plant the grass and amuse yourself to no end at how sheep-like people really are.
Just because there's a path there, it doesn't mean it's optimal or good for the public at large.
People start walking on it. In all kinds of directions.
Slowly paths through the grass start to emerge.
Those paths attract more people to walk on those.
The stronger the path through the grass, the more people use it.
Finally, there is one clearly visible path through the grass everyone uses.
Who made the decision?
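The feedback loop in this thought experiment can be simulated in a few lines: walkers choose among candidate routes with probability proportional to how worn each route already is, so small, essentially random early differences compound into one dominant path. This is a Pólya-urn-style sketch; all parameters are arbitrary.

```python
import random

def worn_paths(num_routes=5, num_walkers=10_000, seed=42):
    """Each walker picks a route weighted by its existing wear."""
    rng = random.Random(seed)
    wear = [1] * num_routes  # routes start almost indistinguishable
    for _ in range(num_walkers):
        # The feedback step: worn routes are more likely to be chosen.
        route = rng.choices(range(num_routes), weights=wear)[0]
        wear[route] += 1     # and each choice wears the route further
    return wear

print(worn_paths())  # one route typically dominates; no one chose which
```

Run it with different seeds and a different route wins each time, which is the point: the outcome is decisive, but no individual walker decided it.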