> While many seemed to want to use it for personal productivity things like connecting Gmail, Slack, calendars, etc., that didn’t seem interesting to me much. I thought why not have it solve the mundane boring things that matter in open-source scientific codes and related packages.
This, here, is the root of the issue: "I'm not interested in using an AI agent for my own problems, I want to unleash it on other people's problems."
The author is trying to paint this as somehow providing altruistic contributions to the projects, but you don't even have to ask to know these contributions will be unwelcome. If maintainers wanted AI agent contributions, they would have just deployed the AI agents themselves. Setting up a bot on behalf of someone else without their consent or even knowledge is an outlandishly rude thing to do -- you wouldn't set up a code coverage bot or a linter to run on a stranger's GitHub project; why would anyone ever think this is okay?
This is the same kind of person who, when asked a question, responds with a copypasted ChatGPT reply. If I wanted the GPT answer, I would have just asked it directly! Being an unsolicited middleman between another person and an AI brings absolutely no value to anybody.
I think this was misdirection by the author, to steer people away from using the AI's (early?) contributions to unmask their identity via personal repos. Or, if they actually did this, it was an opsec procedure - nothing altruistic about it. If GitHub wanted to, or was ordered to, unmask Rat H. Bun's operator, they could.
There's a difference in effort of several orders of magnitude between "change a setting so the compiler doesn't emit multiplies" and "convince GCC/LLVM to add a special-case flag for one very rare chip, or maintain your own fork". The vendor's workaround is the "ideal" solution, but disabling multiplies is a lot more practical if you don't need the performance.
They also mention in the next sentence that they adopted the "correct" workaround (by providing a multiplication library function for the compiler to call).
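For a sense of what that "correct" workaround looks like: when hardware multiply is unavailable, the compiler lowers `a * b` into a call to a runtime routine (libgcc's 32-bit unsigned one is `__mulsi3`). A minimal shift-and-add version might look like the sketch below -- the function name and exact shape are illustrative, not the vendor's actual code:

```c
#include <stdint.h>

/* Software multiply of the kind a compiler calls when it is told
   not to emit MUL instructions. Classic shift-and-add: at most 32
   iterations, using only adds and shifts. Overflow wraps mod 2^32,
   matching hardware multiply semantics for uint32_t. */
uint32_t soft_mul(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1)        /* lowest bit of the multiplier set? */
            result += a;  /* ...then add the shifted multiplicand */
        a <<= 1;          /* shift multiplicand up one bit */
        b >>= 1;          /* consume one bit of the multiplier */
    }
    return result;
}
```

A real libgcc routine does essentially this (plus sign handling for the signed variants); the cost is proportional to the operand width, which is exactly the performance you give up by disabling the hardware multiplier.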
The company selling the chip can create a fork. They are typically the ones providing all of the SDKs you use to program it, flash it, debug it, etc.
The idea is well-intentioned, but implementing it by making drivers try to parse arbitrarily complex conditionals while driving is unwise.
There's a sign near my house for a school zone with a reduced speed limit that used to have conditions similar to the GP's example (though not quite as bad). But they recently attached a yellow light to the top of the sign and changed the condition to "when flashing." That's a much more effective solution.
For what it's worth, personally I thought your writing style and sense of humor were excellent, and my favorite part of the post.
I also appreciate you giving me an updated copy of the "Microsoft is a corporation" meme, as the one I have downloaded seems to become outdated each time a new Windows update comes out.
Thank you! Sadly, the one I posted is outdated too. I tried looking around for the newest one because I had seen it before, but couldn't find it anymore. The list was ~30% longer.
DownDetector gives you a graph of the number of people who googled "is XYZ service down" and clicked on a DownDetector link. It's a useful metric, but it also has error, because sometimes users blame the wrong service.
In this case, both AWS and Cloudflare had high-profile outages within the past few months. So a bunch of people tried to check their Twitter, got an error, and said "huh, I wonder if AWS is down again". Or during yesterday's Verizon outage, DownDetector also showed spikes on AT&T and T-Mobile, presumably from people who forgot what cellular provider they had, were roaming, or maybe were trying to call someone on another network.
It also doesn't help that they normalize the scale of their graphs on the front page. If you click them, you can see that 75k people googled "is X down", while only 200 people googled "is AWS down".
I didn't realise that - thanks for the info. I actually found out just from a breaking news alert on the BBC which is unusual as I usually see tech news elsewhere first.
> When a question gets closed before an answer comes in, the OP has nine days to fix it before it gets deleted automatically by the system.
One of the bigger problems with the site's moderation systems was that 1) this system was incredibly opaque and unintuitive to new users, 2) the reopen queue was almost useless, leading to a very small percentage of closed questions ever getting reopened, and 3) even if a question did get reopened, it would be buried thousands of posts down the front page and answerers would likely never see it.
There were many plans and proposals to overhaul this system -- better "on hold" UI that would walk users through the process of revising their question, and a revamp of the review queues aimed at making them effective at pushing content towards reopening. These efforts got as far as the "triage" queue, which did little to help new users without the several other review queues that were planned to be downstream of it but scrapped as SE abruptly stopped working on improvements to the site.
Management should have been aggressively chasing metrics like "percentage of closed questions that get reopened" and "number of new users whose first question is well-received and answered". But it wasn't a priority for them, and the outcome is unsurprising.
The "on hold" change got reversed because new users apparently just found it confusing.
Other attempts to communicate have not worked because the company and the community are separate entities (and the company has more recently shown itself to be downright hostile to the community). We cannot communicate this system better because even moderators do not have access to update the documentation. The best we can really do is write posts on the meta site and hope people find them, and operate the "customer service desk" there where people get the bad news.
But a lot of the time people really just don't read anyway. Especially when they get question-banned; they are sent messages that include links explaining the situation, and they ask on the meta site about things that are clearly explained in those links. (And they sometimes come up with strange theories about it that are directly contradicted by the information given to them. E.g. just the other day we had https://meta.stackoverflow.com/questions/437859.)
Shog9 was probably the best person on staff in terms of awareness of the moderation problems and ability to come up with solutions.
Unfortunately, the company abruptly stopped investing in the Q&A platform around 2015 and shifted their development effort into monetization attempts like Jobs, Teams, Docs, Teams (again), etc. -- right around the time the moderation system started to run into serious scaling problems. There were plans, created by Shog and the rest of the community team, for sweeping overhauls to the moderation systems attempting to fix the problems, but they got shelved as the Q&A site was put in maintenance mode.
It's definitely true that staff is to blame for the site's problems, but not Shog or any of the employees whose usernames you'd recognize as people who actually spent time in the community. Blame the managers who weren't users of the site, decided it wasn't important to the business, and ignored the problems.
But was “today” that profitable? Stack Overflow always struck me as a great public good and a poor way to make money. If the current business makes very little money, it may not be worth the work.
Can you provide an example? The only rude Shog9 posts I can think of were aimed at people abusing the system: known, persistent troublemakers, or overzealous curators exhibiting the kinds of behaviours that people in this thread would criticise, probably far more rudely than Shog ever did.
This sounds plausible - I grew up in the Midwestern US, and thus "vaguely passive-aggressive" is pretty much my native language. The hardest part of the job for me was remembering to communicate in an overtly aggressive manner when necessary, developing a habit of drawing a sharp line between "this is a debate" and "this is how it is."
Sometimes I put that line in the wrong place.
That said... I can't take credit for any major change in direction (or lack thereof) at SO. To the extent that SO succeeded, it did so because it collectively followed through on its mission while that was still something folks valued; to the extent that it has declined, it is because that mission is no longer valued. Plenty of other spaces with very different people, policies, general vibes... have followed the same trajectory, both before SO and especially over the past few years.
With the benefit of hindsight, probably the only thing SO could have done that would have made a significant difference would have been to turn their Chat service into a hosted product in the manner of Discord - if that had happened in, say, 2012, there's a chance the Q&A portion of SO would have long ago become auxiliary, and better able to weather being weaned from Google's feeding.
But even that is hardly assured. History is littered with the stories of ideas that were almost at the right place and time, but not quite. SO's Q&A was the best at what it set out to do for a very long time; surviving to the end of a market may have been the best it could have done.
I always found these discussions around the tone of SO moderation so funny—as a German, I really felt right at home there. No cuddling! No useless flattery! Just facts and suggestions for improvement if necessary, as it should be. Loved it at the time.
> Does it run at the full speed of an original 6502 chip?
> No; it's relatively slow. The MOnSter 6502 runs at about 1/20th the speed of the original, thanks to the much larger capacitance of the design. The maximum reliable clock rate is around 50 kHz. The primary limit to the clock speed is the gate capacitance of the MOSFETs that we are using, which is much larger than the capacitance of the MOSFETs on an original 6502 die.
So if you built a SID using the same techniques and components, you couldn't run it in real time: the pitch would come out way too low unless you modified the design. I'm not sure how hard this would be to avoid with better-spec'd components, but intuitively it makes sense for a much larger circuit to run much slower.
I think you're thinking of British-style "en" dashes – which are often used for something that could have been set off by brackets, and do have a space on either side – rather than "em" dashes. They can also be used in a similar place to a colon – that is, to separate two parts of a single sentence.
British users regularly use that sort of construct with "-" hyphens, simply because they're pretty much the same and a whole lot easier to type on a keyboard.