Hacker News | MaulingMonkey's comments

> They didn't need more unmanned testing to find the issue; they needed to stop ignoring it.

Should such testing have been needed? No.

Was such testing needed, given NASA's political pressures and management? Maybe. Unmanned testing in similar conditions before putting humans on it might've resulted in a nice explosion without loss of life that would've been much harder to ignore than "the hypothesizing of those worrywart engineers," and might've provided the necessary ammunition to resist said political pressures.


> Unmanned testing in similar conditions before putting humans on it might've resulted in a nice explosion without loss of life that would've been much harder to ignore

The loss of the Challenger was the 25th manned orbital mission. So we can expect that it might have taken 25 unmanned missions to cause a similar loss of vehicle. But what would those 25 unmanned missions have been doing? There just wasn't 25 unmanned missions' worth of things to find out. That's also far more unmanned missions than were flown on any previous NASA space program before manned flights began.

Even leaving the above aside, if it would have been politically possible to even fly that many unmanned missions, it would have been politically possible to ground the Shuttle even after manned missions started based on the obvious signs of problems with the SRB joint O-rings. There were, IIRC, at least a dozen previous manned flights which showed issues. There were also good critiques of the design available at the time--which, in the kind of political environment you're imagining, would have been listened to. That design might not even have made it into the final Shuttle when it was flown.

In short, I don't see your alternative scenario as plausible, because the very things that would have been required to make it possible would also have made it unnecessary.


Record low launch temperatures are exactly the kind of boundary pushing conditions that would warrant unmanned testing in a way that not all of those previous 25 would have been. Then again, so was the first launch, and that was manned.

> I don't see your alternative scenario as plausible

Valid.


> Record low launch temperatures

Were not necessary to show problems with the SRB joint O-rings. There had been previous problems noted on flights at temperatures up to 75 degrees F. And the Thiokol engineers had test stand data showing that the O-rings were not fully sealing the joint even at 100 degrees F. Any rational assessment of the data would have concluded that the joint was unacceptably risky at any temperature.

It might have been true that a flight at 29 degrees F (the estimated O-ring temperature at the Challenger launch) was a little more unacceptably risky than a flight at a higher temperature. But that was actually a relatively minor point. The reason the Thiokol engineers focused on the low temperature the night before the Challenger launch was not because they had a solid case, or even a reasonable suspicion, that launching at that cold a temperature was too risky as compared with launching at higher temperatures. It was because NASA had already ignored much better arguments that they had advanced previously, and they were trying to find something, anything, to get NASA to stop at least some launches, given that they knew NASA was not going to stop all launches for political reasons.

And just to round off this issue, other SRB joint designs have been well known since, I believe, the 1960s, that do not have the issue the Shuttle SRBs had, and can be launched just fine at temperatures much colder than 29 F (for example, a launch from Siberia in the winter). So it's not even the case that SRB launches at such cold temperatures were unknown or not well understood prior to the Challenger launch. The Shuttle design simply was braindead in this respect (for political reasons).


I should point out that the Buran launched and landed, in bad weather conditions, completely automated. It's sad how it ended.

> So we can expect that it might have taken 25 unmanned missions to cause a similar loss of vehicle.

That doesn't follow. If those were unmanned test flights pushing the vehicle limits you can't just assume they would have gone as they actually did.


> If those were unmanned test flights pushing the vehicle limits

As far as the launch to orbit, which was the flight phase when Challenger was lost, every Shuttle flight pushed the vehicle to its limits. That was unavoidable. There was no way to do a launch that was any more stressful than the actual launches were.


You can push the environmental conditions of the launch e.g. winds and temperatures.

See my response to MaulingMonkey upthread on why the cold temperature of the Challenger launch actually wasn't the major issue it was made out to be.

Note also my comments there about other SRB designs that were known well before the Shuttle and the range of temperatures they could launch in. Those designs were used on many unmanned flights for years before the Shuttle was even designed. So in this respect, the unmanned test work had already been done. The Shuttle designers just refused to take advantage of all that knowledge for braindead political reasons.


Skeptical notes based on my own experiences in Seattle (≈1148ft average per article - which might be considered high enough that the article already considers the mission for fewer bus stops a success?):

Some of the routes I've taken had "express" variants that skipped many stops, yet still stopped at my usual start and exit. I never bothered waiting for them - the savings were marginal, and taking the first bus was typically fastest, express or not. Time variation due to traffic etc. meant you couldn't really plan around which one you wanted to take either.

The buses already skip stops where they don't see anyone waiting for the bus and nobody pulls the cord to request an exit, and said skipping tends to happen even during the dense rush hour. Additionally, stop time seems to be dominated by passenger load/unload. Clustering at fewer bus stops doesn't significantly change how much time that takes, it just bunches it together in longer chunks. The routes where this happens a lot also tend to be the routes where they're going to be starting and stopping frequently for traffic lights anyways - often stopping before a light for shorter than the red, or after a light and then catching up to the next red.

What makes a significant difference in bus speed is the route.

If the bus takes a route where a highway is taken - up/down I-5 or I-405, or crossing Lake Washington - there are significant time savings. This isn't "having fewer bus stops", this is "having some long distance routes that bypass entire metro areas".

Alternatively, buses that manage to take low density routes - not highways per se, but places where there are still few if any traffic lights, and minimal traffic - tend to manage a lot better speed, compared to routes going through city centers. They may have plenty of bus stops, but again skip many of them due to lower density also resulting in lower passenger numbers, and when they do stop it's for less time than a typical traffic light cycle. A passenger might pull the cord, get up to exit, stand while the bus comes to a stop, hop off, and watch the bus pull off, delaying the bus by what... 10 seconds pessimistically for the stop itself, and another 10 seconds for deceleration and then acceleration back to the speed limit?

Finally, there's also grade separated light rail, grade separated bus lanes, and bus tunnels through downtown Seattle, that significantly help mass transit flow smoothly even in rush hour, for when you do have to go through a dense metro area. While these are far from fast or cheap to implement, axing a few bus stops isn't going to make other routes competitive when these are an option.


I'll note another fun pattern I've seen:

• Bus crawls along behind traffic during rush hour traffic, or a long line of traffic bottlenecked by a busy stop sign

• Bus stops to load/unload, blocking traffic for a bit, with a gap opening up in front of it as a result of cars not being able to get around (e.g. the stop is just directly on the typical curb/sidewalk with one lane in that direction.)

• Bus continues, and quickly catches up to the car it was behind before, since traffic was going slower than the speed limit as a result of bottlenecks

The stop was free, in these cases.


(equivalent C file: https://github.com/id-Software/wolf3d/blob/master/WOLFSRC/WL... )

> Was this translated automatically from C?

I'll note that when I convert code between languages, I often go out of my way to minimize on-the-fly refactoring, instead relying on a much more mechanical, 1:1 style. The result might not be idiomatic in the target language, but the bugs tend to be a bit fewer and shallower, and it assists with debugging the unfamiliar code when there are bugs - careful side-by-side comparison will make the mistakes clear even when I don't actually yet grok what the code is doing.
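To illustrate that mechanical style with a hypothetical example (the C snippet and both function names are mine, not from the Wolf3D port): a 1:1 translation mirrors the C control flow line-for-line, easing side-by-side comparison while debugging, at the cost of idiomatic Rust.

```rust
// Hypothetical C original:
//   int sum = 0;
//   for (int i = 0; i < len; i++) sum += buf[i];

// Mechanical 1:1 translation: mirrors the C loop structure exactly,
// so a side-by-side diff against the original makes mistakes obvious.
fn sum_mechanical(buf: &[i32]) -> i32 {
    let mut sum: i32 = 0;
    let mut i: usize = 0;
    while i < buf.len() {
        sum += buf[i];
        i += 1;
    }
    sum
}

// The idiomatic refactor you'd do later, once the port is trusted:
fn sum_idiomatic(buf: &[i32]) -> i32 {
    buf.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    assert_eq!(sum_mechanical(&data), 10);
    assert_eq!(sum_mechanical(&data), sum_idiomatic(&data));
}
```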

That's not to say that the code should be left in such a state permanently, but I'll note there are significantly more changes in function structure than I'd personally put into an initial C-to-Rust rewrite.

The author of this rewrite appears to be taking a different approach, understanding the codebase in detail and porting it bit by bit, refactoring at least some along the way. Here's the commit that introduced that fn, doesn't look like automatic translation to me: https://github.com/Ragnaroek/iron-wolf/commit/9014fcd6eb7b10...


> Would have to be F32, no?

Generally yes. `NonZeroU32::saturating_add(self, other: u32)` is able to return `NonZeroU32` though! ( https://doc.rust-lang.org/std/num/type.NonZeroU32.html#metho... )

> I cannot think of any way to enforce "non-zero-ness" of the result without making it return an optional Result<NonZeroF32>, and at that point we are basically back to square one...

`NonZeroU32::checked_add(self, other: u32)` basically does this, although I'll note it returns an `Option` instead of a `Result` ( https://doc.rust-lang.org/std/num/type.NonZeroU32.html#metho... ), leaving you to `.ok_or(...)` or otherwise handle the edge case to your heart's content. Niche, but occasionally what you want.
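A quick sketch of both methods in action (values chosen arbitrarily for illustration):

```rust
use std::num::NonZeroU32;

fn main() {
    let n = NonZeroU32::new(5).unwrap();

    // saturating_add can stay in NonZeroU32: adding an unsigned value
    // to a nonzero value can only move away from zero, and saturation
    // clamps overflow at u32::MAX (which is itself nonzero).
    let s: NonZeroU32 = n.saturating_add(u32::MAX);
    assert_eq!(s.get(), u32::MAX);

    // checked_add returns Option<NonZeroU32>; ok_or converts it to a
    // Result if a caller really wants one.
    let c: Result<NonZeroU32, &str> = n.checked_add(1).ok_or("overflow");
    assert_eq!(c.unwrap().get(), 6);

    // Overflow is the only failure mode, reported as None.
    assert!(n.checked_add(u32::MAX).is_none());
}
```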


> `NonZeroU32::saturating_add(self, other: u32)` is able to return `NonZeroU32` though!

I was confused at first how that could work, but then I realized that of course, with _unsigned_ integers this works fine because you cannot add a negative number...


You'd still have to check for overflow, I imagine.

And there are other gotchas, for instance it seems natural to assume that NonZeroF32 * NonZeroF32 can return a NonZeroF32, but 1e-25 * 1e-25 = 0 because of underflow.
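The underflow gotcha is easy to demonstrate with plain `f32` (note `NonZeroF32` itself is hypothetical; Rust's stdlib only provides nonzero integer types):

```rust
fn main() {
    let a: f32 = 1e-25;
    // 1e-50 is far below f32's smallest subnormal (~1.4e-45),
    // so the product of two nonzero values underflows to exactly zero.
    let product = a * a;
    assert_eq!(product, 0.0);
}
```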


One thing I appreciate about Rust's stdlib is that it exposes enough platform details to allow writing the missing knobs without reimplementing the entire wrapper (e.g. File, TcpStream, etc. allow access to raw file descriptors, OpenOptionsExt allows me to use FILE_FLAG_DELETE_ON_CLOSE on Windows, etc.)
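A sketch of both escape hatches (file names are illustrative; the Windows branch is compiled only on Windows):

```rust
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("escape_hatch_demo.txt");
    let mut file = File::create(&path)?;
    file.write_all(b"hello")?;

    // On Unix, AsRawFd exposes the underlying file descriptor, so a
    // missing knob (e.g. an fcntl flag via libc) can be applied without
    // reimplementing File.
    #[cfg(unix)]
    {
        use std::os::unix::io::AsRawFd;
        assert!(file.as_raw_fd() >= 0);
    }

    // On Windows, OpenOptionsExt::custom_flags passes raw CreateFileW
    // flags straight through, e.g. FILE_FLAG_DELETE_ON_CLOSE (0x04000000).
    #[cfg(windows)]
    {
        use std::os::windows::fs::OpenOptionsExt;
        let _tmp = std::fs::OpenOptions::new()
            .create(true)
            .write(true)
            .custom_flags(0x0400_0000)
            .open(std::env::temp_dir().join("delete_on_close.tmp"))?;
        // The OS removes the file when the last handle closes.
    }

    std::fs::remove_file(&path)?;
    Ok(())
}
```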

Where I live (pacific northwest), it's not snow that's the problem, but windstorms. Presumably knocking over trees, which in turn takes down power lines - which of course implies said trees are tall, in proximity to the power lines, and not cut down. I maybe average 24 hours of outage per year (frequently less, but occasionally spiking to a multi-day outage.)

I don't think that's something that can be solved with just "build quality"... but it presumably could be solved through "maintenance" (cutting down or trimming trees, although that requires identifying the problem, permissions, a willingness to have decreased tree coverage, etc.)


> It'll mean GOG has to do less work

[citation needed]

GOG's launcher team is presumably already familiar with their codebase, already has a checkout, already has a codebase that's missing 0 features, has a user interface that already matches their customers' muscle memory, and presumably already has a semi-decent platform abstraction layer, considering they have binaries for both Windows and OS X. Unless they've utterly botched their PAL and buried it under several mountains of technical debt, porting is probably going to be relatively straightforward.

I'm not giving Linux gaming a second shot merely because of a bunch of anecdata about Proton and Wine improvements - I'm giving it a second shot because Steam themselves have staked enough of their brand and reputation on the experience, and put enough skin in the game with official Linux support in their launcher. While I don't have enough of a GOG library for GOG's launcher to move the needle on that front for me personally, what it might do is get me looking at the GOG storefront again - in a way that some third party launcher simply wouldn't. Epic? I do have Satisfactory there, Heroic Launcher might be enough to avoid repurchasing it on Steam just for Linux, but it's not enough to make me want to stop avoiding Epic for future purchases on account of poor Linux support.


Phase Alternating Line? What's "PAL" here?


Given the context probably Platform Abstraction Layer.


You can specify:

    "runOptions": { "runOn": "folderOpen" }
In tasks.json, which I use for automatically `git fetch`ing on a few projects. While I don't recall its interaction with first run / untrusted folder dialogs, it's entirely automatic on second run / trusted folders.
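A minimal tasks.json sketch of that setup (the task label and presentation settings are illustrative, not from my actual config):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "git fetch on open",
      "type": "shell",
      "command": "git fetch --all --prune",
      "runOptions": { "runOn": "folderOpen" },
      "presentation": { "reveal": "never" }
    }
  ]
}
```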


> Wanting to be able to use anybody's machine is very strange, agreed.

Very useful if people are struggling to create reliable repro steps that work for me - I can simply debug in situ on their machine. Also useful if a coworker is struggling to figure something out, and wants a second set of eyes on something that's driving them batty - I can simply do that without needing to ramp up on an unfamiliar toolset. Ever debugged a codegen issue that you couldn't repro, that turned out to be a compiler bug, that you didn't see because you (and the build servers) were on a different version? I have. There are ways to e.g. configure Visual Studio's updater to install the same version for the entire studio, which would've eliminated some of the "works on my machine" dance, but it's a headache. When a coworker shows me a cool non-default thing they've added a key binding for? I'll ask what key(s) they've bound it to if they didn't share it, so we share the same muscle memory.


I bucket Eclipse under "heavyweight IDE". I used to use it, plus the CDT plugin, for my C++ nonsense.

Then Visual Studio's Express and later Community SKUs made Visual Studio free for ≈home/hobby use in the same bucket. And they're better at that bucket for my needs. Less mucking with makefiles, the ability to debug mixed C# and C++ callstacks, the fact that it's the same base as my work tools (game consoles have stuff integrating with Visual Studio, GPU vendors have stuff integrating with Visual Studio, the cool 3rd party intellisense game studios like integrates with Visual Studio...)

Eclipse, at least for me, quickly became relegated to increasingly rare moments of Linux development.

But I don't always want a heavyweight IDE and its plugins and load times and project files. For a long time I just used notepad for quick edits to text files. But that's not great if you're, say, editing a many-file script repository. You still don't want all the dead weight of a heavyweight IDE, but there's a plethora of text editors that give you tabs, and maybe some basic syntax highlighting, and that's all you were going to get anyways. Notepad++, Sublime Text, Kate, ...and Visual Studio Code.

Well, VSC grew some tricks - an extension API for debuggers, spearheading the language server protocol... heck, I eventually even stopped hating the integrated VCS tab! It grew a "lightweight IDE" bucket, and it serves that niche for me well, and that's a useful niche for me.

In doing so, it's admittedly grown away from the "simple text editor" bucket. If you're routinely doing the careful work of auditing possibly malicious repositories before touching a single build task, VSC feels like the wrong tool to me, despite measures such as introducing the concept of untrusted repositories. I've somewhat attempted to shove a round peg into a square hole by using VSC's profiles feature - I now have a "Default" profile for my coding adventures and a "Notes" profile with all the extensions gone for editing my large piles of markdown, and for inspecting code I trust enough to allow on disk, but not enough to autorun anything... but switching editors entirely might be a better use of my time for this niche.

