I hope for the same, but it's based on the assumption that space exploration will be human-driven. Unfortunately, it's possible it will be robot-driven instead, and the museums will stay on Earth.
I seriously doubt we will stay on Earth like that. Humans have always been explorers and pioneers. You can only look at something through a screen for so long before you get an itch that you can't scratch through a screen. Humans will inevitably leave this planet.
Operating robots on other planets is a rough process. Most planets, lacking an atmosphere, have dust that gets EVERYWHERE, and it's really bad for mechanical and electrical systems. The moon has a static charge that causes the regolith powder to seep into the smallest cracks. Humanoid robots will not last long, and all other forms are just a single failure from uselessness. Until they can either self-repair or repair others (meaning a minimum of n+1 robots sent), they will not be useful for long exploration. And, honestly, I'd be pretty worried to find robots that could self-repair or repair others. That's just a small step away from self-replication, and that leads down other scary paths.
> With my change: 95% of people who are shown scans have cancer and are treated earlier; 5% of people do not have cancer and get CT scans; 0.5% of people get useless biopsies. Without my change: many of those 95% die, and the 0.5% do not get useless biopsies.
You assume that treating cancer automatically improves the outcome. Treating cancer often kills you, so treating a non-fatal tumor can easily be a bad decision. And a lot of the tumors found by aggressive scans are like that, but we don't know yet exactly how many, or how to tell one from the other. It's a new question that requires decades-long observations to answer.
> This is wrong. If you had a 100% accurate cancer detector, fewer people would die of cancer with no downside
You're saying that as if detection somehow cures cancer. It doesn't.
> You're saying that as if detection somehow cures cancer. It doesn't.
No, I didn't say the detector would cause cancer to be cured.
I said fewer people would die, with no downsides. If treatment is sometimes harmful, then the detector also fixes that: you'd never treat people without cancer.
No, the detector doesn't fix that: the problem is not treating people without cancer. The problem is treating people with cancer that won't kill or harm them during their lifetime. In that case even a low-risk treatment becomes harmful, let alone actual cancer treatments.
Whether treatment is net harmful or not depends on the level of risk with no treatment. If you apply a treatment with a 15% chance of severe side effects to a tumor that will kill the patient with 50% probability in the next five years, of course it's net positive. If you apply it to a first-stage cancer that has a 10% chance of progressing to the second stage, the very same treatment will be net harmful.
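A back-of-envelope sketch of that comparison, crudely lumping death and severe side effects together as one "harm" (a real analysis would weight them very differently; all numbers are the ones above):

```python
# Crude expected-harm comparison using the hypothetical numbers above.
P_SIDE_EFFECT = 0.15  # chance of severe side effects from treatment

def expected_harm(p_bad_outcome_untreated: float, treated: bool) -> float:
    """Expected chance of one severe outcome over the next five years."""
    if treated:
        # Assume treatment prevents the cancer outcome but carries its own risk.
        return P_SIDE_EFFECT
    return p_bad_outcome_untreated

# Aggressive tumor: 50% chance of death in five years if untreated.
print(expected_harm(0.50, treated=False))  # 0.50 untreated
print(expected_harm(0.50, treated=True))   # 0.15 treated -> net positive

# Indolent first-stage cancer: 10% chance of progressing.
print(expected_harm(0.10, treated=False))  # 0.10 untreated
print(expected_harm(0.10, treated=True))   # 0.15 treated -> net harmful
```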
So no, most cancer treatments aren't net harmful now, but they likely will be in a world where your programme is implemented. Even something as "simple" as a biopsy has a mortality rate far from zero. Applying it at scale may not have the effect you expect. And surgeries and chemo are much worse.
But do you see how crazy that sounds? If we knew the numbers, we would just not do the treatment in those cases.
And in reality it's not actually this close. Early treatment is so overwhelmingly better that it completely dwarfs all considerations of biopsy risk.
Late-stage lung cancer has less than half the 5-year survival rate of early-stage, and around 50% of lung cancer detection is late-stage. That's around 30,000 lives you could save every year from lung cancer alone.
Let's say you run the protocol for all adults. Typically, post CT or PET, at most 10% get a biopsy. So let's say we increase the false positives that lead to a CT or PET by X% of adults. That's 270M * X additional scans, of which at most 10% lead to biopsies. Lung biopsy mortality is under 2%.
So in the worst case you need a false positive rate under 5% for this to be net beneficial for lung cancer. That's low, but with 3 scans 6 months apart and a good radiologist, it's not unreasonable.
I believe I've assumed the worst case for all the quantities. In reality the additional screened people would have a much lower CT detection rate than the background population, and biopsy mortality has been decreasing. Plus you could do this only for people above 40, where the benefits are higher.
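A sketch of that arithmetic with the worst-case numbers from this thread (these are the thread's assumptions, not real epidemiology):

```python
# Lives saved vs. biopsy deaths, using the worst-case figures above.
ADULTS = 270e6           # roughly the adult population screened
LIVES_SAVED = 30_000     # yearly lung cancer deaths avoidable via early detection
BIOPSY_RATE = 0.10       # at most 10% of positive CT/PET scans lead to a biopsy
BIOPSY_MORTALITY = 0.02  # worst-case lung biopsy mortality

def net_lives(false_positive_rate: float) -> float:
    """Lives saved minus expected biopsy deaths at a given false positive rate."""
    biopsies = ADULTS * false_positive_rate * BIOPSY_RATE
    return LIVES_SAVED - biopsies * BIOPSY_MORTALITY

break_even = LIVES_SAVED / (ADULTS * BIOPSY_RATE * BIOPSY_MORTALITY)
print(f"break-even false positive rate: {break_even:.1%}")  # ~5.6%
print(net_lives(0.05))  # ~+3,000 net lives at a 5% false positive rate
```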
Yes, IF we knew the numbers. That's the whole problem. Unprompted full-body MRI scans turn up a completely new probability distribution. We CAN'T know the numbers as of today; it takes decades of controlled studies to get them. (Which is exactly what doctors advocate for.)
Even for something as well studied as colon cancer, it's still controversial whether mass colonoscopies are better than occult blood tests. The US sticks to the former, Europe to the latter. The US has higher rates of polyp removal, higher rates of cancer detection, higher rates of surgeries and ... higher mortality rates than the EU. Why this happens is being studied today, and there are some early results. What is absolutely clear, though, is that higher detection rates don't automatically lead to better outcomes; it can easily be the opposite.
Or take something as simple as sunscreen. It's well known that sun exposure for light-skinned people can cause cancer, and sunscreen is advised. A few multi-decade studies have shown that while it indeed eradicates the skin cancer, it also shortens life expectancy by a few years.
Primum non nocere: doctors and scientists are unsure whether even sunscreen should be applied at scale, and you're effectively calling for mass biopsies and surgeries.
So if this is true, it seems we must accept that many people will die of cancer we could have detected and cured with frequent scans, because doing frequent scans would overall cause more harm to people who didn't need treatment. So the overall death/harm rate would be worse with more frequent scans?
Isn't that then just a problem with the scan and diagnosis? With more frequent scans it seems highly unlikely that we wouldn't improve this process and end up in a better place.
There's a theory that first-stage cancer is way more common than we think; it just doesn't develop further most of the time, causes no symptoms, and remains undiagnosed throughout the lifetime.
There's some support for this view, because aggressive screening for thyroid and prostate cancers increases the number of surgeries a lot but doesn't seem to affect the mortality rates.
Risks from a surgery are non-negligible; if you perform it to treat a low-risk condition, it may be a net loss in the end.
So you're technically right about the "early-detecting" part, but the "much easier to treat" step is problematic because it's unclear what a net-positive treatment looks like for low-risk cases. Probably it comes down to yearly monitoring of whatever was detected, not the actual treatment.
Idk, to me it feels much, much better than just picking one root when defining the inverse function.
This desire to absolutely pick one, when from a purely mathematical perspective they're all equal, is both ugly and harmful (as in, it complicates things down the line).
But couldn't we just switch the nomenclature? Instead of the oxymoronic concept of a "multivalued function", we could just call it a "relation of complex equivalence" or something of the sort.
The question is meaningless because isomorphic structures should be considered identical. A=A. Unless you happen to be studying the isomorphisms themselves in some broader context, in which case how the structures are identical matters. (For example, the fact that in any expression you can freely switch i with -i is a meaningful claim about how you might work with the complex numbers.)
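(For what it's worth, the standard way to make that i/-i remark precise: complex conjugation is a field automorphism of C that fixes R pointwise, so any valid identity remains valid with every i replaced by -i:)

```latex
\overline{z+w} = \bar{z} + \bar{w}, \qquad
\overline{zw} = \bar{z}\,\bar{w}, \qquad
\bar{r} = r \quad \text{for } r \in \mathbb{R}.
```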
Homotopy type theory was invented to address this notion of equivalence (e.g., under isomorphism) being equivalent to identity; but there's no general consensus around the topic, and different formalisms address equivalence versus identity in varied ways.
I have an MS in math and came to the conclusion that C is not any more "imaginary" than R. Both are convenient abstractions; neither is particularly "natural".
Natural numbers are "natural" enough, but N as the "set of all natural numbers" is not so much. It only takes N to build Hilbert's hotel. The uncountable set of all subsets of N is probably even worse.
All that, of course, doesn't make N bad or useless. It just shows that mathematical objects don't have to follow the laws or intuition of the real world to be useful in the real world.
Not OP, but the number of cards doesn't matter. Only one shuffle can exist at a time; the "number of shuffles" is not a count of natural objects but rather the cardinality of a set. And as we know, sets and cardinalities open the gates of hell.
This doesn't mean it's not a "relevant thing to talk about". It just means that these mathematical constructs, while useful, don't maintain a direct connection to reality, kind of like complex numbers.
> Only one shuffle can exist at a time; the "number of shuffles" is not a count of natural objects but rather the cardinality of a set.
I really don't understand what this means in practice. If there are exactly 50 rocks in front of me right now, I can't talk about 51? It doesn't maintain a direct connection to reality to talk about what would happen if I threw another rock on the pile? Or if that's connected because another rock exists, what about if I have exactly 20 chickens and I want to talk about what would happen when another is born? Is this "connected to reality" and "a number of natural objects"? Or "the cardinality of a set" instead?
I'm not the person you're asking, but I also have an MS in math and the same opinions.
Most mathematicians see N as fundamental -- something any alien race would certainly stumble on and use as a building block for more intricate processes. I think that position is likely but not guaranteed.
N itself is already a strange beast. It arises as some sort of "completion" [0] -- an abstraction that isn't practically useful or instantiable, existing only to make logic and computations nice. The seeming simplicity and unpredictability of primes is a weird artifact of an object supposedly designed for counting. Most subsets of N can't even be named or described in any language in finite space. Weirder still, there are uncountable objects behaving like N for all practical purposes (first-order Peano arithmetic has uncountable models).
I would then have a position something along the lines of counting being fundamental but N being a convenient, messy abstraction. It's a computational tool like any of the others.
Even that though isn't a given. What says that counting is the thing an alien race would develop first, or that they wouldn't immediately abandon it for something more befitting of their understanding of reality when they advanced enough to realize the problems? As some candidate alternative substrates for building mathematics, consider:
C: This is untested (probably untestable), but perhaps C showing up everywhere in quantum mechanics isn't as strange as we think. Maybe the universe is fundamentally wavelike, and discreteness is what we perceive when waves interfere. N crops up as a projection of C onto simple boundary conditions, not as a fundamental property of the universe itself, but as an approximate way of describing some part of the universe sometimes.
Computation: Humans are input/output machines. It doesn't make sense to talk about numbers we'll physically never be able to talk about. If naturals are fundamental, why do they have so many encodings? Why do you have to specify which encoding you're using when doing proofs using N? Primes being hard to analyze makes perfect sense when you view N as a residue of some computation; you're asking how the grammatical structure of a computer program changes under multiplication of _programs_. The other paradoxes and strange behaviors of N only crop up when you start building nontrivial computations, which also makes perfect sense; of course complicated programs are complicated.
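As a toy illustration of the "many encodings" point (everything here is made up for the example, not from the original comment): the same natural number has structurally unrelated representations, and a proof about N has to commit to one of them.

```python
# Two of the many possible encodings of the same natural number.
from dataclasses import dataclass
from typing import Optional

# Encoding 1: unary / Peano-style. Zero is None; successor wraps.
@dataclass
class Succ:
    pred: Optional["Succ"]

def peano(n: int) -> Optional[Succ]:
    node = None
    for _ in range(n):
        node = Succ(node)
    return node

# Encoding 2: binary digit list, least significant bit first.
def binary(n: int) -> list[int]:
    bits = []
    while n:
        bits.append(n & 1)
        n >>= 1
    return bits

# Same number, structurally unrelated representations:
print(peano(5))   # Succ(pred=Succ(pred=Succ(pred=Succ(pred=Succ(pred=None)))))
print(binary(5))  # [1, 0, 1]
```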
</rant>
My actual position is closer to the idea that none of it is natural, including N. It's the Russian roulette of tooling, with 99 chambers loaded in the forward direction to tackle almost any problem you care about and 1 jammed in pointing straight down at your foot when you look too closely at second-order implications and how everything ties together. Mathematical structures are real patterns in logical space, but "fundamental" is a category error. There's no objective hierarchy, just different computational/conceptual trade-offs depending on what you're trying to do.
[0] When people talk about N being fundamental, they often talk about the idea of counting and discrete objects being fundamental. You don't need N for that though; you need the first hundred, thousand, however many things. You only need N when talking about arbitrary counting processes, a set big enough to definitely describe all possible ways a person might count. You could probably get away with naturals up to 10^1000 or something as an arbitrary, finite primitive sufficient for talking about any physical, discrete process, but we've instead gone for the abstraction of a "completion" conjuring up a limiting set of all possible discrete sets.
N pretty much is "arbitrary-length information theory". As soon as you leave the realm of the finite, you end up with N. I'm not convinced that any alien civilization could get very far mathematically or computationally without reinventing N somewhere, even if unintentionally (e.g., how does one state the halting problem?).
Threats only work if the threatened entity thinks it can avoid them via compliance.
Tariffs come anyway, both Canada and Denmark are under threat of annexation, and the suspension of the ICC's Microsoft email accounts shows that governments cannot rely on US tech.
Yes, or as Cory Doctorow put it: "So now we have tariffs, and if someone threatens to burn your house down unless you follow orders, and then they burn it down anyway, you really don’t have to keep following their orders."
The only reasonable interpretation of "possibly prefaced" is that the CNAMEs either come first or not at all (hence "possibly"). Nowhere does the RFC suggest that they may come in the middle.
Something has been broken at Cloudflare for the last couple of years. It takes a very specific engineering culture to run the internet, and it's just not there anymore.
Surely governments will not see any issues with this project whatsoever.
Though in the US I think you could try publishing the code and blueprints as a book and claiming First Amendment protection, following the PGP story. It may or may not work.