I went to the "use my calculator yourself" page. The calculator looks nice, but the interface is either awful or I don't get it. I tried for 5 minutes to type in "1/5 + 1/5 + 1/5" and failed...
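For what it's worth, that expression is a natural probe of rounding behaviour (my guess as to why one would type it): in ordinary IEEE-754 binary floating point, 1/5 has no exact representation, so the sum doesn't come out to exactly 3/5. A minimal Python sketch:

    from fractions import Fraction

    # Each 0.2 is a binary approximation of 1/5, and the rounding
    # errors of three of them add up visibly:
    print(0.2 + 0.2 + 0.2)         # 0.6000000000000001
    print(0.2 + 0.2 + 0.2 == 0.6)  # False

    # Exact rational arithmetic gives what you'd hope a calculator shows:
    print(Fraction(1, 5) + Fraction(1, 5) + Fraction(1, 5))  # 3/5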
While I would love for that to happen, quite recently (e.g. [1], also discussed on HN, I believe) the Julia community seemed content with more niche applications, quite deliberately choosing not to challenge the biggies (TensorFlow, Torch, etc.) on deep learning, because the number of maintainers needed to be competitive was just not there.
Is there reason to expect this to change? Maintaining something like Torch and keeping it competitive in terms of speed is a HUGE amount of systems-development work: writing insane numbers of GPU kernels, and so on. After having read it, I wouldn't quite yet call the tone of the discussion you linked "serious talk"...
Correctly implemented QKD gives you key distribution without assumptions about how difficult certain mathematical problems are relative to how much compute your adversary has. Key distribution is nowadays done with asymmetric cryptography, so QKD can replace some asymmetric cryptography. You can also have authentication (Wegman-Carter) with symmetric keys. What's not quite clear is how you would do certificates and PKI. However, given key distribution, you could probably use symmetric keys for that as well.
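To make the Wegman-Carter point concrete: with shared symmetric key material you can authenticate messages information-theoretically, by masking a universal hash with one-time key material. A toy Python sketch (the polynomial hash, block encoding, and parameters are my own illustrative choices, not a vetted construction):

    import secrets

    P = 2**127 - 1  # a prime; arithmetic is done in the field GF(P)

    def poly_hash(key: int, message: bytes) -> int:
        # Universal hash: split the message into 8-byte blocks m_1..m_n and
        # evaluate m_1*key + m_2*key^2 + ... + m_n*key^n + len*key^(n+1) (mod P).
        acc, x = 0, key
        for i in range(0, len(message), 8):
            block = int.from_bytes(message[i:i + 8], "big")
            acc = (acc + block * x) % P
            x = (x * key) % P
        # Folding in the length removes trailing-zero/padding ambiguity.
        return (acc + len(message) * x) % P

    def wc_tag(hash_key: int, otp: int, message: bytes) -> int:
        # Wegman-Carter tag: universal hash masked by a fresh one-time value.
        # The otp value must never be reused for a second message.
        return (poly_hash(hash_key, message) + otp) % P

    # Shared secret material; with QKD, this is what the quantum channel provides.
    hash_key = secrets.randbelow(P)
    otp = secrets.randbelow(P)

    tag = wc_tag(hash_key, otp, b"attack at dawn")
    print(tag == wc_tag(hash_key, otp, b"attack at dawn"))  # True
    print(tag == wc_tag(hash_key, otp, b"attack at dusk"))  # False (with overwhelming probability)

The point is that forging a tag requires guessing rather than computing: no assumption about the adversary's compute power is needed.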
It's unlikely that your adversaries can decrypt your traffic right now (i.e., break things like RSA). However, advances in number theory and/or computing power might enable them to do so in the future. Your adversary can simply record your encrypted traffic and wait until the means to decrypt it become available. Thus, for data that has to stay secure for a long time (and where you want to be as sure as possible that it will), it's not good to rely on predictions about future advances in number theory or computing. This is the niche that QKD is aiming at.
For what it's worth, China has a huge QKD network, which cost them a lot of money. Their QKD satellite also cost a lot of money. They are in fact world leaders in quantum communication technology as well and spend a lot on researching it. I wonder why they made this investment, whether it was smart, and what they get out of it.
I also have doubts that QKD will see much use in the coming decades, and even more doubts that it will be used properly and actually make a lot of systems more secure. Securing systems is very hard, and securing individual communication links (which is what QKD does) is not the main problem. In the current landscape, securing your data and communications to a reasonable level just isn't worth it for the vast majority of businesses, since they can offload most of the damages of being breached onto their customers. There is a danger that QKD will be seen as "magic fairy dust" that you sprinkle over your systems just to claim you're trying very hard to secure them (an image that is still widespread about standard cryptography as well).
Messages sent using classical crypto should be viewed as becoming public after an unknown delay. They can be decrypted at your adversary's leisure with techniques and equipment invented in the future.
Quantum crypto must be broken immediately to be broken at all.
If what you are encrypting is, for example, credit card information, it's perfectly fine if that becomes public in a decade. Your information will have changed.
If what you are encrypting needs to remain secret for the next fifty years, do not use classical encryption and a public channel. It may well be made public while the information is still sensitive. This is why QKD has some early adopters. It's the only long-term secure alternative to having people carry one-time pads back and forth in suitcases full of hard drives, which has its own security issues.
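For reference, the one-time pad itself is just XOR with truly random key material as long as the message, which is exactly why the suitcases fill up. A minimal Python sketch (key handling and authentication omitted):

    import secrets

    def otp_xor(key: bytes, data: bytes) -> bytes:
        # One-time pad: XOR each data byte with a key byte. The key must be
        # truly random, at least as long as the data, and never reused.
        assert len(key) >= len(data)
        return bytes(k ^ d for k, d in zip(key, data))

    message = b"must stay secret for fifty years"
    key = secrets.token_bytes(len(message))  # one key byte per message byte

    ciphertext = otp_xor(key, message)
    print(otp_xor(key, ciphertext) == message)  # True: XOR is its own inverse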
It's not clear that the majority of Germans are willing to do that, especially since it wouldn't just end the war right away. I've also heard a politician on German news quote the estimated "few percent GDP decline" that would result from banning gas imports from Russia, which apparently implies that it's "not possible" ("nicht machbar"). They make it sound like it's on par with breaking the laws of physics and out of their hands...
You're suggesting that Russia wants Germany to stop importing gas? If so, why doesn't Russia stop selling it, like they threaten to every now and then?
Russia wants to put Germany in a bad negotiating position. If internal pressure to shut off Russian gas becomes too strong, and shutting it off is a tremendous self-inflicted harm, then the better alternative for politicians is to find a quick compromise with Russia.
Without Russia having to publicly force this to happen.
I recently had a discussion on AI and morality with a philosophy PhD candidate (who publishes on ethics and human rights, though not on AI). Specifically, we discussed whether it was OK to allow self-driving cars despite not having a solution to moral questions such as "given the choice, should you run over 2.3 grandmas aged 71.2 or 1.7 kids aged 11.3?", and whether it was realistic that socially established "correct solutions" to such problems could be incorporated into AI.
His opinion was that such "deep ethical problems" have been around for millennia and it's unreasonable to expect anyone to "just solve" them. Therefore, self-driving cars will not have solutions to these fundamental issues and, as a consequence, society should not and probably will not accept self-driving cars.
I agree that we will not "just solve" such questions (i.e., arrive at a consensus across humanity) any time soon. However, I also think such questions are almost irrelevant, because the "conundrums" ethical philosophy discusses don't happen in practice. There is no need to "solve" these problems in order to use self-driving cars. We can (and will) slowly progress towards something like a consensus on what we want (or, at least, can tolerate) the "moral choices" of self-driving cars to be in almost all situations that arise in practice. In fact, AI can be a great step forward in "practical morality", because an AI will actually do what it "considers" morally right.
Of course, there will be many difficult questions to answer. However, I think it's a fundamental error to just give up and take the position of my philosopher friend. Moral qualms have not stopped technology in the past, and I find it implausible that society will somehow "not accept" it. As a philosopher, or even just a member of society, you have to see AI as a chance and an obligation to advance morality. It's pretty clear that human morality has been changing (I believe advancing) over the millennia. AI marks a transition where the moral questions of the past begin to make a difference in the real world, because what we set as moral standards has a much larger effect on what people and things do.
To make progress on this, we have to accept that it is a fool's errand to try "deriving" correct morality from "first principles" (Kant famously derived from absolute and eternal first principles that it's morally OK to kill "illegitimate" new-borns as a means of birth control). Rather, it's an exercise in consensus building. Likewise, it is not reasonable to expect moral solutions to arrive at something "perfect and complete". Practically relevant morality will be fuzzy and ever-changing, just like judicial systems.
I am quite sad that so many philosophers and members of the public seem reluctant to accept this challenge of overhauling the millennia-old, stagnant academic debates. If they don't participate, engineers will "solve" these problems themselves, perhaps choosing ease of implementation over moral considerations.
> I am quite sad that so many philosophers and members of the public seem reluctant to accept this challenge of overhauling the millennia-old, stagnant academic debates.
Many philosophers and members of the public are sad that you insist on ignoring their input and are going to charge ahead, long-term consequences be damned. (I'm not taking sides here.)
> If they don't participate, engineers will "solve" these problems themselves, perhaps choosing ease of implementation over moral considerations.
It’s OK. Congresses, parliaments, and other policy making bodies, basing their decisions on populist emotional feedback loops, will regulate these solutions in ways that leave both the moralist and the solver confused and unhappy.
> Many philosophers and members of the public are sad that you insist on ignoring their input
On the contrary, I strongly encourage them to give input. What I criticise is those who would rather give up, dismiss the questions as impossible to solve, and lament how technology has been destroying society for the last 2000 years, while self-driving cars get deployed anyway, causing suffering that could have been prevented by thinking about things more and seeing them from more points of view.
Which inputs by philosophers or the public are being ignored?
Because codifying a behaviour is explicitly justifying it, and few engineers want to be responsible for signing off on the feature that runs over Grandma.
A workaround thus far has been to abstract the problem into pieces small enough that they ARE palatable to sign off on, as your comment shows. "Minimize the number of Grandmas run over" is a different framing than "Should we run over Grandma?".
It won't be random; it will be based on what they feel is best. That's immaterial in this case, because people will blame the AI as a whole when someone dies (even if the death was unavoidable). Blaming human drivers as a whole is not an option, because banning people from driving would mean nobody can drive a car at all. So the responsibility is shifted towards smaller details, and folks can feel safe in the knowledge that they may drive as long as they do "nothing wrong", even if that "wrong" is ill-defined and some situations have only "wrong" solutions.
Humans also directly suffer consequences for those actions. They show remorse and suffer emotionally attempting to grapple with the outcome of their choice. The legal system takes remorse and suffering into consideration, as it is designed to do.
Do self-driving engineers personally commit to be punished and suffer remorse for their algorithm’s choices? And before you say “it’s not fair, the CEO is at fault!” think about who’s writing the code. The CEO doesn’t make the self-driving car possible, the engineer does.
> If AI drivers generally have less accidents and in the few cases left behave like humans, wouldn't that be a win?
Yes. I think that's a big part of why it's not necessary to "completely, once and for all solve" ethical problems before automating things that might run into them. One could easily argue (and people of course have) that it's also immoral not to take measures that will reduce accidents, which I'm quite sure will happen with AI drivers in the not-too-distant future.