Oh, yay. Given the magic of AI and ML, that's going to be a clusterfuck of epic proportions. If we're lucky, we'll even survive to tell the tale.
It's going to be vulnerable to (accidentally or intentionally) biased training data, and on top of that we're making rapid progress on adversarial patterns. On top of all that, the results will be relatively inscrutable. (Yes, you can somewhat explain the results you get. But the lure of ML is that it's right most of the time, and you start trusting it. Not to mention that explanations cost time and money)
It's a reasonable strategy for processing images and other sensor data. People aren't perfect for the task either. We're expensive, get bored easily, and even at the best of times we're also vulnerable to plenty of adversarial patterns.
Automated or not, the nature of the military mission guarantees it will take the lives and destroy the property of innocent people. (So don't take this as an endorsement of what the military is doing.)
If all you care about is winning and you accept that human sentries sometimes shoot the wrong people, as all military forces do, then automation of sentry tasks makes perfect sense.
Quoth: For all tested DL models, on average DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop.
One second for triggering a failure, on any tested model. On a commodity laptop.
That's just the technology you want to entrust with intelligence tasks.
There must be DNN research on using methods such as these to improve the accuracy of DNNs by feeding the adversarial examples back in as training input.
You could just take your existing labeled data, add light-to-medium noise to all of it (easiest with images?), and thereby get a lot more training data that's less prone to overfitting.
It's so obvious to me that someone must have thought of it already. So my guess is that these kinds of easy adversarial examples won't be a thing for very long.
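The augmentation idea above is easy to sketch. Here is a minimal numpy version (function and parameter names are mine, not from any particular library): each labeled image gets a few noisy copies, with labels reused, so the training set grows without new annotation work.

```python
import numpy as np

def augment_with_noise(images, labels, sigma=0.05, copies=2, seed=0):
    """Return the original data plus `copies` noisy versions of each image.

    Assumes pixel values in [0, 1]; labels are reused for the noisy copies,
    so the dataset grows by a factor of (copies + 1).
    """
    rng = np.random.default_rng(seed)
    out_images = [images]
    out_labels = [labels]
    for _ in range(copies):
        noisy = images + rng.normal(0.0, sigma, size=images.shape)
        out_images.append(np.clip(noisy, 0.0, 1.0))  # keep pixels in valid range
        out_labels.append(labels)
    return np.concatenate(out_images), np.concatenate(out_labels)
```

Worth noting: this is plain random-noise augmentation. Adversarial training proper perturbs each example along the loss gradient rather than randomly, which is why random noise alone tends not to close the adversarial-example gap.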
Rule of thumb: If you find a blatantly obvious solution and you're not an expert in the field, assume people who are have thought of it and discarded it - or it would exist.
(Note: I'm not an expert in the field, either. But the above rule works most of the time :)
It's worth pointing out that their technique requires access to the internals of the network:
First, we compute the gradient of the outputs of the neurons in both the output and hidden layers with the input value as a variable and the weight parameter as a constant.
A well designed production system would not support this.
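To make concrete what that white-box access buys an attacker, here is a toy numpy sketch (a made-up one-layer network, not the paper's actual code): with the weights visible and held constant, the attacker can compute the exact Jacobian of the outputs with respect to the input and step the input against it.

```python
import numpy as np

def forward(x, W, b):
    """Toy one-layer network: outputs = tanh(W @ x + b)."""
    return np.tanh(W @ x + b)

def input_gradient(x, W, b):
    """Jacobian of the outputs w.r.t. the input x, weights held constant.

    d tanh(Wx + b)/dx = diag(1 - tanh(Wx + b)^2) @ W -- this is exactly
    the quantity the quoted passage describes, and it requires knowing W.
    """
    pre = W @ x + b
    return (1.0 - np.tanh(pre) ** 2)[:, None] * W

def perturb(x, W, b, eps=0.1):
    """FGSM-style step: nudge x against the gradient of the current
    top output, to push the network away from its decision."""
    g = input_gradient(x, W, b)
    top = np.argmax(forward(x, W, b))
    return x - eps * np.sign(g[top])
```

Without access to W (a black-box deployment), an attacker has to estimate this gradient by querying the model, which is slower and noisier -- hence the point that a well-designed production system would not expose the internals.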
Wow, trying to stir things up, are we? Based on your comments, I don’t even think you read the article.
Not sure how image processing will kill us all. That’s the conclusion that you drew.
To address your pedantic comment further, I’d like to add that speeding up image processing or event correlation would reduce the grunt work and allow the intelligence folks to be more productive. Kind of like replacing typewriters with word processors. Definitely not the end of the world as you seem to think.
Countdown until a contract includes extracting "anonymous", "aggregate-only" models built from private communications back into their commercial activities, and then techies start doing 'deep dream'-type stuff to reveal swathes of private data. 5 years..
Came to Ctrl-F for Skynet. Was not disappointed. This seems to be a classic situation of reasonable strategic action with problematic potential of stepping over the fine line to automatic actions based on AI-derived intelligence. Maybe not now, but think a decade into the future where enough time and space has passed between the initial planning/implementation personnel and those advancing what "has always been there". We can only hope the others love their children too.
People won't be leaving that field any time soon. There's already a massive shortage of data science talent in private industry, and adding demand from the government won't help. But with the government shelling out buckets of cash for programs with AI/ML, there will be even more demand, meaning contractors will need to lure in DS talent. People in DS will be making way too much money to want to leave the industry.
Were they not using AI and ML techniques before? Or is this announcement just indicating that they are going to put a greater emphasis on deep learning? I have trouble seeing how the NSA would function without ML techniques, for example.
I suspect that they were using classical ML techniques, and a lot of traditional scientific computing techniques, but not a lot of deep learning, given how new it is and how slowly government operates.
It's also not clear how much of classical ML translates into a DL setting - a lot of graph analysis [1], for instance, is traditionally solved with matrix factorisation, which doesn't translate directly into deep learning. You can find ways of doing it with DL, but it's not as transferable.
[1]: e.g. analysing an individual's social graph. I believe that Facebook uses a lot of classical techniques, like PCA. I've heard that Facebook has been pouring a lot of money into research that uses DL on arbitrary graphs.
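For a flavour of the classical matrix-factorisation approach mentioned above, here is a minimal numpy sketch (my own toy example, not how any particular agency or company does it): embed graph nodes via a truncated SVD of the adjacency matrix, so nodes with similar neighbourhoods land near each other.

```python
import numpy as np

def graph_embedding(adj, dim=2):
    """Embed each node of a graph as a low-dimensional vector.

    Classical, non-DL technique: take the top `dim` singular vectors of
    the adjacency matrix, scaled by the square roots of the singular
    values. Nodes with similar connectivity get similar embeddings.
    """
    U, s, _ = np.linalg.svd(adj, full_matrices=False)
    return U[:, :dim] * np.sqrt(s[:dim])
```

This is the kind of pipeline that doesn't have an obvious one-to-one deep-learning replacement: DL methods for arbitrary graphs (graph neural networks) exist, but they're a different formulation rather than a drop-in substitute.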
Government pioneered, or pushed contractors to pioneer, lots of technologies - just think about NASA.
I'm pretty sure that, with their budget, the NSA is at the forefront of technologies that can be used in their field, but they obviously don't want to disclose what they do. Adopting a publicly available technology shouldn't be difficult at all.
Sure they were. But they have so much data. Potentially all Internet (and other telecommunications) data. There are limitations in private network bandwidth and processing, of course. But they get around that using distributed clusters.
Analysis, and particularly extraction of actionable intelligence, is the key challenge. They have lots of analysts, for sure. But organizing and integrating their output has been a slow process. Reports, summaries of reports, summaries of summaries, etc.
To play devil's advocate: why is it dark? If an AI can process significantly more data than us humans to find actionable intelligence is that not a good thing? Imagine a building is on a strike list and the intelligence on it needs to be revetted. An analyst pulls in SIGINT, GEOINT, OSINT, HUMINT, etc. The analyst has 48 hours to get his products uploaded for battlefield commanders to make an air strike or not. The analyst can't possibly process all the data in time. The analyst assesses it's a weapons cache based on what they can process. In reality, it was a makeshift hospital. This is how mistakes happen in war.
I'll admit my previous post was thin on substance.
Sure, if you believe that a radical increase in capacity to surveil everyone and exert overwhelming violence against anyone with absolute obedience would certainly never be used to destroy the lives of politically inconvenient people and tear innocents off to torture or extrajudicial assassinations, I guess it's all sunshine and fluffy bunnies.
I doubt the training data/AI will be accountable or visible to the citizens (yes, the analyst gets fired/van'd, but someone else takes his place).
So we're trusting the intel agencies, but they're trusting an AI to some extent. They're plainly not dumb enough to let the AI drive, but it still plays a part.
I'm not worried about an AI being able to sift data given a training set to some degree of accuracy. I'm more worried about the training set as a dark way of generating actionable intelligence, for good or bad purposes.
Everything used in war is a double-edged sword. Unfortunately, that's the reality of it. Personally, from my own experience, the benefits of AI in intelligence outweigh the costs.
Which is why a permanent war posture is a frightening thing in a government.
I'm curious about your personal experience of the benefits and costs of AI in intelligence? It seems unlikely that you will have been on the receiving end of the full potential cost of that AI.
I've heard that most federal agencies are just starting to fix basic problems in their data management. Like vast troves of inconsistent spreadsheets being ETL'd into actual databases now. ML wasn't feasible because everything was such a mess.
It's going to also eventually revolutionize warfighting, and I suspect that the US will be slower to adopt it than other countries because of the ethical and moral considerations of turning over decisions about life and death to machines.
I worry that in 5-10 years, we might be facing a blitzkrieg sort of surprise as an AI-powered drone army captures a huge amount of territory before human soldiers even manage to get their pants on to defend it, setting off a new and deadly arms race.
> It's going to also eventually revolutionize warfighting, and I suspect that the US will be slower to adopt it than other countries because of the ethical and moral considerations of turning over decisions about life and death to machines.
I like your assumption that the US actually cares about moral or ethical considerations, considering how many governments they've overthrown, how many terrorists they've funded, etc...
Not to mention the fact that they are already the leaders in drone warfare, and routinely use drones to drop bombs on civilians. I really don't see them giving a fuck if an algorithm bombs a wedding instead of someone staring at a screen pressing the button...
And the fact that people can assert otherwise without blushing kind of goes to show it's all right on track, including the psy ops aspect. If control of the hemisphere requires control of the planet, if modern communications mean control can only be maintained by controlling speech and thought in every nook and cranny -- then so be it. The people who don't want power at the expense of becoming a monster don't reach high echelons of power, they remain at the PR level.
To think in terms of countries, instead of interest groups within countries, is not even wrong. It's not countries vying for power, it's elites within a country using the elites in another country to suppress and exploit "their" populace. Two dictators in two neighboring countries might genuinely hate each other, but they still can use the other one as the perfect excuse to clamp down.
Soldiers are at best there to defend against other soldiers. It's like having a debugger that can fix only the bugs the debugger itself introduces, with the operative word being "can" -- so the net result is negative. What goes for soldiers goes for killer drones, and a whitelist approach, killing everyone on the planet who is not flagged for survival, would already be easier to build than, say, self-driving vehicles. We're already there; now all that remains is to sell it, and that is going butter smooth. We're still wide-eyed and sleepwalking, and we absolutely need more powder in this keg so that when it goes boom, nobody will even have time to ask anyone "what did you know, what did you choose not to learn, what did you do and what did you fail to do?"
In your comment, you don't have a single statement that would actually compare US to any other government. I assume you understand the difference between absolute and relative value judgements?
I think it's unlikely there will be a confident enough solution to nuclear MAD within 5-10 years, which would be a requirement for being bold enough to blitzkrieg... even with AI drone armies.
Full Spectrum Dominance; war by air, land, sea, RF, nuclear, and through subversion of media, political processes, food supplies, pharma, medicine, ecology, culture, religion, economies, manipulation of demographics...
This is the reality of power struggle (all means are in the game), but this new level of reach, spread and deep integration has been within reach and in active development for at least 200 years, and it's just going to get worse. It's enabled by bureaucracy, "one voter, one vote" democracy in the face of a mature media-manipulation ("mind control") industry, capitalism and other trappings of civilization. ML is just going to end up automating the doctrine of FSD.
I think I'm moving out of the city to learn to live off the land, be independent and raise a family. I think it may be very interesting to see how I can use modern technology in that context, while making sure I don't absolutely depend on it.
Anyone else think it's quite possible there will be a lot of new wars in Western countries in 20-40 years, when the tipping point of replacement migration is reached?
> ML is just going to end up automating the doctrine of FSD.
That's it in a nutshell. The ultimate black box to bow down to. We already do this in so many ways -- just following orders, just repeating what we hear, and so on -- and people think we'll become less spineless and murderous as this wet dream of sociopathy is introduced? Yeah, right. Some list biased training data as a pitfall; from the "correct" perspective it is absolutely a feature. That is the point.
> Anyone else think it's quite possible there will be a lot of new wars in Western countries in 20-40 years
In real life? I can't think of anyone I can have normal, intelligent conversations with who doesn't express worries that get flagged on HN. And I'm not talking leftist young people, I'm talking everybody, from 20 to 85.