
You need a very low false positive rate for something to be useful in screening for unlikely events. We can't compute the false-positive rate here, though, because we don't know how many people the dogs "scanned"—only how many positives they gave.

Take an example: assume you have a detection technology with a 1% false-positive rate. That is, if you scan 100 people (none carrying bombs), you get 1 positive (which is false, of course). Ignore for a moment any false negatives (people with bombs who are missed); assume the false-negative rate is 0.

Now, what is the chance of a random person you're screening having a bomb? It's very low. Consider that there are nearly a billion air passengers per year in the US, and almost no bombers. Let's say one in 200 million passengers has a bomb.

So, you screen 1 billion passengers, you get 5 true positives. You also get 1% false positives—that is, 10 million.

So, you unjustly searched (as you put it) 10 million people to catch 5 (not 5 million, just 5). And it's not cheap anymore, either; you're spending a lot of time (and thus money) on those 10 million useless searches.
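The arithmetic above can be sketched in a few lines. All the numbers here (1 billion passengers, 1-in-200-million base rate, 1% false-positive rate, zero false negatives) are the illustrative assumptions from the example, not real figures:

```python
# Base-rate arithmetic for the screening example (illustrative numbers only).
passengers = 1_000_000_000       # ~1 billion US air passengers per year
bomb_rate = 1 / 200_000_000      # assumed: 1 bomber per 200 million passengers
false_positive_rate = 0.01       # assumed: 1% false-positive rate
false_negative_rate = 0.0        # assumed: no bombers are missed

# True positives: actual bombers who are flagged.
true_positives = passengers * bomb_rate * (1 - false_negative_rate)
# False positives: innocent passengers who are flagged anyway.
false_positives = passengers * (1 - bomb_rate) * false_positive_rate

print(f"true positives:  {true_positives:.0f}")     # 5
print(f"false positives: {false_positives:,.0f}")   # ~10,000,000

# Chance that any given positive is a real bomber (positive predictive value):
ppv = true_positives / (true_positives + false_positives)
print(f"P(bomb | positive) = {ppv:.1e}")
```

Even with a perfect 0% false-negative rate, a positive from this detector means roughly a 1-in-2-million chance the person actually has a bomb, which is the point about needing an extremely low false-positive rate when the base rate is tiny.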


