
So how will this intelligent thing come about?


Once we have AIs as smart as humans, they can do AI research as well as or better than human researchers. And they can make AIs that are even better, which in turn can make even better AIs, and so on.

Dumb evolution was able to create human-level intelligence with just random mutations and natural selection. Surely human engineers can do better. But in the worst case, we could reverse engineer the human brain.


> Once we have AIs as smart as humans

Whether a true 'generalist' AI is possible in the foreseeable future is debatable.

> they can do AI research as well as or better than human researchers.

Now your AI is not just a 'generalist' but rather a specialist in AI. A very big leap of faith has occurred here. This also presumes that the AIs are even capable of effective invention and improvisation rather than mimicry and optimization (the only two features we have seen from the very best of cutting-edge ML work so far.)

> And they can make AIs that are even better, which in turn can make even better AIs, and so on.

All of which is predicated on the limiting factor being software and not hardware. If it is the latter, then these postulated AIs are hitting the same brick wall as humans, and thinking about the problem faster or harder does not magically make the necessary hardware appear.


>Whether a true 'generalist' AI is possible in the foreseeable future is debatable.

Sure, but see the survey I posted above. The rate of progress of AI is incredible. We will almost certainly be approaching human level in a few decades at most.

>Now your AI is not just a 'generalist' but rather a specialist in AI.

That's what general intelligence is. The ability to learn different specializations. AI researchers are not literally born as AI researchers and capable of nothing else.

> This also presumes that the AIs are even capable of effective invention and improvisation rather than mimicry and optimization

Why wouldn't they be, if they are generally intelligent and can do all the same tasks humans can? What's magical about invention that would prevent computers from ever doing it?

>All of which is predicated on the limiting factor being software and not hardware.

All the same arguments apply to hardware. Hardware has been improving exponentially for a much longer time than AI has. And I think hardware may already be close enough. Transistors are orders of magnitude faster and denser than biological synapses.
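The speed gap claimed above can be put in rough numbers. A back-of-envelope sketch (the figures are approximate, illustrative assumptions: a ~1 GHz effective switching rate for transistors, a ~100 Hz upper bound on neuron firing rates):

```python
# Back-of-envelope speed comparison, transistors vs. biological synapses.
# Figures are rough illustrative assumptions, not measurements.
transistor_switch_hz = 1e9  # modern logic switches on the order of GHz
synapse_fire_hz = 100       # neurons rarely fire faster than ~100 Hz

speed_ratio = transistor_switch_hz / synapse_fire_hz
print(f"speed ratio: ~{speed_ratio:.0e}x")  # roughly seven orders of magnitude
```

Even with generous error bars on both numbers, the ratio stays in the millions, which is the sense in which transistors are "orders of magnitude faster."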


I am familiar with the survey you presented and can only point out that if you had passed out the same survey back in the late 80s when I was in the field it would have had a similar result. The long-term estimates of people whose current paycheck depends on these long-term estimates being achievable are basically useless BS.

The rate of progress in AI is actually not "incredible" and is in line with the advances in hardware, which have made old research suddenly applicable to a wider range of problems. To someone on the outside looking in it may appear as though magic is happening, but the field is mostly progressing at an only marginally faster rate than it has over the previous few decades. What has changed significantly, and led to all of these "incredible" results you see, is the availability of larger data sets and improved hardware upon which to run massively parallel but weakly connected computations.

As for why AI can't invent or improvise, I am simply suggesting that so far we have only seen optimization, and that invention and similar feats may actually require more work than we realize. A statistical simulation (based upon a huge corpus) of how a human would respond to various situations is NOT general intelligence. So far you are just making hand-waving assumptions that paper over a large number of hard problems that no one has a clue how to solve.


As far as I know, there hasn't really been a quantum leap in AI in decades. Most of the fundamentals of the things you see in AI and machine learning are old, from the 70s and 80s. The big change is that we have far more and cheaper processing power, so you can see AI happening in fields and areas that were simply impossible in the past.

A machine that truly passes the Turing Test still seems to be more than a quantum leap away.


Who says we need a quantum leap? Most technologies progress by slow iterative improvement. The idea of sudden significant breakthroughs is mostly a myth. Even in biology, the human brain has only slight differences from other primate brains, which in turn aren't terribly different from those of other animals.

In any case, I think there has been tons of progress since the 80s. Taking the best algorithms from the 80s and running them on modern hardware would fail. The core idea of backpropagation and gradient descent was there, but not much else.
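The "core idea of backpropagation and gradient descent" mentioned above fits in a few lines. A minimal sketch (an illustrative one-parameter example, not any historical code): fit the model y = w·x to data generated by w = 2 by repeatedly stepping against the gradient of the squared error.

```python
# Minimal gradient descent: fit y = w * x to data where the true w is 2.
# Illustrative sketch of the core idea only.
w = 0.0    # initial parameter guess
lr = 0.1   # learning rate
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, target) pairs with target = 2x

for step in range(100):
    # Gradient of mean squared error wrt w: d/dw (w*x - t)^2 = 2*(w*x - t)*x
    grad = sum(2 * (w * x - t) * x for x, t in data) / len(data)
    w -= lr * grad  # step downhill against the gradient

print(round(w, 3))  # converges toward 2.0
```

What the 80s lacked wasn't this loop; it was the data, the hardware, and the architectural and training refinements needed to make it work at scale.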


I suspect the "quantum leap" itself will be yielded by a new method of parameter updates, perhaps one that expands the scope of neural networks as they are presently defined.



