"The funny thing about AI is that it’s a moving target. In the seventies, someone might ask “what are the goals of AI?” And you might say, “Oh, we want a computer who can beat a chess master, or who can understand actual language speech, or who can search a whole database very quickly.” We do all that now, like face recognition. All these things that we thought were AI, we can do them. But once you do them, you don’t think of them as AI. It has this connotation of some mysterious magical component to it, but when you actually solve one of these problems, you don’t solve it using magic, you solve it using clever mathematics. It’s no longer magical. It becomes science, and then you don’t think of it as AI anymore. It’s amazing how you can speak into your phone and ask for the nearest Thai restaurant, and it will find it. This would have been called AI, but we don’t think about it like that anymore. So I think, almost by definition, we will never have AI because we’ll never achieve the goals of AI or cease to be caught up with it."
If you want to feel happy again, read "Robot" or "Mind Children" by Hans Moravec. He has some pretty good arguments based on human vision that support an AGI arriving around 2030 to 2040, assuming Moore's Law (the general law, not the specific one) holds.