I think the sentence in the article is fair. They're right that projects aimed at AGI failed; everything you mention is used for narrow AIs that tackle particular tasks.
Also, regarding search in gameplaying, I would argue the opposite: the trend is that breaking into bigger and more difficult domains has required abandoning search. Tree search has been limited to small games like board games or Atari. In more open-ended games we see model-free (i.e. no search) approaches; e.g. AlphaStar and OpenAI Five, the AIs for StarCraft II and Dota 2, were both model-free. So was VPT (https://openai.com/research/vpt) by OpenAI, which tackled Minecraft. Even in board games, DeepNash (https://www.deepmind.com/blog/mastering-stratego-the-classic...), a 2022 DeepMind project similar in scale to MuZero/AlphaGo, had to abandon tree search because of the size of the game and the challenges of applying tree search to hidden-information domains.