There is a comment in the article about having models watch TV and play video games, and he's talked about that before in his Lex Fridman interview too. His approach seems to be to take existing model architectures, apply some tweaks and experimental ideas, and then train on datasets consisting of TV (self-supervised learning, maybe?) and classic video games (RL, I guess?).
The video game part, at least, sounds like what DeepMind is already doing. I guess we'll just have to wait and see what he plans to do differently.
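To make the "RL on classic video games" idea concrete: here is a minimal sketch of tabular Q-learning, with a toy one-dimensional corridor standing in for an actual game (the environment, reward, and all hyperparameters here are illustrative choices, not anything Carmack or DeepMind has described):

```python
import random

# Toy stand-in for a "classic video game": a 1-D corridor where the
# agent starts at cell 0 and earns reward 1 for reaching the last cell.
N_STATES = 5
ACTIONS = [-1, +1]  # move left / move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the corridor."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in range(len(ACTIONS))}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore with probability epsilon, otherwise act greedily.
            if rng.random() < epsilon:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
            next_state, reward, done = step(state, ACTIONS[a])
            best_next = max(q[(next_state, i)] for i in range(len(ACTIONS)))
            # Standard Q-learning update toward the bootstrapped target.
            q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
            state = next_state
    return q

q = q_learning()
# After training, the greedy policy is expected to prefer "right" (index 1)
# in every non-terminal state.
policy = [max(range(len(ACTIONS)), key=lambda i: q[(s, i)])
          for s in range(N_STATES - 1)]
print(policy)
```

A real system would of course replace the dictionary of Q-values with a neural network and the corridor with game frames, which is roughly the jump from this sketch to DeepMind's Atari work.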
It seems to me like his expertise would be most valuable in optimizing model architectures for hardware capabilities, improving utilization and training efficiency. That will be important for AGI, especially as the cost of training models skyrockets (in both time and money). If I were a startup doing AI hardware, like Cerebras or Graphcore, I would definitely try to hire Carmack to help with my software stack. Though he doesn't seem interested in custom AI hardware.