There's a lot of internal contradiction in his message. On the one hand, he wants to "have a maniacal sense of urgency"; on the other, he thinks it's "critical to study and advance AI safety... to bring about AI that’s safe and beneficial to humanity". If you're building something you believe is so powerful that you fear for the safety of humanity, why are you building it at "blistering velocity"? Shouldn't you take a more careful approach, study the impact of what you're building, and only release it once you know it's safe? How do you do that while under constant pressure to move faster than everyone else?