This is a false dichotomy. Clearly, getting money and control are the main objectives here, and we're all operating over a distribution of possible outcomes.
I don't think so. If Altman is prepping for an exit (which I think he is), I have a very hard time imagining a world in which he also sincerely believes his company is about to achieve AGI. An exit only makes sense if OpenAI is currently at or near its peak valuation, not if it is genuinely likely to be the first to reach AGI, which would give it nearly unbounded value.