I don't think the goal is for them to "decide what's good for the world". You can classify the disruptiveness/risk of a piece of tech fairly objectively.
Delaying release gives others (most clearly social media) time to adjust and ensure safety within their own platforms/institutions (of which they are the arbiters). It also gives researchers and entrepreneurs a strong motivation of "we have to solve these risk points before this technology starts being used". While there are clearly incentive issues and gatekeeping in the research/startup community, this is a form of decentralized decision-making.
I don't see a strong case for why the tech should be open-sourced at announcement time, especially if it's reproducible. The real issues will arise when models cost billions of dollars to train, putting reproduction out of reach for 99.99% of labs/users. At that point, OpenAI will have sole ownership of and discretion over their tech, which is an extremely dangerous world. GPT-3 is the first omen of this.