Both super-alignment leads resigned. One way to interpret this is that it's an interesting research topic with repercussions that widen the closer they get to AGI, but it's more of an extra cost that doesn't directly help with monetisation and productisation. The real killer applications are in business verticals with domain-specific expertise and customised (and proprietary) datasets, not so much 'pure' academic research.
OpenAI dedicated 20% of compute to the effort, which sounds kind of like Google's 20% side-project time :)
It looks like "super-alignment" was an effort to automate monitoring, and the leads' job was to recruit AI researchers who wanted not to build new things but to find problems.
But there's really zero glory or profit in doing QA, much as users complain about quality issues.
So perhaps they succeeded and built a team; or perhaps they found they couldn't recruit a decent one; or perhaps they failed to produce anything useful (since the goal of not offending anyone, not embarrassing the company, and not doing harm is difficult to make concrete).
Regardless, I suspect there's no good answer yet to how to manage quality, in part because there's no good answer to how these things work. That problem will likely remain long after people have forgotten about Ilya, Jan, or even Sam.
Though it might be solved by rethinking the idea that "attention is all you need".