I think that's supposed to be the idea behind the reasoning functionality, but in practice it just seems to let responses run longer than they otherwise would, by splitting the output into a warm-up pass and then using what we'd loosely call cached tokens to help with further contextual lookups.
That is to say, you can obtain the same process by talking to "non-reasoning" models.
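Roughly what I mean by "the same process", as a sketch (assuming the OpenAI Python client; the model name and prompts are just placeholders I picked, not anything specific):

```python
from openai import OpenAI

client = OpenAI()

question = "How many weekdays are there between March 3 and March 28?"

# Pass 1: have the model "warm up" by writing out its working.
reasoning = client.chat.completions.create(
    model="gpt-4o-mini",  # any non-reasoning chat model
    messages=[
        {"role": "user",
         "content": f"Think through this step by step, but don't give a final answer yet:\n{question}"},
    ],
).choices[0].message.content

# Pass 2: feed that working back as context and ask for the actual answer.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": reasoning},
        {"role": "user", "content": "Now give just the final answer."},
    ],
).choices[0].message.content

print(answer)
```

The first pass plays the role of the hidden reasoning stage; the second pass just reads that text back as context, which is all I'm suggesting the reasoning models are really doing.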
Of course, then you ask her to write it out and of course things get fixed. But it's strange.