Take it from someone in the business of exploiting race conditions for money: that’s about as average as you can get. Additionally, whatever Azure considers “traditional” methods may be bare-bones, poorly optimized automated code reviews, given the egregious issues they’ve had in the past.
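For anyone unfamiliar with why these bugs are bread and butter for exploitation: the classic case is an unsynchronized read-modify-write, exactly the kind of interleaving a model checker (or an attacker) enumerates. A minimal Python sketch, not taken from the post, just to illustrate the class of bug:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        # Without this lock, 'counter += 1' is a read-modify-write
        # that can interleave across threads, silently losing updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; drop the lock and it can come up short
```

The point being: the buggy interleaving only bites under contention, which is why it survives ordinary review and testing, and why deriving a spec and model-checking the state space finds it.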
As a side note: LLMs by definition do not demonstrate “understanding” of anything.
Quoting https://buttondown.com/hillelwayne/archive/ai-is-a-gamechang... about https://zfhuang99.github.io/github%20copilot/formal%20verifi... "In the post, Cheng Huang claims that Azure successfully used LLMs to examine an existing codebase, derive a TLA+ spec, and find a production bug in that spec." This is not the behavior of the "average" anything.