Not yet, because the reliability isn't there. You still need to validate everything it does.
E.g., I had it autocompleting a set of 20 variables today, something like output.blah = tostring(input[blah]). The kind of work you'd give to a regex.
In the middle of the list, it decided to emit output.blah = some long weird piece of code, completely unexpected and syntactically invalid.
I am still in my AI evaluation phase, and sometimes I am impressed with what it does. But an unexpected total failure is just as likely. As long as it does that, I can't trust it.
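For what it's worth, that kind of repetitive mapping can be generated deterministically with a few lines of scripting instead of autocomplete. A minimal sketch in Python (the field names here are hypothetical, since the comment doesn't give the real ones):

```python
# Hypothetical field names; the actual 20 variables aren't listed above.
fields = ["name", "email", "status"]

# Emit one "output.X = tostring(input['X'])" line per field,
# deterministically, with no chance of an off-pattern surprise.
lines = [f"output.{f} = tostring(input[{f!r}])" for f in fields]
print("\n".join(lines))
```

Same result every run, which is exactly the property the autocomplete failed to provide mid-list.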
People don't want to hear it, but you see fewer and fewer job offers, and not only for junior positions.
The hard truth is that, as with any tool or automation, the more performance improves, the fewer people are needed for this kind of work.
Just look at how some parts of manual labor were made redundant.
Why people think it won't be the same with mental work is beyond me.