There were already teams building ML-based code completion, code suggestions, and code repair before LLMs blew up a couple of years ago. So the principle of it isn't driven by AI hype.
Yes, there are oodles of people complaining about AI overuse and there is a massive diversity of opinion about these tools being used for coding, testing, LSCs, etc. I've seen every opinion from "this is absolute garbage" to "this is utter magic" and everything in between. I personally think that the AI suggestions in code review are pretty uniformly awful and a lot of people disable that feature. The team that owns the feature tracks metrics on disabling rates. I also have found the AI code completion while actually writing code to be pretty good.
I also think that the "% of characters written by AI" is a pretty bad metric to chase (and I'm stunned it is so high). Plenty of people, including fairly senior people, have expressed concern with this metric. I also know that relevant teams are tracking other stuff like rollback rates to establish metrics around quality.
There is definitely pressure to use AI as much as reasonably possible, and I think that at the VP and SVP level it is getting unreasonable. But at the director level and below, I've found that people are largely reasonable about where to deploy AI, where to experiment with AI, and where to tell AI to fuck off.