Count me among the skeptics. The big problem I see is that there is no way to verify whether any AI output is correct. It is already very hard to prove that a program is correct; proving that for AI output is several levels more difficult, and even if it were possible, the cost would be so high as to make it not worth it.
I am personally somewhere in between. Language models do let me do things I wouldn't have the patience to do otherwise (yesterday ChatGPT was actually helpful with hunting down a bug it generated :P). I think there is some real value here, but I do worry it will not be captured properly.