This is generally my impression as well. It's interesting that right now there is still a ton of novel "big" and "small" data sets that the algorithms use to further refine the models. But a greater and greater share of that output (the median) will become the new input going forward. For example, right now some small percentage of the art, images, etc. on the web is AI generated, resubmitted by users, and eventually fed back into new AI image generators; that percentage will keep growing. Where that semi-closed feedback loop ends up is anyone's guess.
As one example: if you trained a model on what human faces look like exclusively from photos taken in the '80s and '90s, and compared it to a model trained on pictures from the 2010s and '20s, you might come away thinking that humans have somehow become aesthetically more "beautiful" over that time frame, if you didn't account for the use of filters (which at this point are practically baked in).
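The feedback-loop worry above can be sketched with a toy simulation (my own illustration, not any real training pipeline): each "generation" retrains on a mix of real data and the previous model's output, with the AI-generated share growing over time, and the model's output clustering toward its learned median.

```python
import random
import statistics

# Toy sketch: "real" human data is a wide distribution; each generation the
# training set mixes in a growing share of the previous model's output, and
# that output is pulled toward the model's mean (clustering at the median).
# All numbers here (shrink factor, growth rate) are arbitrary assumptions.
random.seed(0)

N = 10_000
real_data = [random.gauss(0, 1) for _ in range(N)]

# Initial "model" is fit on purely real data.
model_mean = statistics.mean(real_data)
model_std = statistics.stdev(real_data)

stds = []
for generation in range(10):
    # Assumed: AI-generated share of the training mix grows each generation.
    ai_share = min(0.1 * (generation + 1), 0.9)
    n_ai = int(N * ai_share)

    # The model regenerates data from what it learned last generation,
    # but its output clusters around its mean (reduced variance).
    ai_data = [
        model_mean + 0.5 * (random.gauss(model_mean, model_std) - model_mean)
        for _ in range(n_ai)
    ]

    training = ai_data + real_data[: N - n_ai]
    model_mean = statistics.mean(training)
    model_std = statistics.stdev(training)
    stds.append(model_std)

print(f"gen 1 std: {stds[0]:.3f}, gen 10 std: {stds[-1]:.3f}")
```

Run this and the standard deviation shrinks generation over generation: the model's picture of the world narrows toward its own median output, which is the semi-closed loop in a nutshell.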