I wonder if you could specialise a model by training it on a whole movie or TV series, so that instead of hallucinating from generic images, the model generates things it has seen closer-up in other parts of the movie.
You'd have to train it to map reduced-resolution frames back to the original resolution, then apply that model to small tiles of the full-resolution frame to get an enhanced version of each tile, then stitch the tiles back together.
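The tile-enhance-stitch step could be sketched roughly like this. Everything here is hypothetical: `enhance_tile` just stands in for the trained model (it does nearest-neighbour upscaling, where a real model would fill in detail it learned from the rest of the movie), and the tile size and scale factor are arbitrary choices:

```python
import numpy as np

SCALE = 2   # enhancement factor the model was trained for
TILE = 32   # tile size (in source pixels) fed to the model

def enhance_tile(tile):
    """Stand-in for the trained model: maps a TILE x TILE patch to a
    (TILE*SCALE) x (TILE*SCALE) patch. Here it's just nearest-neighbour
    upscaling; the real model would reconstruct detail it has seen
    closer-up elsewhere in the movie."""
    return np.repeat(np.repeat(tile, SCALE, axis=0), SCALE, axis=1)

def enhance_frame(frame):
    """Split the frame into tiles, enhance each one, stitch the results."""
    h, w = frame.shape[:2]
    assert h % TILE == 0 and w % TILE == 0, "pad the frame to a tile multiple first"
    out = np.zeros((h * SCALE, w * SCALE) + frame.shape[2:], dtype=frame.dtype)
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            out[y*SCALE:(y+TILE)*SCALE, x*SCALE:(x+TILE)*SCALE] = \
                enhance_tile(frame[y:y+TILE, x:x+TILE])
    return out

frame = np.arange(64 * 96, dtype=np.uint8).reshape(64, 96)
big = enhance_frame(frame)
print(big.shape)  # (128, 192)
```

In practice you'd probably want overlapping tiles with blended seams rather than this hard grid, since independent tiles can produce visible edges where they meet.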