IIRC, there was a paper which showed that GPT models pay most attention to the beginning and end of the context window, and much less to what's in the middle. In that regard, they behave a bit like human memory (primacy and recency effects). But I'm wondering whether these efforts to increase the context window actually make the models pay roughly uniform attention to the whole prompt?
You are correct. The paper is called "Lost in the Middle" [1], and the effect it describes is probably one of the worst drawbacks of this technology. It biases a lot of use cases (think of law, where whatever sits in the middle of a long document gets less weight).
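If you want to see the effect yourself, the usual probe is a needle-in-a-haystack test: bury one relevant sentence at different depths of a long filler prompt and check whether the model retrieves it. A minimal sketch below (not the paper's actual benchmark; it assumes the openai Python client, an OPENAI_API_KEY in the environment, and the model name is just an example):

    # Minimal "lost in the middle" probe (a sketch, not the paper's benchmark):
    # place one key fact at different relative depths in filler text and check
    # whether the model still retrieves it.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    FILLER = "Nothing of note happened on this day. " * 200
    NEEDLE = "The secret code word is 'aubergine'. "
    QUESTION = "\n\nWhat is the secret code word? Reply with the word only."

    def needle_retrieved(depth: float) -> bool:
        """Insert the needle at a relative depth (0.0 = start, 1.0 = end) and query."""
        cut = int(len(FILLER) * depth)
        prompt = FILLER[:cut] + NEEDLE + FILLER[cut:] + QUESTION
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # example model name
            messages=[{"role": "user", "content": prompt}],
        )
        return "aubergine" in resp.choices[0].message.content.lower()

    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"needle at {depth:.0%} of context -> retrieved: {needle_retrieved(depth)}")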
The research is slightly misleading... the models they experimented with all had an original pretrained context length significantly shorter than the fine-tuned context length they tested, e.g. they used MPT-30B-Instruct, which was pretrained at a 2k sequence length and then fine-tuned to 8k. A real test of whether current self-attention has this issue would be to natively train a model at the extended sequence length.
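For what it's worth, you can at least check what context length a released checkpoint is configured for from its Hugging Face config; the original pretraining length still has to come from the model card or paper, which is exactly the problem. Rough sketch, assuming the transformers package (the field name varies by architecture):

    # Check the context length a released checkpoint is configured for (a sketch,
    # assuming the `transformers` package). This only shows the final configured
    # value; the original pretraining length has to come from the model card.
    from transformers import AutoConfig

    for name in ("mosaicml/mpt-30b-instruct", "gpt2"):
        cfg = AutoConfig.from_pretrained(name, trust_remote_code=True)
        # MPT exposes `max_seq_len`; most architectures use `max_position_embeddings`.
        length = getattr(cfg, "max_seq_len", None) or getattr(cfg, "max_position_embeddings", None)
        print(f"{name}: configured max sequence length = {length}")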
I have seen this also, but attention is far better in GPT-4: it seems to follow a system prompt (say, for outputting JSON) consistently, compared to GPT-3.5. I have also found that GPT-3.5 only follows such a system prompt for about 2 successive outputs; you have to re-send that prompt with every single request. So I think increasing the context window alone may not make it follow the system prompt uniformly.
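The workaround that has been reliable for me is just re-attaching the system prompt on every call instead of assuming it keeps being honored across turns. A rough sketch with the openai Python client (the model name, prompt, and helper are just examples):

    # Workaround sketch: re-send the system prompt on every request rather than
    # relying on the model to keep honoring it across turns. Assumes the `openai`
    # package and an OPENAI_API_KEY in the environment.
    import json
    from openai import OpenAI

    client = OpenAI()
    SYSTEM = 'Respond ONLY with a JSON object of the form {"answer": <string>}.'

    def ask(question: str, history: list) -> dict:
        # The system message goes first on *every* call; `history` holds only
        # the prior user/assistant turns.
        messages = ([{"role": "system", "content": SYSTEM}]
                    + history
                    + [{"role": "user", "content": question}])
        resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        content = resp.choices[0].message.content
        history += [{"role": "user", "content": question},
                    {"role": "assistant", "content": content}]
        return json.loads(content)  # raises if the model drifted away from JSON

    history = []
    print(ask("What is the capital of France?", history))
    print(ask("And of Japan?", history))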