I thought I'd heard that the initial compromise was social engineering to gain access to a tech person's account? All four of your points stand, but social engineering may be making a comeback too.
I'm personally waiting for the first public reports of criminal gangs using ChatGPT to automate spear phishing at scale.
I'm sure it is happening already, but the victims are keeping it quiet. The use case is too obvious: identify key people in an organization from the company website, LinkedIn, etc. Find them on social media. Find their less technical friends. Compromise the friends' accounts in various ways. Then send targeted phishing attacks to your real target, with every step of this automated by LLMs.