> And AI is being used to help because this method is lossy?
AI is the method. They put somebody in a brain scanner and flash images on a screen in front of them. Then they train a neural network on the correlations between their brain activity and the known images.
To test it, they display images the network hasn't seen and have it predict the image from the brain activity alone.
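A minimal sketch of that train/test procedure, with simulated data standing in for real fMRI recordings (the voxel responses are assumed to be a noisy linear function of each image's latent vector; the ridge decoder is an illustrative stand-in, not any specific paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 800, 200
n_voxels, n_latent = 1000, 64

# Hidden "ground truth" mapping from image latents to voxel activity.
true_map = rng.normal(size=(n_latent, n_voxels))

latents = rng.normal(size=(n_train + n_test, n_latent))  # image latent vectors
activity = latents @ true_map + rng.normal(scale=5.0,
                                           size=(n_train + n_test, n_voxels))

X_train, X_test = activity[:n_train], activity[n_train:]
Y_train, Y_test = latents[:n_train], latents[n_train:]

# Ridge regression decoder: brain activity -> image latent.
lam = 10.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_voxels),
                    X_train.T @ Y_train)

pred = X_test @ W
# Per-image cosine similarity between predicted and true latents.
cos = np.sum(pred * Y_test, axis=1) / (
    np.linalg.norm(pred, axis=1) * np.linalg.norm(Y_test, axis=1))
print(f"mean cosine similarity on held-out images: {cos.mean():.2f}")
```

The point is the split: the decoder is fit on the "known" images and judged only on images it never saw.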
> Then they train a neural network on the correlations between their brain activity and the known images.
Not onto the known images themselves, but onto the latent spaces of existing image networks. The recognition network gets a very approximate representation, maps it onto those latent spaces (which may or may not be equivalent), and then the image network fills in the blanks.
With single-subject training and well-framed images like these, the results are predictably good. If you showed something unexpected, like a teddy bear with blue skin, the network would probably just show you a normal-ish teddy bear. It also goes screwy when it doesn't have a well-correlated input, which is how you get those weird distortions. And it will be far off for anything that requires precision, like the actual outlines of an object, because the network is inventing all that detail from nothing.
At least the stuff using a Utah array (a square implanted electrode array) is not transferable between subjects, and the fMRI stuff might not be transferable either. These models can't see enough detail to know what is actually happening: they get glimpses of a small section of the process (Utah array) or very vague, indirect signals (fMRI). They're all heavily overfit.
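The non-transferability is easy to illustrate with the same simulated setup: give each "subject" a different voxel-to-feature wiring, fit a linear decoder on subject A, and apply it to subject B. (Hypothetical toy data, not any lab's actual pipeline.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_voxels, n_latent = 800, 1000, 64

latents = rng.normal(size=(n, n_latent))
map_a = rng.normal(size=(n_latent, n_voxels))  # subject A's "wiring"
map_b = rng.normal(size=(n_latent, n_voxels))  # subject B's differs

act_a = latents @ map_a + rng.normal(scale=5.0, size=(n, n_voxels))
act_b = latents @ map_b + rng.normal(scale=5.0, size=(n, n_voxels))

# Ridge decoder fit on subject A only.
lam = 10.0
W = np.linalg.solve(act_a.T @ act_a + lam * np.eye(n_voxels),
                    act_a.T @ latents)

def mean_cos(pred, true):
    """Mean per-image cosine similarity between predicted and true latents."""
    return float(np.mean(np.sum(pred * true, axis=1) /
                         (np.linalg.norm(pred, axis=1) *
                          np.linalg.norm(true, axis=1))))

same = mean_cos(act_a @ W, latents)   # decoder on the subject it was fit on
cross = mean_cos(act_b @ W, latents)  # same decoder on a different subject
print(f"same-subject: {same:.2f}, cross-subject: {cross:.2f}")
```

The same-subject score here is also inflated because it's measured on the training data, which is exactly the overfitting problem: the decoder learns one person's idiosyncratic mapping, and on anyone else it's near chance.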