I spent a long time trying to do some sort of machine learning over the OpenBCI helmet's data with the eventual goal of moving a cursor, but I didn't seem to get anywhere. The data was indeed _extremely_ noisy, and I don't believe my model ever converged to anything useful.
That said, I was just a high schooler, so my method of collecting training data was to run the script and "think really hard about moving left". The whole pipeline probably could have been a good deal more sophisticated.
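For the curious, the general shape of that kind of pipeline looks something like the sketch below: collect labeled windows of signal while "thinking" a direction, reduce each window to a feature, and fit the simplest possible classifier. Everything here is made up for illustration (fake signal, a toy feature, a nearest-centroid classifier), not what OpenBCI or any real BCI stack actually does:

```python
import random
random.seed(0)

def fake_eeg_window(direction):
    # Pretend "left" thoughts nudge the signal mean up slightly.
    # Real EEG is far messier than this toy generator.
    bias = 0.5 if direction == "left" else -0.5
    return [random.gauss(bias, 2.0) for _ in range(250)]  # very noisy!

def feature(window):
    # Signed mean amplitude as a stand-in for real band-power features.
    return sum(window) / len(window)

# "Training data collection": run the script, think really hard about a direction.
train = [(feature(fake_eeg_window(d)), d)
         for d in ("left", "right") for _ in range(200)]

# Nearest-centroid classifier: about as simple as a model gets.
centroids = {d: sum(f for f, lbl in train if lbl == d) /
                sum(1 for _, lbl in train if lbl == d)
             for d in ("left", "right")}

def predict(window):
    f = feature(window)
    return min(centroids, key=lambda d: abs(f - centroids[d]))
```

Even this toy version only works because the fake signal has a built-in bias; with real scalp EEG the class-conditional distributions overlap heavily, which is a big part of why the model never converged.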
If it's any consolation for young you, this is a really hard problem, even with electrodes implanted in the brain. There's a great episode of Lex Fridman's podcast with the team from Neuralink, and they go into depth on how, even with good neuron signals, there's still a lot of work on the software side between the raw data and "moving a cursor". The first recipient of their implant still has to do a 10-30 minute calibration run every morning to be able to move a cursor reliably on the screen. So all in all, don't beat yourself up: it's a really hard problem even with good data.