
Even if the LLM hallucinates every word, just knowing when to say something versus stay quiet based on EEG data would be a huge breakthrough.



If that's all they were doing - showing when the patient wanted to speak - that would be fine. Presenting the output as that patient's own speech, though? That feels irresponsible without solid evidence, or at least without telling the families that the interface may be hallucinating the whole thing. Imagine someone talking to an LLM they think is their loved one, all while that person has to watch.


You’ll get no argument from me there. The whole LLM part seems like a gimmick unless it’s doing error correction on a messy data stream, like a drunk person fat-fingering a question into ChatGPT, except with an EEG instead of a keyboard. It might be a really fancy autocorrect.
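
Roughly what I mean, as a toy sketch: the mini-vocabulary and the edit-distance matching here are made-up stand-ins for whatever the real language model would actually do, but the shape is the same - the decoder stays noisy and the "autocorrect" just snaps it onto something plausible.

    from difflib import get_close_matches

    # Hypothetical mini-vocabulary; a real system would use a language model here.
    VOCAB = ["water", "pain", "yes", "no", "thank", "you", "help", "cold"]

    def autocorrect(noisy_tokens, vocab=VOCAB, cutoff=0.6):
        """Snap each noisily decoded token to its closest vocabulary word, or drop it."""
        corrected = []
        for tok in noisy_tokens:
            match = get_close_matches(tok.lower(), vocab, n=1, cutoff=cutoff)
            if match:  # keep only tokens we can confidently repair
                corrected.append(match[0])
        return " ".join(corrected)

    # A garbled decode such as ["watr", "plese", "cld"] comes out as "water cold";
    # tokens that can't be confidently matched are simply dropped.
    print(autocorrect(["watr", "plese", "cld"]))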

I’m just saying that EEG data is so unreliable and requires so much calibration/training per person that reliably isolating speech in a paralyzed patient would be a significant development.


Definitely. It even seems like the wrong tool, or at least not the right first one. Surely you need some sort of big classifier from EEG patterns to words or thoughts/topics; then, if an LLM is used at all, its job would be 'clean up this nonsense into coherent sentences, keeping the spirit of the ideas or topics that are mentioned'?
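
Something like this hypothetical two-stage sketch: the feature extraction, label set, confidence threshold, and training data are all placeholders, and the LLM step is left as a plain prompt string so no particular API is assumed.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    LABELS = ["water", "pain", "family", "rest", "yes", "no"]  # hypothetical topic set

    # Placeholder training data: rows = trials, columns = EEG band-power features.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(300, 64))
    y_train = rng.integers(0, len(LABELS), size=300)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    def decode_window(features, threshold=0.3):
        """Stage 1: map one EEG window to a (label, confidence) pair, or stay quiet."""
        probs = clf.predict_proba(features.reshape(1, -1))[0]
        best = int(np.argmax(probs))
        return (LABELS[best], float(probs[best])) if probs[best] >= threshold else None

    def build_prompt(decoded):
        """Stage 2: hand only the confident labels to the LLM as a rewrite instruction."""
        words = ", ".join(f"{w} ({c:.2f})" for w, c in decoded)
        return ("Rewrite this decoded word list as one short sentence, keeping only "
                "the ideas actually present; answer 'unclear' if it is not coherent: "
                + words)

    windows = (decode_window(rng.normal(size=64)) for _ in range(10))
    decoded = [d for d in windows if d is not None]
    print(build_prompt(decoded))

With the placeholder random data almost nothing clears the confidence threshold, which is sort of the point: the classifier stage decides when to stay quiet, and the LLM only ever rewrites labels it was actually handed.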



