To be clear, I'm not against AI or LLMs as a technology in general. What I'm against is the unethical way these LLMs are trained, and how dismissive people are of the damage they're doing, saying "we're doing something amazing, we need no permission."
Also, I'm very aware that there are many smaller models in production that run in real time with negligible power and memory requirements (e.g., the human/animal detection models in mirrorless cameras, especially from Sony and Fuji).
However, to be honest, I haven't seen the same kind of research on LLMs yet. If you have any, please share it; I'd be glad to read it.
Lastly, I'm aware that AI covers more than object detection, NLP, and the like. You can build very useful and lightweight AI systems for many problems, but the way LLMs are pumped up by that unstoppable hype machine bothers me a lot.