1. I believe AI is detrimental. It makes us go too fast. It's all about production now, pure efficiency over individuals.
2. AI is too dangerous. Innovations developed for benign applications will eventually be turned to more dangerous ones, such as advanced genetic engineering and military use.
3. AI uses too much energy. It shows no respect for the limited resources we have.
4. AI is an apex technology amongst technologies designed to further enrich the elite and strengthen the power structure.
5. AI will also be used to replace workers outright, at a speed much faster than previous waves of automation, and I don't agree with that. The new jobs it has created are demeaning, such as "AI Prompt Engineer".
6. AI is one step closer to technology creating autonomous technology, and that's a bad thing.
Society needs to slow down and find alternative, more sustainable solutions. AI is aligned with short-term economic efficiency and that is detrimental.
I strongly agree with your points 1, 3, 4 and 5, and I would add another one:
7. This idea of "AI", and how it is expected to be used, is detrimental to human intellectual development, particularly for younger generations, and the presumption that AI will solve everything may actually bring us closer to the world of Idiocracy.
I agree with that. I think AI may not make us dumber in every way, but it will certainly make us dumber when it comes to plotting out independent, large-scale solutions. We will be as dependent on AI for certain kinds of decision-making as we are on treatment plants to clean our polluted water sources.
There are a couple of sentences in "Dune" about this:
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
We turn our money over to investments, hoping this will set us free.
That's a new thing in the world, ordinary people investing their savings: 401(k)s, retirement funds, mortgages, index-linked accounts. It's not many hundreds of years old, yet people advise it as if it were as solid as the mountains. And they work for 50 years watching the numbers go up, for the carrot of freedom at the end.
I am currently riding the ChatGPT wave, writing applications at three times the speed I could before (where "before" may mean "never" for some technologies), and I am happy for everyone contributing to this.
But all your points are well grounded; I will have to figure out a way to think about them while keeping my day job.
ChatBLT and Copilot break every license of every repo they were trained on. Even the most liberal project license states you have to include, and not modify, the license file. So you’re glad for code thieves. Interesting…
So, to give you the benefit of the doubt that you’re not just another dude who has hitched his financial wagon to this current AI slopfest: I just retried a question about writing OSSEC rules, and the response, while convincing-looking, was completely wrong. Again.
I don't begrudge you for trying to keep your job. I myself do things in my own job that I consider questionable. I guess it's all something we should think about.
Faster is not necessarily better, and if 2/3 of your value comes from LLMs, that doesn’t bode well for job security.
There’s a lot that engineers can do that is well beyond the limits of LLMs. If you really want to keep your day job, I would really commit yourself to that gap when you can!
The time freed here gives me more time to spend on what actually brings value.
My primary job is not to write applications.
And if it was, I would not include "the process of editing lines of code" in my job description.
I am not afraid of being fired, but at the same time there is no discussion in my workplace about the ethics of using AI, or about whether ethics is a good reason not to use it.
- AI output is taken as an ultimate source of truth despite frequently, and dangerously, getting details wrong. Fact-checking is abdicated as a personal responsibility, while the products are simultaneously marketed and designed for people who are weak at, or otherwise indifferent to, critical thinking. (This is similar to social media products telling their users to "consume responsibly" while being designed to be as addictive as possible.)
- AI is expensive. Microsoft, Google and Meta are the only companies that can afford to train these models. I don't feel comfortable allowing these companies to be the ultimate arbiters of truth behind the scenes.
The AI proponents who support this have no serious solution to the mass displacement of jobs that AI will cause. They don't mention any alternative solutions, and instead scream about nonsense such as UBI, which has never worked at a large, sustainable scale.
> Society needs to slow down and find alternative, more sustainable solutions. AI is aligned with short-term economic efficiency and that is detrimental.
I don't think they can come up with sensible alternatives, or any sustainable solution for the displaced jobs, because there is no alternative.
Because the rich and powerful people who will reap the most benefit from all the automation will not redistribute the wealth to the now-useless ex-labor force.
> AI is an apex technology amongst technologies designed to further enrich the elite and strengthen the power structure.
This one I somewhat agree with. Ideally these technologies are owned by nobody.
Though it does give me hope to see Facebook, of all companies, leading the charge on open-sourcing AI. The fact that their business model incentivizes them to do this is good (lucky) for everyone, whatever your other opinions of the company.
AI is a technology that, in principle, makes it possible to have a "Star Trek communism" society.
I agree with you that it can also be abused to make the existing state of affairs even worse. But if we resist technical progress on this basis, we'll never get better, either.
I hold the same opinion as the person you replied to. My argument is basically that I don’t support plagiarism, and the current generation of LLMs and diffusion models has been trained on a massive corpus of copyrighted material, ignoring the fundamentals of the Berne Convention.
I belong to an internet community whose artists make up a third of its population. They are mostly hostile to generative art and never gave their consent for their art to be plagiarized, yet their art styles end up on Civitai and their content shows up on haveibeentrained.com.
Personally, my hostility towards generative art would stop if training were opt-in, and I would use GitHub again if it, AT LEAST, allowed members to opt OUT.
GP doesn’t actually need arguments for their position. As the agents of change, the AI companies are the ones who need to argue why they should be allowed to train on others’ code, and in GP’s case they have clearly failed to meet that burden of proof.
You are welcome. I probably am not nearly as good as you. It was just a few programs I made in my spare time like some games and a Sudoku solver. I am sure if I were elevated to your coding level, my departure from GitHub would be a true loss. I hope one day I can reach a tenth of your level.