> As long as we don’t have ranked choice voting ... we will continue to vote in the servants of the billionaire class.
I don't think RCV would do much to change that. In order to be elected, you need to be seen, so you need a sizeable media presence. The billionaire class controls enough of the media (traditional, social and "independent") that the people will keep voting for their servants under pretty much any voting system, bar a few exceptions here and there. It's a fundamental issue of electoral democracy, not of the voting system.
One potential alternative would be to switch to non-electoral democracy, e.g. drawing representatives at random rather than electing them, but that's even less likely to happen, and it may end up having different problems. At least it'd suppress all the circus around elections and all that party nonsense, so there's that.
I have that worry as well, but it may not be as bad as I feared. I am currently developing a Python serialization/deserialization library based on advanced multiple dispatch, so it is fairly different from how existing libraries work. Nonetheless, if I ask LLMs (using Cursor) to write new functionality or plugins within my framework, they are surprisingly adept at it, even with limited guidance. I expect it'll only get better in the next few years. Perhaps a set of AI directives and examples for new technologies would suffice.
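For context, here is a minimal sketch of what dispatch-based serialization can look like, using only the standard library's single-dispatch decorator (my library uses full multiple dispatch, and all names below are illustrative, not its actual API):

    from dataclasses import dataclass, fields
    from functools import singledispatch

    @singledispatch
    def serialize(value):
        raise TypeError(f"no serializer registered for {type(value).__name__}")

    @serialize.register
    def _(value: int):
        return value

    @serialize.register
    def _(value: str):
        return value

    @serialize.register
    def _(value: list):
        return [serialize(item) for item in value]

    @dataclass
    class Point:
        x: int
        y: int

    # A "plugin" is just another registration against the same generic function.
    @serialize.register
    def _(value: Point):
        return {f.name: serialize(getattr(value, f.name)) for f in fields(value)}

    print(serialize([Point(1, 2), "tag"]))  # [{'x': 1, 'y': 2}, 'tag']

That open registry of overloads is what makes it easy for an LLM to add a handler for a new type without touching the core.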
In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.
A while back I made my own alternative to s-exprs that works with Racket. I have no idea if it still works, but I still think it looks nice, and at a glance I feel it's "purer" than shrubbery: http://breuleux.net/blog/liso.html
> Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them.
I think it's a good point, but I would argue it's even more direct than that. Humans themselves can't reliably predict what they are going to do before they do it. That's because any knowledge we have is part of our deliberative decision-making process, so whenever we think we will do X, there is always a possibility that we will use that knowledge to change our mind. In general, you can't feed a machine's output back into its input and expect consistency, except for a very limited class of functions with fixed points, and we aren't in that class.
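A toy version of that diagonalization, as a sketch (it assumes nothing beyond the argument itself):

    def agent(prediction: bool) -> bool:
        # Hear a forecast of your own action, then use that knowledge
        # to change your mind.
        return not prediction

    # A forecast is only reliable if it's a fixed point: agent(p) == p.
    # For this decision rule, no such p exists:
    for p in (True, False):
        print(f"predicted {p}, did {agent(p)}")  # always the opposite

Any predictor whose output is visible to the agent it predicts runs into this.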
So the bottom line is that seen from the inside, our self-model is a necessarily nondeterministic machine. We are epistemically uncertain about our own actions, for good reason, and yet we know that we cause them. This forms the basis of our intuition of free will, but we can't tell this epistemic uncertainty apart from metaphysical uncertainty, hence all the debate about whether free will is "real" or an "illusion". I'd say it's a bit of both: a real thing that we misinterpret.
You are right about the internal model, but I wouldn't dismiss the view from the outside.
I.e. I wouldn't expect humans without free will to be able to predict themselves very well, either. Exactly as you suggest: having a fixed point (or not) doesn't mean you have free will.
The issue I have with the view from the outside is that it risks leading to a rather anthropomorphic notion of free will, if the criterion boils down to the idea that an entity can only have free will if we can't predict its behavior.
I'm tempted to say an entity has free will if it a) has a self-model, b) uses this self-model as a kind of internal homunculus to evaluate decision options and c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information). It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
I don't understand why a self-model would be necessary for free will?
> [...] c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information).
I don't think humans reach that threshold. Though it depends a lot on how you define things.
But as far as I can tell, most of my second-to-second decisions are very much coloured by the fact that we have gravity and an atmosphere at comfortable temperatures (external factors), and if you changed that all of a sudden, I would decide and behave very differently.
> It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
Your homunculus is one hell of a complexity threshold.
> However, it is not AI that is ultimately deified and worshipped, but humanity itself---which, in this way, becomes enslaved to its own work.
Doesn't that describe all religion? I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings? The hubris! It seems obvious to me that the gods of all religions are designed by human minds to be receptive to human interests, otherwise nobody would bother worshipping them. In other words, we have always been worshipping ourselves. At least there is reason to think that AI could, in theory, be what we expect God to be.
You seem to have many misconceptions about what Catholics actually believe. And then you seem to take exception to these misconceptions. So your exceptions are only with beliefs that exist in your own mind.
It's not really a misconception; this was Feuerbach's, and also Nietzsche's and Stirner's, criticism of Christianity. It projects human attributes onto an ostensibly divine subject, "othering" and worshipping them, in reality just attempting to sanctify humanity (in Stirner's words, creating Mensch (human/mankind) with a capital M). This is incredibly obvious in the psychology underpinning a lot of Christian beliefs: the Manichaean good-and-evil worldview, the meek inheriting the earth, the day of judgement, equality, immortality (i.e. trying to escape death), and so on.
It is historically important to note that Nietzsche and Stirner, at least, were reacting to Protestantism as expressed in "modern" Germany.
I'm not trying to make a "No True Christian" argument but rather just want to assert that reform does happen both for good and ill. Luther's original reform, in part, was to point out that political concerns within the church were overriding the spiritual concerns of the laity. He wanted to refocus faith on a personal relationship with God. One major criticism of that refocus is that it caused individuals to become over-focused on the self instead of God (as embodied in the institution of the church).
In both cases you could argue that the principal problem is when the focus of faith is something in the world (either the church or the individual). So I think it is perhaps too far to say that "we have always been worshipping ourselves" when the criticisms both within and without the church point that out as the very problem that triggers reform.
That is, both Luther and Stirner can be correct in their criticism of religious institutions. There is more than one way to get it wrong.
It's funny to see the Vatican reusing the Feuerbach thesis about humanity creating the idea of God and then becoming slaves of that idea to talk about AI, as they are the gatekeepers of the original Artificial Idea called God :)
But in this text we can also feel the ideas of the human soul and free will crumbling, ideas that are also at the core of secular humanism.
Marxist analysis is also challenged: we can speculate that AI would send the organic composition of capital through the roof... but can you really talk about the OCC of singularity AIs that resemble Aladdin's lamp or the Green Lantern ring more than a highly automated factory, without even mentioning the possibility of agency of their own?
> I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings?
Yes.
> The hubris! It seems obvious to me
I would turn that around and claim hubris on your part. You seem to think that your mind and the mind of God are similar, and that limitations you perceive are limitations for God.
> You seem to think that your mind and the mind of God are similar,
How come? You think I'm saying that the infinite creator of the universe is unlikely to care about the fate or well-being of humans because... I wouldn't if I was him? I mean, I would. Because I have a human mind. But if there are indeed no similarities between God's mind and my own, well, anything goes, doesn't it? Him caring is just one small possibility out of trillions of alternatives.
> and limitations you perceive are limitations for God.
What limitations? I haven't listed any limitations. Neither a God who cares nor a God who doesn't care is limited. I just don't see why I would assign a particularly significant probability to the former case. It sure would be convenient, but I feel like God being moral in any way that I can relate to would inevitably be projection on my part.
> I just don't see why I would assign a particularly significant probability to the former case.
"And [Jesus] came to Nazareth, where he was brought up: and he went into the synagogue, according to his custom, on the sabbath day: and he rose up to read. And the book of Isaias the prophet was delivered unto him. And as he unfolded the book, he found the place where it was written:
'The spirit of the Lord is upon me. Wherefore he hath anointed me to preach the gospel to the poor, he hath sent me to heal the contrite of heart, To preach deliverance to the captives and sight to the blind, to set at liberty them that are bruised, to preach the acceptable year of the Lord and the day of reward.' And when he had folded the book, he restored it to the minister and sat down. And the eyes of all in the synagogue were fixed on him. And he began to say to them: 'This day is fulfilled this scripture in your ears.'"
> Doesn't that describe all religion? I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings?
I'm a Christian, and I absolutely agree with you that this is absurd! And if God hadn't said it Himself and then proved it true by His actions (both historically, and even in my own life), I'd be right there with you to call it idolatry.
For what it's worth, however, the quoted argument does also feel somewhat hubristic to me: As I see it, it boils down to "I don't understand how God could be this way, and therefore He cannot be this way." I believe that, somewhat ironically, He is beyond our understanding even when it comes to knowing what it means for Him to be beyond our understanding.
> I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings?
Do you care about the functioning of every cell in your body? Ask any cancer patient if they do.
> It seems obvious to me that the gods of all religions are designed by human minds to be receptive to human interests, otherwise nobody would bother worshipping them
Nah, that's just what atheists convince themselves. There's nothing obvious or truthful about this conclusion or the line of reasoning behind it.
All arguments for and against the existence of God are inherently unfalsifiable, but that doesn't mean atheism is inherently more logical than theism.
In fact, from my point of view, the existence of God is way more logically sound than the alternative, and atheists are the ones following delusions and worshipping their own egos.
> All arguments for and against the existence of God are inherently unfalsifiable, but that doesn't mean atheism is inherently more logical than theism.
I'm guessing you're one of those people who thinks atheism means a belief in the absence of a god, rather than its actual meaning, which is an absence of a belief in a god.
"Writers disagree on how best to define and classify atheism, contesting what supernatural entities are considered gods, whether atheism is a philosophical position or merely the absence of one, and whether it requires a conscious, explicit rejection; however, the norm is to define atheism in terms of an explicit stance against theism." (emphasis mine)
There's no need for us to argue against the existence of God or other ludicrous hypotheticals, that's the whole point of Russell's Teapot.
As to the particulars of the imagined God, actually we do have some evidence for the parameters. The Princess Alice experiments in particular illustrate one desirable property: God (in the experiment, "Princess Alice") should provide behavioural oversight. An imaginary being can deliver effective oversight which would otherwise require advanced technology, but to do so the being must also believe in these arbitrary moral rules.
And that matches what we observe. People do buy Sithrak T-shirts, but, more or less without exception they don't actually worship Sithrak, whereas loads of people have worshipped various deities with locally reasonable seeming moral codes and do to this day.
I wasn't making an atheistic argument. I'm saying that if God exists and is the infinite creator of everything, it's suspiciously convenient that he also happens to be interested in human affairs. Why does theism have to go hand in hand with the belief that God loves us? The former may have philosophical merit. The latter, which makes up the bulk of religious belief, is what I am saying is made up. We can certainly assign moral value to our own lives, but to assert that God just so happens to assign equivalent moral value to us is what I view as hubris.
Rich people currently have little trouble controlling people who are much smarter and more capable than they are. Controlling resources and capital goes a long way and it isn't a given that AGI would transcend that dynamic.
If we can be confident of that, then most of the worst problems with AI are already solved.
Part of the problem is that "do what I said without question" will lead to disasters, but "figure out what I would approve of after seeing the result and be creative in your interpretation of my orders accordingly" has different ways it can go wrong.
(IMO, RLHF is the latter).
Both of those seem to be safer than "maximise my reward function", which is what people were worried about a decade ago, and with decent evidence given the limits of AI at the time.
> If we can be confident of that, then most of the worst problems with AI are already solved
Which leaves unprecedented power in the hands of the most psychopathic[0] part of the population. So even if AI takeoff doesn't happen, we're still getting the boot on our necks.
> Roughly 4% to as high as 12% of CEOs exhibit psychopathic traits, according to some expert estimates, many times more than the 1% rate found in the general population and more in line with the 15% rate found in prisons.
On the plus side, this is still a small minority.
On the down side, these remind me a lot of Musk:
> CEO who worked with several pregnant women told people that he had impregnated his colleagues.
By way of Neuralink.
> CFO thought his CEO had a split personality, until he realized that he was simply playing different characters based on what he needed from his audience.
"Will my rocket explode?"-Musk is a lot more cautious and grounded than everything-else-Musk — including other aspects of work on SpaceX.
> Autocratic CEO fired a well-respected engineer “just to make a statement.” He fired anyone who challenged him, explaining there was no reason to second-guess him because he was always right and needed people to execute his vision rather than challenge it.
Basically all of Twitter, plus some other anecdotes from Starlink, SpaceX, Tesla.
And, this month, fighting with Asmongold about cheating in Path of Exile 2, before admitting to what he was accused of but trying to pretend it's fine rather than "cheating".
> CEO would show up to work and begin yelling at an employee (usually someone in sales) for no obvious reason.
The guy he called a pedo for daring to say a submarine wasn't useful for a cave rescue, the Brazilian judiciary, members of the British cabinet, …
But it looks to me like there's a decent number amongst the other nine who know what grenades are and don't want them to get thrown by the tenth.
The power dynamics here could be just about anything; I don't know how to begin to forecast the risk distribution, but I definitely agree that what you fear is plausible.
It's possible that the other 9 would keep the 10th under control, but if you look at the direction the US has taken, when two billionaires took over and declared inclusion verboten, the others rolled over and updated their policies to fall in line.
In the past, human workers were displaced. The value of their labour for certain tasks became lower than what automation could achieve, but they could still find other things to do to earn a living. What people are worrying about here is what happens when the value of human labour drops to zero, full stop. If AI becomes better than us at everything, then we will do nothing, we will earn nothing, and we will have nothing that isn't gifted to us. We will have no bargaining power, so we just have to hope the rich and powerful will like us enough to share.
If anything like that had actually happened in the past, you might have a point. When it comes to what happens when the value of human labor drops to zero, my guess is every bit as good as yours.
I say it will be a Good Thing. "Work" is what you call whatever you're doing when you'd rather be doing something else.
The value of our labour is what enables us to acquire things and property, with which we can live and do stuff. If your labour is valueless because robots can do anything you can do better, how do you get any of the possessions you require in order to do that something else you'd rather be doing? Capitalism won't just give them to you. If you do not own land, physical resources or robots, and you can't work, how do you get food? Charity? I'd argue there will need to be a pretty comprehensive redistribution scheme for the people at large to benefit.
What we see throughout history is that the cost of human labour goes up and the cost of machines goes down.
Suppose you want to have your car washed. Hiring someone to do that will most likely be the better option: fewer physical resources used (soap, water, wear on cloths), less wear and tear on the car's surface, less pollution, and possibly a better result.
Still, the benefit/cost equation is clearly in favor of the machine when you do the math, even though it uses more resources in the process.
What is lacking in our capitalist economic system is that hiring people to perform services is punished with much higher taxes compared to using a machine, which is often even tax deductible. That way, the machine brings benefits mainly to its user (often a wealthier person), much less to society as a whole. If only someone could find a solution to this tragedy.
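To make that concrete with entirely made-up numbers (the figures below are illustrative assumptions, not actual tax rates):

    wage = 15.00          # what the worker receives per wash (assumed)
    payroll_tax = 0.40    # employer-side taxes and contributions (assumed)
    labour_cost = wage * (1 + payroll_tax)    # 21.00 to the buyer

    machine = 8.00        # water, soap, energy, depreciation per wash (assumed)
    deduction = 0.25      # fraction recovered as a business deduction (assumed)
    machine_cost = machine * (1 - deduction)  # 6.00 to the buyer

    print(labour_cost, machine_cost)  # -> 21.0 6.0

Under assumptions like these, the machine wins on price even if the human wash were better and used fewer resources.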
Forgetting the offhand implication that $6,000 is not out of reach for anyone, this will do nothing. If we're really taking this to its natural conclusion, that AI will be capable of doing most jobs, companies won't care that you have an AI. They will not assign you work that can be done with AI. They have their own AI. You will not compete with any of them, and even if you find a novel way to use it that gives you the gift of income, that won't be possible for even a small fraction of the population to replicate.
You can keep shoehorning lazy political slurs into everything you post, but the reality is going to hit the working class, not privileged programmers casually dumping 6 grand so they can build their CRUD app faster.
But you're essentially arguing for Marxism in every other post on this thread, whether you realize it or not.
Yeah, there's always some reason why you can't do something, I guess... or why The Man is always keeping you down, even after putting capabilities into your hands that were previously the exclusive province of mythology.
I prefer not to use -ists and -isms. I read that Marx wrote that he was not a Marxist. Surely his studies and literature got used as a frame of reference for a rather wide set of ideologies. Maybe someone with a deeper background on the topic can chime in with ideas?
What value do you bring to the venture, though? What makes your venture more likely to succeed than anybody else's, if the barrier is that low? I mean, I'll tell you: if anyone can spend $100 to design the same new gadget, the winner is going to be whoever can spend a million in production (to get economy of scale) and marketing. Currently, financial capital needs your brain, so you can leverage that. But if they can use a brain in the cloud instead, they're going to do just that. Sure, you can use it and design anything you can imagine, but nobody is going to pay you for it unless you, yourself, bring some irreplaceable value to the table.
Since everyone has AI, it stands to reason that humans still make the difference. That is why I don't think companies will be able to automate software dev too much: they would be cutting the one advantage they could have over their competition.
Humans will make the difference if they can do things that the AI cannot. The more capable the AI gets, however, the fewer humans will meet that threshold, and they are the ones who will lose out. Capital, on the other hand, will always make a difference.