Does anyone know how to handle the depression/doom one feels with these updates?
Yes, it's a great technical achievement, but I just worry for the future. We don't have good social safety nets, and we aren't close to UBI. It's difficult for me to see that happen unless something drastic changes.
I'm also afraid of one company just having so much power. How does anyone compete?
>Does anyone know how to handle the depression/doom one feels with these updates?
Realize that it's a choice to respond to things this way. This feeling comes from a certain set of assumptions and learned responses.
Remember that people are bad at predicting the future. Look at the historical track record of people predicting the implications of technological advancement. You'll find that almost nobody gets it right. Granted, that sometimes means that things are worse than we expect, but there are also many cases where things turn out better than we expect. If you're prone to focusing on potential negatives, maybe you can consciously balance that out by forcing yourself to imagine potential positives as well.
Try to focus on things you personally have control over. Why worry about something that you can't change? Focus on problems that you can contribute to solving.
I've been personally affected by technology advancements, and had to spend lots of time and effort recovering professionally from it. Mind you, I'm not saying it cannot be done, but those that do get affected have to work harder than those that don't.
It's easy to say "don't worry" if you haven't been affected by events like this. I feel it's stronger for society to say "I don't know what will happen, but we'll work through it together."
It is easy to wax philosophical when it doesn't affect you directly. There are folks who have been honing their VFX skills for close to a decade, and they will be impacted in a significant way.
It's not waxing philosophical, it's concrete advice for how to handle negative emotions associated with uncertainty and instability. The way things are going, it's very likely I'll be affected directly by these developments at some point. When that time comes, I'm not sure what will be better advice than focusing on the problems that I'm personally able to solve and looking at the potential upsides of the situation.
I think the issue with “don’t worry about things you can’t control” is, in this tech forum, not as valid as you might think.
We are building this technology; suggesting we have no agency is helpful in avoiding any feeling of responsibility or guilt, perhaps rendering your comment within the realm of waxing philosophical.
Who better to worry about this than the people of hacker news?
From a pure mental health standpoint, sure, it’s solid advice but I think it’s narrowed the context of the broader concern too much.
An alternative to learned helplessness of “nothing you can do” is to encourage technologists to do the opposite.
Instead of forgetting about it, trying to put it out of your mind, fight for the future you want. Join others in that effort. That’s the reason society has hope — not the people shrugging as people fall by the wayside.
Depression mediation by agency feels more positive, but I don’t have a lot of experience tbh. Just a view that we, technologists, shouldn’t abdicate responsibility nor encourage others to do so.
That culture, imo, is why a large section of tech workers, consumers and commentators see the industry in a bad light. They’re not wrong.
EDIT: to add, “what problems can I personally solve” also individualises society’s ability to shape itself for the better. “What problems can I personally get involved in solving”, “what communities are trying to solve problems I care about” is perhaps the message I’d advocate for.
I think the point is to start considering a backup plan and then... hakuna matata.
Cat's out of the bag. There is no legislation that will stop this. Not unless/until it has some obscene cost and AI gets locked down like nuclear weapons. But even then, it's just too simple to make these things now that the tech is known.
I sure don't know the answer but we just don't know what's coming next. Gonna have to wait and see.
Sure, I would include a broad set of things under "what you can control", including joining an organization, donating, voting, etc. The OP is excessively worrying about things they truly can't control, like the long-term political implications of emerging technologies.
HN is in the perfect position to wax philosophical; this behemoth is coming for tech too. I've started plotting out what will happen if I have to use my hands to make a living and I'd really rather not be doing that.
But reality is as reality is, and nobody is owed a desk job. These are very exciting times for what type of society could be built with this tech; human inefficiencies are responsible for a lot of suffering that we might be able to stamp out soon.
As someone who has suffered immensely from anxiety disorders and worrying / anger my whole life, this comment is wisdom right here.
Now in my thirties, and almost nothing I worried about has come true. I mean, tomorrow we might get wiped out by a runaway technological singularity, but I could've spent the last 30 years of my life worrying a lot less too.
I know that I should stop worrying as I have no control over what might happen but I can't stop worrying. What was the key for you that helped you let things go?
I went to cognitive behavioral therapy; for me it was like someone opened up my mind and showed it to me on a screen, a mirror into my head. It was amazing how it felt like I could rewire my thought patterns over the course of a few months.
The main takeaway from it all however, was the mantra: _thoughts are not facts_.
If you can realize that your thoughts are not objective truths, you will be much better off in almost every aspect of your life, because after living this mantra for many years, putting it to the test constantly, I know it's solid.
Later on I read a lot of Buddhist philosophy, which matched incredibly well with the therapy, because a lot of Buddhist thinking and meditation practice is quite similar in its approach. This sort of reinforced the validity of the CBT, because I realized wise people have known about seeing things in an objective light for millennia, which was validating for me and helped me continue on the introspective path.
Basically, we're all hallucinating in one way or another, almost all of the time, and that is ok, just be aware of that. When we're worried about the future, we're worried about something which doesn't yet exist, which is actually crazy.
Of course it doesn't mean we should just ignore long term problems, no one advocates for that. But we shouldn't assume we know the outcome in advance because that often causes stress.
Warning: I think that for most westerners, it's "safer" to get into something like CBT. Buddhism comes with some, IMO, very confronting ideas for a lot of people, whereas CBT is much more user friendly for westerners.
The implications of technological advancement are always the same: if it can be used to replace people at a satisfactory level, it will. Appealing to stoicism is nice, but it's a bittersweet salve in this situation.
Honestly though, it's not as if technological advancement has been overall worse for humans. Without it, we'd be fighting lions for food in the savannah forever. That might be appealing to some, but I'd prefer to have spears, fire, shelter, medicine, etc.
Industrial-scale technology might ruin us though, so you might have some point. Mostly I'm referring to climate change, which is surely the greatest existential threat right now. However, it seems technology might bail us out here too, with nuclear and renewables.
I have no issue with technological advancement, it's obviously one of the pinnacles of human achievement- I have an issue with how those advancements are spread about and shared, especially shortly after large technological advancements happen.
We undoubtedly have reaped immense benefits from the industrial revolution for example- that doesn't mean I'd have any interest in living through it or that it was executed in a way that prioritized the people who lived during those times.
Open source stuff is great, and I support it and have contributed to projects myself, but people bandy it about as if it's a silver bullet, and I have my reservations there. The issue goes way beyond technology itself; it's structural/sociological/cultural, and that's not going to be fixed just because there are open source alternatives.
It doesn't stop that, but would you prefer a world where you're unemployed and locked out of the technology, or unemployed, and have access to the technology so you can learn and use it for free to maybe get back in the game?
There must be a fallacy name for the "more of a good thing is always a good thing" line of reasoning. Almost every good out there is good in a certain range; outside of that range it becomes detrimental, possibly deadly. There is even a Swedish word for it: https://en.wikipedia.org/wiki/Lagom. A few examples:
material:
- water: too little => thirst, too much => drown
- heat: too little => freeze, too much => burn
- food: too little => starvation, too much => obesity
spiritual:
- courage: too little => cowardice, too much => foolhardiness
- diligence: too little => slothfulness, too much => workaholism
- respect: too little => disregard, too much => idolatry
I understand your sentiment entirely, but it's not what I said. I didn't say abundance is everything we should strive for; I said that having more efficient systems is good.
> Realize that it's a choice to respond to things this way.
Why do people always say this / think that saying this is helpful? Try saying to someone with ADHD, "realize that you are choosing not to get your chores done today. You're choosing not to get out of bed on time. You're choosing to show behavior that your peers describe as 'lazy'. This will keep happening as long as you let it!"
So what if you have the ability to choose whether you are depressed or not? Not everyone got the same choice. Not everyone still has that choice.
I don't really expect another solution, but this always kind of bothers me when I see people saying everything is a choice.
With neurodivergence and mental disorders, what you see as "choice" can end up not being a choice at all.
At a physical level, we don't have control over anything, it's all just subatomic particles bumping into each other. That doesn't mean all perspectives are equally helpful for solving problems and functioning in the world. I mostly agree with your points, but where we might disagree is whether it's useful to have certain psychological categories or disorders become part of one's identity.
> where we might disagree is whether it's useful to have certain psychological categories or disorders become part of one's identity.
You might read my comment as trying to claim that my disorders define me and that because I have these disorders I can afford to give up on this stuff because 'it's hopeless'. Truth is I've been trying to get past this for damn near a decade at this point and it's not nearly as easy as you make it out to be, and that's why I say that I don't have the same choice you think I do.
I didn't even know I had ADHD until a year or so ago. I'd just routinely lose the ability to do the stuff I love and I'd have to go find something else to do instead. Depression would stem from all the things I knew I loved but that I could no longer motivate myself to do. In fact I was probably even worse off before I knew about this, because I thought that I was just doing something wrong, not being controlled by an invisible menace that most other people don't even know exists.
I don't mean to be hostile or to impose that it can't be as easy as you're describing. I just don't think that it's right to say it's always just a choice how you react.
I have tons of completely involuntary reactions, caused primarily by trauma, and I can't control them. They do things like force me literally out of consciousness with overwhelming guilt and/or sadness. That's not a choice. I didn't choose that. That's completely autonomous!
It is objectively a better survival strategy, in a complex enough society, to focus on unfair advantages and let society burn to the ground. The suckers are going to take care of it and eliminate themselves too, and in a sense there's nothing more important than improving your own short-term self-preservation. This is, of course, also kind of psychopathic.
>> What I posted is what I have personally found to be the most useful advice in overcoming self-destructive mental habits
I'm glad a one-time, one-line quip worked for you, but in my experience, positive mental habits are built over time, through support and continuous practice.
I apologize for over-responding, but let me attempt to be more clear:
If you respond to people's problems with common one-liners, it can be interpreted as belittling them. It could be read as an attempt to over-simplify, or to make them feel they are too "inferior" to see and solve their issues, when those issues are, to them, much larger than a random one-line quip.
The OP was asking for advice dealing with negative emotions. I gave what I consider to be the best advice for dealing with negative emotions. Just because something is a "one-liner" doesn't mean it isn't also a deep truth about human psychology. If you interpret what I wrote as belittling them or trying to make them feel inferior, all I can say is I disagree with you, because I know what my motives were in responding.
This is excellent advice. I will also add that with change and uncertainty, it’s difficult for us to imagine how banal things can ultimately turn out to be.
For example, I’m getting text messages all day long from random politicians asking for money. If you told people 50 years ago that one day we’d be carrying devices where we could be pinged with unwanted solicitations all day and night, they might have imagined an asphyxiating nightmare. But in reality, it’s mainly a nuisance.
The point is that your brain makes all kinds of emotional predictions about the future, but they aren’t really very useful and if you’re experiencing depression or anxiety, I can guarantee they are biased predictions.
The tens of thousands of people working in entertainment building other people's visions can now be their own writers, actors, and directors. And they'll find their own fans.
Studios will go away. Disney will no longer control Star Wars, because your kids will make it instead. In fact, the very notion of IP is about to drive to zero.
And OpenAI won't own this. They won't even let you do "off book" things, and that's a no-go for art. Open source is going to own this space.
There are other companies with results just as mature. They just didn't time a press release to go head to head with Gemini.
Sorry, but this is a 5th-grade take that everyone on tech-heavy forums loves.
Only some people can make Star Wars (the pinnacle of independent filmmaking if you read Lucas's biography). It has nothing to do with the tools.
IP in the arts is how artists get paid.
I can assure you that no one in the creative industry feels liberated by these tools. Do you realise that just because you are good at lighting, you don't necessarily want to be an actor and make a movie? No, you like being good at lighting, working with others who are good at what they do, and creating a great work of art together.
AI imagery only knows what already exists. It's tough to make it do innovative technical effects and great new lighting. "Oh my god, stock video sites are dead." Yes, exactly; stock, by definition, is commoditised.
What I see is tens of thousands of people in troll factories producing content for the 3/4 of the world population ready to believe whatever they see on TV.
I think a more likely scenario is that people will be so used to it that a lot of people are going to have trouble believing that real things are real. Conspiracy theorists already suffer from this and it's going to get so much worse.
I think in the initial years there'll be some major incidents where a fake thing gets major attention for a few days until it's debunked, but the much larger issue will be the inverse.
> Be excited! The tens of thousands of people working in entertainment building other people's visions can now be their own writers, actors, and directors. And they'll find their own fans.
It's terrible news for the people being replaced. Their training and decades of experience are their competitive advantage and livelihood. When that experience becomes irrelevant because anyone can create similar-quality work at the push of a button, they're suddenly left with nothing of value in a world flooded with competition.
Fully agree. I got a bit depressed in Nov '22 when ChatGPT and Midjourney dropped… and then realized Midjourney would let me create images I'd had in my head for years but could never get out. (At least, MJ gave a reasonable approximation.)
People should already be skeptical of everything they see/read on the internet. I don't think this is going to change my media consumption habits dramatically.
Dall-E was crazy and then suddenly people were doing the same thing on consumer hardware with an open model within a year.
Filmmakers being able to bring their vision to life using generative models is going to create such a huge expansion of the market.
What people don't realize is that long term these advances are a death knell for mega-corps, not for individuals.
Why do I need to kiss Weinstein's ass to get my movie made if I can do it with a shoestring budget and AI and have the same assistance to create marketing materials, etc. I need a lot less money to break even and can focus on niche markets aligned with my artistic vision instead of mass appeal to cover costs plus the middlemen involved in distribution and production.
Film/video editing isn't exactly known as the industry where everybody loves their job and doesn't want to kill themselves.
I made a twitter thread[1] with weird metal cybertrucks using Midjourney a couple days ago. I personally enjoyed the process and do not have the talent nor the time to do that without generative AI. There are people who do have that talent, but honestly I doubt anyone else would've put in the time.
I think you might have it a little backwards. For most people, the fun part is "making a movie", not "watching hundreds and hundreds of hours of footage picking between 10 different shots". That's the drudgery, and that's the part generative AI can eliminate.
I've made my living that way and absolutely loved it. What I did not love (and partly why I left the industry) was the difficulty of getting paid decently at the bottom tier; I had the bad timing to come in right as the bottom was beginning to fall out of the indie market and making straight-to-video b-movies 3 or 4 times a year ceased to be a viable business model.
> I think you might have it a little backwards. For most people, the fun part is "making a movie", not "watching hundreds and hundreds of hours of footage picking between 10 different shots". That's the drudgery, and that's the part generative AI can eliminate.
No, that's the craft, and solving problems where the continuity doesn't line up, or production had to drop shots, or the story as shot and written sucks in some way, is where the art comes in.
The drudgery is things like ingesting all the material, sorting it into bins, lining up slate cues, dealing with timecode errors, rendering schedules, working your way through long lists of deliverables and so on. You have literally confused the logistics part with the creative act.
I have not confused it, I'm simplifying to make a point. Yes, of course there are many people who love the art of editing, or taking the right shot, or acting, or directing, or special effects, or all of the 100s of things that go into making a movie or TV show or other video.
But many of those things involve a lot of drudgery, and the drudgery is what these "AI" solutions are best at. If you want to go above and beyond and craft the perfect shot, that opportunity would still be available to you. Why would it not?
When we invented machines that make clothes, did that reduce the number of jobs in the clothing industry? When we got better and better at it, did that make fashion worse? No. If you want a machine made suit for $50, you can find one. If you want a handmade suit for $5000, you can find one.
Tech like this expands opportunities, it does not eliminate them. If and when it gets to the point where Sora is better at making videos than a human in every conceivable dimension, then we can have this discussion and bemoan our loss. But we're not even close to that point.
I don't buy this simplification claim; you literally described the core skillset as drudgery. Put another way, what parts of film editing do you not consider drudgery? Could it be that you tried it previously and just didn't really like it?
And with your suit example, you're looking at it from the point of view of consumer choice (which is great) without really looking at the question of how people in the clothing/textile industry are affected. It's difficult to find longitudinal data at the global level, but we can look at the impact of previous innovations (from outsourcing to manufacturing technology) on the US clothing market; employment there has fallen by nearly 90% over 30 years: https://www.statista.com/statistics/242729/number-of-employe...
The usual response to observations like this is "well, who wants to work in the clothing industry; those people are now free to do other things; great opportunity for people in other parts of the world, etc.", but the constant drive to lower prices by cutting labor costs or quality has big negative externalities. Lots of people who used to make a living thanks to their skill with a sewing machine, at least in the US, are no longer able to monetize that and had to switch to something else; chances are they were less skilled at that other thing (or they'd have been doing it instead) and so suffered an economic loss while that transition was forced upon them.
The "someone must have lost out economically" argument rings fairly hollow when you actually look at the stats and see that the vast, vast majority of people end up better off economically when we develop technology and increase efficiency.
Luddism is never the answer.
Scratch that; luddism is the answer for people who don't actually care about humanity as a whole (but frequently pretend they do) and just want their hobby or their job or their neighborhood to stay the same and for everyone else to stop ruining things. But for the rest of the world, increasing technological efficiency means more people get more things for less. This is good actually.
This reduces filmmaking to only editing. Filmmakers won't be choosing between 10 different shots but instead between 10 different prompts and dozens of randomized outputs of those prompts, and then splicing them together to make the final output.
Prompts are just the starting point. Take image generation for example and the rise of ComfyUI and ControlNet, with complex node based workflows allowing for even more creative control. https://www.google.com/search?q=comfyui+workflows&tbm=isch
I see these AI models as lowering the barrier to entry, while giving more power to the users that choose to explore that direction.
All that amounts to just more complex ways of nudging the prompt, because that prompt is all an LLM can "comprehend." You still have no actual creative control, the black box is still doing everything. You didn't clear the barrier to entry, you just stole the valor of real artists.
So wrong. There are some great modern artists in the AI space now who are using advanced AI tools to advance their craft. Look at Eclectic Method before AI and look at how he's evolving artistically with AI.
Shadiversity made the same class of attribution error. AI users aren't evolving artistically, the software they are using to simulate art is improving over time. They are not creators, they are consumers.
Photographers have a great deal of creative control. Put the same camera in your hands versus a professional and you will get different results even with the same subject. You taking a snapshot in the woods are not Ansel Adams, nor are you taking a selfie Annie Leibovitz. The skill and artistic intent of the human being using the tool matters.
Meanwhile with AI, given the same model and inputs - including a prompt which may include the names of specific artists "in the style of x" - one can reproduce mathematically equivalent results, regardless of the person using it. If one can perfectly replicate the work by simply replicating the tools, then the human using the tool adds nothing of unique personal value to the end result. Even if one were to concede that AI generated content were art, it still wouldn't be the art of the user, it would be the art of the model.
It takes different skills depending on how deep you want to go. Try setting up your own video creation lab using Stable Diffusion to generate frames. It can make AI videos, but you also need a lot of Linux devops skills and Python skills.
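FWIW, once you have the generated frames on disk, the "make it a video" part of that pipeline is typically just an ffmpeg invocation. A minimal sketch of building that command in Python (assuming printf-style frame filenames and ffmpeg on your PATH; these flags are one common recipe, not the only one):

```python
def ffmpeg_stitch_cmd(frame_pattern: str, out_path: str, fps: int = 24) -> list:
    """Build an ffmpeg command that stitches numbered frames into an mp4.

    frame_pattern uses printf-style numbering, e.g. "frames/frame_%04d.png".
    """
    return [
        "ffmpeg",
        "-y",                    # overwrite output without asking
        "-framerate", str(fps),  # input frame rate
        "-i", frame_pattern,     # numbered input frames
        "-c:v", "libx264",       # widely supported H.264 encoder
        "-pix_fmt", "yuv420p",   # pixel format most players expect
        out_path,
    ]

# To actually run it (requires ffmpeg installed):
# import subprocess
# subprocess.run(ffmpeg_stitch_cmd("frames/frame_%04d.png", "out.mp4"), check=True)
```

The frame-generation side (Stable Diffusion via diffusers or ComfyUI) is where the GPU and Python-environment wrangling comes in; the stitching step above is the easy part.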
I did in fact make the twitter thread. The images I used in said thread were generated using midjourney, which I stated here and in the thread (which I made, by tweeting).
I appreciate you being straight up about it. I wasn’t trying to be harsh, and I apologize for not being clear. I find the terminology used when using ai to create things interesting. “I wrote this using X” versus the never used “I instructed X to write this for me”.
Are you honestly comparing taking a photograph (and "properly", i.e. thinking about lighting and composition and such, versus firing off a snapshot on your phone) with typing "Make me a picture of Trump riding a dragon"?
Are you genuinely equating the profound and labor-intensive process of painting, with its meticulous brushstrokes, profound understanding of lighting, composition, and the tactile relationship between artist and canvas, to the trivial button pressing of photography?
Disclaimer: This post was generated using an llm guided by a human who couldn't be bothered explaining why you're wrong.
This comparison doesn't work because when you talk about photography, you say you need to do it "properly", but you don't apply that same logic to prompt crafting. Typing "Make me a picture of Trump riding a dragon" is not "proper" use of generative AI.
I became a software engineer because I enjoy coding. If you told me software will now be written by simply describing it to a computer, I would quit because that sounds like a fucking terrible way to spend your life. I assume that video editing and post production is the same: a creative problem that is enjoyable to solve in itself. When you remove any difficulty or real work from the equation, you probably get a lot of bad, meaningless content and displaced people without marketable skills
It's not that long ago in human history that basically none of the jobs we do now existed. So it is kind of myopic to think that any current career is a calling. Art can become a craft again, not a career. There is nothing wrong with that.
The issue is that those jobs that got automated to "become a craft again" have mostly vanished, except for high-end stuff. Some examples: shoe making, artisan furniture, tailors, watchmakers. Unless you are the best of the best these are hobbies now not something you make money from.
Nowadays most people make money in bleak half-automated jobs (e.g. construction, factory workers) or in white collar jobs sitting in front of a computer in some cubicle doing some mind numbing task for a megacorp.
I'm usually hyped about technological advancement, but very bleak about AI. I think it will just bring more subtle propaganda for state actors and more subtle advertising for megacorps; the dying of creative jobs like graphic artists or actors is just a sad side effect. (These will still exist, but only at the high end. We will always have real AAA actors, but the days of extras on movie sets are numbered; a lot of the Hollywood protests were because studios started doing contracts for no-name actors stating that the studio would retain rights to the actor's digital likeness.)
When is a time in history when everyone had really great jobs? Before the industrial revolution, you had most people doing subsistence farming. During the industrial revolution, you had 14 hour a day exploited laborers working in factories. Maybe there was a brief period after World War II where you had a large middle class with stable careers and affordable housing. That's not the norm for the millions of years of history of human evolution.
To me, this reflects a perfectionist mindset. Life is better today for billions of people than it has been at any other point in the history of the human species. If you consider it a "bug" that we don't live in some sort of utopia where everyone's dreams are fulfilled, maybe you need to change your expectations and view things in a larger historical perspective.
It is perfectly possible to see that we live in the best time humanity has ever lived in and be concerned that we’re are at risk of regressing. Especially with people claiming that any regression is simply not viewing things in a larger historical perspective.
Nope. People are concerned. There have been a million times when people recklessly and blindly did things without carefully examining the consequences, leading to terrible results and human suffering. Some examples: DDT, the Iraq war, fast fashion, early use of radioactive materials as medicine, asbestos, etc.
> Some examples: shoe making, artisan furniture, tailors, watchmakers.
> Nowadays most people make money in bleak half-automated jobs (e.g. construction, factory workers) or in white collar jobs sitting in front of a computer in some cubicle doing some mind numbing task for a megacorp.
And all the while they enjoy abundance of shoes, furniture, clothes and watches with value/price ratio absurdly high by standards of most of human history.
Just wanna point out that making stuff is different from having stuff. Making your own shoe is much different from buying a Nike from the store (and I don't make shoes ;) ).
The craft is an activity, kind of an art by itself. Many find it enjoyable.
It's a luxury journey that most people around the world simply can't afford. The modern world is a marvel because it feeds and clothes them. If they had to pay market rate to the artisanal shoemaker, they would walk barefoot.
There's nothing "bleak" about building stuff with your hands. Many building trades workers like what they do. And they generally appreciate technology improvements because those tend to make the work safer and less physically demanding.
This sounds nice, but having worked with many artists in the past a lot of them do it because they're good at it, it's enjoyable enough, and it pays their bills so they can eat.
Telling them, "You're now free to make the art you really wanted to make!" doesn't bring much comfort when you're taking away their ability to put food on the table.
Exactly. There are a lot of armchair experts in the forum today who have no clue about the reality of the industry. People do it because they are passionate about it and devote their whole life to getting good at it; this is just taking food from their mouths.
It takes a lot of time to develop that craft, which won't be available to you if you have to do drudgery to keep a roof over your head. You're arguing for art to be at best a hobby, and full-time pursuit of it to be limited to rich kids.
Also I take issue with your argument about 'none of the jobs we do now' existing through most of history. Farming, construction, fighting, bookkeeping, cooking, transport, security are all jobs that have been around as long as people have lived in settlements.
Sure, you could point to the long history of nomadic hunting and gathering prior to that, but that's like expanding your argument back to the origin of cellular life or forward to the heat death of the universe in order to make your interlocutor's arguments look insignificant on a cosmic scale. It's not a helpful contribution to addressing the real challenges of the present.
There are also loads of artists who do web and graphic design, make videos for product demos, ad campaigns, and so on. It’s perhaps not the purest form of art, but it is one way in which artists can apply their craft and still put a roof over their heads. A lot of these AI tools seem squarely aimed at eliminating those positions.
For what it’s worth, I think we’re going to see a slide in quality. Maybe there will be a niche for some. But, I think companies will settle for 70% quality if it means eliminating 100% of a full-time position.
How long of a time are we talking about here? It was a lot easier to make a modest but steady living in the arts 30 or even 15 years ago. It's probably easier to have a breakout hit today on YouTube or TikTok and maybe make a lot of money fast, but not to make a living consistently without sweatshopping content or being extremely personally attractive or similar.
Also not that long ago electricity and clean drinking water weren't a thing. The fact that people can make a career as an artist now, and couldn't before, is something I'd consider an advancement! "Nothing wrong with that" is a conclusion that simply doesn't follow from the rest of your post.
Yes, much in the same way that hiring someone to cater a dinner party makes me a great chef.
(edit to give some body to my comment above:
Hosting a great dinner party is hard work and requires coordination between food, decor, seasonality, people attending, etc. It is akin to a director coordinating the parts of a film. So I do think hosting a good dinner party can count as artistic expression.
I don't know the parent comment's intended reading, but I was reacting to the idea that typing a Sora prompt makes someone a good artist. If the parent means instead that AI allows people to coordinate multiple media in a broader expression that was not possible otherwise, then I fully agree.
So everyone at your dinner party gets to eat "better" food? Unless the point of the party was for you to cook, it's an improved experience.
GenAI is a tool that lets creators of one medium expand to other mediums without much effort. Like having transcripts auto-generated for a visual podcast, just in the other direction. Low budget (or amateur) poems/songs can turn into short videos; or replace generic album art with better quality generic album art.
The draw will be the primary medium, the rest will just be an extra bonus.
It's the same discussion we had long ago when digital cameras came about and image editing became easy and commonplace. Yes, there is a lot of badly edited stuff around now. For example, most meme images on social media are made by putting new captions on old content, and maybe changing a few details about the rest of the image. No, photographers didn't become obsolete. They professionalized.
When cameras were harder to use, you had national geographic taking you all over the world to photograph different locations because only you and a handful of people knew how to take a picture properly.
Now you just hire a local person with a camera to go take the picture you want since it’s much easier to use a camera.
You had people doing photography for ads, now stock photography will do for most brands.
You had people buy high school portraits, I am not sure if people buy those anymore, but a picture of what you looked like in high school is worth a lot less when you can take a selfie every other day.
I'd argue that it allowed people that lacked the creative confidence to create original art, to now have the confidence to make generic art. I don't mean this in a deeply negative way. I just think that people's view of "good art" is so narrow.
AI allows mediocre people to make an endless stream of mediocre, dull, aseptic, sterile content. FTFY.
I don't see it as a negative per se, the thing is most people won't have the decency to keep all that shit for themselves and, say, share just the best 1% they produce. They will flood their social networks and the rest of us will have to sweep through the crap for our daily dose of internet memes.
My point is that I am an aspiring artist, who is waiting tables and I invest all my spare cash and time to get better at my craft and hopefully allow my craft to support me financially.
Any hope of financial benefit coming from my craft is quickly taken away by Dall-e. This has nothing to do with how much I enjoy my art.
There will always be things humans can do that AI cannot. And if there ever comes a point when that's not the case, there will be no need to distinguish the two.
What does “better” mean in this context? The camera was a better at capturing realism than any painter who ever lived. While we still have people who paint in that style, there aren’t nearly as many, and art took new shapes and forms.
Most “good” art isn’t just what you see, it’s also the story behind it. Why was it made? What is the story of the artist? What does it make you feel?
AI might allow more people to tell some of those stories they may have lacked the raw skills to tell before. And for those who have the skills, they can make exactly what they envision, without being limited by some of the randomness in the AI. I think there will always be a place for that, and at the top of the market, that’s what people want.
This use case is a direct threat to actors: when AI can create realistic footage with human and non-human subjects, and you add generated speech on top, you have totally replaced hiring actors and killed their employability.
Sorry, that's like claiming that the cinema has killed the theater, or that computer games have killed movies. Or that photorealistic 3D games have killed 2D slider games.
Blockbuster movies depend to a large extent on the pedigree and abilities of their cast. For the big studios, these models are therefore quite useless apart from bringing dead actors back to life. If publishing material created from living actors without their consent isn't illegal already, in a few years it will be.
This might actually save the movie industry and force it to improve the quality of its output. There will be a huge indie scene of movie makers using models that can only compete via the content of the movies they produce. The realism of the characters won't matter because everyone can have that now. The current big studios will be forced to make very good use of human actors to compete with them, though, and become innovative again.
Your analogy doesn't quite make sense. The reality of TV/film production is that most of what we watch is created by big production houses, not indie creators. These companies will do whatever it takes to reduce their costs, the biggest of which is salary for the hundreds of staff they currently employ.
Now with such AI tools, you can write scripts, create artwork, create footage, and record voiceovers and dialogue. All of this means less need for creative labor, which will not only cause huge unemployment in the sector but also lead to protests. It already happened last year in Hollywood, and it's going to get louder and louder unless we put regulations in place to prevent job disruption.
But those tools can also be used by the very employees that got laid off. They would become part of the indie scene. The film studios will be left with their trademark portfolio, which will be milked for profit. We might see an Avengers movie every month. There will be an absolute glut of such productions, to the point that people might not be interested in them anymore. Can't tell what happens next. We might lose ourselves in the holodeck, or we might again appreciate media produced with more human touch.
I like going to theater or opera. Even for famous pieces, the performance will be slightly different and unique every time. Imperfect, but with changing and nevertheless accomplished actors, singers, musicians, and dancers. Many people feel the same and that's why they watch live performances of singers, bands, and DJs.
Most likely the market will be consolidated by existing popular actors, who will add IP protections to their AI likenesses and benefit from them, especially after a certain age.
>We are giving the enjoyable parts of life to a computer. And we are left with the drudgery.
Yesterday I asked a local LLM to write a Python script to have several multimodal LLMs rank 50,000 images generated by a Stable Diffusion model. I then used those images to train a new checkpoint for the model, and can now repeat the process ad infinitum.
In the olden days of 2020 I would have had to hire 5000 people each working for a day to do the same.
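For what it's worth, the core of such a ranking script is tiny. A minimal sketch, with the judge functions as random stand-ins (a real version would have each judge call one of the local multimodal models and parse a numeric score from its reply; the model names and endpoints depend entirely on your setup):

```python
import random

random.seed(0)  # only so this sketch is reproducible

def rank_images(paths, judges, keep_fraction=0.01):
    """Score every image with each judge, average the scores,
    and keep the top-rated fraction for the next training round."""
    scored = []
    for p in paths:
        scores = [judge(p) for judge in judges]
        scored.append((sum(scores) / len(scores), p))
    scored.sort(reverse=True)  # best average score first
    cutoff = max(1, int(len(scored) * keep_fraction))
    return [p for _, p in scored[:cutoff]]

# Stand-in judges: a real script would have each of these query a
# local multimodal LLM about one image and extract a quality score.
judges = [lambda p: random.random() for _ in range(3)]
images = [f"img_{i:05d}.png" for i in range(50_000)]

top = rank_images(images, judges)  # the best-rated 1% of 50,000
```

The surviving filenames then feed the fine-tuning step that produces the new checkpoint.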
> We are giving the enjoyable parts of life to a computer
Are we though? People still do plenty of things out of interest or hobby, despite it being fully automatable?
e.g. blacksmithing or making certain homemade things?
While these are non digital things, why can't we apply the same thing here?
Some people still hand-write assembly out of the novelty and interest of it, despite there being better tools or arguably better ways of writing code.
There is no moat. This will all be commonplace for everyone soon, including with a rich open source community.
OpenAI won't let you do nudity or pop culture, but you can bet your uncle that models better than "Sora" will be doing this in just a few months.
> We are giving the enjoyable parts of life to a computer. And we are left with the drudgery.
No. This means that the tens of thousands of people working in entertainment building other people's visions can now be their own writers, actors, and directors.
This is a collapse of the Hollywood studio system and the beginnings of a Cambrian explosion of individual creators.
Because there will be millions of other people all making AI movies. And you'll be competing for attention with all of them in an attention lottery with seven or eight (nine?) figure odds against you.
The only original creativity will be in creating new formats and new kinds of experiences - which will mostly mean inventing new kinds of AI.
Everything made in an existing format will either be worthless or near as.
Same applies to software dev. Far more quickly than most people expect, it will also apply to AI dev.
Couple of years more, and everyone* will be able to generate these things for themselves, with the only requirement being knowing what to generate next. We might get very bored, and we might decide to appreciate partly or fully human-made things again.
*: except the ones starved for compute resources of course
Will there even be "blockbuster" video games, movies, books, etc? If hours after release, there are hundreds of lookalike clones, will there be "hits" like we know of today? We see this in the App Store today. It is just hard for me to see that part-time product being a big success, when at the first whiff of an interesting idea it will get repackaged, probably into something more effective.
"have scientific journals disappeared" -- ironically, in the AI field most of the action is on arxiv / github / twitter. journals have been obsolete for decades, and the '10s obsoleted conferences too. the only function journals / conferences still serve in the AI field is to stack rank researchers and provide signal for hiring / funding decisions.
Yours is an optimistic take, and frankly I agree with most of it: there isn't an upper bound to economic opportunities as long as everyone gets to use the tools, since the cost and risk of producing something new will significantly decrease, boosting the diversity of creative industries from which countless gems will be made. However, the problem is: what if everyone loses their current opportunities before these techs become widely available? How do we handle the transition period? A monopoly/oligopoly is not going to care about helping the average person through that transition, because it won't make their next quarterly earnings report look pretty.
I don’t think this plays out this way in reality. Look at music streaming. Record labels are still important to making or breaking music careers even in the age where any artist is discoverable, there is no ‘switching cost’ and there is 0 cost of production and distribution (making and shipping CDs).
In a world where attention is scarce, I sadly think big corps and power brokers will still play a large role. Maybe not, though.
Filmmakers being able to bring their vision to life using generative models is going to create such a huge expansion of the market.
Of what market? Certainly not film production. I have my doubts about whether it will expand the market for films, in the economic sense. The lower the cost of producing and distributing a film, the lower the monetary value people will place on it.
Look how most music artists are no longer able to survive on royalties, while a few massive streaming companies have an astoundingly profitable oligopoly on consumers' music interest. Yes, many pre-streaming publishers were exploitative or unethical, but I'm not convinced it was to a greater degree than the current market leaders. Consider also that the streaming revolution steamrolled many, perhaps most, indie record labels that supported niche genres; some live on but are no longer able to sustain physical output and are reduced to being digital marketing companies.
Now, people will continue to tell stories and entertain others, so technology like this will be good for people with an artistic vision who can't easily access publishers for whatever reason. It will certainly allow people to pursue bold artistic visions that would not otherwise be economically feasible - exotic locations, spectacular special effects, technically complex perspective moves. Those are good things; I worked in the film industry for a long time and have several unproduced scripts that I'd like to apply this technology to, so I'm not rejecting it.
However, more content doesn't necessarily translate into more economic activity; I think it very likely that visual media will be further devalued as a result. People who have spent years or a lifetime developing genuine craft will be told to abandon it in favor of giving suggestions to a computer system, and those who don't will be laughed at or suspected of fakery, because fakery is so widespread these days (Relevant recent example: https://news.ycombinator.com/item?id=39379073). The easier it becomes to make something, the less value the market will assign to it; rational from the abstracted perspective of pure price theory, disastrous in real life.
Increasingly, we seem to be tilting towards a Huxley-esque dystopia of stunning and infinite-feeling virtual worlds to which we can escape on demand, and an increasingly shitty real world marked by the brutal economic logic of total resource and information exploitation. Already a stock rejoinder to complaints about the state of things is that humanity is on paper richer than ever before, to the point that bums have smartphones and anyone can afford an xbox. I have a homeless neighbor who's living in his car, spending his dying years watching YouTube on his phone to fall asleep because he's lonely. Technically this is an expansion of the market, but I don't think it's a good outcome.
Mega-corps exist by dividing work into tasks that can be procedurally performed by minimally skilled laborers, then keeping the delta between cost of time and value of the product. Was it Karl Marx who argued this first?
AI turns skilled labor into cheap labor, supposedly, right? It's a massive enabler for mega-corps, not a death knell.
I am not sure we’re getting fewer megacorps as technological progress marches on. We probably have bigger companies with broader influence now than 50 or 100 years ago, right?
I seem to be immune to it now. I’ve just accepted that I’m going to feel less and less useful as time goes by, and I should just enjoy whatever I can. Life will probably never be as good as it was for people 30 years older than me, but it’s not something that looks likely to change.
Nothing about the future looks particularly good, other than that medicine is improving. But what’s the point of being alive in such a sanitised, ‘perfect’, instant-dopamine-hits-on-demand kind of world anyway?
Just say to hell with it and bury yourself in an interesting textbook. Learn something that inspires you. It doesn’t matter if ‘AI’ can (or soon will be able to) do it a billion times better than you.
1) My wife trained as a typesetter on a photo typesetting machine. That was already replacing typesetters working with lead, and the people sorting the used lead, working with inks, etc. They still needed a paste-up artist and more. Eventually the GUI-based computer arrived, with PageMaker, Quark, InDesign, etc. These days she is super productive with a massive online icon library available, plus full printing and distribution capabilities. Able to do a job that could have involved half a dozen or more people previously.
Are those people unemployed now? Not really (we are talking 1.5 generations now, so not the same people). Unemployment levels are low, and the workforce is significantly larger, with men and women working. The working (outside of the home) part of the population has gone up significantly over a few generations, despite all the new enabling productive tech.
What I see is a lot of visually higher quality work being delivered, but often with the same core content. So productivity has increased, but you get a glossy new shiny report in a PDF, instead of a photocopy of a typewritten page. (Yes, I do simplify. But I think you get the gist.)
2) I started as a system admin, then a systems analyst, worked through project manager, etc. until I was leading startups. In the space where I work now, circular economy and food production, there is so much work to do that any AI support we can get is welcome. But as the work is innovative, new, and not done before, most often the AI tools aren’t that useful, yet. That may change, but with a society that needs to replace a significant part of its infrastructure and processes to achieve a long-term sustainable society, I actually don’t worry that the AI tools will take my job or any of my colleagues’ jobs away. There is plenty to do. I have enough new things in front of me that I could probably keep a whole big venture fund occupied for a long time.
This reminds me of some claims I've heard about domestic cooking and baking. Supposedly recipes got more sophisticated as we developed machinery like blenders and more pre-processed ingredients to make work quicker which ultimately resulted in time to cook or bake when hosting guests to take roughly the same amount of time as before. The dishes were just more elaborate.
Same with finance, we could all live an upper-middle-class life with all the luxury, for one parent working in the home, if we were willing to live the same lifestyle as the 1950s. But life today is much easier than even then — and we’d rather pay more for that extra luxury than live the spartan lifestyle that would’ve been luxury then.
I think that's a bold claim. Many things that were cheap then are now unreachably expensive. Many careers that paid well then are gone. The converse is also true; many things that were unreachably expensive then are cheap now, but that doesn't mean that we can easily live such lifestyles if we choose to. Forgoing a cellphone and seatbelts doesn't make it easy to afford a bungalow! Nor is such a lifestyle even legal, in many cases. And you won't find a payphone to call your family. The world moves around us, and it's not a matter of choice to be pulled along.
This is just flat out untrue for a variety of reasons. People need to stop thinking of 1945-1975 as "the norm," it was a world historic anomaly that was a direct response to earlier events- the economy targeted full and fair employment to stop people from drifting to more extremist ideologies because everyone literally just lived through the result of highly unregulated capitalism for example. That was a large impetus behind Keynesianism and Bretton Woods, which was ultimately unsustainable and led to the broader global economy we have now- which itself seems more unsustainable by the day.
- 30% of food produced is gone in losses and waste
- People in middle to high income countries have significant obesity and related problems (some countries have either malnourished people or obese people, and less in between)
- Pollution from our agriculture and aquaculture is killing the ocean near land
- Our intensive agriculture is threatening biodiversity
- We have lost up to 70% of insects in many industrial nations
- Essentially all (90%?) of ocean fish stocks are overfished or at capacity. Even a supposedly rational and environmentally aware nation like Sweden can't stop the overfishing. The cod stock has collapsed and now the herring is going too
- The soils are being destroyed or depleted
- The phosphorus and nitrogen cycles are broken (fossil fuels or resources that are mismanaged)
I could go on. It is well documented.
I work on circular food production, where we really care about putting together highly efficient nutrient loops and making sure they work locally/regionally. A mix of tech (automation, IT, climate control, etc.), agriculture, horticulture, aquaculture, insects, etc. As part of this there are very interesting complementary pieces: creative ways of getting the nutrients (new food tech), dealing with animal disease (new tech), combined with sensors, ML, and just plain old common sense, that can make a huge impact. If we just think through the process a bit more and take responsibility for the externalities, which really are starting to bite.
Much is still overhyped in foodtech imho, specifically the stuff which claims silver bullets without proper circularity. Which is detrimental to the real solutions as investors like simple superscalable solutions, and the simple solutions are mostly not sustainable. (There are of course exceptions).
The parent commenter is saying that the moment automation entirely replaced traditional typesetting, people moved on and started using the new technology.
Sure a part of the population is slow to adapt and therefore at a disadvantage. But the others, like his wife, adapted.
The idea is that this wave of automation will be no different than other times this has happened to us in the past.
The difference could be that A(G)I will automate away the would-have-been new jobs as well, instantly, as it will function as a smarter human that needs no sleep and demands no pay, sort of like a young programmer but without capital owners even having to supply caffeinated beverages.
A lot of decisions are not based on intelligence alone. A lot is about personal beliefs and tastes.
I've never totally understood this binary moment when AGI does "everything" better. How can one even define everything?
Our AI partner could be the most intelligent mathematician or researcher. That's great then we can bounce ideas off of them and they can help us realize our professional / creative ambitions.
Sure if our goal is to maximize profit then maybe we can outsource the decisions to an AI agent.
You can get a computer to create infinite remixes of songs. I haven't seen that replacing music producers doing the same.
The I in AGI doesn't really mean intelligence as in "a mathematician has to be more intelligent than a janitor to do his job" *, it means the productivity equivalent of whatever human consciousness is; that is, being able to have beliefs and tastes as well. And since it will have infinite patience, arguments such as "I prefer to have an actual human musician playing" are also up for persuasion.
When everyone can just press a button and have better music automatically generated, based on their exact preferences inferred from their DNA or an fMRI brain scan, what are your creative ambitions?
I'm obviously not talking about today's limited (public) AI, but far into the future, like in 5 years.
* whether or not that is actually the case is irrelevant
So much about music appreciation is about knowing the artist and for example knowing that what they sing about is shaped by their personal history allowing you to identify with it.
A lot about the appreciation of art is the process and the intention behind the artwork. The final image or song is just one part.
I find it very difficult to define "better" when we're talking about art.
That isn't the OP's point I believe. I think the point was if the more productive means of production is ultra-centralized to a few owners of AI, the question wouldn't be whether to go outside, but whether you can afford to not be permanently outside, if the superstructure of society assigns housing to capital and not humans.
> if the more productive means of production is ultra-centralized to a few owners of AI
But AI is different than previous waves, like search engines and social networks. You can download a model on a stick. You can run it on a CPU or GPU, even a phone. These models are easy to work with, directly in natural language, easy to fine-tune, faster, cheaper, and private under your control. AI is a decentralizing technology, will empower everyone directly, it's like open source and Linux in that it puts users in control.
And that sand takes a very, very long time with lots of big brains to figure out how to manipulate at the nanometer level in order to give you a "beep boop"
It's not like Intel could decide tomorrow to spin up a fab and immediately make NVIDIA and TSMC irrelevant. They're the next closest thing given they make chips, have GPU technology, and also foundry experience and it's still multiple years of effort if they chose that direction.
Your statement is a lot like saying "poker has predictable odds" and yet there is still a vast ocean of poker players.
Anyone can make a cotton gin. Industrialization of an industry basically centralizes its profits on a relatively small number of winners who have some lead-time advantage on important factors, as it stops being worthwhile for the vast majority of participants compared to when the work required more of the population.
One can't really enjoy life much without the financial means to survive. This technology promises to wipe out hundreds of thousands of jobs in media production: videographers, actors, animators, designers, and camera operators working in TV and movie production are all one click away from losing their jobs.
I thought the point of TV was to sit back and be entertained, usually through some form of storytelling. Personally, I don't want to have any part in the creation. If anything, custom content would be annoying, because I'd lose the only social aspect of TV (discussing it with others).
"I’ve just accepted that I’m going to feel less and less useful as time goes by"
It's probably the same feeling farmers had in the beginning of the 20th century when they started seeing industrialized farming technologies (tractors, etc). Sure, farming tech eliminated tons of farming jobs, but they have been replaced by other types of jobs in the cities.
It's the same thing with AI. Some will lose their jobs, but only to find different types of jobs that AI can't do.
Sorry, but comparing this to previous technology seems totally short-sighted to me (and it’s not as though you’re the first to do so). If (if) we end up with truly general AI (and at the moment we seem to be close in some ways and still very far off in others), then that will be fundamentally different from any technology that has come before.
> jobs that AI can't do.
Sure, by definition, you’ve described the set of jobs that won’t be replaced by AI. But naming a few would be a lot more useful of a comment. It’s not impossible to imagine that that set might shrink to being pretty much empty within the next ten years.
> It’s not impossible to imagine that that set might shrink to being pretty much empty within the next ten years.
No but it’s also not impossible to imagine the opposite. AI beat humans at chess decades ago but there are more humans generating income from chess today than there were before Deep Blue.
No one pays anyone to play chess because it’s useful.
Chess players get paid because it’s entertaining for others to watch.
So your argument only shows that we can expect work as a form of entertainment to survive. Outside of YouTube, where programmers and musicians and such can make a living by streaming their work live, this is a minuscule minority.
The strongest interpretation of what you’re saying seems to be that we’ll end up in a world where everything (science, engineering, writing, design) is a sport and none of it really matters because ultimately it’s ‘just a game’. Maybe so… but is that really something to look forward to?
They get paid because the people who can't play chess professionally watch it as a mental escape from their drudgery jobs, since it reminds them of their youth, when they could still dream about becoming a great chess player. Then you can use marketing displayed during the chess tournament to trick them into spending the money they make from the drudgery on the advertiser's product.
Now upgrade AI to do every job better than humans so that there are no drudgery jobs. What money are they going to spend?
Not too long ago, people would come and visit the first family in the village who had installed running water, because it was a new and exciting thing to see. And yet people don't wake up every day excited to see water coming from their kitchen tap.
Think more broadly than that single example. Perhaps humans will always be interested in economic activity that involves interacting with other humans, regardless of what the robots can do.
My intuition tells me humans will always have needs that AI can't fulfill. If AI does more and more jobs, cheaper, faster, and better than humans, then the price of these services and goods are going to drop, and that means people will have more disposable income to spend on other services and goods that are more expensive because AI can't produce those (yet).
Imagine a breakthrough not only in AI but also robotics, allowing restaurants to replace the entire staff (chefs, cooks, waiters, etc) with AI-powered robots. Then I believe that higher-end restaurants will STILL be employing humans, as it will be perceived as more expensive, more sophisticated, therefore worth a premium price. What if robot cooks cook better and faster than human cooks? Then higher-end restaurants will probably have human cooks supervising robot cooks to correct their occasional errors, thereby still providing a service superior to cheaper restaurants using robot cooks only.
I agree but also think this discussion need to go deeper into its assumptions. They can't really hold in a world with AGI. Can anyone acquire/own AGI? Why? Why not? Will anyone pay anyone for anything? Will capital, material and real estate be the only things with steep price tags? What would a computer cost if all work was done by AI?
>It's probably the same feeling farmers had in the beginning of the 20th century
Not remotely comparable. Farming is a backbreaking job; many were happy to see it go away. This is taking over the creative functions. Turns out what humanity is best at is menial labor?
Well, replacing novel creative functions with derivative creative functions. That's the big change I see here; similar to the difference between digitally editing an image vs. applying a stock sepia filter to it. Yes, we can use a model to regurgitate a mish mash of the data it was trained on, and that regurgitation might be novel in that nothing like that has been regurgitated before, but it will still be a regurgitation of pre-existing art. To some degree humans do this too, but the constraints are infinitely different.
Humanity will not be best at anything. Even menial labor will be automated.
So the downside is we have lives devoid of meaning. The upside is we live in a scarcity-free paradise where all diseases have been cured by superhuman AI and we can all live doing whatever we want.
> but they have been replaced by other types of jobs in the cities.
But when one is 30+ years old, or even 40+ years old, it's hard to completely switch careers, especially when you're also dealing with the fact that it's not because you were bad at your job. Rather, a machine was made to replace you and you simply can't compete with a machine.
It's evolution, of course, but it is a stressful process.
I see this "just adapt" response a lot and it misses the point. The goal of research like this is to create a machine that can do any job better than humans.
That's been the prediction with many technological updates, but here we are. This setup works just fine for the small group of fantastically wealthy and powerful people that dictate society's requirements for the rest of us.
I can't imagine anything changing our culture's insistence that personal responsibility in employment means zero responsibility for employers, policy makers, or society at large. That is, short of a large scale armed rebellion, or maybe mass unionization.
With all of the great AI-driven public opinion influencing tools? I can't imagine they'd need to TBH. To be clear, I think the likelihood of an armed rebellion is zero, and a successful one would be less than zero. While mass unionization may be more likely, as soon as it starts to significantly impact the top's bottom line across the board, we'd see a bunch of laws that cripple unions.
History is full of examples of people with power deciding that they like resources but don't need the people who live on top of them, and solving this problem by going on a killing spree.
Bear in mind that a substantial portion of people (perhaps 30%) don't feel satisfied unless they see someone else worse off. We are not an inherently egalitarian species.
"People" find different jobs but individuals don't. Many people displaced by technology don't recover even despite retraining programs and go work in service industry or go into early retirement. The new jobs go to a younger generation.
I've started reading again, because reddit/instagram/etc. has become kind of boring for me? Like, I still go on them to get an instant dopamine hit from time to time, but like you said burying yourself in a textbook just feels so much more rewarding.
I've abandoned all online content sources except HN, Substack, and YouTube. The latter two are aggressively filtered and still feel like they're getting less interesting over time. HN isn't the best habit, either, but it's good to have at least one source of news.
Maybe someone needs to start a small group of people who specifically want to do this — seek refuge from the chaotic and increasingly worrying world (in particular the threat of replacement by extremely general automated systems) by immersing themselves in learning, and sharing the results with others.
I’m sure such groups already exist, but maybe not specifically with this goal in mind.
Learning for its own sake really is the answer to lasting happiness… for some of us, anyway.
> seek refuge from the chaotic and increasingly worrying world (in particular the threat of replacement by extremely general automated systems) by immersing themselves in learning, and sharing the results with others.
I think his point is that people have felt like the world is going to shit for a very long time; it's just that with the benefit of hindsight we can see that in the past everything worked out, but we can't see the future, so our present is troubling.
But none of these feelings are new, just different problems manifesting the same.
In any group there are people who are more talented, more persuasive, and/or have more initiative than others. These people will naturally become the group's leaders. This can only be avoided in groups which don't have to make decisions or conduct activities.
Hey, Euclid's ideas from 2000+ years ago are still going strong.
I doubt much of what we know today will turn out to be wrong. Maybe our abstractions will turn out to have been naive or suboptimal, but at least they're demonstrably predictive. They're not just quackery or mysticism.
Well… in subjects like mathematics they kind of do, don’t they? There’s not much room for opinion on what’s true and what isn’t. Of course, how something is done or the language used to describe it is always up for debate.
You did say ‘wrong’, though, not ‘considered wrong’.
There are no negative quanta and there are no negative qualities. It would be hilarious to suggest there would be products of the two.
You have 3 baskets with 5 apples each; remove 7 apples from each basket, then remove 5 baskets, and you have -2 baskets with -2 apples each. Therefore you have 4 apples left, all without the involvement of trees, like Jesus!
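For what it's worth, the bookkeeping in that joke does balance, if you're willing to track negative apples along the way. A minimal sketch in Python (the variable name is mine):

```python
apples = 3 * 5         # start: 3 baskets of 5 apples = 15
apples -= 3 * 7        # remove 7 apples from each of the 3 baskets -> -6
apples -= 5 * (5 - 7)  # remove 5 baskets, each now holding -2 apples -> +10
print(apples)          # 4
```

Removing negative apples adds apples back, which is exactly the (-2) × (-2) = 4 being sneered at.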
Not really. Universities barely even pretend to be ‘churches of learning’ — at least, not anymore. Going to university, for the vast majority of students these days, is more an exercise in CV-building than self-development and learning.
Such groups are by definition reclusive, hard to find on social media, and might be a lot more fringe or "weird" than you'd prefer. For a while, subreddits were a bit like this.
I don't think it's too far-fetched to hypothesize that the next major global conflict will be between accelerators (e/acc) and decelerators. I see a parallel with political/economic ideologies like capitalism and communism. One of them will eventually prevail (for most of the world) but it won't be clear which until it happens. Scary but also exciting times ahead!
Is this a joke? Go outside. Go hiking. Make a garden. Visit Yosemite. Take up bouldering. Learn to surf. Cycle. Go camping. There's a world of living and massive communities built around real life. Explore what your body and mind can do together. Find kinship, because it's out there in spades for people not obsessed with the automation of machined content.
> Life will probably never be as good as it was for people 30 years older than me
> Nothing about the future looks particularly good, other than that medicine is improving.
How do you reconcile your thoughts with what the CEOs of these AI companies keep telling us? I.e. "the present is the most amazing time to be alive", and "the future will be unimaginably better". I'm paraphrasing, but it's the gist of what Sam Altman recently said at the World Government Summit[1].
Are these people visionaries of some idealistic future that these technologies will bring us, or are they blinded by their own greed and power and driving humanity towards a future they can control? Something else?
FWIW I share your thoughts and feelings, but at the same time have a pinch of cautious optimism that things might indeed be better overall. Sure, bad actors that use technology for malicious purposes will continue to exist, but there is potential for this technology to open new advancements in all areas of science, which could improve all our lives in ways we can't imagine yet.
I guess I'm more excited about the possibilities and seeing how all this unfolds than pessimistic, although that is still a strong feeling.
> How do you reconcile your thoughts with what the CEOs of these AI companies keep telling us? I.e. "the present is the most amazing time to be alive", and "the future will be unimaginably better". I'm paraphrasing, but it's the gist of what Sam Altman recently said at the World Government Summit[1].
Three ways:
* It's the job of CEOs to advocate the benefits of what they're doing.
* Those things might be true, for them.
* Those things might be true, from a global perspective, even if there are some people who are worse off. White-collar workers might just be those people worse off.
If you think that's bad, try being 18 :) this field may not exist (at least in its present form) by the time I'm out of uni (I'm planning to hedge my bets by studying physics), and it seems the world is getting less stable by the minute. There seems to be no sense of urgency or even medium-term thinking in stopping Putin, and Article 5 appears to be becoming less sacrosanct by the minute. Society is increasingly divided, with absolutely no attempt to find common ground (particularly evident in my demographic) and the majority of my generation having a minuscule (and shrinking) attention span through their direct stream of Chinese propaganda. And, of course, the climate-shaped elephant in the room.
I'm just trying to not let it get in the way of appreciating the world. I'm planning to travel to mainland Europe sometime next year (gap year). SpaceX has reignited spaceflight, and there's so much cool stuff going on in that space. Science marches on, with a steady stream of interesting discoveries.
And programming is great - for now. It feels slightly strange spending a week writing a project that may be finished with a single prompt in a few years' time, but it's enjoyable.
Maybe I'm overreacting? I've grown up in a pretty calm period, with the west in a clearly dominant position. Maybe this is, paradoxically, a return to normality?
Cheer up. It's not real. Generative AI is going to force us to confront what makes us human and the real world real, and learn to love it all over again. Sure, a lot of people will be lost in digital realms. Some might even like it. But I think that many will embrace the messy, imperfect, poignant realm we live in.
I’m the complete opposite, I wish I was being born 20 years in future. I am kinda terrified of being 80 when they come out with some technique for heavily slowing down aging and our generation just has to sigh and accept we just missed the cutoff.
I'll just say: I have Type 1 diabetes, and in my lifetime, we have invented
- fast acting analog insulins that are metabolized in 2-3 hours instead of 6-7
- insulin pumps that automatically dose exactly the right proportion of insulin
- continuous glucose monitoring system that lets you see your BG update in real time (before, it was finger sticks 4-5 times a day; before that, urine test strips where you pee on a stick to get a 6 hours delayed reading (!))
- automated dosing algorithms that can automatically correct BG to bring it into range
In aggregate, these amount to what is closer than not to a functional cure for type 1 diabetes. 100 years ago, this was a fatal condition.
You are partially correct. Although notice that diabetes, both type I and type II, has dramatically increased as a direct result of bad advice and environment. A little like giving a deaf person a hearing aid while not addressing factors like loud noises that may lead to hearing loss.
>Medical understanding is not getting worse, unless I’m severely mistaken.
You are mistaken. To realize that, you will have to look back several decades and read the literature of those times, or what is left of it. Note I'm talking about chronic illnesses (diabetes, cancer, etc.), not acute ones like an infection. The medical practitioners of yesteryear did not have the fancy diagnostic tools that we have today, but several of them appear to me to have been sharp observers.
You're exactly right, but most people just believe the headlines about cancer cures and "individualized medicine" that pop up every week and don't realize that literally none of them produce anything that helps real life patients. Medicine is not getting better - it's getting more expensive and less efficient.
I dunno, I can casually get an MRI to check the status of slime in my nose these days. It may not be strictly ‘better’ but the availability certainly goes up.
A majority of what you wrote is objectively false FUD. The only thing that I found accurate is:
> it's getting more expensive and less efficient
There have been a ridiculous number of medical advances in the last few years, advances that are actively improving and saving lives as I write this. Remember that time we had a pandemic, and quickly designed and produced a massive number of vaccines? Saved millions of lives, kept hundreds of millions from being bedridden for weeks? The medical technology to design those vaccines, and to produce them at that speed and scale, didn't exist 20 years ago. Cancer treatments, which you specifically mentioned, are entirely better than they were 10 years ago.
The actual issue, which is the only worthwhile thing you wrote about, is cost and availability.
You are simply ignoring what I actually said. My criticism was directed at specific fields: name one cancer treatment or "individualized medicine" approach that has been proven to save lives or increase quality of life in the last 3-5 years. I'll wait.
The vaccines were not the result of medicine getting "better" - they just happened to have a solution for the right thing at the right time, which is fortunate (and we're lucky that it worked, because there was no guarantee of that beforehand) but if the pandemic hadn't happened, what advances would we be discussing? What advances are actually making medicine better aside from once-in-a-hundred-year worldwide emergencies?
For what it's worth, there have been a lot of situations like this in the past. Maybe not as fast as this, but tech has displaced jobs so many times like with the cotton gin and computers, but more jobs have come about from those (like probably your job). Now, you can say that this is different but do we really have any data to back that up aside from speed of development?
As for social safety nets: if this affects people as heavily as you think (on an unprecedented, never-before-seen level), the US will almost certainly put _something_ into place and add some heavy taxes on something like this. If tens of millions of Americans are removed from the work force and can't find other work because of this, they'll form a really strong voting bloc.
Also consider that things are never perfect. We've had wars around the world for a notable amount of time. Even the US has been in places we shouldn't be for a serious chunk of the last century, but things have worked out. We have a ton of news and access now so we're just more aware of these things.
Hopefully that perspective helps a bit. HN and social media skew "doomer" quite a bit, and some perspective can help show that this may not be as large a change as we think.
Or maybe I'm an idiot, as some child comments may point out shortly.
By definition, we don't have data for events we haven't seen before. So instead I reason as well as I can:
Consider the set of all jobs a human being could do. Consider the set of all jobs an AI system could perform as well as a human being but more cheaply. Is the AI set growing, and if so, how quickly?
Prior technology is generally narrow and dumb: I cannot tell my cotton gin to go plant cotton for me, nor can I ask it to fix itself when it breaks. Therefore I take on a strategic role in using and managing my cotton gin. The promise of AI systems is that they can be general and intelligent. If they can run themselves, then why do I need a job telling them what to do?
Isn't this making the assumption that the stuff that needs to get done is fixed size? New technologies also create entirely new categories of jobs.
"Computer" used to be a profession, where people would sit and do multiplication tables and arithmetic all day [1]. Then computing machines came along and put all those people out of work, but it also created entire new categories of jobs. We got software engineers, computer engineers, administrators, tons of sub-categories for all of those, and probably dozens more categories than I can think of.
I think that there's a very high likelihood with the current jobs that humans do better than computers, most will be replaced by cheaper AI labor. However, I don't see why we should assume that set of things that humans do better than computers is static.
I'm trying to point to the set of all jobs a human being could do, which includes future jobs enabled by future technology.
This is not as nebulous of a set as it sounds because it has real human boundaries: there are limits to how fast we can learn, think, communicate, move, etc. and there are limits to how consistently we can perform because of fatigue, boredom, distraction, biological needs like food or sleep, etc. The future is uncertain, but I don't see why an AI system couldn't push past these boundaries.
Maybe if AI could do all jobs humans could do, we'd set up some system where the AI works and we don't, since we tax them, or somehow at least part of the created goods and services flows to everyone. Anything AI "creates" is worthless unless it's consumed, and AI, being a machine/software, won't inherently want to consume anything (like burgers, for example).
I also struggle to think about all this, but I imagine if you can flip a switch and everything produced and consumed in the economy could be done in half the time, is that a good or bad thing? If we keep flipping that switch and approaching a point where everything is being produced with almost no human effort, does it become bad all of a sudden?
Somehow we'd need to distribute all this production, I'm not sure how it would work out, but just going from what we have now to half or 25% of effort needed is probably an improvement, at least I'd take that.
We are close to 1.5 degrees of global warming, and the world is busier with war than with making a unified effort to change things. That is depressing to me; not that AI can make somewhat convincing background-scenery videos (as standalone videos I do not find them convincing; all in all impressive, sure, but too many errors).
I’m more worried that if AI really works out, businesses will end up consuming as much energy as they possibly can using it, because using it more than the competition will provide another edge. It’s not clear how we are supposed to reduce energy consumption. Is there a boundary of diminishing returns that would impose a limit?
I think the big visionaries behind the main AI developers are all hoping to achieve AGI, and that it will be able to fix everything, outweighing the vast short-term usage of energy spent trying to create it.
Not worried; I trust in my taste. I still haven't seen anything made by AI that moved me. I'm buying physical books written before AI was a thing, backing up music and film. Visiting concerts and museums. The information and experience in my head will become more rare and valuable compared to the AI slop that will soon permeate everything. Oh, your model is trained on the billion most-read online texts in the English language? Cute. I'm pulling inspiration from places that aren't captured by any model.
Most of my programming job is tightly coupled with the business processes and logistics of the company I work for, AI will not replace me there.
Also, I'm not convinced this is sustainable. I'm thinking this will be like CGI, where the first Iron Man film looked phenomenal, but huge demand plus the drive to make it profitable will push the quality down to just-above-barely-acceptable levels, like the CGI in current Marvel blockbusters.
Yes. These systems are working on a 3D problem in a 2D world. They have a hard time with situations involving occlusion. A newer generation of systems will probably deduce 3D models from 2D images, build up a model space of 3D models, generate 3D models, and then paint and animate them. That's how computer-generated animation is done today, with humans driving. Most of those steps have already been automated to some degree.
Early attempts to do animation by morphing sort of worked, and were prone to some of the same problems that 2D generative AI systems have. The intermediate frames between the starting and ending positions did not obey physical constraints.
This is a good problem to work on, because it leads to a more effective understanding of the real world.
How deep is the pool? Just because you have descended to depths that others were skeptical of doesn't mean there isn't a floor. Besides, these videos are less janky but still obviously fake, and for sure they're cherry-picked for maximum effect.
Well - the videos in the link gave me same kind of response as literal Latin American gore videos, so line 3 to 5 still applies, and line 1 to 2 still applies to ChatGPT w/GPT-4 Turbo, so... I don't know what to make of this, maybe people like gore videos. Or something.
not to add to the doomerism, but I often wonder about how much AI-generated content I've consumed without realizing it - especially from times before generative AI became mainstream
I don't get this angle at all. To me that's like "organic" food labels. What do I care if my content is "AI" made. When I watch a CGI animated movie there isn't a little artisan sitting in the video camera like in a Terry Pratchett novel, it's all algorithms anyway for like 30 years.
When I use Unity I write ten lines of code and the tool generates probably 50k. Ever looked into the folder of a modern frontend project after typing one command into a terminal? I've been 99% dependent on code generation for ages.
Does it matter to you whether you're interacting with a human on some level when watching a show or movie, specifically on an artistry level?
Maybe some movie you've watched has been spun up by a Sora-like platform based on a prompt that itself was AI-generated from a market research report. Stephen King said that horror is the feeling of walking into your house and finding that all of your furniture has been replaced by identical copies - finding out that all of the media everybody consumes has actually been generated by non-human entities would give me the same feeling
>Does it matter to you whether you're interacting with a human on some level when watching a show or movie, specifically on an artistry level?
Yes it matters to me a great deal. But there's a reason Stephen King made that observation a long time ago. All the actors in a modern Marvel movie look like they've been grown in some petri-dish in a Hollywood basement and all the lines sound like they come from LLMs for the last fifteen years. There's been nothing recognizably human in mass media for decades. 90% of modern movies are asexual Ken doll like actors jumping around in front of green screens to the demands of market research reports already.
I'm not saying the scenario isn't scary, I'm saying we've been in that hellscape for ages and the particularly implementation details of technologies used to get us there ("AI" in this case) don't interest me that much. And in the same vein, an authentic artist can surely make something human with AI tools.
You say that but most things that are commercially produced aren’t made by individuals but via collaborative processes. The AI won’t even get credit, they’ll just use it to dilute everyone else’s contribution.
I felt depressed after seeing this, so I had a long hug with my partner, and remembered the serenity prayer:
"God grant me the serenity to accept the things I cannot change, Courage to change the things I can, and Wisdom to know the difference."
If AI dystopia is coming, at least it's not here quite yet, so I'll try to enjoy my life today.
The writing has been on the wall for that for awhile though...
Every large animation studio has continually been looking for ways to decrease the number of artists required to produce a film, since the beginning of the field.
I don't know who in the upper parts of the various guilds in Hollywood saw this demo'd last summer, but they really really took notice. Those strikes went on for a long time, and it seems that holding out and getting the clause in to exclude this kinda tech was a brilliant bit of foresight. Holy heck, in 5 years, maybe just a year, this kinda tech is going to take nearly all their jobs.
The only reason this is possible is because of the content those people created. This literally doesn’t exist without them. Not sure what you’re trying to say….
The thing that fills me with dread watching these videos is not (much) the thought of how many jobs it might make useless. It's the thought that every single pixel, every movement is fake. There is literally not a speck of truth in these videos, there is nothing one can learn about the real world. Yes they're often "right" but any detail can be wrong at any moment. Just like ChatGPT hallucinating but in a much deeper way- we know that language can be used to lie or just make up things, but a realistic video hits in a different way. For example the video of the crested pigeon- a bird I haven't seen before- is beautiful and yet it can be wrong in an infinity of details- actually, I don't even know if such a bird exists.
Best way to deal with the sense of doom imo is to actually use it. You'll find how dumb it really is by itself, and how much of your own judgment/help/editing is still necessary to get anything usable. It might look like magic from these manicured press releases, but once you get your hands on it, it quickly becomes just another tool in your toolbox that, at best, helps you do the work you were doing anyway, more quickly.
I work with these models professionally (well, I’m a web dev working along side people manipulating these models). When you give it a prompt and it spits out a pretty image, remember that the range of acceptable outputs is very large in that context. It demos very well but it’s not useful outside of stock image/stock video use cases. What artists and engineers actually do is work under a rigorous set of constraints. Getting these models to do a very specific thing, correctly adhere to those constraints, and still maintain photorealism (or whatever style you need) is a very much unsolved problem. In that case the range of valid outputs is relatively tiny.
The more I play with AI the more I realise that The "I" part of AI is just clever marketing. People who are freaking out about AI should just play around with it, you will soon realise how fundamental dumb it is, and maybe relax about it.
AI has no spark, no drive, no ambition, no initiative, no theory of mind, and it's not clear to me that it will ever have these things. Right now, it's just a hammer that can build 100 houses a second, but who needs 100 slightly wonky houses?
Um. A hammer that can build 100 houses a second would be incredibly valuable, both solving and causing some very important problems. So good analogy from my perspective I suppose, but I don't think it supports your conclusion?
AI and AGI are practically two different concepts, and most of the industry and the mainstream media are doing a poor job of distinguishing between them.
Also, 100 slightly wonky houses will sell like hot cakes if each one costs less than 1/100th of a not-slightly-wonky house. People will buy 100 of them instead of 1 and just live in a different one every day/hour so they always notice the novel parts instead of the wonky parts. We've had mass manufacturing for centuries and they always prevail when the trade-offs are acceptable.
While I do not think it is impossible to get there, I totally agree that this is a key step that current AI is missing. Auto-GPT seemed to be the big thing that could lay out, plan, execute, and iterate on complex tasks, but ultimately it wasn't able to do anything like that. Kind of ironic that it is reinforcement learning that the models seem to be so bad at.
Sorry if my comment didn’t give enough context. I’m not the OP, so I’m not asking any questions.
I was interpreting the parent comment as saying the spark of consciousness only needed a cost function.
Personally, I disagree that our current neural nets are accurate representations of what goes on in the human brain. We don’t have an agreed upon theory of consciousness, yet ML businesses spread the idea that we have solved the mind and that current LLMs are accurate incarnations of it.
More than the functionality of ai replacing current human jobs, I worry what we will lose if we stop wondering about the universe in between our ears thinking we know everything there is to know.
I can see all of my plans for world domination coming together right in front of my eyes. A few years ago I was absolutely certain I’d die without achieving my dream of becoming God Emperor of a united Planet Earth.
Why should we always take the pessimistic viewpoint? Think of all the beautiful things that can be built with something like that. All the tutorials that could be created for any given subject. All the memories that could be relived. Upload a photo with your grandparents, give it context, and see them laughing and playing with you as a toddler. Feed it your favorite book and let it make a movie out of it. I mean, fuck me, the possibilities are endless. I don’t feel depressed. I feel blessed to be able to live in an era when all these marvelous things materialize. This is the stuff we read in science fiction decades ago.
> Upload a photo with your grandparents, give it context, and see them laughing and playing with you as a toddler.
Exactly. People just aren't seeing this. You don't even have to limit the fake memories to real people. Don't have a girlfriend? Generate videos and photographs of you and your dream girl traveling the world together, sharing intimate moments, starting a family. The possibilities are so exciting. I think the people who hate this idea are people who already have it all. They're not like me and you.
Idk about you but I would not consider AI-generated memories of my grandparents even remotely close to being an authentic experience whatsoever. One of my grandparents passed before I was born, so any synthetic depictions of us are fake. That frankly sounds like a post-apocalyptic experience, if not worse than that.
It's literally nothing. Generative images haven't really gotten better at the things people care about, like getting specific details right and matching exact descriptions, and avoiding uncanny animals and humans. There's no reason to think video will be any different. No reason to panic - just take it for what it is: something funny to amuse yourself with for a few hours.
I think you can create an alternate reality with these tools in ways we haven't even considered, ways that can alter one's own sense of self.
We have seen this on a small scale with social media and its effect on self-esteem.
We will see a new set of problems that run much deeper: videos and images that make you believe a false reality, and reliance on GPT that generates false knowledge.
False-reality problems have already started popping up everywhere. It is going to get much deeper. I think we are in for a really crazy trip.
It's worth considering that throughout history there have been people who have felt this way. That suggests this perception is a natural tendency of humans, and it does not have a good track record of turning out to be correct.
The way to generate the wealth required for something like UBI requires large scale technology driven deflation in the cost of goods and services (in real terms).
Advances like this are necessary steps to get us there.
People will be displaced in the short term, as they have been for every other large-scale advance: cars, the assembly line, and so on. Better to focus on progress and on helping those most affected along the way.
AI is going to be massively deflationary. How useful is UBI when the cost of goods and services approaches zero due to automation?
With that said, I can imagine the federal reserve will then helicopter in money to everyone in order to reach its 2% inflation target, which kinda sounds like UBI.
Right! I keep saying, that at least we have to kickoff the process. Not even the legislative process, but convincing the public that we'll need it eventually (alternatively a whole different system worldwide, but that will be even harder). Will take a long time anyway.
Beware of UBI, simply from the perspective that there is no way our puritanical members of society will allow it, and if it does get enacted it will have negative ramifications, rendering it more of an economic one-way trap than a safety net. We're simply too quick to other people, and when those budgeting the entire economy look at the UBI population, their funding will be cut just like education and social services are cut today. I'm afraid of UBI because I don't trust its enactment to be fair, honest, or worth accepting.
And we have been told that with innovation and disruption a new breed of jobs and skill sets is created. But we don't know (or are very bad at predicting) what those will be, especially now that hundreds of millions of people worldwide are linked to these economies (film, writing, gaming).
Many people (including myself) have bought into the narrative that history will repeat here and things will be better eventually, but not into how much has to break first, and that narrative is used as a hammer by OpenAI and probably every innovator who has disrupted.
They advertised "Safety" but no "Economic Impact" analysis, because the latter is scarier and requires difficult predictive work, while the former is just narrow legalese, defined by 80-year-old congressmen, that they have to abide by to "release" v1.0. There is at least a Congressional Budget Office (CBO) where the 80-year-olds work, flawed as it may be...
OpenAI is one major privacy/compliance scandal away from losing that power. I believe it's inevitable, and MS 'will' throw them under the bus when that happens.
Do you feel the doom related to yourself or to the future of humanity? If it's the first, I can't think of anything better than having a money safety net for 6-12 months and keeping a flexible mind. You can also try to learn some physical skills, like becoming an electrician, just in case, if the doom feeling is that bad. If you feel doom for humanity's future, I don't want to be mean, but you shouldn't worry about things you can't control; try to spend more time in nature and with people who spend time close to nature.
Related to competition: these are the same thoughts people had when thinking about the Roman (or any) empire - how could it break, how could others compete. In the end everything ends; giants like IBM are just shadows of their past success. Some are saying Google is the next IBM, and probably OpenAI will be the next IBM-ed Google...
I just don’t find much value in the things they are generating, so I don’t see that as a problem. If there is anything positive about these things, it’s that they remind us how boring and predictable the daily life of normal people is.
My fear is the alternative reality that these tools could provide. Given the power and output of the tooling, I could see a future where the "normal" of a society is strategically changed.
For example, many younger generations aren't getting a license at 16. This is for a variety of reasons: you connect with friends online, malls cost money, less walkable spaces, less third places.
If I'm a company that makes money based off of subscription services to my tools, wouldn't it be in my best interest to influence each coming generation?
Making friends and interacting with people is hard, but with our tooling you can find or create the exact friend you want and need.
We can still remember that life is beautiful - but what's to stop them from making people think that the life made by AI is the most beautiful?
And yeah, I've heard this argument before with video games, escapism, etc. I'm talking more about how easy it is to escape now, and how easy it'd be to spread the idea that escapism is better than what is around you.
One thing to remember is that change never stops and we're certainly not in any perfect society right now where we'd want change to stop at. We've seen huge magnitudes of societal change over and over throughout history.
For the most part, the idea of change is rarely inherently bad (even though, IMO, it's natural to inherently resist it) -- and humans adapt quickly to the parts that have negative impacts.
Humans are one of, if not the most, resilient race on the planet. Younger generations not getting licenses, sticking to themselves more, escaping in different ways, etc are all "different" than what we're used to, but to that younger generation it's just a new normal for them.
One day they'll be posting on HN2, wondering whether the crazy technological or societal changes about to come out will mean the downfall of their children (or children's children), and the answer will still be the same: no, but what's "normal" for humankind will continue to change.
> Humans are one of, if not the most, resilient race on the planet. Younger generations not getting licenses, sticking to themselves more, escaping in different ways, etc are all "different" than what we're used to...
As long as they keep having unprotected sex with each other.
In Europe there's no need. I got a licence over two decades ago and have never needed to drive. Shops are within walking distance, public transport goes anywhere in the country, deliveries are convenient, and cities are walkable and cyclable.
Meanwhile other places have no freedom from cars: people are locked into expensive car financing, unable to access basic amenities without a car, and motorists have normalised killing millions of people a year.
This is the same as slacklining over a ravine with no harness. Are the views epic? Yes. Does the adrenaline rush feel good? Yes. Are the consequences irreversible if you happen to mess up? Probably yes. The last point is why there's so much more doomerism compared to OpenAI's previous products. We don't have that harness and we don't know if we'll ever have it.
It's an analogy.. perhaps a bit hyperbolic, but it's within the acceptable window IMO.
When Facebook came out very few considered it an existential risk. Turns out, it has immense power over elections. Elections have consequences for the well being of billions of people on the planet. Not to mention it might negatively impact the mental health of its users (a large chunk of the human population).
More climate change, war, microplastics in our bodies, and now extreme joblessness?
If I woke up and saw a headline that said OpenAI has developed an AI which told us how to sequester huge amounts of CO2, then I’d be excited and agree.
Exactly. I'm sick of people advocating changes as "progress" without some fundamental baseline sampling of humanity's well-being. When the answers to "are you depressed?", "do you contemplate suicide?", and "are you exhausted?" go up for 10 years around the globe, people will look like lunatics calling this "progress", and maybe then we'll have a better conversation about what progress actually is.
The e/acc camp will tell you AGI can solve all of those, which is why AI research needs to move as fast as possible. What they don't tell you is only an aligned AGI can solve it in a way beneficial for humans.
We had a half-assed lockdown for a few months where most people just kind of stayed indoors and saw noticeable environmental improvements world-wide. An unaligned AGI can easily conclude the best way to fix these problems is to un-exist all humans.
There has never in the history of humanity been anything "aligned" in the sense that AI doomers use that word. Yet humanity has had a clear progression towards better, safer, and more just societies over time.
You say as you comfortably type on a thinking machine while indoors sheltered from the elements, presumably without any concern for war or famine or marauding gangs of lawless raiders.
Climate change is looking pretty existential to me. Might be comfortable now, but where I am we're having extreme snowfall shortages which are going to affect our water supplies and our ability to farm. God knows what it's going to look like at this rate.
So, practically, massive destruction of the biosphere so people can sit there smug on their computers in the air-con.
Anyway, you reinforced my point. Progress is a good idea; I'm just not sure we all have the same ideas about what good progress looks like. Lifting the rest of the developing world by selling them arms and fossil fuels so I can sit at my computer in my room reading smug comments is probably not good progress.
Like you said, it's a feeling. Once you've identified it, just remember you have many many buttons that can be pushed to generate feelings. It's just a program installed long long long ago. Visualize that, breathe and just laugh at that poor bash program.
There is some usefulness to those feelings - this announcement will probably have an impact on your life soon enough. But you can't let every button push and distant threat pull you down, can you?
Also remember, life has its own ways: as far as you know, it could also be the beginning of the best days of your life.
Exciting if you don't think about how tons of people are going to be out of work with no safety nets, or how easily millions of people are going to be scammed, or how easy it is going to be to impersonate someone and frame them, etc. etc.
Let's say, for the sake of argument, AI could generate absolutely perfect invented videos of arbitrary people doing literally anything. The consequence will be that video will no longer be taken seriously as evidence for crimes. People will also quickly not trust video calls without an extreme level of verification (e.g. asking about recent irl interactions, etc.)
Yes, some people will be scammed, as they always have been, such as in the recent Hong Kong financial deepfake. But no, millions of people will not keep falling for this. Just like the classic 419 advance-fee fraud, it will hit a very small percentage of people.
OK, but I did like living in a universe where I could watch video news of something happening in another country and treat it as reasonably strong evidence of what is happening in the world. Now I basically have to rely on only my own eyes, which are limited to my immediate surroundings, and eyewitness accounts from people I trust who live in those places. In that sense, I feel like my ability to be informed about the world has regressed to pre-20th-century levels.
I predict that we will have blockchain integration in media-creating devices, such that any picture or film that is taken will be assigned a blockchain transaction ID the moment it is generated. We will only trust media with verifiable blockchain provenance that allows us to screen for any tampering from the source.
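The core of this prediction - register a fingerprint of the media at capture time, then later check that the bytes still match - can be sketched without a real blockchain. Below is a minimal, illustrative stand-in: an append-only in-memory log of SHA-256 digests. The `ProvenanceLedger` name and its methods are hypothetical, not any real device API or standard; a real system would use signed entries and a tamper-evident ledger.

```python
import hashlib
import time

class ProvenanceLedger:
    """Append-only log of media fingerprints, registered at capture time.

    A toy stand-in for the blockchain ledger the comment imagines.
    """

    def __init__(self):
        # Each entry: (sha256 hex digest, device id, capture timestamp)
        self._entries = []

    def register(self, media_bytes: bytes, device_id: str) -> str:
        # Fingerprint the raw bytes the moment they are produced.
        digest = hashlib.sha256(media_bytes).hexdigest()
        self._entries.append((digest, device_id, time.time()))
        return digest

    def verify(self, media_bytes: bytes) -> bool:
        # Any later edit changes the digest, so tampered media
        # no longer matches a registered entry.
        digest = hashlib.sha256(media_bytes).hexdigest()
        return any(d == digest for d, _, _ in self._entries)

ledger = ProvenanceLedger()
original = b"raw sensor frame"
ledger.register(original, device_id="camera-001")

print(ledger.verify(original))           # True: untampered
print(ledger.verify(b"doctored frame"))  # False: digest mismatch
```

Note this only proves the bytes are unchanged since registration; it says nothing about whether the registered content was genuine in the first place, which is the harder half of the provenance problem.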
Video alone has never been considered evidence of a crime in a court of law (at least in the United States). A person needs to authenticate the evidence.
Think what it was like before the invention of the camera, and then after, this is a similar level of innovation. I'm sure a lot of people who wrote books were terrified by the prospect of moving pictures, but everything worked out and books still exist.
IMHO humanity will be fine, decades from now kids will be asking what it was like to live before "AI" like how we might ask an old person what it was like to live before television or electricity.
Consensus reality is already cracking up due to the internet, smartphones and social media. The Media theorist Marshall McLuhan had a lot to say about this well in advance, but nobody listened.
The more people AI displaces, the fewer customers they would have, right? Wouldn't some equilibrium be reached where it can no longer grow due to falling profits? Let's say they mostly sell B2B: who are those other businesses selling to if no one (generally speaking) has disposable income?
The last two claims always felt wrong to me, because they're assuming a society where these kinds of tools are easy to use and accessible to everyone, yet the society at large is completely oblivious to these tools and their capabilities. Arguably, you couldn't ever fully trust images before, people claimed something was photoshopped for decades now. Instead of something "looking realistic", trusting people and organizations will take its place - when, for example, the BBC posts a photograph, I'm inclined to trust it not because it looks real, but because it's the BBC.
Assuming OpenAI's lobbyists don't convince Congress to ban open models because of {deepfakes, CP, disinfo, copyright infringement} or make it impossible to gather open datasets without spending billions on licensing.
We've been on an unsustainable trajectory for quite a while now. I take hope from things like this. Maybe this time it'll finally be the shock we need to rethink everything.
> Historically, letting technology eliminate their jobs has been a sacrifice people have made for their kids' sakes. Not intentionally, for the most part, but their kids ended up with the new jobs created as a result. No one weaves now, and that's fine.
Ah perfect, all we have to do is consider a vague analogy to a totally different event in the past and it's clear that there's no worries if AI takes the vast majority of human jobs in the next 50 years.
As a side note I shudder to think how many nightmare fuel cursed videos the researchers must have had to work through to get this result. Gotta applaud them for that I guess.
Sentence 1 seems historically illiterate, and I think pg knows how ridiculous it sounds because he walks it back almost immediately. "Historically people made a sacrifice, but not intentionally, for the most part" is incoherent.
> No one weaves now, and that's fine.
Did horses find new jobs when we moved to steam power? Leave aside the odd horse show and fairground ride. By the numbers, what do you think happened?
I can't imagine Paul Graham actually thought through the scenario he's describing. The kids of the parents who lost their jobs, throwing their lives into disarray and desperation, are not going to be the primary recipients of the new shiny technologically advanced careers.
Something I've realized over time is that however good things get, what people really want is to have more than their neighbors rather than any particular quality of life. In many ways you can with a relatively basic job live far, far better than the richest kings could dream of a few hundred years ago. Something as simple as being able to eat strawberries in December was described as literal magic in fairy tales fairly recently. Nevertheless this does not satisfy that need for social prestige and they are profoundly unhappy as a result.
I don't think anything will fix or change this, definitely not UBI, the situation is a fundamental part of the human condition. I share your dread and fear that I will not be able to compete, even if my life improves by all other measures.
The one thing that would dramatically change my calculus is medical advances that significantly push back death and aging.
Does anyone know how to handle the depression/doom one feels with these updates?
I just dread, *shrug*. You don't have to be depressed or doomed, it all comes from premature predictions.
That said, we are surely at a phase similar to the one right before the internet, if not electronics. I genuinely don't understand people who write AI off as yet another Bitcoin or "just an enhanced chatbot". They'll have to catch up on an insanely complex area (even ignoring the rocket science behind it all) which will move on without them, and right now is the opportunity to get on board early.
It's only a nobody's opinion, but I can't see the way in which _that_ could fail. I find it incredibly stupid to think it will, and to just live your life as if nothing is happening. If you're not part of the ruling class or a landlord, f...ing learn.
If you think EU funds are going to be there funding those social safety nets in the Brave New World where AGI decimates industry... They're not even sustainable as is.
You should feel the opposite, see it as a new tool in your pocket.
Industrialisation and computers/automation took away massive numbers of jobs while globally improving people's lives; this may possibly (or may not) do the same.
If, in the future, anybody can write a book, or create a photograph, a motion picture, or a music album with just a few words describing what they have in mind, this will be a tremendous productivity improvement and will unleash an overflow of human creativity.
I like to compare it to what Jobs said about computers: they are the "bicycle of the mind" [1].
I tell myself it’s important to try to be less myopic.
One reason is that readiness of tech does not mean it’s being applied.
Another is that, just like OpenAI came out of nowhere, others will too. It’s normal, when focused on a few things, to lose sight of that.
Gemini realistically does some impressive things.
What can we do?
Building the tech is important, but applying it well for actual adoption is still wide open for the average person's use.
It does seem to mean that what we thought might take 5 years will probably take 1 year in 2024, if not less, as in 2023. So think 10x, and then 10x again, as the real goal.
People will not lose their jobs, because you still need someone to input prompts for 10 hours straight in order to get the piece of video you want.
Natural language is not precise enough to get exactly what you want from a model like this, and the results remain somewhat random. Instead of making a video in a traditional tool, you will be spinning an AI roulette until it generates your desired result. And even then you will probably want to edit it.
I think the deal is that these breakthroughs aren't really great in the general case. They help with specific instances of work and make individuals way more productive. That by itself won't end up making people obsolete; by and large, it's just going to make people more productive. You probably don't want to work somewhere where that isn't a welcome thing.
Learning and fitness are great ways to avoid the feeling of doom.
Open source will catch up in under 6 months. Note that Meta will ship Llama 3 any time now. So will Mistral.
I am a PM, and switched to becoming a builder. Enjoy learning, keep building. What people take time to realize is that building things is a habit. As you combine that with ongoing learning, you will enjoy the process and eventually build something to earn a living.
Sure, I can learn how to use these models, but then how do I find things to build? I've always struggled with finding real ideas, and so I just watch AI progress and come up with blanks whenever I try to think of ways to contribute.
UBI, social safety nets, power.. Because of videos? I don't get it. Obvious second-order effect is a devaluation of visual media. Let it all burn, who cares? Go live in the real world. Current mass media culture is an anomaly stemming from hyper-centralization of culture creation. Where we're going is a reversal to the mean.
OpenAI is not going to solve global warming, and we will be well into widespread collapse of farming systems, mass migrations, and wars of scarcity long before the robots are doing all the work. AI isn't going to solve any of that, so... if you're looking for depression/doom, that's probably a better place to look.
I’m ecstatic about the future of education. I remember many occasions of teachers going, “Gosh, I wish I could show you guys this”. Now, they can with speed and ease. I’m particularly excited for ESL learners to have high-quality low-cost tools on hand for personalized learning for every child.
For most of human history, people didn’t have constant access to art work or videos. Things were fine. Maybe instead of watching manufactured shit on social media, go see a live theatre production. Seek genuineness.
You can live a great life without ever seeing a picture or video.
Nothing on this page has any relevance to employment or UBI. Also, there is strong evidence that UBI doesn't affect employment much one way or the other.
Whether people are employed or not is a policy decision of the central bank and not related to how good AI is.
Yeah, today spooked me too. Between this, the large context length on Google's side, and the ability to understand video (and thus, say, a video feed for work tasks), it sure seems like the number of jobs in the firing line just jumped massively.
Just bask in the knowledge that if those "social safety nets" and UBI become a reality, you'll have more problems than you do now. You'll look back at this moment in time with fondness. Enjoy it now.
I don't think any of those are things AI can't excel at in 5-10 years. The latter 3 will require integration with robotics, but that's not exactly science fiction these days either, it's just something that maybe nobody bothers to do.
I believe you're right. There may be a better quality of life in looking to the ways of the ancient past. All AI is going to do is make the rich richer. Why try to compete with this?
There is no UBI coming; the government can barely fund current budgetary needs without borrowing tons of money. If there are no jobs, there are no taxpayers, which will further shrink government budgets. We are on our own, as I see it.
All those videos made me so scared of what's about to come in the next few years. India is already a major market for perpetrators of misinformation, and with major social media giants only paying lip service to our concerns, with western countries being their major focus, things portend to get even darker for the poor and the disenfranchised in our part of the world.
UBI was tested during 2020 on a nearly global scale. In the US, the CARES Act, which provided stimulus checks for every tax-paying US citizen as well as extensions to unemployment, was essentially a giant UBI experiment - not for AI, but for a giant shift in economic activity in which many individuals became unemployed nonetheless.
UBI has three basic properties, of which the CARES Act fulfills none:
1. Covers cost of living for some basic standard (debatable, but should include food, water, and shelter at minimum)
2. Is available to everyone without onerous requirements or means-testing (i.e., is "universal")
3. Carries a reasonable expectation of continuity such that people can plan around continuing to have it
The CARES Act was an emergency measure that absolutely zero people expected or intended to be permanent; it was laden with all the means-testing and bureaucratic hurdles that unemployment generally carries, and it very clearly did not provide adequate support for quite a lot of people.
It's meaningless to call something a "test" when it carries none of the properties that proponents of the policy claim would make it desirable. The only perspective from which the comparison even makes sense is that of someone who hasn't considered it seriously and has come up with a strawman to argue against (i.e., something like "UBI is when the government gives people some money").
It also seems worth mentioning that I really don't buy the highly political claim that some people seem to view as self-evident: that people remained unemployed longer because they got extended unemployment benefits, rather than as a result of the massive economic shock that prompted that decision in the first place
It may be modeled on UBI, but it's not UBI. Universal basic income is perpetual and unconditional, while the CARES Act was a one-time payment in response to COVID. I'm sure there's still a lot we can learn from it, but I also expect many of the psychological effects will be somewhat muted.
I can't say I know what the future economy will look like, but I can say for certain it won't just be the current one minus 99% of jobs (with all those previously employed people living in abject poverty). Capitalism doesn't work without customers. Capitalism doesn't work without scarcity. Capitalism depends on a minimum money velocity where paradoxically if you collect 100% of the money, it becomes completely worthless.
To me, it seems guaranteed that there will be drastic changes. There will be many attempts at new ways of organizing society, with successes and failures along the way - not out of altruism or a desire to share, but out of the self-interest of those who accumulate the power afforded by AI and automation.
AGI would give you access to millions of times more resources than you currently enjoy. So I would suggest that you have absolutely nothing to worry about on the income/employment front.
One company having that much power is a different matter, and I address it by looking at how we can distribute GPT training through decentralized and open platforms.
It won't be me or you. Whoever it is, they will not share any of the economic upsides of AI with the public unless they are legally forced -- zero, zip, zilch, nada. Even then, they will keep the lion's share for themselves, and they will use their surplus to shape society to their advantage.
So yes, many millions of us have a big problem to worry about, especially considering how much struggling there already is now.
If the AGI is open source and operates through a decentralized platform, then everyone/no-one owns it, and the upside will be fully distributed to end users.
But even if it stays in private hands, one company monopolizing a technology and keeping it expensive/out-of-reach is generally not how technological innovation works. There is generally intense competition between providers, with each aggressively cutting prices to capture market share.
> AGI would give you access to millions of times more resources than you currently enjoy. So I would suggest that you have absolutely nothing to worry about on the income/employment front.
I'm puzzled as to how you can characterize a description of AGI's functions as "theology." AGI represents the automation of what we would describe as human-level thought, transforming it into a mass-produced service that costs almost nothing to acquire. Consequently, the cost of any product or service that requires human labor is expected to trend toward zero.
We're already witnessing this with the creation of textual and graphical content through ChatGPT. It's now possible to generate various types of text content and a wide range of graphics at the cost of 10 cents for a dozen ChatGPT API calls. And the work is completed in a few minutes, as opposed to several hours. This represents a several-orders-of-magnitude increase in per capita productivity for these specific tasks. As AI technology advances, the scope of applications benefiting from such productivity boosts is expected to widen, which means human civilization will experience a revolutionary increase in productivity, and with it, resource abundance.
The graphics I've produced are absolutely phenomenal, IMHO. I find that textual content depends highly on the prompt and often takes quite a few iterations to get right.
Often with text, the GPT is more of an assistant/advisor/proofreader for me, rather than a stand-alone creator of quality content.
Sometimes it works really well for emails, like producing responses to formal communications. It can cut the time to respond from 10 minutes to 30 seconds.
What will you do with millions of times more resources than you currently enjoy?
I, for one, would be overwhelmed. In the meantime, I will be passionate and joyful about the things I like, regardless of whether AI can do them a million times better. I have fun doing them... while the AI is... just AI.
Personally, I love life, and I expect AI will allow me to spend more of my life taking it in instead of running through the gauntlet of errands needed to stay alive. I also expect it will help us live much longer, which is an absolute blessing considering how precious every moment is.
Stop pathologizing normal human feelings? If you're worried, learn how to use the tools to give yourself a competitive advantage. See steam trains, electricity, microchips, computers and the Web for historical examples of worried people adapting to game changing tech.
I am. I know we're in a situation now, as programmers, where there is more AI tooling and more programming jobs - but it's difficult for me to see that lasting.
You could be the best at using the tools, but I think there could be a point where there is no need to hire because the tools are just that good.
Have you considered what an enormous jump in career that is? Or that all the people who already started building AI products are being obsoleted by OpenAI a year after they started?
What concerns me is that Google and OpenAI are racing us to a point where almost no product is valuable. If I can just have AI generate me a booking.com clone, then what’s booking.com worth?
There is zero chance this tech is going to be locked up by a few companies, in a year or two open models will have similar capabilities, I have no idea what this world looks like but I think it’s less of a concern for individuals and more of a concern for the global economy in the short term.
Outside of all this, yeah, we're either going to have to adapt or die.
Well, alone I was able to launch a software company in 2010. From accounting to nginx, everything was automated.
Alone, maybe I will be able to launch a unicorn in 2030. It’s just tools with more leverage. The limit is just the computing resources we have, so we’ll have to use computing resources to calculate how many of the earth’s resources each of us can use per year, but that seems like a usual growth problem.
That is my point, though. I mean, it's good you could launch the company; I just don't know what happens to the large companies that employ a lot of people. Seems like they're heading into dangerous territory.
Times change. People adapt. Happened before, will happen again. Some adapt willingly, some hesitatingly.
The current AI wave is a corporate-funded experiment desperate to find something compelling beyond controlled demos, to economically recoup the deepening hole in their balance sheets. The novelty has begun to wear off, the innovation has started to stagnate, and the money is running out. The only money-making innovation left to be seen is in creating more spam. That's where I see this wave headed.
OpenAI has proven it's a shit company with rotten fundamentals playing with a shiny new toy. They will crash and burn spectacularly. As many before have done in various fields.
My reaction after using any AI tool from the last couple years to do anything meaningful ends with just a big facepalm.
As Antonio Gramsci said: “Pessimism of the intellect, optimism of the will.”
The forces of blind or cynical techno-optimism accelerating capitalism may feel insurmountable, but the future is not set in stone. Every day around the world, people in seemingly hopeless circumstances nevertheless devote their lives to fighting for what they believe in, and sometimes enough people do this over years or even decades that there’s a rupture in oppressive systems and humanity is forever changed for the better. We can only strive to live by our most deeply held values, within the circumstances we were placed, so that when we look back at the end of our lives we can take comfort in the fact that we did the best we could, and just maybe this will be enough to avert the inevitable.
Honestly? Just an embrace of cheerful, curious nihilism. Between this and climate change, we are entering interesting times, and remembering that I’ll be able to “opt-out” at a time of my choosing, and then embracing the time left with happiness and curiosity.
“Glad did I live and gladly die, And I laid me down with a will … Here he lies where he longed to be; Home is the sailor, home from sea, And the hunter home from the hill.”
And yet you're dodging the question: how did it end up for the Luddites, specifically? You're not a hypothetical person in the far-future that has had time to adapt to this technology, you're a person in the here and now, and the wave is rushing towards you.
Well, smashing the machinery certainly didn't get them their jobs back. Worrying about the machinery didn't get them any new jobs either. I guess some of them fell destitute because they were too angry or unfortunate, some got jobs doing something else, and others got jobs operating the machines that had replaced them.
There are winners and losers, but it's absurd to think that we should avoid progress to protect jobs.
My preferred method for ensuring a just transition during times of technological progress (and for eliminating involuntary unemployment generally) is a Federal Job Guarantee http://www.jobguarantee.org/
Personally, the advances in AI have just made my job easier and allowed me to get more done. I don't see that trend changing either.
It's just a digital CONTENT! machine, it's not a big deal. CONTENT! consoomers - rejoice, producers - keep producing. Where does the sense of doom come from? What power does this company even have? The power to churn out more movieslop? Is that powerful? We've had decades of that, it's tiresome.
Touch grass, tend your garden, play with your kids, drink a beer, bake a pie, write a poem, take a walk, carve a sculpture, play a board game, mend a sweater, take a breath. Relax.
The climate is burning. The Amazon is likely collapsing sooner than we expect. There are plenty of wars around the globe and a major multi-country conflict brewing in Africa. Western politics are laughable, yet still the best option if you want to be free to say what you want and have rights. Inequality is incredibly high and rising. And so on.
So there are a lot of things to be depressed about before you get depressed about a little increase in misinformation and idiocy on the interwebs. I mean... things like polio and the measles are literally back to fuck with us because people are so fucking stupid they think vaccines are a bad thing.
A lot of the things you mention are happening because of rampant misinformation. Something these tools will help create more of at an unstoppable rate.
The point I was trying to make was that there is no reason to worry about adding fuel to a fire. Of course you're correct, it'll get worse, but it's not like it wasn't terrible to begin with.
If anything, the optimist in me is hoping that all this "AI"-generated content is going to make the internet so useless that our society (well, the part that doesn't believe the earth is flat and that Bill Gates put mini clones in the vaccines) finally gets away from it. In my region of Denmark, our local police posts their immediate updates on Twitter, which was fine when everyone could see them, not so great now that you need an account. I very rarely care about what they post, but around New Year's a fireworks container blew up near here, and I had to register (and later delete) a Twitter account to figure out whether I had to worry about it or not. It'd be nice if the impending flood of fake content moved our institutions and politicians away from big-tech social media platforms, and it just might if those platforms become useless.
I think it's naive to attribute all of the world's problems to "misinformation". You can give everyone the same information; in fact, we all already have the same information. But perspectives will vary, and there will be conflict.
I don't attribute it all to misinformation. I attribute it all to greed but misinformation is a great tool for the ruling class to satisfy their greed.
Unless you think everyone else is lying, people can and do find meaning in their lives, in the activities they love. AI has no bearing on that (just careers) so there's no reason to believe it would "accelerate" anything.
Thanks for taking the time to explain. I do kind of think people are lying (to themselves). Ignorance is (temporary) bliss.
I'm seeking lasting meaning; not 'meaning' that dissolves after a season, or at best, at the end of a life.
What I meant by 'accelerat[ing] the realization' is that all of our earthly desires will more readily be fulfilled, and we will see that we still feel empty. AI is like enabling a new cheat code in the game of life, and when you have unlimited ammo the FPS becomes really fun for a moment but then loses its meaning quickly.
I can get that too. We're the arbiters of meaning in our lives.
> earthly desires will more readily be fulfilled, and we will see that we still feel empty
You misunderstand if you believe that the secular perspective on meaning is to reach it through sensualism, by consuming. That is all that AI can more readily fulfill.
Read the Bible. Specifically Revelation, 1 and 2 Thessalonians, and Daniel. If you haven't before, you'd be surprised how much of what's taking place now is prophesied.
Many people, rightfully, (over-)react to the American caricature of Christianity (mega churches, Kenneth Copeland, etc.) as the definition of what it is (that's arguably the deception hinted at in the Bible), but reading/trusting the raw word—what's referred to as "sola scriptura"—is remarkably helpful in navigating what's taking place.
> Read the Bible. Specifically Revelation, 1/2 Thessalonians, and Daniel. If you haven't before, you'd be surprised how much of what's taking place now is prophesied.
…said every doomsday preacher since the Bible was written.
I'm not a preacher and I don't subscribe to any church/denomination. I think, generally speaking, religious leadership in the world is in a state of apostasy and is guilty of leading people away from God/Christ's message (which the Bible prophesies would happen).
Read them. They're all quite similar, mostly changing in tone or structure. I recently built an app to compare the ESV, KJV, NASB, NLT, AMP, and ASV translations side by side, and they're all very similar. Even obscure translations follow the same structure and message (they have to; doing the opposite is warned against in the Bible).
Protestantism copied/copies a lot of the non-Biblical tradition propagated by Catholicism (e.g., Sunday replacing the Sabbath, recognition of non-Biblical holy days, claiming "Jesus nailed the law to the cross" when the exact opposite is stated in the Bible, etc).
That's neither here nor there when the entire point was to eschew non-Biblical tradition maintained by the Church. Notwithstanding that there are various Protestant sects and some do not practice what you're accusing them of. It's a large tent, the beliefs aren't that specific.
You're completely missing the point. Unfortunately, it seems you lack the capability to see the point—to wit, you're not special. So, as the Lord commanded us in Isaiah 1:16 — I wash my hands of thee.
And religion in general is in a state of apostasy and is guilty of leading people away from reality. That's why it was invented after all, and it sure has done its job. You'll never find a place more full of delusional self-indulgence and aggrandizement than a church, regardless of which religion or denomination they subscribe to.
> You'll never find a place more full of delusional self indulgence and aggrandizement than a church, regardless of which religion or denomination they subscribe to.
Correct, which is why I avoid religion (in the institutional church sense). I'm a bit of an odd duck because I came to the Bible after having been a practicing Buddhist for several years and generally being unexposed to Christianity (save for a lukewarm exposure to Jesuit Catholicism) or any religion growing up.
Having lived a mostly secular life and only coming to Christianity later (at age ~30), I can confidently say that it's taught me that reality is highly subjective. What most people consider "reality" is just the interpretation of what they see that keeps them from losing their mind. For some, reality is being an unhinged hedonist, for others it's planting a garden, and for others it's generally just "trying to be nice and getting along."
Personally, God/Christ (and by extension, what's recorded in the Bible) is the interpretation of reality that makes the most sense to me. In practice/study, I've found that it maps 1:1 with what I see while also filling in the blanks on things I can't explain (e.g., the ability for the human body to heal itself, the pace/behavior of nature, or humanity's unrelenting drive to destroy what it doesn't/refuses to understand).
I guess it’s nice that you believe that, but the truth of the matter is that you are about as close to traditional right-wing mainstream Christianity as it’s possible to conceivably get. Like, if I were to imagine the archetypal Christian hypocritical sinner… it would be you.
Nothing protects your other examples from corruption.
> why choose one of the most corruptible and corrupted approaches?
Because when the non-prophetic elements of it are applied to life, all of the anxiety, fear, and dread you feel evaporates. It's only when you view it through the lens of a "church" or "leader" (read: group) that it loses its meaning.
I've read the I Ching and it lacks a religious/church element which leads to the conclusion you've had. It's not until people take it and turn it into something it isn't that it loses its value.
Arguably, Christianity, due to its claims, has become weaponized. Interestingly, this very outcome is prophesied in the Bible (which, personally, cements my faith in what it prescribes).