Because only young people can make time for things.
There are plenty of professionals with jobs and families making time for AOC because they enjoy it. Doing the problems at the same time as everyone else is a VERY different experience from doing them whenever you'd like.
If you don't want to make the time for it, power to you. I'd recommend most people drop off after the first 10ish days. But don't delude yourself by writing this off as the domain of "young people" or those without responsibilities. You're making a decision. Own it.
I appreciate your perspective and it is correct. I should have phrased it differently.
Imho: I've worked with long-lived codebases my entire career. If the goal is some objective quality of the solution, then I don't believe in time limits. The longer I work, the more things get patches/updates/remasters; the value of better code goes up, and the value of arriving at any kind of solution overnight goes down.
For software that's meant to be maintained for long periods, especially by others, I agree with you.
The thing about AOC is that it's really less about the code that you generate, and more about the process of solving the problem. The challenge is really what you make of it. Some people will golf it, some will go for speed, other for performance, etc.
That's why it's so different to solve the problems in "real time". There's a huge community of people solving the same problem that you can interact with and bounce ideas off of. Even just a few days after the problem is released, most of that active discussion has dried up, so you can no longer participate in that discourse.
So, again, I don't think there's anything wrong at all with what you're saying, but there are other elements to consider beyond maintainable code and pristine solutions.
> Doing the problems at the same time as everyone else is a VERY different experience from doing them whenever you'd like.
I agree and I happen to think the experience of doing it later than everybody else is significantly better. If I search for “AoC 2024 day 12 hint”, I’ll get better results on Jan 12 than Dec 12.
“Dictated” no, “highly influenced by,” yes. It’s insane someone could think otherwise.
If your organization communicated by carrier pigeon, it’d make some types of organizations and cultures and solutions possible while precluding others.
Obviously communication methods influence organizations :|
We switched to Campsite and our mode of thinking changed immediately. Suddenly we were able to have long-lived, complex conversations. Which is important to us as a company that has to solve complex problems over long periods of time.
$10M is never work again money literally anywhere in the world. Don't kid yourself. Buy a $3.5M house outright and then collect $250k per year risk free after taxes. You're doing whatever you want and still saving money.
The problem is, if you're the type of person able to get to $10M, you'll probably want more, since the motivation that got you there in the first place will keep you unsatisfied with anything less. You'll constantly crave more, by orders of magnitude.
NYC and SF both appear to have ~1% property tax rates
Utilities are an order of magnitude less than being taxed ~$35k/yr and hardly worth worrying about while discussing eight figures
Maintenance can vary, but all 3 costs you mentioned combined would be 2 orders of magnitude lower annually than that net worth, which seems easily sustainable?
The neat part is that for a 3.5M house in the Bay area, the only maintenance required is changing the rain fly every year and the ground pad every couple.
And who is going to fix your shower when it leaks, and install solar panels, or redo your kitchen because your parents are living with you now and can't bear to leave their traditional cooking behind?
A whole new shower is less than $200 at REI, and solar generators ship directly to your house.
(And on a serious note - if your parents are both still alive and moving in with you while they have hobbies and self-actualization, you're way ahead of the game)
Assuming they're 40, how far do you think $250k will go 20-30-40 years from now? It's not a stretch to think dollars could be devalued by 90%, possibly even worthless, within 30 years.
I love how the comment I'm responding to literally says "then collect $250k per year risk free after taxes," and then you all pile onto me with downvotes telling me that he's not just going to invest in treasuries (which is exactly the implication of HIS comment and not mine).
i've gone back and forth on this over the last few months.
I started out thinking that we've all been conditioned by bad customer support chatbots whose only purpose is to look up facts from the FAQ and then tell you to call the real customer support line to actually handle your problem. the problem was that the chatbots weren't granted the ability and authority to actually do things. wouldn't it be great if you could ask a bot to cancel your account or change your billing info and it would actually do it?
but then i realized... anything with a clearly defined process or workflow like that would be even better if it were just a form on an account settings page. why bother with a chatbot?
customer support lines run by humans exist for two reasons:
- increase friction for things you don't want your user to do (like cancel their account without first hearing a bunch of sales pitches)
- handle unanticipated problems that don't fit into the happy-path you've set up on the settings page
My worry is that business dudes will get excited about making chatbots that can do the former, and they'll never trust an AI to be able to handle the latter. So I'm now of the opinion that AI customer support will only be used to make things worse.
Customer support isn't paid well, so they often aren't motivated to become very skilled beyond the level of a chatbot before they move on to other things. So the interface to bad docs doesn't matter much. And good docs are very hard to produce. AI magnifies problems when good docs are lacking.
> aren't motivated to become very skilled beyond the level of a chatbot
Everyone has some amount of common sense. The current state of the art does not, so it cannot make decisions. This is why these things can't currently replace real support beyond being a search function that is exceedingly capable of interpreting natural language queries and, optionally, rephrasing what the found document says to better fit the query.
You can't even have these systems as first-line support, verifying whether the person has searched the docs, because you can't trust them with the decision about whether the docs' solutions are exhausted and human escalation is needed. There currently simply needs to be a way to reach a human. I'm as happy as the next person to find that a no-queue computer system could solve my problem, so I use it when my inquiry is a question and not a request, but a search function is all they are.
Chatbots are loaded with issues. But I have also had a lot of issues with humans.
By the time I have an issue, I have usually covered basic ideas and FAQs already. Currently, I tend to use perplexity supported by ChatGPT before engaging online tech support, and I create a document for them before beginning.
There's a third case: dealing with folks who just aren't technically savvy enough to figure some things out on their own, no matter how intuitive, well documented, or fully featured your product is.
I think I'd rather troubleshoot with a well-scripted AI chatbot, than a human being who's forced into the role of an automaton - executing directly from a script. Just, FFS, let me escalate to an actual competently trained human being once I've been through the troubleshooting.
There's no wait in line. There's no waiting 2 min for each response in chat, or waiting 5 min on hold while the rep figures out what to do. And I've, shockingly, gotten issues resolved faster and better.
Using one semi-popular consumer app -- once it pointed me to docs on their site that Google wasn't finding because I didn't know what keywords to use. And twice it escalated me to send a message to the relevant team, where I got a response that addressed my problem -- and where escalation would have been necessary with a human call-center rep anyways.
The point is that it was far, far faster than any chat rep OR phone rep. And it's far faster to escalate too.
I'm sure this experience isn't universal, but I've been truly shocked at how it's turned what are otherwise 15-20 minute interactions into 3 minute interactions. At the same level of quality or better.
There's a non-zero chance that real humans working as customer service agents will invent facts, too (whether to try and be helpful about something they're not completely sure, or just to get a problematic customer to leave them alone)
I've recently encountered one that just sends you in a loop, and there is literally no way to actually speak to a real person. Unless you want to give them more money; they're very responsive in that case.
This is a billion-dollar company you have definitely heard of.
I've had exactly one AI chatbot point me to the right documents. All the other interactions were exercises in frustration, and I've canceled more than one product due to shitty AI support. When I have a question, if an automated system could handle it, I wouldn't have a question.
Doesn't need to be AI, most customer support was already automated before ChatGPT rose to prominence. Hell, I developed a mobile website once for a power company that was basically a wizard / checklist of "Have you checked for known outages? Have you checked your breakers? Have you checked if your neighbours have issues too?" before they were shown the customer service number.
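A checklist like that doesn't need any AI at all; here is a minimal sketch in Python (the questions and phone number are made up for illustration):

```python
# Toy sketch of the pre-support triage wizard described above.
# Questions and phone number are illustrative, not from any real company.
CHECKLIST = [
    "Have you checked for known outages?",
    "Have you checked your breakers?",
    "Do your neighbours have power issues too?",
]

def triage(answers):
    """answers: one boolean per checklist item, True = already checked."""
    for question, checked in zip(CHECKLIST, answers):
        if not checked:
            # Send the caller back to the cheap self-service step first.
            return f"Please try this first: {question}"
    # Only once the basics are exhausted do we reveal the expensive channel.
    return "Still stuck? Call customer service: 0800-000-000"
```

The point being: for a fixed, known workflow, a plain decision list beats a chatbot on both cost and predictability.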
Human contact doesn't scale, or is prohibitively expensive. I sat with customer support a while ago (again energy sector, but different company) to observe, and every phone call was at least ten minutes, often 20-30, plus some aftercare in the form of logging the call or sending out a follow-up email.
They also did chat at the time, where a chatbot (which wasn't ChatGPT / AI based yet, but they were working on it) would do the initial contact, give low-hanging-fruit suggestions based on the customer's input, and ask for information like their address before connecting to a real human. The operator was only allowed to handle two chats at a time, and each chat session took about half an hour, with some ending because the person on the other side idled too long. I mean, granted, the UI wasn't great and the customer service guy wasn't a fast typist, but even then it was time-consuming and did not scale. They had two dozen people clocked in; if they were all as fast as this one person, they could handle 50 support calls an hour at most.
It does not scale. This was for a company with about 2.5 million users who rarely need customer support. Compare with companies like Google or Facebook that have billion(s) of users. They automated and obfuscated their customer support ages ago.
2.5 million users : 24 support staff
1 billion users : 9600 support staff
If it scales linearly, that's about 10k support staff per billion users. I was going to say that a 10,000-person department for handling customer support sounds like it doesn't scale, but maybe I'm wrong, given that it's only about 5% of Google's headcount.
Also in terms of costs: if those support staff cost 100 grand a year in salary and other costs, staffing the 2.5M-user company with those 24 support crew 24/7 (3 shifts, let's pretend it's equally busy at 3AM) results in some 25 cents per month per user that need to be priced into the product. The transaction fees on a monthly billing system are likely higher than that of a skilled support team if this is a representative scale for the industry
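For what it's worth, the arithmetic above holds up; a quick sketch using the figures assumed in the comment ($100k all-in cost per head, flat load across three shifts):

```python
# Back-of-envelope support-cost math, using the figures assumed above.
users = 2_500_000
staff_per_shift = 24
shifts = 3                      # 24/7 coverage, pretending load is flat
cost_per_head = 100_000         # assumed annual salary + overhead, USD

total_staff = staff_per_shift * shifts           # 72 people on payroll
annual_cost = total_staff * cost_per_head        # $7.2M per year
per_user_monthly = annual_cost / users / 12      # ~$0.24 per user per month

# Linear extrapolation to a billion users:
staff_at_1b = staff_per_shift * (1_000_000_000 // users)   # 9,600
```

So roughly a quarter per user per month, which is indeed in the neighborhood of card-processing fees on a monthly bill.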
I frankly doubt the numbers, surely it costs more than this for an average company?
People want their support solved as quickly as possible. They don't want to talk to AI support bots because they're just an inefficient, error-prone wrapper over the documentation. If you have an actual support need (as opposed to "I just haven't read any of the documentation"), that kind of AI support isn't going to be helpful.
If you have an AI customer support that can actually support customer service requests and provide resolution, people will use it and be happy about it, or at least indifferent.
This will depend on your product. I have a side project where I get a few support calls per day. 95% of the calls can be handled by just quoting documentation/FAQ verbatim. The customers are typically not very sophisticated computer users.
Broadly, I agree. And I am furious with Progressive insurance for requiring a smart phone/mobile app to file roadside assistance claims, and my inability to get someone real on a call.
But,
In this particular story, the people were asking questions that were answered in the instructions.
No one wants to waste their time answering stupid questions, particularly if they are a solo small shop who gets entitled people asking questions around the clock.
This isn’t really customer support, but prisma (popular typescript ORM) has an AI that can answer just about any prisma-related question. It’s got a great RAG setup, can help think through novel scenarios, and always refers to specific docs pages and GitHub issues.
I think it’s made by a company called kapa. Those guys are gonna go far. That thing works SO well. I’ve been imagining how good life would be with a prisma-style AI docs assistant for things like massive, obtuse google APIs.
I want to talk to an AI for customer support as the first line so long as there is always a "Talk to a human" escape hatch.
And for less than about $50 a month, I understand why they need to spend less than half an hour per month to retain me. It'd be net negative profit otherwise. (unless they offshore, in which case the math is only slightly better).
And, yet, millions already do. The point of AI for customer support is to handle the very simple requests (maybe half). The rest, you can escalate. If AI doesn't know what to do, "Hmm, I'm not sure. Let me escalate your question/request to my manager." For most normies, this will work well.
No one wants to perform customer support either. Generally, people who are smart and capable of offering good support will stop doing it because there are more fruitful and enjoyable things for them to do.
uh, I beg to differ. I felt like an autocomplete with a knowledge base and "direct links to the right email forms" would have been faster than the fake chat interface that the "bot" uses.
(Also, if you own a home in NY and use Lemonade -- do know that they don't cover cast iron piping (extremely popular in NYC). I found that out at renewal...)
I am implementing a support system for my side project which combines the knowledge base (FAQ) with a chatbot. You can access all the answers by browsing the FAQ. If you want to contact me, you first talk to the chatbot, which has been prompted to answer only based on what the FAQ says. If it can't answer from that, it makes sure all the details and the problem description are there, and then forwards the ticket to me. In other words, the chatbot is the first line of support.
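That kind of FAQ-first routing can be sketched in a few lines; here a toy keyword lookup stands in for the prompted LLM, and the FAQ entries and field names are made up:

```python
# Sketch of an FAQ-first support flow. The FAQ lookup is a toy keyword
# match standing in for a RAG/LLM answer; entries are illustrative.
FAQ = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "export data": "Settings -> Account -> Export gives you a CSV dump.",
}
REQUIRED_FIELDS = ["email", "problem_description"]

def faq_answer(text):
    for keywords, answer in FAQ.items():
        if keywords in text.lower():
            return answer
    return None

def handle_message(ticket):
    answer = faq_answer(ticket.get("problem_description", ""))
    if answer:
        return {"action": "reply", "text": answer}       # FAQ handled it
    missing = [f for f in REQUIRED_FIELDS if not ticket.get(f)]
    if missing:
        return {"action": "ask", "fields": missing}      # gather details first
    return {"action": "escalate", "ticket": ticket}      # forward to the human
```

The useful property is that every ticket that does reach a human arrives complete, with the trivial ones already filtered out.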
Eh... I think there's a balance to be struck. You could leverage AI to handle the initial messages (90% of which are tire kickers or scammers) and funnel worthy exchanges to continue the conversation manually.
Once people notice AI is responding they will skip it and will request to talk to a human. AI will look the same as FAQs or Chatbots, people don't want to interact with them, they want a human being that is able to understand their problem exactly as it is.
The right pattern is to put them directly in a queue to talk to a person, but have a system (AI or otherwise) in the queue gather the minimal information: have the person explain the problem (and have something transcribe it), then have the system transfer them to the appropriate team after parsing their problem.
Or for really common cases (i.e., turning it off and on, you're affected by an outage, etc.), redirect them to a prerecorded message and then let them know that they're still in the queue and can wait for a person. 9 times out of 10 it'll solve everything, but it also reduces friction for the simple things that can be answered right away.
Most chatbots are both useless and tedious to interact with. But I've also had plenty of interactions with human first-level support that's just following a script without any actual understanding. An AI would be able to provide a genuine improvement over that.
AI isn't an improvement for companies that already provide great customer support, but it has the ability to seriously raise the bar for companies that want to keep customer support costs low or that have a lot of trivial requests that they have to deal with cost-effectively
That is exactly what is happening at my employer, and it's been really effective for trivial support, especially when it's empowered to make meaningful changes on the customer's behalf. It's got large swaths of the whole UX in chat, with an authenticated session. You could see it being a better experience than clicking around anyhow. It does a great job at search too. Lots of room to improve, but it's hitting its targets for reducing human support time and as a sales tool.
But there are many ways in which AI can improve or help support. So even if "AI chat support" turns out to not work, AI can still be very helpful in automating support.
Like detecting duplicates, preparing standard answers, grouping similar requests, assigning messages to priorities and/or people and so on.
Even LLMs can do many of what I mention. Categorizing, grouping, assigning prios etc. maybe not as good as dedicated AI trained for this purpose only (I guess many could be "simple" bayesian filters even) but good enough and readily available.
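As a flavor of how simple some of those tasks can be, here's a toy duplicate-ticket detector based on word overlap; a production version would use embeddings or a trained classifier, but the shape is the same:

```python
# Toy duplicate-ticket detector: Jaccard similarity over word sets.
def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa or wb) else 0.0

def find_duplicates(new_ticket, open_tickets, threshold=0.5):
    """Return open tickets whose wording overlaps enough with the new one."""
    return [t for t in open_tickets if jaccard(new_ticket, t) >= threshold]
```

None of this touches the risky "make decisions for the customer" part; it just cuts down the pile a human has to wade through.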
The only thing I care about is are my problems solved for minimal effort and time invested on my part. Whether it's AI or human doing the solving, I don't care.
Every single post of this type is wrong. You don't need to read beyond the first line. "Scrum is horrible" - no, it's not (at least not necessarily) and asserting that you somehow know that this is true for _everyone_ is patently ridiculous. The author is taking their experience, finding others that validate it, ignoring those who disagree with them, and making sweeping assertions that make absolutely no sense.
Focusing on whether Scrum, Kanban, XP, etc. are good or bad misses the point entirely. They are neither good nor bad in principle: they are either good or bad for different teams in different situations, and even then depending on the specific implementation. There are very happy teams using Scrum and performing excellently as a result. You cannot tell them they are doing it wrong. They're not.
My teams don't run scrum, but I consult for many who do, and I know how well they're doing and how happy they are with their situation. I also know of others who hate it.
I don't know why it's so difficult for engineers to understand this. Your experience is not universal. You don't know everything.
Sounds like you indeed didn't read beyond the first line - this article argues that it is not scrum (etc.) which is horrible, but the context in which these systems are used.
The first 4 paragraphs were enough to know that it isn't worth the time. You can't spend 50% of the article being so off base and have the rest be worthwhile.
I thought the opening line "Scrum is horrible" made it immediately clear that this is an opinion piece. I can't speak for this author, but when I say something like "your data fits in RAM" or "just say no to microservices" I am well aware that some data actually does not fit in RAM and sometimes microservices are actually appropriate.
I'm pretty happy at a Scrum shop right now, but I appreciated reading this. If you have a passionate opinion forged from personal experience, I think it's best to just present it that way rather than trying to tone it down or dress it up into something tamer but more universally correct.
Who is Scrum for? I would argue it's not for the developers. In that case, whether or not some teams are happy using it doesn't really matter. It's a tool to give the illusion of metrics and estimation in a situation that is antithetical to it.
Someone looked at the Agile manifesto -- specifically that part of "people over processes" -- and decided what was really needed was a strict process and a bunch of ceremonies to go with it! Can developers be happy with this? Absolutely. Are there plenty of developers unhappy with it? Also absolutely. I think that means we should still look at it critically.
> Who is Scrum for? I would argue it's not for the developers
The point of Agile/Scrum is to give the developers total peace from interruptions for a sprint of development. _Nobody_ can come and tell them to change the plans mid-sprint. If someone tries, it's the Scrum Master's job to tell them politely yet firmly to leave.
If someone insists on changes, the Official Process says that they delete everything from the start of the sprint and start again - this is to give a clear price for interruptions.
"You want this door over there instead of here where we designed it a month ago? Cool" burns down the house "Let's start over then".
In 99.99% of the cases the request suddenly isn't THAT important and they can wait until the end of the sprint.
BUT
Scrum can only work if it has the buy-in of the whole company; you can't start it from the bottom up without some really big cojones.
> The point of Agile/Scrum is to give the developers total peace from interruptions for a sprint of development.
That part can be done without scrum.
However, you are omitting that Scrum also drags in: Product Owners, sprint plannings, poker, daily standups, sprint demos. And you know what follows a sprint? Another sprint... And every time you miss that (self-imposed, totally arbitrary) deadline, you fail.
I simply don't understand why people do this to themselves.
I think it's a management strategy to keep developers under constant pressure, with their feet close to the fire. There is always something to do, report, meet, or discuss in a formalized way, damn everything else. Even if you didn't do much this week, you still have to report something. Do this two weeks in a row and you're already on the naughty list. Doesn't matter if your tenure was stellar so far.
In my experience the only ones loving it are of course management, and highly vocal and relatively young developers who seek refuge in arbitrary procedures and fake structure.
As others said, it's also a way to try to treat developers as assembly line workers who can be replaced at any time. Doctors or lawyers have no such thing. Nor does any other engineering profession that I know of.
Software development is a realm of snake oil salesmen, move-fast-break-things philosophy, and no unions. So no wonder we get this kind of crap shoved down our throats. And just like with any mass of people, a certain percentage will absolutely adore it.
And I think it's a good way to communicate how the team works.
If you want new features, you talk to the product owner and they'll add it to the backlog - at a priority they choose.
You don't barge into the team's area demanding "just this one quick fix", there's a clear process to get your stuff in the queue.
If your workplace already works like this, you don't need agile, scrum or any other method with a fancy name. Keep doing what you're doing.
Agile is a way to save developers in orgs where it's normal for the sales guy to come in to a team's room and announce that they sold feature X to a client already and it needs to be delivered next Friday.
> Agile is a way to save developers in orgs where it's normal for the sales guy to come in to a team's room and announce that they sold feature X to a client already and it needs to be delivered next Friday.
How does it do that? If the sales guy sold feature X to a client you're going to have to build it -- Scrum be damned. Some process is going to get in the way of making money? I don't think so.
That's why it only works properly when there is FULL buy-in from the C-level down.
EVERYONE in the company must acknowledge that there is a clear process on how to get new features implemented and randomly barging into the team space demanding features is not it.
But I don't need Scrum to get buy in of non-shitty processes from the C-level down. I have that already without Scrum. Clearly people are doing Scrum that is awful. So it's neither a necessary nor sufficient property of Scrum itself.
I agree that not having management/sales directly involved in low-level development planning is a good. But that's not Scrum.
> As others said, it's also a way to try to treat developers as assembly line workers
Yes, a constant reminder that, even if you are highly paid cog in the machine, you are still a cog that can be replaced at any moment.
Because, you know, real decisions about the product have to be made by a businessy dude, while developers are relegated to playing with their CI/CD and expensive toys. I guess what pisses me off the most is the patronizing attitude. All these methodologies scream: "I don't trust you, so I will flood you with process where I can micromanage you."
If you keep missing "deadlines", you aren't planning properly. You can always take more work for a sprint, don't overcommit.
Product Owner is just the one who prioritises the features the team implements, they're the one whose job is to know what the client wants and translate it to more technical terms and individual features.
Sprint planning is just... planning what you'll do for the sprint? Don't you usually plan your projects, or do you just YOLO it? Standups can be just two lines of text in a daily Slack thread, but usually it's handier to just gather around and go through what you did and what you'll be doing.
Demos are the best part of Scrum, you get to show what you accomplished to people who understand the client's requirements and you'll get immediate feedback for the next sprint.
What kind of project model do you like if Scrum is The Devil Incarnate? Who takes care of customer requirements? Do you plan your time use? How do you teach other people in the company not to bother the team during implementation?
> Standups can be just two lines of text in a daily Slack thread, but usually it's handier to just gather around and go through what you did and what you'll be doing.
Why? So a middle manager can create a report and submit it to their superiors? What if we omit all these reporting steps (and the middle manager) and gather only when there's a need to gather? The team knows what they're doing; they can see it in each other's commits and tickets. Who are you trying to inform or report to here?
> What kind of project model do you like if Scrum is The Devil Incarnate?
Kanban
> Who takes care of customer requirements?
Developers. You can also implement a feedback form and read it as a team.
> Do you plan your time use
Roughly. If we knew exactly what each thing entails to plan it in detail, that'd be waterfall. Or scrum, which is mini waterfalls one after another, never ending. That time you spend on planning, better spend it doing, and without a ceremony master.
> How do you teach other people in the company not to bother the team during implementation?
We told them once. If you have a problem when business types are barging in all the time and bothering your devs, maybe the business types can get a course in patience and basic manners?
So you know at all times what everyone in your team does? Do you chat constantly with everyone on your team? Nobody just hunkers down and does stuff for a few days?
It's these cases where the standard checkup time (daily standup) comes in handy. It's not for "the middle management", it's for the team to know how everyone is progressing.
--
How do you manage releases with Kanban? Do you have a release train and release whatever happens to be complete at the time it leaves the station, or something else?
--
So Developers take the time to arrange a meeting with the customer, talk with them about their problem, write down the features etc? And this is fine, but spending a Monday morning every two weeks planning a sprint is anathema?
Nobody in the team is an introvert who considers root canal without anaesthesia more enticing than spending a full day talking with non-technical customers about project requirements?
--
You clearly have the cojones and corporate political power to tell business types and middle managers to fuck off. Many MANY other developers do not, if they tell the sales lead to fuck off with his "urgent" requests mid sprint, they'll be looking for a new job the next morning.
I have seen multiple teams using scrum and not a single one of them was happy. Members of literally every one complained when having a safe option to complain.
Scrum is widely disliked by people who work on teams. It is liked by managers and consultants.
I haven't read the article but in response to your comment here:
Scrum is widely used and also widely disliked. I find that apologists for scrum generally excuse the dislike as 'that's not how scrum is supposed to be implemented' - just as you have here. It's exactly the same form of argument you get from apologists for Marxism or Communism when confronted with the widespread dislike for those systems. If something has generally failed wherever it has been implemented, it's not because it hasn't been implemented properly, it's because it is fundamentally broken.
What I said is much more nuanced. I certainly wouldn't call myself a scrum apologist.
There are good implementations of scrum, and there are teams that like it. In my experience across companies and industries, it's a much larger percentage than unfounded comments like yours and the article would lead you to believe, perhaps 30-40%. Probably due to the SV slant on this site (also why people here always think they know everything). It absolutely has not failed everywhere it's been implemented, and starting from that position is simply incorrect.
My position is that the article (and you) are making incorrect assertions and that you don't even consider that it's a possibility.
Again, your experience is not universal. Stop claiming or thinking that it is and that you know everything. You don't.
I've been on teams where we got to pick what we could deliver in a sprint and utter peace to do it for 2 weeks. Daily scrums were mostly skipped and lasted 5 mins max while we had our morning coffee, standing up.
I've been on a team where "daily scrum" was a 30-60 minute daily Skype call with no agenda, goal or point. The only thing related to Scrum in that company was the terminology, basically a cargo cult.
I've also been on a team that used scrum terms and kinda sorta the process but it was closer to Kanban with releases. But it didn't matter, everyone on the team had 15+ years of experience.
That's funny because maybe my favorite thing about An Elegant Puzzle (the interviewee's book on engineering management) is its economy - it lays out a position and briefly explains then moves on to the next topic.
That is not a benefit. If you use a tool like this to try to compete with sophisticated actors (e.g. all major firms in the capital markets space) you will lose every time.
We come up with all sorts of things that are initially a step backwards, but that lead to eventual improvement. The first cars were slower than horses.
That's not to suggest that Renaissance is going to start using Chat GPT tomorrow, but maybe in a few years they'll be using fine tuned versions of LLMs in addition to whatever they're doing today.
Even if it's not going to compete with the state of the art models for something, a single model capable of many things is still useful, and demonstrating domains where they are applicable (if not state of the art) is still beneficial.
It seems to me that LLMs are the metaphorical horse and specialized algorithms are the metaphorical car in this situation. A horse is an extremely complex biological system that we barely understand and which has evolved many functions over countless iterations, one of which happens to be the ability to run quickly. We can selectively breed horses to try to get them to run faster, but we lack the capability to directly engineer a horse for optimal speed. On the other hand, cars have been engineered from the ground up for the specific purpose of moving quickly. We can study and understand all of the systems in a car perfectly, so it's easy to develop new technology specialized for making cars go faster.
Far too much in the way of "maybe in a few years" LLM prediction relies on the unspoken assumption that there will not be any gains in the state of the art in the existing, non-LLM tools.
"In a few years" you'd have the benefit of the current, bespoke tools, plus all the work you've put into improving them in the meantime.
And the LLM would still be behind, unless you believe that at some point in the future, a radically better solution will simply emerge from the model.
That is, the bet is that at some point, magic emerges from the machine that renders all domain-specialist tooling irrelevant, and one or two general AI companies can hoover up all sorts of areas of specialism. And in the meantime, they get all the investment money.
Why is it that we wouldn't trust a generalist over a specialist in any walk of life, but in AI we expect one day to be able to?
> That is, the bet is that at some point, magic emerges from the machine that renders all domain-specialist tooling irrelevant, and one or two general AI companies
I have a slightly more cynical take: Those LLMs are not actually general models, but niche specialists on correlated text-fragments.
This means human exuberance is riding on the (questionable) idea that a really good text-correlation specialist can effectively impersonate a general AI.
Even worse: Some people assume an exceptional text-specialist model will effectively meta-impersonate a generalist model impersonating a different kind of specialist!
> Even worse: Some people assume an exceptional text-specialist model will effectively meta-impersonate a generalist model impersonating a different kind of specialist!
Specialists exist because a human generalist can no longer possibly learn and perfect all there is to learn in the world, not because the specialist has magic powers the generalist lacks.
If there were some super generalist that could, then the specialist would have no power.
Yeah... no chance. There will be no code in 5 years. What a joke. Even if it were technically possible (it's not), the idea that the humans involved would be able to make it happen in such a short amount of time is laughable.
Seriously, think harder about what you're suggesting here. It's ridiculous.
Dude chill, it's a thought experiment based on how things are moving right now. Maybe I'm off by an order of magnitude in years; who cares. Whether it happens in 5, 15, or 50 years, it's going to happen.
Who said so? It might not happen at all. We might well be coding in a thousand years, why not? Non-programmers seem not to understand that a programming language is just that, a language. We use it just because it is more productive than using plain English, which, now with LLMs, is becoming a better tool for programming. But prompt engineering is still programming, and that won't change in a thousand years.
The idea that programmers won't want to code, or that artists won't want to art, or that musicians won't want to music, is absurd. Creativity is one of the most rewarding things a human being can engage in, just because corporations would prefer to pay for shitty, broken hallucinated code or art doesn't mean humans will stop doing it for their own satisfaction.
This concept obviously flies well over the heads of the LLM/AI/AGI bros who think that our creative lives are going to be rendered obsolete by vegetable silicon. They lack creativity and imagination.
You didn't present it as a thought experiment, and it's also not something to treat flippantly. If it happens, it will be because we achieve AGI and nothing less. I know it's in vogue to think that's 5 years away right now, but we're not remotely close and it's also in no way inevitable.
If it does come to that, whether or not there is code will be the least of our worries.
I don't think that's the point. Death is just a dead-obvious example of the fundamental reality that our actions are constrained. Like I guess I could grudgingly admit that maybe hustle culture's absolutely absurd levels of optimism or self belief might somehow help a person overcome obstacles, but I can't help but feel that a person who thinks this way just fundamentally doesn't understand how the universe works.
We're all just one head injury from being unable to do most of the things we want. Too much kinetic energy and we die or become permanently maimed. This is an unpleasant thing to consider, but I have trouble accepting that it's better to delude oneself.
In my experience, people who have the "I can do anything!" attitude tend to step on everyone else's toes and take up space from people with better ideas but a more realistic self-appraisal. The whole "go big or go home" attitude typical of some parts of startup culture takes up a lot of resources from great ideas that are more realistic.