Hacker News | CKMo's comments

There's definitely a big problem with entry-level jobs being replaced by AI. Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would also need to go through it with them in several back-and-forth feedback-review loops to get it to a usable state.

Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.

Not that I'm saying AI is a godsend, but new grads and entry-level roles are kind of screwed.


This is where the horrific disloyalty of both companies and employees comes back to bite us in the ass.

The whole idea of interns is that they are training positions. They are supposed to be a net negative.

The idea is that they will either remain at the company after their internship or move to another company, taking the priorities of their trainers with them.

But nowadays, with corporate HR actively doing everything it can to screw over employees, and employees being so transient that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.

At my old company, we trained Japanese interns. They would often relocate to the US on 2-year visas and become very good engineers upon returning to Japan. It was well worth it.


I agree that interns are pretty much over in tech. Except maybe at an established company, as a semester/summer trial/goodwill period for students near graduation. You usually won't get work output worth the mentoring cost, but you might identify a great potential hire, and be on their shortlist.

Startups are less enlightened than that about "interns".

Literally today, in a startup job posting to a top CS department, they're looking for "interns" to bring (not learn) hot experience developing AI agents to the startup, for... $20/hour, and get called an intern.

It's also normal for these startup job posts to ask for experienced, professional-grade skills in things like React, Python, PG, Redis, etc., while still calling the person an intern and offering a locally unlivable part-time wage.

Those startups should stop pretending they're teaching "interns" valuable job skills, admit that they desperately need cheap labor for their "ideas person" startup leadership to do things it can't do, and cut the "intern" in as a founding engineer with meaningful equity. Or, if the startup can't afford to pay a livable and plausibly competitive wage, maybe those "interns" are technical cofounders.


>At my old company, we trained Japanese interns. They would often relocate to the US on 2-year visas and become very good engineers,

Damn, I wish that were me. Having someone mentor you at the beginning of your career, instead of having to self-learn and fumble your way around never knowing if you're on the right track or not, is a massive force multiplier that pays dividends over your whole career. It's like entering the stock market with $1 million in capital vs. $100. You're also less likely to build bad habits if someone with experience teaches you early on.


I really think the loss of a mentor/apprentice type of experience is one of those baby-with-the-bathwater losses. There are definitely people with the personality type that thinks they know everything and that nothing can be learned from others, but for those of us who would much rather learn the hows and whys of things from those with more experience rather than getting all of those paper cuts ourselves, working with mentors is definitely a much better way to grow.


Yup. It was a standard part of their HR policy. They are all about long, long-term employment.

They are a marquee company and get the best of the best, direct from top universities.

Also, no one has less than a Master's, over there.

We got damn good engineers as interns.


>Also, no one has less than a Master's, over there.

I feel this is pretty much the norm everywhere in Europe and Asia. No serious engineering company in Germany even looks at your resume if there's no MSc degree listed, especially since education is mostly free for everyone, so not having a degree is seen as a "you problem". But it also leads to degree inflation, where only PhDs or post-docs get taken seriously for some high-level positions. I don't remember ever seeing a senior manager/CTO without the "Dr." or even "Prof. Dr." title in the top German engineering companies.

I think the concept of the cowboy self-taught engineer who dropped out of college to build a trillion-dollar empire in his parents' garage is mostly a US thing.


Graduate assistantships in the US pay such shit wages compared to Europe that you would be eligible for food stamps. The opportunity cost is better spent getting your bachelor's degree, finding employment, and then using that salary to pay for grad school, or having your employer pay for it. I've worked in Europe with just my bac+3. I also had 3-4 years of applied work experience that a fresh-faced MSc holder was just starting to acquire.


Possibly also because they don't observe any added value from the additional schooling.

Also because US salaries are sky high compared to their European counterparts, so I could understand if the extra salary wasn’t worth the risk that they might not have that much extra productivity.

I’ve certainly worked with advanced degree people who didn’t seem to be very far along on the productivity curve, but I assume it’s like that for everything everywhere.


> horrific disloyalty of both companies and employees

There's no such thing as loyalty in employer-employee relationships. There's money, there's work, and there's [collective] leverage. We need to learn a thing or two from blue-collar workers.


> We need to learn a thing or two from blue-collar workers.

A majority of my friends are blue-collar.

You might be surprised.

Unions are adversarial, but the relationships can still be quite warm.

I hear that German and Japanese unions are full-force stakeholders in their corporations, and the relationship is a lot more intricate.

It's like a marriage. There's always elements of control/power play, but the idea is to maximize the benefits.

It can be done. It has been done.

It's just kind of lost, in tech.


>It's just kind of lost, in tech.

Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.

You're dependent on a trained and licensed local showing up at your door, which gives him actual bargaining power, since he's only competing with the other locals to fix your issue and not with the entire planet in a race to the bottom.

Unionization only works in favor of the workers in the cases when labor needs to be done on-site (since the government enforces the rules of unions) and can't be easily moved over the internet to another jurisdiction where unions aren't a thing. See the US VFX industry as a brutal example.

There are articles discussing how LA risks becoming the next Detroit, with many of the successful blockbusters of 2025 now being produced abroad due to the obscene costs of production in California, caused mostly by the unions there. Like $350 per hour for a guy to push a button on a smoke machine, because only a union man is allowed to do it. Or that it costs more to move across a Cali studio parking lot than to film a scene in the UK. Letting unions bleed companies dry is only gonna result in them moving all jobs that can be moved abroad.


Almost every Hollywood movie you see that wasn't filmed in LA was basically a taxpayer-backed project. Look at any film with international locations and in the film credits you'll see a lot of state-backed loans, grants, and tax credits. A large part of the film crew and cast are flown out to those locations. And if you think LA is expensive, location pay is even more so. So production is flying out the most expensive parts of the crew to save a few dollars on craft services?


> Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.

Yet. You can’t yet. Humanoids and VR are approaching the point quite rapidly where a teleoperated or even autonomous robot will be a better and cheaper tradesman than Joe down the road. Joe can’t work 24 hours a day. Joe realises that, so he’ll rent a robot and outsource part of his business, and will normalise the idea as quickly as LLMs have become normal. Joe will do very well, until someone comes along with an economy of scale and eats his breakfast.


All the Joes I know would spend serious time hunting these robots.

IMO, real actual people don’t want to live in the world you described. Hell, they don’t wanna live in this one! The “elites” have failed us. Their vision of the future is a dystopian nightmare. If the only reason to exist is to make 25 people at the top richer than gods? What is the fucking point of living?


> If the only reason to exist is to make 25 people at the top richer than gods?

You just described most medieval societies.

It's been done before, and those 25 people are hoping to make it happen again.


Hoping is the wrong word. They’re trying harder than ever.


I have been in union shops before while working in tech. In some places they are fine; in others, it's where the worst employee on your team goes to make everyone else less effective.


I personally care a lot about people, but if I was running a publicly traded for-profit, I would have a lot of constraints about how to care for them. (A good place to start, by the way, is not bullshitting people about the financial realities.)

Employees are lucky when incentives align and employers treat them well. This cannot be expected or assumed.

A lot of people want a different kind of world. If we want it, we’re gonna have to build it. Think about what you can do. Have you considered running for office?

I don’t think it is helpful for people to play into the victim narrative. It is better to support each other and organize.


Interns and new grads have always been a net negative productivity-wise in my experience; it's just that eventually (after a small number of months/years) they turn into extremely productive, more senior employees. And interns and new grads can use AI too. This feels like asking "Why hire junior programmers now that we have compilers? We don't need people to write boring assembly anymore." If AI were genuinely a big productivity enhancer, we would just convert that into more software/features/optimizations/etc., just like people have been doing with productivity improvements in computers and software for the last 75 years.


Where I have worked, new grads (and interns) were explicitly a net negative.

This is part of why some companies have minimum terminal levels (often 5/Sr) before which a failure to improve means getting fired.


Isn't that every new employee? For the first few months you are not expected to be firing on all cylinders as you catch up and adjust to company norms.

An intern is much more valuable than AI in the sense that everyone makes micro-decisions that contribute to the business. An intern can remember what they heard in a meeting a month ago or some important water-cooler conversation and incorporate that into their work. AI cannot do that.


It's a monetary issue at the end of the day.

AI/ML and Offshoring/GCCs are both side effects of the fact that American new grad salaries in tech are now in the $110-140k range.

At $70-80k the math for a new grad works out, but not at almost double that.

Also, going remote-first during COVID for extended periods proved that operations can work in a remote-first manner. So at that point the argument was made that you can hire top talent abroad at American new grad salaries, and plenty of employees on visas were given the option to take a pay cut and "remigrate" to help start a GCC in their home country, or get fired and try to find a job in 60 days, around early-mid 2020.

The skills aspect also played a role to a certain extent - by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe where universities still treat CS as engineering instead of an applied math discipline - I don't care if you can prove Dixon's factorization method using induction if you can't tell me how threading works or the rings in the Linux kernel.

The Japan example mentioned above only works because salaries in Japan have remained extremely low and Japanese is not an extremely mainstream language (making it harder for Japanese firms to offshore en masse - though they have done so in plenty of industries where they used to hold a lead, like battery chemistry).


> by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe where universities still treat CS as engineering instead of an applied math discipline

That doesn't fit my experience at all. The applied math vs. engineering continuum is mostly dependent on whether a CS program at a given school came out of the engineering department or the math department. I haven't noticed any shift on that spectrum coming from CS departments, except that people are more likely to start out programming in higher-level languages where they are more insulated from the hardware.

That’s the same across countries though. I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.


> I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.

Absolutely, but that's if they are exposed to these concepts, and that's become less the case beyond maybe a single OS class.

> except that people are more likely to start out programming in higher level languages where they are more insulated from the hardware

I feel that's part of the issue, but also, CS programs in the US are increasingly making computer architecture an optional class. And network specific classes have always been optional.

---------

Mind you, I am biased towards Cybersecurity, DevOps, DBs, and HPC because that is the industry I've worked in for over a decade now, and it legitimately has become difficult to hire new grads in the US with a "NAND-to-Tetris" mindset because curriculums have moved away from that, aside from a couple of top programs.


ABET still requires computer architecture and organization. And they also require coverage of networking. There are 130 ABET accredited programs in the US and a ton more programs that use it as an aspirational guide.

Based on your domain, I think a big part of what you’re seeing is that over the last 15 years there was a big shift in CS students away from people who are interested in computers towards people who want to make money.

The easiest way to make big bucks is in web development, so that’s where most graduates go. They think of DBA, devops, and cybersecurity as low status. The “low status” of those jobs becomes a bit of a self fulfilling prophecy. Few people in the US want to train for them or apply to them.

I also think that the average foreign worker doing these jobs isn’t equivalent to a new grad in the US. The majority have graduate degrees and work experience.

You could hire a 30 year old US employee with a graduate degree and work experience too for your entry level job. It would just cost a lot more.


I just can't agree with this argument at all.

Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.

Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.

The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.

But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.

So... why exactly is the entry level screwed?


> Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more.

Maybe tomorrow's interns will be "AI experts" who need less hand-holding, but the day after that will bring kids who used AI throughout elementary school and high school, know nothing at all, defer to AI on every question, and have zero ability to tell right from wrong among the AI responses.

I tutor a lot of high school students and this is my takeaway over the past few years: AI is absolutely laying waste to human capital. It's completely destroying students' ability to learn on their own. They are not getting an education anymore, they're outsourcing all their homework to the AI.


It's worth reminding folks that one doesn't _need_ a formal education to get by. I did terribly in school and never went to college, and years later have reached a certain expertise (with many fortunate moments along the way).

What I had growing up though were interests in things, and that has carried me quite far. I worry much more about the addictive infinite immersive quality of video games and other kinds of scrolling, and by extension the elimination of free time through wasted time.


I mean, a lot of what you mentioned is an issue around critical thinking (and I'm not sure that's something that can be taught), which has always been an issue in any job market, and deskilling via automation (AI or traditional) has been used to remediate that gap.

But if you deskill processes, it makes it harder to argue in favor of paying the same premium you did before.


They don't have the experience to tell bad AI responses from good ones.


True, but this becomes less of an issue as AI improves, right? Which is the 'happier' direction to see a problem moving, as if AI doesn't improve, it threatens the jobs less.


I would be worried about the eventual influence of advertising and profits over correctness


Why is the company who employs the intern paying for an AI service that corrupts its results with ads?


If AI improves to the point that an intern doesn’t need to check its work, you don’t need the intern.

You don’t need managers, or CEOs. You don’t even need VCs.


Too reductionist.


Exactly the right amount of reductionist.


> will need less hand-holding, will be able to leverage AI to deliver more

Well, maybe it'll be the other way around: Maybe they'll need more hand-holding since they're used to relying on AI instead of doing things themselves, and when faced with tasks they need to do, they will be less able.

But, eh, what am I even talking about? The _senior_ developers in many companies need a lot of hand-holding that they aren't getting, write bad code with poor practices, and teach the newbies to get used to doing the same. So that's why the entry-level people are screwed, AI or no.


You’ve eloquently expressed exactly the same disconnect: as long as we think the purpose of internships is to write the same kind of code that interns write today, sure, AI probably makes the whole thing less efficient.

But if the purpose of an internship is to learn how to work in a company, while producing some benefit for the company, I think everything gets better. Just like we don't measure today's interns by words per minute typed, I don't think we'll measure tomorrow's interns by lines of code written by hand.

So much of the doom here comes from a thought process that goes "we want the same outcomes as today, but the environment is changing, therefore our precious outcomes are at risk."


You’re right that AI is fast and often more efficient than entry-level humans for certain tasks — but I’d argue that what you’re describing isn’t delegation, it’s just choosing to do the work yourself via a tool. Implementation costs are lower now, so you decide to do it on your own.

Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.

You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.

Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.

Many reasons why AI can't fill your shoes:

1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.

2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.

Juniors and interns can also use AI tools.


You said exactly what I came here to say.

Maybe some day AI will truly be able to think and reason in a way that can approximate a human, but we're still very far from that. And even when we do, the accountability problem means trusting AI is a huge risk.

It's true that there are white collar jobs that don't require actual thinking, and those are vulnerable, but that's just the latest progression of computerization/automation that's been happening steadily for the last 70 years already.

It's also true that AI will completely change the nature of software development, meaning that you won't be able to coast just on arcane syntax knowledge the way a lot of programmers have been able to so far. But the fundamental precision of logical thought and mapping it to a desirable human outcome will still be needed, the only change is how you arrive there. This actually benefits young people who are already becoming "AI native" and will be better equipped to leverage AI capabilities to the max.


So what happens when you retire and have no replacement because you didn't invest in entry level humans?

This feels like the ultimate pulling-the-ladder-up-behind-you type of move.


IMO comparing entry-level people with AI is very short-sighted. I was smarter than every dumb dinosaur at my first job; I was so eager to learn, and proactive, and positive... I probably was very lucky too, but my point is I don't believe this whole idea that a junior is worse than AI. I'd rather say the contrary.


I don't get this because someone has to work with the AI to get the job done. Those are the entry-level roles! The manager who's swamped with work sure as hell isn't going to do it.


It's not that entry-level jobs / interns are irrelevant. It's more that entry-level has been redefined, and it requires significant uplevelling in terms of the skills necessary to do a job at that level. That's not necessarily a bad thing. As others have said here, I would be more willing to hand off more complex tasks to interns / junior engineers because my expectation is that they leverage AI to tackle them faster and learn in the process.


I thought the whole idea of automation though was to lower the skill requirement. Everyone compares AI to the industrial revolution and the shift from artisan work to factory work. If this analogy were to hold true, then what employers should actually be wanting is more junior devs, maybe even non-devs, hired at a much cheaper wage. A senior dev may be able to outperform a junior by a lot, but assuming the AI is good enough, four juniors or like 10 non-devs should be able to outperform a senior.

This obviously not being the case shows that we're not in an AI-driven fundamental paradigm shift, but rather seeing run-of-the-mill cost-cutting measures. Like, suppose a tech bubble pops and there are mass layoffs (like the dot-com bubble). Obviously people will lose their jobs. AI hype merchants will almost definitely try to push the narrative that these losses are from AI advancements in an effort to retain funding.


We've been doing the exact opposite for some positions.

I've been interviewing marketing people for the last few months (I have a marketing background from long ago), and the senior people were either way too expensive for our bootstrapped start-up, or not of the caliber we want in the company.

At the same time, there are some amazing recent grads and even interns who can't get jobs.

We've been hiring the younger group, and contracting for a few days a week with the more experienced people.

Combine that with AI, and you've got a powerful combination. That's our theory anyway.

It's worked pretty well with our engineers. We are a team of 4 experienced engineers, though as CEO I don't really get to code anymore, and 1 exceptional intern. We've just hired our 2nd intern.


> Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

1. Because, generally, they don't.

2. Because an LLM is not a person, it's a chatbot.

3. "Hire an intern" is that US thing when people work without getting real wages, right?

Grrr :-(


Interns make $75k+ in tech in the US. It's definitely not unpaid. In fact my school would not give course credit for internships if they were unpaid.


Companies reducing young hires because of AI are doing it backward. Returns on AI will be accelerated by early-career staff because they are already eagerly using AI in daily life, and have the least attachment to how jobs are done now.

You’re probably not going to transform your company by issuing Claude licenses to comfortable middle-aged career professionals who are emotionally attached to their personal definition of competency.

Companies should be grabbing the kids who just used AI to cheat their way through senior year, because that sort of opportunistic short-cutting is exactly what companies want to do with AI in their business.


If the AI can write code to a level that doesn’t need an experienced person to check the output, you don’t need tech companies at all.


This is always the case though. A factor of 50x productivity between expert and novice is small. Consider how long it would take you to conduct foot surgery vs. a foot surgeon -- close to a decade of medical school + medical experience -- just for a couple of hours of work.

There have never been that many businesses able to hire novices for this reason.


This is a big part of why a lot of developers' first 1-3 jobs are small mom & pop shops of varying levels of quality, almost none of which have "good" engineering cultures. Market rate for a new grad dev might be X, and it's hard to find an entry-level job at X, but the mom & pop business that needs 0.7 FTE of developer time is willing to pay 0.8X, and even though the owner is batshit insane, it's not a bad deal for the 22- and 23-year-olds willing to do it.


Sure. I mean, perhaps LLMs will accelerate a return to a more medieval culture in tech where you "have to start at 12 to be any good". Personally, I think that's a good (enough) idea. By 22, I had at least a decade of experience; my first job at 20 was as a contractor for a major national/multinational.

Programming is a craft, and just like any other, the best time to learn it is when it's free to learn.


I think for a surgeon, as an example, quality may be a better metric than time. I'll bet I could conduct an attempted foot surgery way faster than a foot surgeon, but they're far more likely to conduct a successful one.


Sure, but no one has found a good metric for actually quantifying quality for surgeons. You can't look at just the rate of positive outcomes because often the best surgeons take on the worst cases that others won't even attempt. And we simply don't have enough reliable data to make proper metric adjustments based on individual patient attributes.


> Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person.

The AI will definitely require handholding. And that hand-holder will be an intern or a recent college-grad.


Are you honestly trying to tell us that the code you receive from an AI doesn't require any of your time to review and tweak, and is 100% correct every time and ready to deploy into your code base with no changes whatsoever? You, my friend, must be a steely-eyed missile man of prompting.


Consider that there are no humans in existence that fulfill your requirements, not to mention $20/mo ones


Why would I consider that when there absolutely are humans who can do that? Your dollar value is just ridiculous. If you're a hot-shit dev who no longer needs junior devs, then spending even 15 minutes refactoring the AI output puts you underwater on that $20/mo value.


>Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.

A company that I know of also has an L3 hiring freeze, and some people are being downgraded from L4 to L3 or L5 to L4. Getting more work for less cost.


"intern" and "entry level" are proxies for complexity with these comparisons, not actual seniority. We'll keep hiring interns and entry level positions, they'll just do other things.


> Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

AI can barely provide the code for a simple linked list without dropping NULL pointer dereferences every other line...

Been interviewing new grads all week. I'd take a high performing new grad that can be mentored into the next generation of engineer any day.

If you don't want to do constant hand holding with a "meh" candidate...why would you want to do constant hand holding with AI?

> I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now.

Not sure what you are working on. I would never prioritize speed over quality - but I do work in a public safety context. I'm actually not even sure of the legality of using an AI for design work but we have a company policy that all design analysis must still be signed off on by a human engineer in full as if it were 100% their own.

I certainly won't be signing my name on a document full of AI slop. Now an analysis done by a real human engineer with the aid of AI - sure, I'd walk through the same verification process I'd walk through for a traditional analysis document before signing my name on the cover sheet. And that is something a jr. can bring to me to verify.


I think it’s the other way around.

If LLMs continue to become more powerful, hiring more juniors who can use them will be a no-brainer.


Yup, apart from a few companies at the cutting edge, the most difficult problems to solve in a work environment are not technical.


This is basically what happened after 2008. The entry-level jobs that college grads used to do basically disappeared and didn't really come back for many years. So we kind of lost half a generation. Those who missed out are the ones who weren't able to buy a house or start a family and are now in their 40s, destined to be permanent renters who can never retire.

The same thing will happen to Gen Z because of AI.

In both cases, the net effect of this (and the desired outcome) is to suppress wages. Not only for entry-level jobs but for every job. The tech sector is going to spend the next decade clawing back the high cost of tech people from the last 15-20 years.

The hubris here is that we've had an unprecedented boom, such that many in the workforce have never experienced a recession, what I'd call "children of summer" (to borrow a George RR Martin-ism). People have fallen into the trap of the myth of meritocracy. Too many people think that those who are living paycheck to paycheck (or are outright unhoused) are somehow at fault, when spiralling housing costs, limited opportunities, and stagnant real wages are pretty much responsible for everything.

All of this is a giant wealth transfer to the richest 0.01% who are already insanely wealthy. I'm convinced we're beyond the point where we can solve the problems of runaway capitalism with electoral politics. This only ends in tyranny of a permanent underclass or revolution.


This is a big issue in the short term but in the long term I actually think AI is going to be a huge democratization of work and company building.

I spend a lot of time encouraging people not to fight the tide and to spend that time intentionally experimenting and seeing what they can do. LLMs are already useful, and it's interesting to me that anybody is arguing they're just good for toy applications. That's a poisonous mindset and, for an individual, results in a potentially far worse outcome than over-hyping AI.

I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.

I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.

The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)

The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.

In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.

Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.

My message to the entry-level folks that are in this weird time period. It's tough, and we can all acknowledge that - but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of cynicism brought upon by years of industry. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.

The number of people trying to convince you that it's just a sham and hype means that you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people who have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating and experimenting with them. LLM capabilities are still changing every 6-12 months. Anybody who has decided concretely that there is nothing to see here is misleading you.

Even in the current state of LLMs, if the critics don't see the value and how powerful they are, it's mostly a lack of imagination that's at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM, then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important, and I'd agree - but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.


What does "huge democratization of work" even mean? What world do you people live in? The current global unemployment rate on my planet is around 5%, so that seems pretty democratised already?


I've noticed that when people use the term "democratization" in business speak, it makes sense to replace it with "commodification" 99% of the time.


What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

Just as the internet was a democratization of information, llms are a democratization of output.

That may be in terms of production or art. There is clearly a lower barrier for achieving both now compared to pre-LLM. If you can't see this, then you don't just have your head stuck in the sand, you have it severed and blasted into another reality.

The reason why you reacted in such a way is, again, a lack of imagination. To you, "work" means "employment" and a means to a paycheck. But work is more than that. It is the output that matters, and whether that output benefits you or your employer is up to you. You now have more leverage than ever for making it benefit you, because you're not paying that much time/money to ask an LLM to do it for you.

Pre-LLM, most for-hire work was only accessible to companies with a much bigger bank account than yours.

There is an ungodly amount of white collar workers maintaining spreadsheets and doing bullshit jobs that LLMs can do just fine. And that's not to say all of those jobs have completely useless output, it's just that the amount of bodies it takes to produce that output is unreasonable.

We are just getting started getting rid of them. But the best part of it is that you can do all of those bullshit jobs with an LLM for whatever idea you have in your pocket.

For example, I don't need an army of junior engineers to write all my boilerplate for me. I might have a protege if I am looking to actually mentor someone and hire them for that reason, but I can easily also just use LLMs to make boilerplate and write unit tests for me at the same time. Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

The junior engineer can also do this too, albeit in most cases less effectively.

That's democratization of work.

In your "5% unemployment" world you have many more gatekeepers and financial barriers.


Just curious what area you work in? Python or some kind of web service / Jscript? I'm sure the LLMs are reasonably good for that - or for updating .csv files (you mention spreadsheets).

I write code to drive hardware, in an unusual programming style. The company pays for Augment (which is now based on o4, which is supposed to be really good?!?). It's great when I type "print_debug(", at which point it often guesses right as to which local variables or parameters I want to debug - but not always. And it can often get the loop iteration part correct if I need to, for example, loop through a vector. The couple of times I asked it to write a unit test? Sure, it got the basic function call / lambda setup correct, but the test itself was useless. And a bunch of times, it brings back code I was experimenting with 3 months ago and never kept / committed, just because I'm at the same spot in the same file.

I do believe that some people are having reasonable outcomes, but it's not "out of the box" - and it's faster for me to write the code I need to write than to try 25 different prompt variations.


A lot of Python in a monorepo. Monorepos have an advantage right now because the LLM can pretty much look through the entire repo. But I'm also applying LLMs to eliminate a lot of roles that are obsolete, not just using them to code.

Thanks for sharing your perspective with ACTUAL details, unlike most people who have gotten bad results.

Sadly hardware programming is probably going to lag or never be figured out because there's just not enough info to train on. This might change in the future when/if reasoning models get better but there's no guarantee of that.

> which is now based on o4

Based on o4, or is o4? Those are two different things. Augment says this: https://support.augmentcode.com/articles/5949245054-what-mod...

  Augment uses many models, including ones that we train ourselves. Each interaction you have with Augment will touch multiple models. Our perspective is that the choice of models is an implementation detail, and the user does not need to stay current with the latest developments in the world of AI models to fully take advantage of our platform.
Which IMO is... a cop-out, a terrible take, and just... slimy. I would not trust a company like this with my money. For all you know they are running your prompts against a shitty open-source model running on a 3090 in their closet. The lack of transparency here is concerning.

You might be getting bad results for a few reasons:

  - your prompts are not specific enough
  - your context is poisoned. How strategically are you providing context to the prompt? A good trick is to give the LLM an existing file as an example of how you want it to produce the output and tell it "Do X in the style of Y.file" (there's a small sketch of this below the list). Don't forget that with the latest models and huge context windows you could very well pull entire subdirectories into context (although I would recommend being pretty targeted still)
  - the model/tool you're using sucks
  - you work in a problem domain that LLMs are genuinely bad at
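
To make that "Do X in the style of Y.file" trick concrete, here is a minimal sketch in Python. It assumes the OpenAI Python SDK; the model name, the file paths, and the generate_in_style helper are placeholders I made up for illustration, not the API of Augment or any other specific tool:

  # Hypothetical sketch of the "Do X in the style of Y.file" trick; not any tool's real API.
  from pathlib import Path

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def generate_in_style(task: str, style_file: str, target_file: str) -> str:
      """Ask the model to do `task` on target_file, using style_file as a convention example."""
      style_example = Path(style_file).read_text()
      target_source = Path(target_file).read_text()
      prompt = (
          "Here is an existing module that shows our conventions:\n\n"
          f"{style_example}\n\n"
          "Here is the file to work on:\n\n"
          f"{target_source}\n\n"
          f"{task} Match the structure, naming, and style of the first file."
      )
      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder; substitute whatever model you actually run
          messages=[{"role": "user", "content": prompt}],
      )
      return response.choices[0].message.content

  # e.g. generate_in_style("Write pytest unit tests for this module.",
  #                        "tests/test_billing.py", "orders.py")

The same idea carries over to whatever model or tool you actually use: the style file rides along in the context, so the model has a concrete example to imitate instead of guessing at your conventions.
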
Note: your company is paying a subscription to a service that isn't allowing you to bring your own keys. They have an incentive to optimize and make sure you're not costing them a lot of money. This could lead to worse results.

see here for Cline team's perspective on this topic: https://www.reddit.com/r/ChatGPTCoding/comments/1kymhkt/clin...

I suggest this as the bare minimum for the HN community when discussing their bad results with LLMs and coding:

  - what is your problem domain
  - show us your favorite prompt
  - what model and tools are you using?
  - are you using it as a chat or an agent? 
  - are you bringing your own keys or using a service?
  - what did you supply in context when you got the bad result? 
  - how did you supply context? copy paste? file locations? attachments?
  - what prompt did you use when you got the bad result?
I'm genuinely surprised when someone complaining about LLM results provides even 2 of those things in their comment.

Most of the cynics would not provide even half of this because it'd be embarrassing and reveal that they have no idea what they are talking about.


But how is AI supposed to replace anyone when you either have to get lucky or correctly set up all these things you write about first? Who will do all that, and who will pay for it?


So your critique of AI is that it can't read your mind and figure out what to do?

> But how is AI supposed to replace anyone when you either have to get lucky or correctly set up all these things you write about first? Who will do all that, and who will pay for it?

I mean... I'm doing it and getting paid for it, so...


Yes, because AGI is advertised (or reviled) as such: you plug it in and it figures everything else out itself. No need for training and management like with humans.

In other words, did the AI actually replace you in this case? Do you expect it to? People clearly expect it, which is why we have discussions like this one.


You are incredibly foolish to get hung up on marketing promises while ignoring LLM capabilities that are a reality and useful right now.

Good luck with that.


Tell that to all these bloodbathers. I am trying it out myself and am in touch with reality.


You're trying it out with literally the expectation that it can read your mind and do what you want with no effort involved on your part.

So basically you're not trying it out. Please just put it down; you have nothing interesting to say here.


Maybe. But are you aware that no one, at least in management, wants to hear "you must make the effort"?


> What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

> Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

This sounds like the death of employment and the start of plutocracy.

Not what I would call "democratisation".


> plutocracy

Well, I've said enough about cynicism here, so there's not much else I can offer you. Good luck with that! Didn't realize everybody loved being an employee so much.


Not everyone is capable of starting a business.

So, employee or destitute? Tough choice.


I spent a lot of time arguing that the barrier to entry for starting one is lower than ever. But if your only options are being an employee or being destitute, I will again point right to -> cynicism.


Uploaded to HuggingFace (https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B) without any fanfare or even announcement by the DeepSeek team, but the GMI Cloud team is hosting it already!


I genuinely liked him, even as an atheist. He seemed to be trying his best to make the world a better place and I can't fault him for that.


He riled many of his flock and hierarchy when he said that "even atheists can be redeemed". [0]

I will always applaud a person who retreats — even just a little — from dogma and fanaticism.

https://www.npr.org/sections/parallels/2013/05/29/187009384/...


> He riled many of his flock and hierarchy when he said that "even atheists can be redeemed".

It's quite a bit above our pay grade to proclaim categorically who supposedly cannot be redeemed; it verges on blasphemy.

Cf. Job. 38:

1. Then the Lord spoke to Job out of the storm. He said:

2 “Who is this that obscures my plans with words without knowledge?

3 "Brace yourself like a man; I will question you, and you shall answer me.

4 “Where were you when I laid the earth’s foundation? Tell me, if you understand.

5 "Who marked off its dimensions? Surely you know! Who stretched a measuring line across it?

6 "On what were its footings set, or who laid its cornerstone—

7 "while the morning stars sang together and all the angels[a] shouted for joy?"

(etc.)

https://www.biblegateway.com/passage/?search=Job%2038&versio...


> It's quite a bit above our pay grade to proclaim categorically who supposedly cannot be redeemed; it verges on blasphemy.

And the idea that atheists can be saved isn't novel in Catholic teaching – it is implicit in the Holy Office's 1949 condemnation of Feeneyism, [0] in which it declared that a person who doesn't believe in Catholicism due to "invincible ignorance" can be saved by an "implicit desire" for God. Although it didn't include the case of atheists, it didn't exclude them either – suggesting that an atheist who doesn't believe in God in their head (due to some intellectual issue) but nonetheless believes in God in their heart can be saved.

[0] https://en.wikipedia.org/wiki/Feeneyism


> ... who doesn't believe in God in their head [..] but nonetheless believes in God in their heart ...

I'm racking my brain right now dissecting what that even means. Believing there is no one but wishing it wasn't so?


In Catholic theology, God is believed to be Goodness itself – in a sense, identical to Plato's Form of the Good (but going far beyond Plato's idea at the same time).

Hence, anyone who loves Good loves God... so a person who truly loves Good, but who due to some intellectual obstacle, isn't able to call that Good "God" – from a Catholic viewpoint, it can be said that they love God without knowing that it is God whom they love – and by that love they can be saved


If you're good but in your mind reject God, I guess they're saying that's good enough.


Pretty much, although it also depends on your mind's reasons.

If you start from the assumption that Christianity is true, and some people know this, and others don't – you have to ask why the people who don't know it, don't know it. And this is where Catholic theology distinguishes between "vincible" and "invincible" ignorance - "vincible" means the ignorance is your own fault, "invincible" means your ignorance is through no fault of your own.

How to distinguish the two? Ultimately, it is up to God to decide – nobody else knows for sure what's going on in your head. At best, theologians would give some examples of hypothetical situations which could be said to be one or the other – but the real world is often much messier than any such hypothetical can capture.

Which is part of why, the traditional Catholic teaching, is that (with rare exceptions) you can't actually know where people are going to end up. The idea is that if you make it to heaven, you might be surprised to find a lot of people there you weren't expecting, and also maybe some people you were sure would be there aren't.


As an agnostic who spends a lot of time reading scriptures of several religions, trying to grasp the themes and motivations of others I share a world with -- those passages are particularly inscrutable.


It's pretty easy to parse if you understand that God isn't actually asking anyone for the dimensions of the Earth. It's more about proffering humility to Job by comparing his understanding of things to God's.


It's repeating, over and over, the extreme ignorance, and thus presumption, of Job in running from what God told him to do.

Edited to add: this is a single passage with verse markings.


You might enjoy unsongbook.com, a main theme of which is contemplating the meaning of that passage (and, related to that, making whale puns).


> As an agnostic who spends a lot of time reading scriptures of several religions, trying to grasp the themes and motivations of others I share a world with -- those passages are particularly inscrutable.

I think the author's intent is to remind us that some things are simply beyond our ken (to which I'd add: For now).


> It's quite a bit above our pay grade [...] it verges on blasphemy.

Cheers! As I understand the term blasphemy, our presumptuous species has a great deal to assert about the unknowable. ^_^


Absolutely, but Pope Francis said a lot of things that were absolutely core, canon, Catholic beliefs but still made a bunch of Catholics unreasonably angry.


> He riled many of his flock and hierarchy when he said that "even atheists can be redeemed".

Which is "interesting", considering how much of the New Testament is about redemption and reaching out to outsiders. Aren’t we all supposed to be God’s creation, and wasn’t Jesus supposed to teach us about salvation, redemption and forgiveness?

(And by "interesting", I mean that it is yet another example of cognitive dissonance amongst fundamentalists. If anyone can be redeemed, it implies that atheists can be as well.)

> I will always applaud a person who retreats — even just a little — from dogma and fanaticism.

Indeed. He was not perfect but he was better than most. I hope the next one won’t be a catholic version of patriarch Kirill.


It's funny you mention Kiril. I keep thinking about Pope Francis's (apparently deep and genuine) friendship with Bartholomew, Ecumenical Patriarch of the Orthodox Church.

It is traditional for the EP to visit Rome on the patronal Feast of Saints Peter and Paul and for the Pope to visit Istanbul on the Feast of Saint Andrew, which is apparently when the friendship first formed. My absolute favorite story about Francis is his deciding to send some of the most precious relics in the Vatican to Bartholomew as a gift: https://www.vaticannews.va/en/pope/news/2019-09/pope-francis... (That sent some people into a fury).

Actually, it's my second favorite story. My favorite story is his insistence that he live in the Vatican guesthouse (and not the Papal apartments). Or perhaps the fact that as archbishop of Buenos Aires he insisted on taking the subway.


"Actually, it's my second favorite story. My favorite story is his insistence that he live in the Vatican guesthouse"

I believe that was mainly for power reasons, because Pope Paul II was pretty out of the loop about what the cardinals were doing.

And Francis likely expected to face opposition in what he was doing, so being closer to the "people" was helpful for keeping an eye on them.


Mind explaining your issues with Kirill?

Haven't really been paying attention. Wasn't he the one who got Russia into defending persecuted Christians wherever (Syria etc)?


The man declared Putin's war to be a literal crusade against the West:

> From a spiritual and moral point of view, the special military operation is a Holy War, in which Russia and its people, defending the single spiritual space of Holy Rus', fulfill the mission of the "Restrainer", protecting the world from the onslaught of globalism and the victory of the West that has fallen into Satanism.

> After the end of the SVO, the entire territory of modern Ukraine must enter the zone of exclusive influence of Russia. The possibility of the existence on this territory of a Russophobic political regime hostile to Russia and its people, as well as a political regime controlled from an external center hostile to Russia, must be completely excluded.

https://www-patriarchia-ru.translate.goog/db/text/6116189.ht...


He also said that Russian men who die fighting in Ukraine are guaranteed salvation. In Orthodox theology this sort of thing has historically been recognized as straightforward heresy. We do not claim to know in advance who will be saved, or by what specific acts. Not even bishops or metropolitans. So even from a strictly Orthodox perspective he is dangerously divisive and has broken from one of our most important traditions.

(The recognition of saints is a little different, happening always after their death and depending on some degree of regional consensus. It's sloppy but whatever, it is actually not as similar as it might look.)


Read up on him more. He's essentially former KGB that was originally assigned to keep an eye on the token remnants of the church in Soviet Russia. He's now saying the war against Ukraine is "holy and justified", signing up to fight is "guaranteed to wipe away your sins", etc. He's designed to manipulate a segment of the population. He's Putin's method to "religiously justify" whatever Putin wants.


("He" here is Kirill not the Pope)


The Russian Orthodox Church has been a Chekist front since Stalin revived it for nationalistic reasons during WW2. Kirill is just continuing the tradition.


> Which is "interesting", considering how much of the New Testament is about redemption and reaching out to outsiders. Aren’t we all supposed to be God’s creation, and wasn’t Jesus supposed to teach us about salvation, redemption and forgiveness?

As religion has shrunk in participation in most of the West, it has become hugely susceptible to manipulation. My wife (now atheist, but she grew up evangelical) often has to correct me when I make snide remarks about Christianity. Recently I made some comment about the hypocrisy of Christians supporting, for president, a multiply-divorced man who bragged about groping women (and who has probably never read the Bible), to say nothing of the people around him. She quickly snapped back at me that "they actually see themselves in him, have you not noticed all the sex scandals that happen in so many churches?" and then went on to list the "questionable" relationships in her own youth group. (I am NOT saying all Christians are like this, but religion is often used to cover up or excuse misdeeds.)

It is not unique to Christianity or even Islam, though. You're seeing a lot of religion being used to justify many terrible things, including many smaller ones in Africa and Asia that have been used to justify atrocities and genocide.


> She quickly snapped back at me that "they actually see themselves in him, have you not noticed all the sex scandals that happen in so many churches?"

I think she is right for some of these people. It is a human reaction, but it is still a moral failing. The proper Christian (well, Catholic, anyway) thing to do would be what is expected in a confession: recognise one’s failings, express regret, and accept consequences, including punishment. Then comes redemption.

Something that irks me fundamentally about most Christian denominations is how they believe that they are Good People because they accepted God and rejected Evil. It's all good as long as you play the part. Once you start looking for excuses, you fail twice: first, because of your behaviour, and then for failing to repent. If you support someone because he made the same error you did, then you fail yet again. This behaviour is understandable, but triply incorrect from a religious perspective and very hypocritical.

In the grand scheme of things, it is very easy to get forgiveness; you just have to be sincere in your regrets (again, for Catholics, which is what I know).


My (and my wife's) background is protestant. In this realm, there's no forgiveness unless you totally repent and accept the whole christian shebang. In extreme cases, it's not the sin itself, but the rejection of god/jesus that's the worst you can do. Taken to the extreme, you see this manifested very strangely, like Chick tracts where the secular lifetime do-gooder burns in hell, but the terrible multiple murdering rapist gets into heaven because they repent "in time".

I know there are wonderful ministers, christians, and people of all religions. But I've come to the conclusion that if said minister/church/religion gets involved in politics, there's a greater chance than not that it's being run by manipulative power-hungry people. And those people want strict control, making mistakes (often the way people learn best) is not tolerated by them. It's in some ways gotten worse, because they're now treating other people's refusals to follow (gay marriage, no prayer in schools, etc) as direct attacks on them.


> My (and my wife's) background is protestant.

Sorry, I misinterpreted. Protestant denominations are convenient for politics, because there are so many of them and they have such different positions.

> In this realm, there's no forgiveness unless you totally repent and accept the whole christian shebang. In extreme cases, it's not the sin itself, but the rejection of god/jesus that's the worst you can do.

That’s fertile ground for extremism and reinforces the group dynamics, for sure.

> Taken to the extreme, you see this manifested very strangely, like Chick tracts where the secular lifetime do-gooder burns in hell, but the terrible multiple murdering rapist gets into heaven because they repent "in time".

I think Pascal wrote something about this behaviour. I won't chase the source, but IIRC the conclusion was that these people were hypocrites using religion to be terrible people, and I tend to agree. Personally, I also find it weird to believe that God is so easily fooled, but that's just me.

> But I've come to the conclusion that if said minister/church/religion gets involved in politics, there's a greater chance than not that it's being run by manipulative power-hungry people.

Definitely. It is too effective as a tool for control and coercion. At least the Catholic Church has mostly retreated from this. They do some lobbying, but nobody is asking for a Catholic theocracy anywhere that I know of.

> It's in some ways gotten worse, because they're now treating other people's refusals to follow (gay marriage, no prayer in schools, etc) as direct attacks on them.

Yes. It is the end of the Enlightenment and the end of liberal democracies if enough people behave that way. These people are functionally similar to the imams who keep babbling about sharia; it's time we see them that way.


I guess it's good to correct an incorrect accusation of hypocrisy. But it's not great when doing so takes the form, "People aren't being hypocrites in not condemning someone in power for the bad things he does, because they do those bad things too".


> As religion has shrunk in participation in most of the west, it has become hugely susceptible to manipulation

That’s an interesting correlation. Do you have any ideas about the dynamics associated with it?

I do seem to remember experiencing my tradition as less manipulative when I was young, but have never been sure if that was me not seeing it. And if true, I’m not sure whether to attribute it to size, or the internet, or political influence, or something else.


Same here. Although I grew up a Catholic and am now an atheist, my father counselled me that there were few institutions in the world that look after the downtrodden. The Catholic church has often not done that, but under Francis moved more towards that goal than any other time in recent history.


> look after the downtrodden. The Catholic church

gifted all women indissoluble marriage, which was practiced by the Roman aristocracy as "confarreatio".

This was trashed as soon as possible, and the trashing was billed as great progress.


This is terribly inaccurate. They teach against using birth control even in poor, AIDS-ridden regions (see Mother Teresa in Africa), treat women as lesser beings (including not recognizing that marital rape is a thing), cause the mistreatment of queer, homosexual, and trans people, etc.


Any Abrahamic religion that teaches otherwise isn't compatible with tradition or scripture, period.

Also the condom thing is false. Keep up to date.


I don't keep up to date with how slightly less random the fairy tales decided to be this year


He felt like a throwback to me, in a good way. He reminded me of a time when Christians weren't so afraid of being subsumed by the secular progressive mainstream, when they could still see love and forgiveness as the core of their faith.


I'm not religious either, but was educated in a Jesuit school. He brought a well needed breath of fresh air to the church. He was a pope for our times. Let's see if the church will be able to make another strong selection to replace him.


A prevalent sentiment.

I'd researched popes' policies and statements toward the poor some years back, and he really had no peer going back centuries.

Partial exception in the late 1800s, under Leo XIII (1878–1903), in the encyclical Rerum Novarum.


Rerum Novarum has been the basis of Catholic social teaching ever since, so...

But yes, statements are one thing and actions another; regarding the latter, the Latin Church's actions have often not been in keeping with its lofty writings.


> .. even as an atheist

Lots of Christians didn't like him, because they considered him too progressive.


On the other hand, lots of Christians liked him because he was progressive (more than his predecessor, anyway). Catholics are not all fundamentalists and in general don't have much in common with the Catholic bishops in the US, who are for the most part downright medieval.


Only American Protesting Catholics had issues with him. The same ones that post Deus Vult memes on Facebook.


Plenty of (well, some) Catholics in Poland had an issue with him for the exact same reason: just way too progressive for them. Although I do think that American Catholics are particularly... fervent in their beliefs.


Look up some numbers: his approval ratings outside of America were declining rapidly (at least in Latin America). [1] Interestingly, the US is the one place where his approval ratings didn't decline over time, probably owing to the perfectly divided nature of contemporary politics: as he lost support from one side he gained it in equal proportion from the other. But in places like Argentina, his birthplace no less, his approval rating dropped 27 points as he got increasingly involved in progressive stuff.

[1] - https://www.pewresearch.org/short-reads/2024/09/26/how-peopl...


I saw a map of countries he visited as the Pope and Argentina wasn't even there. Feels really strange.


One data point, but I live in a progressive country in western Europe, and I have close family members who are in the "right wing / trumpist / christians" movement (which does exist in Europe too), and obviously they really disliked this pope.


I think these are two sides of the same coin


I saw this only on the internet, though, and mainly the English-speaking internet, never in real life.


This just isn't true. Anyone who hangs around people who follow the church happenings would know even if they were in support of his actions.


He is one of the few religious leaders who actually gave me a positive view of religion. He seemed like a really great human.


"An athiest doesn't believe in 2,000 gods, a Christian doesn't believe in 1,999 gods." -- Ricky Gervais


Ricky is smart, but not smart enough.


Maybe not, but dismissing this quote outright is to dismiss something fundamental to our psychology, and our history.


I'm dismissing it for its lack of logic (not smart enough), even though it has a superficial patina of logic (smart).

That is, to address your perspective: I don't find it fundamental to our psychology or our history.

It's an atheist's view, sure. It assumes that if one god is real, then all gods must be real, because in the mind of the atheist all are equal in that they are imaginary or, at best, avatars of psychological phenomena.

It necessarily assumes the inverse logic that if 1,999 gods are fake, then 2,000 gods must be fake.

Again, because they are all fundamentally equal. And therefore it is the Christians that are illogical because they have dismissed 1999 gods as fake but couldn't quite get there for the Last God Standing.

I believe that's the sum of it.

Which is a statement that just about illustrates the intellectual limit of atheism. I will give Ricky credit for that much.

Where Ricky's quip stops and Christian logic starts is that Christians know that their singular God is not imaginary nor an avatar, but is a living being. With the figurative 1999 additional gods indeed being either imaginary, psychological avatars, or worse.

Which is clear to the Christian, because he understands something singularly fundamental about Christianity's God that others may not. This fundamental characteristic, at minimum, is the difference between real and fake.

At which point the atheist would quip that there is no logic to see the Christian God as different from the other 1999 Gods, and as concrete.

The Christian would finish up with telling the atheist that their failure to see the difference is a failure in being able to interpret the Bible, a failure in being able to interpret the Christian religion, and a failure to understand their own nature including from where they come.

Some of this lack of understanding on the part of atheists is rooted in a lack of perspective that is otherwise glaringly true, and a lack of logical rigor. In other words, they take too many lies on faith.

The conversation frequently continues with a debate over supposed proofs rooted in belief (faith) for atheists.

With the Christian making more general appeals to their own faith that is rooted in Biblical and religious interpretation, ideally combined with their own verifying observations about the nature of the World.

Or, for those who have blindly committed based on instinct, then just appeals to faith alone. Which is also acceptable, because it is a correct instinct. Being a Christian doesn't hinge on knowledge alone. It hinges on faith. There are good reasons for this state of affairs.

What both sides will agree on is that he who believes in the least lies "wins" so to speak. The process of parsing being roughly parallel to discovering the rules of a game. The disagreement is over the substance of the lies. What Judeo-Christianity does is attempt to convey those rules in clear-enough language, even if the motivation to follow them is solely based on faith. With deeper Bible reading often being revelatory for greater insight.

Whereas the atheist dismisses the traditional rules out of lack of faith and lack of understanding, and often then devolves into inventing his own rules. Which is specifically against the rules, but does align with the atheist's chaos theory of his own existence.


[flagged]


[flagged]


Where do you live? How much of your money and land are you willing to surrender to me? I think there's a real argument that it's the right thing to do.


If you showed up to my house with a gun and said "give me your living room or die", I'd probably do it, yeah. See, the thing is, you're not at my house and you don't have a gun, so the analogy doesn't work.

You should do it. Show up to my house with 6 of your friends and a tank, and then when you say "give me your living room or die" and I point out "this is bad, you shouldn't do this", you'll just leave? You'll realize the error of your ways and go "you know, I was ready to kill for this, but now I think I just won't"?

Let's take this even further: you're openly threatening to kill me for my house. If my neighbors are going "Hold on! Don't give in! Here's a gun, you got this!", are they helping or are they getting me killed? Do you think that's what's going to happen, or do you think the neighbors are saying "this psycho is going to kill you, give them what they want"?

It doesn't work.

edit: this got flagged? Why? It's pretty benign.


Are you going to come back to this or what? Because it was not as clever as you think it was. Is English your first language?


thinkingtoilet more like shit for brains


> Ukraine is fighting to defend its land, not against an extermination campaign like Palestine

False. Russia has sought the cultural genocide of Ukraine for hundreds of years.

https://ukraineworld.org/en/articles/basics/linguicide-ukrai...

https://www.cbsnews.com/news/russian-history-of-subjugating-...

https://foreignpolicy.com/2024/04/23/russia-ukraine-cultural...

https://academic.oup.com/jicj/article/21/2/233/7197410

https://www.foreignaffairs.com/ukraine/why-russias-war-ukrai...

This is why they fire missiles at museums and libraries. This is why they steal Ukrainian children and ship them to Russia for adoption. This is why they deport Ukrainian citizens to Siberia and bring in Russians to replace them.


No, Ukraine should not surrender because if they surrender now the same argument can be made next time - and with Russia there will always be a next time. This is an existential fight for Ukraine and Ukrainians.

Assuming that you are arguing in good faith you should read up on some basic game theory. The outcome of this fight is not just about this war but about establishing the incentives of all future potential attempts at aggression by Russia (and other expansionist countries).

Edited to remove the snark.


Opening paragraphs:

"Mandiant analyzed 138 vulnerabilities that were disclosed in 2023 and that we tracked as exploited in the wild. Consistent with past analyses, the majority (97) of these vulnerabilities were exploited as zero-days (vulnerabilities exploited before patches are made available, excluding end-of-life technologies). Forty-one vulnerabilities were exploited as n-days (vulnerabilities first exploited after patches are available). While we have previously seen and continue to expect a growing use of zero-days over time, 2023 saw an even larger discrepancy grow between zero-day and n-day exploitation as zero-day exploitation outpaced n-day exploitation more heavily than we have previously observed.

While our data is based on reliable observations, we note that the numbers are conservative estimates as we rely on the first reported exploitation of a vulnerability. Frequently, first exploitation dates are not publicly disclosed or are given vague timeframes (e.g., "mid-July" or "Q2 2023"), in which case we assume the latest plausible date. It is also likely that undiscovered exploitation has occurred. Therefore, actual times to exploit are almost certainly earlier than this data suggests."


I'm pretty sure this gives credence to the monopoly argument


This is a good example of why you don't want to give clients ring-0-level access. Or, more simply, why you don't want client-based solutions at all: the provider just becomes another threat vector.


This is actually quite insane if you consider that this is intended to be tame language.

"The Board finds that this intrusion was preventable and should never have occurred. The Board also concludes that Microsoft’s security culture was inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations.

The Board reaches this conclusion based on:

1. the cascade of Microsoft’s avoidable errors that allowed this intrusion to succeed;

2. Microsoft’s failure to detect the compromise of its cryptographic crown jewels on its own, relying instead on a customer to reach out to identify anomalies the customer had observed;

3. the Board’s assessment of security practices at other cloud service providers, which maintained security controls that Microsoft did not;

4. Microsoft’s failure to detect a compromise of an employee's laptop from a recently acquired company prior to allowing it to connect to Microsoft’s corporate network in 2021;

5. Microsoft’s decision not to correct, in a timely manner, its inaccurate public statements about this incident, including a corporate statement that Microsoft believed it had determined the likely root cause of the intrusion when in fact, it still has not; even though Microsoft acknowledged to the Board in November 2023 that its September 6, 2023 blog post about the root cause was inaccurate, it did not update that post until March 12, 2024, as the Board was concluding its review and only after the Board’s repeated questioning about Microsoft’s plans to issue a correction;

6. the Board's observation of a separate incident, disclosed by Microsoft in January 2024, the investigation of which was not in the purview of the Board’s review, which revealed a compromise that allowed a different nation-state actor to access highly-sensitive Microsoft corporate email accounts, source code repositories, and internal systems; and

7. how Microsoft’s ubiquitous and critical products, which underpin essential services that support national security, the foundations of our economy, and public health and safety, require the company to demonstrate the highest standards of security, accountability, and transparency."


Not everything should be on-prem, but some things should be. Access control comes to mind.


Ugh, please do not give car manufacturers any ideas!

...or Boeing.


Ah, the Terraform alternative. I wonder how long they'll be able to maintain backwards compatibility.


How long do they need to?

If they say “OpenTofu 1 supports Terraform version blah”, then people who want the new features in OpenTofu 1.3 or whatever will either adopt it or… go back to / stay with Terraform?

