Hacker News | m_ke's comments

Wow, great timing. I just got a $22,000 bill 2 hours ago because UHC refused to pay for a surgery they approved 2 months ago (in a written letter from them).


I'm on the hook for $128k for a no-complications birth and the 5 days my newborn had to be on a CPAP machine, after Blue Cross denied the claim. I picked the plan only after confirming all our providers were in network, but failed to check whether the building where the delivery occurred was in network.

The plan at this point is to just ignore it and hope it goes away, since they can't put it on your credit anymore.


If it doesn’t affect your credit, why would anyone pay? Sounds ripe for an act of mass civil disobedience.


I personally believe non-payment is our civic duty and the most effective non-violent way to show our opposition to the system.


This is the equivalent of going to a restaurant, having the waiter spit in your empty plate, and being charged for it. How insanely ridiculous.


>I picked the plan only after confirming all our providers were in network, but failed to check if the building where the delivery was occurring was in network

What?

I'm sorry, what kind of kafkaesque system is this?!


>what kind of kafkaesque system is this?!

It's the system that us Americans are tricked into believing is the best and nOt sOciAlIsM. Certainly USA healthcare is "the best" — if you can afford it!

My personal belief is that the kafkaesque nature of so many systems is designed to keep people destitute and despondent — to quote ole TedK: "our system keeps people demoralized because a demoralized person won't fight back."

~"We'll keep them poor and tired; if they're poor they can't afford to fight back, and if they're tired they won't have energy to..."~ —Jeff (Jonestown Massacre)

Having dropped out of a US medical school (almost two decades ago), I can assure you things have only gotten worse (from a bottom-80% POV). My best method of Pyrrhic victory is to not reproduce, earn just enough to live minimally (i.e. lessen tax burden/revenue), and never pay for health insurance.

YMMV — I quit, a long time ago.


I’m so sorry. No one should have to deal with this stress.

It might be worth reaching out to your state (local, not federal) rep and also your state’s insurance commissioner.


What are your options? I suppose you are liable to pay for the surgery fully and then you have to sue your insurer to try and get the money back?


I have no idea. I tried calling the number on the bill, but it gave me a dialer with 8 options of "if you're calling about a bill from X which is now part of Y, please dial N". When I selected 8, which was "all other", I got a canned message telling me to call between 9-5 on a weekday.

I'm definitely not paying it


Start by calling billing and telling them what happened, and that you effectively don't have insurance and will be self-paying (said for the purpose of negotiation, not what you may or may not actually do). They should discount it by a lot.


Healthcare providers have started saying it's "insurance fraud" to say that you don't have insurance when you do.

My guess: they know they can get more money from the insurer than the individual (or a combination of both!) so they want to scare you from not allowing them to negotiate with the insurers.


This is only semi related but I wonder what will happen to these huge hierarchical orgs when the pace of software development improves by 10-20x thanks to LLMs.

How will these risk-averse, slow-moving teams with a ton of process keep up with 100x more tiny teams of engineers who can ship whole features in days instead of months?


You don't have to worry; it's not going to happen. LLMs do (and will) make individuals more efficient, perhaps reducing the number of developers, but you will still have the exact same bottlenecks at the exact same places throttling delivery speed.


I'm saying there will be 10-100x more small dev shops competing with the big cos. Pizza sized teams that own the whole product and can just ship stuff without the dog and pony show that's common at larger orgs.


Big co buys them. Big co sues them. Big co lobbies to keep them out of their space etc etc. Not everything is a technical challenge.


These still exist and don't make a dent in the big cos balance sheets. They may be growing the pie though.


They'll build a billion middle of the road bland messes?


No, some of them will build midjourney with no pressure to sell like instagram did


When one bottleneck is removed, that usually means the rate of change is bottlenecked somewhere else. Maybe in the release process, or testing?

Or maybe the bottleneck is the willingness of customers to try new things? Risk-averse customers will often avoid startups. Showing yourself to be trustworthy isn’t purely about the rate of feature development.

If the other bottlenecks can’t be removed easily, instead of 10x features you could end up with fewer software developers.


Yes, for sure, but from what I've seen at large companies, the bottlenecks are already usually caused by intra-team conflicts, legal hurdles, and "processes" that take something a dev with ownership could have done in 1-2 days and turn it into months-long slogs and rituals.

Having worked at early-stage startups and mid-sized companies, I've seen a 10-20x productivity gap between them due to this (even on brand-new projects at large companies vs startups, where legacy code isn't the issue).

As an example, I just witnessed a large co hire a consulting company to help them "ideate" on a RAG app that barely worked and required 3 rewrites and ~18 months to reach POC stage, even though a front-end dev had a better working POC that he hacked together in a day and a half.

I've heard way worse horror stories from friends at Google / Meta / Apple.

What will happen when tiny startups of 3-8 people get 5-20x more productive and can ship new stuff daily?


> What will happen when tiny startups of 3-8 people get 5-20x more productive and can ship new stuff daily?

The answer is in the comment you just wrote.

If those tiny startups are successful, they will become the next bloated large companies where things take forever because of "intra team conflicts, legal hurdles and processes", which are categories of things LLMs will never solve because LLMs can't solve problems of human consensus.

If those startups aren't successful, they will run out of money and die.

Big companies take forever to do things because they have lots of paying customers to keep happy, a bunch of people who are ready to sue them at the slightest misstep, thousands of employees with families who want job stability and therefore don't want to be betting the farm every 6 months, etc.

Tiny companies can iterate really fast because they have none of this.

LLMs don't change anything about this fundamental reality.


As the cost of going from 0 to 1 goes to 0 the incentives flip. You'll have way more small companies that raise little or no money from VCs and have no incentives to juice head count to pump the valuation.

I have a lot of friends who started similar companies recently, who are making millions in revenue with 2-8 people and deliberately plan to never grow head count past around 10 people.

We'll have way more teams like midjourney, early whatsapp / instagram and 37signals.


Can you show us these companies?


Someone’s read The Goal!

100% agree. The SW pipeline is complicated. AI may one day slot into every part and improve velocity, but it will be piecemeal and better at some processes than others for a long while.


I can't imagine what it's like at Meta right now, with the CEO publicly stating that they're firing the bottom 5% of performers and then a week later stating that the LLMs that his researchers / engineers are working on will soon be able to replace them.

Zuck needs Yann LeCun and other senior researchers at Meta a lot more than they need him. If they were to quit there would be a line out the door to hand them as much money as they want to start a competing open research lab. I bet a ton of top researchers from other labs would be happy to join too, since from what I've heard from friends they're all miserable from dealing with incompetent management.

On current trajectory one of Sam Altman / Zuck / Elon will end up having full control over the frontier models that are trained on their huge new clusters. All 3 of them are unaccountable to anyone.


> the CEO publicly stating that they're firing the bottom 5% of performers

I understand that people don't like any talk about layoffs and performance management, but I've never worked at a company where being in the bottom 5-10% of performers meant your job was safe. I've also never worked at a big company that didn't have at least 1-in-20 people who were clearly underperforming and everyone around them knew it.

I know the real complaint is that he said it out loud, and people don't like threats. However, Meta employees are highly compensated, especially now that the stock price is extremely high. I don't really think it's unreasonable for a company that compensates well and has generous severance packages to cut the bottom 5% of its workforce.


The problem isn't that they are cutting 5%, it's that they use stack ranking. Within a team of 10, you may have the top 10 performers in the whole company, but the manager still has to rank them and assign at least one of them the bottom ranking, or engage in a lengthy battle to defend their high rankings.

They're not actually finding the bottom 5%; they're giving managers an excuse to get rid of people they don't like for whatever reason.

It's also terrible for morale to do it all at once. Sure, maybe there are some underperformers. Let managers deal with those people individually. Don't do a mass layoff where they have to select someone at a specific time when all their people might be doing well.


> They're not actually finding the bottom 5%, they're giving managers an excuse to get rid of people the don't like for whatever reason.

More insidious is that the rankings are capricious and arbitrary despite haughty claims. Unless you're in the top quintile, and know so explicitly, you can never feel safe in your position. You can also drop into the bottom quintile of the stack for no other reason than that someone else on your team self-aggrandized a bit more right before reviews.


Vibes strike again.


Taking this to its eventual conclusion, wouldn't you just fire everyone?

Say you fire 5% now, then another 5%, and another, and so on. Obviously, you'll still hire, so you can argue that not everyone will be fired, but you could potentially be firing/pushing out all the people you have today over the next X years to replace them with what you believe to be better employees. However, those newer employees are not the ones who got you to where you are today, where you make so much money that you can liberally fire "the bottom 5%". It feels like a bit of a paradox.

At some point, it's worthwhile to step back and ask if maybe the system is broken. The constant hiring/culling cycle is a ruthless way to wring performance out of people who are already likely overperforming in the industry.


>Taking this to its eventual conclusion, wouldn't you just fire everyone?

>Say you fire 5% now, then another 5%, and another, and so on.

sounds a bit like Zeno's paradox, or one of them.
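
The Zeno flavor is easy to make concrete. A quick sketch (assuming, unrealistically, no backfill hiring; the function name is mine):

```python
def remaining_fraction(rounds: int, cut: float = 0.05) -> float:
    """Fraction of the original workforce left after `rounds` cuts of `cut` each."""
    return (1 - cut) ** rounds

# Headcount decays geometrically: it approaches zero but never reaches it,
# which is the Zeno-like part. It takes 14 rounds of 5% cuts to drop below
# half the original staff.
for n in (1, 5, 13, 14):
    print(n, round(remaining_fraction(n), 3))
```

With backfill hiring, of course, the paradox dissolves: total headcount can stay flat while the original cohort still shrinks geometrically.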


I'm not sure what's keeping LeCun at Meta at this point. I can imagine he's not happy with Zuck's capitulation. I'm sure you're right that if he decided to leave he'd easily be able to get funding. I'm sure France would be willing to set him up with an AI research lab to get him back there. And there would be plenty of other companies/labs that would be trying to get him.


$META is hitting 700, and RSU refresher price for 2022 performance was ~110.

My bet is that Yann got a huge set of packages during the AI talent race, and what may have been a $10M/4yr package may now suddenly be $70M+.

Unlikely any other AI lab would be liquid like that.


That presumes he's a weird creature who is motivated by the number. You don't really need to care anymore if you've already got $10 million.


This type of idol worship has to stop. LeCun invented CNNs, but he also said world simulation using diffusion was a dead end, which has been proven very wrong. The money is better spent hiring new grads with open minds and something to prove.


He's a director, not an "in the trenches" researcher anymore. He's being paid to be a highly technical leader who enables and recruits the researchers he employs to do great work, similar to Oppenheimer in a way.


"I'm not sure what's keeping LeCun at Meta at this point."

The most obvious options are

1) An insane amount of compensation.

2) Access to an insane amount of compute to train new LLMs.


> I'm sure France would be willing to set him up

Are you familiar with realities of incorporating a startup in the EU?


In the UK it costs £12 and takes 5 minutes. It costs between £300-600 per year for an accountant to file your accounts, and £12 for your confirmation statement.

And did when we were part of the EU.

Cheaper and quicker than America.

You don't incorporate in the EU, you incorporate in one of the 27 different countries.

All have wildly different requirements.


UK was vastly different from the mainland EU. You're right that the EU is not singular, but once we start talking of Germany, the Netherlands, France, etc. - we quickly hit regulations that bear no resemblance to a free market and some of which are incompatible with IT business whatsoever.


(These costs rocketed last year. Incorporation is now £50, and an electronic confirmation statement is £34.)


You also have all sorts of ancillary fees like the Information Commissioner’s annual charge.


I suspect France/EU would be willing to set him up in a government funded research lab - possibly they already have something going that they could put him in charge of. No issues with incorporation.


Sure, with roughly 50% taxes and a really dynamic free labour market.

Don't judge me, I'm living in the EU and I love the place, but regulations and business climate are definitely not great.


Yeah he could easily get Hinton (who hates nothing more than Sam Altman) to endorse a new proper open AI lab, similar to what was described in the OpenAI Charter.

Karpathy, Alec Radford, and a ton of their old students are practically free agents right now who could probably be convinced to join.

There's probably even a chance of someone like Wojciech Zaremba leaving OpenAI to join them.

EU would build them CERN style compute clusters to train healthcare, education, climate, etc models.

I'm sure there's plenty of people at HuggingFace, Eluther, old Stability AI group who'd also love to get involved.


yes ... let's dream on ... until the new openAI becomes just like the old openAI again.

human nature will never change.

q: what is the definition of an optimist?

a: a person with no experience.

q: what is the definition of a pessimist?

a: an optimist with experience.

mine ;)


I've seen him say on record that he'd pretty much work for whoever pays him (in the context of research grants for the military). Virtue signaling to feel good is only worth so much to people. Humans compartmentalize very well.


It saddens me that taking an ethical stance is now derisively considered "virtue signaling".

I would never work at Meta, not because refusing to do so would make me feel good, but because working there would make me feel like I'm making the world a worse place.


The idea of having a moral compass is antagonistic to the worldview of a lot of people in tech, so they are instinctively dismissive or condescending to anyone who does.


This seems like a pretty widely shared ethos in today's software engineering culture. "I'd happily build the Torment Nexus if you pay me enough!" No ethical baseline below which we refuse to pass. Simply a required $$$:EVIL ratio.


It's not always like that. Most of the team would be hired to move protobufs around for the Torment Nexus so it seems quite innocuous.


Yea, I think this is how a lot of engineers rationalize it. "Well, I'm not directly participating in my company's A/B experiment to see what types of content drive children to suicidal ideation! I'm just moving data from the project's logging side of the stack to the metrics side of the stack so that reports can be generated. Don't blame me!"


>I'm not sure what's keeping LeCun at Meta at this point.

Maybe he's happy with his compensation, his coworkers, the food at the cafeteria and doesn't want to uproot his life or be burdened with running a company.

>I can imagine he's not happy with Zuck's capitulation

Who did Zuck capitulate to?


> Who did Zuck capitulate to?

There's a pretty decent list of the actions and changes at https://www.nytimes.com/2025/01/10/technology/meta-mark-zuck....


I’d be pretty embarrassed to be working for someone who kissed the ring like Zuck did on Jan 20.


And it was already embarrassing for a myriad of reasons before that, including how he went on Joe Rogan talking about how corporations need more "masculine energy". In the hobbies I participate in (notably, I'm not in a major tech hub), some of these tech companies are getting a social stigma similar to finance's, and this is especially pronounced among women I know who really don't like what they view as "tech bros".


Indeed, I think the inauguration was kind of Zuck's "pedo guy" moment, where the pieces fell into place and a whole bunch of people at once were like... oh, yes, okay I see what is actually the state of things here.


Zucc has been kissing various unsavory rings for a long time, though. It's not like this just started. Didn't he ask China's President for the honor of naming his baby? [1] Totally shameless suck-up.

1: https://www.independent.co.uk/news/people/china-s-president-...


I've always thought that Zuck looked like a psychopath, even leaving aside his actions, many of which I have read about in the past.

you just have to take a look at his face, and those mad staring eyes.


>some of these tech companies are getting a similar social stigma to like finance

SV """tech""" companies have had this stigma since at least mid-2010s. Don't you remember the awfulness of Uber's CEO?

A lot of bros in tech delude themselves that they are the "in touch" ones and actually no, it's not chauvinism and misogyny it's just some "masculine energy" but it's always been lies.

It really shouldn't be this surprising that the same people who swear there's nothing wrong with tech that results in its INSANE gender ratios, despite historical evidence that women love to code, continue to ignore obvious signs of their bad behavior.

IDK, maybe it's proximity to Hollywood and its wealth of rich chauvinists and sex predators. Maybe California has something in the water that makes rich men act like sex predators. Or maybe they are a representative sample of male behavior when in positions of power over women in the USA, and they just get outed more.


very, except that embarrassed is too weak a word.


So kissing the ring and bending over was okay on Jan 20, 2021?


> Zuck needs Yann LeCun and other senior researchers at Meta a lot more than they need him.

Of course not. Quantifiably so. Proof: he can get all of them for salaries that are measly compared to his net worth. He has.

(P.S. Besides, you'd be surprised how replaceable such people are. Often at these companies who can hire high quality talent at lower levels you are going to see impressive people step up when the old wash away, so it might actually be the opposite.)


> a week later stating that the LLMs that his researchers / engineers are working on will soon be able to replace them.

This is a pessimistic interpretation of Mark's words that has been trumpeted in the media. Which I am appalled to admit.

He said that they anticipate the majority of new code to come from AI models rather than human engineers. He then adds that they expect developers to be augmented by these tools. Which tracks as you still need somebody to drive the AI and validate or correct their outputs.

https://youtu.be/7k1ehaE0bdU?t=2h8m6s


I'm talking about this: https://www.threads.net/@zuck/post/DFNf73PJxOQ

> "...we'll build an AI engineer that will start contributing increasing amounts of code to our R&D efforts".

What do you think will happen when these models are good enough to do 90% of engineering work? He's already putting a squeeze on his employees now (https://africa.businessinsider.com/news/meta-ceo-mark-zucker...)

> "I think whoever gets there first is going to have a long-term, durable advantage towards building one of the most important products in history," Zuckerberg said, according to the recording.

> Zuckerberg also reiterated his belief that this would be the year Meta started seeing AI agents take on work, including writing software. Asked whether this would lead to job cuts, Zuckerberg said it was "hard to know" and that while it may lead to some roles becoming redundant, it could lead to hiring more engineers who can harness artificial intelligence to be more productive.


> What do you think will happen when these models are good enough to do 90% of engineering work?

Honestly? I think we'll see a lot of vengeful and technically capable people who are out of work and who are looking to get revenge on the people that laid them off.

Some of those people who feel they have nothing to lose will build swarms of small drones that use machine vision to track down Zuckerberg, or whoever they feel wronged them, and kill them.

The future is going to be very, very spicy.


> you still need somebody to drive the AI and validate or correct their outputs

100% visual inspection catches only about 80% of the defects.

The following is a classic example from QC circles (I used to run incoming QC at a medical device factory). Count the number of F’s in the paragraph below:

> THE NECESSITY OF TRAINING HANDS FOR FIRST-CLASS FARMS IN THE FATHERLY HANDLING OF FRIENDLY FARM LIVESTOCK IS FOREMOST IN THE MINDS OF FARM OWNERS. SINCE THE FOREFATHERS OF THE FARM OWNERS TRAINED THE FARM HANDS FOR THE FIRST-CLASS FARMS IN THE FATHERLY HANDLING OF FARM LIVESTOCK, THE OWNERS OF THE FARMS FEEL THEY SHOULD CARRY ON WITH THE FAMILY TRADITION OF TRAINING FARM HANDS IN THE FATHERLY HANDLING OF FARM STOCK BECAUSE THEY BELIEVE IT IS THE BASIS OF GOOD FUTURE FARMING.

How many did you get?

The correct answer is four dozen (I wanted to make the number harder to calculate before you count them).

Having software devs become some sort of QC inspectors for AI code sounds like a fucking nightmare to me, and I know how much of a nightmare QC in a factory is and how many defects escape both the design and the manufacturing process even with very strict QC.


> The correct answer is four dozen (I wanted to make the number harder to calculate before you count them).

No, it isn't? I counted 34, and a Python one-liner agrees.
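
For reference, the commenter's actual one-liner isn't shown, but something along these lines reproduces the count from the paragraph above:

```python
# The all-caps paragraph from the parent comment, reproduced verbatim.
text = (
    "THE NECESSITY OF TRAINING HANDS FOR FIRST-CLASS FARMS IN THE FATHERLY "
    "HANDLING OF FRIENDLY FARM LIVESTOCK IS FOREMOST IN THE MINDS OF FARM "
    "OWNERS. SINCE THE FOREFATHERS OF THE FARM OWNERS TRAINED THE FARM "
    "HANDS FOR THE FIRST-CLASS FARMS IN THE FATHERLY HANDLING OF FARM "
    "LIVESTOCK, THE OWNERS OF THE FARMS FEEL THEY SHOULD CARRY ON WITH THE "
    "FAMILY TRADITION OF TRAINING FARM HANDS IN THE FATHERLY HANDLING OF "
    "FARM STOCK BECAUSE THEY BELIEVE IT IS THE BASIS OF GOOD FUTURE FARMING."
)
print(text.count("F"))  # prints 34
```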


Good job, I guess; I wrote it that way as comment-bait to get people to count it, though not with Python (is a Python one-liner visual inspection?). In any case, go read Deming and Juran and others on manufacturing quality, and you will still see that 100% inspection is not enough.


> He said that they anticipate the majority of new code to come from AI models rather than human engineers. He then adds that they expect developers to be augmented by these tools.

only 2 ways this can work:

1) Meta collectively generates 5x more code than it presently is capable of generating

2) Meta generates the same amount of code than it presently does, with fewer engineers since each engineer can (supposedly) generate 5x code

Unless Zuck announced some initiative that will require 5x more code than they currently can generate, you can be pretty sure the goal is #2.


The problem with #2 is Meta doesn't operate in a vacuum. Assuming there are problems to be solved, if Meta doesn't do #1 then someone else will. The someone else will eventually surpass Meta.


Surpass Meta in what? Meta’s revenue comes from social networks. Revenue doesn’t increase with LOC. Writing 5x more code does not get you X billion users.


No company can rest on its laurels, even one the size of Meta. No one said LoC increases users or revenue, implementing ideas does though. If Meta decides to use the benefits of AI to keep the current productivity and cut staff instead of increasing productivity, they will eventually be displaced by a group that went the other way.


Competition means everyone is now 5X faster. So you can't get by with the previous output level.


I just finished a blog post with some thoughts on AI’s future [1] and the surprising conclusion was that most big tech companies probably have much bigger problems than whether researchers leave or not.

As Taleb and DeepSeek’s CEO point out, usually when you have a disruptive technology, then the incumbents will be left behind. Cursor AI and DeepSeek are a sign of new players coming out of nowhere and beating the incumbents.

[1]: https://huijzer.xyz/posts/ai-learning-rate/


It's Jack Welch's rank and yank but this time with LLMs!

https://en.wikipedia.org/wiki/Vitality_curve

I wonder if there are future plans to rank and yank LLMs, too. Or whether LLMs will exhibit "morale problems" because of it.


>All 3 of them are unaccountable to anyone.

In what way are they unaccountable to anyone?

Their wealth is tied up in stock whose value is tied to the perception, aka the accountability, of the general public. Not being able to personally destroy someone's wealth because you don't like what they're doing is different from being unaccountable. If tomorrow Zuck released an AI model or FB feature that was deeply unpopular, his ventures and personal wealth would dwindle according to the market's reaction. That's accountability. I'm not even a fan of Zuck... he's a slimy weasel who changes his tune to whoever is in power. But public perception directly affects his decision making.


Zuck has majority voting shares, so can't be fired.

Sam already proved that nobody at OpenAI can get him out, and the new board makes that even harder.

Same for Elon.

No matter what happens, all 3 will be billionaires for the rest of their lives.


Nothing you said refuted the point made.


Actually the top talent wants to work at a company that regularly fires the bottom performers


All the large companies' talk of social change and diversity should now be exposed as purely performative. If you want to work at not just Meta but Google or Microsoft or Amazon because the money is good, that's fine. We live in a society where you need money.

But the idea that you're doing something good for society should've shattered long ago. All these big tech companies have done an immediate and total heel turn to get in line with the administration, which isn't even a partisan issue. The interests of large companies are aligned with US domestic and foreign policy.

Meta (etc) are now no different to Boeing, Lockheed Martin or Northrop Grumman. You are working for a defense contractor.

Every day Zuck further exposes himself as being about his own class interest: that of the billionaire class. It's now OK to say that LGBTQ have "mental illness" on Meta platforms [1]. Meta already had a longstanding policy of censoring and downranking Palestine content [2].

It's also why the government was so keen to ban TikTok: because it doesn't censor.

[1]: https://www.nbcnews.com/tech/social-media/meta-new-hate-spee...

[2]: https://www.nbcnews.com/tech/social-media/meta-new-hate-spee...


>All talk of being about social change or diversity by large companies should now be exposed as purely performative

It was always understood as purely performative. You think Gay people actually thought Target cared about them? Do you think Trans people actually thought Budweiser was going to go out of their way to support the trans community just because they gave a trans person like $50k?

The only people who have ever insisted that corporate "we love the gays" was serious are the people who are yelling about how "woke" companies are. Except at the same time they will also yell about how it's just performative?

I can't help but feel what they were asking for was never genuine support of LGBTQ people either, since, uh, who they tend to vote for. Rather, their complaint seems to have come simply from any media, any images, any acknowledgement whatsoever that LGBTQ people are PEOPLE


It's weird. You either stay quiet or be loud and expect to be out of a job. The mindset is "will this help for PSC."

I'm not bothered by the free speech policy decisions or the Trump political contributions. Especially in light of overreach by the Biden administration, allowing more speech is reasonable, and political contributions to the party in power are always reasonable.

What bothers me is dishonesty from leadership about cost cutting, refusing to answer hard questions at the Q&A, and short-sighted decisions causing a lot of churn. When Sheryl left, the adult in the room that would call out Zuck left. No one's there to tell Zuck that the gold chain and million dollar watch isn't a good look. And now Nick Clegg left and Dana White joined the board. I'm sure his UFC experience will prove indispensable.

Don't get me started on how much money is wasted on AR/VR.

If it weren't for juicy 2023 RSUs and the bad job market, there'd be a lot more turnover.


[flagged]


Sir, this is a Wendy's


Nobody except people who value democratic systems of government I guess.


Engineers still need big paychecks and big funding pools to work on AI things.

There are only so many deep pockets out there to fund this.


The only reliable final test will be a black box test suite that takes your model, executes it in a sealed environment and gives you a grade back, potentially with a performance break down by subject.

No telling companies what the questions look like, what the output format is, what topics are covered, so that there’s no room to make up synthetic data to interpolate from.


A grade is mostly meaningless if you don't know how it was calculated, so no one would "rely" on it. If nothing else, you need to know the grading methodology after the test.

It's the same problem with cheating students. Once the test questions are known, they have a very short lifespan before cheaters can make them worthless. Tests have to be refreshed.


By grade I mean a score of how many of the tasks were completed successfully.

K/N or as a percentage.
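
A minimal sketch of the scoring side of such a harness (the function names and toy tasks are mine; a real system would additionally sandbox model execution and keep the task set sealed):

```python
from typing import Callable

def grade(model: Callable[[str], str],
          hidden_tasks: list[tuple[str, str]]) -> dict:
    """Run the model on hidden (prompt, expected) pairs and report K/N.

    Only the aggregate score leaves the grading environment; the tasks
    themselves stay private, so there's nothing to synthesize data against.
    """
    passed = sum(1 for prompt, expected in hidden_tasks
                 if model(prompt).strip() == expected)
    n = len(hidden_tasks)
    return {"passed": passed, "total": n, "percent": 100.0 * passed / n}

# Toy usage: a trivial "model" that echoes the last word of the prompt,
# graded against two hidden tasks it happens to satisfy.
tasks = [("say hi", "hi"), ("say bye", "bye")]
result = grade(lambda p: p.split()[-1], tasks)
print(result)  # {'passed': 2, 'total': 2, 'percent': 100.0}
```

Exact-match scoring is of course the crudest option; a real grader would likely use per-task checkers, but the K/N reporting shape stays the same.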


If I don't know what the tasks were, that's almost exactly as useless to me as a unitless number would be. For starters, are they all of equal difficulty? Are you sure? Do you expect to be able to convince me of that without letting me see them?


Except we’re probably decades away from reliable open ended agents that can be trusted to perform any task.

There’s a reason why Waymo started out in SF and Phoenix: getting to enough 9s to be hands-off is really hard, and current ML-based systems don’t extrapolate well to new environments.


That's certainly possible. I'm not convinced AGI is just around the corner either, but I can't say with a high degree of certainty that it definitely won't arrive in the next few years.


We’ll definitely get above human level performance for a lot of tasks soon. It just won’t be general and reliable enough to do open ended tasks the way competent humans do.

So we’ll have models that can fill out and validate a tax return, and give you reasonable financial advice, but we won’t have an off the shelf general LLM from OpenAI that can replace an accountant at any random business anytime soon.


It’s amazing how confident all of the recent 20 year olds with “AI” companies are.

I’ve got to meet a bunch of founders of chatGPT wrapper companies from recent YC batches and other startups that raised a ton of money from top firms and the way they prognosticate compared to all the people I know who built real successful ML products in the past is insane.

Most of them have “AI expert” in their LinkedIn bios but have not trained a single model; their companies amount to a Node.js app with a few chained prompts and no data or evals to speak of.

One of these guys just confidently opened up a conversation with me with something along the lines of “once we reach ASI, our accounting agent company will be one of the largest businesses in the world”, as in their ChatGPT wrapper will be useful when OpenAI releases a model that’s smarter than all humans.

EDIT: this is not meant to be a knock on Luka, who from what I can see seems like a brilliant guy who will probably have an amazing career.

Same goes for the recent young “AI” startup founders, most of whom are also really talented. Cheers to them for doing the right thing by going after the big new opportunities in the market enabled by LLMs.

Just maybe take it easy on the grand proclamations and crypto bro style hype.


> It’s amazing how confident all of the recent 20 year olds with “AI” companies are.

Can you blame them? The window for joining the ranks of billionaire tech-founders is slowly closing and AI may be their last hope at entering their echelons.


At Bitesnap we were surprised at how much interest there was from researchers to use our app for diet tracking. It turns out giving people a piece of paper to write “grilled cheese sandwich for lunch” is not a scalable and reliable way to collect research quality data.

We even worked with USDA on putting together a food logging dataset: https://agdatacommons.nal.usda.gov/articles/dataset/SNAPMe_A...


We've also been surprised at SnapCalorie how many researchers have approached us to use the app for more accurate diet tracking for medical study participants. The LiDAR based portion size has been a huge draw for them.

If anyone wants to check out our app or research, it's on our site: https://www.snapcalorie.com/

PS: Bitesnap was an awesome app!


Feels kind of incredible that something as advanced as laser imaging is being used to measure sandwich size.


What happened to your app? I was on such a research team (Scripps) that used your app for the study (PROGRESS).


Unfortunately it was shut down after I sold the company to MyFitnessPal.

I was a shitty business person who thought it made sense to try and build a free consumer product on a bootstrapped budget. We had some traction on the B2B side that paid the bills, but COVID put a dent in it and it would have taken a long time to rebuild that revenue stream selling to healthcare companies (tip for others: it can take 6-18 months to close healthcare deals and another 6-18 months to integrate)

We had a few offers to sell the company and took the one that seemed to make the most sense.

If there’s anything I can do to help out my email is michalwols at the Google email provider domain


The study ended so no worries. In any case, congrats on the exit!


This doesn't surprise me.

Even just trying to keep track of calories for myself, stupid things like supersized slices of bread becoming common in stores can really throw off my expected calorie counts.

It seems like this can completely throw off any attempt at figuring out nutrition from an app or research perspective.


Wokeness is what happens when socially liberal, fiscally conservative investors and executives try to please their Democratic-leaning employees without having to pay more taxes. It costs them nothing, so you get corporations and the media embracing race and gender progressivism alongside a full clampdown on any true progressive causes like universal health care, free education, etc.

The same VCs crying about wokeness are also crying about the collapse of the manufacturing base in the US, when they're the ones responsible for offshoring all of it and for not investing in any businesses that deal with physical goods, because the margins in software are so much larger.

As an example, yes Starbucks can have LGBT mugs but hell no to unions.


"Starbucks can have LGBT mugs but hell no to unions". I think you hit the nail on the head. There is a whole chapter to be written about pro/anti "wokeness" stances used by companies / politicians to divert attention from the deeper class vs class issues.


Wokeness is also a way the media can smack down candidates like Corbyn and Sanders, labeling them sexist or antisemitic for focusing on class instead of identity politics.


It is no coincidence that wokeness arose during Occupy Wall Street, and the insistence on the use of the "progressive stack" was part of what destroyed that protest movement.


Surya is a great open source toolkit for table parsing, layout analysis and OCR: https://github.com/VikParuchuri/surya


The richest man in the world, who just fired 90% of his employees and plans on cutting the Department of Education, saying Americans are too stupid so we need to remove the caps on indentured labor, is the most amusing thing I've seen in a really long time.

Immigration is America's greatest asset, followed closely by the vast oceans that shield our capital from any threats. H1B should be replaced by an auction system that lets in the top paid employees and converts to a green card after one year with the company.


Context!

What It's Like to Work for Elon Musk - Genius, Chaos, and Burnout (techiegamers.com):

https://news.ycombinator.com/item?id=42574496

And

US culture breeds 'laziness' and 'mediocrity' (telegraph.co.uk)

https://news.ycombinator.com/item?id=42574238

In particular :

> “It comes down to this: do you want America to WIN or do you want America to LOSE,” Mr Musk posted on X. “If you force the world’s best talent to play for the other side, America will LOSE. End of story.”

> The world’s richest person later clarified, however, that he only advocated bringing in the top “0.1 per cent of engineering talent”.

It's funny: it looks like, at a meta level, this also explains both my burning hatred for Twitter (even before Musk) and why Musk doesn't give a fuck about this context issue:

> “Are people really dumb enough to think they can convince Elon Musk, an immigrant to the United States who has generated untold wealth and national security advantage for our economy and nation out of thin air, that high skilled immigration is bad for Americans? Like, seriously?” Mr Nelson wrote.

> “My tolerance for subtards is limited,” Mr Musk replied.

I can certainly see how this philosophy can work wonders at his level, and even produce tremendous improvements for everyone: an invention has a one-time cost, but a "forever" benefit.

Still, even Musk needs a society that is liberal enough to produce the kind of people that can reach for the stars, and an environment pristine enough that something like an industrial civilization is possible. He would do well to not jeopardize these...


Vivek Ramaswamy recently said that we need "more movies like Whiplash, fewer reruns of Friends."

He (and presumably Musk) think that a society of Terence Fletcher overlords and Andrew Neiman whipping boys is an ideal to strive for, not a cautionary tale.

This is what happens when you don't get any exposure to the liberal arts.


See also: Musk and others' critique of the movie Oppenheimer.


> The richest man in the world, who just fired 90% of his employees

This isn't true. Best I can guess you are referencing the company formerly-known-as-twitter. The 90% number refers to software engineers, not employees and only applies to one of several companies that Musk runs.


He had major layoffs at tesla too https://electrek.co/2024/12/30/tesla-replaced-laid-off-us-wo...

EDIT: in general, my point is that he practically triggered mass layoffs at large companies by being so vocal about doing it at twitter. A ton of investors were celebrating it after and pushing companies to do the same.

The market for software engineers crashed after https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE

Also see the layoff charts tab on this site https://layoffs.fyi/


>EDIT: in general, my point is that he practically triggered mass layoffs at large companies by being so vocal about doing it at twitter. A ton of investors were celebrating it after and pushing companies to do the same.

His acquisition was in October 2022. Job postings started dropping off in May 2022. It was already leveling off as early as February 2022. You'd have to squint really hard to believe this was caused by musk, or that he played a major factor.


> in general, my point is that he practically triggered mass layoffs at large companies by being so vocal about doing it at twitter. A ton of investors were celebrating it after and pushing companies to do the same.

That's an interesting claim, and one I'd be curious to see a more detailed argument for. I'm sure it had an impact, but arguing it had a larger impact than the prevailing market conditions seems hard. The massive overhiring in the year or two before seems like the obvious culprits for the majority of the mass layoffs.

I see a lot of hyperbolic and flat-out false things said about Musk. I push back on them because I think people's habitual inaccuracy when he is involved tends to make the real criticism of him harder.

