The electoral college was created in the time before the Internet, computers, television, radio, telephone, telegraph, electricity, the automobile, the airplane, and the train. It was logistically impossible to have a national popular vote at the time. Even the gap between the election and inauguration was based on the time it would take a man on horseback to reach DC from the farthest point out in the country.
There was a highly publicized case a few years back where the police entered a hospital and ordered a nurse to draw a blood sample from an unconscious patient who had been in a car accident. They had no warrant, and she refused per hospital policy (and law). The cops roughed her up pretty bad and arrested her.
Also good to point out that the reason they *rushed* to the hospital to do this was that the driver who had hit the patient was an off-duty cop who was drunk and had run a red light, and they were looking for something, anything, to pin on the patient so that he, rather than the cop, could be held responsible.
Said unconscious patient later died too, if I recall correctly.
If Panther Lake and 18A are really on track for shipment later this year, then why are they saying that its successor (Nova Lake), which is slated for 2H 2026, will use a mix of Intel- and TSMC-manufactured tiles?
What they really destroyed was the idea that OpenAI would be able to charge $200/month for their ChatGPT Pro subscription which includes o1. That was always ridiculous IMO. The Free tier and $20/month Plus tier along with their API business (minus any future plan to charge a ridiculous amount for API access to o1) will be fine.
> The Free tier and $20/month Plus tier along with their API business (minus any future plan to charge a ridiculous amount for API access to o1) will be fine.
Actually no! If we take their paper at face value, the crucial innovations behind getting a strong model efficiently are their much-reduced KV cache and their MoE approach:
- Where a standard model needs to store two large vectors (k and v) for each token at inference time, and load/store those over and over from memory, DeepSeek v3/R1 stores only one smaller vector c, a "compression" from which the large k and v vectors can be decoded on the fly (see the first sketch after this list).
- They use a fairly standard Mixture of Experts (MoE) approach, which works well in training with their tricks, but whose inference-time advantages are immediate and shared by all other MoE techniques: of the ~85% of the 600B+ parameters that sit inside the MoE layers, the model picks only a small fraction to use at each token inference step (see the second sketch below). This reduces FLOPs and memory I/O by a large factor compared to a so-called dense model, where all weights are used for every token (cf. Llama 3 405B).
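To make the first point concrete, here is a minimal numpy sketch of the latent-cache idea. All names and shapes (W_down, W_uk, W_uv, d_latent) are illustrative assumptions on my part, not DeepSeek's actual code or dimensions:

    import numpy as np

    d_model, d_latent, d_kv = 1024, 128, 1024  # assumed sizes; d_latent << 2 * d_kv

    rng = np.random.default_rng(0)
    W_down = rng.standard_normal((d_latent, d_model)) * 0.02  # compress hidden state
    W_uk = rng.standard_normal((d_kv, d_latent)) * 0.02       # decode keys from latent
    W_uv = rng.standard_normal((d_kv, d_latent)) * 0.02       # decode values from latent

    cache = []  # one small latent per token; a standard cache holds full (k, v) pairs

    def step(h):
        # Cache only the compressed latent c for this token's hidden state h.
        c = W_down @ h
        cache.append(c)
        # Full keys/values for all cached tokens are rebuilt on the fly:
        K = np.stack([W_uk @ c_i for c_i in cache])
        V = np.stack([W_uv @ c_i for c_i in cache])
        return K, V

    K, V = step(rng.standard_normal(d_model))
    print(d_latent, "floats cached per token vs", 2 * d_kv)  # 128 vs 2048

The memory saving is the ratio of the cached latent to the two full vectors it replaces, at the cost of the extra decode matmuls per attention call.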
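And a similarly hedged sketch of the second point, top-k expert routing (expert count, gating scheme, and dimensions are made up for illustration; real MoE routers add load-balancing losses and other tricks):

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, k = 512, 64, 4  # only 4 of 64 expert FFNs run per token

    W_gate = rng.standard_normal((n_experts, d_model)) * 0.02
    experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]

    def moe_layer(h):
        scores = W_gate @ h
        top = np.argsort(scores)[-k:]  # indices of the k highest-scoring experts
        w = np.exp(scores[top])
        w /= w.sum()                   # softmax over the selected experts only
        # Only the chosen experts' weights are read for this token:
        return sum(wi * (experts[i] @ h) for wi, i in zip(w, top))

    out = moe_layer(rng.standard_normal(d_model))
    print(f"active expert fraction: {k / n_experts:.1%}")  # 6.2%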
In the long run (which in the AI world is probably ~1 year) this is very good for Nvidia, very good for the hyperscalers, and very good for anyone building AI applications.
The only thing it's not good for is the idea that OpenAI and/or Anthropic will eventually become profitable companies with market caps that exceed Apple's by orders of magnitude. Oh no, anyway.
Yes! I have had the exact same mental model. The biggest losers in this news are the groups building frontier models. They are the ones with huge valuations, and if the optimizations turn out to be even close to true, it's a massive threat to their business model. My feet are on the ground, but I do still believe that the world does not comprehend how much compute it can use... as compute gets cheaper, we will use more of it. Ignoring equity pricing, this benefits all other parties.
My big current conspiracy theory is that this negative sentiment toward Nvidia from Deepseek's release is spread by people who actually want to buy more stock at a cheaper price. Like, if you know anything about the topic, it's wild to assume that this will drive demand for GPUs anywhere but up. If Nvidia came out with a Jetson-like product that could run the full 670B R1, they could make infinite money. And in the datacenter segment, companies will stumble over each other to get the necessary hardware (which corresponds to a dozen H100s or so right now). Especially once HF comes out with their uncensored reproduction. There's so much opportunity to turn more compute into more money because of this; almost every company could theoretically benefit.
Can you guys explain why this would be bad for the OpenAIs and Anthropics of the world?
Wasn't the story always outlined as: we build better and better models, then we eventually get to AGI, AGI works on building better and better models even faster, and we eventually get to super-AGI, which can work on building better and better models even faster...
Isn't "super-optimization"(in the widest sense) what we expect to happen in the long run?
First of all, we need to just stop talking about AGI and Superintelligence. It's a total distraction from the actual value that has already been created by AI/ML over the years and will continue to be created.
That said, you have to distinguish "good for the field of AI, the AI industry overall, and users of AI" from "good for a couple of companies that want to be the sole provider of SOTA models and extract maximum value from everyone else to drive their own equity valuations to the moon". Deepseek is positive for the former and negative for the latter.
I believe in general the business model of building frontier models has not been fully baked yet. Let's ignore the thought of AGI and just say models do continue to improve. In OpenAI's case, they have raised lots of capital in the hopes of dominating the market, and that capital pegged them at a valuation. Now you have a company with ~100 employees and supposedly a lot less capital come in and get close to OpenAI's current leading model. It has the potential to pop their balloon massively.
By releasing a lot of it open source, they have put it in everyone's hands. That opens the door to new companies.
Or a simpler mental model: third parties have shown the ability to get quite close to the leading frontier models. A leading frontier model takes hundreds of millions of dollars to build, and if someone is able to copy it within a year's time for significantly less capital, it's going to be a hard game of cat and mouse.
There was a time when the word "layoff" referred to a TEMPORARY separation due to a lack of demand with the understanding that when activity picked back up you'd be recalled back to work. This was particularly common in the automotive sector and really across manufacturing. These were cyclical industries and while employers couldn't afford to pay idle workers during periods of low economic demand, they also couldn't afford to lose the skillsets. Oftentimes unions would provide partial compensation to these workers until they were recalled.
Somewhere around the mid 1990s, "layoff" became just a euphemism for permanent reductions in force/downsizing.
Nope. Layoffs were always understood to be temporary, up through some point in the 1990s. Furloughs were much shorter in duration, typically days or weeks, and in some cases were partial (one or two days a week).
All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.
You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.
So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.
And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.
>You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.
When the old gang at OpenAI was together, Sutskever, not Sam, was easily the most hypey of them all. And if you ask Norvig today, AGI is already here. 2 months ago, LeCun said he believes AGI could be here in 5 to 10 years, and this is supposed to be the skeptic. This is the kind of thing I'm talking about. The idea that it's just the non-academics caught in the hype is just blatantly false.
No, it doesn't have to be literally everybody to make the point.
Here's why I know that OpenAI is stuck in a hype cycle. For all of 2024, the cry from employees was "PhD level models are coming this year; just imagine what you can do when everyone has PhD level intelligence at their beck and call". And, indeed, PhD level models did arrive...if you consider GPQA to be a benchmark that is particularly meaningful in the real world. Why should I take this year's pronouncements seriously, given this?
OpenAI is what you get when you take Goodhart's Law to the extreme. They are so focused on benchmarks that they are completely blind to the rate of progress that actually matters (hint: it's not model capability in a vacuum).
Yann indeed does believe that AGI will arrive within a decade, but the important thing is that he is honest that this is an uncertain estimate based on extrapolation.
I'm inclined to agree with Yann about true AGI, but he works at Meta, and they seem to think current LLMs are sufficiently useful to be dumping preposterous amounts of money into them as well.
It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.
Yeah, in my mind, the distinction worth making is where the inflection point from exponential growth to plateau in the s-curve of usefulness is. Have we already hit it? Are we going to hit it soon? Is it far in the future? Or is it exponential from here straight to "the singularity"?
Hard to predict!
If we've already hit it, this has still been a very short period of time during which we've seen incredibly valuable new technology commercialized. That's nothing to sneeze at, and fortunes have been and will be rightly made from it.
If it's in the near future, then a lot of people might be over-investing in the promise of future growth that won't materialize to the extent they hoped. Some people will lose their shirts, but we're still left with incredibly useful new technology.
But if we have a long (or infinite) way to go before hitting that inflection point, then the hype is justified.
It's obviously not taken to mean literally everybody.
Whatever LeCun says (and really, even he has said "AGI is possible in 5 to 10 years" as recently as 2 months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of people are thinking), Meta has poured and is pouring a whole lot of money into LLM development. "Put your money where your mouth is," as they say. People can say all sorts of things, but what they choose to spend their money on tells you a whole lot.
My immediate reaction to the announcement was: one of these is not like the others. OpenAI, a couple of big investment funds, Microsoft, Nvidia, and... Oracle?
Oracle provides two things: a datacenter for Nvidia chips, and health data. Oracle Cerner had a 21.7% market share for inpatient hospital Electronic Health Records (EHR). Larry Ellison specifically mentioned healthcare when announcing it at the White House.
The announcement was funny because they weren't quite sure what they were going to do in the health space. Sam Altman was asked, and he immediately deferred to Ellison and Masayoshi. Ellison was vague... it seems they know they want to do something with Ellison's massive stash of health data, but they don't quite know what they are building yet.
The Snowflake-for-health idea is more about opening up EHR data for operational use by providers and facilities, versus being locked into their respective EHR platforms.
If Oracle provided a compelling data suite (a la MS) within their own cloud ecosystem, they'd have less reason to restrict it at the EHR level (as they'd have lock-in at the platform level), which would help them compete against Epic (who can't pivot to openness in the same way, without risking their primary product).
I think you mean PostgreSQL for EHR data. MS Fabric and Snowflake are analytical databases, not operational. Patient privacy requirements (and HIPAA law) are a blocker for having an open operational database for EHR.
Oracle makes perfect sense in that they 1) are a massive datacenter company, and 2) sell a variety of SaaS products to enterprises, which is a major target market for AI.
> Oracle has 2-3% market share as a Cloud Provider.
And the market leader is what, 30%? That's about one order of magnitude, which is not such a huge difference, and I suspect that Oracle's share is disproportionately concentrated in the enterprise space (which is where a lot of AI services are targeted), whereas AWS has a _ton_ of non-enterprise things hosted.
In any case, 2-3% is big enough that this kind of investment is 1) financially possible and 2) desirable as a way to grow into the #2 or #3 spot.
Getting from 2% (Oracle) to 10% (GCP) market share would need a 37.97% CAGR over 5 years. In a vacuum where everything else stays the same, maybe, but I see that goal as very difficult to attain in what is a highly competitive industry right now.
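For what it's worth, the arithmetic checks out; a quick Python snippet to verify (the 2% and 10% share figures are taken from the comment above):

    ratio, years = 10 / 2, 5         # grow from ~2% to ~10% market share, i.e. 5x
    cagr = ratio ** (1 / years) - 1
    print(f"{cagr:.2%}")             # -> 37.97% required compound annual growth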
Disclaimer: I work in a highly regulated industry and we are fine running our "enterprise" workloads in Azure (and even AWS for a spinoff company in the same sector). Oracle has no specific moat in that area imho, unless you're already locked into one of their software offerings.
There is a reason that in recent weeks everybody and their grandma has been simping for Trump: nobody wants to be on his bad side right now. Moreover, we hear here and there that Trump "keeps his promises". A lot of those promises we do not know about, and we may never know. These people did not spend money supporting his campaign for nothing. In other places and eras this would have been called corruption; now it is called "keeping his promises".
Trump is one of the most famous people in the world for not keeping promises of paying debts. But there is money to be made temporarily when he is running a caper, as long as you can get your hand in the pot before he steals it.
If your knee-jerk response to any political discussion even remotely critical of 'your guy' is to snap into whataboutism instead of participating in the conversation, you might need an outrage-pornography detox for a while.
> There is a certain reason that last weeks everybody and their grandma is simping for Trump. Nobody would want to be on his bad side
It's worth keeping in mind how extremely unfriendly to tech the last admin was. At this point, it's basically proven in court that emails of the form "please deboost person x or else" were sent, and there's probably plenty more we don't know about.
Combine that with the troubles in Europe which Biden's administration was extremely unwilling to help with, the obstacles thrown in the way of major energy buildouts, which are needed for AI... one would have to be stupid to be a tech CEO and not simp for Trump.
Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.
> Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.
Well, on the other side it can be said that Big Tech wasn't really on the side of democracy (note: democracy, not the Democratic Party) itself, and it hasn't been for years - at the very least ever since Cambridge Analytica was discovered. The "big tech" sector has only looked at profit margins, clicks, eyeballs and other KPIs while completely neglecting its own responsibility toward its host, and both the Biden administration and Europe treated it as the danger it posed.
As for the cryptocoin world that has also been campaigning for the 45th: they are an even worse cancer on the world. Nothing but a gigantic waste of resources (remember the prices of GPUs, HDDs and RAM going through the roof, coal power plants being reactivated?), rug pulls and other scams.
The current shift toward the far right is just the final mask falling off. Tech has chosen to (openly) support the 45th rather than learn from the chaos it has brought upon the world and make at least a paper effort to be held accountable.
Yes, big tech was the kid caught in the corner cleaning out the cookie jar and threw a tantrum when one parent moved the jar out of reach as punishment in effort to help the industry learn self-control. Now the other parent has come home and has not only returned the cookie jar to the kid but pledged to bring them packs of cookies by the shipping container to gorge on in exchange for favors.
Nice euphemism for giving people autonomy over their data and privacy.
Most of these companies are so large that they cannot really fail anymore. At this point it has very little to do with protecting themselves and more with making themselves more powerful than governments. JD Vance has said that the US could drop support for NATO if Europe tries to regulate X [1]. Oligarchs have fully infiltrated the US government and are trying to do the same to other countries.
I disagree with the grandparent. They don't support Trump because they do not want to be on his bad side (well, at least not only that), they support Trump because they see the opportunity to suppress regulation worldwide and become more powerful than governments.
We just keep making excuses (fiduciary duties, he just doesn't know how to wave his arm because he's an autist [2]). Why not just call it what it is?
I do agree that a big part of why they support Trump is anti-regulation. But it is also a fact that Trump is one of them, a businessman, not a politician. With Trump they can now discuss more business and less policy. There is a certain amount of deal-making going on right now that is not at all transparent. And against that backdrop, the amount of public simping is really unusual compared to what normally happens: everybody praising Trump before he even took office, even TikTok "coming out" as whatever, etc.
Oligarchs want less regulation, but they also want these beefy government contracts. They want weaker government to regulate them and stronger government to protect them and bully other countries. Way I see it, what they actually want is control of the government, and with Trump they have it (more than before).
We have more energy and are pumping more domestic oil than ever. We are a major exporter of LNG. Trump just killed EV subsidies, and electric charging network funding.
What are you talking about vis-a-vis Europe? Holding tech companies accountable for meddling in domestic politics? Not giving them carte blanche with user data?
I understand (though do not like) large corps tiptoeing around Trump in order to manipulate him; it is due to fear, not due to Trump having respectable values.