If this is seriously a $143 part then this could turn things upside down. For anyone who hasn't yet clicked through and read the article: the CPU is by far the cheapest one featured in the benchmarks, is usually middle of the pack at worst, and frequently beats CPUs that cost $250+.
It's also a part with Performance cores only, which is bad for most applications but might improve compatibility with older software (esp. games) that doesn't cope well with having both P- and E-cores, and sometimes might even fail to run altogether on these asymmetric machines.
That's not true. E-cores on desktop Alder Lake mostly allow for higher multithreaded performance for a given amount of silicon. At desktop-part power limits they are clocked so high that they don't allow for more work per power budget, and they don't have any advantage apart from cost. This will probably change in the future with higher core counts, but for now E-cores have no real advantage outside low-power laptop parts.
That article still shows E-cores as being more efficient over most of their clock speed range. They may lose out to P-cores at 3.5GHz+ clock speeds, but E-cores rarely run that high in practice. You'd have to run a compute-intensive workload that pegs both P- and E-cores to run into the issue.
They had to manually underclock the E-cores to get better efficiency than the P-cores, which isn't surprising; Intel's power management has historically been built around the idea of "race to idle": run at a high speed so you complete the task faster and can shut off the CPU sooner. AMD makes heavy use of downclocking its cores, but that causes all kinds of headaches with performance regressions on real loads, because of course it's not possible to predict in advance how much CPU time arbitrary code will need.
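To make the trade-off concrete, here's a toy Python model of race-to-idle: energy for a fixed chunk of work is runtime times (dynamic + platform) power, with dynamic power growing roughly cubically with frequency. The cubic curve and the 10W platform floor are textbook illustrative assumptions, not Alder Lake measurements:

    # Energy to finish a fixed chunk of work at different clock speeds.
    # Assumes dynamic power ~ f^3 (V^2 * f with voltage scaling with f)
    # plus a constant platform floor that burns power while the task runs.
    def task_energy(freq_ghz, work=10.0, platform_w=10.0, k=1.0):
        runtime = work / freq_ghz               # seconds to finish the work
        p_dynamic = k * freq_ghz ** 3           # watts, toy cubic scaling
        return runtime * (p_dynamic + platform_w)

    for f in (1.0, 2.0, 3.0, 4.0):
        print(f"{f:.1f} GHz: {task_energy(f):.0f} J")
    # -> 110 J, 90 J, 123 J, 185 J: with a big platform floor, racing to
    # idle beats crawling, but past the sweet spot the f^3 term dominates.

With a large always-on floor, finishing fast and sleeping wins; with a small one, the slow clock wins, which is roughly the knob those E-core underclocking experiments were turning.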
I assume what he means to say is that its power draw characteristics are bad for most common workloads, in the sense that the power draw is far higher than it needs to be when running non-demanding applications. But I actively see no reason why a higher clock speed would be "bad for an application".
My interpretation was more that it was kneecapped by not including the cheap but decent E-cores. Not surprising: if it were 4+4, it would make the 6+0 i5 redundant.
I was wondering why the heck this review was submitted, and even upvoted, on HN. It's just another Alder Lake CPU, so what?
But once you mention it's a $143 part, with a cheaper version going for $122, this suddenly becomes newsworthy. You are getting the latest Intel uArch for only $122. This is looking like a very good chip for a low-end machine, especially for office work. And it suggests Raptor Lake may really deliver 10-15% more IPC, enough differentiation for the next gen.
Another important note is that Intel is doing this on its 7nm node, which suggests their 7nm yields must be exceptionally good, and their 4nm must be on track to reach market in 2H 2022.
Isn’t the latest Intel platform based on DDR5?
I don't know why someone who is budget-focused would opt to be an early adopter of a new platform with overpriced RAM instead of going for the EOL socket, where every component is mature and sensibly priced.
> "users are more likely from a cost perspective to build a system with one of the more affordable B660, H670, and H610 chipsets and pair that with DDR4 memory"
If their 4nm weren't doing well (or at least on schedule), pushing 7nm capacity towards a lower-end, lower-margin $122 chip wouldn't make any sense. Of course, that doesn't rule out the possibility of Intel being completely irrational.
They might just sell CPU dies where some cores aren't working.
Also, these chips matter for business customers, where lower-end designs dominate and the money is made by selling them in huge volumes.
Just out of curiosity, how do you know the die is designed with four cores? I would be interested to know which chips have cores disabled and which don't.
Who would have thought that Intel would be known as "the best cheap CPU" company? I think the last time I remember saying this was when the first Celerons came out.
And the Celeron 300A was definitely a contender for best cheap CPU during that pre-Athlon era, especially given how overclockable it was. Later there were also a couple of inexpensive Socket 370 motherboards which could accommodate two processors, so a few people had pretty kickass early dual-CPU setups for a good price.
I remember the lack of an L2 cache was a real pain. The ones you remember were the later models; the first model was a Slot 1 "cartridge". I still remember the legendary Abit BP6 motherboard.
Yeah, it's true those weren't the very first model. It seems the 300A came ~four months after the initial Celeron (the Celeron launched April '98; the 300A in August '98).
Test setup, as "suggested" by Intel marketing to AnandTech, consists of:
MSI Z690 $400
2x32 GB DDR5-4800 CL40 $500, but let's conservatively count $300 for a 2x16GB pair
You have to be positively insane to spend ~$700 on motherboard and RAM just to plop a 4-core CPU in there. Remove the advantage brought by pairing this CPU with the fastest RAM and chipset available and you end up with a Ryzen 3600 competitor.
Almost all reviewers do this. For a "fair" comparison they'll test all CPUs with the same high-end motherboard and GPU which can be unrepresentative of real PCs.
I don't know if that's the case with Intel, but I've extensively tested my own 5900X on both a £250 X570 motherboard and a £40 B450 motherboard (with the same RAM) and there was literally no difference whatsoever. No benchmark has shown any difference. You lose things like PCIe 4.0 connectivity in that scenario, but the CPU performance was identical.
edit: just wanted to add - yes the VRM section on the £250 mobo is vastly superior so I imagine overclocking isn't even a contest at all. But in stock form the 5900X performed equally well on both.
That makes sense though, doesn’t it? Otherwise you’re not just benchmarking CPUs but also motherboards, which is another variable and makes drawing conclusions about performance messy.
> You have to be positively insane to spend ~$700 on motherboard and RAM just to plop a 4-core CPU in there.
If you are testing a CPU, it makes sense to remove as many external bottlenecks as possible. It also makes it easier for anyone publishing benchmarks to have a motherboard that can take low end chips as well as HEDT ones without much configuration change.
Higher-tier chipsets have more PCIe lanes and allow overclocking. I would be very surprised if there is a large performance difference outside those two scenarios. DDR5 prices are inflated, and 16GB is sufficient for the use cases this CPU is suitable for.
Extra RAM and more PCIe lanes don't really help single core performance benchmarks that much. You can build something much more cost effective than the test system in the article.
True, but most people building a budget PC will buy this in a few years and get a used motherboard, RAM, etc. The CPU only needs to be in a few pre-built Dells and the like to get popular on the used market.
Not to mention the electricity savings you get with the AMD Ryzen competitor over time. I once calculated that over 3-4 years, the Intel CPU would cost double what the AMD would.
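For a rough sense of scale, a back-of-the-envelope sketch; the wattages, usage pattern, and electricity price below are illustrative assumptions, not measurements of any particular CPU:

    # Electricity cost difference between two desktops over 4 years.
    # All inputs are made-up round numbers; plug in your own.
    hours_per_day = 8
    years = 4
    price_per_kwh = 0.30                # in your local currency
    for name, avg_watts in [("60W average system", 60), ("100W average system", 100)]:
        kwh = avg_watts / 1000 * hours_per_day * 365 * years
        print(f"{name}: {kwh:.0f} kWh -> {kwh * price_per_kwh:.0f} in electricity")
    # -> ~701 kWh (~210) vs ~1168 kWh (~350) over 4 years

Whether that gap actually doubles the total cost depends heavily on your duty cycle and local rates.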
Not sure why you were downvoted, but this is true: desktop Ryzen chips have terrible idle power usage due to the IO die. The APU variants have lower IPC and performance due to less cache, but have good idle power usage. There's no way to get both good idle power and good IPC.
Right, but surely that just means the same power draw also exists with Intel chips, just located elsewhere on the board. Something has to do IO, after all.
Not quite true, because on-die data transfer is a lot cheaper than off-die. If you look at full-system power consumption even between chiplet-based and monolithic Ryzen products, there is a large difference.
A comparison is often made on the cost of electricity, and it's a valid one. But there's more.
How much space does this cost?
This is a desktop vs laptop, and/or a location vs location, argument.
A desktop case takes around 0.2 m² of floor space. Now, for simplicity's sake (San Francisco is often incorrectly assumed to be where most readers of HN live, but we're all somewhat versed in SF prices): a 1-bedroom, say 60 m², apartment costs about $3000 a month. So that desktop case is costing about $10 per month in floor space. That's $120 per year, or $600 over 5 years. Quickly adds up!
You could run the numbers for where you live (see the sketch below). While a desktop can be put under a desk and may not initially register as an explicit cost, everything has an opportunity cost: say, your legs, or a houseplant, neither of which costs anything in electricity or upfront.
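For anyone who wants to plug in their own numbers, the same back-of-the-envelope math as a tiny Python sketch (the figures are just the assumptions stated above):

    # Floor-space "rent" attributable to a desktop case.
    rent_per_month = 3000.0     # USD, the assumed SF 1-bedroom above
    apartment_m2 = 60.0
    case_footprint_m2 = 0.2     # assumed tower footprint
    monthly = rent_per_month / apartment_m2 * case_footprint_m2
    print(f"${monthly:.0f}/month, ${monthly * 12:.0f}/year, ${monthly * 60:.0f} over 5 years")
    # -> $10/month, $120/year, $600 over 5 years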
(Written on an 11 year old X220 with an i3 processor.)
Your post made me laugh out loud. Your rent is a sunk cost; it doesn't increase if you get a PC taking up 0.2 m² of floor space, and it's not like your PC is a car needing a dedicated garage for storage. Nor is it large enough that getting rid of the PC means you can suddenly downsize and move into a cheaper apartment to save money. So the rent-space cost of a PC is just bogus, unless of course you live in one of those spaceship pods you find in Tokyo hotels.
And in regards to opportunity costs, I'm sure that if you're mildly creative you can find some dead space under your desk, or between your desk and the wall, that can't be used for anything like a plant or your legs, or you can just hang the case from the desk so it doesn't take up any floor space. Come on, seriously, there are various mini-ITX case sizes and mounting options if you're short on space and have some imagination or willingness to google a bit.
Nevertheless, I'm still impressed by the lengths of the mental gymnastics involved in attaching a rent cost to a PC case. Quality satire like this is why I come to HN. Thank you.
This is a pretty nice part and dominates in what I would expect its target market to use it for (i.e. office desktops and gaming). It's really a shame it's so hard to pick this out from all the benchmark cases in this article, where it's compared against processors that cost 2x+ and have 2x the cores in multithreaded tests. For desktop (not workstation) use, ST perf is probably the number one dictator of perceived perf. Then, if it has enough cores to run most games without being core-bound, it's a winner.
These i3s have been pretty nice parts for the last half dozen+ generations; I find it hard to personally justify more except for workstation purposes. I.e. unless you're doing some kind of heavy science/engineering/movie-editing-style workload, you won't actually notice much benefit moving to a more costly machine. Put another way: either go for the top-of-the-line i9/Ryzen 9, or stick to these high-clock-rate, low-core-count machines. Until Intel removed ECC support, some of the i3 parts were absolutely fantastic home/microserver parts too.
The issue with i3s is that performance improvements have gotten slow enough that you might as well get a faster chip and upgrade less frequently. It's not just the cost of the chip and motherboard; the sheer hassle of swapping systems pushes you to wait longer.
But there aren't really "faster" processors for desktop use unless you pick the 5GHz ones, which are definitely pretty far into diminishing-returns territory. When you plot single-thread perf vs frequency it's not linear, so those 4.8GHz cores are only marginally faster at best, and some of the lower-end i5s are probably slower for any number of desktop tasks (web browsing, etc.) than the 12300.
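A crude way to see why: model single-thread runtime as a core-bound part that scales with clock plus a memory-stall part that doesn't. The 60/40 split below is an illustrative assumption, not measured data:

    # Toy model: perf vs clock when part of the runtime is memory stalls.
    def relative_perf(freq_ghz, core_frac=0.6, mem_frac=0.4, base_ghz=4.0):
        # time at base_ghz is normalized to core_frac + mem_frac = 1.0
        time = core_frac * (base_ghz / freq_ghz) + mem_frac
        return 1.0 / time

    for f in (4.0, 4.4, 4.8, 5.2):
        print(f"{f:.1f} GHz -> {relative_perf(f):.2f}x")
    # -> 1.00x, 1.06x, 1.11x, 1.16x: a 20% clock bump (4.0 -> 4.8) only
    # buys ~11% here, and the real curve flattens further as power soars.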
Anyway, you're completely right; for me the largest "cost" is the hundreds of hours of program installs and OS tweaking I've put into the current machine over the span of a few years. MS makes it really hard to get away with just swapping motherboards now, as you end up calling them to reactivate Windows on the new machine (and it's always a crapshoot, as it's basically licensed to the motherboard, not the user). There was a time when I was constantly rotating parts in and out of my PCs, but that stopped in the late XP timeframe due to MS being a PITA. So now, outside of RAM/GPU/SSD upgrades, they tend to remain fairly static. This one is pushing 6 years old at this point.
> it's basically licensed to the motherboard, not the user
You're right that you have to go through the reactivation dance, but doesn't that particular limitation apply only to OEM licenses that are included with a ready-built PC?
These are effectively locked to a limited number of hardware changes per time period before they force you to talk to someone at Microsoft to transfer the license to new hardware.
The alternative is to buy a Windows license from Microsoft directly, but the only direct option they offer consumers anymore is an account-bound license, so you'll need to set up a Microsoft account for the initial activation on new hardware (of course, if recent Insider builds are any indication, soon everyone will).
As the article points out, AMD has largely vacated that market. Stuff like the 1600AF or 3300X was rather poorly stocked even when it was sold, and now that AMD has claimed the more profitable performance segment of the DIY market, they don't feel the need to compete downmarket when they could be using their fab time to make Epyc CPUs and re-establish a foothold in the server market, or GPUs with their massive price inflation.
This might change now that Intel is competitive again, or if the shortages ease so they can't sell a GPU or server CPU simply by having one to sell, but it won't be instant.
Which leaves the 11400 as basically the only other CPU active in that price bracket.
Great to hear that Intel is finally competitive in this space again. If process improvements are coming on time (and they are, if Pat is to be believed), perhaps Intel can finally compete in low-power, high-perf versus M1.
Yeah, regardless of whose chips are fastest, it's really fantastic how closely competitive they all are currently. You can pick Intel or AMD or Apple, and have a fantastic CPU (and likely a decent iGPU too) whichever one you choose.
Disposable CPU sockets aren't exactly great. I don't care whose chips are faster if I have to throw out my motherboard alongside my CPU if I want more performance.
The Ryzen 5 1600 is nearly 5 years old (Apr 2017), and the current 5600X is a little older than 1 year (Nov 2020). The 5800X refresh (X3D) is announced for Q2 2022.
Now that's still a matter of perspective. If you got an early low-end CPU like a 1600 or even a 1600AF (got mine Jan 2020) and can now upgrade to a more recent, more high-end CPU like e.g. a 5600X or even 5900X the jump is pretty nice - and you get to keep your old board+RAM. Depending on your future computational requirements that system could be useful for quite a while.
If you jump on that 5000-series CPU coming from something like Ivy Bridge, you'll still be able to use the system for the same time, but you will only have used the MoBo+RAM for a single generation. In that case the total amount of usefulness you get for your money is less, and you might be better off waiting for the upcoming socket.
I'm most curious about one thing: is there an actual difference on the die between the performance and efficiency cores?
Or is this just fancy binning? If you get a die with 10 working cores, do you assign the weakest 4 as E-cores and the best 6 as P-cores? This would be a great value proposition for Intel and the customer: cores that aren't up to snuff are not a complete write-off for Intel like in previous designs.
According to this (https://www.techpowerup.com/review/intel-core-i3-12300/2.htm...), the i3 uses a different die ("H0" vs the "C0" of the 12900K). The "H0" supposedly has 6 P-cores and no E-cores. This meshes well with Intel's product stack - everything 12600 and down is maxed out at 6P+0E.
Is this better than a Ryzen 4650G or similar products? I am in the market for a desktop "office server" that would do local/remote RDP hosting for around 8-10 people.
My current machine is a Ryzen 3600 and I want a duplicate. I am more inclined to go with Ryzen because the motherboard compatibility is much better.
I have a 3900X do-everything machine and have been loosely keeping track of comparable hardware from Intel. The 3900X is great!
With Alder Lake, Intel’s power efficiency is finally similar to AMD’s 7nm Zen hardware. That’s it. As far as I can tell.
I’d be willing to go for another AMD box in most cases. Especially as things work the same. Same already-known fixes to issues. Consistency.
Reasons for Intel:
1: Desire to run a 100% compatible Hackintosh setup. Not sure if Alder Lake is there yet. (AMD works well though! And I’d always rather have an M1 than x86.)
2: Interest in getting a feel for the difference.
—
That said, do you have a feeling for how an upgrade to the current machine would work for you? Double the cores, memory, drives and you get an almost-identical machine compared to 2 boxes. Less to manage but also fewer points of failure I guess.
M1 Mac Minis are cheaper than a CPU upgrade in many cases and offer native features and performance. I went almost a decade without recommending anything Apple, but the latest lineup is fantastic.
Not including work issued devices, I've got all AMD hardware right now. Anything prior to Intel 10th gen suffered from needing performance-tanking mitigations for their security design, and I haven't spent enough time digging in to find out if they've actually fixed it or are cheating again somehow. That was an expensive disaster, and we had to lifecycle a lot of Intel hardware early to make up for it.
Although I wish Epyc had been more available at the time, I'll say that replacing my PowerEdges with a single Threadripper has been worth it. I'd probably build around a high end Ryzen if I did it again, but there were conveniences to having NUMA on a single die. Just need an ASRock Rack motherboard if you want IPMI.
My upgrade path looks to be a memory upgrade first, and then any upgraded AM4 processor, whatever is available at the time. Say 1-2 years for the memory and maybe more for the processor, I guess.
This is an office setup where workloads are well defined and there isn't much variation in load. The machine is supposed to start and just run. When there are a lot of concurrent RDP users, file IO tends to suffer a bit, but since moving over to NVMe drives I have seen much improvement.
Alder Lake is really, really competitive (arguably blowing AMD out of the water here), but no one actually reads these articles before screaming about how far Intel is behind AMD.
But that's by no means cheap. The 12100/12300F CPUs should have been paired with mobos that cost €60-70 tops. Instead you have to get a mobo which costs as much as or more than the CPU itself.
That's crazy compared to AMD, though. You can get a £40 B450 board for AMD and it will run anything up to and including the 5950X without any issues. That's where the problem is: you save 100 on the CPU, but spend 100 more on the motherboard.
No one should be purchasing an H610 without being fully cognizant of what they are giving up: namely, a drastic reduction in PCIe lanes, zero USB 3.2 20Gbps ports, and no memory overclocking, which in Intel land usually means no XMP.
If you are buying a $120 CPU for a budget build (which, realistically, is what most people going for an i3 are building), then you don't need those features. You can have a dirt-cheap CPU with a dirt-cheap mobo and splurge on the best GPU you can fit in your budget.
It would make no sense to buy an i3 and then pair it with expensive memory. 3200MHz is plenty.
I doubted it for a second, but it seems that H610 does support XMP, just only up to the CPU's max supported speed (3200MHz for Alder Lake), which means it should work as intended.
I could be completely off base here, but from what I have read, H610 does not allow setting any memory timings or speeds outside of the JEDEC profiles; XMP is not JEDEC, so it doesn't count.
On some 3200MHz G.Skill RAM I have in a desktop in front of me, that means an H610 would only run this kit at 1067MHz @ 15-15-15-36-50, instead of the XMP profile I run now, which is 1600MHz @ 16-18-18-38-56.
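For what that difference is worth in actual latency, a quick back-of-the-envelope using the standard first-word formula (CAS latency divided by the real memory clock), with the two profiles quoted above:

    # First-word latency = CAS latency / memory clock.
    # The speeds above are real clock: 1067MHz = DDR4-2133, 1600MHz = DDR4-3200.
    profiles = {
        "JEDEC 2133 CL15": (15, 1067e6),
        "XMP   3200 CL16": (16, 1600e6),
    }
    for name, (cl, clock_hz) in profiles.items():
        print(f"{name}: {cl / clock_hz * 1e9:.1f} ns first-word latency")
    # -> 14.1 ns vs 10.0 ns, plus ~50% more bandwidth at the higher rate

So the JEDEC fallback isn't just slower on paper; you lose both latency and bandwidth.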
I agree that it's very hard for consumers to understand. But to be honest, how often do you need high-speed USB? How many >5Gbps thumb drives exist? And even if they do exist, how often are you transferring enough data to care?
You're still talking about minutes at most to dump your whole phone.
Backup hard drives are the main place where it matters. I transfer my roughly 500GB home folder about once a month over USB; that takes a while.
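Rough numbers, assuming ~80% of the nominal line rate is achievable (real transfers are usually limited by the drive, not the bus):

    # Best-case time to copy a 500GB folder at common USB line rates.
    size_gb = 500
    for name, gbps in [("5Gbps", 5), ("10Gbps", 10), ("20Gbps", 20)]:
        throughput = gbps / 8 * 0.8        # bits -> bytes, 80% efficiency
        print(f"USB {name}: ~{size_gb / throughput / 60:.0f} min")
    # -> ~17 min, ~8 min, ~4 min, if the disk on each end can keep up

So the faster ports only pay off once the drives themselves are fast enough to saturate them.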