Don't get me wrong: It's impressive and I have huge respect for it. I also bought one.
However, it would be surprising if Apple's new 5nm chip didn't beat AMD's older 7nm chip at this point. Apple specifically bought out all of TSMC's 5nm capacity for themselves while AMD was stuck on 7nm (for now).
It will be interesting to see how AMD's new 6000 series mobile chips perform. According to rumors they might be launched in the next few months.
This definitely is a factor. Another thing that people frequently overlook is how competitive Zen 2 is with M1: the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance.
Make no mistake, the M1 is a truly solid processor. It has seriously stiff competition though, and I get the feeling x86 won't be dead for another half decade or so. By then, Apple will be competing with RISC-V desktop processors with 10x the performance-per-watt, and once again they'll inevitably shift their success metrics to some other arbitrary number ("The 2031 Macbook Pro Max XS has the highest dollars-per-keycap ratio out of any of the competing Windows machines we could find!")
> This definitely is a factor. Another thing that people frequently overlook is how competitive Zen 2 is with M1: the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance.
It's a bit unfair to compare the multicore performance of a chip with 8 cores firing full blast against another 8-core chip where half of the cores are efficiency cores.
The M1 Max (with 8 performance cores) multicore performance score posted on Geekbench is nearly double the top posted multicore performance scores of the 5800U and 4800U (let alone single core, which the original M1 already managed to dominate).
It'll be interesting to see how it goes in terms of performance per watt, which is what really matters in this product segment. The graphs Apple presented indicated that this new lineup will be less efficient than the original M1 at lower power levels, but that it will hit it out of the park at higher power levels. We'll have to wait for the results from the likes of AnandTech to get the full story, though.
Personally, I'd love to see a MacBook 14 SE with an O.G. M1, 32 GB memory, a crappy 720p webcam, and no notch. I'd buy as many of those as they'd sell me.
I'm curious to see how the M1 Pro compares to the M1 Max. They are both very similar processors with the main differences being the size of the integrated GPU and the memory bandwidth available.
The M1 Max isn't competing with the 4800u, considering that its starting price is ~10x that of the Lenovo Ideapad most people will be benching their Ryzen chips with. I don't think it's unfair to compare it with the M1, since that's still more expensive than the aforementioned Ideapad. Oh, and the 4800u came out 18 months before the M1 Air even hit shelves. Seeing as they're both entry-level consumer laptops, what might you have preferred to compare it with? Maybe a Ryzen 9 that would be more commonplace in $1k+ laptops?
It's hard to compare CPUs based on the price of the products they're packaged in. There are obviously a lot of other bits and bobs that go into them. However, it's worth noting that a MacBook Air is $300 cheaper than the list price for the Ideapad 4800U with an equivalent RAM and storage configuration. So by your logic, is it fair to compare the two? Perhaps a 4700U-based laptop would be a fairer comparison?
The gap between the Ideapad 4800U and the base model 14 inch MacBook Pro is a bit wider, but you'll also get a better display panel and LPDDR5-6400[1] memory.
We'll have to see how the lower specced M1 Pros perform, but it's hardly clear cut.
Edit: I just looked up the price of the cheapest MacBook Pro with an M1 Max processor and it's about 70% more expensive than the Ideapad 4800U. However, it comes with double the memory (and much higher-quality memory at that), a better display, and roughly 70% better multithreaded performance in Geekbench workloads. Furthermore, you may get very similar performance on CPU-bound workloads with the 10-core M1 Pro, the cheapest of which is only 52% more expensive than the Ideapad 4800U.
AMD is already planning not only 20% YoY performance improvements for x86 but now has a 30x efficiency plan for 2025.
I think x86 is in it for much longer than a decade.
Intel, OTOH, depends on whether they can gut all the MBAs infesting them.
The 30x efficiency goal is specifically for 16-bit FP DGEMM operations, and it is in the context of an HPC compute node including GPUs and any other accelerators or fixed-function units.
For general-purpose compute, no such luck unfortunately. Performance and efficiency follow process technology to a first-order approximation.
Also bear in mind that AMD's standards for these "challenges" have always involved some "funny math", like their previous 25x20 goal, where they considered a 5.02x average gain in performance (10x in CB R15 and 2.5x in 3DMark 11) at iso-power (same TDP) to be a "32x efficiency gain" because they divided it by idle power or some shit like that.
But a 5x average performance gain at the same TDP doesn't mean you do 32x as much computation for the same amount of power. Except in AMD marketing world. But it sounds good!
Like, even bearing in mind that that's coming from a Bulldozer derivative on GF 32nm (which is probably more like Intel 40nm), a 5x gain in actual computation efficiency is still a lot, and it's actually even more in CPU-based workloads, but AMD marketing can't help but stretch the truth with these "challenges".
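To make the 25x20 arithmetic concrete, here's a back-of-the-envelope sketch. The 6.4x figure is an assumed illustration of how a "typical-use energy" reduction (dominated by idle power) inflates the headline number; it is not AMD's published breakdown:

```python
perf_gain = 5.0                 # ~5x more work per unit time at the same TDP
typical_energy_reduction = 6.4  # assumed drop in "typical-use" energy (mostly idle)

marketing_efficiency = perf_gain * typical_energy_reduction  # 5.0 * 6.4 = 32x
loaded_perf_per_watt = perf_gain                             # work per joule under load: 5x

print(f"headline 'efficiency gain': {marketing_efficiency:.0f}x")
print(f"perf/watt at iso-power:     {loaded_perf_per_watt:.0f}x")
```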
To be fair idle power is really important for a lot of use cases.
In a compute focused cloud environment you might be able to have most of your hardware pegged by compute most of the time, but outside of that CPUs spend most of their time either very far under 100% capacity, or totally idle.
In order to actually calculate real efficiency gains you'd probably have to measure power usage under various scenarios though, not just whatever weird math they did here.
That's not really being fair, because the metric is presented to look like traditional perf/watt. And idle is not so important in supercomputers and cloud compute nodes which get optimized to keep them busy at all costs. But even in cases where it is important, averaging between the two might be reasonable but multiplying the loaded efficiency with the idle efficiency increase is ludicrous. A meaningless unit.
I can't see any possible charitable explanation for this stupidity. MBAs and marketing department run amok.
Yep 100% agree with you - see my last sentence. Just trying to clarify that the issue here isn't that idle power consumption isn't important, it's the nonsense math.
Wow that's stupid, I didn't look that closely. So it's really a 5x perf/watt improvement. I assume it will be the same deal for this, around 5-6x perf/watt improvement. Which does make more sense, FP16 should already be pretty well optimized on GPUs today so 30x would be a huge stretch or else require specific fixed function units.
it's an odd coincidence (there's no reason this number would be related, there's no idle power factor here or anything) but 5x also happens to be about the expected gain from NVIDIA's tensor core implementation in real-world code afaik. Sure they advertise a much higher number but that's a microbenchmark looking at just that specific bit of the code and not the program as a whole.
it's possible that the implication here is similar, that AMD does a tensor accelerator or something and they hit "30x" but you end up with similar speedups to NVIDIA's tensor accelerator implementation.
I've seen tensor cores really shining in... tensor operations. If your workload can be expressed as convolutions, and it matches the dimension and batching requirements of the tensor cores, there's a world of wild performance out there...
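A minimal PyTorch sketch of that dimension/batching point, assuming an NVIDIA GPU with tensor cores is available; the shapes are arbitrary, just chosen so the channel counts are multiples of 8, which is what lets cuDNN pick FP16 tensor-core kernels:

```python
import torch
import torch.nn.functional as F

# Arbitrary shapes; channel counts are multiples of 8 on purpose.
x = torch.randn(32, 64, 56, 56, device="cuda")   # batch, channels, height, width
w = torch.randn(128, 64, 3, 3, device="cuda")    # out_channels, in_channels, kH, kW

y_fp32 = F.conv2d(x, w, padding=1)               # runs on the regular CUDA cores

# With FP16 inputs and tensor-core-friendly dimensions, cuDNN can dispatch the
# same convolution to the tensor cores.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    y_fp16 = F.conv2d(x, w, padding=1)

print(y_fp32.dtype, y_fp16.dtype)                # torch.float32 torch.float16
```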
> the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance
I had a Lenovo ThinkPad with a 4750U (which is very close to the 4800U) and the M1 is definitely quite a bit faster in parallel builds. This is supported by GeekBench scores, the 4800U scores 1028/5894, while the M1 scores 1744/7600.
If AMD had access to 5nm, the CPUs would probably be more or less on par. Well, unless you look at things like matrix multiplication, where even a 3700X has trouble keeping up with the Apple AMX co-processor with all 8 of the 3700X's cores fully loaded.
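If you want to see that for yourself, a plain SGEMM throughput test is enough. This assumes a NumPy build linked against Apple's Accelerate framework on macOS (which routes large matmuls through the AMX units); on an x86 box the same script just measures whatever BLAS NumPy was built with:

```python
import time
import numpy as np

n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b  # warm-up so first-call overhead doesn't skew the timing

reps = 20
t0 = time.perf_counter()
for _ in range(reps):
    a @ b
dt = (time.perf_counter() - t0) / reps

gflops = 2 * n ** 3 / dt / 1e9  # an n×n matmul costs ~2*n^3 floating-point ops
print(f"{gflops:.1f} GFLOP/s")
```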
But tbh, it doesn't seem like the new hotness in chips is single-core CPU performance; it's about how you spend the die space on custom accelerators, in which case the M1 will always be tailored to Apple's (and presumably Mac users') specific use cases...
My 5950x compiles the Linux kernel 10 seconds faster than a 40 core xeon box, in like 1/10th the power envelope.
The chances of actually getting only a single core working on something are slim with multitasking. I had to purpose-build stuff - hardware, kernel, etc. - for CPU mining half a decade ago to eliminate thermal throttling and pre-emption on single-threaded miners.
Single-thread performance has been stagnant forever because, with Firefox/Chrome and whatever the new "browser app" hotness is this month, you're going to be using more than 1 core virtually 100% of the time, so why target that. Smaller die features mean lower TDP, which means less throttling, which means a faster overall user experience.
I'm glad someone is calling out the M1 performance finally.
You actually get better single-core performance out of a 5900X than a 5950X. Generally, the more cores an AMD CPU has, the more its top-end clocks are constrained. In this case, the 5950X's base clock is 3.4GHz and the 5900X's is 3.7GHz. The 5800X is even slightly faster than that, and there are some Geekbench results that show single-core performance faster than the score listed here, but at that point the multi-core performance isn't winning anymore.
Also, I'm not sure what's up with Geekbench results on MacOS, but here's a 5950x in an iMac Pro that trounces all the results we've mentioned here somehow.[1]
>that trounces all the results we've mentioned here somehow.[1]
macOS, being Unix-based, has a decent thread scheduler - unlike Windows 10/11, which is based on 32-bit Windows NT and never cared about anything other than single-core performance until very, very recently.
If that's true it puts a lot of comparisons into question. That windows multiprocessing isn't as good as MacOS doesn't matter to a lot of people that run neither. There's not a lot of point in using these benchmarks to say something about the hardware if the software above it but below the benchmark can cause it to be off by such a large amount.
Most comparisons have always been questionable. The main reason Apple gets away with charging so much more for similar hardware and still dominates the productivity market is that macOS squeezes so much more performance (and stability) out of equivalent hardware.
Just check the Geekbench top multithread scores: Windows starts around the 39th _page_ - and that's for Windows Enterprise.
They look to be on par to me. Things will be less murky if and when Apple finally scales this thing up to 100+ watt desktop-class machines (Mac Pro) and AMD moves to the next-gen / latest TSMC process.
In my view, Intel and AMD have more incentive to regain the performance crown than Apple does to maintain it. At some point Apple will go back to focusing on other areas in their marketing.
It wouldn't be a surprise that a CPU which can't run most of the software out there (because that software is x86), and which ditched all compatibility and started from a clean-slate design, can beat last-gen CPUs from competitors - for specific apps and workloads, for which it has accelerators.
But here I am, with a pretty thin and very durable laptop that has a 6 core Xeon in it. It gets hot, it has huge fans, and it completely obliterates any M1 laptop. I don't mean it's twice as fast. I mean things run at 5x or faster vs an M1.
Now, this is a new version of the M1, but it's an incremental 1-year improvement. It'll be very slightly faster than the old gen. By ditching Intel, what Apple did was ensure their pro line - which is about power, not mobility - is no longer a competitor, and never will be. Because when you want a super fast chip, you don't design up from a freaking cell phone CPU. You design down from a server CPU. You know, to get actual work done, professionally. But yeah, I do see their pro battery is 11 hours while mine usually dies at 9. Interesting how I have my computer plugged in most of the time, though...
>Because when you want a super fast chip, you don't design up from a freaking cell phone CPU. You design down from a server CPU.
Is that really true? I don't have any intricate chip knowledge, but it rings false. Whether ARM is coming from a phone background or the Xeon from a server background, what matters in the end is the actual chip used. Maybe phone-derived chips even have an advantage because they are designed to conserve power whereas server chips are designed to harvest every little ounce of performance. IDK a lot about power states in server chips, but it would make sense if they aren't as adapted to rapidly step down power use as a phone chip.
Now, you might be happy with a hot leaf-blower and that's fine. But I would say the market is elsewhere: silent, long-running, light notebooks that can throw around performance if need be. You strike me as an outlier.
Pro laptops should have a beefy CPU, great screen, really fast SSD, long battery life, and lots of RAM, which (presumably) your notebook features, but the new M-somethings seemingly do as well. In the end, though, people buy laptops so they can use them on their lap occasionally. And I know my HP is getting uncomfortably hot; the same was said about the Intel laptops from Apple, I think.
Apple doesn't need to have the one fastest laptop out there, they need a credible claim to punching in the upper performance echelon - and I think with their M* family, they are there.
You actually have it correct. When you start with an instruction set designed to conserve power, you don't get "max power." The server chips were designed with zero power considerations in mind - the solution to "too much power" is simply "slap a house-sized heatsink on it."
>Apple doesn't need to have the one fastest laptop out there
Correct. My complaint, which I have reiterated about 50 times to shiny iPhone idiots on here who don't do any real number crunching for work, is that Apple calls "pro" what the industry calls "mid tier" - Apple is deceiving the consumer with marketing. The new laptops compete with Dell's Latitude and XPS lines, not their pro lines. Those pro laptops weigh 7lb and have a huge, loud fan exhaust on the back so they can clock at 5GHz for an hour. They have 128GB of RAM - ECC RAM, because if you have that much RAM without ECC, you have a high chance of bit errors.
There are many things you can do to speed up your stuff if you waste electricity. The issue is not that Apple doesn't make a good laptop. It's that they're lying to the consumer. As always. Do you remember when they marketed their acrylic little cube mini-desktop? It was "a supercomputer." They do this as a permanent tactic - sell overpriced, underperforming things and lie with marketing. Like using industry-standard terms to describe things not up to that standard.
I’ll happily take my quiet, small, and cool MacBook and number crunch in a data center infinitely more powerful than your laptop. Guess that makes me a shiny iPhone idiot.
Relax, no one is forcing you to use Apple products.
I love how you added "laptop" to make your statement... still false. There is a program running on macOS that literally recompiles x86 binaries to ARM, and then the M1 executes the ARM code. The M1 does not execute x86 binaries. Period. It only runs ARM binaries.
No, parent comment isn't false, even if the wording could be more precise. It is true that M1 CPUs do not execute x86 instructions, but the machines do, in effect, execute x86 binaries. Also, M1 does have added instructions for TSO to improve performance of translated code.
Hipster graphic designers make upwards of 150,000 a year in my area. The professional in pro, never actually meant “software engineer”. It meant anyone who can hang up their signboard and work on their own: lawyers, doctors, architects, and yes… graphic designers.
Personally, I think software engineers don’t need fast laptops either. We need mainframes and fast local networks. Nothing beats compiling at the speed of 192 cores at once.
Which reminds me, laptops and render farms is exactly the technique those hipster graphic designers you talked about are using so they aren’t missing out on any power.
which is the top of their salary ceiling, and it's not a high number, like at all. the top number for software devs is about 700k. In my field, people make 200k+. But we're not talking about "pro" people. We're talking about a "pro" laptop. It's the best thing that apple makes - that doesn't make it "pro." It's got the performance of the midrange systems from everyone else.
>I think software engineers don’t need fast laptops either.
yeah, when I run a script to read a few gig of performance data and do a bunch of calculations on it, I need a fast laptop. Until that's done, I'm sitting there, CPU maxed out, not able to do anything else. With an M1, I have to arrange my schedule to process the dataset overnight. With a Dell I run it over lunch. Case closed.
>We need mainframes and fast local networks. Nothing beats compiling at the speed of 192 cores at once.
I'm not a software engineer anymore. When I was, no, I did not usually compile on the server. I compiled on my workstation. Because you're not on a fast local network. You're at an airport for 4 hours, or on a plane for 5 hours, or on a comcast connection at your house.
Rosetta 2 kicks in, performs a JIT/AOT translation of the x86 instructions to ARM instructions, executes those, and caches the resulting ARM binary for later use.
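Incidentally, a process can ask whether it's being translated; macOS exposes this via the `sysctl.proc_translated` key. A minimal sketch (macOS only, works from either a native or a translated Python):

```python
import ctypes

libc = ctypes.CDLL(None)  # libc symbols are visible in the current process on macOS

def is_translated() -> bool:
    val = ctypes.c_int(0)
    size = ctypes.c_size_t(ctypes.sizeof(val))
    ret = libc.sysctlbyname(b"sysctl.proc_translated",
                            ctypes.byref(val), ctypes.byref(size),
                            None, ctypes.c_size_t(0))
    # The key doesn't exist on Intel Macs, so a -1 return simply means "no".
    return ret == 0 and val.value == 1

print("running under Rosetta 2:", is_translated())
```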
Please stop being so hostile to other users. It really doesn't add anything. You have made some factually questionable comments yourself, and I say this as someone who has worked on JIT aarch64 translation very similar to Rosetta 2.
Right? I’m curious what pros do "at the Indy 500".
Most devs where I work use 15” Macs and probably a blend of apps from the JetBrains Toolbox. Mostly connected to power outlets, to be fair.
So we’re talking local installs of Spring Boot Java servers, a front-end web server, and an IDE to work on one of those, because opening a second one on an Intel Mac will either run the dev out of RAM or the heat will cause a shutdown.
The thing is, the corporate Dell Windows machines available were largely unsuitable for dev work due to the trashy interfaces (low-resolution screens, bad trackpads, battery life so bad you can’t make it through a 2-hour meeting undocked). The Windows laptops available really failed hard when they needed to be laptops.
It's fine to work on battery sometimes. Except after 5 hours the marginal utility decreases, and past 8 hours it goes to zero. Why would I need more than one day?
Because you're a "dev" who makes web pages for a company that can't afford an Oracle license, and your office is a Starbucks. But you want to call your little toy a pro, because to non-programmers, you make missile guidance systems. Well, not you. But the other few people on this thread.
Yes, using a laptop for over 10 hours on battery is not for people who do any serious work needing a pro laptop - what in professional circles is called a workstation. Glad you understand. Note Apple's stated hours: 11 hours while browsing the internet, and 17 hours for watching videos. If this is your use case, you are not the target market for a workstation. Apple sells "pro" laptops like Kia sells racing cars.
> But here I am, with a pretty thin and very durable laptop that has a 6 core Xeon in it. It gets hot, it has huge fans, and it completely obliterates any M1 laptop. I don't mean it's twice as fast. I mean things run at 5x or faster vs an M1.
Probably not faster than an M1 Pro and definitely not faster than the M1 Max.
Your machine doesn't have a 512-bit wide memory interface running at over 400GB/s.
Does the Xeon processor in your laptop have 192KB of instruction cache and 24MB of L2 cache?
Every ARM instruction is the same size, which makes it easy to keep many instructions in flight at once; on x86-64, instructions vary in size, so the decoder can't keep nearly as many in flight.
Apples to apples: at the same clock frequency, an M1 has higher throughput than a Xeon and most any other x86 chip. This is basic RISC vs. CISC stuff that's been true forever. It's especially true now that increases in clock speed have dramatically slowed and the only way to get significantly more performance is by adding more cores.
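To make the fixed- vs. variable-width point concrete, here's a tiny sketch using the Capstone disassembler (`pip install capstone`); the byte strings are hand-picked examples, a few x86-64 instructions of different lengths versus ARM64 instructions that are always 4 bytes:

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_64, CS_ARCH_ARM64, CS_MODE_ARM

# push rbp; mov rbp, rsp; movabs rax, 1  ->  1, 3 and 10 bytes long
x86_code = b"\x55" + b"\x48\x89\xe5" + b"\x48\xb8\x01\x00\x00\x00\x00\x00\x00\x00"
# stp x29, x30, [sp, #-16]!; mov x29, sp  ->  always 4 bytes each
arm_code = b"\xfd\x7b\xbf\xa9" + b"\xfd\x03\x00\x91"

for name, arch, mode, code in [("x86-64", CS_ARCH_X86, CS_MODE_64, x86_code),
                               ("arm64 ", CS_ARCH_ARM64, CS_MODE_ARM, arm_code)]:
    md = Cs(arch, mode)
    for insn in md.disasm(code, 0x1000):
        print(f"{name}: {insn.size:2d} bytes   {insn.mnemonic} {insn.op_str}")
```

A wide decoder can slice a fixed-width stream at known offsets every cycle; with x86 it first has to find where each instruction ends.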
On just raw performance, I'd take the 8 high-performance cores in an M1 Pro vs. the 6 cores in your Xeon any day of the week and twice on Sunday.
And of course, when it comes to performance per watt, there's no comparison and that's really the story here.
> Now, this is a new version of the M1, but it's an incremental 1-year improvement.
If you read AnandTech [1] on this, you'll see this is not the case—there have been huge jumps in several areas.
Incremental would have resulted in the same memory bandwidth with faster cores. And 6 high-performance cores vs. the 4 in the original M1.
Except Apple didn't do that: they doubled the number to 8 high-performance cores and doubled the memory width, etc. There were 8 GPU cores on the original M1, and now you can get up to 32!
Apple stated the Pro and the Max have 1.7x the CPU performance of Intel's 8-core Core i7-11800H with 70% lower power consumption. There's nothing incremental about that.
> By ditching Intel, what Apple did was ensure their pro line - which is about power, not mobility - is no longer a competitor, and never will be.
Pro can mean different things to different people. For professional content creators, these new laptops are super professional. Someone could take off from NYC and fly all the way to LA while working on a 16-inch MacBook Pro with a 120Hz mini-LED, 7.7-million-pixel screen that can display a billion colors, editing 4K or 8K video - on battery only.
If you were on the same flight working on the same content, you'd be out of power long before you crossed the Mississippi while the Mac guy is still working. At half the weight of your laptop but a dramatically better display and performance when it comes to video editing and rendering multiple streams of HDR video.
The 16-inch model has 21 hours of video playback which probably comes in handy in many use cases.
Here's a video of a person using the first generation, 8 GB RAM M1 Mac to edit 8K video; the new machines are much more capable: https://youtu.be/HxH3RabNWfE.
That's only if you define efficiency as compute per watt. I don't give a flying crap about watts. Efficiency is measured as the amount of work done per hour, because I get paid for the work, and then I pay the two dollars a week for the watts. I don't care if it's five dollars a week for the watts or two. I do care if it's two hours of waiting time versus five.
lol no. The $20/month cost of electricity is a rounding error next to my $6k laptop and the $50k of software licenses for it. It's even less of a rounding error for the datacenter, where a $500k ESX farm with several million in software on it uses $5k of electricity per month, including cooling.
Have you noticed almost no one uses ARM? There's a reason for that, including software being licensed per core, so faster, hotter cores - and fewer of them - win.
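For what it's worth, the electricity claim is easy to sanity-check with made-up but plausible numbers (60W average draw, 45 hours a week, $0.15/kWh, all assumptions):

```python
avg_watts, hours_per_week, usd_per_kwh = 60, 45, 0.15  # assumed numbers

weekly = avg_watts / 1000 * hours_per_week * usd_per_kwh
print(f"~${weekly:.2f}/week, ~${weekly * 52:.0f}/year")  # about $0.41/week, ~$21/year
```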
I also have a Xeon laptop (a 45W TDP E3-1505M v6 in a Dell Precision).
Xeons are not magically faster than their i7/i9 counterparts (mine is no faster than an i7-7820HQ, its contemporary flagship high-performance mobile CPU). In fact, they can be slower, because the emphasis is on correctness and multi-core, not single-thread performance.
Xeons are also slower than modern AMD chips, which can also have more cores.
5x is a performance claim that doesn’t add up - unless you have a desktop-class 145W/165W CPU, in which case it’s not going to get 9 hours of battery unless you’re not actually touching the CPU. More like 30 minutes of heavy use on the largest battery legally allowed in laptops.
Edit: I just took a quick snoop on Geekbench and found a modern Xeon W in the largest Dell Precision laptop available:
Synthetic scores aren’t everything, but I’m hard-pressed to see how you can get 5x the performance out of a chip that scores almost exactly half. Even with hardware acceleration like AVX-512 (which causes a CPU to get so hot it throttles below baseline, even on servers with more than adequate cooling).
My experience as well. I, too, have a Dell Precision with an 8-core Xeon part, and while it looks decent, it's heavy and not noticeably faster than the M1 I replaced it with when it came out. The Xeon would get hot and noisy when running Teams or Hangouts. It has sat in my drawer for the last year or so.
The M1 does not. Code compiles about as fast. The battery lasts a 3-day business trip or a hackathon without charging. I've never heard its fan. I don't care much about brands, but the lightweight, fast-enough, and well-built M1 is praiseworthy. I'm not getting the Pro or Max, as the benefits for me as a software dev are probably not worth the extra weight and power consumption.
Citation please on "the Xeon Dell smokes the M1 Air" - Geekbench says the M1 Air can be twice as fast.
All other things being equal: your statement is simply not true.
I just checked and I can’t find a mobile Xeon with a TDP greater than 45W, so you’re stuck with that Geekbench score, because that’s essentially as good as it gets for a modern mobile Xeon.
Xeons, FWIW, are just higher-binned i7s and i9s with features still enabled. The reason they can be slower than i7s and i9s is that the memory controller has to do more work, and the way Intel does multi-core (essentially a ring bus) doesn’t always scale gracefully.
All things are not equal though. Geekbench includes many things that run on the GPU - video encoding and playback, rendering web pages - heck, even your window manager mostly uses the GPU. The Dell has a low-power, low-performance GPU. To use the second one - an NVIDIA RTX, which is literally the fastest thing you can put in a laptop - you have to explicitly tell your OS to use that GPU for a program; it defaults to the low-power one.
In summary, you are full of crap if an untuned, blind Geekbench score - an aggregation of a whole bunch of tests run with defaults - is what you're going by. My statement is true: I kick off the same data processing script on my laptop and it finishes over lunch, while my coworker kicks it off overnight.
> Xeon with a greater TDP than 45w
Yes, the Xeon W-11955M in the Dell is 45W. Now add the RTX GPU - which, coincidentally, will be doing most of the work. Unless you're running the Geekbench test you're referring to, to purposely gimp the results. That Intel integrated graphics chip uses almost no power.
Go process a large dataset and do some calculations on it. Run a bunch of VMs while you're doing it too - let's say 3. Give each one 32GB of memory. It had better be ECC memory too, or your data won't be reliable. Maybe in about 5 years, when Apple catches up to the current pro laptops, you'll be able to. This is why all the M1 comparisons they do are to previous-generation Intel chips in their old laptops, which have always been slow. Apple has always used outdated hardware, in everything they've ever made.
I guarantee you, an M1 is about as fast as your "6 core xeon" laptop. M1 Pro/Max will steamroll it. You can look at Cinebench, LLVM compile, Java benchmarks, etc. You're completely delusional claiming your laptop is 5x faster in CPU. Mocking a "phone CPU" when the A15 is actually faster than a 5950X in single core performance shows you don't know what you're talking about.
You probably wouldn’t have gotten downvoted as much if people hadn’t (ironically?) read the part you said they wouldn’t read :p And I want to mention that I agree with you about apple not selling to pros very well over the past 6 years.
Your laptop is definitely very capable. But it’s barely a laptop. Why not build out a proper desktop? This precision would be a pain in the ass to carry around for most people, especially travel. Dell made sacrifices to get that kind of power: size, cooling, and battery life. Those are actually meaningful things when it comes to a laptop for most people, even pros.
I think the fact that you’re even mentioning a MacBook *Air* in the same sentence is very good for the Air. The M1 hasn’t really been marketed to the pro market until the recent release.
Also, 5x faster at what? The M1 is about the same performance as a W-11855M at single core cinebench, and only 25% slower at multicore. So comparison to the M1 pro/max is not very promising for the Xeon chip.
Engineers in the field, on oil rigs, doing CAD, simulations, etc., need huge, heavy desktop-replacement laptops. The licensing cost of the software they use is usually above $100,000, and it's only certified for RHEL or Windows.
Even at $8k, the laptop is often below 5% of the budget.
It's because you're using Geekbench for your numbers, which is a combination of misleading stats. It includes things like "encoding video" and "rendering HTML" - things the M1 has specific optimizations for, which in the real world are done by the NVIDIA RTX on my Dell with the CPU sitting at under 1% utilization. Yes, if you take these tasks, which in the real world don't use the CPU at all, and run them on a CPU with special accelerators for these useless tasks, the CPU designed specifically to game Geekbench metrics will win. In the real world, I've got a multi-gig dataset I need to process and do calculations on. Go put a database on the M1 and see if it beats a Xeon. Or for an easier test, just load up Excel with lots of functions and lots of calculated rows and columns. Run that while running a couple of VMs for virtual appliances on your laptop too (128GB of ECC RAM helps with that).
You're literally here saying the M1 is going to replace a server chip. Newsflash: the M1 doesn't even run any of the needed code, because it only runs ARM code - a small niche.
In fact, I'm not sure you can even order a Xeon Precision without ECC RAM. But I'm not here to do your research for you on things you can look up in a minute. You're the one who claimed a Xeon Precision doesn't come with ECC RAM, without even googling it.
My laptop is a 7660. I also have an i7 5560, and a Latitude 7410 with an i7. It's what work gives me for work - and yes, I use all 3 since we went full remote. For "pro" work. The M1 laptop is comparable to my 7410 - which I use to play online games and Chromecast videos. Not any real work. It's a kid's toy compared to my 7660.
Since you seem to be lost here on this tech site, instead of hanging out on Reddit with your peers: all Xeon CPUs support ECC RAM. If you want, you can go on eBay, buy ECC RAM, and put it in any Xeon system. Or, you know, just order it with ECC RAM from Dell.
>It is common for Dell Precisions to ship with Xeons and not ECC ram.
Correct, because most come with 16 or 32GB of RAM and are the low end of Dell's pro line. Once you get a lot of RAM, like 64 or 128GB, and you're crunching numbers and running VMs, your chances of a memory error go up dramatically. Which is why you need ECC RAM. Which has nothing to do with your post that I was replying to, claiming Precisions don't have ECC RAM. Now find me an M1 laptop with 128GB of ECC RAM. Because you're right - "Enough" strawman astroturfing from you.
The M1 is a Latitude competitor - not a Precision competitor. Apple's "pro" line is considered mid-tier by other vendors. Their tests showing it beats Xeons are comparing against Xeons released 2 years ago, which for some reason Apple used in their own laptops. Because Apple has always used outdated CPUs compared to everyone else.
> since you seem to be lost here on this tech site, instead of hanging out on reddit with your peers
This kind of behavior makes you seem much more lost here on HN than the guy you're replying to. And looking at your downvoted and flagged posts all over the place, HN seems to agree.
I don’t need to google it when I own such a system.
My precision did not ship with ECC ram.
ECC also needs to be supported by the motherboard; all AMD Ryzen chips have ECC enabled, but due to limited motherboard support, many are not able to use ECC effectively.
If you have the time, could you share the output of `sudo dmidecode --type 17`?
You're looking at Geekbench scores that do a bunch of GPU-offloaded stuff, and they used the low-power integrated graphics, not the NVIDIA RTX, in their tests - you have to explicitly select the discrete GPU for a process to use, as it defaults to the low-power one. So, something that doesn't even look like a mistake of taking the defaults - something that looks like deliberately lying to game the numbers.
Yup. I, too, call BS on that. I do own a Precision laptop with an 8-core Xeon and this thing is heavy, noisy, and can't work on battery for more than 2 hours under a normal workload.
Just for fun, I looked up the most expensive Dell Precision Xeon laptop, which seems to be the Precision 7760 with a Xeon W-11955M.
With a GeekBench score of 1647 ST/9650 MT, this $5377.88 machine is just a bit faster than the passively cooled $999 MacBook Air with 1744 ST/7600 MT. The MacBook Pro 14" 10 core is better than the Dell Xeon in about every way, performance, price, performance per watt, etc.
This is the Xeon that's in the laptop. The Geekbench score is based on running tests on the low-power integrated graphics instead of the discrete NVIDIA RTX GPU. The numbers you idiots keep quoting are completely bogus. Anyway, losers, enjoy your Apples, I've got real work to do.
I run about $50k worth of software licenses on the laptop and generate millions in revenue per quarter with it. That's why it's a pro laptop, and I'm sure my company paid about double your number after you add in Dell's ProSupport Plus. Pro laptop for pro work. You're a kid who wants toys, but wants to say you're using professional equipment. I've got something that's like the pro Mac laptop; work gave that to me too, for secondary tasks. It's called a Dell Latitude. It runs the latest i7 and no ECC memory. Great for Chromecasting porn and playing games in the browser, 13 hours of battery unlike the Precision's 9, and much lighter. They just don't call it a pro.
The main reason it does, though, is that Apple bought up all the 5nm capacity. AMD is still running at 7nm. So it's impressive because they could afford to do that, I guess.
I'll never buy an Apple computer, but I can't help but be impressed with what they've achieved here.