
Probably because the announced hardware is clearly entry level. The only model line that gets replaced is the MacBook Air, which has been, frankly, cheap-ish and underpowered for a long time.

So you have a platform that is (currently) embodied by entry level systems that appear to be noticeably faster than their predecessors. Apple has said publicly that they plan to finish this transition in 2 years. So more models are coming - and they'll have to be more performant again.

It seems pretty clear that the play here runs: "Look, here's our entry level; it's better than anyone else's entry level and could be comparable to midlevel from anyone else. But after taking crap for being underpowered while waiting for Intel to deliver, we can now say that this is the new bar for performance at the entry level in these price brackets."


It would be interesting to see the comparison to a Ryzen 7 PRO 4750U. You can find that in a ThinkPad P14s for $60 less than the cheapest MacBook Air (same amount of RAM and SSD size), so that seems like a fair comparison.



Assuming that Geekbench is reflective of actual perf (I'm not yet convinced), there is also the GPU, and the fact that AMD is sitting on a 20% IPC uplift and is still on 7nm.

So if they release a newer U part in the next few months, it will likely best this device even on 7nm. An AMD part with eDRAM probably wouldn't hurt either.

It seems to me that Apple hasn't proven anything yet, only that they are in the game. Let's revisit this conversation in a few years to see if they made the right decision from a technical rather than business perspective.

The business perspective seems clear: they have likely saved considerably on the processor vs. paying a ransom to Intel.

edit: for the downvoters, google Cezanne, because it's likely due in very early 2021 and some of the parts are Zen 3. So Apple has maybe 3 months before another set of 10-15W AMD parts drop.


That'll mean an 8c/16t part will catch up to a 4+4 core design.

Apple will have an 8+4 core out soon, and likely much larger after that. Since they're so much more power efficient, they can utilize cores better at any TDP.


Sad to see downvotes on this: it's like there's a set of people hellbent on echoing marketing claims, in ignorance of (what I formerly perceived as basic) chip physics - the first one to a new process gets to declare a 20-30% bump, and in the age of TSMC and contract fabs, that's no longer an _actual_ differentiation the way it was in the 90s.


I'm as sceptical of Apple's marketing claims as anyone, but if you're comparing actual performance of laptops that you will be able to buy next week against hypothetical performance of a CPU that may or may not be available next year (or the year after), then the first has a lot more credibility.

PS: last I checked, AMD was not moving Zen to 5nm until 2022 - so maybe a year-plus wait is a differentiation.


Regardless, this competition is great for us consumers! I’m excited to see ARM finally hit its stride and take on the x64 monopoly in general purpose computing.


Apple hardware is far removed from general purpose and is about as proprietary as computing comes.


General purpose != free. Totally separate categories.


Is a higher score worse? If not, it's a sweep in favor of Ryzen.

https://i.imgur.com/cwpebHk.png


You're making completely the wrong comparison. On the left you have Geekbench 5 scores for the A12Z in the Apple DTK, and on the right you have Geekbench 4 scores for the Ryzen.

The M1 has leaked scores of ~1700 single core and ~7500 multicore on Geekbench 5, versus 1200 and 6000 for the Ryzen 4750U.


How can you tell it's only 6k for the Ryzen 4750U on the GB5 tests? There are so many pages and pages of tests, I can't sift through all of that to confirm.



swznd: this is from the page you linked (https://i.imgur.com/JgN8o3m.png). Am I reading this wrong?


Not sure why I'm getting downvoted; I clicked around and found a Ryzen that benchmarked at 7.5k... I didn't go through even half the benchmarks, though.



It’s just a shame that the screens in the Lenovo AMD devices don't hold a candle to the MacBooks'.


In what way? I've got a Lenovo IdeaPad with a Ryzen 7 4800U that outperforms my MacBook Pro 2019 16" by a long shot.


What screen performance are you referencing? Refresh rate? Pixel density? Color space? Color accuracy? Backlight bleed? Dead/stuck pixels? Nits? Size?


I misread OP, somehow glossed over the "screen" qualifier - thanks for getting me to reread


> It seems pretty clear that the play here runs "Look here's our entry level

Not quite. The play here is, "Look, here is our 98% of sales laptop". That it's entry level is only an incidental factor. 98% of sales volume is at this price point, and so they get the maximum bang for their buck, the maximum impact, by attacking that one first. Not just because it's the slowest or entriest.

Had they started at the fastest possible one, sure it would have grabbed some great headlines. But wouldn't have had the same sales impact. (And it's icing that the slowest part is probably easiest to engineer.)


> Probably because the announced hardware is clearly entry level.

Yes, but why compare it to an entry card that was released 4 years ago instead of an entry card that's been released in the past 12 months? When the 1050 Ti was released, Donald Trump was trailing Hillary Clinton in the polls. Meanwhile, the 1650 (released April 2020, retails for ~$150) is significantly faster than the 1050 Ti (released October 2016, retailed for $140, but can't be purchased new anymore).


The 1050 is still a desktop class card. The M1 is in tiny notebooks and the Mac Mini, none of which even have the space or thermals to house such a card.


The main point was that it's a 3-generation-old desktop card, which is obviously not as efficient as the modern mobile devices.

Let's see what a 3000-series NVIDIA mobile design does on a more recent process before declaring victory.


NVIDIA doesn't make small GPUs very often. The 1050 uses a die that's 135mm^2, and the smallest GeForce die they've introduced since then seems to be 200mm^2. That 135mm^2 may be old, but it's still current within NVIDIA's product lineup, having been rebranded/re-released earlier this year.


The 1050 isn't just a desktop class card. Plenty of laptops have a 1050 inside.


Apple doesn't make anything entry-level. Entry-level is a $150 Chromebook, not "the cheapest that Apple sells".


The Air IS entry level; it's slow, low resolution, has meager I/O, etc. It just happens to be at a price point that is not entry level.


Depends on how you define “entry level”. The Porsche Cayman is an entry level Porsche, but starts at $60,000.

I don’t know anyone who would call that an entry level car, but any Porsche-phile would.

The new Air is fast and reasonably high resolution with a 2560x1600 screen. But it is Apple’s entry level laptop.


I concede the new Air is likely faster than the ancient i3 they used to put in them.


Here are all the differences between the M1 Air and M1 Pro, from [1]:

    Brighter screen (500 nits vs. 400)
    Bigger battery (58 watt-hours vs. 50)
    Fan (increasing thermal headroom)
    Better speakers and microphones
    Touch Bar
    0.2 pounds of weight (3.0 pounds vs. 2.8 — not much)
The SoC is the same (although the entry-level Air gets a 7-core GPU; that's probably chip binning). The screen is the same (Retina, P3 color gamut), both have 2 USB-C ports, and both have a fingerprint reader.

[1]: https://daringfireball.net/2020/11/one_more_thing_the_m1_mac...


> low resolution

2560 x 1600

Damn, you've got some high standards.


Competitors have 4K displays on their thin-and-lights. They also don't use macOS, which has problems running at resolutions other than native or pixel-doubled. The suggested scalings are 1680 by 1050, 1440 by 900, and 1024 by 640. None of those are even fractions of the native resolution, so the interface looks blurry and shimmers. Also, all the suggested scalings are very small, so there isn't much screen real estate.
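To make the "not even fractions" point concrete: as I understand it, macOS renders a scaled mode at 2x the "looks like" size and then resamples that to the panel's 2560x1600, so the ratios work out like this (a quick sketch, numbers rounded):

  native = 2560
  for looks_like in (1680, 1440, 1024):
      backing = looks_like * 2              # scaled modes render at 2x
      print(looks_like, backing, round(native / backing, 3))
  # 1680 3360 0.762
  # 1440 2880 0.889
  # 1024 2048 1.25

None of those resampling ratios is an integer, which is exactly where the blur and shimmer come from.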


No it's not. It was designed to be extremely light with many compromises to make that happen. Yeah it got outdated, but that doesn't mean it was entry level.


My Xiaomi laptop is entry level and cost 300 euros including shipping from China and taxes.

It still does 8 hours on battery after 3 years

Being entry level, performance doesn't matter much, but it's still a good backup.

Bonus point: Linux works great on it

For $1,300 (30% more in euros) I can buy a pro machine, one that will at least give me the option of using more than 16GB of RAM.

The new Apple Silicon looks good and I love that they are finally shipping some decent GPU, but price wise they're still not that cheap

------

Are the downvotes because I said Xiaomi, or for something else?

LG sells a 17-inch, 2.9 lb laptop (same weight as the Air) with 16GB of RAM (up to 40) and a discrete GPU (Nvidia GTX 1650) for 1,699.


And, at that time, a mandatory stylus meant (likely) a resistive not capacitive touch screen.


Do you feel the same way regarding the World Trade Center discussion from today? https://news.ycombinator.com/item?id=24502706


I feel that you should be allowed to put a representation of a very public building in your game.


Of course he would close borders - he thinks trade wars are easy to win and it’s an easy sell to his racist base.

Of course he’d sign the stimulus bill - it made looting the country even easier.

Of course he’d have a task force - he’d liquidated those who could bring him to task (and incidentally do the rest of the country some good) and a task force is an opportunity to sell appointments on hand and then dispose of them with little ceremony later. Oh yeah right that task force that isn’t exactly setting policy these days.

When you have no shame, no decency and make every decision as a means to win the zero sum game immediately in front of you, well, you don’t have to be smart, you don’t need to have a plan and the most opportunistic scum will leap to your aid as a means to facilitate their own graft.

This political mistake is little different than all of his other shitty, petty, and unpardonable catastrophes. Treason big and small.



That's Geekbench. The differences between Geekbench for ARM devices and Geekbench for x86 make the comparison laughable. Why not compare them in a real-life workload like a Blender render or a kernel compile?


People have (Jonathan Morrison, for example); 4K exports from iMovie and other video/image editors have been proven to be vastly faster on the iPad Pro than on the fastest MacBook Pro.

Intel CPUs are not customized by Apple for their own APIs; they're for general purpose use. Yes, they have ISA extensions and features that Apple could use, like Quick Sync, but it's not enough for Apple.

Apple customizes its A series for the same APIs it uses, such as Metal, Core Foundation, JavaScriptCore (they have hardware-based JS acceleration support), etc.

It's why they added T2 chips to their Macs: to help accelerate a lot of tasks like disk encryption, provide more locked-down security with Touch ID, and so on.


All of these tasks are "accelerated" by skimping on quality or using silicon dedicated to a specific program. It doesn't matter in real life unless literally the only thing you do is use a browser and nothing else, and even then only JS, not wasm. In which case, I don't know why you would even have a CPU with more than two cores, and an i3 would be more than enough anyways. JS acceleration really only matters if you want to maximize battery life and the only thing you're doing with your computer is contained in a traditional webview. That is to say, it matters little for laptops, as they already have 7-8 hour battery life, and literally not at all for desktops.

So, for example, my computer can encode 4 4K videos in real time simultaneously. Why don't I use this feature? Because the encode quality is subpar. So unless you give me a benchmark of the iPad Pro on ffmpeg or any other open source, non-hardware-accelerated video encoding software, the comparison is completely moot. And the benchmarks that have been done with ffmpeg on x265 or x264 show that the iPad Pro is multiple times slower than an Intel laptop. Now obviously x265 is optimized for x86, but not to the point of being multiple times slower. Unless it is, in which case your benchmarks don't apply either.

Herein lies the issue: for many, many workloads, hardware acceleration, while faster, offers results that are not comparable to the CPU in terms of quality. So the only way to compare is by disabling hardware acceleration, and x86 processors tend to win.
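For what it's worth, the kind of apples-to-apples run I mean is easy to script. A rough sketch (hypothetical file names; libx265 is ffmpeg's software HEVC encoder, hevc_videotoolbox is the macOS hardware path):

  import subprocess, time

  def encode_seconds(codec):
      # Time a 4K export with the given ffmpeg video encoder.
      t0 = time.perf_counter()
      subprocess.run(["ffmpeg", "-y", "-i", "input_4k.mov",
                      "-c:v", codec, "out_" + codec + ".mp4"],
                     check=True, capture_output=True)
      return time.perf_counter() - t0

  print("software (libx265):", encode_seconds("libx265"))
  print("hardware (hevc_videotoolbox):", encode_seconds("hevc_videotoolbox"))

Run the libx265 line on both machines and you have a CPU-vs-CPU number; the hardware line is only there to show how different the two paths are.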

Metal is a trash API that no one uses in the real world. It offers literally nothing better than Vulkan. And the fact that the chip is "customized for the API" is vacuous. All GPUs are optimized for OpenGL, DirectX or Vulkan. Same for a lot of the "Core" APIs. They will not succeed outside of the mobile market.

The T2 chip is simply a glorified security processor. There is absolutely nothing the T2 chip does that a traditional security processor in say a Zen chip can't do. x86 CPUs can already do AES at speeds so high you would need something like 4 RAID-0 NVMe SSDs to have a performance bottleneck, and even then the limitation isn't the CPU but RAM speed. There is no real world scenario where you would need to "accelerate" disk encryption or other kinds of cryptography beyond what an x86 CPU can do. Cryptography isn't some kind of magic you can't implement without some specialized chip, literally everything the T2 chip does can be done using a trusty old x86 processor and TPM. The only use of the T2 chip is for Apple to have more control over your hardware, and literally nothing else.
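To put a rough number on the "AES is basically free on x86" claim, here's a minimal sketch using the third-party cryptography package (a recent version, which goes through OpenSSL and uses AES-NI when the CPU has it):

  import os, time
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  key, nonce = os.urandom(32), os.urandom(16)
  data = os.urandom(64 * 1024 * 1024)        # 64 MiB of plaintext

  enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
  t0 = time.perf_counter()
  enc.update(data)
  elapsed = time.perf_counter() - t0
  print("%.1f GB/s" % (len(data) / elapsed / 1e9))

On anything with AES-NI you should see throughput measured in GB/s, well beyond what the SSD can feed it.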


> the 4k exports from iMovie or other video/image editor has been proven to be vastly faster on iPad Pro than the fastest Macbook Pro.

I'd only be impressed if both used the exact same high-quality software encoder. Most likely the iPad uses the fast but lower-quality dedicated hardware encoder of the A-series SoC, and the MacBook uses a high-quality but slow software-only one, which is what you typically use in any non-real-time encoding scenario due to way better bitrate-to-quality ratios.

> Javascript Core (they have hardware-based JS acceleration support)

Do you have a credible source for this? AFAIK JS VMs have gotten to the same place that Java VMs (for which some people also envisioned dedicated silicon a long time ago, but it was a dud) reached: so frickin' fast on the standard x86 ISA that putting any special instructions for them into the ISA isn't worth it, because it's more important to stay flexible enough to adapt to future extensions of ECMAScript.

> It's why they added T2 chips to their Macs to help accelerate a lot of tasks like disk encryption, more locked down security with TouchID

That has more to do with having a secure element under Apple's control in the T2 chip and nothing with performance. Any modern x86 CPU can do accelerated AES just as fast as any ARM with hardware crypto support.


That is true; I don't have any evidence to say that x86 isn't faster than or equal to Apple's ARM CPUs, or vice versa. Such comparisons are hard to come by since they're completely different architectures.

For JS: https://twitter.com/codinghorror/status/1049082262854094848

It looks like it's not exclusive to Apple's CPUs; it's the specific instruction in the ARMv8.3 ISA that makes JS faster.

Added here: https://bugs.webkit.org/show_bug.cgi?id=184023

> Any modern x86 CPU can do accelerated AES just as fast as any ARM with hardware crypto support.

Right, but back then Intel mobile chips weren't that fast. I had an MBP with FileVault that took a massive hit, and I had to turn it off to get disk performance back. I can't prove that the T2 is the reason encryption doesn't take any performance hit on T2 Macs; all I can say from my trial of the rMBP 16 is that there was zero performance hit with it on or off.


Thanks for the JS link, I didn't know about that. Though I would hardly call that "JavaScript acceleration in hardware". They added a slightly weird float-to-int conversion instruction that handles overflows a bit differently than the normal one, and for lack of better names (and probably because no one else is expected to need that quirky instruction) they put "Javascript" into its name.
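For the curious, the quirky conversion that instruction (FJCVTZS, if I remember the name right) implements is JavaScript's ToInt32: truncate toward zero, then wrap into the signed 32-bit range instead of saturating or faulting. Roughly, in Python terms:

  import math

  def js_to_int32(x):
      # JS ToInt32 semantics: NaN/Inf -> 0, truncate toward zero,
      # then wrap modulo 2**32 into the signed 32-bit range.
      if math.isnan(x) or math.isinf(x):
          return 0
      n = int(x) % 2**32
      return n - 2**32 if n >= 2**31 else n

  print(js_to_int32(2**31))    # -2147483648 (wraps instead of overflowing)
  print(js_to_int32(-1.9))     # -1 (truncates toward zero)

Ordinary float-to-int instructions don't do that wrap, so without it the JIT has to emit a slower fix-up sequence for every such conversion.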

The perf hit with FileVault became practically zero when Intel added AES encryption hardware to its chips, which was quite a while ago (and definitely long before the T2 was a thing). I don't remember exactly when that was, but I remember noticing a considerable difference, because I've used FDE since it became available in FileVault. It wasn't only on Macs, either; my Windows machines also showed the same improvements using (IIRC) TrueCrypt back in the day.

The dedicated AES hardware extensions in ARM and x86 cores are probably the same logic anyway, so it shouldn't matter too much who decrypts the data. Maybe it's a tiny bit faster with the T2, though, because then the CPU doesn't necessarily have to pipe all the data through its buses for decryption. But that is more or less a feature of having a dedicated separate chip for it, and is thus not tied to the question of whether that chip uses ARM or x86 or any other ISA.


>The dedicated AES hardware extensions in ARM and x86 cores are probably the same logic anyway, so it shouldn't matter too much who decrypts the data. Maybe it's a tiny bit faster with the T2, though, because then the CPU doesn't necessarily have to pipe all the data through its buses for decryption. But that is more or less a feature of having a dedicated separate chip for it, and is thus not tied to the question of whether that chip uses ARM or x86 or any other ISA.

I don't really see how that would help; CPU I/O has to handle the exact same amount of incoming data, encryption or not. Maybe a bit less RAM impact, though.


My thought was this: If the CPU decrypts, it touches every byte being read to RAM, and it touches the data again later when it does actual work on it. If it doesn't decrypt, but a chip next to the NAND does the job, the CPU can DMA-transfer the data directly from that chip to RAM. The first time the CPU touches the data is when it actually does some real work on it.


True, but my thought was that since AES decryption is mostly limited by RAM bandwidth anyways, the transfer from the SSD to the CPU, then from the CPU to the T2 chip, then from the T2 chip to the CPU won't be much faster than transferring from the SSD into the RAM, then decryption, then it being read back into the RAM.


AFAIK the T2 is also the SSD controller in Apple's architecture, meaning it speaks directly to the NAND. So it should not be necessary for data to first go to the CPU, then to the T2 for decryption - the T2 can transparently decrypt and encrypt while doing the job of offering block-device-level access to the flash chips.


I mean, that might depend on the type of encryption used by your computer, but there is a ton of documentation about full disk encryption in the Linux world, and for over a decade there has been almost no performance hit. My experience with a 2008 MBP running Debian was that disk encryption on or off had a very small performance impact.


I’d love to read your results rendering something in Blender on the latest A13 chip, if that’s the best way to make a comparison.


I'd love to make such a benchmark, but I won't because I don't have enough money to justify buying a Mac. If you send me one I'll port the rendering engine over and do the benchmark, though.


How is running the same code on two different machines and reporting the results objectively "laughable"?


Because of the way the code is compiled, or even hand-optimized, and which kinds of extensions are used for ARM vs. x86 (SSE, AVX and so on). Many of the workloads used in Geekbench are straight up hardware-accelerated, which is fine if the only thing requiring CPU power on your machine is JavaScript, but not so otherwise.

Other synthetic benchmarks of memory bandwidth and so on use all the acceleration features of the platform on ARM but don't support AVX2 or AVX-512, although some other workloads in the benchmark do. And of course you would have to choose exactly which instruction is used in which scenario on which processor (Intel vs. Zen 1 vs. Zen 2) in order to have the same kind of optimization as for ARM processors. Then comes the issue that vector operations are "hand-tuned", which is not realistic and, depending on the skill of the programmer and their affinity with a given uArch, can yield vastly different results. Which is why they should either use the fastest library for each processor, or leave all the optimization to the compiler.

The only way to do a proper comparison between two uArch is with an open source benchmark compiled specifically for the processor.


The US is the only country we share a land border with, and at the very least, at this time of year, 35 million people will starve without trains and trucks crossing the border.

So, yeah, that's the first logistical barrier I can think of inside of 30 seconds.


Right in the article:

"Trudeau said the new border controls will not apply to trade and commerce in order to keep Canada's supply chain open."


That strptime parsing doesn't fail under Python 2.7.17 or 3.8.1.


  Python 2.7.17 (default, Dec 31 2019, 23:59:25) 
  Type "help", "copyright", "credits" or "license" for more information.
  >>> from datetime import datetime
  >>> datetime.strptime('Sat 29 Feb 12:00:00 PM', '%a %d %b %I:%M:%S %p')
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: day is out of range for month


This is because 1900 is the default year in datetime. 1900 was not a leap year.
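For completeness, giving strptime an explicit leap year makes the same parse succeed:

  >>> from datetime import datetime
  >>> datetime.strptime('Sat 29 Feb 2020 12:00:00 PM', '%a %d %b %Y %I:%M:%S %p')
  datetime.datetime(2020, 2, 29, 12, 0)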


I know, I'm just providing a counterexample to phoobahr.


SF needs to build diagonal crossing bridges (or tunnels). The traffic jams come from all the vehicles in pedestrian spaces.

Fixed that for you.


Regardless of how you frame it, not forcing different classes of traffic to cross each other at the same grade is almost always (i.e. always, but I'm sure there's an exception or two out there) an improvement for both, but projects like that rarely happen in this day and age.


It's a broken social contract; more akin to re-zoning that apartment out of existence after you lived there for 20 years.

And for a profit motive, targeting the segment of the neighbourhood least able to complain, defend themselves or move.


I live in vim, write mostly Python, and don't need function keys. An iPad gives me a range of vim ports, shells, a Python environment, and one really good git client.

Also the Smart Keyboard is simply the best shape for typing on a seat-back tray table.

