Not to speak for anyone else, but one thing I gently disagree with:
>Given that Hackintoshers are a particular bunch who don’t take kindly to the Apple-tax[...]
I have zero issues with an Apple premium or paying a lot for hardware. I think a major generator of interest in hackintoshes has been that there are significant segments of computing that Apple has simply completely (or nearly completely) given up on, including essentially any non-AIO desktop system above the Mini. At one point they had quite competitive PowerMacs and then Mac Pros covering the range of $2k all the way up to $10k+, and while sure there was some premium, there was feature coverage, and they got regular yearly updates. They were "boring", but in the best way. There didn't need to be anything exciting about them.
The prices did steadily inch upward, but far more critically, sometime between 2010 and 2012 somebody at Apple decided the MP had to be exciting or something and created the Mac Cube 2, except this time they forced it by eliminating the MP entirely. And it was complete shit, and to zero surprise never got a single update (since they totally fucked the power/thermal envelope, there was nowhere to go), and users completely lost the ability to make up for that. And then that was it, for 6 years. Then they did a kind of, sort of, OK update, but at a bad point given that Intel was collapsing, and forcing in some of their consumer design in ways that really hurt the value.
The hackintosh, particularly virtualized ones in my opinion (running macOS under ESXi deals with a ton of the regular problem spots), has helped fill that hole as Frankenstein MP 2010s finally hit their limits. I'm sure Apple Silicon will be great for a range of systems, but it won't help in areas that Apple just organizationally doesn't care about/doesn't have the bandwidth for because that's not a technology problem. So I'm a bit pessimistic/wistful about that particular area, even though it'll be a long time before the axe completely falls on it. It'll be fantastic and it's exciting to see the return of more experimentation in silicon, but at the same time it was a nice dream for a decade or so to be able to freely take advantage of the range of hardware the PC market offered, which filled holes Apple couldn't.
Apple does not want to cater to the hackintosh/enthusiast market because they are the most price-conscious segment. Targeting that segment means putting out extremely performant, low-margin commodity machines. Doing so then cannibalizes the market for their ultra-high-end stuff.
Not only that, though. Enthusiasts are also extremely fickle and quick to jump ship to a cheaper hardware offering. If you look at all of Apple’s other markets, you’ll see loads of brand loyalty. Fickle enthusiasts don’t fit the mould.
When Apple first mandated kext signing (Mountain Lion?) they explicitly whitelisted certain community-built kexts used for Hackintoshes. IMO Apple and the Hackintosh community have mutually benefited until now. Many who grew accustomed to macOS through a Hackintosh have eventually invested in Apple products.
Considering Apple has only gone after those who profiteered by selling pre-built Hackintoshes, and not everyone who is profiting from the Hackintosh scene, I would say Apple did care about the Hackintosh community in some way.
I thought the higher performance/price of Hackintoshes, especially with Ryzen, might force Apple to act differently, but now with the M1, Apple needn't worry about Hackintosh performance/price anymore.
> Apple does not want to cater to the hackintosh/enthusiast market because they are the most price-conscious segment. Targeting that segment means putting out extremely performant, low-margin commodity machines. Doing so then cannibalizes the market for their ultra-high-end stuff.
Looking over the shoulder at a 64-core Threadripper with 256GB of ECC RAM, 3090FE, Titan RTX and Radeon VII, yeah right. Some of us do Hackintoshing because we want more dope specs than what Apple offers and customizability that comes with PC hardware.
Sure, but there's a legit gap in the datacenter— not having a sanely, legally rackable OS X machine is a pretty big problem for a lot of organizations. Not everyone wants to do their Jenkins builds or generate homebrew bottles on a Mac Mini under someone's desk.
Is this really an issue? They sell shelves that let you rack 2 Mac Minis in 1U space. You can also buy a rack mount Mac Pro if you want to spend really big bucks.
Ah, so it is, and the thermal story there is definitely much better than with the Mini, there being a clear intake/exhaust flow. OTOH, there's still likely a gap in terms of management features, and the starting price of $6.5k for a 4U system is definitely going to be a barrier for some. Good to know there's at least something, anyway.
Surely you'd want CI builds for your app? I suppose you can always go the sassy option and just offload this problem onto Travis or CircleCI, but then they're the ones stuck figuring out how to rack thousands of Mac Minis, dealing with thermals in a machine that isn't set up for hot/cold aisles, a computer that doesn't have a serial port or dedicated management interface, etc.
If you're a big enough org or the app is for internal use, this might not be an option anyway. At that point I imagine most people just give up on it and figure out how to run macOS on a generic VM. But at that point you have to convince your IT department that it's worth doing a thing that is definitely unsupported and in violation of the TOS.
Or maybe some of these are big enough that they are able to approach Apple and get a special license for N concurrent instances of macOS running on virtualized hardware? Who knows.
No company on the planet is big enough for Apple to make exceptions like that. All of them either use a cloud provider or a custom rack design just for Mac Minis.
Companies like Google or Microsoft aren't big enough? Google's Chrome and Microsoft Office alone, I would wager, are more than big or popular enough to get special treatment.
Adobe is smaller by contrast, but I'd speculate it has a much deeper relationship with Apple as well.
Well sure, for a single person team. But as soon as you're working with other people, surely you want an independent machine making builds and running tests— this is literally item 3 on the Joel Test.
If you ever get a chance to meet employees at CircleCI or some other CI provider at a conference after Covid is over, consider asking them about how they rack Mac Minis.
pjmlp's view appears to be that because their customers, who are not experts, don't know enough to ask for continuously tested software, they don't believe it is their professional responsibility to provide that either. This allows them to dismiss any complaints about macOS in datacenters as irrelevant.
I'm not a consultant, but I believe it would be an ethical failing on my part to hand someone else a piece of code without extensive, automated testing and CI.
Well, thank you for providing the first compelling argument as to why software practices need to be more formally regulated. Providing CI/CD should be the industry norm and expected default.
So true. I've done a lot of freelance work over the past 20 years. CI/CD has never come up. You're sometimes lucky if you can even set up a test system / site.
I don't think the answer is for Apple to force people into buying custom server hardware any more than it is to force them into making janky rack setups for Mac Minis.
The answer that most people would like to see would be a stripped down, non-GUI macOS that's installable at no cost in virtualization environments, or maybe with some evaluation scheme like Windows Server has, which effectively makes it free for throwaway environments like build agents.
> The answer that most people would like to see would be a stripped down, non-GUI macOS that's installable at no cost in virtualization environments
That's called "Darwin" and it's theoretically open source, but there doesn't seem to be a useful distribution of it. Whether that's due to lack of community interest or lack of Apple support is the question.
A useful distribution (for building anyway) would require all the headers and binaries from macOS, which wouldn’t be distributable, right? So you’d have to have enough of a free system to be able to get to the point where that stuff could be slurped out of a legit macOS installation. Sounds like an interesting challenge.
> offering no machine suitable for developers and power users
This perception strikes me as having warped in from a different decade. Nowadays, at least in my neck of the woods, developers almost universally use laptops, and Apple's still plenty competitive in the (high end) laptop department.
For the most part, the only developers I know who still use desktops are machine learning folks who don't like the cloud and instead keep a Linux tower full of GPUs in a closet somewhere. And then remote into it from a laptop. Half the time it's a MacBook, half the time it's an XPS 13. And they were never going to consider a Mac Pro for their training server, anyway, because CUDA.
I couldn't speak to power users, but my sense is that, while it meant something concrete in the '90s, nowadays it's a term that only comes out when people want to complain about the latest update to Apple's line of computers.
I work in games, where we write C++ in a multi-million-LOC code base. Every developer in my company has a minimum of 12 cores and 96GB of RAM. All of the offices are backed by build farms on top of this. There are entire industries that rely on very high-end hardware. (Of course we also rely on lots of Windows-only software too, but that's only an issue once the hardware is solved.)
Fair, and we could spend ages listing all the different kinds of people who have really specific job descriptions that require them to have traditional, stationary workstations. And then we could follow that up with lists of all the reasons why they need to be running Windows or Linux on said workstations, and couldn't choose comparable Apple hardware even if it were available.
But I don't think that we need to beat a dead horse like that. The more interesting one would be to figure out some interesting and non-trivially-sized cross-section of people who both need a workstation-class computer, and have the option of even considering using OS X for the purpose.
The main reason to buy Apple x86 machines, for any OS developer, was that Apple has to keep their number of hardware variants to a minimum, and you could run compatibility (and truly same-hardware performance) tests against any OS, as OS X was the only one locked to its hardware. The same might be true for ARM if there are adequate GPL drivers that don't exclude Linux/Android, etc.
I'm not sure that's true. At least in my experience, Bootcamp seemed almost designed to cripple Windows by contrast to OS X.
The last time I used it (the last MBP with Ethernet built in, I want to say 2012 or 2013?), some of the features "missing" in Bootcamp were:
- No EFI booting. Instead we emulate a (very buggy!) BIOS
- No GPU switching. Only the hot and power hungry AMD GPU is exposed and enabled
- Minimal power and cooling management. Building Gentoo in a VM got the system up to a recorded 117 degrees Celsius in Speccy!
- Hard disk in IDE mode only, not SATA! Unless you booted up OS X and ran some dd commands on the partition table to "trick" it into running as a SATA mode disk
The absolute, crushing cynic in me has always felt that this was a series of intentional steps. Both a "minimum viable engineering effort" and a subtle way to simply make Windows seem "worse" by showing it performing worse on a (forgive the pun) "Apples to Apples" configuration. After all, Macs are "just Intel PCs inside!" so if Windows runs worse, clearly that's a fault of bad software rather than subtly crippled hardware.
I think we used rEFIt... I remember it would be a bit finicky, but I never really had to boot Windows since my product had no equivalent, and these days I don't boot OS X, though firmware updates would be nice.
What if Apple decided that they don't gain that much from AAA games, so they don't care about offering hardware that those companies might run on?
I have the feeling that Apple just cares about apps for iOS (money-wise). What's the minimum they need to do so people write iOS apps?
If this hardware, incidentally, is good for your use case, all is good. If not, they might just shrug it off and decide you're too niche (i.e. not adding too much value to their ecosystem) and abandon you.
Yes, I think they view the basic mid-range tower box as a nearly-extinct form. Like corded telephones & CRTs.
They chose to make the Mac Pro into some kind of halo product, I guess. But really the slice of people who need more power than an iMac, and less than this "Linux tower full of GPUs" or a render farm, they judge to be very small indeed. This wasn't true in the 90s, when laptops (and super-slim desktops) came with much bigger compromises.
I don't think they think it's a small market; I think they think it's a commoditized market with very thin margins. That form-factor has a literal thousand+ integrators building for it, and also in many segments (e.g. gaming) people build their own to save even more money. Those aren't the sort of people who are easily swayed to pay an extra $200+ of pure margin in exchange for "integration" and Genius Bar "serviceability" (the latter of which they could mostly do themselves given the form-factor.)
I guess people into hot-rodding, especially for games, have never been Apple's target. (Even if they are numerous, and I actually have no idea how large this segment is.) Besides price-sensitivity, wouldn't they be bored if there were only 3 choices? Maybe we will find out when the M2-xs or whatever arrives.
Me too. I last had a desktop, at work, about that long ago, and have not bought a desktop computer for myself in a lot longer. Laptops got very good and I can still plug it into a monitor and external controllers when I need to. I don’t need a server at home because of the cloud and broadband.
There's a massive US centric bubble when it comes to Apple.
iPhones and MacBooks are not in the majority, let alone universal, with software developers as a whole, just in pockets.
The five dollar latte crowd is willing to pay and consume. Walk into any café and good luck finding non-Apple machines. (Occasionally there will be a Surface or two, especially if you live in Seattle).
They are the most visible, but that does not mean they are the most important part of the ecosystem. Plus, in 5 years they will be reconsidering their workplace setup due to back pain and/or carpal tunnel. And Apple asks an arm and a leg for all the ergonomic accessories, like an external monitor and dock.
> Plus, in 5 years they will be reconsidering their workplace setup due to back pain and/or carpal tunnel.
This is where I'm at.
I don't know if other people are built from sturdier stuff than me or what, but typing on a laptop to any significant extent leaves me with tendonitis for several days. And staring at a laptop screen too long leaves me with neck pain.
Laptops are a nightmare in terms of ergonomics.
It's been a bit of a blessing for me because I only have a laptop at home, and it basically means I can't take work home with me.
But I'm pretty seriously considering upgrading to a traditional desktop sometime in the next year.
Laptops are my ergonomic savior. I make sure it's on my lap, and that my elbows are on softly padded armrests and hang down gently, and this has given me decades of work after fierce carpal tunnel inflammation.
I also use a Wacom tablet comfortably placed on a table to my right.
> The same people who almost universally use a Mac?
This has become steadily less true since about 2012, in my experience. I don't know any full time developers still using an Apple laptop. The keyboard situation caused a lot of attrition. I finally stopped support for all Apple hardware at my company months ago, simply to get it out of my headspace. Will Fusion360 again be completely broken by an Apple OS update? Am I going to have to invest time making our Qt and PyQt applications work, yet again, after an Apple update? Are Apple filesystem snapshots yet again going to prove totally defective? The answer is "no", because we really need to focus on filling customer orders, so we're done with Apple. ZFS snapshots function correctly. HP laptop keyboards work OK. Arch Linux and Windows 10 (with shutup10 and mass updates a few times per year) get the job done without getting in my face every god damned day.
> I don’t know any full time developers still using an Apple laptop.
Fascinating. I can name a few startups in my town that use Apple. One just IPO'd (Root), another is about to (Upstart). There are others as well.
The big companies it's hit or miss. Depends on if they are working on big enterprise applications or mobile/web. Mobile and web teams are all on MacBook Pros, and the big app dev teams aren't.
When I was last in Mountain View they were on Mac as well but I know that depends on personal preference.
I use a mac because other developers do in my office. But I'd be just as productive on a linux or windows machine.
For a while OS X had the edge because it had a nice interface while still offering a lot of Unix. Now Windows and Linux have caught up in the areas where they were lacking before. Meanwhile Apple has been caring less and less about people using the CLI.
Apple will certainly offer an ARM-based MacPro, but I'm assuming it'll be a very different beast - current one maxes out at 1.5TB of RAM and it doesn't seem likely anyone will integrate that much memory on a chip anytime soon ;-)
Memory bandwidth is one key factor in the M1's performance. When Apple builds an ARM-based MacPro, we can expect something with at the very least 5 DDR5 channels per socket. It's clear from this that the M1 is a laptop/AIO/compact-desktop chip.
I would expect more, so that cores don't get memory starved. The M1 has 4 fast cores and 4 slow ones. If we imagine an M2 with 8 fast cores, I would expect it to need 16 channels to have the same performance. That's a lot.
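Back-of-the-envelope on the channel count, with assumed figures (LPDDR4X-4266 on a 128-bit bus for the M1, and DDR5-4800 per 64-bit channel; neither number comes from this thread, so treat it as a sketch):

    # Rough peak-bandwidth arithmetic; all figures are assumptions, not official specs.
    def peak_gb_s(transfers_mt_s, bus_bits):
        # MT/s * bytes-per-transfer -> GB/s
        return transfers_mt_s * (bus_bits / 8) / 1000

    m1_bw = peak_gb_s(4266, 128)    # ~68 GB/s, assumed LPDDR4X-4266 on a 128-bit bus
    ddr5_ch = peak_gb_s(4800, 64)   # ~38 GB/s per assumed 64-bit DDR5-4800 channel

    # Keep the M1's bandwidth per fast core for a hypothetical 8-fast-core part:
    target = m1_bw / 4 * 8          # ~136 GB/s
    print(target / ddr5_ch)         # ~3.6 -> 4+ wide DDR5 channels, or a 256-bit
                                    #         LPDDR bus (16 narrow 16-bit channels)

So whether you land on "5 channels" or "16 channels" mostly depends on whether you count wide DDR5 channels or the narrow LPDDR-style ones.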
Dunno, the M1 CPU package is tiny, thin, power efficient, etc. It's got 4 memory chips inside the package. I don't see any particular reason why a slightly larger package couldn't have 4 memory chips on one side and 4 chips on the other to double the memory bandwidth and memory size.
However the M1 is already pretty large (16B transistors), upgrading to 8 fast cores is going to significantly increase that. Maybe they will just go to a dual CPU configuration which would double the cores, memory bandwidth, and total ram.
I think the developers and power users that still use desktop machines/towers are either a very CPU-power-hungry niche exception, or the more backward ones, and thus least likely to influence/be imitated by anyone...
I beg to differ (as a developer on a desktop). The reason for developing on a desktop is that my productivity is much higher with 3 screens, one of which is a 40 inch, a full 101-key keyboard and a mouse.
> The reason for developing on a desktop is that my productivity is much higher with 3 screens
Those requirements don’t dictate a desktop[0]. Also, the physical size of the monitor is irrelevant, it’s the resolution that matters. Your video card doesn’t care if you have a 40” 4K monitor or an 80” 4K monitor, to it, it’s the same load.
The reason I still have a cheese grater Mac Pro desktop at all is because I have 128GB of RAM in it and have tasks that need that much memory.
[0] I’ve connected eight external monitors to my 16” MBP (with laptop screen still enabled, so 9 screens total). I don’t use the setup actively, did it as a test, but it very much works. The setup was as follows:
TB#1 - 27” LG 5K @ 5120x2880
TB#2 - TB3<->TB2 adapter, then two 27” Apple Thunderbolt Displays @ 2560x1440
TB#3 - eGPU with AMD RX580, then two 34” ultrawides connected over HDMI @ 3440x1440, two 27” DisplayPort monitors @ 2560x1440
TB#4 - TB3<->TB2 adapter, then 27” Apple Thunderbolt Display @ 2560x1440
So that’s almost 50 million pixels displayed on around 4,000 square inches of screens driven by a single MBP laptop.
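If anyone wants to check the "almost 50 million" figure, here's the quick sum (the external panels are the ones listed above; the 16" MBP's internal 3072x1920 panel is my assumption):

    # Sums the pixels in the setup above; the MBP's internal panel resolution is assumed.
    panels = [
        (5120, 2880),                 # LG 5K
        (2560, 1440), (2560, 1440),   # two Apple Thunderbolt Displays
        (3440, 1440), (3440, 1440),   # two 34" ultrawides
        (2560, 1440), (2560, 1440),   # two 27" DisplayPort monitors
        (2560, 1440),                 # one more Apple Thunderbolt Display
        (3072, 1920),                 # MBP 16" internal panel (assumed resolution)
    ]
    print(sum(w * h for w, h in panels))  # ~49 million pixels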
You kid, but it legit was an issue. I’ve used at least 3 monitors (if not 1-2 more) for over a decade now, so I’ve experience there, but going up to 9 even for a short while, it was definitely an issue.
Yeah I’m with you. Laptops are great, but they sacrifice a lot for the form factor. Remove the constraint of needing an integrated screen, keyboard, touch pad and battery, and you can do much more. Sure you can dock it, but docked accessories are always second class citizens relative to the integrated stuff.
Laptop user, I also have 3 screens. I do use the MBP's keyboard, but never felt like that cost me productivity. I use a normal mouse as well. The only reason I can think of to need a desktop is the extra CPU/GPU capacity you can get.
Or internal peripherals. If I want 20TB of storage, and I don't want external chassis all over the place, I need a desktop with at least a couple of 3.5" bays.
Nope, not what I'm saying at all (in part because your comment is hyperbolic and untrue). Some folks need more than 64GB of RAM, which is the highest amount most laptops have.
He is not that far off. Apple asks $200 for 8 GB, so he is in the same order of magnitude. For comparison, this week I bought 16 GB DDR4 ECC (unregistered) sticks for 67 EUR per piece (before VAT).
Great, so you bought a different type of RAM in a completely different form factor and paid a different price. This is on "processor package" RAM and will thus have an entirely different price basis than a removable stick would, not even factoring in the Apple Tax.
Furthermore, how is that relevant to the point _I_ was making about needing more than 64GB of RAM? If you both want to tangent, fine do so, but don't try to put words in my mouth while doing it.
> Great, so you bought a different type of RAM in a completely different form factor and paid a different price.
It's called "using an example" or an "illustrative example". For comparison, I've used a type of RAM that is traditionally much more expensive than what you find in laptops.
> This is on “processor package” RAM and will thus have an entirely different price basis than a removable stick would,
No.
1) The same price is being asked for RAM in non-M1 models.
2) You could put any price tag you want on it, because the item is single-sourced; the vendor can pull a quote out of thin air and you cannot find an exact equivalent on the market. Therefore, a functionally and parametrically similar item is used for comparison.
> how is that relevant to the point _I_ was making about needing more than 64GB of RAM?
You get a different product that supports more RAM.
> If you both want to tangent, fine do so, but don’t try to put words in my mouth while doing it.
Could you point out where I did that? I was pointing out that your note about the GP being hyperbolic is untrue - he was in the ballpark.
> I was pointing out that your note about the GP being hyperbolic is untrue - he was in the ballpark.
Essentially as "in the ballpark" as $80 is - both are off by 2.5x. Claiming they are "same order of magnitude, so it's not hyperbolic" is laughable. $100k and $250k are both the same order of magnitude, but are radically different prices, no?
at work, when at the office, they are always pushing screens on us. keep thinking it's some pork deal with dell. my whole team either plugs in a laptop to one screen, or just works straight on the laptop. maybe we're not cool.
A quick Google will turn up several serious usability studies that show more screen real estate == higher productivity. It depends a lot on the type of work, of course, but for development a larger screen would mean less scrolling and tab switching => less context switching => so your brain gets more done.
Or... they're old and can't see the tiny laptop screen, or get back pain when using a laptop all hunched over. To be honest, I don't know how anyone does serious work on them.
Apple itself is selling its new chips as making faster devices. If only a niche wanted that speed, Apple probably wouldn't be pushing it as part of the pitch so hard.
Considering how much the gaming side of the PC market will drop on just a single card, I am more of the opinion that Apple chose to avoid this market because it did not want the association with gaming, as if that were beneath their machines.
At times there seemed to be a real disdain for the people who loved upgrading their machines as well as those who gamed on them. Apple's products were not meant to be improved by anyone other than Apple, and you don't sully them with games. The Mac Pro seems to be the ultimate expression of "You are not worthy", from the base system, which was priced beyond reason, to the monitor and stand. It was the declaration of "fine, if you want to play then it will cost you", because they didn't really care about enthusiasts with the "wrong" use - games and such.
Being "fickle" is kind of hard to apply to a market segment, because it's not a synchronized monolith. There clearly is demand for the kind of decent machines hackintoshes make possible. It's just that the mobile market is much higher ROI. So, any R&D that other Apple products get is a coincidental opportunity. This entire M1 change is a happy accident.
So it's not that hackintosh builders are anything at all, it's that they're outnumbered by iPhone buyers a million to one.
They certainly wouldn't want to scare away developers; others have suffered greatly from neglecting them, and developers seem to be getting rarer. You always want the enthusiasts, and of course they buy new hardware to look for ways they can make it work for them. Many devs also have a high income, so price isn't as important anymore.
A big part of saving Apple was Jobs killing the clone program. That lesson probably still resonates in the halls of Apple even if allowing hackintoshes is a different thing without the same risks.
Because as the sibling comments point out, the price of a Mac Pro isn't just an "Apple Tax^WPremium" over a desktop machine but is an order of magnitude more expensive (assuming you don't care about workstation-class components, i.e. Xeon Ws, Radeon Pro GPUs and ECC RAM).
There's an enormous price gap between a Mac Mini and the Mac Pro (especially when the Mini now has higher single-threaded performance than the base Pro...) which Apple has widened in the last decade or two.
I appreciate that the 2013 mac pro wasn't for you, but it was perfect for me: small but powerful. Firstly: RAM. I was able to install 64 GiB on it, which enabled me to run Cloud Foundry on ESXi on Virtual Workstation on macOS. Non-Xeon chipsets maxed-out at (IIRC) 16 GiB and then later 32 GiB—not enough.
Secondly, size & esthetics: it fits on my very small console table that I use as a desk. I have a modest apartment in San Francisco, and my living room is my office, and although I had a mini-tower in my living room, I didn't like the looks.
Third, expandability: I was able to upgrade the RAM to 64 GiB, the SSD to 1 TB. I was able to upgrade the monitor to 4k. It has 6 Thunderbolt connections.
My biggest surprise was how long it has lasted: I typically rollover my laptops every year or so, but this desktop? It's been able to do everything I've needed it to do for the last 7 years, so I continue to use it.
While the form factor was cool, how pissed would you have been if it broke and you were buying the exact same machine, for the same price (give or take), in 2018?
Part of the "mess", I'd argue, was that Apple backed themselves into a thermal corner where they couldn't update the machine but also wouldn't cut its price so it got steadily worse value as time wore on.
Oh, definitely. Look at the Apple TVs for another example. In both cases, if Apple would drop the price, even just yearly, they would sell so many more units.
But my workhorse has had so many upgrades. Lots of storage in and out. I have a bunch of drive sleds. I updated the graphics card more than once. Presently it has 2x6 core, 5 ssds (one in a pcie slot), a 10tb hard disk, a pcie usb3 card, and a gtx980.
I just got a new Mac Pro. The only real upgrade I did from Apple was to the 12 core Xeon. Other than that I kept the base 32GB memory, though I did get a 1TB SSD from the 256GB base offering.
... then I went to NewEgg and got 192GB of memory for $800ish, rather than Apple's exorbitant $3,000. And seriously, why? Same manufacturer, same specs. And convenience factor? It took a good 45 seconds to install the memory, and I'd wager anyone could do it (it's on the 'underside' of the motherboard, all by itself, and has a little chart on the memory cover to tell you exactly what slots to use based on how many modules you have).
And then I bought a 4x M.2 PCIe card and populated it with 2TB SSDs (that exceed the Apple, with sustained R/W of 4500MB/s according to Blackmagic) for just around $1,100, versus the $2,000 Apple wanted. Only downside is that it cannot be the boot drive (or maybe it can, but it can't be the _only_ drive).
> The latest mac pro... I think it wasn't just expensive, it was sort of sucker expensive
It's the kind of Mac that makes you get an iMac to put on your desk and a beefy Linux server-grade box you hide somewhere, but that does all your heavy lifting.
Some tools and OSs make it easier than others. I used to do a lot of work from my IBM 43P AIX workstation (great graphics card, huge monitor, model M keyboard) that actually ran on a more mundane Xeon downstairs. X made it even practical to browse the web on the 43P. It attracted some really confused looks in the office.
Exactly this. The closest to this would be an i7 iMac, but not everyone wants an AIO PC. It's kind of a bummer. We finally have an iPhone for everyone, even a high-end small-form-factor option. Whoever is responsible for that decision, please take a look at the Mac lineup next.
There's even precedent for it: the iMac/iMac Pro. The Pro model has workstation-class hardware in it while the non-Pro does not.
Ideally the enhanced cooling from the Pro models would trickle down to the non-Pro. By all reports the (i)Mac Pro is virtually silent but in the low-power ARM world a desktop machine that size could almost be passively cooled, even under load.
I bet Apple would love to release an all-in-one iMac Pro powered by an iteration on the M1. They could put a Dolby Vision 8k display in it and drag race against Threadripper machines for UHD video workloads.
I mean, the iMac Pro came out in 2017 and there isn't much sign of anything trickling down to the standard iMac. Rumour is that the ARM Mac Pro will be significantly smaller than the Intel one - it'll be interesting to see how (or if) they support discrete GPUs.
I don't totally agree with GP, but I think their broader point was that for a long time (all of the 2010s?) there was just no decent Mac Pro.
Outside of the Mac mini, the most powerful desktop machines were actually iMacs, with all the compromises that come with the form factor, and the trashcan Mac Pro, which was thermally constrained.
In that period, no amount of money would have helped to get peak storage + network + graphics performance, for instance.
We are now in a slightly better place where as you point out, throwing insane amounts of money towards Apple solves most of these issues. Except for those who don't want a T2 chip, or need an open bootloader.
I don’t know that the “Apple tax” moniker is really fair anymore, either.
The machines have always commanded a premium for things that enthusiasts don't see value in (i.e. anything beyond numeric spec sheet values), so most critics completely miss the point of them.
There's a valid argument to be made that they're also marked up to higher margins than rivals even beyond the above, but I'm not sure if any end user has really ever eaten that cost - if you buy a MacBook, there has always been someone (students) to buy it back again 3/5/10 years down the road for a significant chunk of its original outlay. That doesn't happen with any other laptop - they're essentially scrap (or worth next to nothing) within 5 years. After 10 years I might actually expect the value to be static or even increase for its collector value (e.g. clamshell iBook G3s).
The total cost of ownership for Apple products is actually lower over three years than any rival products I’m aware of.
> The machines have always commanded a premium for things that enthusiasts don't see value in (i.e. anything beyond numeric spec sheet values), so most critics completely miss the point of them.
It's not just intangibles. I really like using Macs, but my latest computer is a Dell XPS 17. This is not a cheap computer if you get the 4k screen, 64GB of RAM and the good graphics card. At those prices, you should consider the MBP16. The MBP is better built, has a better finish and just feels nicer.
Thing is, Dell will sell me an XPS 17 with a shitty screen because I don't care about the difference and would rather optimise battery life. I can get 3rd party RAM and SSDs. I can get a lesser graphics card because I don't need that either. I can get a more recent Intel CPU. And I can get the lesser model with a greater than 25% discount (they wouldn't sell me the better models with a discount though).
I think some of the Apple Tax is them not being willing to sell you a machine closer to your needs, not allowing some user-replaceable parts, and not having discounts.
It works both ways: if you get something in Apple hardware, you will get the nice version of it. If you can't get something there, you will have to do without.
Example: I've been looking at the X1 Nano. It is an improvement compared to other lines (it has a 16:10 display, finally!), but it is still somewhere in the middle of the road.
The competitor from Apple has a slightly better display, much better wifi, and no option for LTE/5G.
The Nano has a 2160x1350, 450-nit display with Dolby Vision. Apple has a 2560x1600, 400-nit (Air)/500-nit (MBP) display with P3. The slightly higher resolution means that Apple would display 9 rendered pixels using 8 physical ones when using the 1440x900@2X resolution (about 177% scale), but getting a similar scale on the Nano would mean displaying 8 rendered pixels using 6 physical ones (150% scale). Similarly, the Dolby Vision is an unknown (how would it get used?), while the P3 from Apple is a known.
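The scaling arithmetic behind that, as a sketch (it assumes both panels run a 2x-rendered "looks like 1440x900" mode the way macOS HiDPI does; Windows scaling on the Nano works differently, so the second half is only an approximation):

    # Panel widths are the spec figures above; the "looks like 1440x900" mode is assumed.
    mbp_width, nano_width, logical_width = 2560, 2160, 1440
    rendered_width = logical_width * 2      # HiDPI renders at 2x, then downscales

    print(mbp_width / logical_width)        # ~1.78 -> ~177% scale on the MacBook
    print(rendered_width / mbp_width)       # 1.125 -> 9 rendered px per 8 physical
    print(nano_width / logical_width)       # 1.5   -> 150% scale on the Nano
    print(rendered_width / nano_width)      # ~1.33 -> 8 rendered px per 6 physical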
The X1 Nano has 2x2 MIMO wifi - an Intel AX200 - with no option for anything better. There are only two antennas in the display frame, so you cannot add more (OK, 3, but the third one is for cellular, and cannot be used for wifi if you forego cellular). Apple ships with 4x4 MIMO. If you have a decent AP at the office or at home, it is a huge difference, yet no PC vendors are willing to improve here.
The cellular situation is the exact opposite. You can get a cellular module for ThinkPads, and you cannot for Apple at all, so if you go that route, you have to live with workarounds.
Yes and no. To be honest I did the same back-of-the-napkin math that you did prior to buying my MBP - the thing is the TCO is even worse if you customise the machine.
Example - a Mac is a Mac for resale purposes - if I attempt to later sell an XPS that I've opened up and put an SSD in and a couple of SODIMMs - I now need to recoup my cost on all of those things. The problem is that if someone is looking at a used XPS with an upgraded SSD and upgraded RAM, they're statistically unlikely to fully investigate and value the (probably really good) parts that you upgraded it with - they're just going to see X,Y,Z numbers and price accordingly.
Generally though, a 5 year old Windows laptop with 16GB RAM still commands the value of a 5 year old Windows laptop as best I could tell looking at resale values.
I wasn't trying to address the resale value, only the tax part. The perception of the tax comes from Apple simply not offering compromised parts for a particular set of parameters, and from other manufacturers being willing to sell at large discounts regularly!
> I don’t know that the “Apple tax” moniker is really fair anymore, either.
I think it's still accurate, and honestly that's Apple's business model.
I think the resale value for a student macbook doesn't really matter. It still costs the student - while they are poor - as much as 4x what other students pay for their laptop. Many students are paying $250 for their laptop.
I threw out a laptop I had from 2008 that was, at the time, a top-of-the-line $3,000 machine. I bought it in the US when I was there on vacation. This was when the dollar was at such a low that I got the device for, at the time, the equivalent of something like €1,200.
I couldn't sell that device for half of that a year and a half later. I got a newer laptop in 2016, again very specced out for a laptop, at about €1,800; I couldn't sell it for €800 2 years later. I still use that last one because I didn't want to sell it so far under what the market value should be.
If you try to sell anything Apple related that isn't more than 5 years old you won't have that problem at all. You can get a good value for the device and sell it without too much of a hassle.
Even if you're a student you would likely be better off buying the cheapest macbook you can find (refurbished or second hand if needed). If you don't like the OS you can just install Windows or a Linux distro on it.
Are you sure about that 250$ number? Because I don't think that's a very realistic number.
> Are you sure about that 250$ number? Because I don't think that's a very realistic number.
For note taking, word processing, basic image editing, web browsing, video playing, etc, you can easily get a capable enough laptop for that price.
This is not comparing like-for-like in terms of what the machines can do, of course. Apple's range doesn't even remotely try to cover that part of the market so a direct comparison is unfair if you are considering absolute price/capability of the devices irrespective of the user's requirements, but for work that doesn't involve significant computation that bargain-basement unit may adequately do everything many people need it to do (assuming they don't plan to also use it for modern gaming in non-working hours).
> If you try to sell anything Apple related that isn't more than 5 years old you won't have that problem at all.
Most people don't consider the resale value of a machine when they buy it. For that to be a fair comparison you have to factor in the chance of it being in good condition after a couple of years' use (this will vary a lot from person to person) and the cost of any upgrades & repairs needed in that time (again, more expensive for Apple products, by my understanding).
And if you buy a $500 laptop and hand it down or bin it, then you are still better off (assuming you don't need a powerful machine) than if you dropped $3,000 for an iDevice and later sold it for $2,000.
> what the market value *should* be.
"Market value" is decided by what the market will bear, not what we want to be able to sell things for, and new & second hand are often very different markets.
> Are you sure about that 250$ number? Because I don't think that's a very realistic number.
I'm not a student but it's pretty close I think.
I invested $350 into a Chromebook that runs native Linux[0] about 4 years ago and it's still going strong as a secondary machine I use when I'm away from my main workstation.
It has a 13" 1080p IPS display, 4GB of memory, an SSD, a good keyboard, and weighs 2.9 pounds. It's nothing to write home about but it's quite speedy for everyday tasks, and it's even OK for programming, where I'm running decently sized Flask, Rails and Phoenix apps on it through Docker.
If I had to use it as my primary development machine for web dev I wouldn't be too disappointed. It only starts falling apart if you need to do anything memory intensive like run some containers while also running VMs, but you could always spend a little more and get 8GB of memory to fix that problem.
I'm sure nowadays (almost 5 years later) you could get better specs for the same price.
Right, except that when you stop using the Chromebook every day or move on to something better, will it have residual value or just go to landfill?
I love Chromebooks, don't get me wrong, but the problem I've come to realise over time is that many are specced and priced just about at a point where they'll quickly move into obsolescence not long after purchase - at which point the only thing keeping them out of the ground is your willingness to tolerate them after the updates have stopped.
The Mac will still be worth a good chunk of money to someone.
I have a Chromebook Flip here that I adored for several years that I couldn't give away now.
To answer your question accurately will depend on how long it ends up lasting for.
For example if it works well enough for another 4 years, now we need to ask the question on whether or not you could get reasonable value out of an 8+ year old Mac. I never sold one so I'm not sure. My gut tells me it's going to be worth way less than what you bought it for even if it's in good condition.
But more generally, yeah, I have no intention of re-selling this thing if I decide I'm done with it before it physically breaks. I'd probably donate it or give it away for free (if someone wanted it).
I don't see that as too bad tho. If I can get 7-8 years out of $350 device I'm pretty happy, especially if the next one costs about the same.
It's a tough comparison tho because a decently decked out MBP is going to be like 8x as expensive but also have way better specs.
I think it is realistic. It's easy to think people have money to spend on a $999 laptop when you are living in a first-world country. 90% of the world probably couldn't afford that.
> Are you sure about that 250$ number? Because I don't think that's a very realistic number.
I think it's fairly realistic. The Dell Latitude 7250 is probably a good representative of what you can get used for ~$220-$300 US these days: https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2380057.m... The dual-core processor should still be serviceable for everyday work, at ~1.3kg it's light enough to carry around all day, a 1080p resolution should be OK on a 12" screen, and it can take up to 16GiB of RAM, though holding out for one with 16GiB preinstalled will definitely tend to push the cost up to nearer $300: https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2380057.m...
(Then any laptop with similar specs except with a 2-in-1 form factor tends to cost a fair bit more, but that's not a must-have for most students or anyone who might have been considering a MacBook.)
"I think the resale value for a student macbook doesn't really matter"
It's literally the only thing that matters if you're the seller.
If you have your choice of two items to sell 5 years from now, you ideally want to be selling the item that's worth substantially more to the buyer, rather than trying to sell something worthless.
Assuming there's nothing dishonest happening, it's really up to the market to price.
Thing is, the student buying the MacBook is probably going to be substantially better off that way too, in that it will likely retain proportionally more of its value from that point too.
> I don’t know that the “Apple tax” moniker is really fair anymore, either.
Apple on the new Mac Pro that I got a month ago: 192GB memory? That will be $3,000. NewEgg? We'll sell you the same specced memory from the same manufacturer for $800. And you get to keep/sell the baseline 32GB memory.
8TB SSD? $2,000, thanks. OWC and NewEgg? Here, have a PCIe 4xM.2 card and 4 2TB SSDs for $1,100. Oh, and they'll be 50% faster, you just can't have them as the only drive on the system (my Apple SSD runs at around 2800MB/s, the alternative, 4500MB/s).
So they are entirely marked up, and look in any forum - by far most people are not doing what I'm doing, and just "going straight Apple for convenience", though the memory installation was less than 1 minute, and the SSD installation less than 5, including unboxing, seating the 4 drives, reinstalling the heatsink on the card and installing. I get "my time is money", and "it just works" (which, as we know, more and more is less the case with Apple), but really, for me, that was a $3,100 savings for <10 minutes effort.
In terms of high performance products, I’m actually really excited for the next Mac Pro. They’ve got novel design options open to them that no rival has.
The M1 costs Apple relatively little to produce per unit - I would expect them to keep the overall design for a Mac Pro but have stacked modules such that the side wall of the Mac Pro is a grid of 4 or more such modules each with co-located memory like the M1 has. Obviously performance would depend upon the application being amenable to a design like that, but a 32- or 64-core 5nm Mac Pro is not out of the question, and would be impossible to match for performance in the next few years by any Hackintosh.
Even after capacity frees up at TSMC for AMD to move to 5nm, they won’t be able to co-locate memory like the M1 does due to standards compliance with DRAM sticks.
I think the next couple of years will be really turbulent for other vendors - the M1 is likely far more significant for the PC market due to how disruptive it is than it is for the Mac market.
It might force Intel / AMD / Broadcom to get serious about hardware and at least integrate more of the components for notebooks. Maybe not go full on OEM, but a lot more than the CPU, because M1 is probably fundamentally winning with SoC design.
I would like to know if they are using fundamentally better batteries, and how much a 5nm process lead is behind this.
But I will hand it to Apple, if they finally did something to break the 4-8 hour battery life limit, a limit that always seemed to stay the same despite node shrink after node shrink after node shrink, and really about the same on-screen performance for usual browsing/productivity application use.
I was pretty distrustful of the ARM move, but if they deliver this for the Macbook Pro, I'll hop to ARM.
Associated with the CPU people "getting serious" is them pushing an OS, which would have to be Linux. Intel should have done this 20 years ago, at least as leverage to make Windows improve itself.
AMD would be able to do DRAM on package for the lowest wattage "ultrabook" chips, at the cost of producing a very different package for them vs. the bigger laptops that are expected to have upgradable SODIMMs. But I doubt that this "co-location" is that huge for performance. Whatever memory frequency and timings Apple is using are likely easily achievable through the regular mainboard PCB, maybe at the cost of slightly more voltage. DDR4 on desktop is overclockable to crazy levels and that's going through lots of things (CPU package - socket pins - board - slots - DIMMs).
> stacked modules such that the side wall of the Mac Pro is a grid of 4 or more such modules each with co-located memory like the M1 has
Quad or more package NUMA topology?? The latency would absolutely suck.
Why would latency suck? 64 cores are already only beneficial for algorithms which are parallelizable -- with the most common class of parallelizable algorithm being data-parallel... So -- shouldn't the hardware and OS be able to present the programmer with the illusion of uniform memory and just automatically arrange for the processing to happen on the compute resources closest to the RAM / move the memory closer to the appropriate compute resource as required?
Yeah, I'm no kernel developer, but I've been replying to anyone saying 'just stick n * M1 in it' that even AMD has been trying to move back to more predictable memory access latency and fewer NUMA woes.
But in general we're moving toward even less uniform memory, with some of it living on a GPU. NUMA pretended that all memory was the same latency, because C continues to pretend we're on a faster PDP-11, but this seems like a step in the wrong direction as for how high-performance computation is progressing.
What I don't understand - Windows 10 Pro comes with tons of pre-installed junk - Candy Crush etc. Just charge what you need to; businesses don't want this stuff.
Well, large businesses at least get Windows 10 Enterprise, which doesn't come with all of that nonsense. The real shame is that you can't get Windows 10 Enterprise without a volume license.
It installed itself on any Win 10 machine I saw (most of which run Pro), except those running Enterprise (or LTSC/LTSB), or an education license.
Er, I heard that junk doesn’t install itself on “Pro for Workstations,” but I’m not certain and even then that’s another hundred dollars more expensive than Pro.
And even Enterprise still comes with a lot of junk that, like, 90% of users won’t need. Windows AR? Paint 3D? And so on… half the things in the start menu of a stock Windows 10 Pro install are either crap like Candy Crush, fluff like 3D viewer, or niche like the Windows AR thing.
The worst part about this is that there’s definitely a middle ground between not including anything and pushing crap on people — both nearly every Linux distro I have ever seen, as well as Apple nail that balance, and to be frank with the App Store or Microsoft Store and such I really don’t see the need to include hardly anything.
Install it in a virtual machine, every 90 days make a new virtual machine from scratch. That or use the secret code to reset the demo days. Enter an Enterprise key when you want to register it for real.
> it won't help in areas that Apple just organizationally doesn't care about/doesn't have the bandwidth for because that's not a technology problem
I would posit that Apple is always going to keep macOS working on some workstation-class hardware, just because that kind of machine is what Apple's software engineers will be using internally, and they need to write macOS software using macOS.
Which means one of two things:
1. If they never release a workstation-class Apple Silicon chip, that'll likely mean that they're still using Intel/AMD chips internally, and so macOS will likely continue to be compiled for Intel indefinitely.
2. If they do design workstation-class Apple Silicon chips for internal use, they may as well also sell the resulting workstation-class machines to people at that point. (Or, to rearrange that statement: they wouldn't make the chips if they didn't intend to commercialize them. Designing and fabbing chips costs too much money!)
Which is to say, whether it be a Hackintosh or an Apple Mac Pro, there's always going to be something to cater to workstation-class users of Apple products — because Apple itself is full of workstation-class users of Apple products.
> I would posit that Apple is always going to keep macOS working on some workstation-class hardware, just because that kind of machine is what Apple's software engineers will be using internally, and they need to write macOS software using macOS.
I hope I'm not out of line here, but this is not what a "workstation" is. "Workstation" actually has a specific meaning in the realm of enterprise computing solutions, and developers do not (generally) use workstations.
A workstation is something that, say, the people at Pixar use, or Industrial Light and Magic. It's an incredibly powerful machine that can handle the most intensive of tasks. Software development is generally not such a task, unless you're frequently re-compiling LLVM from source or something. (And even then, it's a world of difference.)
Apple's software developers, like most software developers who use Apple machines, use MacBook Pros (for the most part). Sometimes Mac Minis if they need multiple test machines, and I'm sure there are some who also have Mac Pros. But overwhelmingly, development is done on laptops that they dock while at work and take home with them after. (This was my experience when I interned there, anyway.)
Apple develops not just macOS, but also application software like Logic and FCPX. The engineers writing that code need to test it on full-scale projects (probably projects on loan to them from companies like Pixar.)
But moreover, changes to foundational macOS libraries can cause regressions in the performance of this type of software, and so macOS developers working on systems like Quartz, hardware developers working on the Neural Engine, etc., also work with these apps and their datasets as regression-test harnesses.
See also: the Microsoft devs who work on DirectX.
All of this testing requires "workstation" hardware. (Or servers, but Apple definitely isn't making server hardware that can run macOS at this point. IIRC, they're instead keeping macOS BSD-ish enough to be able to write software that can be developed on macOS and then deployed on NetBSD.)
"I would posit that Apple is always going to keep macOS working on some workstation-class hardware, just because that kind of machine is what Apple's software engineers will be using internally, and they need to write macOS software using macOS."
I have always hoped that we could rely on that heuristic - that internal Apple usage of their own products would guarantee that certain workflows would be unbroken.
In practice, this has never held up.
Over the past 10-12 years it has been reinforced over and over and over: Apple engineers use single monitor systems with scattered, overlapping windows which they interact with using mousey-mousey-everything and never keyboard shortcuts.
They perform backups of critical files - and manage financial identities - using their mp3 player.
The fact that multiple monitors - and monitor handoff - is broken in fascinating new ways with every version of OSX tells you how Apple folks are (and are not) using their own products.
It sounds like you have an issue with the lack of window snapping keyboard shortcuts that are in Windows 10, as well as iPhone backups happening in iTunes until they moved to finder, and iCloud being connected to iTunes although it's managed in SysPrefs. And you have seen some regressions along with the successive improvements to display management. Is that fair?
If so, what is the connection to professional workflows on macOS?
Yeah, I think it's pretty clear, based on all the bugs with multi-monitor support down to their Mac minis, everyone at Apple must be running iMac Pros, and maybe they're using Sidecar to make their iPad into an extra screen.
Cost of a complete system equivalent to the almost-car-priced Mac: about $4,200.
$9k-15k isn't "a lot", it's a crazy amount. $15k is 1/4 of the median household's income.
Most of planet Earth can't sink $6,000 into a computer, let alone $15,000. Under Apple, the standard expandable board-in-a-box with room to expand is a category available to 1% of the US and 0.1% of the world.
Exactly this. I was fine with the Mac Pro until 2012; it was a little more expensive than PCs but not outrageously so (maybe 30% more, and that's a tax I was fine paying given the OS and that it was quite a well-built machine).
The new Mac Pro is 3-4x the price of a machine built around AMD with equivalent performance. I'm building a Threadripper for exactly this reason. Most of the issue is Intel vs AMD, the fact that AMD's Threadrippers are an amazing deal when it comes to performance per dollar, and that Apple has an aversion to offering decent GPUs.
The M1 was released on the MacBook Air, MacBook Pro, and Mac Mini.
Buying an iMac now would seem to be a poor decision.
From what I'm seeing in some of the comments, people are so lost in the history of the past years of Apple being the pooch ridden from behind on performance, that they can't get their heads out of their arses to see how awesome this is.
I am sitting here right now wondering if I should invest more of my savings directly in Apple stock, at least temporarily to ride their sales wave, or if I should buy a Mac mini and a nice wide curved monitor with a mechanical keyboard from WASD and be f'ing awesome all of a sudden.
The only reason I'm not hitting the buy button is that all of that isn't $135. There's no reason for that amount, but if it said $135, I'd have already paid for it and been drinking beer to celebrate the happiest purchases I ever made.
The new Mini is certainly impressive and suitable for many tasks, but not a replacement for a full desktop machine. Memory and storage are very limited, and the GPU, while great for the MacBook Air, is far from desktop performance. Also, for desktop, the ports selection is very limited.
The iMac in many senses isn't a replacement for a proper desktop. You can't expand the disk storage, many iterations had no great graphics cards, this seems to be somewhat better now, but an upgrade means having to upgrade everything, including the screen. You can't even clean out the fans after some years.
Yes, I own an iMac, as this is the closest to a desktop machine Apple sells, but a replacement for what the Mac Pro used to be, it is not.
Looks pretty nice, but for many you'd be better off with:
- AMD 5600x or 5700x (saving $500 or $400)
- Samsung 970 pro 2TB for $229 (twice the space for the same price)
- rtx 3060 (in a few weeks)
You'll save a fair bit (about $600), run quite a bit cooler, be much easier to keep quiet, and have twice the disk space. Or buy 2x 2TB NVMe drives (motherboards with two M.2 slots are common these days).
Sure, the 5600X/5700X isn't as fast in throughput, but how often do you max out more than 6/8 cores? Per-core performance is near identical, and with more memory bandwidth per core you run into fewer bottlenecks.
I bet that over a few years more people would notice the doubled disk space than the missing extra cores.
I don't use Apple products, but I think the high price of a workstation is not something specific to Apple.
From a discussion I had with a friend recently, I found that Precision workstations from Dell or Z workstations from HP have similar prices for similar performance (sometimes prices can reach $40k or $70k).
When comparing the Mac Pro to an enthusiast PC build, yes, the Mac Pro is "overpriced", but the Mac Pro uses a Xeon, which is pricier than a Ryzen (even if it's inferior performance-wise), and a pro GPU, which also costs more than a consumer GPU (again, even if performance is inferior). The price of an Nvidia Quadro is always higher than a GeForce GPU with the same specs.
It's not just "enthusiast PC builds". You can buy plenty of PCs in that middle category of high-but-not-extreme performance without the certifications, Quadros and vendor markup that a full-on "workstation" model has. And they're perfectly fine choices for many professional use cases.
For that market, the Mac Pro is overpriced (the high-end Dell/HP workstations are too), and Apple doesn't make anything more suited for it. That's the criticism. That the Mac Pro is acceptably priced compared to the Dell/HP workstations doesn't matter if that's not what you need.
> I think the high price of a workstation is not something specific to Apple.
A professional workstation, with support, services, guaranteed replacement components, guaranteed service-life, maintenance contracts and so on is very different from an enthusiast-built machine.
It's like comparing a BMW M3 to a tricked out VW Golf. You can fit a bigger, badder engine under the VW's hood, stiffen the suspension, replace the gearbox and so on but, in the end, you can get one straight from the dealer and not everyone is inclined to assemble a car from parts.
Did that once. It's fun, educational and not very practical.
Depends on the model. The E-series 3-ers were very reliable; for the F-series and newer it is exactly as you wrote.
The leasing thing is for a slightly different reason: BMWs are bought by a market segment that always wants something new. They would not drive an older car even if it were reliable; it would not be cool enough. Unfortunately, since circa 2010 BMW also figured this out, and since then their cars have stopped being good -- they don't have to last -- and are just expensive.
Yes and no. $15k hardware is for people who are using it professionally. By "professionally" I mean that they can put such a purchase down as a business cost and pay less tax. From my perspective it does not matter much whether I pay ~$15k to the tax office or to Apple.
In Europe the incentive to buy expensive goods as a company (cars, fancy office furniture, etc.) is even bigger because of VAT (which is much bigger than US sales tax).
That just isn't how taxes work. If you spend 15k of your profits on capital goods, you don't reduce your taxes by 15k, because no one was going to charge you 100% tax on it. You only save your tax rate times the purchase price.
It depends. If it's a constant workload, it may be better to lease the server and operate it on-prem. If it's spiky, then cloud and on-demand/spot is the best option.
You can get the VAT off, sure, but on the rest you just get to pay out of pre-tax profit. In the UK that means 20%.
So a machine that's £5k retail becomes £4167 without VAT, effectively £3333 if you take into account tax savings, which only apply if your company is in profit. A £15K machine effectively would still cost you £10k.
It's a big saving, sure, but it's still a very expensive machine.
That tax rate is corporation tax, not a personal tax. Does any European country have a 50% corporation tax?
You're right if you start looking at "Well I run my own company so the cost compared to paying myself that cash as a dividend is much smaller", but that only really applies to those of us who do run our own small companies, own them fully and run them profitably, and have already pumped their personal earnings up to that level. And then we're on to a question about what that box is for and why it's needed, is it a company asset or a personal one?
And remember that you get to apply the same percentage discount to any other machine - your 15K apple box may come down to a conceptual £5K hit on your pocket, if you're paying 50% personal tax on top of the company taxes, but a £4-5k Zen 3 box with dual nvidia 3090s in it will come in at £1333-£1600 by the same metric and quite likely perform better...
I mean, if you're not running macos-specific stuff, then a top-end Zen3 box with a couple of 3090s in it is going to have more grunt than a 15k mac pro with a Xeon and a Vega II Duo.
But I wasn't really here to talk about comparative value anyway - this was a tax discussion!
Strictly speaking, yes, there is the expectation that the machine is used entirely for business purposes, if the business is paying for it, otherwise it might be considered a benefit in kind. It's not so much about VAT then as PAYE.
However I feel no particular guilt that the workstation I use for my full-time dev day-job also has a windows partition for gaming in the evening, and I hope that the tax authorities would see things the same way! It's not like the asset isn't a justified business purchase.
Obviously “I’m willing to pay a lot” can mean a wide range of things, but the comment is pretty clearly talking about paying a moderate premium over the competition -- the same way a MacBook Pro model might cost $2500 where you could get a similarly specced Windows laptop for $1800, or an iPhone might cost 30% more than a similar Android flagship.
It’s an order of magnitude different with the Mac Pro: the base model is a $6000 machine that will perform like a ~$1500 PC. And the base model makes no sense to buy; it’s really a $10k-$30k machine. It’s a completely different product category.
I don't mind paying for a new Apple, but I do mind paying for repairs on a system that fails just after the warranty runs out. Paying $1500 for a new computer every 5 years or so is good. Paying $1500 for a computer every 16 months, not so much.
The monitor on my MacBook Pro just died, and I bought it July of last year. The repair was about $850 USD. Luckily my credit card covered the hardware warranty, but I'm kind of wishing I'd bought AppleCare.
AppleCare for Macs is definitely worth it. They even replaced the display on my MBP after 5 years when I called (the key seems to be to call someone at Apple directly). This was a part they should've recalled, but in any case I wouldn't have gotten a new display without AppleCare.
There was a display replacement program for 2012-2015 MBP13s, due to the staingate. Though I recall that it was for 4 years since date of purchase, or so.
Yep it was related to that, even though AppleCare ended up replacing my screen after 6 years (not 5 like I initially remembered). It was an unexpected bonus and I’m still quite happy about it. But the key was to call AppleCare directly and not go to the geniuses.
I wonder if there will be one for 2019 MacBook Pros. The repair shop said the replacement display was also faulty, so they had to order a replacement replacement.
Unless absolutely everything in the PC fails at once, you can just repair it and change components as needed. Not really doable with a Mac given that you can't buy the components from Apple.
Because even a $3K non-Apple Machine outperforms the $5K base model Mac Pro by a large margin, and if I spent $5K outside Apple it would be even more ridiculous, double RTX 3090 + 24 core Threadripper ridiculous.
> Retain and release are tiny actions that almost all software, on all Apple platforms, does all the time. [...] The Apple Silicon system architecture is designed to make these operations as fast as possible. It’s not so much that Intel’s x86 architecture is a bad fit for Apple’s software frameworks, as that Apple Silicon is designed to be a bespoke fit for it [...] retaining and releasing NSObjects is so common on MacOS (and iOS), that making it 5 times faster on Apple Silicon than on Intel has profound implications on everything from performance to battery life.
> Broadly speaking, this is a significant reason why M1 Macs are more efficient with less RAM than Intel Macs. This, in a nutshell, helps explain why iPhones run rings around even flagship Android phones, even though iPhones have significantly less RAM. iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance.
This quote doesn’t really cover why M1 Macs are more efficient with less RAM than Intel Macs. You’ve got a memory budget, it’s likely broadly the same on both platforms, and the speed at which your retains/releases happen isn’t going to be the issue. It’s not like Intel Macs use GC where the M1 uses RC.
(It explains why iOS does better with less ram than android, but the quote is specifically claiming this as a reason for 8GB ram to be acceptable)
I doubt the M1 Macs are really using memory much more efficiently. The stock M1 Macs with 8GB were available quickly, while the Macs with 16GB of RAM or larger disks had a three to four week ordering delay, so a lot of enthusiasts and influencers rushed out and got base models. They are then surprised to find they can work OK in most apps with only 8GB.
Perhaps they never really needed to fit 32GB into their Intel Macs either.
Some days after the glowing reviews, and the strange comments about magic memory utilization, we now see comments concerned about SSD wear due to swap file usage.
If applications and data structures are more compact in memory on the ARM processors, it should be easy to test: you just need an Intel Mac and an M1 Mac running the same app on the same document, and then compare how much memory each uses.
When you need 32 or 64GB of RAM, it's not because the data structures or the programs you use need that memory; it's because the data they work on (database content, virtual machines, images, videos, music...) fills that RAM, and that data is not going to occupy less space on an ARM machine.
However, real-world use of such massive amounts of data is limited for the typical desktop user. Massive database loads usually happen on specialized servers, where 256GB of RAM and more is pretty mundane.
So on consumer PCs, RAM is perhaps mostly used for caching, or eaten away by memory leaks and poorly designed garbage collection.
And if your GPU is able to do real-time rendering on data-heavy loads, maybe you need less caching of intermediate results as well.
Plenty of use cases for more than 8gb of ram. When you're doing data analysis on even smaller datasets you may need several times more available memory than the size of the dataset as you're processing it.
1. Again, the typical use case for an entry-level PC is not data analysis on big data.
2. My current production server is a PostgreSQL database on a 16GB RAM VM running Debian (my boss is stingy). This doesn't prevent me from managing a 300GB+ data cluster with pretty decent performance and performing actual data analysis.
3. If Chrome sometimes uses 8GB+ for a web browser, for goodness' sake, the only explanation is poor design; there is no excuse.
I think you’re right. I’ve only ever needed 32 gb when I was running a local hadoop cluster for development. Those virtual images required the same amount of ram regardless of OS.
It's a contributing factor. If things like retain/release are fast and you have significantly more memory bandwidth and low latency to throw at the problem, you can get away without preloading and caching nearly as much. Take something simple like images on web pages: don't bother keeping hundreds (thousands?) of decompressed images in memory for all of the various open tabs. You can just decompress them on the fly as needed when a tab becomes active and then release them when it goes inactive and/or when the browser/system determines it needs to free up some memory.
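(Purely as an illustration of that decode-on-demand idea, and not how any particular browser actually does it: a minimal Swift sketch might keep only the compressed file around, decode when needed, and let NSCache drop decoded bitmaps under memory pressure. The class name here is made up.)

    import Foundation
    import AppKit  // NSImage; assumes macOS

    // Hypothetical decode-on-demand cache: compressed data stays on disk,
    // decoded bitmaps live in an NSCache, which evicts them automatically
    // when the system is under memory pressure instead of pinning a decoded
    // copy in RAM for every open tab.
    final class DecodedImageCache {
        private let cache = NSCache<NSURL, NSImage>()

        func image(at url: URL) -> NSImage? {
            if let cached = cache.object(forKey: url as NSURL) {
                return cached                      // already decoded and still resident
            }
            guard let decoded = NSImage(contentsOf: url) else { return nil }  // decode on demand
            cache.setObject(decoded, forKey: url as NSURL)
            return decoded
        }
    }

The point is only that fast CPU and memory paths make "re-decode when needed" cheap enough to trade against keeping everything resident.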
You've completely changed the scope of what's being discussed, though. Retain/release being faster would just surface as regular performance improvements. It won't change anything at all about how an existing application manages memory.
It's possible that apps have been completely overhauled for a baseline M1 experience. Extremely, extraordinarily unlikely that anything remotely of the sort has happened, though. And since M1-equipped Macs don't have any faster IO than what they replaced (disk, network, and RAM speeds are all more or less the same), there wouldn't be any reason for apps to have done anything substantially different.
Third, Marcel Weiher explains Apple’s obsession with keeping memory consumption under control, drawing on his time at Apple, as well as the benefits of reference counting:
>where Apple might have been “focused” on performance for the last 15 years or so, they have been completely anal about memory consumption. When I was there, we were fixing 32 byte memory leaks. Leaks that happened once. So not an ongoing consumption of 32 bytes again and again, but a one-time leak of 32 bytes.
>The benefit of sticking to RC is much-reduced memory consumption. It turns out that for a tracing GC to achieve performance comparable with manual allocation, it needs several times the memory (different studies find different overheads, but at least 4x is a conservative lower bound). While I haven’t seen a study comparing RC, my personal experience is that the overhead is much lower, much more predictable, and can usually be driven down with little additional effort if needed.
But again, that didn't change with M1. We're talking macOS vs. macOS here. Your quote is fully irrelevant to what's being discussed, which is the outgoing 32GB MacBook vs. the new 16GB-max ones. They are running the same software, using the same ObjC & Swift reference counting systems.
ARC is not specific to the M1, BUT it has been widely used in ObjC & Swift for years AND is thus something the M1 is heavily optimized for, performing "retain and release" way faster (even when emulating x86).
A perfect illustration of Apple's long-term software+hardware strategy.
That still doesn't mean that M1 Macs use less memory. If retain/release is faster, then the M1 Macs have higher performance than Intel Macs; that is easily understood. The claim under contention here is that M1 Macs use less memory, which is not explained by hardware-optimized atomic operations.
Ok. However the posts in this thread were asking how the M1 Macs could use less RAM than Intel Macs, not if they were more optimized. The GP started with:
>This quote doesn’t really cover why M1 macs are more efficient with less ram than intel macs? You’ve got a memory budget, it’s likely broadly the same on both platforms
Well, if less memory is used to store garbage thanks to RC, less memory is needed. But that was largely discussed in other sub-comments hence why we focused more on the optimisation aspect in this thread.
CPU speed is often bound by memory bandwidth and latency... it's all related. If you can't keep the CPU fed, it doesn't matter how fast it is theoretically.
What I mean is that (to my understanding) memory bandwidth in modern devices is already high enough to keep a CPU fed during decompression. Bandwidth isn't a bottleneck in this scenario, so raising it doesn't make decompression any faster.
RAM bandwidth limitations (latency and throughput) are generally hidden by the multiple layers of cache between the RAM and the CPU prefetching more data than is generally needed. Having memory on chip could make the latency lower, but as ATI showed with HBM memory on a previous generation of its GPUs, it's not a silver bullet.
I am going to speculate now, but maybe, just maybe, if some of the silicon that Apple has used on the M1 is used for compression/decompression, they could be transparently compressing all RAM in hardware. Since this is offloaded from the CPUs and allows a compressed stream of data from memory, they would achieve greater RAM bandwidth, less latency and less usage for a given amount of memory. If this is the case I hope that the memory has ECC and/or the compression has parity checking...
> I am going to speculate now, but maybe, just maybe, if some of the silicon that Apple has used on the M1 is used for compression/decompression, they could be transparently compressing all RAM in hardware. Since this is offloaded from the CPUs and allows a compressed stream of data from memory, they would achieve greater RAM bandwidth, less latency and less usage for a given amount of memory.
Are you aware of any x86 chips that utilize this method?
Not that I am aware of. I remember seeing Apple doing something like it in software on the Intel Macs, which is why I speculated about it being hardware for the M1.
> Blosc [...] has been designed to transmit data to the processor cache faster than the traditional, non-compressed, direct memory fetch approach via a memcpy() OS call. Blosc is the first compressor (that I'm aware of) that is meant not only to reduce the size of large datasets on-disk or in-memory, but also to accelerate memory-bound computations (which is typical in vector-vector operations).
I can't speak to the MacOS system, but from years spent JVM tuning: you're in a constant battle finding the right balance of object creation/destruction (the former burning CPU, the latter creating garbage), keeping memory use down (more collection, which burns CPU and can create pauses and hence latency), or letting memory balloon (which can eat resource, and makes the memory sweeps worse when they finally happen).
Making it cheaper to create and destroy objects with hardware acceleration, and to do many small, low-cost reclaims without eating all your CPU would be a magical improvement to the JVM, because you could constrain memory use without blowing out CPU. From what's described in TFA it sounds like the same is true for modern MacOS programming.
Manual memory management isn't magic and speeding up atomic ops doesn't fundamentally change anything. People have to spend time tuning memory management in C++ too; that's why the STL has so many ways to customise allocators and why so many production C/C++ codebases roll custom management schemes instead of using malloc/free. Those calls are just expensive and slow, so manual arena destruction etc. is often worth it.
The JVM already makes it extremely cheap to create and destroy objects: creation is always ~free (just a pointer increment), and then destruction is copying, so very sensitive to memory bandwidth but done in parallel. If most of your objects are dying young then deallocation is "free" (amortized over the cost of the remaining live objects). Given the reported bandwidth claims for the M1 if they ever make a server version of this puppy I'd expect to see way higher GC throughput on it too (maybe such a thing can be seen even on the 16GB laptop version).
The problem with Java on the desktop is twofold:
1. Versions that are mostly used don't give memory back to the OS even if it's been freed by the collector. That doesn't start happening by default until like Java 14 or 15 or so, I think. So your memory usage always looks horribly inflated.
2. If you start swapping it's death because the GC needs to crawl all over the heap.
There are comments here saying the M1 systems rely more heavily on swap than a conventional system would. In that case ARC is probably going to help. At least unless you use a modern pauseless GC where relocation is also done in parallel. Then pausing background threads whilst they swap things in doesn't really matter, as long as the app's current working set isn't swapped out to compensate.
Yea, this is a BS theory. I have a 16GB M1 MacBook Air and the real answer is that it has super fast SSD access, so you don’t notice the first few gigabytes of swap.
But when swap hits 8-9 GB, its effects start to get very noticeable.
This seems correct. RC vs GC might explain how a Mac full of NSObjects needs less memory than a Windows full of .NET runtimes, but it doesn’t explain how M1 Mac with 16GB of RAM is faster than x86 Mac with 16GB or more of RAM.
Besides, a lot of memory usage is in web browsers, which must use garbage collection.
Looking at the reviews of M1 Macs, those systems are still responsive and making forward progress at a “memory pressure” that would make my x86 Mac struggle in a swap storm. It seems to come down to very fast access to RAM and storage, large on-die caches, and perhaps faster memory compression.
Oh one more thing, they said in the Apple Silicon event that they had eliminated a lot of the need for copying RAM around so … could be some actual footprint reduction there?
I tend to agree! I think Big Sur on M1 uses 16kB page size vs 4kB on Intel so maybe that contributes to more efficient / less obvious perf issue when swapping.
Yeah, it's a bit of a stretch. To the extent that macOS apps use garbage collection less than PC apps, they would need less RAM. But they are kind of hopping around a macOS vs Android comparison, which makes no sense. I think it's Mac enthusiasts trying to imagine why a max of 8 or 16GB is OK. It is OK for most people anyway.
It also would have no difference between the outgoing Intel ones and the incoming Apple Silicon ones. Same pointer sizes, same app memory management, etc... Some fairly minor differences in overall binary sizes, so no "wins" there or anything either.
All Swift/ObjC software has been doing ARC for ten (?) years. Virtual memory usage will be the same under M1. It will just pay off in being faster to refcount (ie as fast as it already is on an iPhone), and therefore the same software runs faster. Probably won't work under Rosetta 2 with the per-thread Total Store Ordering switch. And it's probably not specific to NSObject, any thread safe reference counter will benefit. There are more of those everywhere these days.
2 more points:
- All the evidence I've seen is gifs of people opening applications in the dock, which is... not impressive. I can do that already, apps barely allocate at all when they open to "log in to iCloud" or "Safari new tab". And don't we see that literally every time Apple launches Mac hardware? Sorry all tech reviewers everywhere, try measuring something.
- I think the actual wins come from the zillion things Apple has done in software. Like: memory compression, which come to think of it might be possible to do in hardware. Supposedly a lot of other work/tuning done on the dynamic pager, which is maybe enabled by higher bandwidth more than anything else.
Fun fact: you can stress test your pager and swap with `sudo memory_pressure`. Try `-l critical`. I'd like to see a benchmark comparing THAT under similar conditions with the previous generation.
All comparisons are appropriate, but the question here was whether the Mac laptops' memory limits were somehow made better by more efficient use of memory. They are not. These are laptops, not phones or tablets; memory is used as efficiently as in previous laptops.
>The memory bandwidth on the new Macs is impressive. Benchmarks peg it at around 60GB/sec–about 3x faster than a 16” MBP. Since the M1 CPU only has 16GB of RAM, it can replace the entire contents of RAM 4 times every second. Think about that…
Yes, reading some more of the discussions it seems like the answer is that (roughly) the same amount of memory is used, but hitting swap is no longer a major problem, at least for user-facing apps. Seems like the original quote is reading too much into the retain/release thing.
Yeah, but swapping suddenly isn't as big a problem as before (probably). You had to be very careful not to hit thrashing on x86_64; now you don't have to worry so much.
Or that's how I understand this, I don't actually own M1 Mac.
First, there is no sensible reason why RAM bandwidth would differ by 3x -- it's LPDDR4X either way -- and you can't refill it from an SSD that fast; the SSD would limit swap speed.
> Besides the additional cores on the part of the CPUs and GPU, one main performance factor of the M1 that differs from the A14 is the fact that it’s running on a 128-bit memory bus rather than the mobile 64-bit bus. Across 8x 16-bit memory channels and at LPDDR4X-4266-class memory, this means the M1 hits a peak of 68.25GB/s memory bandwidth.
The point of the memory bandwidth is so that it never has to swap to disk in the first place.
By swap speed I think he meant that the bottleneck is the time it takes to move data from the SSD to the RAM, not how fast the RAM can be read by the processor.
Due to how DRAM works, the array itself cannot have more than one port, and almost certainly even the DRAM chip as a whole is still some variation of single-ported SDRAM (long ago there were various pseudo-dual-ported DRAM chips, but these were only really useful for framebuffer-like applications). But given that there are multiple levels of cache in the SoC, it is a somewhat moot point.
I suspect you mean they come with two channels on a single chip, which is not the same as two ports. Channels access separate bits of memory. Ports access the same bits of memory.
The reason GDDR isn't typically used for system RAM is that it has higher latency and is more power hungry. The GDDR6 memory on a typical discrete card alone uses more power than an entire M1-powered Mac Mini.
I believe the poster meant "normal" in the sense that it's a conventional memory technology for laptops. (ie LPDDR4 not GDDR like had been suggested above).
Being inside the CPU package should allow for less than a 1% improvement in latency by my napkin math. It's got good latency because it is a top-of-the-line mobile RAM setup, but that isn't unique to the M1.
I 100% agree, but I've audited my code, and on other platforms my code closely agrees with LMbench's lat_mem_rd, which seems pretty well regarded for accuracy.
Someone please correct me for the sake of all of us if I’m wrong, but it sounds like Apple is using specialized hardware for “NSObject” retain-and-release operations, which may bypass/reduce the impact on general RAM.
On recent Apple Silicon CPUs most uncontended atomic operations are essentially free - almost identical in speed to the non-atomic version of the same operation. Reference counting must be atomic-safe whether using ARC or MRR. On x86 systems those atomic operations impose a performance cost; on Apple Silicon they do not. It does not change how much memory is used, but it does mean you can stop worrying about the cost of atomic operations. It has nothing to do with the ARMv8 instruction set; it has to do with how the underlying hardware implements those operations and coordinates among cores.
Separately from that x86's TSO-ish memory model also imposes a performance cost whether your algorithm needs those guarantees or not. Code sometimes relies on those guarantees without knowing it. Absent hardware support you would need to insert ARM atomics in translated code to preserve those guarantees which on most ARM CPUs would impose a lot of overhead. The M1 allows Rosetta to put the CPU into a memory ordering mode that preserves the expected memory model very efficiently (as well as using 4K page size for translated processes).
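(If you want to sanity-check the "essentially free" claim yourself, a rough Swift sketch follows. It assumes the swift-atomics package for ManagedAtomic, and it's only a ballpark illustration: the optimizer, loop overhead and clock behaviour all muddy the numbers, so treat the ratio, not the absolute values, as the interesting part.)

    import Atomics      // assumes the swift-atomics package is available
    import Foundation

    let iterations = 10_000_000

    // Plain, non-atomic increments (a debug build keeps the loop; a release
    // build may fold it, so this is illustrative rather than a real benchmark).
    var plain = 0
    let t0 = DispatchTime.now()
    for _ in 0..<iterations { plain &+= 1 }
    let t1 = DispatchTime.now()

    // Uncontended atomic increments -- the core of what a retain has to do.
    let counter = ManagedAtomic<Int>(0)
    let t2 = DispatchTime.now()
    for _ in 0..<iterations { counter.wrappingIncrement(ordering: .relaxed) }
    let t3 = DispatchTime.now()

    print("plain:  \(Double(t1.uptimeNanoseconds - t0.uptimeNanoseconds) / Double(iterations)) ns/op (sum \(plain))")
    print("atomic: \(Double(t3.uptimeNanoseconds - t2.uptimeNanoseconds) / Double(iterations)) ns/op (count \(counter.load(ordering: .relaxed)))")

Run the same thing on an Intel Mac and an M1 and compare the gap between the two lines on each machine.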
> On recent Apple Silicon CPUs most uncontended atomic operations are essentially free - almost identical in speed to the non-atomic version of the same operation.
They are fast for atomics but still far, far slower than the equivalent non-atomic operation. An add operation takes around half a cycle (an upper bound here - with how wide the Firestorm core is, an add is almost certainly less than half a cycle). At 1GHz a cycle is 1 nanosecond, and the M1 runs at around 3GHz. So you're still talking about the atomic operation being >10x slower than non-atomics.
Which should not be surprising at all. Apple didn't somehow invent literal magic here. They still need coherency across 8 cores, which means at a minimum L1 is bypassed for the atomic operation. The L2 latency is very impressive, contributing substantially to that atomic operation performance. But it's still coming at a very significant cost. It's very, very far from free. There's also no ARM vs. x86 difference here, since the atomic necessarily forces a specific memory ordering guarantee that's stricter than x86's default. Both ISAs are forced to do the same thing and pay the same costs.
It's in the post. Half a cycle for an add or less, and cycles are every 1/3 nanosecond. So upper bound for an add would be around 1/6th a nanosecond. Likely less than that still yet, since the M1 is probably closer to an add in 1/8th a cycle not 1/2. Skylake by comparison is at around 1/4th a cycle for an add, and since M1's IPC is higher it's not going to be worse at basic ALU ops.
6 nanoseconds @ 3ghz is 18 cycles. That's on the slow end of the spectrum for a CPU instruction.
Where? 6 nanoseconds is pretty long, that’s about how long it’d take to do an entire retain/release pair, which is a couple dozen instructions I believe.
I don't think that's quite right. Apple believes strongly in retain-and-release / ARC. It has designed its software that way; it has designed its M1 memory architecture that way. The harmony between those design considerations leads to efficiency: the software does things in the best way possible, given the memory architecture.
I'm not an EE expert and I haven't torn apart an M1, but Occam's razor would suggest it's unlikely they made specialized hardware for NSObjects specifically. Other ARC systems on the same hardware would likely see similar benefits.
I suspect that Apple didn't do anything special to improve the performance of reference counting apart from not using x86. Simply put, the x86 ISA and memory model are built on the assumption that atomic operations are mostly used as part of some kind of higher-level synchronization primitive and not for their direct result.
One thing is that the M1 has incredible memory bandwidth and is implemented on a single piece of silicon (which certainly helps with low-overhead cache coherency). Another thing is that Rosetta certainly does not have to preserve the exact behavior of x86 (and in fact it cannot, because doing so would negate any benefits of dynamic translation); it only has to care about what can be observed by the user code running under it.
The hardware makes uncontended atomics very fast, and Objective-C is a heavy user of those. But it would really help any application that could use them, too.
But GCd languages don't need to hit atomic ops constantly in the way ref-counted Objective C does, so making them faster (though still not as fast as regular non-atomic ops) is only reducing the perf bleeding from the decision to use RC in the first place. Indeed GC is generally your best choice for anything where performance matters a lot and RAM isn't super tight, like on servers.
Kotlin/Native lets us do this comparison somewhat directly. The current and initial versions used reference counting for memory management. K/N binaries were far, far slower than the equivalent Kotlin programs running on the JVM and the developer had to deal with the hassle of RC (e.g. manually breaking cycles). They're now switching to GC.
The notion that GC is less memory efficient than RC is also a canard. In both schemes your objects have a mark word of overhead. What does happen though, is GC lets you delay the work to deallocate from memory until you really need it. A lot of people find this quite confusing. They run an app on a machine with plenty of free RAM, and observe that it uses way more memory than it "should" be using. So they assume the language or runtime is really inefficient, when in reality what's happened is that the runtime either didn't collect at all, or it collected but didn't bother giving the RAM back to the OS on the assumption it's going to need it again soon and hey, the OS doesn't seem to be under memory pressure.
These days on the JVM you can fix that by using the latest versions. The runtime will collect and release when the app is idle.
I saw that point brought up on Twitter and I don't know how it makes for more efficient use of RAM.
Specifically, as I understood it, Apple software (written in Objective-C/Swift) uses a lot of retain/release (ARC, Automatic Reference Counting) on top of manual memory management, rather than other forms of garbage collection (such as those found in Java/C#), which gives Objective-C programs a lower memory overhead (supposedly). This is why the iPhone ecosystem is able to run so much snappier than the Android ecosystem.
That said, I don't see how that translates to lower memory usage than x86 programs. I think the supporting quotes he used for that point are completely orthogonal. I don't have an M1 mac, but I believe the same program running on both machines should use the same amount of memory.
The only thing I can think of that would actually reduce memory usage on M1 vs the same version of MacOS on x86 would be if they were able to tune their compressed memory feature to run faster (with higher compression ratio) on the M1. That would serve to reduce effective memory usage or need to fall back to swap. I would not expect something like that to be responsible for more than, say, a 5-10% RAM usage decrease though.
> That would serve to reduce effective memory usage or need to fall back to swap. I would not expect something like that to be responsible for more than, say, a 5-10% RAM usage decrease though.
I think you can reach a lot more than that. Presumably, on Intel they use something like LZO or LZ4, since it compresses/decompresses without too much CPU overhead. But if you have dedicated hardware for something like e.g. Brotli or zstd, one could reach much higher compression ratios.
Of course, this is assuming that memory can be compressed well, but I think this is true in many cases. E.g. when selecting one of the program/library files in the squash benchmarks:
I suspect Apple is using their own LZFSE[0] compression, perhaps now with special tweaks for the M1. The reason I only suspected a 5-10% improvement, though, is that even if it's able to achieve a massive increase in compression ratio (compressing 3GB to 1GB instead of 2GB, say), that's still only saving 1GB total. Which I guess isn't nothing, and is more than 10% on an 8GB machine.
For a very long time, budget Android devices that were one or even two generations older were faster than just released iPhones at launching new apps to interactivity (see: hundreds of Youtube "speed comparison" videos). This was purely due to better software, as the iPhone had significantly faster processors and I/O. RAM doesn't play a big factor at launch. One very minor contributor would be that the GC doesn't need to kick in until later, while ARC is adding its overhead all the time.
Ridiculous that you're downvoted. There are a lot of people posting here who haven't worked on memory management subsystems.
GC vs RC is not a trivial comparison to make, but overall there are good reasons new systems hardly use RC (Objective-C, dating back to the 90s, isn't new). Where RC can help is where you have a massive performance cliff on page access, i.e. if you're swapped to disk. Then GC is terrible because it'll try and page in huge sections of the heap at once, whereas RC is way more minimal in what it touches.
But in most other scenarios GC will win a straight up fight with an RC based system, especially when multi-threading gets involved. RC programs just spend huge amounts of time atomically incrementing and decrementing things, and rummaging through the heap structures, whereas the GC app is flying along in the L1 cache and allocations are just incrementing a pointer in a register. The work of cleaning up is meanwhile punted to those spare cores you probably aren't using anyway (on desktop/mobile). It's tough to beat that by hand with RC, again, unless you start hitting swap.
If M1 is faster at memory ops than x86 it's because they massively increased memory bandwidth. In fact I'd go as far as saying the CPU design is probably not responsible for most of the performance increase users are seeing. Memory bandwidth is the bottleneck for a lot of desktop tasks. If M1 core is say 10% faster than x86 but you have more of them and memory bandwidth is really 3x-4x larger, and the core can keep far more memory ops in flight simultaneously, that'll explain the difference all by itself.
Indeed, one of the articles cited by the article that this discussion is about (https://blog.metaobject.com/2020/11/m1-memory-and-performanc...) links to a paper saying that GC needs 4x the memory to match manual memory management and then makes a huge wacky leap to say that ARC could achieve that with less. It can't. ARC will always be slower than manual memory management because it behaves the same way as naive manual memory management with some overhead on top.
On the other hand, that same paper shows that for every single one of their tested workloads, the generational GC outperforms manual memory management. Now obviously, you could do better with manual memory management if you took the time to understand the memory usage of your application to reduce fragmentation and to free entire arenas at a time, but for applications that don't have the developer resources to apply to that (the vast majority), the GC will win.
I'm not saying that better memory management is the reason Android wins these launch to interactivity benchmarks because the difference is so stark relative to the hardware performance that memory management isn't nearly enough to explain it, but it does contribute to it. (My own guess is that most of the performance difference comes from smarter process initialization from usage data. Apple is notoriously bad at using data for optimization.)
TFA says the hardware optimization for ARC is to the point of being bespoke. Hardware will always beat software optimization. Further, the other GCs have much higher RAM overheads than this combined, bespoke system.
Apple has decades of proven experience producing and shipping massively over-engineered systems. I believe them when they say these processors do ARC natively.
I’m not denying that Apple has better ARC performance. It’s that I don’t understand how an application would use less memory on ARM than x86. I’d expect the ARM code to run faster (as a result of being able to do atomic operations faster), but I don’t see how that translates to less memory usage
In one of those random “benchmark tests” online where someone opened several applications on an M1 Mac with 8GB RAM and did some work on those, they kept Activity Monitor open alongside and pointed to the increase in swap at some stage. So it seems like the swap is fast enough and is used more aggressively. That reduces the amount of RAM used at any point in time. The data in RAM has also benefited from compression in macOS for several years now.
Read up on the performance overhead of GC across other languages. They’re messy and can lock up periodically. They take up significant ram and resources.
Q: Does reference counting 'use' less RAM than GC?
A: Yes (caveats etc. go here, but your question is a good explanation)
Q: Does the M1 in and of itself require less RAM than x86 processors?
A: No
Q: So why are people talking about the M1 and its RAM usage as if it's better than with x86?
A: It's really just around the faster reference counting. MacOS was already pretty efficient with RAM.
I'd like to propose tokamak-teapot's formula for hardware purchase:
Minimum RAM believed to be required = actual amount of RAM required * 2
N.B. I am aware that a sum that's greater than 16GB doesn't magically become less than 16GB, but it is somewhat surprising how well MacOS performs when it feels like RAM should be tight, so I'd suggest borrowing a Mac or making a Hackintosh to experience this if you're anxious about hitting the ceiling.
There is no “next GC cycle”, objc and swift use ref counting on every platform (there was an abortive GC attempt on desktops a few years back but it never saw wide use and has been deprecated since mountain lion).
The post kind of does:
> The benefit of sticking to RC is much-reduced memory consumption. It turns out that for a tracing GC to achieve performance comparable with manual allocation, it needs several times the memory (different studies find different overheads, but at least 4x is a conservative lower bound).
It implies that ref-counting is more economical in terms of wasted memory than GC, with the tradeoff being performance.
That performance tradeoff is what the M1 helps solve.
Nope, they also tried a tracing GC for Objective-C on the desktop, but it was a failure due to interoperability across libraries compiled in different modes alongside C semantics.
Then they pivoted into automating retain/release patterns from Cocoa and sold it, Apple style, as a victory of RC over tracing GC, while moving the GC related docs and C related workarounds into the documentation archive.
> Nope, they also tried a tracing GC for Objective-C on the desktop
Operative word: tried. GC was an optional desktop-only component deprecated in Mountain Lion, which IIRC has not been accepted on the MAS since 2015 and was removed entirely in Sierra.
Without going into irrelevant weeds, "apple has always used refcounting everywhere" is a much closer approximation.
That's not exactly relevant to the subject at hand of what memory-management method software usually uses on macos.
> Which then in Apple style ("you are holding it wrong") turned it around in a huge marketing message, while hiding away the tracing GC efforts.
Hardly?
And people are looking to refcounting as a reason why apple software is relatively light on memory, which is completely fair and true and e.g. generally assumed as one of the reasons why ios devices fare well with significantly less ram than equivalent android devices. GCs have significant advantages, but memory overhead is absolutely one of the drawbacks.
Memory overhead in languages with tracing GC (RC is a GC algorithm) only happens in languages like Java without support for value types.
If the language supports value types, e.g. D, and there is still memory overhead versus RC, then fire the developers or they better learn to use the language features available on their plate.
This shows latency, not memory consumption, as far as I can tell.
> If the language supports value types, e.g. D, and there is still memory overhead versus RC, then fire the developers or they better learn to use the language features available on their plate.
Memory overhead of certain types of garbage collectors (notably generational ones) is well-known and it's specified relative to the size of the heap that they manage. Using value types is of course a valid point, regarding how you should use the language, but it doesn't change the overhead of the GC, it just keeps the heap it manages smaller. If the overhead was counted against the total memory use of a program, then we wouldn't be talking about the overhead of the garbage collector, but more about how much the garbage collector is actually used. Note that I'm not arguing against tracing GCs, only trying to keep it factual.
I think the author doesn't understand what Gruber wrote here. Android uses more memory because most Android software is written to use more memory (relying on garbage collection). It has nothing to do with the chips. If you ran Android on an M1, it wouldn't magically need less RAM. And Photoshop compiled for x86 is going to use about the same amount of memory as Photoshop compiled for Apple silicon. Sure, if you rewrote Photoshop to use garbage collection everywhere then memory consumption would increase, but that has nothing to do with the chip.
Maybe I misread, but I understood that more as Apple using ARC and that gives them a memory advantage. M1 is simply making that more efficient by doing retain-release faster. But I agree that should not change total memory usage.
But I think in general you could say that Apple has focused more on optimizing their OS for memory usage than the competition may have done. Android uses Java which eats memory like crazy and I suspect C# is not that much better being a managed and garbage collected language. Not sure how much .NET stuff is used on Windows, but I suspect a lot.
macOS, in contrast, is really dominated by Objective-C and Swift, which do not use these memory-hungry garbage collection schemes, nor require JIT compilation, which also eats memory.
> I suspect C# is not that much better being a managed and garbage collected language
C# is better than JVM in that it has custom value types.
Say you want to allocate an array of points in Java: you basically have to allocate an array of pointers, all pointing to tiny 8-byte objects (e.g. 32-bit float x and y coords), plus the overhead of an object header for each. If you use C# and structs, it just allocates a flat array of floats with zero overhead.
Not only do you pointlessly use memory, you have indirection lookup costs, potential cache misses, more objects for GC to traverse, etc. etc.
JVM really sucks at this kind of stuff and so much of GUI programming is passing around small structs like that for rendering.
FWIW I think they are working on some proposal to add value types to JVM but that probably won't reach Android ever.
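(For readers coming at this from the Apple side of the thread: Swift draws the same value-type vs reference-type distinction as C#, so a tiny, purely illustrative sketch of the layout difference -- names made up -- might help.)

    // Value type: an array of these stores the two floats inline, contiguously.
    struct PointS { var x: Float; var y: Float }

    // Reference type: an array of these stores pointers to heap objects,
    // each of which also carries an object header and a reference count.
    final class PointC { var x: Float = 0; var y: Float = 0 }

    print(MemoryLayout<PointS>.stride)   // 8: the payload itself, element after element
    print(MemoryLayout<PointC>.stride)   // 8 as well, but that's just the reference;
                                         // the actual object lives elsewhere on the heap

Iterating the struct array walks contiguous memory; iterating the class array chases a pointer per element, which is exactly the indirection and cache-miss cost described above.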
I had a C# project that was too slow and used too much RAM.
I can attest that structs use less memory; however, IIRC they don't have methods, so no GetHashCode(), which made them way too slow to insert into a HashSet or Dictionary.
In the end I used regular objects in a Dictionary. RAM usage was a bit higher than structs (not unbearably so) but speed improvement was massive.
1. structs can have methods
2. the primary value of value types is not to use less RAM (you just save a pointer, I guess times two because of GC) but the ability to avoid having to GC the things, since they are either on the stack or in contiguous chunks of memory, and to leverage CPU caches, as you can iterate over contiguous data rather than hopping around in the heap. Iterating over contiguous data can be a large constant factor faster than iterating over a collection of pointers to heap objects.
>I can attest that structs use less memory however IIRC they don't have methods so no GetHashCode() which made them way too slow to insert in a HashSet or Dictionary
You can and should implement IEquatable on a struct, especially if you plan on placing them in a hashset - the default implementation will use reflection and will be slow but it's easy to override.
You could always have methods (for as long as I can remember at least; I started using .NET in the 3.0 days), you just can't inherit structs or use virtual methods because structs don't have a virtual method table. You can implement interfaces, however, and override operators - it's very nice for implementing 3D graphics math primitives like vectors and matrices, way better than Java in this regard, which was what got me into C# way back then.
It looks like my blog post[1] was the primary source for this (it's referenced both by this post and by the Gruber post), and to be clear, I did not claim that this helps ARM Macs use less RAM than Intel Macs. I think John misunderstood that part and now it has blown up a bit...
I did claim that this helps Macs and iPhones use less RAM than most non-Apple systems, as part of Apple's general obsessiveness about memory consumption (really, really obsessive!). This part of the puzzle is how to get greater convenience for heap allocation.
Most of the industry has settled on tracing GCs, and they do really well in microbenchmarks. However, they need a lot of extra RAM to be competitive on a system level (see references in the blog post). OTOH, RC tends to be more frugal and predictable, but its Achilles heel, in addition to cyclic references, has always been the high cost of, well, managing all those counts all the time, particularly in a multithreaded environment where you have to do this atomically. Turns out, Apple has made uncontended atomic access about as fast as a non-atomic memory access on M1.
This doesn't use less RAM, it decreases the performance cost of using the more frugal RC. As far as I can tell, the "magic" of the whole package comes down to a lot of these little interconnecting pieces, your classic engineering tradeoffs, which have non-obvious consequences over there and then let you do this other thing over here, that compensates for the problem you caused in this other place, but got a lot out etc. Overall, I'd say a focus on memory and power.
So they didn't add special hardware for NSObject, but they did add special hardware that also tremendously helps NSObject reference counting. And apparently they also added a special branch predictor for objc_msgSend(). 8-). Hey, 16 billion transistors, what's a branch predictor or two among friends... ¯\_(ツ)_/¯
Thanks for the clarification, it seems this post has gotten lost in the comments. You could try making a new post on your blog and add it to HN as a new post.
> RAM capacity is just RAM capacity. Possibly Swift-made apps uses less RAM compared to other apps, but microarchitecture shouldn't be matter.
My guess it's mostly faster swapping.
Microarchitecture could help, perhaps by making context switches faster.
But it could also be custom peripheral/DMA logic for handling swapping between RAM and NVM.
I think it makes sense. NVM should be fast enough that RAM only needs to act as more of a cache. But existing architectures have a lot of legacy of treating NVM like just a hard drive. Intel is also working on this with its Optane-related architecture work.
You could also do on-the-fly compression of some kinds of data to/from RAM. But I haven't heard any clues that the M1 is doing that, and you'd need applications to give hints about what data is compressible.
Most experiments with 8GB M1 Macs I've seen so far (on YouTube) seem to start slowing down once the data cannot fit in RAM, although the rest of the system remains responsive, e.g. the 8K RED RAW editing test. In the same test with 4K RED RAW there was some stuttering on the first playback, but subsequent playbacks were smooth, which I guess was a result of swap being moved back into RAM.
My guess would be they've done a lot of optimization on swap, making swapping less of a performance penalty (as ridiculous as it sounds, I guess they could even use the Neural Engine to determine what should be put back into RAM at any given moment to maximize responsiveness.)
macOS has been doing memory compression since Mavericks using the WKdm algorithm, but it has also supported Hybrid Mode[1] on ARM/ARM64, using both WKdm and a variant of LZ4, for quite some time (WKdm compresses much faster than LZ4). I wouldn't be too surprised if the M1 has some optimization for LZ4. So far I haven't seen anybody test it.
It might be interesting to test M1 Macs with vm_compressor=2 (compression with no swap) or vm_compressor=8 (no compression, no swap)[2] and see how it runs. I'm not sure if there's a way to change bootargs on M1 Macs, though.
Exactly, several reports since the launch have pointed out that both the memory bandwidth of the RAM is higher than before, and also the SSDs are faster than before (by 2-3x in both cases I think?)
Combined, they should make a big difference to swapping.
On my almost-stock (primarily used for iOS deployment) MBP, Catalina uses 3GB RAM (active+wired) memory on boot. It's much more than my Linux laptop (~400MB). I haven't booted Win10 recently but I'd assume it'd be close to macOS.
The transfer speeds of the M1 SSDs have been benched at 2.7GB/s - about the same speed as mid-range NVMe SSDs (my ADATA SX8200 Pro and Sabrent Rocket are both faster and go for about $120/TB).
I’m guessing swapping happens quicker, perhaps due to the unified memory architecture. With quicker swapping you’d be less likely to notice a delay.
That said I’d still be very hesitant buying a 8 GB M1 Mac. When my iMac only had 8 GB it was a real pain to use. Increasing my iMac’s memory to 24 GB made it usable.
The bottleneck for swap in/out must be the SSD, not memory. Also, its SSD isn't fast compared to other NVMe SSDs in either throughput or IOPS. Possibly latency is great thanks to the SSD controller being integrated into the M1, but I don't think it changes the game.
When I looked into this, the information I found suggested that modern consumer SSDs generally have more than enough write cycles to spare for any plausible use case. Possibly this was more of an issue five to ten years ago.
That's my take as well; I have a fairly modern Macbook, it's just fine, it's just that the software I run on it is far from ideal.
IntelliJ has a ton of features but it's pretty heavyweight because of them; I'd like a properly built native IDE with speed of user experience at the forefront. That's what I loved about Sublime Text. But I also like an IDE for all the help it gives me, things that the alternative editors don't do yet.
I've used VS Code for a while as well, it's faster than Atom at least but it's still web technology which feels like a TON of overhead that no amount of hardware can compensate for.
I've heard of Nova, but apparently I installed it and forgot to actually work with the trial so I have no clue how well it works. I also doubt it works for my particular needs, which intellij doesn't have a problem with (old codebase with >10K LOC files of php 5.2 code and tons of bad JS, new codebase with Go and Typescript / React).
If you want a faster IntelliJ experience, you can try disabling built-in plugins and features you don’t need; Power Save mode is one quick way to disable heavyweight features.
I'm wondering if the "optimized for reference-counting" thing applies to other languages too. i.e. if I write a piece of software in Rust, and I make use of Rc<>, will Macs be extra tolerant of that overhead? In theory it seems like the answer should be yes
I sure hope so. In macOS 10.15, the fast path for a retain on a (non-tagged-pointer) Obj-C object does a C11 relaxed atomic load followed by a C11 relaxed compare-and-exchange. This seems pretty standard for retain-release and I'd expect Rust's Rc<> to be doing something similar. It's possible Apple added some other black magic to the runtime in 10.16 (and they haven't released the 10.16 objc sources yet) but it's hard to imagine what they'd do that makes more sense than just optimizing for relaxed atomic operations.
I didn't understand why the implementation wouldn't just do an atomic increment, but I guess Obj-C semantics provide too much magic to permit such a simple approach. The actual code, in addition to [presumably] not being inlined, does not seem easy to optimize at the hardware level: https://github.com/apple/swift-corelibs-foundation/blob/main...
The short answer for why it can’t just be an increment is because the reference count is stored in a subset of the bits of the isa pointer, and when the reference count grows too large it has to overflow into a separate sidetable. So it does separate load and CAS operations in order to implement this overflow behavior.
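(A toy model of that scheme -- not Apple's actual bit layout or code, and using ManagedAtomic from the swift-atomics package as a stand-in for the runtime's C11 atomics -- might look like this: a relaxed load, then a compare-and-exchange loop, spilling to a side table once the inline bits saturate.)

    import Atomics   // assumes the swift-atomics package

    let rcShift: UInt = 45                       // made-up position of the inline count
    let rcOne: UInt   = 1 << rcShift
    let rcMask: UInt  = ~UInt(0) << rcShift

    var sideTable: [UInt: UInt] = [:]            // address -> overflow count (unsynchronized toy)

    func retain(_ header: ManagedAtomic<UInt>, address: UInt) {
        var old = header.load(ordering: .relaxed)            // relaxed load
        while true {
            if (old & rcMask) == rcMask {
                sideTable[address, default: 0] += 1          // inline count saturated: spill over
                return
            }
            let (exchanged, original) = header.compareExchange(
                expected: old,
                desired: old &+ rcOne,                        // bump the count in the high bits
                ordering: .relaxed)                           // relaxed compare-and-exchange
            if exchanged { return }
            old = original                                    // lost the race; retry with fresh value
        }
    }

The side table here is deliberately naive (no locking); the point is only why the fast path is a load plus CAS rather than a single atomic increment.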
No, because Rc<> isn't atomic. Arc<>, however, would get the benefit. The reason retain/release in ObjC/Swift are so much faster here is because they are atomic operations.
Yes, it applies to everything which uses atomics and is not something special in the runtime. It's also worth noting that this is an optimization that iphones have had for the last few years ever since those switched to arm64.
If anything, an Objective-C machine: they apparently have a special branch predictor for objc_msgSend()!
But they learned the lesson from SOAR (Smalltalk On A RISC) and did not follow the example of LISP machines, Smalltalk machines, Java machines, Rekursiv, etc. and build specialized hardware and instructions. The benefits of the specialization are much, much less than expected, and the costs of being custom very high.
Instead, they gave the machine large caches and tweaked a few general purpose features so they work particularly well for their preferred workloads.
I wonder if they made trap on overflow after arithmetic fast.
Remember, ARM is an instruction set, i.e. an interface, not an implementation. Arm Holdings does license their tech to other companies though, so I don't know how much Apple silicon would have in common with, say, a Qualcomm CPU. They may be totally different under the hood.
Basically reference counting requires grabbing a number from memory in one step, then increasing or decreasing it and storing it in a second step.
This is two operations and in-between the two -- if and only if the respective memory location is shared between multiple cores or caches -- some form of synchronization must occur (like locking a bank account so you can't double draft on two ATMs simultaneously).
Now the way this is implemented varies a bit.
Apple controls most of the hardware ecosystem, programming languages, binary interface, and so on meaning there is opportunity for them to either implement or supplement ARM synchronization or atomicity primitives with their own optimizations.
There is nothing really preventing Intel from improving here as well -- it is just easier on ARM because the ISA has different assumptions baked in, and Apple controls everything upstream, such as the compiler implementations.
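A sketch of that two-step "grab, modify, store" collapsed into a single atomic read-modify-write, in Rust. The comment about instruction lowering is my understanding of ARMv8.1's LSE extension (which recent Apple cores support), not a claim about what Apple specifically changed:

    use std::sync::atomic::{AtomicUsize, Ordering};

    // On older ARM cores this lowers to an LDXR/STXR retry loop; with the
    // ARMv8.1 LSE atomics the compiler can emit a single LDADD-style
    // instruction instead, and how cheap that is in practice depends on how
    // the cache/coherency fabric handles it.
    fn add_ref(count: &AtomicUsize) -> usize {
        count.fetch_add(1, Ordering::Relaxed)
    }

    fn release_ref(count: &AtomicUsize) -> bool {
        // Returns true when this was the last reference.
        count.fetch_sub(1, Ordering::Release) == 1
    }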
I know it's a fast Arm CPU - I've read the Anandtech analysis etc - and that there is lots of extra hardware on the SoC. But the specific point was why is it a Swift machine. What makes it particularly suited to running Swift?
2. low memory latencies between cache and system memory (so dirty pages in caches are updated faster, etc.)
3. potentially a coherency implementation optimized for this kind of atomic access (purely speculative: e.g. maybe sometimes updating less than a full page in a cache when detecting certain atomic operations that changed a page, or maybe tweaks to the window marked for exclusive access in the context of ll/sc operations and similar)
Given that it's common for an object to stay in the same thread, I'm not sure how much 2. matters for this point (but it does matter for general perf.). But I guess there is a lot in 3., where especially with low-latency RAM you might be able to improve performance for these cases.
These are interesting points. I'd like to hazard a guess that the leading contributor is cache-related. Just looking at the https://en.wikipedia.org/wiki/Reference_counting suggests as much: "Not only do the operations take time, but they damage cache performance and can lead to pipeline bubbles."
I roughly understand how refcounting causes extra damage to cache coherency: anywhere that a refcount increment is required, you mutate the count on an object before you use it, and then decrement it later. Often times, those counting operations are temporally distant from the time that you access the object contents.
I do not really understand the "pipeline bubbles" part, and am curious if someone can elaborate.
Reading on in the wiki page, they talk about weak references (completely different than weak memory ordering referenced above). This reminds me that Cocoa has been making ever more liberal use of weak references over the years, and a lot of iOS code I see overuses them, particularly in blocks. I last looked at the objc implementation years ago, but it was some thread safe LLVM hash map split 8 or 16 ways to reduce lock contention. My takeaway was roughly, "wow that looks expensive". So while weak refs are supposed to be used judiciously, and might only represent 1% or less of all refs, they might each cost over 100x, and then I could imagine all of your points could be significant contributors.
In other words, weak references widen the scope of this guessing game from just "what chip changes improve refcounting" to "what chip changes improve parallelized, thread safe hash maps."
The "pipeline bubbles" remark refers to the decoding unit of a processor needing to insert no-ops into the stream of a processing unit while it waits for some other value to become available (another processing unit is using it). For example, say you need to release some memory in a GC language, you would just drop the reference while the pipeline runs at full speed (leave it for the garbage collector to figure out). In an refcount situation, you need to decrease the refcount. Since more than one processing unit might be incrementing and decrementing this refcount at the same time, this can lead to a hot spot in memory where one processing unit has to bubble for a number of clock cycles until the other has finished updating it. If each refcount modify takes 8 clock cycles, then refcounting can never update the same value at more than once per 8 cycles. In extreme situations, the decoder might bubble all processing units except one while that refcount is updated.
For the last few decades the industry has generally believed that GC lets code run faster, although it has drawbacks in terms of being wasteful with memory and unsuitable for hard-realtime code. Refcounting has been thought inferior, although it hasn't stopped the Python folks and others from being successful with it. It sounds like Apple uses refcounting as well and has found a way to improve refcounting speed, which usually means some sort of specific silicon improvement.
I'd speculate that moving system memory on-chip wasn't just for fewer chips, but also for decreasing memory latency. Decreasing memory latency by having a cpu cache is good, but making all of ram have less latency is arguably better. They may have solved refcounting hot spots by lowering latency for all of ram.
From Apple's site:
"M1 also features our unified memory architecture, or UMA. M1 unifies its high-bandwidth, low-latency memory into a single pool within a custom package. As a result, all of the technologies in the SoC can access the same data without copying it between multiple pools of memory." That is paired with a diagram that shows the cache hanging off the fabric, not the CPU.
That says to me that, similar to how traditionally the cpu and graphics card could access main memory, now they have turned the cache from a cpu-only resource into a shared resource just like main memory. I wonder if the GPU can now update refcounts directly in the cache? Is that a thing that would be useful?
Extremely low memory latency is another. It also has 8 memory channels; most desktops have 2. It's an aggressive design - Anandtech has a deep dive. Some of the highlights: lower-latency cache, larger reorder buffer, more in-flight memory operations, etc.
Typical desktops have 2 64-bit DIMMs, either as 2 channels (64 bits wide each) or 1 channel (128 bits wide).
The M1 Macs seem to be 8 channels x 16 bits, which is the same total bandwidth as a desktop (although running the RAM at 4266 MHz is much higher than usual). The big win is you can have 8 cache misses in flight instead of 2. With 8 CPU cores, 8 GPU cores, and 16 Neural Engine cores, I suspect the M1 has more in-flight cache misses than most.
The DDR4 bus is 64-bit, how can you have a 128-bit channel??
Single channel DDR4 is still 64-bit, it's only using half of the bandwidth the CPU supports. This is why everyone is perpetually angry at laptop makers that leave an unfilled SODIMM slot or (much worse) use soldered RAM in single-channel.
> The big win is you can have 8 cache misses in flight instead of 2
Only if your cache line is that small (16 bit) I think? Which might have downsides of its own.
> The DDR4 bus is 64-bit, how can you have a 128-bit channel??
Less familiar with the normal on laptops, but most desktop chips from AMD and Intel have two 64 bit channels.
> Which might have downsides of its own.
Typically for each channel you send an address (a row and a column, actually), wait for the DRAM latency, and then get a burst of transfers (one per bus cycle) of the result. So for a 16-bit-wide channel @ 3.2 GHz with a 128-byte cache line, you get 64 transfers, one every 0.3125 ns, for a total of 20 ns.
Each channel operates independently, so multiple channels can each have a cache miss in flight. Otherwise nobody would bother with independent channels and just stripe them all together.
Here's a graph of cache line throughput vs number of threads.
So with 1 and 2 threads you see an increase in throughput - the multiple channels are helping. 4 threads is the same as 2; maybe the L2 cache has a bottleneck. But 8 threads is clearly better than 4.
It's pretty common for hardware to support both. On the Zen1 Epycs, for instance, some software preferred the consistent latency of striped memory over the NUMA-aware setup with separate channels, where the closer DIMMs have lower latency and the further DIMMs have higher.
I've seen similar on Intel servers, but not recently. This isn't typically something you can do at runtime though, just at boot time, at least as far as I've seen.
But doesn't that only help if you have parallel threads doing independent 16 bit requests? If you're accessing a 64 bit value, wouldn't it still need to occupy four channels?
Depends. Cache lines are typically 64-128 bytes long, and depending on various factors a line might map to one memory channel or be spread across multiple memory channels, somewhat like a RAID-0 disk. I've seen servers (Opterons, I believe) that would allow mapping memory per channel or striped across channels based on settings in the BIOS. Generally non-NUMA-aware OSs ran better with striped memory and NUMA-aware OSs ran better non-striped.
So striping a cache line across multiple channels does increase bandwidth, but not by much. If the DRAM latency is 70 ns (not uncommon) and your memory is running at 3.2 GHz on a single 64-bit-wide channel, you get 128 bytes in 16 transfers. 16 transfers at 3.2 GHz = 5 ns. So you get a cache line back in 75 ns. With 2 64-bit channels you can get 2 cache lines per 75 ns.
Now with a 128-bit-wide channel (twice the bandwidth) you wait 70 ns and then get 8 transfers @ 3.2 GHz = 2.5 ns. So you get a cache line back in 72.5 ns. Clearly not a big difference.
So the question becomes: for a complicated OS with a ton of cores, do you want one cache line per 72.5 ns (the striped config) or two cache lines per 75 ns (the non-striped config)?
In the 16-bit, 8-channel case (assuming the same bus speed and latency) you get 8 cache lines per 90 ns. I'm not sure what magic Apple has, but I'm seeing very low memory latencies on the M1, on the order of 33 ns! With all cores busy I'm seeing throughput of about one cache line per 11 ns.
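The arithmetic above, written out as a tiny calculation so the channel-width trade-off is easy to play with. The 70 ns latency and 3.2 GT/s transfer rate are the assumed figures from the comments above, not measured values; the M1's ~33 ns / ~11 ns numbers are the commenter's measurements and aren't modelled here.

    // Back-of-the-envelope cache line fetch time: DRAM latency plus the
    // burst of transfers needed to move one line over one channel.
    fn cacheline_time_ns(latency_ns: f64, channel_bits: u32, line_bytes: u32,
                         transfer_rate_gts: f64) -> f64 {
        let transfers = (line_bytes * 8 / channel_bits) as f64;
        latency_ns + transfers / transfer_rate_gts
    }

    fn main() {
        // One 64-bit channel, 128-byte line: ~75 ns, one miss in flight.
        println!("{:.1}", cacheline_time_ns(70.0, 64, 128, 3.2));
        // Two 64-bit channels striped into one 128-bit channel: ~72.5 ns.
        println!("{:.1}", cacheline_time_ns(70.0, 128, 128, 3.2));
        // Eight independent 16-bit channels: ~90 ns each, but 8 misses in flight.
        println!("{:.1}", cacheline_time_ns(70.0, 16, 128, 3.2));
    }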
I believe modern superscalar architectures can run instructions out of order if they don't rely on the same data, so when paused waiting for a cache miss, the processor can read ahead in the code, and potentially find other memory to prefetch. I may be wrong about the specifics, but these are the types of tricks that modern CPUs employ to achieve higher speed.
Sure, but generally a cacheline miss will quickly stall, sure you might have a few non-dependent instructions in the pipeline, but running a CPU at 3+GHz and waiting 70ns is an eternity. Doubly so when you can execute multiple instructions per cycle.
That's true but for it to be a "swift machine" as mentioned above it would imply some kind of isa level design choices, as opposed to "just" being extremely wide or having a branch predictor that understands what my favourite food is
This is where it's worth pointing out that Apple is an ARM architecture licensee. They're not using a design from ARM directly, they're basically modifying it however it suits them.
Indeed, they’re an ISA licensee, and I don’t think they’re using designs from ARM at all. They beat ARM to the first ARM64 core back in 2013 with the iPhone 5s.
I don't think this applies to good software. Nobody will retain/release something in a tight loop. And typical retain/releases don't consume much time. Of course it improves metrics like any other micro-optimization, so it's good to have it, but that's about it.
Taking that as true for a moment, I wonder what other programming languages get a benefit from Apple's silicon then? PHP et al. use reference counting too, do they get a free win, or is there something particular about Obj-C and Swift?
Android phones are built on managed code, but PCs are built mostly on C/C++ (almost all productivity apps, browsers, games, the operating system itself). And the only GC'd code most people run is garbage collected on Apple too - it's JavaScript on the web.
I'm not familiar with macOS - are the apps there mostly managed code? Even if they were, and even if refcounting on a Mac is that much faster than refcounting on a PC, refcounted code would still lose to manual memory management on average.
It is a lot of atomic +1 and -1, meaning possible thread contention, meaning that no matter how many cores your hardware has you have a worst case scenario where all your atomic reference counted objects have to be serialized, slowing everything down. I do not know how ObjectiveC/Swift deals with this normally, but making that operation as fast as possible on the hardware can have huge implications in real life, as evidenced by the new Macs.
It's a lot of +1/-1 on atomic variables guarded by atomic memory operations (mainly with Acquire/Release ordering) on memory which might be shared between threads.
So low latency between cache and system RAM can help here, at least for cases where the Rc is shared between threads - but also when the object is not shared between threads but the thread is moved to a different CPU. Still, it's probably not the main reason.
Given how atomics (might) be implemented on ARM, and that the cache and memory are on the same chip, my main guess is that they did some optimizations in the coherency protocol/implementation (which keeps the memory between caches and the system memory/RAM coherent). I believe there is a bit of potential to optimize for RC, i.e. to make that usage pattern of atomics fast. Lastly, they probably take special care that the atomic instructions used by Rc are implemented as efficiently as possible (mostly fetch_add/fetch_sub).
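For reference, here's roughly the ordering scheme an Arc-style shared pointer uses for those fetch_add/fetch_sub calls, sketched in Rust. This is the standard textbook pattern, not anything Apple-specific: incrementing can be Relaxed because holding a reference already proves the object is alive, while the final decrement needs Release plus an Acquire fence so the thread that frees the object sees every write made by the other threads that used it.

    use std::sync::atomic::{fence, AtomicUsize, Ordering};

    fn clone_ref(count: &AtomicUsize) {
        // Bumping the count needs no ordering guarantees of its own.
        count.fetch_add(1, Ordering::Relaxed);
    }

    fn drop_ref(count: &AtomicUsize, free: impl FnOnce()) {
        if count.fetch_sub(1, Ordering::Release) == 1 {
            // Last reference: synchronize with every earlier Release, then free.
            fence(Ordering::Acquire);
            free();
        }
    }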
Just to be clear, the RAM/memory and cache are not on the same chip/die/silicon. They are part of the same packaging though.
> which keeps the memory between caches and the system memory/RAM coherent
Isn't this already true of every multi-core chip ever designed; the whole point of coherency is to keep the RAM/memory coherent between all the cores and their caches.
> Isn't this already true of every multi-core chip ever designed;
Yes, I just added the explanation of what coherency is in this context as I'm not sure how common the knowledge about it is.
The thing is there are many ways how you can implement this (and related things) with a number of parameters involved which probably can be tuned to optimize for typical RC's usage of atomic operations. (Edit: Just to be clear there are constraints on the implementation imposed by it being ARM compatible.)
A related example (not directly atomic fetch_add/fetch_sub, and not directly coherency either) would be the way LL/SC operations are implemented. On ARM you have a parameter for how large the memory region "marked for exclusive access" (by an LL load) is. This can have major performance implications, as it directly affects how likely a conditional store is to fail because of accidental interference.
At the hardware level, does this mean they have a much faster TLB than competing CPU's, perhaps optimized to patterns in which NSObjects are allocated? Speaking of which, does Apple use a custom malloc or one of the popular implementations like C malloc, tcmalloc, jemalloc, etc.?
I don't think this really makes sense. How many of the benchmarks that people have been running are written in Objective-C? They're mostly hardcore graphics and maths workloads that won't be retaining and releasing many NSObjects.
Agreed. I think it's typical of cargo-culting: explanations don't need to make sense, it's all about the breathless enthusiasm.
Look, want to know how the M1 achieves its results? Easy. Apple is first with a 5nm chip. Look at the past: every CPU maker gains both speed and power efficiency when going down a manufacturing node.
Intel CPUs are still on a 14nm node (although they call it 14nm+++) while Apple's M1 is now at 5nm. According to this [1] chart, that's at least a 4x increase in transistor density.
Not saying Apple has no CPU design chops, They've been at it for their phones for quite a while. But people are just ignoring the elephant in the room: Apple gives TSMC a pile of cash to be exclusive for mass production on their latest 5nm tech.
The bit about reference counting being the reason that Macs and iOS devices get better performance with less ram makes no sense. As a memory management strategy, reference counting will always use more ram because a reference count must be stored with every object in the system. Storing all of those reference counts requires memory.
A reference counting strategy would be more efficient in processor utilization compared to garbage collection as it does not need to perform processor intensive sweeps through memory identifying unreferenced objects. So reference counting trades memory for processor cycles.
It is not true that garbage collection requires more ram to achieve equivalent performance. It is in fact the opposite. For programs with identical object allocations, a GC based system would require less memory, but would burn more CPU cycles.
“A reference counting strategy would be more efficient in processor utilization compared to garbage collection as it does not need to perform processor intensive sweeps through memory identifying unreferenced objects. So reference counting trades memory for processor cycles.”
I think it’s the reverse.
Firstly, garbage collection (GC) doesn’t identify unreferenced objects, it identifies referenced objects (GC doesn’t collect garbage). That’s not just phrasing things differently, as it means that the amount of garbage isn’t a big factor in the time spent in garbage collection. That’s what makes GC (relatively) competitive, execution-time wise. However, it isn’t competitive in memory usage. There, consensus is that you need more memory for the same performance (https://people.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf: with five times as much memory, an Appel-style generational collector with a non-copying mature space matches the performance of reachability-based explicit memory management. With only three times as much memory, the collector runs on average 17% slower than explicit memory management)
(That also explains why iPhones can do with so much less memory than phones running Android)
Secondly, the textbook implementation of reference counting (RC) in a multi-processor system is inefficient because modifying reference counts requires expensive atomic instructions.
So, reference counting gets better memory usage at the price of more atomic operations = less speed.
That last PDF describes a technique that doubles the speed of RC operations, decreasing that overhead to about 20-25%.
It wouldn’t surprise me if these new ARM macs use a similar technique to speed up RC operations.
It might also help that the memory model of ARM is weaker than that of x64, but I’m not sure that’s much of an advantage for keeping reference counts in sync across cores.
> reference counting will always use more ram because a reference count must be stored
True, reference counting stores references… but garbage collection stores garbage, which is typically bigger than references :)
(Unless you’re thinking of a language where the GC gets run after every instruction - but I’m not aware of any that do that, all the ones I know of run periodically which gives garbage time to build up)
No, many garbage collection approaches WILL require more RAM; some need twice as much RAM to run efficiently. Then there is the fact that garbage collection can have a delay which lets garbage pile up, using more memory than necessary. The retain-release used by Apple is not as CPU-efficient, but you reclaim memory faster. https://www.quora.com/Why-does-Garbage-Collection-take-a-lot...
I think it's more of a design pattern you see with GC'd languages, where people instantiate objects with pretty much every action and then let the GC handle the mess afterward. Every function call involves first creating a parameters object, populating it, then forgetting about it immediately afterward.
I've seen this with java where the memory usage graph looks like a sawtooth, with 100s of MB being allocated and then freed up a couple of seconds later.
Isn't it the case with Java that it will do this because you do have the memory to spend on it? Generally this "handling the mess afterward" involves some kind of nursery or early generation these days, but their size may be use-case-dependent. If tuned for a 8/16 GB environment, presumably the "sawtooth" wouldn't need to be as tall.
Slower in what metrics? Latency? Throughput? Not to mention that the behavior may strongly depend on the GC design and the HW platform in question. It seems far too difficult to make a blanket statement about what is and isn't achievable in a specific use case.
> reference counting will always use more ram because a reference count must be stored with every object in the system
In the tracing GCs I have seen, an "object header" must be stored with every object in the system; the GC needs it to know which parts of the object are references which should be traced. So while reference counting needs extra space to store the reference count, tracing GC needs extra space to store the object header.
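To make the per-object bookkeeping concrete, here's a rough Rust-flavoured sketch of the two layouts. The field names are made up for illustration and neither matches any particular runtime exactly; the point is just that both schemes pay at least a word per object.

    use std::cell::Cell;

    // Refcounted object: the count lives next to the payload.
    struct RcBox<T> {
        strong: Cell<usize>, // the reference count
        value: T,
    }

    // Traced object: a header the collector uses to find the references
    // inside the object (type/shape info, mark bits, etc.).
    struct GcBox<T> {
        header: usize, // type id / mark bits for the tracer
        value: T,
    }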
I understand the machine is great or going to be great for most use cases. My mbp is my main workhorse, but as a freelance SRE "devops" guy, the Apple ARM platform won't be suitable for my job any time soon, if ever.
Docker is not yet available - but even when it would become available, emulating virtualised x86 code is explicitly not going to be supported. That in many cases means pulling a docker image built in a ci/cd pipeline where a dev screwed something up and debugging it locally is no longer an option. If I wasn't freelance, I could probably get away with some cloud instance to run all my docker stuff, but I'm dealing with too many different environments, for clients with various different legal requirements making this simply 'not an option'.
Too bad, because the machines look very promising for everything else. Development tools aren't there yet, but I expect that to be fixed pretty quickly.
FWIW, I never run docker on my local machine (I develop on a remote machine), benefits: remote machine os + setup is very close to production and GBit bandwidth up and down at my hoster is so much nicer when working with Docker images.
> If I wasn't freelance, I could probably get away with some cloud instance to run all my docker stuff, but I'm dealing with too many different environments, for clients with various different legal requirements making this simply 'not an option'.
While not the exact same reasons as GP, I also need to be able to do this locally.
Even with legal restrictions: just put an Ubuntu server at home and ssh to it. Then you wouldn't have the GBit connection but still better than using Docker on a non-production OS.
I'd argue there's some benefit to being able to code on a train. While Internet connectivity has grown with tethering, it's just nice sometimes to not need to be connected to do your work. That's my opinion, anyway.
I don't think this is an issue for Docker. They can run an ARM64 Linux VM instead of the current x86 one, and then use QEMU to run x86 Docker containers within it if they want.
The bummer is that this won't be taking advantage of Rosetta 2, so it'll likely perform badly, but it might be good enough for debugging the odd image, or even development, depending on _how_ bad.
> I don't think this is an issue for Docker. They can run an ARM64 Linux VM instead of the current x86 one, and then use QEMU to run x86 Docker containers within it if they want.
If you're at the point where you have to run a container inside an emulator inside a VM on an ARM Mac, then you should just get an x86 Linux machine and enjoy the performance and native support for containers.
Are you just talking about using an ARM64 container as your root container? That kind of breaks the point of Docker if you're running on x86 in the cloud, doesn't it? You're no longer developing in your exact deployment environment, which is one of the key benefits of Docker.
There is no hardware virtualization here because it's emulating a different architecture.
Rosetta 2 is able to do this entirely in software with very impressive performance but I doubt QEMU will be in the same league.
That said, if you can get most of your containers in native ARM (you can already run k8s on ARM, for instance), it might be a valid escape hatch for the odd image that hasn't updated yet.
Docker has QEMU built in. You can run ARM docker images on x86, and I believe you can run x86 docker images on ARM.
This is not very widely talked about, I stumbled across it by accident: I was working on an x86 emulator on my Chromebook, and was going back and forth between the ARM Chromebook and an x86 laptop. I was working within docker on both machines. At some point I was running a test binary saying "this is an ARM-binary" and forgot to run it using the emulator - but it still executed directly on the x86 machine. It was very confusing and took me a while to figure why my x86 cpu was executing this little static ARM binary just fine - QEMU inside Docker.
Linus Torvalds was, and probably still is, right that unless developers have the same breed of processor on the bench (not just AArch64 but the actual chipset, for example), it'll probably never be particularly prevalent server-side.
That, and Apple presumably dream of not letting developers touch anything without going through their stack, so no touching the hardware for you. For example, I believe Apple exposes performance counters through Instruments in Xcode - though without a Mac to test on, I should say - and it doesn't seem to be close to perf. The wider point being that Apple will probably never let you run Linux on their hardware, and your server will probably not be running macOS either.
I suspect the previous poster was referring to non-Apple ARM hardware, not Apple servers.
> The wider point being that Apple will probably never let you run Linux on their hardware
Apple has already issued docs on how to load alternative OSs on their system and has said explicitly that Windows support is up to Microsoft. Linux on Mac metal is not out of the question, but it's going to take some time to get running well.
From the sounds of it, Apple hasn't put anything in the way of installing Linux or Windows on the M1 based Macs. They don't have the sort of built in support the Intel based Macs have, but they should be able to boot Linux or Windows on bare metal.
Recently there was an Apple support article posted here on HN detailing it.
Making Linux work on an ARM SoC is a lot harder than on a regular laptop. It basically requires help from the vendor to make the peripherals and such work. There will be a lot of eyes on this, though, so I'd be surprised if someone hasn't got it working within a year or so (working is not equal to usable).
> Making Linux work on an ARM SOC is a lot harder than on a regular laptop.
Yep. It's going to be a while, and it'll be pretty rudimentary for some time. I think Linux on the Mac mini will be viable well before it's interesting on the MacBook.
> The wider point being that Apple will probably never let you run Linux on their hardware, and your server will probably not be running MacOS either.
I agree the idea of running Apple Silicon in the cloud seems far-fetched, but at the same time, if Apple actually does achieve the best price/performance processors in the world, it almost seems like a failure of the market if they do not also serve the cloud market.
Unless AWS starts selling their Graviton CPUs, who's going to create chips for the ARM server market? The volumes are huge and the margins are tiny, which is not something Apple was ever interested in.
For a myriad of reasons, I don't think Apple would ever want to sell their chips standalone.
OTOH, Apple could consider releasing a line of Apple silicon cloud servers.
Even if more expensive than x86 servers, they could eventually become VERY competitive if they are more power efficient, which is a real issue for cloud providers. Any upfront cost would pay off if, in the long run, the server uses e.g. 50% less energy than x86 alternatives for the same bang.
Now, the thing is that the typical cloud or datacenter machine today is more of a custom-built piece of witchcraft than an off-the-shelf blade. So, meh, maybe not. But an ARM-based cloud, Apple or not, sounds like the way to go, at least for the long term.
> [...] the Apple ARM platform won't be suitable for my job any time soon, if ever.
> Docker is not yet available - but even when it would become available, emulating virtualised x86 code is explicitly not going to be supported.
Is there any reason why Apple couldn’t add support for emulated virtualized x86 code in a future ARM CPU? The M1 doesn’t support it, but might the “M2” or “M3” support it?
I ask because I’m in the same situation as you, where not being able to run x86 Docker containers would make me not buy an ARM MacBook Pro.
> A task like editing 8K RAW RED video file that might have taken a $5000 machine before can now be done on a $699 Mac Mini M1 or a fan-less MacBook Air that costs $999
That’s insanely great. Maybe I am exaggerating but Apple’s M1 might be the best innovation in the tech industry in the past 5 years.
I did this test last night with my buddy's 6K RED RAW footage. I could play 6K realtime at 1/4 quality, while my fully loaded MBP with i7 + 32GB of ram could only playback at 1/8 quality. Keep in mind real editors NEVER edit raw footage, they use proxies.
The really impressive part was that the mac mini did NOT spin up the fan during playback, completely silent. The i7 Macbook Pro sounded like a jet turbine spinning up within 30 seconds. Awesome.
> It's very easy to perform very well on a specific benchmark when there's dedicated hardware for this.
I'd say the same about GPUs as well. It's great for people who game, but beyond a certain baseline, it's pretty pointless for most of us.
> I'm not saying this isn't good, it's great for people editing, but this isn't a general indicator of performance
Umm, it's also great for people who watch YouTube or Netflix. Zoom calls use the encoder and the decoder. Fundamentally, modern computers do a crapload of video (and audio!) decoding and encoding. Arguably for most of us, it is more important than having a high performance GPU.
This is all trebly important when you are using Netflix, YouTube, HBO Max, Zoom, Skype, etc. on battery, where the specialized encode/decode hardware uses about 1/3 the power of the CPU.
GPUs, unlike video decoders, are programmable and general in purpose.
As for Zoom, YouTube, and Netflix, existing hardware is more than fine enough. No one is streaming 8K RAW for a conference call. Unless you're an editor, you won't see much of a benefit.
> As for Zoom, YouTube, and Netflix, existing hardware is more than fine enough.
Didn't suggest these were jobs existing CPUs struggle with. I said the video encode/ decode makes the CPU much more efficient which increases battery life.
But existing CPUs already use accelerated decode for these tasks. They have been for years and years. Those hardware decode blocks just aren't powerful enough for 6K RAW video, but they are fine for YouTube, Netflix, and Zoom, and indeed it's already accelerated.
If there were no benefit, Apple wouldn't be able to decode 8K video with a low end Mac mini. People wouldn't be seeing vastly better battery life when viewing videos and using Zoom.
8K video decode isn't very useful when you can barely drive an 8K display, and certainly not for the average consumer.
As for wonderful battery life with Zoom, I am fairly certain that this is because Intel CPUs have a bad process that cripples their video decode performance.
The correct comparison would be with the 7nm or imminent 5nm Renoir APUs that have accelerated decode on an actually good process. Which is what you should compare M1 against, anyway.
But sure, if you want to compare them against obsolete Intel chips, you can, and you'll find improved battery life. It's just not a logical comparison, as Intel isn't the competition to M1 chips, the competition is AMD. And AMD does have high efficiency accelerated video decode on their laptop chips, and it also even supports 8K decoding, though it's almost useless. It is less useless than on an M1 computer though, because at least then you have enough I/O to actually run an 8K screen.
> 8K video decode isn't very useful when you can barely drive an 8K display, and certainly not for the average consumer.
You are talking in circles. This same exact hardware is used to decode lower resolution video. Benefits reach down to 6k, 4k, 2k, 1080p, 720p, etc etc. Any encoding you do.
> The correct comparison would be with the 7nm or imminent 5nm Renoir APUs that have accelerated decode on an actually good process. Which is what you should compare M1 against, anyway.
How exactly do you compare an unreleased product to an actual shipping one? Do we go to the land of hypothetical benchmarks where you just make up numbers for the unshipped product?
> And AMD does have high efficiency accelerated video decode on their laptop chips, and it also even supports 8K decoding, though it's almost useless.
Please share some details on these AMD based $699 systems which can edit 8k video. No-one is claiming you can't edit 8k video on other systems. The entire point is that you can do this on the cheapest system in Apple's lineup.
The 7nm Renoir APUs already came out. As for the 5nm, we don't have them yet even though they should release in a few months, but we have processors of the same architecture on a different process.
The M1 Mac Mini can only edit 8K video at a fairly low quality if you use the accelerated encode. If you're actually going to be doing real editing, you're going to only be using the 8K decode, and for that you can look at literally any Renoir APU system.
The cheapest system with a Renoir APU capable of accelerated 8K decode is 340$, so half of the cost of the Mac Mini.
As for this :
>You are talking in circles. This same exact hardware is used to decode lower resolution video. Benefits reach down to 6k, 4k, 2k, 1080p, 720p, etc etc. Any encoding you do
It's only talking in circles if you ignore the rest of the comment. Accelerated encode and decode with similar architectural efficiency is already there. The main advantage Apple has here is that, for a few months, they have a more power efficient process.
As for encode, literally no one has a solid use case for a laptop and hardware encoding over 1080p. For streaming video, anywhere over 1080p is useless on a laptop, and for actual video encoding, no one uses embedded accelerated encode because it's inherently of lower quality.
But sure, if for some absurd reason you want to edit video directly in 8K and don't care about the abysmal rendering times at high quality, you can buy a $340 Renoir SBC, enable hardware decode in your favorite video editing software, and be on your merry way with accelerated real-time decode of 8K - as long as your video files are h264 or h265.
I think many folks' experience with Zoom would show the opposite. Perhaps it's not very efficient software, but battery life often tanks, computers get very warm, and fans start spinning.
Any improvement to that is a very welcome change, if you ask me.
That article is only looking at if the CPU is fast enough to keep up with an import. Basically a toy benchmark. You're going to be butting up against the memory limit in no time once you start actually editing.
That's nice. But what does CPU/GPU horse power have to do with memory?
If I want to spin up a bunch of VMs to do pre-commit test builds in clean environments, and each needs RAM for the OS and userland, being able to edit a lot of raw video does nothing for me. I'm generally fine running macOS (or Linux), but sometimes I need to boot up Windows in a VM for specialized apps: how do I assign >16GB of memory to it if I only have 8-16GB of RAM? Even with fast storage, I'm not thrilled that I may need swap.
> how do I assign >16GB of memory to it if I only have 8-16GB of RAM?
This is Apple's slowest/ lowest performance M series CPU.
Complaining that the CPU they built for the MacBook Air and the lowest end MacBook Pro doesn't have 32GB of RAM misses the entire picture. This is Apple's first and lowest end M series chip, and it's blowing away Intel chips with discrete GPUs and more RAM. Their higher end processors which will be coming out over the next couple years are likely to be much better... and will support 32GB of RAM. In fact since Apple is migrating the entire line-up, it's likely the next generation of CPUs will support discrete RAM so the Mac Pro can offer systems with massive amounts of RAM as the current Mac Pro does.
> I need to boot up Windows in a VM for specialized apps
Aside from getting ARM Windows running on the Mac hypervisor, Windows VMs seem pretty unlikely. Another possibility is someone porting or creating an x86 emulator to run on the hypervisor.
Aside from that, CrossOver by CodeWeavers or something like AWS WorkSpaces are your best bets.
> In fact since Apple is migrating the entire line-up, it's likely the next generation of CPUs will support discrete RAM
I've been wondering about how much of the general purpose performance boost of M1 is due to having the RAM in the same package. That has to have benefits in power and latency. So if a future Mx chip supports discrete RAM, it may not seem quite as magical anymore. Then again, Apple's volume and margin is high enough that they could just build a single package with lots of RAM. You wouldn't be able to tinker with it, but it's not like Apple cares about that.
Makes you wonder if AMD or Intel will come up with a similar package for x86-based laptops.
As far as I understand about chip design (not much), the fact that the memory is inside the same package allows Apple to do stuff that would never fly with unknown external memory.
They know the exact latencies and can distribute the memory between CPU and GPU as they please.
A loss in upgradeability is a huge gain in speed and reliability.
My bet is that the next M processor will just have more of everything. More cores and more built-in memory. Maybe the one for the (i)Mac Pro will have upgradeable memory on top of the built-in ones. All of the laptops will only have the on-package memory.
Apple makes it very clear in their materials that their unified memory is a very big part of their performance boost.
> So if a future Mx chip supports discrete RAM, it may not seem quite as magical anymore.
I agree, but I also doubt they will be making a Mac Pro SoC with huge amounts of RAM aboard either. I'm not sure how common such configurations are, but Apple supports up to a terabyte of RAM (maybe more). I could easily see SoCs with 64GB of RAM, but I'm struggling to see them putting 128 or 256GB+ on the SoC.
Maybe some kind of hybrid?
Very curious to see how they are going to work around this.
It would be interesting if Apple treated their off-chip RAM as a RAM disk. Could make for some intriguing possibilities. So you'd "swap" from the hot/ on chip RAM into slower GDDR RAM instead of to the SSD.
Why are we even talking about 8K video? It's something that almost no one needs, and even fewer people have to edit. My guess is most people are still happy editing their FHD videos - something that works well on a five-year-old laptop.
It’s used because it’s a consistent workload to ensure a fair comparison, and long enough to make sure the performance seen is not just burst. Someone who plays games, edits smaller videos or photos all day, uses heavy web apps, compiles code, etc., can apply the result of a large video render to their purchase decision even though their work doesn’t aggressively use the battery as fast as possible.
Hopefully that helps. In your original example, you cited someone editing FHD doing fine with a five year old laptop, and now we’ve talked about why larger formats are used, and why someone upgrading a laptop would look at a benchmark of an intensive process, even if they themself don’t plan to run that specific process.
We'll see about that (with respect to pixel density and efficiency). I'm typing this on a 4k xps 15, and while the display is great, the battery cost is extreme. There would be no meaningful advantage in an 8k-display of the same form factor, so there is far, far less incentive for manufacturers to race to 8k.
There will be 8K TVs, sure, but let's be real - the step to 1080p was massive; the step to 4K already couldn't fill those shoes.
The step to 1080p could, for some, even have been seen as a downgrade. It was possible to run 1600x1200 back in the late 90s and early 2000s with CRTs.
The concept of "high definition" was already familiar to PC users (gamers and professionals, that is).
4K is a nice upgrade, and I'd say many professionals were already using it with proper monitors.
I think diminishing returns will stop 8k from getting mass adoption.
Same thing happened to audio players with "better than CD quality". They never caught on because there was no need.
65" 4K TVs start at 74 Watts (max is 271 Watts). 65" 8K TVs start at 182 Watts and go all the way to 408 Watts. For what? An improvement you won't notice unless you get off the sofa?
You've been able to edit high-res footage (I haven't tried 8K, but 4K and 6K have been working fine) on commodity hardware that costs way less than $5000 for years, via editing software that supports GPU-accelerated operations (like DaVinci Resolve) - so do you still think it's the best innovation in the tech industry? Add to that the M1 being proprietary, developed for only one OS and one company, and it feels less like innovation for the industry and more like Apple-specific innovation.
Also, the argument that only a $5000 machine could edit high-res footage is false even without the invention of GPU editing. Proxy clips have been around for as long as I've done video editing; even though the experience is worse, it's been possible to edit at reduced resolutions for a long time.
PSA: however impressive the M1 hardware is, you're still going to be stuck using OSX, playing in Apple's walled garden and being subjected to their awful policies.
I'll gladly join the groupie crowd once Linux runs stable on it.
Apparently over the years the definition of "Walled Garden" has drifted a lot. The iPhone has a "Walled Garden", unless you jailbreak, it's very difficult to run anything outside the App Store.
My Mac? Almost nothing I run is from the App Store. Nothing needs to be from the App Store. Most of what I run doesn't even go through Gatekeeper and it certainly never touches software I build or compile myself.
If a "Walled Garden" can be disabled by bypassed by a single entry in your hosts file, by running from the command line, or any other number of ways, it's a damned short wall around that garden.
Lots of valid criticisms of MacOS, but it's nowhere in the ballpark of a walled garden.
> So, the "walled garden" is intended for the underlying hardware.
I don't think that is what he was saying at all.
Regardless.
There is nothing preventing Linux (or Windows) from booting on Mac M1 hardware. People almost certainly will have Linux running on Mac hardware before too long. It's just a slog getting it working well.
Not a walled garden in any traditional sense, just difficult to implement.
Apple Silicon supports local signing of whatever boot blob you like from in the recovery environment (in Permissive Security mode). This ensures the system boots the bits you yourself approved without applying any other requirements.
The other policies available are a) verify at install time the boot kernel is signed by Apple and is the most currently available version (default) or b) to verify at install time the boot kernel is signed by Apple without doing an online check (allows downgrades).
In all cases it's the same underlying mechanism that records what the system should be allowed to boot (according to whatever policy is in effect) and verifies at boot time it hasn't been tampered with. Big Sur takes that further and cryptographically verifies the entire system volume hasn't been tampered with (even offline).
You are correct that Apple does not provide drivers for other operating systems for M1 Macs.
> Linus has openly said he wants Linux running on one
Linus has been lusting after Apple hardware for a long time. He's also been exceedingly frustrated by the lack of drivers for most of that time.
8 years ago in an interview with Tech Crunch, he waxes poetic about the MacBook Air:
> "That said, I’m have to admit being a bit baffled by how nobody else seems to have done what Apple did with the Macbook Air – even several years after the first release, the other notebook vendors continue to push those ugly and clunky things. Yes, there are vendors that have tried to emulate it, but usually pretty badly. I don’t think I’m unusual in preferring my laptop to be thin and light."
I think they have been clear about boot camp not being supported, at least at present time. But they have also said that it is possible to run Windows, but that it will be up to Microsoft.
I like to think of HN as contrarian, not negative. Techies are cynical by nature because the promise and reality of technology are perpetually so far apart. And if your actual job in life is to anticipate where the problems are going to surface in technology, well... the longer you live with that, you develop commensurate expectations and compensatory mental models.
What software are you having trouble getting installed?
I've never had issues with getting things from inside or outside the App Store working. Usually it's just dragging the App into the Applications folder and answering a prompt. Sometimes there is an installer. (or I use Home-brew)
Maybe you are trying to install something from a developer who doesn't sign their code or doesn't have an Apple Developer ID? I'm curious what that might be.
Several of my virtualenvs are broken on Catalina - apparently the binaries installed from my requirements.txt are not signed for use on Catalina. I had to accept a number of dialog boxes by hand to make that work.
There have been others, too, beyond the virtualenvs, which I do not remember offhand.
It's just that the frog is being boiled quite slowly. Every year Apple takes tiny little steps toward full control of what customers can run on "their" computers, and people, instead of trying to stop them in their tracks, defend them. It's bizarre and funny at the same time. Then in a few years people will be crying that they cannot publish software without being robbed, and we will say "I told you so".
Seems like a giant processor architecture change with changes to driver models, app runtimes, and requiring translation to run any legacy software would have been a great time to do that if they ever planned to.
Customer shouldn't be required to mess up with the system files in order to gain full access to the product they paid for. Such restrictions should be illegal if they are not "opt in". Customers should have a right to install alternative app stores without the need of "hacking". We desperately need regulation to stop greedy, tax shy and privacy violating giants from exploiting the consumers.
> Customer shouldn't be required to mess up with the system files in order to gain full access to the product they paid for.
This comment is quite removed from the reality of using a Mac. I've never even considered bypassing Gatekeeper because it's never been in my way. But bypassing these Gatekeeper checks is comparable in difficulty to adding a second repo to Debian to install apps outside Debian's repo. Are you suggesting Debian is guilty of this too?
> I'll gladly join the groupie crowd once Linux runs stable on it.
There's still the issue with the keyboard having 7 fewer keys than a modern ThinkPad, and compared to the classic 7-row ThinkPad keyboard, 15 physical keys are missing. That's a considerable disadvantage. Mac laptops have superior displays, but we're using keyboard just as often.
I honestly can’t think of any Apple policy that restricts what I run on my Mac. I haven’t upgraded to Big Sur yet, and maybe I just don’t rely on anything yet that’s against their policies and will be mad when I hit that point. What concretely are the awful policies that are restrictive on Mac that you’re thinking of?
When a new "App" is installed on Big Sur (and I think this was on Catalina too), there is a check to see if the developer ID has been tagged as providing malware. When Big Sur launched, there was a glitch and the servers hung, leaving a lot of users in a bit of a lurch.
It's easy to disable the check and a lot of software never gets checked for other reasons.
By that logic, whenever I recommend a Thinkpad and I'm met with responses of "it doesn't have as good of a trackpad!", am I not allowed to say "use the keyboard shortcuts on Linux, they are better"?
I mean, one folk's bug is another folk's feature, right?
If I were to rearrange your response to fit the above exchange, it would be in the form of: "Learn to use MacOS instead. It's faster than using Linux and makes it much less important.". Couldn't I reply with "Why do you not want people to use MacOS?"
Oh, god. "Stuck" doesn't even come close to describing the joy of OSX. I've been "stuck" with OSX for 10 years, and every time I've tried the waters of Windows or Ubuntu again I came home crying.
Not perfect since Sierra? Yes.
Windows and Ubuntu not even close? Yes.
Personal opinion? Yes.
Has the hardware been crap for the last 2 years and made me consider moving back to Windows? F* yes!
But now with those M1 chips I'm already saving money for an Air (very very expensive in Brazil).
> I'll gladly join the groupie crowd once Linux runs stable on it.
Is there any hope that this will happen in a reasonable timeframe?
I'm very happy running Linux on my (rather old) MacBook Air, which is hardware-wise the best (not in performance but in comfort, durability, design) laptop I've used yet. I want to change because it's starting to show its limits now, but I haven't found yet something to replace it.
Isn't this mostly about driver support? I'm not a hardware guy, but it's hard to understand why it's so difficult to reverse engineer something like the touchpad driver. Isn't it just a matter of measuring the IO and reproducing it? I mean how has MS done this for bootcamp?
> Isn't it just a matter of measuring the IO and reproducing it?
No, you also have to understand it.
For a very simple example, suppose that, on initialization, the driver always sends 01 02 03 04 88 99 00 to the hardware, which then replies with 05 06 AA BB 01, and then the driver sends 07 08 11 22 05. What is the meaning of each of these bytes? What should the driver do if the hardware instead replied with 05 06 CC DD 07? Is the AA BB always the same for every device, or is it a calibration constant which it got from somewhere else? Is the last byte some kind of checksum, and if so, how to calculate it? And so on. Even for very simple hardware, reverse engineering the IO can be a lot of guesswork.
Apple was the one who provided and wrote those Bootcamp drivers for Windows, not Microsoft. That's how it normally works: the hardware vendor writes (or contracts, whatever) the drivers for the hardware they ship, not Microsoft. Windows by itself isn't much of an obstacle; it'll boot on just about any x86 device as long as the drivers are there...
It's mostly a matter of corporate/government politics more than technology. And those politics are not going to change unless serious political action (read: lawmaking) forces it to be so. It's in a similar vein as "right to repair" or whatever.
At the end of the day, if you don't have a datasheet and the vendor gives you the finger, you're always going to be running a rat race against them, struggling to support 10 year old hardware with free alternatives, and they'll always win because they control the playing field. Further analysis of this phenomena ("what is the root cause of this attitude, and why does our society allow it?") would require actually criticizing and analyzing software development in the grander context of workforce politics over the past, say, 40 to 50 years. Spoiler alert: doing this will probably make you depressed.
Eh, Nvidia shares a lot of the blame on that one though, because the signed firmware they use post-Fermi can't be legally redistributed in any way, and is embedded in the proprietary graphics driver blobs, so it can't be extracted. That firmware is required to adjust GPU clocks, which start off very low. So the choice isn't "Good proprietary drivers" vs "Less featured open drivers", it's "proprietary drivers" vs "drivers that are legally forbidden from running above 0.5% their advertised clock speed, no matter if it's otherwise fully featured".[1] In contrast, AMD and Intel have open drivers in Mesa that work very well and fully supported and AMD allows its signed VBIOS to be redistributed, so it's clearly not just a matter of chip complexity, but the position of the vendor as well.
In that vein I agree with your original point, though: I suspect we'll probably never see Linux on the M1 in anything but the most superficial form that lacks all the really good stuff. You might get the 8 core CPU, SSD/RAM, some peripherials. But no Neural Engine, no GPU, video decoders or image processors, power management, security features in the T2 like attested boot or encrypted key storage, etc. The only way that'll ever change is if Apple makes it happen.
[1] For reference, the open source Nouveau driver for Fermi cards, nvc0, is the 4th most well supported Mesa driver in terms of features implemented (almost 90% of all features, surpassed only by radeonsi and i915). So it's not like there's no interest in FOSS Nvidia drivers... https://mesamatrix.net/
It's not only about the touch pad. It's about the SoC and all the peripherals. It's about power management and the secure enclave. It's about the whole package.
Linux rarely runs as well as Windows on hardware that doesn't go out of its way to keep it out, just ignores it. Let alone on a platform so hostile to external modifications.
I'm gladly waiting for Ryzen 4800u to actually be available in good quality laptops, or otherwise the next iteration of Intel Mobile CPUs, but like you I won't use Apple's walled gardens (except for work, as I'm forced to).
Given that the Linux desktop cannot properly utilize even the limited set of very common HW accelerators built into AMD and Intel chips (video decoders), and that support for the existing GPUs all sucks in different ways, running Linux on the M1 would be a waste of everything, without giving Linux any noticeable improvement over existing x86 machines.
Same here. I made the decision to leave macOS after the release of Big Sur. It's absolutely, 101% unacceptable to me when someone else controls what programs I can run on my own hardware, and I have to hope that their server won't go down.
I still congratulate the Apple Silicon team. I hope it will force Intel to create something similar.
The only downside of the amazing new M1 MBP is that it runs WoW on max settings 60fps. And now I'm back into the world of Azeroth. Especially with the launch of Shadowlands.
What the hell, Apple, I thought I was safe and immune from video games with my MacBooks.
Tbf, WoW will run on just about anything... I remember playing it on an Acer (or maybe Lenovo?) netbook back in 2009/10. Checked it out from school right before Christmas break so I would have a computer at home for a month.
In the article here they show Minecraft running at native res and 60 fps. No small feat nowadays considering draw distances that are popular now.
It’s not just WoW and Minecraft. I’ve seen screenshots of Dota 2 highest settings running over 100 FPS. Borderlands 3, a graphics beast, at ultra settings, running about 30 FPS. Unoptimized. On a low-end Mac with an M1 chip!
It didn't run smoothly on my maxed out 2018 MBP 15".
I was getting 25fps in bgs with medium graphics settings. And needless to say, the fans were always on.
People are exaggerating a little bit, but this is the low end machine with 8GB RAM and only 7 GPU cores. So you can see it's pretty good in worst case. https://www.youtube.com/watch?v=UQoGPLO8zBI
It drops to 25fps with not much happening on the screen, no water and no crowded area, at 1440p. This guy also mentions dropping to 30fps: https://www.youtube.com/watch?v=2ubyXiY4N2o
I don't know what your motivation is, but 60fps is simply not possible. I bought one now (8G/512G) and hooked it up to the 4K TV.
At native 4K and max settings, it's in the low 20's.
At 1920x1080 and max settings it's at ~35fps. To get it close to 60fps it needs level 7 details at 1920x1080. That's in non-crowded typical quest areas.
I'm typing this from a 2014 i7-4980HQ 15" MBP. This machine would have been replaced in 2017 but I wasn't impressed with that year's model. I had planned to upgrade in 2020 but the announcement of the M1 basically quashed that. I've been on this planet long enough to know that when Apple changes course like this, the old architecture is already obsolete. 68k -> PPC -> x86_64 -> ASi. The PPC G5 got exactly 1 OS upgrade (10.5) before it was EOL'd.
If the reports on performance and Rosetta are to be believed, then this upgrade may be one of the smoothest in Apple history. The Intel CPU has had an incredibly long run, 15 years, at Apple. If they are confident they can make the leap and not leave their users in the lurch, more power to them.
I'm still on the fence on buying an M1 laptop. Apple users know you pay an Apple tax and a v1 tax. My MBP is getting so long in the tooth I may have to ignore my own advice of not getting first generation Apple hardware.
> The PPC G5 got exactly 1 OS upgrade (10.5) before it was EOL'd.
The OS X release cycle back then was much slower. It was 4 and a half years between when Apple shipped the first Intel Macs and Snow Leopard was launched. Even then, Apple continued supporting and updating Leopard for a couple years after that. Even if you bought a PPC Mac on the last possible day, you still got 5-6 years out of the machine with security updates.
Seems to me like we'll see at least comparable support for Intel Macs going forward. Particularly since they will still be shipping Intel Macs for at least another year, possibly another 2 years.
This may actually be the perfect time to get a first-version Apple notebook, because it looks like the big design refresh will be staggered one release cycle. Buying one right now gets you the powerful new architecture inside the tried-and-true industrial design.
While the risks are probably less than they were earlier in Apple's history, I'd much rather take a chance on a first-generation Apple Silicon SoC than on a first-generation MacBook Pro redesign.
> Apple users know you pay an Apple tax and a v1 tax
I expect the v1 tax this time around is that the externals are exactly the same as the Intel machines. Compared to what laptops like the Dell XPS 13 are doing with larger screens in smaller bodies, the current Air and 13" Pro designs are getting a bit long in the tooth now.
I expect next year we'll see a new design for the iMac with the M1X, or whatever the bigger variant will be called, as the current version is an absolute dinosaur at this point, but I also think we'll see an updated design for the Air and 13" Pro (maybe 14" like how the 15" went to 16"—it would certainly help differentiate the Air and Pro a little more).
If rumors are to be believed, the next macbook generation is going to get a new body design and new screen, so will be 1st gen in a different, probably more severe sense. Makes it difficult to decide whether to just buy now.
My dream laptop would be an iPad Pro 12.9 with a Magic Keyboard and M1 chip which could run both full screen iOS apps and full Mac OSX. In a perfect world it would have an extra USB-C port too.
But to be honest, just a regular macbook with a touchscreen would be great now. I've had enough of people trying to use my Macbook at work, prodding the screen to try and scroll down, then trying and failing to do the two finger scroll gesture on the touchpad.
I, too, am on 2014 hardware. The real step that would move me to update would be more RAM with the Apple Silicon. The last gen Intel hardware did offer 32GB of RAM, but not much else to justify the cost over a 6 year old precursor.
Was planning on getting the pro 16 but the extra cost for 32gb is obscene. Ended up going with a dell xps 15 (still in shipping) which even has an option for 64gb at a reasonable price. Can't wait to get rid of this pro 13 which constantly overheats.
Hey man, I'm using the xps with 64 Gigs.
Good luck with yours. It's a great laptop, but I can't wait for my M1 Air with 16 gigs, because for mobility I miss the outrageous stability that macOS and Apple's hardware deliver.
I'll keep the XPS as a daily driver for work (WSL 2 and VS Code are just unmatched in Apple's ecosystem), but for the couch, conferences, trains etc., it's going to be the MBA 100% of the time. Are you transitioning to Windows?
I had a dell xps 13 before running Linux and I really liked it but the screen was too small and I couldn't read the text while it was on my desk next to my main monitor.
You can still buy Intel MacBook Pros. Apple will likely support it for a couple of more years before they won’t.
I think there are more devs invested in the macOS ecosystem with hardware during this transition than the last, so it would make sense for Apple to let those Intel hold outs still keep up with the latest macOS version.
I would say two or three cycles of macOS upgrades before they EOL Intel support.
One of the things I've noticed recently, and especially since the CPU space finally started moving again, is how much of a divide there now is between the computer literate and the computer illiterate.
It probably creates a social divide at least as large as the one that existed when the majority of people couldn't read or write, and it is just as "not OK".
Example of this in the first paragraph of this article:
> For everyday users who just want to browse the web, stream some Netflix, maybe edit some documents, computers have been “perfectly fine” for the last decade.
These kinds of things now read to me like "for the everyday peasant, who just wants to go swim in the river, seal the roof of their house and get to work on time, clay tablets and styluses have been perfectly fine for the last century"
Even the title screams this kind of thinking, computers are not black magic, any more than medicine or writing were magic or sorcery back when burning witches was a thing.
A more apt analogy would be "for everyday users who just want to record their expenses, keep a journal, and sketch some drawings, paper and pencil have been perfectly fine for the last century".
Understanding how computers work down to being able to describe L1 instruction caches, how prefetching works, and why having an 8-instruction-wide decoder pulling from a giant L1I cache helps, isn't really relevant to empowering people. Most people are going to be more empowered by using computers more efficiently to prepare documents and presentations to assist their other endeavors. I think those people can be forgiven for not getting excited that now they have "8 cores" or "16 cores" or "32 cores". Conversely, getting drastically improved battery life is an immediate and tangible improvement in the day-to-day lives of people.
Being illiterate creates a societal gap because it prevents the free spread of information, and creates a class hierarchy centered around controlling the spread of information. How does not understanding how computers work limit people?
> "how much of a divide there now is"
You seem to be insinuating that people used to be "more literate" and are becoming more ignorant. What if it's just that computers have become easier to use and more prevalent?
More people are using computers to communicate now than ever before. This seems to be the opposite of "peasants not knowing how to write". People have more opportunity to reach out and grow.
There are problems in controlling information and infrastructure, who owns all our data, and who controls social media along with privacy concerns, and maybe one of the solutions to these problems is more tech education, but this seems orthogonal to your concerns.
>You seem to be insinuating that, used to, people were "more literate" and are becoming more ignorant.
No, I'm saying the goalposts moved - a lot. The same way widespread reading and writing moved the goalposts back in the day.
>How does not understanding how computers work limit people?
It puts them in the class that considers computers to be magic, like not knowing how to read and write put people in a class that considered industrial machinery and accounting to be magic.
In a world where computers do all the high-paid jobs, that's as low a class as being generally illiterate.
To put it really simply
"computer illiterate"
Is now a thing.
Actual written definition being: not able to use computers well, or not understanding basic things about computers
Similarly to how literal literacy enables access to information, Computer Programming literacy could enable people to derive more value from open software, for example. It could enable people to automate some tasks, or make some tasks easier (eg. gathering + summarizing information, which can enable someone to make better predictions/decisions).
If I want to know what happened today, I can read the newspaper. If I want to know what happened on Nov 25, every year, I could write a quick script fetching this information from some source and look at that list showing Nov 25 (although yes, Wikipedia provides some of that info).
Computer Literacy provides tools to interact with information.
If this was 1990 I would agree with you.
It's not 1990, and computers do a lot more than edit documents, watch Netflix and browse the web. If that's where someone's knowledge of computing ends, they don't have an understanding of the basics and are by definition computer illiterate, the modern-day equivalent of signing their name with a palm print.
It's far less sad (and perhaps hopeful?) if you think of it more as there now being more diversity of use cases.
Most consumer personal computers used to have relatively similar power as well as use cases; your Apple II, your neighbor's Apple II, and the Apple IIs at corporate offices would not have differed as much as personal computers do these days.
These days, there are large differences across segments - anything from school children using chromebooks to enthusiasts running homelab servers.
Not just a gradient of computing power, but also use cases: home archivists with a lot of storage; gamers with beefy graphics cards; media creators with expensive monitors; chrome tab hoarders always downloading more RAM...
I don't think it's that we treat school children using chromebooks as peasants, and treat enthusiasts like kings. We are rather now able to cater well to various segments, and this variety of product offerings to consumers is a good thing.
I think the article's general assertion that the computing advances in question have more to contribute to certain use cases than others is more than fair.
Modern life is just complicated: there are so many things to know. Computers are just a small part of it.
As an example, I'm (mostly) car illiterate. I just don't have the time and energy to commit to understanding something that does what I want and works 99.99% of the time.
> For everyday users who just want to take the kids to school, get to work and head out of town on the weekend, vehicles have been “perfectly fine” for the last decade.
Nobody needs to know (or cares) how a car works - EXCEPT professionals and enthusiasts.
> For everyday users who just want to take some holiday snaps, record their kids making a mess and maybe print a few small pics, point n shoot cameras have been “perfectly fine” for the last decade.
Nobody needs to know (or cares) how an SLR works or all of its features - EXCEPT professionals and enthusiasts.
We can go on and on with this. I don't think it says anything about a divide; it says that our world is so sophisticated and complicated that there are entire devices and areas of society most of us use on a daily basis without caring how they work. That's perfectly fine.
Do you know how your car/microwave/dishwasher/TV works, down to the detail of the engine or the type of tires, and would you be able to repair/exchange/hack parts of it?
I don't, and I really don't see the point of being able to, as my hobbies/curiosity lie somewhere else -- for some, computers are just another household appliance (and a particularly complex one)
IMO more of an indictment of the tower of babel in software development. Outside of games, we all basically do those things: browser, email, text, spreadsheet, word process.
Those things were reasonably well solved almost two orders of magnitude ago.
Software never bothered to optimize for snappiness, despite so many opportunities. So we got stuck with the same kinda-good responsiveness, and for mobile, a much more questionable kinda-OK battery life of 3-6 hours.
I mean, over a couple of node shrinks where efficiency improved (because clock speed wasn't), could we please have attacked battery life?
It takes an architecture change to highlight how inefficient desktop is. Unfortunately, desktop is an afterthought in terms of investment. The best hope for actual optimization is convergence with the phone OSs, which this is the first step of.
It is also wrong, computers from the last decade are not 'perfectly fine' for 'browsing'. I just saw my girlfriend do her website with Wix on her old macbook air and the performance was so slow that I wondered how she could tolerate it. Those new processors will benefit everyone.
The divide has always been there, it's just that "computer illiterate" people massively outnumber the literate ones and as such, are getting more attention.
It's kind of like saying there's a huge divide between "house literate" people, who know how everything in their home is built and how to fix it, and "house illiterate" people who just want to come home, turn on the lights, use the shower, the oven and the bed.
Did you mean that in a negative way? The way I see it, favelas are more of a government problem, not enforcing rules and standards leads to people making up their own (just like Internet standards).
Going with my analogy, someone who built their own house in the favelas is, in fact, "house literate". Their home might be sub-standard, but they can take care of it all on their own.
"housing illiteracy" was your example not mine. I was just suggesting a possible parallel with how great the divide is now compared with the days when the vast majority of the world was lucky to lay their hands on a Z80.
I'm sorry, I don't understand. It seems we're agreeing on there being a divide, except I think the current situation is just a natural result of widespread adoption.
There's more people tinkering with dev boards and writing open source software, too, so it's not all bad.
I'm saying I really started noticing the divide between the computer literate and the computer illiterate - that it's actually becoming an issue in exactly the same way general illiteracy became an issue as reading and writing became widespread.
Wouldn't say so. As Kovek so very nicely put it further up:
"Computer Literacy provides tools to interact with information."
Easy to forget the iPhone was ~2007. Prior to that, computers were still a fairly niche market and Android was still in its infancy; computer literacy gave an advantage, but computer illiteracy wasn't really a big disadvantage.
Now literally every industry from mom and pop stores to basic healthcare demand a pretty high level of computer literacy to the point they don't survive long without it.
You shouldn't be tech savvy to have your right to privacy respected. We need regulation to compel companies like Apple to stop harvesting data they don't have legitimate need for. Any surveillance should be opt in only.
Those of us who have been in UI design long enough know what comes from attention to detail and a professional GUI. We have all used OS X not only for its UNIX-like core (Darwin) but for its consistent UX and UI libraries. At one point Apple influenced our work in a really meaningful way by setting the standard (remember the Apple Human Interface Guidelines pre-Yosemite).
For me personally, Soundtrack Pro is the most polished professional interface ever made. So in this context, UI “innovation” through emoji and the implementation of white space for touch interaction (without touch interaction) is funny but not usable. Performance aside (which is a big accomplishment), I miss the old approach with its balance of contrast and natural flow, and will stay on Catalina as long as I can. If Apple changes their stance on telemetry and bypassing things, and fixes the UI/UX design, I have no problem joining again. What is lacking on the Linux desktop is a consistent approach to UI, but for some of us maybe it's time to re-evaluate and relearn things. My personal time investment is in Emacs; with time I have more and more respect for those ideas of freedom and consistency. The selling point for me with Apple was the professional interface and high UI standards; sadly, they are gone. But hey, every one of us is different and this is good, right?
It's all mostly redesign for the sake of redesign at this point. Desktop OSes have been feature-complete for quite some time, but they still have to update every year. They have to. Don't you even dare question that. I'm still on Mojave and it does everything I need from an OS. I also absolutely love native Mac apps, which are becoming rarer and rarer. And no, iOS apps that run on macOS aren't native Mac apps. The abomination that is the Mojave App Store? That definitely took some extra talent to break every single UI guideline, but thankfully I only open it once every couple of months.
Just a thought: if someone in 2008 had asked me what desktop interfaces would be used in 2020, my answer might have been: Apple will implement a new desktop paradigm on top of Raskin's zoomable UI ideas (https://en.wikipedia.org/wiki/Jef_Raskin).
But here we are: monster SoC with Cartoon Network on top. :)
The thing with interfaces is that there's no inherent need for change if the method of interaction doesn't change. It was a non-touch screen, a keyboard, and a mouse/trackpad 20 years ago, and it still is today. Some things just work great. They're tried and true and battle-tested. Like, you know, densely packed windows that are optimized for the precision of the mouse pointer.
You can turn off all the telemetry in macOS and they ask you if you want it on when you setup the computer.
Agree to disagree on Big Sur, I love the new look. Keep in mind they’re calling it macOS 11, so there are probably bigger and less superficial changes down the road.
Yep, as a consumer I completely agree; this is the first iteration, and we will (maybe) see better. As a UI/UX designer I am hardwired to think in layers of interaction (created ergonomically by the mouse pointer), flow and graphical representation. Look, in broad terms, has an emotional impact and is the sum of a lot of elements (lines, colors, whitespace, iconography, animation etc.). But when we are speaking about interface, functional thinking is the heart of design. There are a lot of principles and usability guidelines that must be present. macOS Big Sur in this context is breaking those desktop paradigms (which we want), but the actual implementation is too touch-oriented (iOS).
Anyway, this is an open discussion and every point of view counts, so thanks for reply.
PS. Emacs is great, and I am thankful that Apple decisions have pushed me to replace Devonthink and start using Org Mode instead.:)
I don’t see much breakdown of the desktop paradigm from a usability perspective. Targets are larger and things are rounded off, but is that really as catastrophic as you suggest? I don’t have formal design training but I find it fascinating.
I know I am biased; my view is personal opinion only. In this personal opinion I see a lot of things I don't agree with, but I am old in the sense that I have seen better from Apple and automatically expect more (which is unrealistic at this point in time). In this personal and biased view, desktop computing is about optimizing interaction real estate: when I work on a big screen I expect "more space" for window management, and the idea of larger targets is some kind of funky experiment with a visible goal, to merge desktop and mobile interaction, and I don't see it as usable at all. This approach is "design for design's sake" (we don't touch macOS, we use cursor interaction). It heavily reminds me of the auto industry's approach of replacing every physical control with touch interaction (because it's cheap and people like their smartphones).
> You can turn off all the telemetry in macOS and they ask you if you want it on when you setup the computer.
That's false. You can turn off OS analytics but there is tons of telemetry built into almost every Apple app, separate from that, that you cannot disable. It tells you about it on first app launch. Open Maps, for example, and it will tell you about the unique, rotating identifier it uses to track your searches. Opting out of OS analytics does not disable telemetry for the other Apple services now deeply integrated in the OS. Even disabling these features doesn't prevent the mac from talking to the services, such as in the case of Siri.
Additionally gatekeeper OCSP checks on app launches serve as telemetry in practice, and this has no preference or setting to disable it.
If you have an intel mac, you can still install LS 4.x (once you have booted into recovery to permit the kext) on Big Sur. HN user miles pointed this out recently, and it is awesome.
I'm going to contact the developers and ask for an ARM build of 4.x so the same trick will work on M1, at least until Apple forbids all kexts some time in the future.
This is a deal breaker for me and lots of security-conscious professionals. I foresaw this "Apple goal" in the past, and the only thing that's keeping me on macOS is the Little Snitch and Mullvad VPN combination. Sadly, in the Linux world there is no commercially viable option for per-app access rules, and I don't understand why. I don't have an answer for another question: as a business, I learned from Apple that keeping tight security and protecting your intellectual property is a big thing. How on earth is big business complicit in this telemetry approach from Microsoft, Google and Apple? If you are a business, captured metadata is enough to reveal important metrics about your company, and this may work against you in the long term. That's why I am against cloud-based apps (like Figma), and I am furious with Sketch for not providing a collaboration solution while bragging about their "Mac only" approach.
It's because it's very time-consuming but ultimately it's more security theater than any meaningful benefit. An attacker has many ways to easily bypass tools like Little Snitch and they only have to succeed at one of them, whereas you have to hope that you take time away from your job to successfully block all of them.
If you're trying to prevent data exfiltration, you don't trust the client at all — confine it to a dedicated locked-down system on a restricted network which only allows egress to the minimal subset of trusted services. That's a much more winnable battle than trying to prevent every possibility on a general purpose computer running tons of things which are allowed to connect to the internet and legitimately uses lots of outside services.
Similarly, a lot of the data breaches you hear about are caused by people with legitimate access saving the data somewhere insecurely. Spending time on that is a lot more beneficial to most organizations than tracking every TCP socket.
They’re working on the OCSP issue, and I would say it’s more of a disclosure bug than telemetry, which implies intent. You’re right about Maps, but you’re saying it’s “almost every app” -- do you know of others? Because I’m pretty sure it’s not “almost every app,” and moreover in Maps it’s required to deliver the functionality, so I’m also not sure I would call it telemetry.
Stocks. Weather. News. Maps (has a unique ID across multiple interactions to serve as explicit telemetry). App Store (sends device serial). TV (sends device serial). iMessage (sends device serial).
Telemetry doesn't imply intent. Many things serve great as telemetry that aren't intended to be such. There's no way to limit the way the raw data collected can be mined later, offline.
I never use those apps so I wasn't fully aware that they did this. Seems like it's for advertising though. I've had ad tracking turned off on their platforms for years.
> This is just a check that the developer's certificate hasn't been revoked or expired. I wouldn't call it telemetry.
It's an unencrypted network transmission of a unique identifier, at the time of an app launch, that maps to a single app for 99% of cases (due to the fact that almost all developers publish only a single app). That's objectively telemetry no matter what you call it, irrespective of the intent of the designers.
Approximately 0% of all users of macOS will change this setting, so Apple adding a preference toggle (that defaults to "send my local app launches to Apple via the network") is irrelevant from a privacy perspective.
> It's an unencrypted network transmission of a unique identifier, at the time of an app launch, that maps to a single app for 99% of cases (due to the fact that almost all developers publish only a single app).
As the article states [1], Apple is changing to an encrypted connection, the IP addresses are no longer logged and the checks never included the Apple ID of the user or the identity of the user's device. Definitely not telemetry.
[1] > For those concerned with protecting their privacy, Apple makes it clear that “these security checks have never included the user’s Apple ID or the identity of their device”, and that it has stopped logging IP addresses.
The commitment to encrypt is "within the next year". That means it's been telemetry for the last two years, and will likely continue to be so until the next major macOS release, approximately a year from now.
The fact that Apple isn't logging the IPs any longer is irrelevant. The data is unencrypted, and your ISP and their ISP and everyone in between can log the data.
The fact that it doesn't include the Apple ID or device identity is similarly irrelevant. The IP address also communicates unique identifiers to other services (including at Apple), so the IP address is a sufficient unique identifier in this instance. Additionally, even if one doesn't have any access to those other records mapping the IP address to the user (held by Apple, the carrier, and many others), simply monitoring the specific set of apps that are opened (again, because the data is unencrypted) is sufficient in many cases to fingerprint and uniquely identify the device.
>That's objectively telemetry no matter what you call it, irrespective of the intent of the designers.
There's no objective definition of "telemetry" that I know of, though, and this is a purely functional feature implemented straightforwardly. They are moving towards encrypting the requests, too.
Whether or not you can toggle something is absolutely relevant from a privacy perspective. Gatekeeper is something that should be on by default anyways, and I personally am more concerned about my endpoint security than Apple getting pinged with a signature when I open an app.
> Agree to disagree on Big Sur, I love the new look. Keep in mind they’re calling it macOS 11, so there are probably bigger and less superficial changes down the road.
Agree. Hardcore Linux user (custom KDE theme) here, and I have to say that macOS 11 is easily the most aesthetically appealing desktop theme I've ever seen. Just completely mops the floor with everything else, especially the previous version of macOS. The changed margins / white space are great, colors fantastic (eerily similar to the ones I use on KDE), perfect font rendering (as always), and I really love the changes to Finder.
In terms of actual usage I have quite a few issues, of course. Requires some heavy work with Karabiner and settings changes to make it usable, in my opinion, and you still can't beat KDE because of its customizability. But in terms of pure visual appeal it's unmatched. Apple's visual design team is the best.
That said, I don't use the App Store at all (at least as far as I can help it), nor do I really use any Mac specific apps (Photos, QuickTime, iTunes, etc) since this is a development machine, so a lot of rough edges are probably invisible to me.
In Electron 14, we have found an astounding new paradigm for writing apps that results in astounding developer time savings! Now instead of running your Javascript app inside a copy of Chrome, we have a custom hybrid of Javascript and Lisp, which is internally transpiled onto a Brainfuck interpreter running in Conway's Life!
Electron 15: The Life machine has been re-implemented as a series of GPU instructions, which will use up approximately 93% of most users' graphics performance in return for a 20% speedup!
Well, the great thing is that as long as computers are bifurcated between a majority of slow Intel PC's and a minority of fast Apple Macs... apps will need to remain usable on Intels, so they'll hopefully stay super-fast on Macs! ;)
I don't understand. Don't those two things seem antithetical? Like why would you make your app inefficient at the same time you optimize it for the CPU?
Parkinson's Law: Work expands so as to fill the time available for its completion.
Applied to computers, if you double the CPU speed, the program can be half as efficient with no apparent loss to the user. Similar with memory. If you can assume your typical gamer has a 2TB (or now much larger) storage capacity, you can ship a 250GB game. But then it gets complicated when everyone does the same thing. If every program is half as efficient as it could be (in CPU usage or memory), then there has been no gain with the new system.
That makes sense and is to be expected. Faster hardware does take away some of the incentive. As the old adage goes - "Necessity is the mother of invention"
Whenever a new, faster CPU comes out, developers are quick to compensate for it with bloated frameworks to make sure you consistently get the same laggy experience.
I'd suggest that someone who breaks into Slack HQ and secretly installs an underclocking kernel extension on all their developers' Macs wouldn't be a bad person.
The consumers buy part, yes, if the only reason they need it is because software has become stupidly inefficient. All of this e-waste is not good for the planet, and manufacturing is a significant contributor to global warming.
If consumers are replacing their computers to do things they couldn't do ten years ago, then great, that's a good use of resources!
If consumers are replacing their computers because today's IM clients are 10x slower than the perfectly good clients we had ten years ago, that's a problem!
While there's no way for me to have a source for this, I would hazard a guess that most apps that use Electron simply would not exist without it. Maybe someone would fill the void if they had the idea, money and know-how to build a native app on whatever platform you use, but that's probably unlikely.
Basically, there's no way you'd just be waiting an extra month for your native version.
So let it be. If there's a niche waiting to be filled, there are bound to be multiple attempts at that. The world would've been a better place without electron.
I don't quite understand how 'retain' and 'release' can be more memory efficient on Apple Silicon than x86.... I can understand how they can be more efficient from a performance standpoint in terms of more efficient reference counting, but I don't understand how that translates to less memory usage which is apparently what's being argued... ?
Unless on x86 some of the 'free's when the ref counts hit 0 were being batched up and deferred, and that doesn't need to happen now?
I don't think retain/release perf has anything to do with memory consumption, but I have seen a bunch of reviews claiming that 8GB is perfectly fine.
This is fascinating to me, because:
(a) every 8GB Mac I've used in the past has been unusably slow
(b) since upgrading my 32GB Hackintosh to Big Sur, my usual 40GB working set is only about 20GB.
(c) My 2015 16GB MBPr with Big Sur is also using about half as much physical memory on the same workload. Swappiness is up a little, but I haven't noticed.
So my guess is that something in Big Sur has dramatically reduced memory consumption and that fix is being commingled with the M1 announce.
Seriously, I'm utterly baffled by all the people claiming that 8 GB isn't enough for the average user.
The only situation I ever ran into where it was a problem was in trying to run multiple VM's at once.
Otherwise it's just a non-issue. Programs often reserve a lot more memory than they actually use (zero hit in performance) so memory stats are misleading, and the OS is really good at swapping memory not touched in a while to the SSD without you noticing.
Yes, sometimes it takes a couple seconds to switch to a tab I haven't touched in Chrome in days because it's got to swap it back in from the SSD. Who cares?
> people claiming that 8 GB isn't enough for the average user
I'm not claiming anything of the sort.
My point is that memory consumption seems to be greatly reduced in Big Sur, and that might make 8GB machines much better to use than before. All of my testing is on Intel machines. It's not exclusively an M1 phenomenon.
I would still recommend 16GB to anyone, and if the extra $200 was a factor, I would recommend that they buy last year's Intel with 16GB of RAM.
Nah, sorry, but you're wrong. I had to upgrade my laptop because I wanted to run Firefox, IntelliJ IDEA and an Android emulator on the same machine. Nothing else. This was not possible on 8GB of RAM.
So it's not like multiple VMs are needed, and the above scenario is pretty average for a common mobile developer (but still not an average user, I admit).
Second thing is, lots of games require 16 GB RAM. Maybe gamers are still not average users, I don't know.
For me with 16GB in an MBP, there is currently 20.5GB used + swap, and I haven't even started Firefox today, that would add another ~6GB or so.
Usually if I'm running Safari, Firefox and my 4GB Linux VM, that's 16-18GB used up in those. At the moment I have a few other things open, PDF viewer, Word, iTerms, Emacs etc, but nothing huge.
Most of the time this level of usage is ok, but I've had times where I've had to wait 30+ seconds for the UI to respond at all (even the Dock or switching workspaces) and wondered if the system had crashed.
For that reason I'm generally waiting for the next 32GB model before committing, that's assuming I stick with Apple instead of switching back to Linux (which I used for ~20 years before trying the MBP).
> Programs often reserve a lot more memory than they actually use (zero hit in performance) so memory stats are misleading, and the OS is really good at swapping memory not touched in a while to the SSD without you noticing.
The stats are absolutely reliable because no physical memory page is allocated until it is actually used to store something. So allocating a large chunk of unused memory wouldn't show in the (physical) memory usage stat.
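To make that concrete, here's a minimal C sketch (assuming a POSIX system; the 1 GiB size is arbitrary) of why a big allocation doesn't show up as physical memory until it's touched: mmap only reserves virtual address space, and physical pages are faulted in on first write.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 1UL << 30;  /* reserve 1 GiB of virtual address space */

        /* Anonymous private mapping: no physical pages are committed yet. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touching only the first 16 MiB faults in roughly 16 MiB of
           physical memory; the other ~1008 MiB stays virtual-only. */
        memset(p, 0xAB, 16UL << 20);

        puts("Compare the process's virtual size vs. its resident set now.");
        getchar();  /* pause so the process can be inspected in top/Activity Monitor */

        munmap(p, len);
        return 0;
    }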
I pretty much daily have to do a closing round to avoid running out of my 24GiB. That's all web browsers (usually 100-200 tabs), VS Code with some extensions and a 2x4K display.
But what do you even mean "run out"? This is what I don't get.
If you have multiple browsers with hundreds of tabs, the majority of those tabs are probably swapped out to your SSD already.
With swapfiles and SSD's, physical memory is less and less relevant except when you're performing very specific computational tasks that actually require everything to be simultaneously in memory -- things like highly complex video effects rendering.
How do you measure "running out" of your 24 GiB? And what happens when you do "run out"?
As a human, when I have many tabs open, I observe that everything gets really slow. All applications get slow, but especially the browser.
So I put on my engineering hat and pull up Activity Monitor and further observe (a) high memory pressure, (b) high memory consumption attributed to Chrome or Firefox, (c) high levels of swap usage, (d) high levels of disk I/O attributed to kerneltask or nothing, depending on macOS version, which is the swapper task.
I close some tabs. I then observe that the problems go away.
Swap isn't a silver bullet, not even at 3Gbytes/sec. It is slow. I haven't even touched on GPU memory pressure which swaps back to sysram, which puts further pressure on disk swap.
It's the equivalent of having 50 stacks of paper documents & magazines sitting unorganized on your desk and complaining about not having space to work on.
A bigger desk is not the solution to this problem.
If your tabs are swapped out to SSD, your computer feels incredibly _slow_. SSDs are fast, yeah, but multiple orders of magnitude slower than the slowest RAM module.
You can run 4GB if you're fine with having most of your applications swapped out, but the experience will be excruciating.
Physical memory is still as relevant as it was 30 years ago. No offense but if you can't see the problem, you probably have never used a computer with enough RAM to fit everything in memory + have enough spare for file caching.
I don't swap. You can do all your arguments about why I should if you want but yes, there are legit reasons not to and there is such a thing as running out of memory in 2020.
4GB MBA user here, don't have any problems either running Chrome or Firefox with 10-20 tabs and iTerm (Safari does feel much faster than the other two, and my dev environment is on a remote server though).
iPhones and iPads also have relatively small amounts of RAM compared to Android devices in the same class, so I wonder if Apple is doing something smart with offloading memory to fast SSD storage in a way that isn't noticeable to the user.
This is most probably more linked to Java/Kotlin vs Objective-C/Swift. Want an array of 1000 objects in Java? You'll end up with 1001 allocations and 1000 pointers.
In Swift you can add value types to the heap-backed array directly, in ObjC you can use stack allocated arrays (since you have all of C) and there are optimizations such as NSNumber using tagged pointers.
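A rough C sketch of the layout difference being described (hypothetical Point type, error handling omitted): an array of value types is one contiguous allocation, while an array of boxed objects costs N+1 allocations plus a pointer chase on every access, which is roughly the Java situation mentioned above.

    #include <stdlib.h>

    typedef struct { double x, y; } Point;   /* hypothetical value type */

    /* Inline (value) layout: one allocation, elements stored contiguously.
       This is what an array of structs in C -- or of value types in Swift -- gets you. */
    Point *make_inline(size_t n) {
        return malloc(n * sizeof(Point));             /* 1 allocation */
    }

    /* Boxed layout: one allocation for the pointer table plus one per element,
       i.e. n + 1 allocations -- roughly how an array of objects looks in Java. */
    Point **make_boxed(size_t n) {
        Point **table = malloc(n * sizeof(Point *));
        for (size_t i = 0; i < n; i++)
            table[i] = malloc(sizeof(Point));
        return table;
    }

    int main(void) {
        Point *a  = make_inline(1000);   /* 1 allocation     */
        Point **b = make_boxed(1000);    /* 1001 allocations */

        a[0].x  = 1.0;    /* direct access     */
        b[0]->x = 1.0;    /* extra indirection */

        for (size_t i = 0; i < 1000; i++) free(b[i]);
        free(b);
        free(a);
        return 0;
    }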
> Theoretically Java should be more memory efficient because it makes fewer guarantees and can move memory around.
Java makes a lot of memory guarantees that are hard to make efficient. Specifically in that it becomes extremely hard to have a scoped allocation. Escape analysis helps, but the nature of Java's GC'd + no value types means it's basically never good at memory efficiency. Memory performance can be theoretically good, but efficiency not really. That's just part of the tradeoff it's making. And nearly everything is behind a reference, making everything far larger than it could be.
Compaction helps reduce fragmentation, but it comes at the cost of necessarily doubling the size of everything being compacted. Only temporarily, but those high-water spikes are what kicks things to swap, too.
Big difference is that Objective-C is a superset of C. Any Objective-C developer worth his/her salt will drop down to C code when you need to optimize. The object-oriented parts of Objective-C are way slower than Java. But the reason Objective-C programs can still outcompete Java programs is that you have the opportunity to pick hotspots and optimize the hell out of them using C code.
Object-oriented programs in Objective-C are written in a very different fashion from Java programs. Java programs tend to have very fine granularity on their objects. Objective-C programs tend to have interfaces which are bulkier, and larger objects.
That is partly why you can have a high performance 3D API like Metal written in a language such as Objective-C which has very slow method dispatch. It works because the granularity of the objects has been designed with that in mind.
For those, Apple's favored approach to memory management (mostly reference counting) absolutely _is_ an advantage over Android's (mostly GC). That's not relevant when comparing an Intel and ARM Mac, tho.
I think the argument they were trying to get to but totally failed to make is possibly along these lines
huge memory bandwidth relative to ram size + os level memory compression => massive reduction in memory pressure for many many many workloads.
macOS has supported memory compression for a while now -- I would hypothesize that the M1 may have massively improved that subsystem in ways that actually do translate into needing less memory on average for a lot of common real-world workloads that amount to "human-timescale multitasking" between large working sets -- e.g. I click into this app and it has a huge working set, then click into another that has a large working set, then click back -- with those clicks representing application context switches that occur very, very rarely on a machine timescale.
If the memory compression subsystem can move working sets into and out of compressed memory insanely quickly with low power usage, then the OS might have gotten very aggressive about using that feature to put not-recently-accessed memory into compressed memory space.
I believe it was being brought up as an example of "Apple has designed their hardware around their software" and then that translates to "Apple's software does well on machines with less memory".
Compared to something like Android, sure, I get that, but compared to Objective-C/Swift on x86 (which I think was being argued - i.e. against the Intel Macs)?
I guess it makes reference counting in general more efficient; I'm just saying I don't see why that would mean Apple Silicon Macs running Objective-C/Swift code would have less memory usage than the same code compiled and running on x86.
I'm not necessarily convinced by the posted argument. That being said, I tend to think that people running a bunch of VMs and Electron apps and Docker cause them to use a bunch more RAM than I would consider to be "reasonable", and they've lost sight of how much you can do in a lesser amount of memory. (Typing this from a computer with 8 GB of RAM, which I have repeatedly been told is "below adequate" for development.)
The problem is, by now development practices in many companies effectively force using multiple large containers. I know an x stack could use 4x less memory if I spent considerable time on ripping out unnecessary cruft, but few people in the company would agree that it's time well spent, and the home office allowance suffices for a machine with 32-64Gb RAM (especially in 2020, when I don't really see that much value in laptops for dev work anymore).
I believe the idea was that reference counting was more memory efficient than other forms of garbage collection, such as copy collectors and mark and sweep collectors which commonly make up generational garbage collectors.
Languages like Java also do not yet support stack-allocated value types outside a few primitives like integers, and heap allocations are both slower and less space efficient due to the indirection and memory management.
It is a simple process: everything that you do in a language needs to be mapped onto lower-level instructions.
If the lower-level hardware instruction does not exist, you use multiple other instructions to emulate it.
If you add a low-level instruction that maps a very common high-level operation into hardware, you don't need to call 5 to 10 software functions (extremely expensive), each executing lots of opcodes; you can execute a single opcode instead, and doing the work in hardware is extremely fast.
It is not hard to be better than Microsoft here. From my personal experience and having disassembled lots of their code they always were lazy bastards. They cared 0 about efficiency. Why should they? They had monopolies like Office or Windows giving them over 95% margins. They could just use the money they printed to buy everything instead of competing.
Lisp machines did that (adding opcodes that map the high-level language) with the most common Lisp operators. Those machines were extremely expensive, in the hundreds of thousands of dollars, so few could afford them. Apple sells at massive scale, hundreds of millions of CPUs per year, making this cheap for them.
> each calling lots of opcodes but just can execute a single opcode and works by hardware beings extremely faster.
Typically these language-oriented instructions need to be implemented by microcode in the CPU. Often this does not create a fast system, but it helps to keep the compiler simple. Examples are typical Lisp Machines, as you've mentioned. With RISC CPUs OTOH the idea is to make the CPU instructions more primitive and put more effort into optimizing compilers instead. There were a few attempts to combine (high-level) language-supporting architecture and the RISC principle, but I personally have never seen such a machine.
Reference counting releases memory as soon as the last reference to it goes away, while GC cleans up memory periodically, which means higher memory usage (more than what's actually in use at any moment).
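Here's a toy reference-counting sketch in C to illustrate the timing difference (the rc_retain/rc_release names and the Object struct are made up for the example; real ARC does this bookkeeping inside the Objective-C/Swift runtime): the memory comes back the instant the last reference is released, not at some later collection pass.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int  refcount;
        char payload[64];
    } Object;

    Object *object_new(void) {
        Object *o = calloc(1, sizeof *o);
        o->refcount = 1;                  /* the creator holds the first reference */
        return o;
    }

    void rc_retain(Object *o)  { o->refcount++; }

    void rc_release(Object *o) {
        if (--o->refcount == 0) {
            puts("last reference dropped -- freed immediately");
            free(o);                      /* no waiting for a collector pass */
        }
    }

    int main(void) {
        Object *o = object_new();
        rc_retain(o);      /* a second owner appears              */
        rc_release(o);     /* first owner done, object survives   */
        rc_release(o);     /* second owner done, freed right here */
        return 0;
    }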
That explains an iOS vs Android difference (ARC vs garbage collection), but it doesn't explain the article's (and Gruber's) apparent argument that Apple Silicon machines running native Objective-C/Swift code use less memory than the same apps natively built from Objective-C/Swift code on Intel running the same OS (but different machine code obviously).
Systems that can reap no-longer-needed objects rather than walking them can help here. The automatic approach is a copying collector, which is typically how the young generation of a generational garbage collector works. Since a copying collector typically works by following references, this also increases data locality for machines with a small amount of L1 cache.
Garbage Collectors and JITs typically work best with hardware support, as you need to check pointer reads and writes as objects are being moved around or code is being rewritten. A lot of these systems use MMU gymnastics, such as mapping the same memory page into multiple locations with different permissions.
You also have systems where you create the objects knowing that they will be tiny and short-lived with a fixed lifetime, which can be hugely efficient. This is how Apache Bucket brigades work, since they know that other than a few special cases all memory allocated while handling a request will be garbage once a response is returned.
Lots of tiny memory allocations are inefficient no matter what. Slight deallocation refinements to poorly made software (the reference counting part is not 'hugely inefficient') are focusing on the wrong thing.
Lots of tiny memory allocations are pretty efficient in Java. The VM will have already allocated memory from the kernel, so there's no context switch, and once the tiny objects are no longer referenced, deallocation is a free (0 machine instructions) side effect of garbage collection. Garbage collection isn't free, but it can be cheaper than reference counting millions of objects with explicit and individual allocation and deallocation.
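The "allocation is a pointer bump, reclamation is bulk and nearly free" behaviour can be sketched outside a JVM too. Below is a hypothetical arena (bump) allocator in C, roughly how a GC nursery or a per-request memory pool behaves; the Arena type and function names are invented for the sketch.

    #include <stddef.h>
    #include <stdlib.h>

    /* A bump ("arena") allocator: the trick that makes tiny allocations cheap
       in a GC nursery or a per-request memory pool. */
    typedef struct {
        char  *base;
        size_t used;
        size_t capacity;
    } Arena;

    Arena arena_create(size_t capacity) {
        Arena a = { malloc(capacity), 0, capacity };
        return a;
    }

    /* Allocation is just a pointer bump -- no free list to search. */
    void *arena_alloc(Arena *a, size_t size) {
        size = (size + 15) & ~(size_t)15;          /* keep 16-byte alignment */
        if (a->base == NULL || a->used + size > a->capacity) return NULL;
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    /* "Deallocating" every object at once: reset the bump pointer.
       Individual objects are never freed one by one. */
    void arena_reset(Arena *a) { a->used = 0; }

    int main(void) {
        Arena a = arena_create(1 << 20);
        for (int i = 0; i < 10000; i++)
            (void)arena_alloc(&a, 24);             /* 10,000 "tiny objects" */
        arena_reset(&a);                           /* all reclaimed in O(1) */
        free(a.base);
        return 0;
    }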
There are lots of problems that aren't being addressed here.
First, Java ends up doing a huge number of heap allocations that would just be stack allocations in other systems languages.
Second, Java might have some heap allocation optimizations, but it's still a huge performance sink to allocate in a tight loop.
Third, reference counting is not slow. Incrementing or decrementing an integer only when a variable isn't moved can be both cheap and rare. Even better, it is deterministic. Garbage collection gets its optimization from doing bulk operations, which is exactly what becomes a problem. Any speedup pales in comparison to the speed advantage of avoiding those allocations altogether. Once allocations are not weighing down performance, the lack of pauses and the deterministic behavior of reference counting is an even larger advantage.
You can say that memory "has already been allocated from the kernel", but that is what heap allocators do in any language. jemalloc maps virtual memory and puts it into pools by size and thread.
At the end of the day, taking out excessive allocations is usually a trivial optimization to make. It is usually trivial to avoid in the first place. Languages fighting their garbage collector and promising that the next version will have one that is faster and/or lower latency is a cycle that has been going on since before Java was first released. At a certain point I think people should accept that stack allocations and moves of heap allocations take care of the vast majority of scenarios, and actual reference counting in this context is not a problem. Variables with unknown lifetimes should only be needed when communicating with unknown components. Garbage collection, on the other hand, has been a constant problem as soon as there is any necessity for interactivity.
Yep. It’s actually a pointer to the class instance for the object, which is a full object that contains more information than a typical vtable might, but it serves as a “type ID” that the runtime can use to dispatch on.
x86-64 was designed to prevent (or at least discourage) efficient use of tagged pointers, with the higher half/lower half split in the virtual address space. All the excess high-order bits you don't need for actual addressing are required to have the same value, so you effectively only get at most one tag bit.
They’re required to have the same value upon dereference; there are no restrictions prior to this, as assembly doesn't care what's in a register. The bits are appropriately masked off when necessary prior to using the pointer.
Yikes. That's the same shenanigans that got them into trouble with the 68000. Everyone stuffed data into the top 8 bits of pointers because even though the 68000 had 32-bit address registers, it only had a 24-bit address bus and the top 8 bits were don't-cares. Then, the 6802x came out with more address lines and...
...and that's basically why x86_64 was specified to require a particular bit pattern in high-order bits - it was to stop applications and OS programmers from writing a bunch of software with tagged pointers which would tie Intel's and AMD's hands when adding address lines. I guess Apple is ok with tying their own hands.
Tagged pointers are an officially accepted thing in ARM -- the relevant feature is called top-byte ignore (TBI). It only applies to the upper 8 bits of a pointer, leaving 56 bits for addressing.
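For a concrete picture of what's being discussed, here's a small C sketch (the tag layout and helper names are made up for the example) of stashing a tag in the top byte of a 64-bit pointer: the tag is masked off before dereferencing, which is the extra work TBI lets AArch64 skip, since the hardware ignores the top byte on loads and stores.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG_SHIFT 56
    #define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)    /* low 56 bits = address */

    /* Pack an 8-bit tag into the (normally unused) top byte of a pointer. */
    static uintptr_t tag_ptr(void *p, uint8_t tag) {
        return ((uintptr_t)p & ADDR_MASK) | ((uintptr_t)tag << TAG_SHIFT);
    }

    static uint8_t get_tag(uintptr_t tp) {
        return (uint8_t)(tp >> TAG_SHIFT);
    }

    /* Strip the tag before dereferencing. With ARM top-byte ignore the CPU
       would accept the tagged value directly; x86-64 needs the canonical form. */
    static void *untag_ptr(uintptr_t tp) {
        return (void *)(tp & ADDR_MASK);
    }

    int main(void) {
        int *value = malloc(sizeof *value);
        *value = 42;

        uintptr_t tagged = tag_ptr(value, 0x2A);   /* e.g. a type code */
        printf("tag = 0x%02x, value = %d\n",
               get_tag(tagged), *(int *)untag_ptr(tagged));

        free(value);
        return 0;
    }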
Eh, the jump from 16-bit addressing to 32-bit was a factor of 65,536. The jump from 32-bit to 64-bit is 4,294,967,296x. Throwing away the top 8 bits drops it to an address space "only" 16,777,216 times bigger than 4GB. It seems like there's some headroom for growth in there.
Doesn't this become less and less of an issue the more bits you add to your pointers? Like with 32 bits, you can't have one memory address per person on the planet earth, at 64 bits, you can have 1 pointer per atom that makes up the planet earth, and at 128 bits we're talking 1 address per atom in the known universe (or something like that, I haven't crunched the numbers exactly, this is more to give a flavor for the order of magnitude we're talking).
So if you cut off the top 8 bits of a 32-bit register and leave yourself with 24 bits, you can't even give a pointer to each person in Tokyo, but if you cut off the top 8 bits of a 64-bit pointer you can still give a pointer to each atom of every human being on earth?
I didn't really understand the TSO explanation given in this article and found it to be a bit hand-wavy. The article says that to emulate the x86 TSO consistency model on an ARM machine, which is weakly ordered, you have to add a bunch of instructions, which would make the emulation slow. I followed that much, but after that it doesn't really explain how they get around these extra instructions needed to guarantee the ordering. It just says "oh, it's a hardware toggle"; a toggle of what exactly?
I could see them just saying no to following TSO for single-core stuff and when running emulated code for single-core performance benchmarks, since technically you don't care about ordering for single-core operation/correctness. That would speed up their single-core numbers, but then what about multi-core?
So you're saying Rosetta 2 is somehow looking at an x86 binary, figuring out exactly which portions of the program rely on TSO ordering for correctness, and then dynamically switching to weak ordering for the parts that can do without it?
I don't really know much about the internals of macOS but figuring out when there are applications for example running on two different cores (since TSO is only really needed for multi-core use cases) that need to access the same memory and then applying TSO on the fly like that seems difficult. If that is what Rosetta2 is actually doing, that is impressive.
AFAIK: Apple Silicon features an MSR you can toggle which swaps the memory model for a core between ARM's relaxed model and x86's TSO model, all at once. When Rosetta2 launches an app, and translates it, it simply tells the kernel that the process, when given an active slice of CPU time, should use the TSO memory model, not the relaxed one. Only Rosetta2 can request this feature. That's about all there is to it, and it does this whether the app is multicore or not (yes TSO is only needed in multicore, but enabling it unilaterally is simpler and has no downsides for emulating single-core x86 apps.)
There's also a similar MSR for 4k vs 16k page sizes I think, another x86 vs Apple Silicon discrepancy, but I'm not sure if Rosetta2 uses that, too.
I think I understand now. Rosetta is just doing translation from x86 to ARM; however, standard ARM doesn't have a notion of TSO, which means Apple is still putting logic to maintain TSO into the silicon just to assist with better emulation performance. On a plain ARM machine that logic wouldn't be needed.
Yeah, I think that's key to understanding this. They are supporting a mode of the ARM ISA that maintains TSO even though the official ARM memory model doesn't require it. I guess this is all to get better emulation performance and avoid the extra synchronization instructions that would have to be added by Rosetta if the silicon did not have TSO support.
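For a flavour of what those "extra synchronization instructions" would look like, here's a C11 atomics sketch (a generic producer/consumer, not Rosetta's actual output): on x86, plain loads and stores already behave roughly like the acquire/release operations below, so a translator targeting ARM's relaxed model would have to emit explicitly ordered accesses or fences like these for ordinary shared-memory operations, unless the hardware provides a TSO mode.

    #include <stdatomic.h>

    atomic_int data;
    atomic_int ready;

    /* x86's TSO gives every plain store release semantics and every plain load
       acquire semantics "for free". Translating for a weakly ordered core means
       emitting explicitly ordered accesses (or fences) like the ones below for
       ordinary shared-memory operations. */

    void producer(void) {
        atomic_store_explicit(&data, 42, memory_order_release);
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int consumer(void) {
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            ;  /* spin until the flag becomes visible */
        /* With TSO (or the acquire above) this is guaranteed to see 42. */
        return atomic_load_explicit(&data, memory_order_acquire);
    }

    int main(void) {
        /* Single-threaded demo; in the real scenario producer and consumer
           run on different cores, which is where the ordering matters. */
        producer();
        return consumer() == 42 ? 0 : 1;
    }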
For those of us who are picking up M1 MacBooks as first time Mac users, is there some kind of Mac crash course for devs? What apps are useful, what familiar tools from Linux can we use, etc? I'm aware of Dash, which makes me suspect there are a bunch of other Mac-exclusive tools which will be useful.
Most linux terminal stuff works easily on mac - all the vims, emacs and so on.
For FTP, there is Transmit by Panic, which is quite neat. For web browsing, Big Sur's Safari finally works well (it's the first time I managed to do a switch from Chrome, after multiple tries).
Time Machine backup is also awesome - I couldn't find anything working just as well for other OSs.
There are also some neat system-wide tricks you may like:
- you can set up a hot corner in preferences to show the desktop. I have mine in the bottom-right; that way if I want to move something in or out of the desktop, I grab it, hit the hot corner, and drop.
- most file-editing apps have an icon next to the file name in the window bar. You can drag & drop that icon to copy the file etc. If you command-click the icon, the whole path to the file gets revealed
- you can drag & drop files to every open file dialog in the system. super-handy
- home/end/pgup/pgdown keys are missing on the keyboard, but Emacs shortcuts work throughout the whole system, e.g. Ctrl+A, Ctrl+E = home/end in every text dialog
- command+option+shift+v = paste without style, if you want to paste something into a wysiwyg text editor as a plain text
- command+shift+4 - screenshot of a part of the screen. also can serve as a pixel ruler, since it shows how many pixels you are grabbing
- Cmd+Up/Cmd+Down - navigate within the filesystem
In general, for me as a dev - the best thing about MacOS is how much of the stuff is built-in, and a ton of system features that are consistent through all the apps. That, and a linux-style console/filesystem :)
It's interesting that right now, MacPorts is further along the M1/Apple Silicon porting process than Homebrew. I think this is because Homebrew tries to do everything from a binary repository and MacPorts downloads source, patches it and builds it. Since a lot of the open source repositories have Apple/Darwin AArch64 ports now, it seems to work in most cases.
Technology seems to swing like a pendulum between running remotely and running locally as it evolves. Recently I purchased an RTX 3090, and between my Ryzen with 24 threads and the 64 GB of memory I bought for a few hundred dollars, it really occurred to me how much power my PC has for really not that much money. I don't need to be spending so much cash on cloud services when my local machine has more than enough horses to do everything I need.
I think the M1 is one more force towards the pendulum swinging back. I suspect that as developers port applications to ARM people will rediscover the benefits of native installations as new software starts to take full advantage of this new hardware.
This time, thanks to the "available everywhere" convenience of consumer apps on phones, it might not swing as far toward running locally, despite so much computing power being available at not-so-high prices.
Weren't the old exploits on Intel processors patched by essentially making many operations slower? As in, they had to disable or work around some previously innovative CPU features, which hobbled the processors that couldn't be fixed in hardware.
So a new architecture that delivers the same capabilities without those flaws would feel very fast, since we went two steps back first.
It means absolutely nothing. Because every GPU vendor has a different notion of what a "core" is. Some count the smallest parallel execution unit as a "core" and thus boast chips with thousands of "cores", some group these smallest units into larger units and count those as a "core", which results in smaller numbers of "cores" per chip. I would guess Apple is doing the latter. I don't think that it's possible to deliver the graphics performance the M1 is delivering with just eight "cores" in terms of smallest computational units.
Yeah; Apple claims 128 "execution units" and 24,576 concurrent threads (I assume something analogous to SMT going on there, though I don't know enough about GPUs to be sure).
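For what it's worth, the only "official" number you can get out of the machine itself is the core count Apple chooses to report; a quick way to see it (the exact wording of the output is from memory, so treat it as approximate):

    # Prints the GPU info, including something like
    # "Chipset Model: Apple M1" and "Total Number of Cores: 8"
    system_profiler SPDisplaysDataType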
I’m starting to wonder if this is the reason we had some serious problems with macOS and iOS in recent months and years. Serious bugs and serious security flaws.
The A-team was working on getting everything ready for M1. The B-team was working on the usual releases.
If the M1 is as good as everyone says, then that means they had the best people on it.
I am weirdly obsessed with this. I am burning to see the next iteration of Apple Silicon devices with an M1X or M2 chip and more ports and RAM for higher-spec devices and new MacBook designs even lighter and smaller than the current Air (bring back the 11"!).
I guess I'm just caught up in the hype and excited about movement in the chip design space after many years of Intel stagnation. Zen and the M1 are a breath of fresh air.
Waiting to replace a 2014 11" MacBook Air until the story about Parallels support is clear and maybe new MacBook designs are available.
I also have a 2013 Mac Pro trashcan-style on my desk at work. Until recently it was simultaneously available in the inventory system and marked as EOL. I'm not sure if the 2020 6-core Intel Mac Mini would actually be faster - maybe. I'm only a part-time iOS developer so I keep trucking along with the 2013 model.
You're not the only one - there's an apple silicon story on HN every day, so that says something. And I too look out for them. Just dreaming that M1 would be available in a different reasonable linux notebook.
I know very little, so perhaps someone could enlighten me. But I am curious how Apple Silicon will be for machine learning.
When Apple releases a MacBook Pro with 64GB of unified memory (assuming they will) — won’t that be amazing for machine learning? I am under the impression that GPU memory is a huge factor in performance. Also, is there any way that the neural engine can accelerate training — or is it just for executing trained models faster?
I wouldn't expect it to be particularly competitive in training large models. It's an integrated GPU with 8 cores, and the "neural engine" has an additional 16 cores. The kinds of discrete GPUs (mostly Nvidia) that people use for deep learning have more like 5000+ cores.
I think Apple is aiming more at either training small models, or running pre-trained models. For example Photoshop is starting to integrate neural filters, so NN inference performance can be important for some desktop applications.
I think it's pretty clear they're aiming at inference only. Training models on laptops is never going to be competitive. Might be fun for prototyping small models in PyTorch/TensorFlow though.
It is not. According to [1] M1 GPU can run "up to 25000 threads".
Comparing raw numbers between vendors is always tricky, but it looks like Apple's "cores" are more like Nvidia's 'Streaming Multiprocessors' (SM's), of which their cards have between 14 and 100.
M1 seems to perform similar to their older, mid-end desktop cards (1050 Ti has 6 SM's and M1 matches it in benchmarks).
I too am curious how 64 GB unified memory performs for training deep learning models. Even if speed isn't amazing, 64 GB is much greater than the 24 GB available in Nvidia's flagship consumer cards, which would allow for inputting larger images, bigger batch sizes, deeper networks etc. Also, will be interesting to see how all of the different cores are used.
No. This has been misreported. The RAM is on the same package but not part of the same silicon die. Basically the SoC is mounted next to the RAM on a carrier.
Probably not very good, but the recent Ryzen offerings are still pretty strong, as is the very newest Intel stuff. See for example the 4900HS & 4800U on this Anandtech chart comparing spec2017 vs. the M1: https://images.anandtech.com/graphs/graph16252/119365.png
That's basically just sorted by power consumption. 4900HS is the most power used & fastest, followed by the ~22W M1 with all 8 cores loaded, then the 15W 4800U, and then the M1 with just the 4 bigs was last.
The truth is desktop Zen 3 is only decently ahead in certain multi-core workloads at 10x or more the wattage; the rest is either very close or somehow behind. The mobile versions are going to be weaker still in comparison, and it's going to be nothing but a travesty if the best hardware stays vendor-locked to Apple only. Especially when the M2 (or whatever is next) rolls around, as Apple has been on a steeper trajectory with in-house silicon than Zen has been.
> The truth is desktop Zen 3 is only decently ahead of certain multi core workloads at 10x or more the wattage,
Zen 3 was never really compared to M1 in multicore workloads in that Anandtech M1 article. You linked a single-core one, and Zen 3 isn't using 10x the power in that single-threaded comparison, either.
Also the 27W TDP i7-1185G7 is keeping up with the also around 25w TDP M1 in spec2017 single-threaded. M1 is still faster, but more in the realm of a typical generational improvement.
And the thing to remember here is that the 4900HS and 4800U aren't just a process node behind, they're 2 full generations behind. Given the gains from Zen 3 it looks like AMD is in a good position to leapfrog the M1 with the next product cycle, not to mention gains from 5nm in the cycle after that.
You say that as if Apple hasn’t been delivering yearly performance improvements in their SoCs like clockwork. They’ve roughly tripled the performance in the last five years between the A9 and A14.
AMD has done a great job with Zen 3 in particular, but IMO they are going to really struggle to compete with what Apple is able to do with ARM.
Dunno, the Zen 3 cores are pretty competitive with the M1. Sure they use more power, so you'll need to make do with somewhat larger batteries and somewhat shorter runtimes. The performance per core is pretty similar, even if the M1 has the edge. I'd rather have a Zen 5000 APU (due at CES in Jan), 32GB RAM, and a 12-hour battery life and be able to run Linux than an M1, 16GB RAM, and 20-hour life and only be able to run OSX.
Similarly, for the Intel NUC-like form factors AMD products are hitting, I expect them to be pretty attractive with the Zen 3 APUs.
Sure I'd buy an M1, just not sure I want to switch to OSX, even the cut/paste inconsistencies drive me batty.
Pretty slim, ARM SoCs require a lot of work on the vendor's part to support Linux[1], and Apple has said that they won't support running other operating systems outside of virtualization.
There is a good chance of running it inside a VM, already shown at WWDC, so that only depends on Parallels/VMware releasing their updates for the M1. On the bare machine, it is somewhat unsure. Currently it is not possible, but it is not clear how difficult it would be. In any case, the question would be about driver development, as this is a completely custom computer. But as the Mini isn't expensive and this is a really interesting chip, I could imagine that a lot of Linux hackers are trying to get it running. Even Linus Torvalds expressed interest in the M1 MB Air.
Qualcomm has been at the Windows laptop market for a while and has yet to come out with anything nearly as performant. I'd like to see it but they've been behind for many years in the mobile space compared to Apple as well.
It's the same as what Apple did with wireless Bluetooth audio. There were already companies making it, but they didn't try their best. When Apple made AirPods, it defined them as a new category and every company started pushing their own take on it. Now we have plenty of good options to choose from.
ARM was typically making very conservative reference designs with PPA in mind, but now they have customers asking for higher-TDP chips on the PC side. And they are already making bigger cores, closer to Apple's, with the Cortex-X1 - https://www.anandtech.com/show/15813/arm-cortex-a78-cortex-x...
Apple isn't just now entering this space with the M1, though; it's been this lopsided versus Qualcomm for 5 years already. The change is that now even the best power-hungry x86 cores can't keep up with it. What Apple is delivering in the M1 continues to be 2 generations ahead of Qualcomm's best in terms of performance, and a +10% uplift on top of the usual +20% generational increase isn't changing that - Apple has been delivering that level of improvement consistently every generation, so Qualcomm can't "catch up" by doing the same in only some generations.
I'm generally not a fan of Apple as a company and own no Apple products, but I've long acknowledged that what they've been doing in mobile hardware has been miles better than what the competition has been doing - this isn't some sudden upset and entrance, just an on-pace continuation of what has been going on for years. I only wish it were decoupled from their software.
I am not really a knowledgeable person when it comes to CPUs, I just follow what Anandtech posts. And reading his comments on X1 it looks like ARM (and thus Qualcomm) are just being too conservative because there was really no business need for a non-Apple company to make such a large chip. But now there is, so it won't be impossible for them to catch up. Huawei's Kirin 9000 is already scoring close to A14 in multi core benchmarks and this is based on a year old A77 cortex design.
Qualcomm's best phone chip (the 865) has 56% of the single-thread performance of the A14 found in the iPhone 12. I'm just not following how, suddenly with the M1, all Qualcomm needs to do is make a chip of the same size and it will be just as performant. The disparity in performance isn't new with the M1; Qualcomm ARM CPUs have been slower than Apple ARM CPUs for nearly a decade now. What was amazing about the M1 is that it beats even the best power-hungry x86 chips in many single-thread tasks as well, not that it suddenly jumped ahead of Qualcomm (Apple was already ahead of Qualcomm). Also, the business case isn't new: Qualcomm has been trying to displace x86 laptop chips for years and has fallen short on performance every attempt. The M1 does not present a new challenge or new opportunity in this space, just better execution.
Firstly, single-core scores aren't the be-all and end-all of a CPU. Secondly, the Kirin 9000 gets 3700 multi-core vs 4000 for the A14, so quite close to Apple with a year-old Cortex-A77 design. ARM tried making their cores larger with the X1 and it resulted in a 30% improvement in the first iteration itself, so it's not crazy to think there are many unrealized gains to be made by ARM and others. Second-mover advantage is a thing. And lastly, Qualcomm isn't the sole flag bearer of ARM chips. Nuvia, Ampere, HiSilicon and even old Mediatek & Samsung could take their crown.
I'm curious about the Neural Engine cores. What software uses them? Why would I want to buy a neural network coprocessor in my machine, instead of using that money for a better CPU/RAM/SSD?
Not really, except for maybe integrated Intel graphics (switching monitors is the bane of my life).
That being said, getting the same or better performance as a high-end "hair-dryer" MacBook Pro in the form factor of a fanless MacBook Air, at that price point and with increased battery life, is the huge draw IMO.
Until now, we've never had that kind of high-end performance in such a small, quiet and inexpensive form factor.
I wonder how fast the M1 feels compared to, say, KDE on a high-end Intel machine. Is it really that much faster than anything, or just faster than what people are used to with OS X and Windows?
My colleague has a 4-year-old Dell laptop connected to 2 monitors doing all kinds of CPU-demanding work while keeping his 100+ Chrome tabs open. I've never heard the darn machine's cooling fans spinning.
I do take slight issue with the section about RAM performance. The idea in the article is that M1 Mac runs software that uses reference-counting instead of garbage collection as its memory model (i.e. Objective-C and Swift software). Two issues…
1/ Sure, but that's also the case with x86 Intel Macs. They're still running reference-counted Obj-C and Swift software for the most part. So how is this an M1 differentiator?
2/ Also, Macs run plenty of software that mostly uses garbage collection, e.g. any Electron app (Spotify, Slack, Superhuman, etc.) is mostly implemented in GC'd Javascript. There's also plenty of software written with other runtimes like Java or implementing a GC.
So this does nothing to explain why 8GB of RAM on an OS X device with an M1 chip is better than 8GB of RAM on an OS X device with an x86 chip from Intel.
IMO the release of the M1 will be comparable to the release of the original iPhone, but for the Mac brand (in terms of profitability and long-term returns). Just saying.
It's quite clear how this "magic" is possible.
Whether you like it or not, the future is a "product on a chip". That includes everything: CPU, GPU, RAM, SSD, and assembly instructions for vendor-specific things like NSObject. This puts everything physically close together (efficient), eliminates all the protocol compatibility overhead (efficient), removes all the standards the company can't control (efficient).
The downside, of course, is that this will be the ultimate vendor lock-in, which is hard to compete with, and can't be serviced by anyone else.
The upside is that the alternatives will always remain relevant.
It's not a sustainable model. Apple relies on the rest of the market being open and interoperable for their products to be useful (even the fastest computer is useless without content). If every competitor turns to monolithic solutions, they all lose.
Putting the CPU and RAM on the same die is absolutely not the future. Not only is it impossible to mix and match process technologies in that way, it would be extremely wasteful given the far greater numbers of metal layers on a CPU and the performance improvement would be marginal. The same is even more true with integrating CPU and NAND flash.
Pretty sure the M1 memory is not on the same die, just on the same "chip". You can see a separation in the teardowns and Apple's advertising materials don't show it on the die images.
Yeah, I was wondering forever how they were emulating x86 faster than native x86; it's because they brought the RAM in-package. But "the ultimate black box" is a better term than vendor lock-in. If they are gutting all the common stuff, I wonder what dragons are lurking. This is the kind of design change that introduces something like Meltdown, in my opinion. Maybe macOS can hide these things, but there are definitely some hardware issues in there; it's too many moving parts for there not to be.
Heh, I'm in the 2020 MBP group. But I still think these M1 Macs--as awesome as they undoubtedly are--are not quite yet ready for developers. Virtualization support still looks iffy, and while I really don't care much whether my container is Arm or x86, the ecosystems and tooling around those architectures are still at different levels of maturity.
The exciting (or depressing, for the 2020 crew) thing is that the fact that these M1 Macs are such a triumph probably does mean that these various rough edges will be smoothed out way faster than even I, a self admitted Apple fan, expected them to be.
I was thinking I'd be waiting until 2022 or 2023 before being ready as a developer to have my primary device be an Apple Silicon laptop. But with the overwhelming success of these chips, every developer wants to be on these things ASAP. I could easily see the ecosystem for Arm being radically improved over the next year.
I've been pushing my virtualization to cloud providers or my in-home VM beast anyway, so that didn't matter to me.
What did make me faster though is compiling Rust is now soooooo much faster that it flies. I've been building some toy projects and the edit, compile, test cycle is now shortened so much that I find myself enjoying hacking on my projects more because there is no dead time while the compiler does its thing.
That alone is hugely important.
webpack is also faster, as is a lot of other things that don't require virtualization or docker. I bought the 13" MacBook Pro M1 to supplement and use occasionally alongside my 2017 MacBook Pro 15", but I find myself not having touched my 2017 MacBook Pro at all because I keep grabbing the 13" MacBook Pro.
It's incredibly fast, and the battery life is amazing, which allows me to not worry about where my charger is, or whether I am comfortable on the couch and damnit now I need to get up and plug in.
I bought my MBP for personal projects, in which I don't use Docker or any other virtualization-based tooling and I could live with 16GB RAM. I expect the kinks with things like Brew to be worked out in a matter of weeks. So I think these would do fine for my purposes
I've had more success using MacPorts (https://macports.org). About 80% of what I've tried has worked. Most things that don't work, you can download source and build it yourself. Autoconf, automake, cmake, pkg-config etc work via MacPorts.
It's not all bad. 2020 non-M1 MBP is definitely the safe bet.
M1 machines have a ways to go if you're going to use one as a developer. I hope the kinks are worked out by the time the 16" M1 MBP gets released later next year. But I'd certainly get a non-M1 machine if I were replacing my daily driver today.
"Luckily" I recently bought a 2015 MBP (to replace my 2015 Air) and had been set to move to Linux next. Now this recent news make me want to reconsider.
I bought a pretty heavily specced (not top of the line, but close) 16" MBP in April. Kinda kicking myself... and aggressively insisting to myself that the screen real estate matters a lot to me. Damnit.
Don't kick yourself. Versus the MBP16: RAM is capped, USB-C ports are scarce, GPU is not discrete, software compatibility is still thin, and screen resolution is less. Not to mention you have better speakers on your machine.
I have a top of the line MBP13 delivering in mid-December. I could cancel the order at any moment, but I'd rather have ports, software compatibility, and RAM.
M1 may benchmark well, but it isn't an EMP that magically disables every Intel machine on the face of the planet.
I'm excited to see what happens when Apple rolls out its own silicon for the higher-end MBP devices.
Have done the same in December 2019.
I'm telling myself I really need 2 external monitors, as that's what I use at home. The M1 machines only support one external monitor (even in clamshell). Not a lot of people seem to be mentioning this.
I’m curious if/how Apple Silicon will compete in the server market.
Apple certainly isn’t known for producing cost-effective servers, but if they really possess technology that leapfrogs commodity hardware, they’d be crazy not to use it in every market possible, right?
> looks at 2010 MBP that's working, but starting to have firefox issues.
... This may be the actual upgrade moment. I just hope Time Machine is compatible, likewise old (CS5) versions of Photoshop, since I'm not into this subscription BS.
I've seen lots of M1 benchmarks, but has anyone done a side by side comparison of what it is like to actually get work done on one?
Take a conventional dockerized local dev environment and just start building stuff. How much time do you spend working around M1 arch issues versus building your app?
This is the key factor that is keeping me from being an early adopter. I don't get paid to figure out how to work on a new chip architecture, in fact I pay a lot of money to not have to think about those problems at all.
I just realized thanks to this article, that the x86 memory-model emulation is not only available in the M1, but has been present going back to at least the A12 series. Apple has been planning this a long time.
I have an iPad Pro with keyboard and a 15-inch MacBook Pro I got when my previous MacBook Air couldn’t handle the video editing I was dabbling in.
I desperately want one machine (and probably in the iPad form factor) but I don’t know if Apple is ever going to get me there.
What say you HN? Is there a future where I can have a single machine? Any other suggestions for what I can do about it today? I’ve test driven Surface computers in various flavors from friends of mine and I really can’t get down with Windows. Am I doomed to carry two machines with me all the time?
What is stopping you from selling your iPad and having a single machine right now?
I have personally avoided laptops for years because I hate managing more than one machine. The one time a year I need my laptop, it's out of commission for a day while updates install or whatever. For that reason, I have a desktop and take an iPad Pro when I'm going to be away from home. The iPad is no computer replacement, but it can SSH places and can do enough work to justify not owning a laptop. More importantly, it can't have any configuration done to it. But if you already have a laptop, I would just skip the iPad entirely. The laptop is your "one device".
(Would I want a mobile processor as my primary means of doing work? Absolutely not. But, people seem to make it work... at the cost of being blown away by a new processor that's slower than HEDTs they could have been using for a few years ;)
I use the iPad for some work (editing photos, email, etc) but mostly I use it for entertainment. I’m using it now for web browsing, I watch TV/movies on it as I don’t have a TV and don’t always want to use my projector/screen setup, and I do some light gaming on it occasionally.
The laptop is for work, which is photo editing (again) video editing, design work, word processing, light web development, and various other business needs (accounting, spreadsheets, etc).
I’ve tried to switch to iPad only so I don’t have to cut out entertainment completely but almost everything work related is slower and more annoying on the iPad. So I end up with both.
As another commenter above mentioned if the iPad ran OSX I would probably just have the iPad, but here we are.
Agreed! I seem to remember a company that was hacking up MacBooks to make them into touch screens at one point and I totally thought that was the inevitable future we’d be living in. Maybe it’s still in the future, but I’m tired of waiting.
Indeed. My wife has a work laptop with an i7, 16GB RAM, fast SSD and 802.11ax networking. Just starting it up in the morning and going through Outlook email takes longer than it did 10 years ago on 2GB Core 2 Duo laptops.
It seems most of the hardware these days is used to run Windows/software updates and corporate security theatre malware.
I'm curious that the A12Z DTK Mac mini shows a moderate Geekbench score under Rosetta 2. Does this mean some of the improvements are not M1-only, but already present in the A-series processors?
I guess the good thing for those of us who can't afford a new mac right now is that the used market will be flooded with recent models at a cheap price LOL.
This is true. I think picking up a now last gen Mac Mini as a HTPC would be sweet. Upgraded RAM could make it a home server beast. I don't see Apple abandoning the Intel support until at least Q1 2023, and even then, the types of stuff I would use it for in a HTPC setup would still be compatible well past then (unless some new video codec or something comes out that isn't compatible? Idk how those work).
Call me dumb, but I didn’t realize they had done so much memory optimization to make the physical 8GB of RAM so effective. I saw a very much lower number than I expected and just assumed it wouldn’t handle memory intensive workloads well. As someone who develops web tech my entire life revolves around crushing RAM, now I think the M1 may actually result in big gains for my workload hrmm
Same here, I guess after years of the iPhone specs being lower than Androids when it came to things like RAM but still crushing Android phones should have clued me in. I figured that with 16GB max that the new computers would be a non-starter for me but from what I've seen I was wrong about that. Unfortunately the monitor limit IS a non-starter but I'll be first in line for the M2/M1S/M1Z (whatever they call it) in next year's new 16" MBP (assuming I can continue to drive all my monitors).
Damn, I knew these are fast (from reviews, from pros), but I was happy with my mid-2012 MacBook Pro and all its modularity, even if it can't handle 100 tabs or 7 apps open simultaneously, and lasts 3-4 hours max without charging.
But man, these "user" reviews are driving me towards a purchase, and it's going to punch a hole in my wallet!
Despite the confusing Apple terminology, the RAM on M1 is not on the chip, it's on the package. Similar things have been used in x86 laptops in the past, just find any laptop that uses LPDDR4.
I think the biggest reason this has such a huge performance advantage is that the 8/16GB RAM is built into the chip. Modern CPUs are mostly limited by memory bandwidth rather than compute performance, and this has way more memory bandwidth than competing CPUs because of the tight integration. This RAM might also be clocked much higher than what we're used to, because there is no long bus to the CPU anymore.
The downside of this SoC design, though, is that while you can fit 8GB or 16GB on a chip, it might be difficult to fit more. The 8/16GB limit might explain why this design is reserved to the smaller laptops for now, and they haven't replaced x86 in all of their lineup. If you want more RAM than that, then you would again be stuck with less memory bandwidth to your external RAM. You would maybe end up with a design where some applications are kept in the internal RAM and some in the external, or where your internal RAM acts as an L4 cache to the external RAM.
It's not that difficult IMO for Intel or AMD to replicate this with an x86 design. They might not have to modify the CPU core and its caches that much, mostly the memory controller. How much time they would need though, I'm not sure. There's some probability they were aware that this was coming and already had something in the works. Otherwise, maybe one or two years?
This has been misreported. The RAM is on the same chip carrier as the SoC but not built into the same silicon die. The memory bandwidth is reportedly ~68 GB/s which is good but not exceptional.
I do think it is too early to count out the competition. To be sure, the Air and 13" Pro are ultrabooks, and this seems to be a great fit for the great performance being extracted while sipping battery life.
On the other hand, it's not a foregone conclusion that higher performance systems will dominate the competition. For most Apple-focused users, this is a comparison between overheating Intel chips shoved into Macs, and these slick new M1 chips, and it's a no-brainer. And yes, the M1 does compete very well in several benchmarks on a per core/instruction per clock basis. But the higher end "many" core CPUs (especially outside the Intel universe cough AMD) with much higher TDP are still achieving much higher overall performance. Apple still has to catch up to them in those markets.
And for many people, right now Apple systems just are not an option. Not until all software and games run without issue. Software is a near certainty - games remain to be seen, but Apple has not prioritized them in the past. (And while the M1 integrated graphics are great, they are not at all competitive with existing dedicated graphics, and will not be suitable for replacing gaming systems.)
> But the higher end "many" core CPUs (especially outside the Intel universe cough AMD) with much higher TDP are still achieving much higher overall performance. Apple still has to catch up to them in those markets.
Did you see the AnandTech review of the Mac mini with the M1?
> The M1 undisputedly outperforms the core performance of everything Intel has to offer, and battles it with AMD’s new Zen3, winning some, losing some. And in the mobile space in particular, there doesn’t seem to be an equivalent in either ST or MT performance – at least within the same power budgets.
> What’s really important for the general public and Apple’s success is the fact that the performance of the M1 doesn’t feel any different than if you were using a very high-end Intel or AMD CPU. Apple achieving this in-house with their own design is a paradigm shift, and in the future will allow them to achieve a certain level of software-hardware vertical integration that just hasn’t been seen before and isn’t achieved yet by anybody else.
> And for many people, right now Apple systems just are not an option. Not until all software and games run without issue. Software is a near certainty - games remain to be seen, but Apple has not prioritized them in the past. (And while the M1 integrated graphics are great, they are not at all competitive with existing dedicated graphics, and will not be suitable for replacing gaming systems.)
AnandTech's section is called "M1 GPU Performance: Integrated King, Discrete Rival", which should tell you what's up. Spoiler: it's more than competitive with dedicated graphics, and certainly none of them can touch it when it comes to power consumption. And remember, this is the low-end chip; wait until we see what comes in the next generation.
> Finally, putting theory to practice, we have Rise of the Tomb Raider. Released in 2016, this game has a proper Mac port and a built-in benchmark, allowing us to look at the M1 in a gaming scenario and compare it to some other Windows laptops. This game is admittedly slightly older, but its performance requirements are a good match for the kind of performance the M1 is designed to offer. Finally, it should be noted that this is an x86 game – it hasn’t been ported over to Arm – so the CPU side of the game is running through Rosetta.
> At our 768p Value settings, the Mac Mini is delivering well over 60fps here. Once again it’s vastly ahead of the 2018 Intel-based Mac Mini, as well as every other integrated GPU in this stack. Even the 15-inch MBP and its Radeon Pro 560 are still trailing the Mac Mini by over 25%, and it takes a Ryzen laptop with a Radeon 560X to finally pull even with the Mac Mini.
That's showing a 45W AMD Zen 2 mobile chip outperforming the Macbook Pro M1 11k to 7.5k in Cinebench R23.
Apple Silicon M1 does not outperform "many"-core, high-TDP processors. Yes, it has amazing IPC, but it has 4 performance cores and a low TDP. I didn't dispute that. It's really incredible. But I'll repeat that it's not a foregone conclusion that they can translate that into higher core counts, higher overall performance, and an outright win that embarrasses AMD. Maybe they will - we are just putting forth conjecture. Extracting all that performance with low power usage is certainly winning half the battle, and making bigger chips with more cores will get Apple a long way. It remains to be seen, but it's far too soon to say "the rest catch up..." when it comes to those markets.
And I mentioned gamers and dedicated graphics - you countered with a comparison with the 2018 Mini at 768p resolution and a Radeon Pro 560? Meanwhile gamers are playing 2k and 4k games with RTX 3000 series and RX 6000 series cards (if they can get their hands on them.) And many games simply cannot be played on macOS.
Neither the M1 nor the above crazy expensive power-hungry graphics cards are a one-size-fits-all solution. And it may come to pass that Apple starts to compete in those areas and really does embarrass all competitors. But it hasn't happened yet, so I think such proclamations are premature.
This is not only Apple. All modern mobile ARM processors, including the ones used in Android phones, are far ahead of Intel in TDP-to-performance ratio, almost by an order of magnitude. Just make bigger ARM chips with more high-perf cores, and they will destroy Intel.
Not even close to true. Apple's ARM processors are unlike anyone else's, and everyone else's ARM processors are not particularly impressive. The power/performance isn't really there, and scaling performance up is not linear, either. You can't just "make it bigger" and retain the same power/performance ratio you had. The 1W power draw that the 'big' ARM cores target, like the Cortex-A78, isn't even that special. You can run x86 cores at 1W/core all day long as well. How it performs at the power level is the question, but the ARM cores don't really perform all that well. See for example the slaughtering that is the 64-core Graviton2 vs. the 64-core Epyc Rome: https://www.phoronix.com/scan.php?page=article&item=epyc-vs-... (spoiler, the x86 chip has an overall performance lead of 50%, and they really aren't targeting that different of a power budget)
But even the M1 isn't an order of magnitude ahead on power/performance ratio. It's the leader, but it's sure as shit not 10x faster for the same power draw.
Android phones with the Qualcomm Snapdragon 865 score better than an Intel i7-7700HQ (4 cores, 8 threads) on Geekbench multi-core and tie on single-core, while using an order of magnitude less energy.
While this i7 model is not new, it's not the low-voltage "U" processor version either.
I could see Qualcomm getting fancy and trying to scale their processors up at 5nm.
The last time the i7-7700HQ was sold in a Mac was for the mid-2017, 15" MBP. The current gen (late-2019) base model 16" MBP uses a i7-9750H.
Single core performance is ~17.5% better and multi-core performance is ~51.2% better than the 865. The numbers are still surprisingly close given that the Snapdragon uses significantly less power.
Qualcomm hasn't been able to "just make bigger ARM chips with more high-perf cores" and destroyed Intel for many years now - and not for lack of trying to displace Intel in the laptop market. Same with other ARM manufacturers. Apple did something unique with M1. I don't know what it is but at their current rate of improving in-house silicon the M2 is likely to be the best performing single thread chip in the world regardless of company, architecture, or power envelope. And that's not done by being the same as everyone else except the clever idea to "just make it bigger".
I am thinking about getting this thing mostly to ssh into a Linux server. I would like to run emacs on the server and have its display bounced back via X to the Mac. Is this practical? I tried XQuartz on my wife's Mac but the fonts looked like crap.
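It works, with caveats; a minimal sketch of the usual setup (hostname is a placeholder, and how the fonts look depends a lot on how the remote emacs was built):

    # On the Mac: install XQuartz (https://www.xquartz.org), then forward X over ssh
    ssh -Y user@linux-server
    # On the server: launch the GUI emacs; the window comes back through XQuartz
    emacs &

Many people end up preferring emacs -nw in the terminal, or TRAMP from a locally running Emacs, since X forwarding over anything slower than a LAN gets painful.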
VLIW and other advanced designs never went mainstream in part because the AOT on install/JIT everything future never arrived. But with the success of Rosetta 2, has that future finally arrived?
The M1 Air is cool as a cucumber. I've tried everything in my power to heat it up, nothing works. The closest I've gotten is slightly warm when running the Cinebench benchmark.
Apple does use TSMC to build the M1 itself. So it's not quite bottom to top like the early computers up to the early to mid 90s, when a number of the workstation companies (eg. DEC, IBM, HP) still had their own fabs to make the CPUs.
Like, nothing. They design the whole computer, from chip to OS, but they own no factories.
Which is pretty remarkable, when you think about it. If you went back to the DEC era and said that the most valuable vendor of computers in the world would do no manufacturing in 2020, not many people would buy it.
I don't understand this comment. The new M1s all have 16GB of RAM integrated with the SoC, and it can't be upgraded. If you had 32GB on your old machine and didn't need that much, that's kind of on you, right?
The M1s have 8 GB standard, 16 GB is a $200 upgrade. The parent is stating that 8 GB is actually fine and that people will be paying to upgrade to 16 GB unnecessarily.
Modern OSes and apps make aggressive opportunistic use of available memory for caching and other performance-enhancing purposes. A side effect of this is that however much RAM you put in your machine (within reasonable limits), it always looks as though you're using the majority of it once you're running a few apps. Hence the large number of people who think that they "need" 16GB of RAM when in fact they are just getting a small to medium performance boost from it.
You can see a concrete example of this here: https://www.youtube.com/watch?v=PP1_4wek4nI The 16GB Macbook Pro is "using" more memory than the 8GB model to run the exact same tasks.
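If you want to eyeball that on your own machine, here's a rough way to do it (the interpretation in the comments is mine, not an official breakdown):

    # Page-level memory statistics; counts are in pages, the page size is printed in the first line
    vm_stat
    # "File-backed pages" is mostly cache the OS can drop on demand;
    # "Anonymous pages" plus "Pages occupied by compressor" are closer to real application demand.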
DHH tweet quoted in the article: You don't sit around thinking "oh, browsing the web is slow on my decked-out iMac", but then you browse with the M1, and you're like, DAMN, I can actually feel that +50%.
I wish I could feel excited about this, but my first thought is that web developers probably have a huge wish list of things they're ready to unleash to eat up that 50%. I was expecting to get a nice 5-7 year lifetime from my early 2020 Macbook Pro. Maybe I should revise my expectations, especially if more and more desktop apps are going to be built on web technology.
I just got one. I’m blown away by the speed as well. Chrome runs insanely fast! Alas, it’s not developer ready yet. Brew is a mess. Docker doesn’t work. PyCharm is WIP although can use x86 version. I was skeptical of the hype but this little laptop has made me realize how slow everything else is.
Unfortunately, while the hardware has accelerated far beyond expectations, the software - specifically macOS Big Sur - is a major step backward. So many fucking animations. Everything feels like operating in molasses. The UI changes seem to be shoehorned into a desktop that doesn't need giant white space for fat fingers. Menu bars are twice as tall, taking up precious space. The top bar was already crammed with a lot of icons; now they've made them sparsely spaced by adding padding between the icons. Everything is baby-like, with rounded corners and without borders. Segmented UI controls are no more. I want to ask Apple's UI team: WHY!? What is currently wrong with the macOS Catalina UI? Until you can satisfactorily answer that, there shouldn't be any change. Stop changing the UI like you're working at Hermès. It's not fashion. If the reason is to unify everything across all screen sizes, then you're sacrificing all three platforms. Perhaps making it easy to develop apps for all 3 platforms is a plus, but as a user, this all feels like a regression. I've lost hope in modern UI engineering. It's not engineering anymore.
I want macOS that has a UI of Windows 95. That would be totally insane on Apple Silicon.
> Stop changing the UI like you’re working at Hermès. It’s not fashion.
Of course it is. Our phones are intimately close to us. Physically, cognitively, socially and even emotionally. They may be the most widely-owned intimately-connected object humans have ever invented outside religion.
Our computers don't occupy as close of a niche. But they're in a similar space.
I agree with your observation that the new OS feels like molasses. I wish they went for a "snappy" feel. (Though keyboard shortcuts get around that.) But ignoring that Macs and iPhones are objects of fashion as well as computing devices misses a deep part of what Jobs saw that technologists missed.
I'm surprised they're getting rid of long held keyboard shortcut conventions though. Previously, modal popup options (i.e. for saving docs) could be selected via Option-letter, i.e. "Save _a_ll" could be selected with Option-a. Their new popups which may or may not be prettier force you to either use the mouse or turn on "Use keyboard navigation to move focus between controls" globally and tab around. Neither of these options is as elegant as being able to select what you want with a single command.
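For anyone who wants that global setting without digging through System Preferences, this is the defaults key the checkbox maps to as far as I know (may need a logout or at least an app restart to take effect):

    # Same as ticking "Use keyboard navigation to move focus between controls"
    # in System Preferences > Keyboard > Shortcuts
    defaults write NSGlobalDomain AppleKeyboardUIMode -int 3
    # With it on, Tab cycles the buttons in dialogs and Space presses the focused one.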
I agree: this kind of thing is infuriating because it's such a slowdown to have to go touch the mouse and find the pointer on the screen. Really interrupts the workflow.
The stopwatch consistently proves the mouse takes less time than using the keyboard. Apple's HCI research showed that people "lose" the time it takes to hunt for and recall the keyboard shortcuts.
Assuming you're referring to https://www.asktog.com/TOI/toi06KeyboardVMouse1.html, I find this dubious for a number of reasons and would like to see Tog's methods and data. For example, if I want to cycle through undo, I can hold down command and hit Z and Shift-Z as many times as I want, as quickly as my fingers can press the keys. Acquiring the menu item for this over and over would get tiresome quickly. Similarly with opening new tabs and switching between them. I suspect his data only holds for inexperienced users using typical shortcuts in typical applications, not for expert typists who spend all day with their hands on the keyboard. The amount of complicated text manipulation I can do at speed with keys in emacs and vim would be absolutely infeasible if I were clicking through some massive menu.
I always found that study unconvincing. I am sure there are people who need to look at the keyboard but surely anyone who leaves their hands on the keyboard all the time and never looks down finds the keys instinctively.
In Apple's defence, I've been exclusively a mac user (work and personal) for going-on eight years now and I had no idea about the Option-letter shortcut. I've always found keyboard nav within modal popups confusing. I consider myself lucky if hitting Space selects the primary button. Doesn't sound like the change is necessarily an improvement, though.
Even Windows 3.11 and NT 3.51 did this... and my Atari ST... probably even Windows 1.0, because a mouse was not always guaranteed to be attached and/or working on early PCs. In my case, in the mid 90s I remember sacrificing the serial port the mouse was attached to in order to run another modem for dial-in access to our office NT 3.51 server... THAT taught me fast how to get around a Windows desktop with the keyboard - and it impressed me that you could access everything - switching windows, menus, etc. - without getting "stuck". With a Mac, your computer would be useless, or at best infuriating (i.e. using some plug-in to slowly nudge the pointer around the screen with arrow keys...).
Apple never did a good job with this. Probably because their GUI machines always came with a mouse. Microsoft Windows was always mouse optional; the 'settings' panels are a lot harder to use with a keyboard than the 'control panel' settings were though.
Agreed. I wrote a whole post on Apple and fashion. I'm sympathetic to the complaints about Apple's obsession with aesthetics, but the obsession is certainly motivated by what people want.
> but the obsession is certainly motivated by what people want.
That may be true, but it would be nice if businesses were also keen to give us what we need, not just what we want... that's how you get "faster horses" vs. the automobile.
For the most part, you are correct. Yes there are design trends, but that isn't the primary factor here. Staff designers have to justify their existence by making changes to established design patterns. They don't have to be good, just different.
Bonus points for following a public design trend, but so long as the visual diff is big enough, you get your pay check.
>> Stop changing the UI like you’re working at Hermès. It’s not fashion.
> Of course it is. Our phones are intimately close to us. Physically, cognitively, socially and even emotionally. They may be the most widely-owned intimately-connected object humans have ever invented outside religion.
"Phone" =/= "UI"
but that said, I'm actually puzzled as to how "fashion" plays such a central role in your model of human society. Even if UI was the fetish object (which it is not) in what sort of cultural matrix does the core fetish object mutate constantly "like fashion"?
I think Apple continues to push the manufacturing precision of their devices. Some of the bezel gaps and tolerances are impossibly precise and tight. That's a positive change on the axis of luxury.
There is an orthogonal axis - which is functionalism. You can continue to make products that are both luxurious and functional. See Olivetti (Sottsass), Braun (Rams), Herman Miller, Vitra, USM, etc. I think they are pretty fashionable products if we considered their popularity and some of them are in MoMA as iconic designs. Apple seems to be going negative on the axis of functionlism past few years. What if I told you that you don't need to make something look ugly to make it functional, utilitarian and usable?
I think they can make the UI very marketable + functional if they hire the right folks. In fact, Apple is going back to skeuomorphism (hey, it's called neumorphic UI now). Have you seen the battery icon? This kind of trend chasing without any purpose is my main gripe. Also, the Windows 95 comment was sort of tongue-in-cheek, to exemplify how bloated modern operating systems have become.
I've installed brew both in the historical /usr/local location as well as the future home of /opt/homebrew. I then created these two aliases:
alias armbrew="/opt/homebrew/bin/brew"
alias intbrew="arch -x86_64 /usr/local/bin/brew"
My PATH selects for programs installed in the /opt/homebrew location first and then /usr/local. I try to install with the ARM version first with `armbrew install -s <PKG>` and if it fails, I move to using the `intbrew` alias as normal. I haven't really had any issues.
It's obviously still messy but not in a way that is too bad!
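Day to day it ends up looking something like this (jq is just a stand-in package):

    # Try a native build first, fall back to the Rosetta prefix if the formula isn't ready yet
    armbrew install -s jq || intbrew install jq
    # Check which architecture actually got installed
    file "$(command -v jq)"   # arm64 = native, x86_64 = running under Rosetta 2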
I've been using MacPorts instead of Homebrew. It seems further ahead on the Darwin AArch64 transition. About 80% of things that I've tried have worked without trouble. Unfortunately, ffmpeg wasn't one of them. I got that working yesterday with libx264 and libx265 integration. Oddly, libx265 installs fine under MacPorts but libx264 doesn't. But I was able to download the latest from git and build it and copy it manually to /opt/local.
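In case it saves someone the same detour, the manual x264 build was roughly this; the prefix and flags are what I'd expect to work rather than gospel, so double-check against the port's variants:

    # Grab upstream x264 and install it into the MacPorts prefix
    git clone https://code.videolan.org/videolan/x264.git
    cd x264
    ./configure --prefix=/opt/local --enable-shared --enable-static
    make -j"$(sysctl -n hw.ncpu)"
    sudo make install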
This certainly works but then everything I install runs in Rosetta2. I'd prefer to run as much native as possible which is what my "two brew" solution gives me.
It’s pretty trivial to disable most animations (and more importantly transparency!). I’ve been doing that on new MacOS installs since Jaguar and it only takes a few minutes. If you want to move quickly you’re probably already using keyboard shortcuts and ignoring the dock and toolbars.
For less technically adept users (ie. most users) the animations and spacings mostly seem to help them understand what’s going on. I know everyone has their preferences, but I don’t really get the level of griping that accompanies every release.
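The usual knobs, for the curious. These are the commonly passed-around defaults keys; how much each one still does varies by macOS release, so treat them as a starting point rather than a guarantee:

    defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false  # window open/close zoom
    defaults write NSGlobalDomain NSWindowResizeTime -float 0.001                 # sheet/resize animations
    defaults write com.apple.dock autohide-time-modifier -float 0                 # Dock show/hide delay
    defaults write com.apple.finder DisableAllAnimations -bool true               # Finder animations
    killall Dock Finder   # restart Dock and Finder so the changes take effect

Transparency is under System Preferences > Accessibility > Display > Reduce transparency.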
The animations on the iPhone are what made me return to Android.
On iPhones, you can only "reduce motion", which still has the "moving through molasses" feeling, since it just replaces the animations with a fade in/out. Can you truly disable most animations in macOS, or are they simply replaced with fade in/out animations?
Wasn't it the case that they used to prevent touch interactions until the animation finished? I remember moving to iOS from android in 2017 and being really annoyed at that, but they fixed it in like iOS 11 or 12.
They used animations as loading times, not bad UX to prevent you from staring at a blank screen, or waiting for an app to open, thinking the phone didn't register your action.
While I won’t go that far, arguing that macOS with a Windows 95 UI is a good idea, I do agree that Big Sur’s current UI is a mess.
Traditional macOS users valued the Mac user interface deeply, and IMO that was why, while iOS 7 got a big refresh, macOS got a much smaller one, with only flatness refinements. Big Sur feels like the iOS 7 moment for the Mac, and I’m very sad that I’ll have to wait at least 4~5 years to see the new interface improve to a better state.
Some of my key annoyances (except for bugs):
* The new control center now requires more clicks, but due to the padding getting bigger, I can’t put all of the shortcut icons in the menu bar
* The new control center and notification center’s UI is so foreign from other parts of the macOS. It’s just… so custom.
* There are now multiple variants of the title bar thickness, and Apple's own apps use all four. It just feels ugly. The title bar of the Photos app and the Calendar app are almost the same — why does one use a .unified one and the other a .unifiedCompact one? And why does Safari use a .unified one when it doesn’t really have any information to convey?
* This is from Catalina, but the NSSwitch stuff collides with checkboxes… but well I’m guessing that’s for Catalyst trying to look native
I can go on and on… but my general feeling of Big Sur’s UI is that it would take multiple years to make this better.
I don't even care about the UI flourishes and whatnot. I just want them to observe their own HIG and prevent 3rd parties from abusing it; that would be a huge win.
It used to be that focus stealing on OS X was a cardinal sin and now no one cares. If they only fix a single thing in the entire OS, it should be this.
This is actually how I feel about modern iOS too. Why is it a swipe here, a modal there... none of it has a rhyme or reason that makes sense to me. OK, to get what I want here, do I need to force press, swipe, or tap some icon somewhere? There’s no consistency, everything is hidden, and each little thing you learn to use the OS better is a “trick.”
On my personal iPhone, you swipe from the bottom to get Control Center, and on my work iPhone, you swipe from the top corner. If you want to send the output of one program to the input of another, you click share, move past a bunch of Contacts and Airdrop etc that I don’t think I’ve ever used in this way, swipe left/right to find an app, don’t find it, and either need to click an “Add” or “Other” button OR you need to swipe further down to click More..., until you can select thing you actually want.
I have so many gripes with the whole system. I almost believe that they are trying to make the thing harder to use so that they can create a dark pattern around feeling a sense of mastery... but when I was trying to walk an elderly relative through the menus over the phone, it became especially obvious just how much specialized knowledge the iPhone requires in order to do the very most basic of things, and almost NONE of it is discoverable.
> Traditional macOS users valued the Mac user interface deeply
I'm on the developer side and I don't know if I count as traditional macOS user but I've personally bought 3 macs (including the current M1 one) and use another mac from my company to work.
IMO I have almost zero interaction with the Mac user interface. My time is spent either in a terminal or in a browser. There's little need to 'interface' with whatever UI the Mac comes with.
I love mac mostly because of its hardware form factor and its shell. I can't tell you any GUI gimmick despite using it as my main driver for years.
The thing is, I just don’t see the problem. I upgraded from Mojave last week, noted that the UI is slightly different, and… that’s it. Everything still works fine, the UI looks a little more similar to the iPad, and I figured that was probably the main reason - since people will be running iOS apps on Macs going forward, there’s been an effort to introduce some more consistency. I find it incredibly hard to get worked up about it.
I have been on Big Sur for a week now and I don't see the issue either. Things like Control Center are a huge addition, and the iPad-like changes were mostly to border radii and colors, unlike the Windows 8 disaster.
That's not really on topic. What Linus wants to use personally has little bearing on what architectures Linux will (eventually) support.
Edit, to clarify: I mean, Linus is saying he wouldn't go to the trouble personally, not that he would reject a patch adding support. Those are two very different things.
How is "Apple may run Linux in their cloud, but their laptops don't ;(" not on topic?
Edit: It's not like Apple has any (commercial) interest in working with the Linux kernel developers to ensure support/compatibility even if the M1 is ARM based.
Edit 2: Don't get me wrong, if Apple announces they will support Linux I'll run out the door and buy one now, as a MacOS user.
Edit 3: And this is a criticism against Apple running Intel x86 chips. Do you really think the M1 future is looking rosy for Linux?
Do you really think the M1 future is looking rosy for Linux?
I think fairly soon, we'll have Linux running nicely on Apple Silicon Macs, using the hypervisor built into macOS. And it will run faster than comparable Intel machines.
Apple gets the importance of Linux and Docker for developers; I'm pretty sure it'll get worked out.
I hope that hypervisor eventually "becomes the os" -- such that macOS or linux or whatever "collected set of software that needs to think it has an ownership relationship to hardware state" can all run on top of essentially the same hypervisor provided hardware abstractions ...
Ugh. Back in the 90s our supercomputers were way less powerful than today's laptops. I want to feel that. Never mind animations, just make everything happen instantly.
I agree 100% animations are out of control. There are a few situations where the animations hide loading. Instead of the ui freezing temporarily there's a fancy animation. Outside of situations like that I just want an instant action.
Yeah. There is this system wide bug on iOS where the animations actually BLOCK user input / state changes. You can see this if you type in your pass code fast enough on the lock screen, such that you are interrupting the animated fade in of the dot for the previous digit. You’ll often lose a digit and have to start again from scratch. Infuriating! It’s the most fundamental relationship to betray, the relationship between touch and input. It registered my touch, but decided to ignore it because it wasn’t done with its pointless animation that some manager thought would look neat. Same thing happened with calculator and they fixed that one case, whereas it’s clearly a fundamental issue with UIKit.
Ugh yes, this is infuriating and happens all over the place. And combine it with sinusoidally tweened animations where it often looks like the tail end of the animation has finished before it really has, making it easy to mistime your inputs so they they’re dropped.
Kiss all that speed goodbye once software developers get their hands on this. And then the low end current-gen models will be even slower when running "the code that runs ok on the M1 hardware, I guess, ship it."
Maybe, but also If developers are forced to use slow machines to develop, then they’re less likely to build. Full stop.
I'm actually in this situation right now. I develop a web application for event management. In some cases it's being used on older Chromebooks, where speed is, well, not there. While I'm testing on Chromebooks, I'm definitely not building my front end bundle on one. The faster machine I'm on, the faster I can iterate on building fast data tables, making intelligent choices about asset loading, etc.
So yeah, I think something like the Network Conditioner prefpane should exist to simulate a slow CPU for testing (does something like this exist?). But I very much enjoy my fast dev machine and I think it'll have the best overall outcome.
Seems pretty obvious that some future Macs will have touch screens or larger iPads will run MacOS.
Probably not. Apple has said they've prototyped touchscreen Macs and, from a UI/UX perspective, it doesn't really work. They've also downplayed touchscreen Mac talk recently [1].
What Apple has shown is iPadOS becoming more like macOS, supporting a keyboard and a mouse instead of the other way around.
I have a 2-in-1 Surface on loan and it's not particularly good at being a laptop or a tablet because of the design tradeoffs that have to be made. Apple is unlikely to make such a device unless there's some kind of UI/UX breakthrough.
I was really hoping they'd announce a $500 touchscreen lapdock for iPhones with a mounting arm that used MagSafe. Turn iPhones into Macs. It'd be the slickest lapdock I can think of.
I honestly feel like they won't do this for one simple reason: why sell one device when you can force consumers to buy two? Apple is keen on somehow making customers buy more things, more accessories, etc. They aren't the biggest company in the world for nothing.
They say it's some fridge and microwave whatever excuse, but the real reason is money. The M1 chip itself lowers their costs by almost $3B.
I don't mind an upgrade to better looking UIs on regular intervals, but functionality and ease of use should come first. And here there is pandering to the iOS crowd when a touch based interface makes no sense for an OS operated with a mouse and a keyboard.
But I suspect it is a consequence of the incessant screaming by all the hipsters and clueless morons who think iOS and macOS should be merged for no other reason than their belief that MORE is MORE. No, LESS is MORE, and that is what Apple was built on.
So yeah, I feel your pain even if I have a slightly different take on this. macOS should be primarily designed to be the ultimate OS for somebody using a mouse and keyboard. iOS should be optimized for touch. We don't need to merge these two worlds. That is what clueless MBA types think, because all they can think of is things like "synergy effects"...
Imagine a 2-in-1 where iOS and OSX lived alongside each other and you could switch between them like tablet mode on a Surface Pro. Just let them be independent of each other imo.
I’d be up for that since I currently use an iPad and a win10 laptop.
I've had "reduce motion" turned off on my iOS devices since it was an options... The animations in the latest Messages.app have been driving me crazy, but sure enough "Reduce Motion" is available in the `Accessibility` preference panel under display.
FWIW, I am blown away by my M1 Mac, but my x86 Mac feels about 5-10% snappier on Big Sur.
>I am blown away by my M1 Mac, but my x86 Mac feels about 5-10% snappier on Big Sur.
For clarification, do you mean your x86 with Big Sur is 5-10% faster than your M1 with Big Sur, or is it compared with the same machine running what I'm going to guess is Catalina?
Sorry, I mean my x86 machine has improved in performance with the upgrade to Big Sur.
The M1 MBA (16GB RAM) feels snappier and more responsive in almost every way than my Mac Mini w/64GB of RAM. RAM usage is also noticeably lower on the M1 when I migrated my browser tabs over (500+ between Firefox and Safari).
second this. The roll-out was a mess, but Big Sur does appear to have some serious optimizations under the hood. Time Machine is also, finally, vastly improved.
(The changes to menu bar icon behavior, and shortcut keys on modal buttons, are infuriating)
Yes, can I please have System 7.1 or 7.5.3 back? I recently installed MAE on my SunBlade 2500 and it has been an amazing experience. A Classic Mac, without all the issues of HFS corruption from crashing out. Running 68k software at G4 speeds. Haha. For a lot of tasks it certainly works better than the Classic environment did under 10.1-4, since it's not targeting System 9.2.
Ready for developers to port developer tools =/= ready for all developers. Systems dev is actually a fairly niche role in the industry, M1 will be a boon to them but for the vast majority of devs who either aren't interested in that kind of work or not trained in it, M1 is a huge monkey wrench.
I buy a new MBP every two years, and you bet your ass my next one will be an M1. But that'll be in 2022; I'll be quite happy to stay on my x86 machine until then :)
I actually really want to run native Linux on these new chips. Don't know why but the thought of modern "big sur" macOS on such beautiful hardware seems a bit sad to me. KDE/Manjaro has made me pretty happy lately and the battery life could make an amazing linux laptop. Of course you could run CDE or XFCE to get your Win95 DE. ;) Well in a few years when hopefully Linux supports M1 devices. Till then probably have to play with a PineBook or something.
> I want macOS that has a UI of Windows 95. That would be totally insane on Apple Silicon.
You'd have to redesign it somewhat for the higher resolution screen, and so there are many improvements in typographical rendering you would also want to include, plus Win95 was in some respects still inferior to System 7, but overall, yes, there is a certain directness to that era of OS UIs (see also Motif & BeOS).
Yeah, a lot of spacing... except for dismissing notifications! The small X icons are so hard to click on Big Sur!!! I miss the old notifications! The grouping makes my work even harder! It's a notifications disaster!
> I want macOS that has a UI of Windows 95. That would be totally insane on Apple Silicon.
WTF?
I mean I get most of what you said until that. I vastly prefer Catalina and every version of macOS over the past 15 years to anything out of Redmond, and particularly not W95. (Heck, I used Window Maker prior to macOS; my love for its NeXT ancestors goes back some time.)
I hear a lot of people are frustrated with Big Sur. Hopefully Apple will dial back a lot of the egregious UI updates. They frequently go big on new redesigns, then dial things back. Hopefully.
Not too many people seem to be commenting on the UI but the Big Sur "improvements" were one of the main things that made me look into going back to Linux, researching how to replace my current MacOS workflow.
The UI changes are just completely nonsensical to me, and despite the initial announcements of the speed benefits of the M1 I was set on trying to go back to Linux. But now with articles like these I have to admit it feels tempting...
>Chrome runs insanely fast! Alas,
What hardware were you using before? I am using a MBP retina 2013 and Chrome loads and runs fast. Browsing was never a bottleneck except on a few heavily bloated (ads and stuff) pages. I have noticed Chrome can slow down on a few Windows machines with poorer hardware (which is the case for 90% of Windows laptops).
>I just got one. I’m blown away by the speed as well. Chrome runs insanely fast!
I've got a 5ish year old desktop and Chrome runs, afaict, pretty much instantaneously. At least I don't notice a perceptible delay. I'd be interested to see a side by side comparison of page renders on M1 vs an older laptop.
Let's make a new computing law: the likelihood of an opinion on UI changes being an overreaction is inversely proportional to the length of time that has passed since their release.
Apple should buy Docker. Only a partial /s on that one. I’m shocked Microsoft hasn’t bought Docker yet. Docker has so much mindshare it’s crazy. They’re the most valuable company that isn’t worth anything in the whole tech sector imo.
Unfortunately you are wrong. Everything we call tech is in fact driven by fashion.
My older version Mac is badgering me to upgrade to Bug Sur. Top feature: 100 new emojis. That is what Apple prioritised. Why the hell are emojis even part of the OS let alone its top feature!
Truth is, GUIs are done; they were done a decade or more ago. All there is left now is change for the sake of change. And Apple can't think of anything more to do in the real OS either!
I think you're getting downvoted just because you said "you are wrong", when the rest of your comment, though a little grumpy, isn't that far from the truth.
UIs are definitely fashion. We're going round and round in circles and things aren't getting better, IMHO.
Emoji is part of font support, and Apple takes that stuff seriously. It would be embarrassing for them if messages sent from iOS didn't show up properly in the Messages app.
Rendering a font and the font itself are two different things, no? It shouldn't need an OS update to add some more characters.
GUIs are in a state of hysteresis at this point. A little more saturation in the UI elements, a little more purple on the background next release and more monochrome and more blue the release after.
I’m saying this like an old man (even though I’m not even close), but we’ve seen it before and we’ll see it again.
I mean, why should most users even care about the details of any update? It's a box that runs the software they actually care about. Its primary purpose is to get out of your way and feel invisible. I feel like people on HN overestimate how much people care about operating systems.
Emoji are part of the OS because they’re frequently displayed on websites and in applications, so the system font needs to support them to render them correctly.
No, you can load your own fonts if you want, but the default ones are distributed in OS updates, like on every other OS.
Because Apple controls the end devices, an OS update is very simple and goes out to basically all users, so they don't have to rely on the weird side-loading hacks that are typical for platforms like Android.
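To make the "emoji are just font/Unicode plumbing" point concrete, here's a tiny sketch (Swift; U+1FAD6 TEAPOT is one of the Emoji 13.0 additions Big Sur picked up). Whether it draws as a teapot or a tofu box depends on the Unicode data and the emoji font the OS ships, which is why new emoji arrive via OS updates.

    // A "new emoji" is just a Unicode scalar the OS has to know how to render.
    let teapot = "\u{1FAD6}"                               // U+1FAD6 TEAPOT, added in Emoji 13.0
    print(teapot)                                          // shows a teapot only if the bundled emoji font has the glyph
    print(teapot.unicodeScalars.first!.properties.isEmoji) // true only if the OS's Unicode tables know the scalar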
All my PCs run open source OSs and this really isn’t true. The GNU/Linux landscape has changed wildly in the last 5 or so years. You’re only slightly less beholden to the people that maintain your OS than with proprietary systems because you still have to update eventually. A lot of BSDs are still roughly the same but being OSS doesn’t necessarily imply you’re free of change.
- you can update whenever you want (or not do it ever)
- you can modify any part of the OS, no restrictions
- you can run any program you like: signed, unsigned, app-store, not app-store, etc.
==> the differences are there between OSX and GNU/LINUX
What the hell is going on here? Digging through someone's post history to figure out if there are contradictions in what someone says? Can we not just evaluate what people say on its own merit and instead of putting people under the lens for hypocrisy?
What is the point of putting links on your profile if you expect people won't visit them? It was on the index page. Anyway, it looks like satire. I definitely got fooled. My bad.
Even if it weren't satire, investigating someone for inconsistencies like that... well, it just kinda sucks, doesn't it? I don't think we should expect people to have a consistent viewpoint across everything they've ever posted online.
On the other hand, Apple is fashion. They have always advertised and placed themselves as such.
If you want functionality, you need a non-Mac running Linux. I'm just hoping we have better silicon options soon, for those of us who don't want customization-unfriendly, upgrade-unfriendly Apple hardware.
While you're getting rightfully downvoted (although I won't join in), it's worth underlining how much "they have always advertised and placed themselves as such" is utter nonsense. Apple has always advertised functionality and performance of their products, even when the case was (charitably) pretty shaky. They advertised their PowerPC Macs that way, up until the point they couldn't, then advertised their Intel Macs that way. Now they're advertising their ARM Macs that way. They proudly advertised their moves to 64-bit CPUs. They proudly advertised to consumers how cool it was that OS X was "Unix certified." Repeat: to consumers.
Yes, Apple is super conscious of design, occasionally to an obvious fault, from "butterfly" keyboards back through the G4 Cube. But they've absolutely always been interested in aiming for seriously high-end machines, from all of the Mac Pro incarnations all the way back to oldies like the Mac IIfx.
I hope other people make competitive silicon, too. I want Linux around and high-performing, and it's still a bit of an open question what the future holds for development environments on ARM-based Macs. But it is just so wearying for people who are otherwise technically literate to still be trotting out "Apple has never cared about anything but looks" after literally decades of obvious counter-examples.
> Apple has always advertised functionality and performance of their products, even when the case was (charitably) pretty shaky.
"I'm a Mac." "And I'm a PC." was _absolutely_ designed to appeal to the fashionable, "hey, you're going to be far cooler than this stuffy nerd" than just "functionality and performance".
So, in among everything else, they absolutely do care about looks. I'll agree that they don't "[not care] about anything but looks", but it's a part of the brand.
You're not wrong, but Apple also does advertise themselves as fashion. Their products are highly visible. Not too long ago, their computers literally had a giant glowing logo on them. Now it's just shiny. You can spot someone wearing their Apple headphones from 300 meters away. Apple is a fashion statement. It's designed that way.
Yep, and to a great degree, Apple has mostly been behind the curve in technology advancements, just good with the marketing, build quality (necessity if you want to market yourself as fashion) and most importantly, timing.
I'll give them the Apple silicon, but remember the iPhone X's advertising? "We've always wanted a phone that was all screen." Guess what, Android has been that way all along. High-end Samsung, Xiaomi, HTC phones were always that way. Apple just marketed it.
Now that Pixel 5 got rid of the notch entirely, expect the iPhone 13 to have that too. Just that Pixel didn't care to actually market it. You bet Apple will.
Apple is definitely fashion-driven, but I think they also produce some hardware that is much better to work with.
The touchpads on Apple laptops are much better than what you can find on other laptops. The screens are much clearer. The weight isn't too much, and they're not huge; easy to move around with.
I'm very keen on getting back onto Ubuntu, but there are just no equivalent laptops in terms of hardware.
What is gained here if we're just still applying faster cycles to Apple-esque wasteful (and perhaps harmful, as we're apparently learning re: their telemetry) software?
If people really dig their Apple stuff, great. But I think it's worth thinking about the likelihood that a "slower" computer running Linux could probably serve the actual user better in terms of "getting stuff done." Moreover, I think we're pretty close to "beauty" parity here as well. Apple's advantage now is probably mostly the networked devices, i.e. unity between phone and PC messages, etc.
> But I think it's worth thinking about the likelihood that a "slower" computer running Linux could probably serve the actual user better in terms of "getting stuff done." Moreover, I think we're pretty close to "beauty" parity here as well.
This is just not true. I'm sorry, but I've used linux on the desktop many times, using many different distros and many different desktop environments. It's shiny and pretty when you first install it, and then very quickly you run into apps that don't follow that design methodology and it takes you out of it. Desktop linux is not ready for average users and honestly it may never be. Also, "getting stuff done" and desktop linux do not belong in the same sentence. I've personally spent and watched coworkers spend hours and hours tinkering with graphics card drivers or other random quirks. Linux is amazing, don't get me wrong; it's a workhorse with immense power and the ability to tinker to your heart's content, BUT most users, myself included, don't want to put in the upfront and ongoing work to keep it in tip-top condition.
I will forever use linux on servers and enjoy every minute of it but for the desktop linux is nowhere near ready for primetime. I've watched too many people online and in-person preach about linux on the desktop and then watched them having to spend tons of time tinkering with it so they can "get stuff done". Is it possible to be productive on the linux desktop? Absolutely but I value my time way too highly to spend the effort to make that a reality.
Just to present another point of view (from my subjective experience): yes, it is possible to be productive on the linux desktop.
I'm using Fedora 33 on my personal laptop with nvidia drivers. And never needed to install any drivers for my work laptop.
Upgraded both laptops 3 times from Fedora 30 without a hitch.
And it's a one step process using a graphical interface [1].
What I'm saying is that (in my own experience) the linux of today is very different from the linux of 10 or 20 years ago. I've had my share of problems before, installing drivers for graphics cards and modems back in the '00s, but I think that most linux distros have come a long way since then.
Linux may not be the best choice for "getting stuff done" for an average user. But, from my own experience, for a software developer it's a good choice. And in the company I work for, most developers have HP EliteBooks with Fedora installed. (I'm picking software developers for the comparison because they represent a significant portion of MacBook Pro users.)
As a new Linux user, I don’t think I agree, at least not 100%. I picked Linux Mint, and found it quite easy to install and maintain on my AMD desktop. All the drivers were already there.
The only difficulty I had was installing Docker, but an hour later I had that and my other required programs installed and I was able to be productive.
It took me less than three hours to make a USB stick, install, and get my programs needed for me to code again.
That said, I had also tried Ubuntu and found it a not-so-great experience. I had to look for a few different drivers and learn how to install them from the command line. After that it was fine, but I didn’t like the UI and didn’t really want to spend the time learning how to tweak it to my liking.
So I think the right distro really matters in the case of making Linux attractive to common users. IMO Cinnamon/Mint is the one for that.
> I've personally spent and watched coworkers spend hours and hours tinkering with graphics card drivers or other random quirks
This is like complaining that a Hackintosh is buggy and requires tinkering. If you want a polished experience, buy computers with first class Linux support from vendors. Even better if you buy machines with Linux preinstalled.
> Desktop linux is not ready for average users
If ChromeOS would suit a user's needs, then so would a polished Linux distribution like Ubuntu with Firefox or Chrome, most of the time.
What are we meant to do with these ever-constant, hand-wavey "linux bad" posts with exactly ZERO specifics? They mean nothing to all of us reading this comment on Linux systems that give us less fuss than Windows (and I know, because I game on it), and there's nothing actionable. It's just a "nuh uh!!" after someone suggests there might be a better tool for the job.
Also, I really can't sympathize with this implication that not following the platform UI principles is somehow fundamentally "breaking" the user experience. It's virtually impossible to find two mainstream, popular, "good-UX" Win10 or OS X apps that follow some mythic standard UI.
> What is gained here if we're just still applying faster cycles to Apple-esque wasteful (and perhaps harmful, as we're apparently learning re: their telemetry) software?
Your comment is based on some sweeping claims with no supporting evidence — can you point to something specific you think is wasteful, alleged harmful telemetry (not Jeffrey Paul's misunderstandings about OCSP), or prevents “getting stuff done”?
As someone who started using Linux as a desktop OS in the 1990s I would especially suggest that if smugly-nonspecific sneering at other operating systems was an effective advocacy strategy the number of Linux desktop users would be a lot greater than it is now.
I've tried Linux on the desktop and know that I'm significantly more productive on a Mac (and also more productive on a Mac vs Windows).
It's partly a personal thing but I think that you have to look at the evidence in the marketplace - people switch to Linux for philosophical or technical reasons but not generally due to the user experience. Denying that evidence and pretending that it's otherwise isn't going to change that situation especially now Apple has an imminent hardware advantage.
Another linux guy here, on the market for a new linux laptop.
After reading the article, which linux-compatible laptops, would you say, come the closest to competing with the new generation of Apple products? Long battery life, plenty of power, good screen, good speakers? Is it the inevitable Dell XPS 13 and Thinkpad X1 Carbon, or are there any other darlings among the linux community?
Tiger Lake matches M1 in single-core perf, but gets trounced in multi. AMD has some good options, but I haven't seen a thin&light style AMD based laptop come out yet.
Check out System76, their Linux support is unparalleled. The lemur pro is interesting. I don't know if you'll find the specs for the screen/speakers, though.
I got the XPS 13, myself, but it still requires 'tweaks' - like running thermald master so your system doesn't throttle down too much after a short 'boost', and running kernel 5.10 because it allows your CPU to enter the PC10 power-saving state (Ubuntu's oem-5.8 may have this backported, I haven't checked).
As always, install your favorite user-agent spoofer as well. I just ran into a problem with some web videoconferencing software that works fine, but you have to tell it you're running Win/Chrome.
This is why people say desktop Linux isn't here yet. It kinda is, but only if you're an expert (or want to be forced to become one)... I still love it, though, and always will.
> The lemur pro is interesting. I don't know if you'll find the specs for the screen/speakers, though.
I've seen a youtube review demonstrating that its speakers are abysmal [0, around 7:30 minutes in]. I have a feeling that this is the quality that is to be expected of Clevo laptops :-(
> In multi-threaded scenarios, power highly depends on the workload. In memory-heavy workloads where the CPU utilisation isn’t as high, we’re seeing 18W active power, going up to around 22W in average workloads, and peaking around 27W in compute heavy workloads. These figures are generally what you’d like to compare to “TDPs” of other platforms, although again to get an apples-to-apples comparison you’d need to further subtract some of the overhead as measured on the Mac mini here – my best guess would be a 20 to 24W range.
I'm struggling with the same dilemma at the moment but at the 15" size point.
I just bought the XPS 15 (2020 model) in the i7-10750H / 16 GB / 1 TB / 4K touchscreen configuration, with 4 years of extended warranty, and it set me back $3000 USD here in the UK. This was with 16% Dell Advantage discount. For comparison in the US a slightly better package with 32GB of RAM is currently up for less ($2,750, so more like $2,900 after sales tax I guess). Retail.
I've also bought the exact same configuration and package, refurbished from the Dell Outlet, in the standard HD resolution for $1,900. This one has yet to arrive.
So how does the 4K XPS fare in Linux?
I booted into Ubuntu 20.10 using a USB stick and... the keyboard didn't work. For some reason Ubuntu brings up the on-screen keyboard when I use the touchscreen, but the physical keyboard is completely non-functional.
Fedora 33 live? No wifi.
Manjaro XFCE worked great. I was pleasantly surprised (once all the 4K related scaling settings were fixed) how good it looked and how well everything just worked.
Paying $1,100 for a 4K touchscreen upgrade that causes nothing but problems on Linux is a hard thing to justify and, unfortunately for Dell, the screen on the XPS is probably its biggest selling point right now. Now that M1 is a known quantity they really need to discount these laptops imho.
On the XPS 15 you only get 3 USB-C ports, and one is constantly in use for me because I rarely unplug. I will be needing a stupidly expensive dock, so I have no idea how anyone can cope with just the 2 on the XPS 13. The keyboard is better than average but still pretty harsh, the touchpad has borderline build quality (springboardy) and is too large for my tastes (and I miss physical buttons), and although the screen is gorgeously sharp and bright, the fact that it's glossy still causes reflections in my home office.
I do not think I will be keeping either laptop, and instead I'm looking toward the X1 Extreme Gen3, with its better port options, keyboard and build quality.
> which linux-compatible laptops, would you say, come the closest to competing with the new generation of Apple products?
When there's a technological jump, there's not much you can do.
There's not a laptop you can buy that comes close enough to the M1-based MacBook Air or MacBook Pro to really matter.
You can't get something as light, as powerful and with the battery life these machines have.
Look, Apple demonstrated Debian running in a VM on an Apple Silicon Mac in June, so we know it's possible. And I'd bet dollars to doughnuts it's blazing fast.
You saw the comments in the article; people are talking about more than a day without having to plug these devices in. People are rendering 8k video while doing other stuff with no slowdown and without hearing the fan, in the case of the MacBook Pro.
These machines can drive up to six displays using DisplayLink adapters. Just nuts.
lol this is just delusional. The Librem has the build quality of a generic Chromebook. Maybe competitive with Apple in like 2006, as in the polycarbonate MacBook!
macOS has been my daily driver since 2006, fully since 2010 (when I got rid of my desktop which I dual booted between Windows for gaming and Linux for development). It's perfectly suitable for "getting stuff done". In the rare case where macOS doesn't run something that I need that another *nix or Windows supports I spin up a VM or VPS.
I would 100% use linux (because I like tinkering) if it didn't have issues with HiDPI, and really terrible localization compared to MacOS.
Everything else I can deal with, not having those two work out of the box just sucks.
Also, if you have a headless Linux server (even in a virtual machine, say Hyper-V), I don't really see how you will get less stuff done by using any other OS.
Linux still doesn't have smooth scrolling. There is little visual consistency or shared UX patterns across apps. I had to hunt down a good font because the one that came installed with Pop and Ubuntu was terrible. Spent a day getting drivers for a wifi card to work. Had graphical flickers due to a bug in picom/nvidia that I had to go in and fix. Don't even get me started on the app ecosystem - Mac/win has stuff like omnifocus, 1password, Alfred, photoshop etc etc. Linux mostly you hope there's a web app.
There is strong opposition to the idea of paid app stores on Linux but almost all the best software I use is paid, because it takes teams of people working hard to build it. This is actually the most critical issue imo.
Linux has come a long way, but I think its fans are understandably reluctant to hear the bad news that it's still not good enough compared to the alternatives, even if it's much, much better than it was.
This is entirely subjective. As a primarily Linux user who uses a Mac for work, I would argue the opposite. Likely based more on familiarity than anything else.
i will provide one bit of anecdotal reasoning tho, within my company - the exchange server will _not_ work on anything other than paid-for linux clients, but works flawlessly on apple's standard mail app. sure, they could set up the server a different way, but...why?
I found the macOS desktop much more responsive than GNOME, which feels like a beast. Now I use MATE, which is nice and simple, but it's nowhere near as nice as macOS.
I don't get the anti-Linux comments here. I use Manjaro with XFCE as my daily driver and I haven't had to tinker with shit on it since I set it up. Has nobody tried this distro?
Everything I need to do on it runs like a dream. Installing software is easier than anything, especially since I just use the GUI for that. I've had more trouble with Mac and Windows.
Even other Linux distros that I've tried don't really stand up to Manjaro. Ubuntu and other Debian derivatives have you hunting down PPAs and using the terminal to add things - maybe that's the "tinkering" that annoys people? If so, I wholly recommend you try Manjaro or another Arch based distro.
I see no mention of SIMD in these threads. The only thing that has made high data throughput possible is vector operations. What's the performance of libjpeg-turbo or HEVC in software?
It supports NEON like most phone CPUs. The new thing for a desktop is sharing memory for the CPU and GPU with a much faster bus instead of having the GPU sit on a slower bus with its own fast memory, so if your workload can benefit from using the GPU, you don't have to worry about host to device transfer.
I can't find a number for it on M1, but I'm guessing a memory-to-SIMD-register move is still a lot faster than the faster CPU-GPU shared buffers, i.e. even if unified memory is a powerful tool it won't scale in the same way.
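As a rough illustration of what "keep the data in vector registers" means, here's a sketch using Swift's portable SIMD types, which lower to NEON on Apple Silicon; the function name is mine and this is not a benchmark.

    // Adds two Float arrays four lanes at a time; SIMD4<Float> maps onto a
    // 128-bit NEON register on ARM cores like the M1's.
    func vectorAdd(_ a: [Float], _ b: [Float]) -> [Float] {
        precondition(a.count == b.count)
        var out = [Float](repeating: 0, count: a.count)
        var i = 0
        while i + 4 <= a.count {
            let va = SIMD4<Float>(a[i], a[i+1], a[i+2], a[i+3])
            let vb = SIMD4<Float>(b[i], b[i+1], b[i+2], b[i+3])
            let vr = va + vb                     // one vector add instead of four scalar adds
            for j in 0..<4 { out[i + j] = vr[j] }
            i += 4
        }
        while i < a.count { out[i] = a[i] + b[i]; i += 1 }  // scalar tail
        return out
    }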
I'm not a huge fan of the thermal throttling comparisons.
Apple completely fucked the pooch on the previous gen(s?) when it came to design.
They don't get points for fixing an utter fuck up. That should have triggered a recall, imo.
However, the performance of M1 looks hella solid, and kudos to them. I'm gonna stay with Linux because I'm comfortable in it, but innovation is never a bad thing.
INB4 walled gardens and code signing: stop drinking the koolaid and do your own research
> iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance.
Oh, no crap, Sherlock! Let me save this quote for whenever someone wants to tell me all about how GC is much superior to reference counting.
Yes tell me how a stop-scan-mark-sweep periodic process is more efficient than just keeping track of what you do.
Tin foil hat warning - but how much of the M1 performance improvement is from optimizations made in Big Sur for Apple Silicon that they just didn't bother implementing for x86 since it's now the outgoing technology for Apple?
I realize the reduction in power consumed for any given quantity of work is downright amazing for laptops, but I guess I'm more curious about workstation and (build) server kinds of applications.
Also, how many of these benchmarks are x86 versus Apple Silicon where both are running Big Sur? I've been seeing so much "Xcode on Catalina" versus "Xcode on Apple Silicon on Big Sur".
I was not saying in any way that they didn't implement optimizations for x86 over the time that they used them. Of course they did.
What I meant was that Apple themselves talked about significant optimization for Apple Silicon in Big Sur. The question is if any of these optimizations could have also been applied to x86 but aren't because x86 is the outgoing platform.
I'm skeptical of the assertion (without supporting documentation) that there is some hardware design choice in Apple Silicon that makes it _drastically_ more memory (quantity) efficient than x86 when using presumably the same kernel, same toolchain frameworks (LLVM), etc.
That's basically what they did when they made the switch from PPC to x86. IIRC, there was an edict a few years before the switch (Jobs talked about it in an interview at the time) where teams were told something along the lines of not making any design/optimization decisions that were tied to PPC. They likely did the same thing with x86 this time.
The better tinfoil hat question is how much did they intentionally sandbag the outgoing Intel models vs. how much was it a case that M1 was delayed? Such as the Intel MacBook Air whose fan wasn't even connected to anything? Was that bad design because it wasn't supposed to exist, or was that bad design intentional to help drive up the huge generation over generation gains?
"iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance."
It is the equivalent of saying "On iOS devices the memory-efficient Chrome app can be used, but on Android phones a browser is used, which requires more RAM for equivalent performance."
It is true that reference counting is a form of GC. However, Java's GC is not based primarily on reference counting. It is much more complex and, generally, does indeed use much more memory.
Swift's reference counting is not much different than C++ shared pointers, except that it is all baked into the language. It is generally true that iOS devices require less memory to achieve the same things as Android devices.
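A small sketch of the determinism point (plain Swift, nothing Apple-internal, just the observable behaviour): with ARC, an object dies the instant its last strong reference does, so peak memory tracks live objects instead of waiting for a collector to get around to a sweep.

    final class Buffer {
        let bytes: [UInt8]
        init(size: Int) { bytes = [UInt8](repeating: 0, count: size) }
        deinit { print("freed immediately, no GC pause, no extra heap headroom needed") }
    }

    func demo() {
        var a: Buffer? = Buffer(size: 1_000_000)  // refcount = 1
        let b = a                                  // refcount = 2
        a = nil                                    // refcount = 1, still alive via b
        print(b!.bytes.count)                      // use it so it stays alive to this point
    }                                              // b leaves scope -> refcount = 0 -> deinit runs right here

    demo()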
Looks like the comparison is between tracing garbage collection & reference counting, both of which could be said to belong to the broader category of "garbage collection" algorithms. I do agree tho that it's confusing
> Reference counting is typically used in garbage collection
While you _can_ implement a garbage collector with reference counting, and in the broadest possible definition of 'garbage collector' you could call Apple's use of reference counting a garbage collector, no, what people typically call garbage collectors are not, today, typically primarily dependent on reference counting.
I got tired of such reviews filled with so much bullshit..
These people forget about Pro users and still see a laptop as an iPad with a keyboard.
Pro users need the machine for heavy work, and rely on compatibility (software and hardware). Also, I work 90% of the time at my desk with the power adapter connected, so battery is far down my priority list.
So far, MKBHD's was the only decent review I found. I suggest serious users watch it.
These ppl are so drunk on the kool aid they don’t even realize that 80% of their experience is common to every new laptop purchase. In 1 year these “revolutionary” computers will be boring and slow again, especially when their batteries start to deteriorate.
Do you really believe this is a common hardware release? There's nothing revolutionary here? People love to hate on Apple, which is fair for a lot of reasons, but I think you have to give credit where credit is due. The M1 will push all laptops forward and force others to compete, which is great for customers.
I didn’t say there is nothing different here, only that the tangible differences are marginal to the exaggerated good experiences people are reporting. Apple has not revolutionized how lithium batteries work and the power savings by the M1 are less than 100% improvement, so the claims about battery life are clearly exaggerated and mostly common to all new laptop purchases. Apple makes decent products but they make even better marketing.