
>Twenty years ago, the Mac version of Photoshop was just better than the Windows version. That hasn't been true in a while.

I've used both (PS on macOS/Windows) recently and find them to be more or less equivalent. Given the choice of using macOS or Windows, I'd still take macOS.

Not sure what the point of your line re: hardware is when a standard M1 Mini runs very well for this use case. If you want a bigger monitor... just buy the bigger monitor with a Mini.

>Now the UI on Linux is better than it used to be

I ran Desktop Linux for years before moving to macOS, and I check in periodically. I remain unimpressed; it's still a jumbled mess of an ecosystem. The UI is also not the stumbling block to Linux adoption: hardware support alone, while better, still isn't up to what you get with the big two.

>and Windows has WSL et al

This is a fair point, and was an incredibly smart thing for MSFT to do.

>and Mac is full of "pay for a license annually and deal with code signing nonsense" that developers hate.

Have you ever code signed anything elsewhere, where you buy certs and such? It costs money. Apple's setup is a flat $100/year and "just works". And it only applies if you're shipping software; you can run Homebrew etc. just fine for dev work. There are tons of developers working on macOS who never pay Apple the $100/year for membership.

Can you actually refute what I'm saying, which is that macOS is still very usable by developers? The OP I originally responded to is claiming that the macOS devs loved is "90% gone", but outside of the HN/Reddit echo chambers I don't see people switching en masse.




Apple's codesigning "just works" as long as you stick to Xcode. Outside Xcode it's extremely esoteric and opaque.

But to Apple's credit, it can be done, and done with the native toolset. It took me a while, but my relatively simple Makefile builds fat binaries (x86_64 + arm64), builds and signs the application package, submits it for notarization, and staples the ticket. Not monolithically, either: it's not a giant shell script posing as a Makefile. I have proper, correctly defined build dependencies for every real target up to `make submit`, so `make submit` will build and submit a package both from a fresh checkout and incrementally from an existing tree where only a few object files need rebuilding. `make submit` and `make staple` themselves are .PHONY targets; they're not idempotent, but invoking them multiple times is still benign.
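For anyone attempting the same, here's a minimal sketch of what such a Makefile might look like. This is illustrative, not the poster's actual file: the app name, signing identity, and keychain profile are placeholders, Info.plist handling is omitted, and it assumes the Xcode command-line tools plus a notarytool keychain profile stored beforehand via `xcrun notarytool store-credentials`.

```make
# Hypothetical names; substitute your own app, Developer ID, and profile.
APP      = MyApp.app
IDENTITY = Developer ID Application: Example Corp (TEAMID123)
PROFILE  = notary-profile

# Per-architecture builds, then a universal (fat) binary via lipo.
main-x86_64: main.c
	clang -target x86_64-apple-macos11 -o $@ $<
main-arm64: main.c
	clang -target arm64-apple-macos11 -o $@ $<
main: main-x86_64 main-arm64
	lipo -create -output $@ $^

# Copy the binary into the bundle and sign with the hardened runtime.
# (A real bundle also needs an Info.plist, omitted here for brevity.)
sign: main
	mkdir -p "$(APP)/Contents/MacOS"
	cp main "$(APP)/Contents/MacOS/"
	codesign --force --options runtime --timestamp \
	         --sign "$(IDENTITY)" "$(APP)"

# Not idempotent, hence .PHONY: each invocation uploads a fresh archive.
.PHONY: submit staple
submit: sign
	ditto -c -k --keepParent "$(APP)" MyApp.zip
	xcrun notarytool submit MyApp.zip --keychain-profile "$(PROFILE)" --wait

staple:
	xcrun stapler staple "$(APP)"
```

The `.PHONY` targets mirror the comment above: `submit` and `staple` always run when invoked, while everything beneath them rebuilds incrementally through normal Make dependencies.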


I agree on the code signing. I wrote an app in LispWorks Common Lisp, and other people use the same for App Store apps, but I ended up just rewriting it in Swift and SwiftUI because that made dealing with the App Store so much easier.


I implemented the codesigning and notarization process for Slippi, a Dolphin Emulator fork with a CMake-based build.

I wouldn't call it esoteric or opaque. It's one or two shell scripts plus an API key from App Store Connect for the notarization pass.


It's certainly esoteric if you're trying to diagnose entitlement issues, and it's all poorly documented. Some of the documentation on Apple's website is flat out wrong, for example the recommendations on signing multiple separate binaries, each requiring an entitlement. (Spoiler: there's exactly one app ID per bundle, and exactly one executable that can use that app ID for entitlements, so the documentation was recommending a method of accomplishing something that is simply impossible, which explains why it never worked.)
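To make the "one executable per app ID" point concrete, here's a hedged sketch of the kind of entitlements plist involved. The keys shown are real entitlement keys, but the team and bundle identifiers are placeholders. The point is that a file like this gets applied to the bundle's single main executable (e.g. via `codesign --entitlements MyApp.entitlements ...`); app-ID-scoped entitlements like the keychain access group below cannot be spread across multiple helper binaries in the same bundle.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hardened-runtime exception: not app-ID-scoped. -->
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <!-- App-ID-scoped: tied to the one app ID per bundle.
         TEAMID123.com.example.myapp is a placeholder. -->
    <key>keychain-access-groups</key>
    <array>
        <string>TEAMID123.com.example.myapp</string>
    </array>
</dict>
</plist>
```

Helper binaries in the bundle still need to be signed (inside-out, before the outer bundle), just without the app-ID-scoped entitlements.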

Once you figure things out, it's not nearly as bad. The tools can work well, and that's hugely valuable. But documentation and consistency are significantly worse than in common open source projects, where you can at least resort to reading the code to figure things out. I find myself constantly doing that for Keychain, actually. Fortunately, older versions of Apple's Security.framework are open source, which helps me diagnose and analyze API usage problems. Want to figure out how to retrieve the usage constraints (i.e. SecAccessControlCreateFlags) used to generate a T2 Secure Enclave private key? Or, even simpler, want to figure out whether it's even possible to derive those constraints? Only the code is going to tell you.


Have you written, or do you intend to write, a blog post about your process? It could help other devs working on Macs.


> hardware support alone, while better, still isn't up to what you get with the big two.

I recently bought a Huion tablet only after checking that it works on Linux.

Installation on Linux couldn't be easier: plug the thing in and it's ready to work, pressure detection and all.

I did the same on Windows. It got detected, Windows told me it was downloading the drivers, and after 10 seconds it worked... kind of. Any stroke took about a second to start drawing, and a long press registered as a secondary click. What!? How can anybody draw with this? Pressure didn't work; it just behaved like a mouse. Time to search: disable something called "Windows Ink", but directly in the registry so it doesn't re-enable on boot. OK, the secondary-click issue is gone, but the delayed drawing is still there. Maybe some stabilizer? Disable Krita's stabilizer... Nope. Go to Huion's web page, download an app to configure the pen: Windows Ink is somehow still enabled, and the app lets you disable it for real. Pressure still missing. More tweaks in the app... Yes, finally: after two hours the tablet behaves just like it did on Linux after two seconds, and I only have to keep open a third-party app that is probably calling home, just to keep Windows from messing things up again.

Yes, hardware is a breeze in Windows, except when it isn't.


> hardware support alone, while better, still isn't up to what you get with the big two.

Ugh, not this again.

Apple's hardware support is fine because they only have to support a few bits of hardware. Microsoft's hardware support is fine because everyone (aside from Apple) sells their laptops preinstalled with Windows, and so of course they design them with Windows in mind, and test to make sure things work.

Linux's hardware support is also fine, if you buy your hardware with the intent to run Linux on it and look for something that's supported well, or buy something the manufacturer has built with Linux support in mind. All the "Linux hardware support is bad" stuff comes from people who already have a random Windows laptop or Mac and decide they want to run Linux on it, despite the fact that if they just did a quick web search they'd find a list of problems with their hardware.

Saying Linux's hardware support isn't good enough is like saying macOS's hardware support is not good enough, because you tried to run macOS on a random HP laptop. Run an OS on hardware that it supports well and your experience will be fine.

> UI is also not the stumbling block to Linux adoption

I agree that it's not the stumbling block, but it's a big one. The GNOME developers seem to like to completely change everything often, and break a bunch of things in the process. I don't know much about KDE; last time I tried it was in the late 90s. Xfce (which I use) is ridiculously stable (it's hardly changed at all in the 15+ years I've used it, and I think that's a good thing), but it's built by a very small team, and so it's always a bit behind on polish and covering all tasks with GUI tools. But I love it[0], and it works well and stays out of my way.

The other major stumbling block is commercial software distribution. Go check out a commercial software package's website, and unless it's Electron- or Java-based, you'll see one download each for Windows and Mac, and 10 different downloads for various distros and versions of Linux (if you're lucky!). I would hate to be the person who manages releases for all that.

I don't particularly mind, though: most of the software I use is open source and in my distro's package manager, and the few commercial apps I have work fine (because I run Debian, and most commercial apps will have a Debian download, or at least an Ubuntu one that can be made to work without much trouble). But that's just me, and an average user wouldn't want to put up with this.

Honestly, I'm afraid that too much desktop popularity would destroy Linux (and turn it into the ad-laden garbage that is Windows or the increasingly-locked-down prison that is macOS). I'm fairly happy with its current popularity, where hardware support and software availability are good enough for my purposes, but without the platform becoming too commercialized.

[0] Full disclosure: I was an Xfce core developer for 5 years in the aughts, so I'm a bit biased.


Take a look at KDE again; it won't hurt. I switched from Cinnamon on Mint to openSUSE Tumbleweed (best KDE integration IMO, since it's been their standard DE since forever, plus rolling releases and Btrfs snapshots on upgrades). KDE Connect was the app that made me give it a try, and I'm more than happy with it. The devs made resource usage a priority; nowadays KDE rivals and sometimes surpasses Xfce, which got heavier in the meantime.


> I've used both (PS on macOS/Windows) recently and find them to be more or less equivalent. Given the choice of using macOS or Windows, I'd still take macOS.

Of course you would, because macOS is better than Windows. That has always been the case. But then you have to deal with Apple's hardware segmentation strategy.

> Not sure what the point of your line re: hardware is when a standard M1 Mini runs very well for this use case. If you want a bigger monitor... just buy the bigger monitor with a Mini.

The M1 Mini has 4+4 cores. Creative work typically involves software that will use arbitrarily many cores, and on that kind of workload the M1 compares quite unfavorably with 16+ core x64 processors. If you want a Mac with the equivalent processing power of a Ryzen 9 5950X, you're paying $8000+ for a 16-core Mac Pro.

Which they do on purpose so that people doing professional work don't buy the Mini.

And it's the same problem for developers for the same reason. Compiling a big project on a 4+4 core machine is a lot slower than doing it on a 64-core Threadripper, but you can't even get a 64-core Mac.

> I ran Desktop Linux for years before going macOS, and I check in periodically. I remain unimpressed, it's still a jumbled mess of an ecosystem. UI is also not the stumbling block to Linux adoption

Many years ago the UI on Linux systems was basically unusable. It was the stumbling block to Linux adoption.

Now it's "good enough" and the stumbling block is... nothing. Millions of regular people have a Chromebook (i.e. Linux) and they use it just fine, and there is nothing superior about ChromeOS over Ubuntu et al, it just has a bigger corporation marketing it.

> Have you ever code signed anything elsewhere, where you buy certs and such? It costs money.

You're comparing it to alternatives which are also a pain in the butt, when the better alternative is to not do it at all because it's totally useless.

They'll give a code signing cert to basically anybody who pays. A nobody whom nobody trusts for anything can get one. Attackers can steal one from any of the millions of people who were forced to get one. So it's not providing any trust at all, which makes it completely pointless; yet it's still money paid and time wasted for no benefit.

> Can you actually refute what I'm saying, which is that macOS is still very usable by developers?

It's not that you literally can't use it. It's that it's worse now than the alternatives. And it's also worse now than it used to be. It would be hard to beat Snow Leopard on a modern 64-core workstation, but you can't get that anywhere. And the closest thing is the same workstation running Linux.



