
Splice[1] Studio[2] used to be version control for music software (from about 2013-2021). You could browse Ableton Live projects (and those of a few other DAWs), see the tracks and rendered versions each time you saved, add metadata, etc. They later pivoted into the more profitable sample discovery and sales business and dropped the less profitable Studio product.

I expect that over the next few years the DAWProject[3] open exchange format for moving projects between DAWs could make it possible for some of the ideas of Splice Studio to come back without having to rely heavily on undocumented binary formats.

[1]: https://splice.com/

[2]: https://cdm.link/splice-studio-is-free-backup-version-contro...

[3]: https://github.com/bitwig/dawproject


From my recollection, there's an issue for this in just about every Rust crate that handles these dirs. The right way to fix this is to fix the spec, then make the libs adhere to the spec.


How would you fix the spec? Add a line explicitly stating the /Library/Application Support dir is only for applications with a bundle ID, instead of just implying it?


For clarity, is this what you're referring to as "the spec"? https://developer.apple.com/library/archive/documentation/Fi...


No, the XDG Base Directory spec. If you want to be able to opt in to XDG on a system which doesn’t canonically use XDG vars, then you need some config.


XDG is cross-distribution for Linux, but it's not cross-platform. MacOS doesn't use XDG.


That’s my point. MacOS doesn’t use it, but many expect it to, so make this explicitly opt-in so that a user’s preference is respected over the canonical configuration. To do that, the best place to define the setting is in the spec.

I maintain a fairly popular TUI library, and as a CLI/TUI user on macOS, I dislike the App Support folder a lot. But even though I dislike it, I'd expect that apps for Mac should put their files there because the XDG spec doesn't apply to macOS. It's the wrong place, but the technically correct one. The right thing to do is fix the spec, and then fix the apps / libs to follow that.

I wrote a top level thread that this should be fixed by adding an explicit "I want XDG even though I'm on macOS" setting somewhere. Probably another environment variable.


And I’d also argue that the App Support folder doesn’t apply to CLI/TUI config files either. Apple doesn’t make the CLI programs it distributes store their files in App Support. If that were the case, wouldn’t you also expect the .ssh folder to be relocated from $HOME to App Support on a Mac?

Much like the original author, my opinion is that you should do the least surprising thing for the user, and if that’s not what the spec says, so be it.


Maybe it's because it's not their programs and they want to pull from upstream?


I completely expect that to be the case. My point is, there are already programs distributed with macOS that don’t put config in the ~/Library/Application Support folder. Knowing this, I don’t see a good argument for (especially portable) CLI/TUI programs to keep user-editable config data in the App Support folder.


> this should be fixed by adding an explicit "I want XDG even though I'm on macOS" setting somewhere. Probably another environment variable.

Why another one? If an XDG env var is set explicitly, that's obviously what the user wants. Just don't (necessarily) use the spec's defaults when it's not set.
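
As a sketch, that behaviour in a dirs-style Rust crate might look like this (the function and fallback logic are illustrative, not any existing crate's API):

    use std::env;
    use std::path::PathBuf;

    /// Sketch: honor XDG_CONFIG_HOME everywhere when the user has set it,
    /// and only fall back to the platform's canonical location when unset.
    fn config_dir() -> Option<PathBuf> {
        if let Some(dir) = env::var_os("XDG_CONFIG_HOME") {
            return Some(PathBuf::from(dir));
        }
        let home = PathBuf::from(env::var_os("HOME")?);
        if cfg!(target_os = "macos") {
            Some(home.join("Library/Application Support"))
        } else {
            Some(home.join(".config"))
        }
    }

    fn main() {
        println!("{:?}", config_dir());
    }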


The XDG vars only control the location. There’s no var to control whether XDG should be used at all. macOS has a canonical approach, which some tools choose to ignore. I want a flag to make that ignorance correct.


I know, but if XDG vars controlling the location exist then obviously 'use XDG' is desired.


No. There are many situations where these are set where that’s not true.

How does the XDG spec not apply to macOS, and what "fix" are you proposing? https://specifications.freedesktop.org/basedir-spec/latest/ https://www.theregister.com/2024/10/11/macos_15_is_unix/

> Probably another environment variable

Having any of the existing XDG_* environment variables set is an incredibly-clear indication that the user wants the XDG spec followed.


The XDG environment variables only apply to systems where the XDG spec is defined to be in place. You need one extra flag to define that to be true. Given that macOS isn't a place where XDG is applicable, suddenly making it applicable would break every place that assumes it's not applicable (e.g. I'd no longer have configuration for some utility that uses the pre-change value of the App Support folders).

Because it was designed for Linux distros, by Linux distros, not UNIX vendors.

Where is XDG in the Open Group standards?


I wrote elsewhere that I think the solution to this is to add another variable to the XDG Base Directory Specification to explicitly opt into using XDG variables, and then to support that flag on libraries that target Windows / MacOS and which would otherwise choose mac Application Support / Windows AppData folders.
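
To make that concrete, here's a minimal sketch of what such a library could do. XDG_FORCE is an invented name, not part of any spec today:

    use std::env;
    use std::path::PathBuf;

    /// Hypothetical opt-in: only honor XDG on macOS when the user
    /// explicitly asks for it via a new (not yet standardized) env var.
    fn config_dir() -> Option<PathBuf> {
        let home = PathBuf::from(env::var_os("HOME")?);
        let opted_in = env::var_os("XDG_FORCE").is_some(); // hypothetical flag
        if cfg!(target_os = "macos") && !opted_in {
            // Canonical macOS location when the user hasn't opted in.
            return Some(home.join("Library/Application Support"));
        }
        // XDG behaviour: $XDG_CONFIG_HOME, falling back to ~/.config.
        match env::var_os("XDG_CONFIG_HOME") {
            Some(dir) => Some(PathBuf::from(dir)),
            None => Some(home.join(".config")),
        }
    }

    fn main() {
        println!("{:?}", config_dir());
    }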


No, just accept that XDG_CONFIG_HOME is not always ~/.config


Where did OP say that "XDG_CONFIG_HOME is always ~/.config"? You can of course set it to another directory if you wish, and that should be respected.


So set it to ~/Library/Preferences and no one should be complaining - Apple sees configs in the correct place, and XDG sees it as correct as well.

So why is there a problem?


Apple's docs have this to say about ~/Library/Preferences/:

This directory contains app-specific preference files. You should not create files in this directory yourself.


Provide the whole quote: "Contains the user’s preferences. You should never create files in this directory yourself. To get or set preference values, you should always use the NSUserDefaults class or an equivalent system-provided interface."

The system-provided interface in XDG apps is the XDG path. The Apple doc there assumes you're writing a GUI app, since it implies that none of those files are updated by anything other than the app itself, i.e. no manual edits.


At my former job at a FAANG, I did the math on allocating developer machines with 16GB vs 64GB based on actual job tasks, with estimates of how much thumb-twiddling wait time this would save, multiplied out by the cost of the developer's time. The cost-benefit showed a reasonable ROI that was realized in weeks for senior dev salaries (months for juniors).

Based on this, I strongly believe that if you're providing hardware for software engineers, it rarely if ever makes sense to buy anything but the top-spec MacBook Pro available, and to upgrade every 2-3 years. I can't comment on non-desktop / non-Mac scenarios or other job families. YMMV.
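
For a back-of-envelope feel (all numbers below are illustrative, not the actual figures from that analysis):

    fn main() {
        // Illustrative assumptions, not real data:
        let upgrade_cost = 1500.0_f64; // 16GB -> 64GB price delta, USD
        let hourly_rate = 150.0; // fully loaded senior dev cost, USD/hour
        let minutes_saved_per_day = 15.0; // less swapping, faster builds
        let hours_saved_per_week = minutes_saved_per_day * 5.0 / 60.0;
        let weeks_to_break_even = upgrade_cost / (hourly_rate * hours_saved_per_week);
        println!("break even in {weeks_to_break_even:.1} weeks"); // ~8 weeks
        // Halve the rate for a junior and it's ~16 weeks, i.e. months.
    }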


No doubt the math checks out, but I wonder if developer productivity can be quantified that easily. I believe there's a lot of research pointing to people having a somewhat fixed amount of cognitive capacity available per day, and that aligns well with my personal experience. A lot of times, waiting for the computer to finish feels like a micro-break that saves up energy for my next deep thought process.


Your brain tends to do better if you can stay focused on your task for consecutive, though not indefinite, periods of time. This varies from person to person and depends on how long a build/run/test takes. But the challenge for many is that the 'break' often becomes a context switch, a potential loss of momentum, and, worse, may open you up to a distraction rather than a productive use of your time.

For me, personally, a better break is one I define on my calendar and helps me defragment my brain for a short period of time before re-engaging.

I recommend investigating the concept of 'deep work' and drawing your own conclusions.


>"A lot of times, waiting for the computer to finish feels like a micro-break that saves up energy for my next deep thought process."

As an ISV I buy my own hardware, so I do care about expenses. I can attest that to me, waiting for the computer to finish feels like a big irritant that can spoil my programming flow. I take my breaks whenever I feel like it and do not need a computer to help me. So I pay for top-notch desktops (within reason, of course).


There’s also the time-to-market and bureaucracy cost. I took over a place where there was a team of people devoted to making sure you had exactly the PC you needed.

Configuring devices more generously often lets you get some extra life out of them for people who don’t care about performance. If the beancounters make the choice, you’ll buy last year’s hardware at a discount and get jammed up when there’s a Windows or application update. Saving money costs money because of the faster refresh cycle.

My standard for sizing this in huge orgs is: count how many distinct applications launch per day. If it’s greater than 5-7, go big. If it’s less, cost optimize with a cheaper config or get the function on RDS.


Also worth factoring in that top-spec hardware will have a longer usable life, especially for non-power users.


This is true, but I find my train of thought slips away if I have to wait more than a handful of seconds, let alone two minutes.

Tying this back to your point, those limited hours of focus time come in blocks, in my experience, and focus time is not easily "entered", either.


One person's micro breaks are another person's disruption of flow state


Simple estimates work surprisingly well for a lot of things because a lot of the 'unquantifiable' complexity being ignored behaves like noise. When you have dozens of factors pulling in different directions—some developers multitask better, some lose flow more easily, some codebases are more memory-hungry, and so on—it all tends to just average out, and the result is reasonably accurate. Accurate enough that it's useful data to make a decision with, at least.


That sounds reasonable, but there are also factors pulling in the opposite direction, for example Wirth's Law [1], that suggests devs with powerful computers create inefficient software.

1. https://en.wikipedia.org/wiki/Wirth%27s_law


For me the issue is that at work, with 16GB of RAM, I'm basically always running into swap and having things grind to a halt. My personal workstation has 64GB, and the only time I experience issues is when something's leaking memory.


Well, it depends what kind of time periods you're talking about. I've seen one in the past that was 60 minutes vs. 20 minutes (for a full clean compile, but often that is where you find yourself) - that is far more than a micro-break, that is a big chunk of time wasted.


You’re not waiting for the end of a thing though. You might hope you are, but the truth is there’s always one little thing you still have to take care of. So until the last build is green and the PR is filed, you’re being held hostage by the train of thought that’s tied to this unit of work. Thinking too much about the next one just ends up adding time to this one.

You’re a grownup. You should know when to take a break and that’ll be getting away from the keyboard, not just frittering time waiting for a slow task to complete.


The hours I sometimes spend waiting on a build are time that won't come back later. Sometimes I've done other tasks, but I can only track so much, and often it isn't worth it.

A faster machine can get me to productive work faster.


Most of my friends at FAANG all do their work on servers remotely. Remote edit, remote build. The builds happen in giant networked cloud builders, 100s to 1000s per build. Giving them a faster local machine would do almost nothing because they don't do anything local.


...and this is a great setup.

On the laptop you need:

- low weight so you can easily take it with you to work elsewhere
- excellent screen/GPU
- multiple large connected screens
- plenty of memory
- great keyboard/pointer device

Also: great chair

Frankly, what would be really great is a Mac Vision Pro fully customised as a workstation.


When I worked at a FAANG, most developers could get a remote virtual machine for their development needs. They could pick the machine type and size. It was one of the first things you'd learn how to do in your emb^H^H^H onboarding :)

So it wasn't uncommon to see people with a measly old 13" MacBook Pro doing the hard work on a 64cpu/256GB remote machine. Laptops were essentially machines used for reading/writing emails, writing documents and doing meetings. The IDEs had proprietary extensions to work with remote machines and the custom tooling.


Ah so the coding was done locally but run remotely?

I nearly went insane when I was forced to code using Citrix.


> Ah so the coding was done locally but run remotely?

Both, depending on the case and how much you were inclined to fiddle with your setup. And on what kind of software you were writing (most software had a lot of linux-specific code, so running that on a macbook was not really an option).

A lot of colleagues were using either IntelliJ or VScode with proprietary extensions.

A lot of my work revolved around writing scripts and automating stuff, so IntelliJ was absolute overkill for me, not to mention that the custom proprietary extensions created more issues than they solved ("I just need to change five lines in a script for christ's sake, I don't need 20GB of stuff to do that")... So I ended up investing some time in improving my GNU Emacs skills and reading the GNU Screen documentation, and did all of my work in Emacs running in screen for a few years.

It was very cool to almost never have to actually "stop working". Even if you had to reboot your laptop, your work session was still there uninterrupted. Most updates were applied automatically without needing a full system reboot. And I could still add my systemd units to the OS to start the things I needed.

Also, building on that, I later integrated stuff like treemacs and eglot mode (along with the language servers for specific languages), and frankly I did not miss much from the usual IDEs.

> I nearly went insane when I was forced to code using Citrix.

Yeah I can see that.

In my case I was doing most of my work in a screen session, so I was using the shell for "actual work" (engineering) and the work MacBook for everything else (email, meetings, web browsing, etc).

I think that the ergonomics of GNU Emacs are largely unchanged whether you're using a GUI program locally or remotely, or a shell session (again, locally or remotely), so for me the user experience was largely unchanged.

Had I had to do my coding in some GUI IDE over a remote desktop session, I would probably have gone insane as well.


Yeah it's always handy to be able to work effectively with just terminal tools.

However, VScode and Zed (my editor of choice) both have pretty decent inbuilt SSH/SFTP implementations so you can treat remote code as if it was local painlessly and just work on it.


It sounds more like doing embedded development with a TFTP boot to an NFS mounted root filesystem.


More than that, in the FAANG jobs I've had, you could not even check code out onto your laptop. It had to live on the dev desktop or virtual machine and be edited remotely.


> it rarely if ever makes sense to buy anything but the top spec Macbook Pro available

God I wish my employers would stop buying me Macbook Pros and let me work on a proper Linux desktop. I'm sick of shitty thermally throttled slow-ass phone chips on serious work machines.


Just Friday I was dealing with a request from purchasing asking if a laptop with an ultra-low-power 15W TDP CPU and an iGPU with "8GB DDR4 graphics memory (shared)" was a suitable replacement for one with a 75W CPU (but also a Core i9) and an NVIDIA RTX 4000 mobile 130W GPU in one of our lead engineers' CAD workstations.

No, those are not the same. There's a reason one's the size of a pizza box and costs $5k and the other's the size of an iPad and costs $700.

And yes, I much prefer to build tower workstations with proper thermals and full-sized GPUs as the main machine at their desk, but sometimes they need a device they can take with them.


Curious perspective. Apple silicon is both performant and very power efficient. Of course there are applications where even a top spec MacBook would be unsuitable, but I imagine that would be a very small percentage of folks needing that kind of power.

Sadly, the choice is usually between Mac and Windows—not a Linux desktop. In that case, I’d much prefer a unix-like operating system like MacOS.

To be clear, I am not a “fanboy” and Apple continues to make plenty of missteps. Not all criticisms against Apple are well founded though.


You very clearly have no experience on powerful desktop machines. A 9950x will absolutely demolish an M3 or M4 Macbook Pro in any possible test, especially multicore testing. And I don't care how "performant" or "efficient" you think it is, those M series chips will be thermally throttled like anything else packaged into a laptop.

Oh, and the vastly superior desktop rig will also come out cheaper, even with a quality monitor and keyboard.


That’s my bad for not clarifying that I am talking solely about the laptop form factor here. It’s a given that laptops are not comparable in performance to desktops. In terms of laptop hardware, Apple Silicon performs quite well.

Nice assumptions though.

It’s not just my opinion that Apple Silicon is pretty performant and efficient for the form factor; you can look up the stats yourself if you care to. Yet it seems you may be one of those people who are hostile towards Apple for less well-founded reasons. It’s not a product for everyone, and that’s ok.


I have a 7950X desktop and an M3 Max; they are far apart in performance for development, although I'll give Apple credit for good single-core performance, which shows in some contexts.


I have a decent rig I built (5900X, 7900 XT); of course it blows my M1 MacBook out of the water.

You seem like a reasonable person that can admit there’s some nice things about Apple Silicon even though it doesn’t meet everyone’s needs.


Wish my employers did the same calculation.

Gave developers 16GB RAM and 512MB storage. Spent way too much time worrying about available disk space and needlessly redownloading docker images off the web.

But at least they saved money on hardware expenses!


You mean 512GB storage?


I always bought a really large monitor for work with my own cash. When most devs had 19" or 20", I got a 30" for $1500.

Best money ever spent. It lasted years and years.

For CPUs, I wonder how the economics work out when you get into, say, 32- or 64-core Threadrippers? I think it still might be worth it.


FAANG manages the machines. Setting aside the ethics of this level of monitoring, I'd be curious to validate this by soft-limiting OS memory usage and tracking metrics like number of PRs and time someone is actively on the keyboard.


My personal experience using virtual desktops vs a MacBook aligns with your analysis. This despite the desktop virtual machines having better network connections. A VM with 16 GB of memory and 8 VCPUs can't compete with an M1 Max laptop.


To put a massive spanner in this, companies are going to be rolling out seemingly mandatory AI usage, which has huge compute requirements... which are often fulfilled remotely. And it has varying, possibly negative, effects on productivity.


I think those working on user-facing apps could do well having a slow computer or phone, just so they can get a sense of what the actual user experience is like.


Same for internet.

I've had the misfortune of being in a phone signal dead spot at times in my life.

On slow connections, sites are not simply slow, but completely unusable.

https://danluu.com/slow-device/


No doubt you mean well. In some cases it’s obvious: a low-memory machine can’t handle some Docker setup, etc.

In reality, you can’t even predict time to project completion accurately. Rarely is a fast computer a “time saver”.

Either it’s a binary “can this run that” or a work environment thing “will the dev get frustrated knowing he has to wait an extra 10 minutes a day when a measly $1k would make this go away”


One of the big things I think a lot of tooling misses, which Geoffrey touches on, is the automated feedback loops built into the tooling. I expect you could probably incorporate generation time and token cost to automatically self-tune this over time. Perhaps such things as discovering which prompts and models are best for which tasks automatically instead of choosing these things manually.

You want to go meta-meta? Get ralph to spawn subagents that analyze the process of how feedback and experimentation with techniques works. Perhaps allocate 10% of the time and effort to identifying what's missing that would make the loops more effective (better context, better tooling, better feedback mechanism, better prompts, ...?). Have the tooling help produce actionable ideas for how humans in the loop can effectively help the tooling. Have the tooling produce information and guidelines for how to review the generated code.

I think one of the big things missing in many of the tools currently available is tracking metrics through the entire software development loop. How long does it take to implement a feature? How many mistakes were made? How many errors were caught by tests? How many tokens does it take? And then using this information to automatically self-tune.
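
A sketch of the kind of per-task record I mean (the field names and scoring are invented, not any existing tool's schema):

    /// Hypothetical per-feature metrics an agent loop could log and
    /// later mine to pick prompts and models automatically.
    #[derive(Debug)]
    struct LoopMetrics {
        feature: String,
        model: String,
        prompt_variant: String,
        wall_clock_secs: u64,
        tokens_used: u64,
        errors_caught_by_tests: u32,
        defects_found_in_review: u32,
    }

    fn main() {
        let m = LoopMetrics {
            feature: "add-user-login".into(),
            model: "model-x".into(),
            prompt_variant: "v3".into(),
            wall_clock_secs: 900,
            tokens_used: 250_000,
            errors_caught_by_tests: 4,
            defects_found_in_review: 1,
        };
        // A crude fitness score: cheaper runs with fewer escaped defects win.
        let score = (m.tokens_used as f64 / 1000.0) * 0.01 // assumed $/1k tokens
            + m.defects_found_in_review as f64 * 10.0; // arbitrary penalty
        println!("{m:?} -> score {score:.2}");
    }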


You can show the exact opposite of this in a degenerate fixed-point situation. Say you have -1, 0, +1 in each dimension. The only valid coordinates are the 6 on each face: (±1, 0, 0), (0, ±1, 0), (0, 0, ±1). Not sure if this is the only counterexample. I'd guess that with floating-point math and enough bits the bias would be very small and would probably even out.
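
You can brute-force that degenerate case to check it (a quick sketch; 'valid' here means squared length exactly 1):

    fn main() {
        // Components limited to {-1, 0, +1}: squared length is 0..=3,
        // so only vectors with a single nonzero component have length 1.
        let vals = [-1i32, 0, 1];
        let mut unit_points = Vec::new();
        for &x in &vals {
            for &y in &vals {
                for &z in &vals {
                    if x * x + y * y + z * z == 1 {
                        unit_points.push((x, y, z));
                    }
                }
            }
        }
        assert_eq!(unit_points.len(), 6); // the 6 face points
        println!("{unit_points:?}");
    }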


If you're going to the effort of writing a proc macro, you may as well output a string from the macro instead of code.

If you're going for idiomatic Rust, then you might instead output a type that has a Display impl rather than generating code that writes to stdout.
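
A minimal sketch of that second shape (the Greeting type is invented for illustration):

    use std::fmt;

    /// Return a value whose Display impl renders the output;
    /// the caller decides whether it goes to stdout, a file, or a test.
    struct Greeting<'a> {
        name: &'a str,
    }

    impl fmt::Display for Greeting<'_> {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "Hello, {}!", self.name)
        }
    }

    fn main() {
        let g = Greeting { name: "world" };
        println!("{g}"); // stdout if you want it...
        assert_eq!(g.to_string(), "Hello, world!"); // ...or capture it
    }

This keeps the macro (if you still need one at all) focused on building the value, and leaves the I/O decision to the caller.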


Thanks for this Rudy. I remember enjoying reading the Ware Tetralogy as a teen in the 90s. I wonder how well it holds up today. Might have to put that on my re-read list.


oh man, flashbacks to feeling slightly squicked at the sentient plastic sex toys. lol


Technically, a more direct analogy would be that some newspaper print-on-demand service exists, and the instructions for printing are distributed to the machines that print the newspapers, but are modified during distribution before the newspaper is printed by the receiver.

As much as I'm pro ad blockers, this seems like a reasonable reading of the law. An interesting way to convince yourself of this is to try to find a solid line that you could draw, based purely on a set of principles grounded in some legal standard, about what the difference between a desktop computer program, a downloadable JavaScript program, CSS, and HTML really is in terms of how they cause a computer to act on the information.

That said, I think you could fairly reasonably find that Section 69e of the German Copyright Act (English translation [1]) applies to adblock software, though I'd imagine the plaintiff would probably argue that the use of adblock software interferes with their interests.

---

Section 69e Decompilation

(1) The rightholder’s consent is not required where reproduction of the code or translation of its form within the meaning of section 69c nos. 1 and 2 is indispensable to obtain the information necessary to achieve the interoperability of an independently created computer program with other programs, provided that the following conditions are met:

1. the acts are performed by the licensee or by another person authorised to use a copy of a program or on their behalf by a person empowered to do so;

2. the information necessary to achieve interoperability has not previously been made readily available to the persons referred to in no. 1;

3. the acts are confined to those parts of the original program which are necessary to achieve interoperability.

(2) Information obtained through acts as referred to in subsection (1) may not be

1. used for purposes other than to achieve the interoperability of the independently created program,

2. given to third parties, except when necessary for the interoperability of the independently created program,

3. used for the development, production or marketing of a computer program which is substantially similar in its expression or for any other acts which infringe copyright.

(3) Subsections (1) and (2) are to be interpreted such that their application neither impairs the normal exploitation of the work nor unreasonably impairs the rightholder’s legitimate interests.

---

[1]: https://www.gesetze-im-internet.de/englisch_urhg/englisch_ur...

