Hacker News | bigdubs's comments

Adding to this, enabling continuous releases and leaning into release channels could help get more into users' hands sooner.

In practice it's a challenge because the OS bundles a lot of separate things into releases: Safari changes are tied to OS changes, which are tied to Apple Pay features, and so on.

It would require a lot of feature flagging and extra machinery, which may add more complexity than it removes.

Another way is to start un-bundling releases and fundamentally re-thinking how the dependency graph is structured.


I think they’re painted into a corner with WWDC. Everything has to be a crowd-pleasing, brain-busting wow drop each year. I’m certain there are teams that design their entire workflow around the yearly WWDC. It honestly feels like an executive-leadership problem to solve.


If that is a significant part of the problem, then moving WWDC from an in-person keynote attended mostly by nerds and glanced at by the media to an overproduced movie geared at the media and ordinary consumers first probably didn't help. They could've gone back to a stage presentation after COVID, but some of that transition had already been happening prior to that (I recall an increase in how many jokes/bits they were doing in the late 2010's, although that could just be my perception).


IMHO SD cards fail much more often than the USB-C connector would, what's the worry? If the camera mounts as a mass storage device it's one fewer thing to go wrong.


Firstly, the main failure mode of connectors in cameras isn't the connector per se; it's generally that the board the connector is mounted on breaks off from the mainboard.

Secondly, and more importantly, the lifespan of your camera is now the lifespan of the internal storage. That seems very unappealing.


That's not a random limitation, there are very specific reasons[1] you cannot easily add generic methods as struct receiver functions.

[1] https://go.googlesource.com/proposal/+/refs/heads/master/des...


For someone not well-versed in language implementation details, it may very well feel random.

I've been using Go as my primary language for a decade, and the lack of generics on methods was surprising to me the first time I ran into it, and the reasoning not obvious.


Yeah. I'm not claiming they didn't back themselves into a corner here.

There's no theoretical reason not to have it, the reason is because of a random intersection of other design decisions... unless you're saying they made those choices fully expecting to have these restrictions on generics later?


> Or, we could decide that parameterized methods do not, in fact, implement interfaces, but then it's much less clear why we need methods at all. If we disregard interfaces, any parameterized method can be implemented as a parameterized function.

What? Methods are not needed if not for implementing an interface?

Anyway, functions could also implement interfaces; some languages allow that.

I swear the go docs read like a cult.


Functions in Go can be generic, just not methods.

And unless you're also using interfaces, methods are no different from functions aside from call syntax.


But "methods are only needed because of interfaces" is simply not true. Not true in all other OOP languages that I know of, not true in go, and not true in go's stdlib (that is, in practice).

Methods bind state with a function.

That an object can satisfy an interface is secondary here. In different languages, an interface could be satisfied with a combination of methods, fields, or nominality.

If the statement "we could decide that parameterized methods do not, in fact, implement interfaces, but then it's much less clear why we need methods at all" were true, then every struct in Go (in the stdlib and elsewhere) would implement some interface (and would have to be used via that interface to make sense). This is obviously not the case.


If the method is not dynamically dispatched, it is exactly equivalent to a function with receiver passed as the first argument. The receiver-dot notation is just a convenient form of implicit namespacing, then, nothing more. And, in Go, methods are only dynamically dispatched on the receiver in the context of interfaces. So, everything else is just syntactic sugar. And what the doc is saying is that supporting this syntactic sugar makes the spec much more complicated, so they deemed it not worthwhile, given that a global function works just as well in this context.
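The equivalence described above can be shown in a few lines (a toy type, names hypothetical): outside of interface dispatch, a method call and a call to a plain function taking the receiver first do the same thing, and Go's method expressions make the correspondence explicit.

```go
package main

import "fmt"

type Counter struct{ n int }

// Method form: receiver-dot syntax.
func (c *Counter) Inc(by int) { c.n += by }

// Equivalent plain function: receiver passed as the first argument.
func Inc(c *Counter, by int) { c.n += by }

func main() {
	a, b := &Counter{}, &Counter{}
	a.Inc(3)  // method call
	Inc(b, 3) // function call; same effect
	// A method expression turns the method into exactly such a function:
	f := (*Counter).Inc
	f(a, 2)
	fmt.Println(a.n, b.n) // 5 3
}
```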


Yes, we all know what dynamic dispatch is, which is exactly what the docs talk about.

> Or, we could decide that parameterized methods do not, in fact, implement interfaces, but then it's much less clear why we need methods at all.

I will say it again: the go docs want to gaslight the reader into believing everything is all right, a design decision, and never a bad one. Constantly. It reads like something between a cult text and a marketing piece, sometimes.

Methods have many more upsides than dynamic dispatch, or else there would be no (or few) methods that don't have an interface. We could all code like it's C, yet we don't. The only times I have been forced into this C style is when I wanted a method with generic parameters.


I don't see what other upsides there are aside from the slightly more convenient syntax, though. In other languages methods also serve as part of the encapsulation mechanism, but in Go all visibility is handled at the package level. What other upsides do you have in mind?

Don't get me wrong, BTW, I'm not at all a fan of Go overall design and numerous inconsistencies and limitations. But, in this case at least, they have a valid point wrt complexity of the feature (which, to be fair, is largely induced by design of Go's other features, but it is what it is at this point) versus its usefulness. I can only wish for other PL warts of Go to be as minor as this one.


Don't ask me, ask why go does it, all over the stdlib, 3rd party libs, and projects (your project too), and why this fact directly contradicts your interpretation of the doc comment about being "questionable".

I am okay with go doing lots of type complexity tradeoffs for one of the fastest and leanest compilation times out there. I'm not okay with the go creators consistently gaslighting readers into believing that this tradeoff is not happening. In other words, they want to have the cake (speed) and eat it too (my type system is actually good).

I understand that in this day and age, you are forced to do marketing (manipulate the truth) to make a PL popular. But the go project just does it too blatantly for my taste.


People assuming this is exclusively a competitive posture are missing the point.

The app store isn't just about making more money, it's about enforcing privacy and security guidelines for apps through the review process and through checks for unauthorized api usage.

Apple's product is privacy; they view privacy as a premium feature worth paying for, and 3rd party app stores that are the wild west for privacy are antithetical to this.


Said customers who care about privacy would therefore continue to use the App Store.


Except important products will be pulled from the App Store because they will become exclusives in 3rd party stores.


These "important products" are already getting out of the Apple Store, but instead of moving to the competing App Store alternative (which currently doesn't exist), they are moving to another ecosystem


What if Apple presented better deal terms on a level playing field?


Why should Apple divert resources creating a secondary environment?


Because it's the law in a region Apple wants to do business. Simple.


Apple did what they thought was best. The EU said no and the next step is the court system. When that result is known Apple can decide what markets to operate in.

When Apple Kerberized all their services, they essentially made it possible to granularly manage service availability by user, device, and region.

The EU fining schedule is so out of proportion that it creates a real business risk to Apple that may be greater than the EU market size.

I don’t think the EU efforts are as detailed or grounded in reality as the EU considers them.

As a spectator I can’t wait to follow the ups and downs of this.


which is about brand marketing, which is about making more money. without the differentiation of the app store, they are that much closer to being android.

> Apple's product is privacy

that's a large component of their brand, not their product.

for contrast, signal's product is privacy. (note: i'm no fan of signal)

consider that the facebook app is allowed, on the first party app store. i'm failing to see how the app store gates privacy.


The App Store rules are mostly about making more money. So what, it's not a charity.


3rd party app stores will still be subject to GDPR, which applies to all European customers, so calling it a wild west is a weird claim.


Many ring-buffer implementations grow the backing storage array transparently on enqueue but do so in place, discarding the old arrays; what's the advantage of keeping the previous arrays? Naively I'd say it would reduce GC churn because you wouldn't have to free the old arrays, but I'm curious what the impact of that is in benchmarks.

Separately, the simulator is cool and very helpful!


If you discard the old array (and allocate a bigger one before), you must also copy all enqueued material.

Also, this one enqueue will be mega expensive - a clear "fat tail" in the latency histogram.

In MultiArrayQueue you keep all already enqueued material in-place, "just" allocate the bigger array, register the new diversion, enqueue the one new element - and done.

Thanks


why not free the previous smaller chunk after the reader has finished reading from it?

for me it would be better to allocate a new buffer but allow reading from the old one while it still contains data, and after that deallocate it and keep only the new one in use


You might run into ABA, though that isn't an issue for managed languages.


Yeah, leaving small old buffers behind seems like a major no-no to me. It could be useful if you think you'll shrink back down, but it feels like the cache-locality suffering and iteration/tracking penalties strongly incentivize getting rid of the old buffer asap.

One other thing I want to shout out, I saw what I thought was a really neat multicast ring buffer the other day where the author has an atomic for each element, rather than the typical reader/writer atomics. The promise was having much less contention on any given atomic, in most cases. https://github.com/rezabrizi/SPMC-Queue https://news.ycombinator.com/item?id=40410172


Removing anything in non-blocking structures is problematic, see e.g. the referenced lecture of Professor Scott.

You never know how many concurrent threads are still at the place you wish to remove.

You would have to deal with stuff like hazard pointers, limbo lists and the like.

Better to keep the small arrays there.


Isn't this only an issue when you allow referencing data in the queue?

If the queue only allows copying data out, you can increase the reader pointer after the data has been copied to a different buffer; therefore nothing can be at the place we are removing
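The copy-out discipline suggested here can be sketched as a tiny bounded SPSC ring (hypothetical names, single producer and single consumer assumed): the reader copies the element out first and only then publishes the advanced read index, so the writer never overwrites a slot still being copied.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// spscRing is a bounded single-producer/single-consumer ring buffer.
type spscRing struct {
	buf  []int
	head atomic.Uint64 // next slot to read
	tail atomic.Uint64 // next slot to write
}

func newSPSCRing(capacity int) *spscRing {
	return &spscRing{buf: make([]int, capacity)}
}

func (r *spscRing) TryPush(v int) bool {
	t := r.tail.Load()
	if t-r.head.Load() == uint64(len(r.buf)) {
		return false // full: slot not yet released by the reader
	}
	r.buf[t%uint64(len(r.buf))] = v
	r.tail.Store(t + 1)
	return true
}

func (r *spscRing) TryPop() (int, bool) {
	h := r.head.Load()
	if h == r.tail.Load() {
		return 0, false // empty
	}
	v := r.buf[h%uint64(len(r.buf))] // copy out first...
	r.head.Store(h + 1)              // ...then release the slot
	return v, true
}

func main() {
	r := newSPSCRing(2)
	r.TryPush(1)
	r.TryPush(2)
	fmt.Println(r.TryPush(3)) // false: full until the reader releases a slot
	v, _ := r.TryPop()
	fmt.Println(v)
	fmt.Println(r.TryPush(3)) // true now
}
```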


With ConcurrentMultiArrayQueue, there can be N threads INSIDE of the program code of the Queue, running or preempted (for a not predictable time) and you cannot control it.


I'm sorry, but that doesn't seem at all like any kind of fundamental constraint. At its most basic: simply don't have your readers advance until after they're done reading? This seems trivial as fuck. I've seen a lot of protestations suggesting that maybe I'm missing some factor, but no, the problem of understanding when data is done being read really isn't anywhere near as hard as the rebuttals here make it seem, and everything works much better when we can accept this constraint.

Perhaps we want to have any of the given multicast readers able to read more than one element at a time, and that does complicate things somewhat. But hardly impossible to handle.

Again: deeply disagreeing with the premise here that this can't be done. And it isn't even really a significant penalty, if your consumers do have to be async consumers that need to hold open their reading for a while. Unclear what the protests are.


Hi jauntywundrkind, just to make sure we have the same understanding:

The smaller arrays are not "left behind" in the garbage sense - the queue will use them again and again in the next rounds. See simulator. Re-use, not Re-cycle - the Garbage-Free Mantra.

If the Queue re-cycles the smaller arrays, it would not be garbage-free anymore.

If you still believe that the smaller arrays should be re-cycled (I would be curious why), then comes the technical problem:

Let's imagine a reader stands immediately before reading the array (e.g. to check if the writer has already written). Now the OS preempts it. For how long? We don't know. In the meantime everything in the queue moves forward, and the program code in some other thread (the writer, probably) decides to de-allocate the array (and indeed does it).

Now the preempted reader wakes up and the first thing it does is to read from that (deallocated) array ...


When I wrote it [0], the goal was semi-bounded [1] latency.

[0] It needs a refactor, but here's some version of the idea. https://github.com/hmusgrave/zcirc

[1] You're not _really_ bounded since you have blocking calls to the underlying allocator (usually somewhere close to the OS, and even that can do more work than you might expect when you ask for a contiguous array), but it's still much cheaper than a bulk copy.


Quantity may have a quality all its own in warfare, but for comments, having the invite tree and accountability is pretty nice!

I'd rather have two really insightful comments than 300 trying to promote themselves.


From my experience in invite-only forums, they only assure some baseline (mainly, fewer troll posts); they don't help at all with the quality of comments.

The only thing I've seen that ever significantly improved the quality of comments is the vote-based comment system when it first started to emerge (I'm thinking of Reddit in its first few years). But unfortunately it has nowadays been gamed to death, probably even worse than old public forums.


Except you don't see those 2 really insightful comments there.


Invite doesn't always guarantee insightful comments.


> 300

Exaggerating doesn't reflect well on your argument.

HN doesn't have a notable problem with self-promotion.


We use DuckDB extensively where I work (https://watershed.com), the primary way we're using it is to query Parquet formatted files stored in GCS, and we have some machinery to make that doable on demand for reporting and analysis "online" queries.


Storing data in Parquet files and querying via DuckDB is fast and kind of magical.


Shockingly fast, and having the intermediate files be immutable is super nice.


Haven't used it yet, but this aspect seems very appealing.


Do you load the Parquet files into DuckDB or just query them directly?


We query them directly in most cases with the httpfs plugin, but for hot paths we fetch them and cache them on disk locally.


Tailscale has some server components (account management etc.) that are powered with SQLite.

Can read more here: https://tailscale.com/blog/database-for-2022/


What one engineer can operate confidently doesn't necessarily extend to the rest of the team, who will have to support that system if that engineer leaves or has to work on other projects.

The lowest common denominator on a team is boring.


There is a deeper strategy here with go vs. node; having a standard library maintained by professionals.

I would rather build on a common set of libraries secured by people who are paid full-time to maintain them, and maybe have slightly worse ergonomics, than have a community of libraries that come and go and have inconsistent quality.

This standard library approach yields fewer dependencies, fewer changes over time, and better consistency between projects.


The downside of the standard library approach is that things tend to ossify. While I agree that slower change can be a good thing sometimes, putting things like a HTTP server in the standard library means less experimentation around different ways of doing things, and more difficulty getting performance and other improvements into the hands of language users.

Sure, people can make a third-party module that implements a HTTP server, but the incumbent default that's shipped with the language has an inherent (and often unfair) advantage and a lot of inertia behind it.

I don't really care about the whole "professionals" bit. Sure, I don't want to be relying on something mission-critical to me that's maintained by one person doing it in their spare time. But there is a world of possibilities between that and having a dedicated paid team. Consider, also, that the Go team is only funded so long as Go is important to Google's corporate strategy. Once it isn't, funding will start to dry up, and Go will have to look for a new funding and governance model. That's not necessarily a bad thing, and I'm sure Go would still succeed despite that. But that's kinda my point: this whole "maintained by funded professionals" thing doesn't really matter all that much.


Isn't Node API an equivalent of go standard library?


Yes, but it's super barebones. Its successor, Ryan Dahl's second attempt at a JS runtime, Deno, has a much fuller standard library (inspired by Go's).


I wish we'd stop trying to make broken languages work. This feels like hill-climbing into the strangest local optimum possible. JS is not the best example of an interpreted language. Wouldn't it be better to put Python in the browser than to put JS on the server? Can't wait for WASM to be a first-rate citizen on the web so we don't have to deal with this anymore.


I think you would be surprised to learn that more developers love TypeScript than Python these days, according to one popular survey.[0]

All of this is subjective, of course. WASM isn't going to make Python the language of choice for browsers any time soon.

[0]: https://insights.stackoverflow.com/survey/2021#section-most-...


I don't think the comparison is entirely fair since one of the main attractions of TS is that it runs in the browser. Python can unfortunately not fill the same role right now. So I'd keep that in mind while looking at that ranking. But yes, I see many people like it. Maybe I'm missing something, but it's still too JavaScript-y for me.


> Wouldn't it be better to put Python in the browser than to put JS on the server?

I think that's a categorical "no", because Python isn't an objectively better language than JavaScript. I'm saying this as a Python developer since v1.5 (>20 years).

Subjective opinions are a different matter.


Yes, Node.js ships with what is effectively a very thin standard library for some low-level things like interacting with the file system, the process model, and some security features like TLS.


The Fetch API, supported by browsers for over 5 years now, is only now making it into the official Node API https://blog.logrocket.com/fetch-api-node-js/


Node.js had `http` in its standard library for a long time though.


so, cathedral vs bazaar

