half-kh-hacker's comments

What else would "self-hosting all of Bluesky" mean other than hosting a copy of the entire site? If you just want to participate in the network, host a PDS, which only stores your own posts.


Surely there's some middle ground between only hosting your own data (and relying on another site to keep track of your following / followers) and hosting a duplicate copy of the entire network?


For sure. If you just want to host your own data, you can do that. A PDS for you and maybe some friends is very small and cheap to host.


My understanding though is that having a PDS on its own is useless without an AppView to collect the data from the relay? Or am I misunderstanding the architecture here? https://docs.bsky.app/docs/advanced-guides/federation-archit...


I'm talking about the case where you wanted to run your own PDS and use all of the other infrastructure being run by Bluesky.

If you fully want your own copy of everything, then you'd want to run a copy of everything. But you don't have to. It really depends on what your goals are. That's why the post is about the maximal scenario. "Just your own PDS" is the minimalist scenario. But I think it's the one that makes sense for 95% of users who want to self-host.


Right, and I'm saying "surely there must be a middle ground between 'using all of Bluesky's infrastructure' and 'having a 4.5 TB copy of every post ever made on the network'"


What exactly would that be?

I feel like the middle ground you're talking about could be just a feed?

A feed is a server that consumes the firehose and decides whether to store posts; when loaded in the app, it returns some posts to create a feed.

So essentially you only store references to part of the network rather than storing the whole thing.
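
To make that concrete, here's a rough sketch of that kind of feed server in TypeScript. The storage and selection logic are stand-ins, and the getFeedSkeleton endpoint and response shape are from memory, so check them against the lexicon before relying on this:

  import express from "express"; // npm install express

  // Filled by whatever firehose consumer decides which posts to keep
  const storedPostUris: string[] = [];

  const server = express();
  server.get("/xrpc/app.bsky.feed.getFeedSkeleton", (req, res) => {
    const limit = Number(req.query.limit ?? 30);
    // The app view hydrates these URIs into full posts for the client
    res.json({ feed: storedPostUris.slice(0, limit).map((post) => ({ post })) });
  });
  server.listen(3030);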


Consider the Nostr protocol.


Your following list is stored in your own repo, so it lives on your PDS. You can theoretically have partial replicas of the network but nobody has bothered yet; if you want to make software like that, a good start would be subscribing to the firehose and filtering down to DIDs you care about / supplying the watched DIDs parameter to a Jetstream instance
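
As a very rough sketch (assuming the public Jetstream endpoint and its wantedDids query parameter; the DIDs below are placeholders, and the instance URL is whatever you actually point at):

  import WebSocket from "ws"; // npm install ws

  // Hypothetical DIDs you'd want a partial replica of
  const watchedDids = ["did:plc:example1", "did:plc:example2"];
  const params = watchedDids.map((d) => `wantedDids=${d}`).join("&");

  const socket = new WebSocket(`wss://jetstream2.us-east.bsky.network/subscribe?${params}`);
  socket.on("message", (raw) => {
    const event = JSON.parse(raw.toString());
    // A partial replica would persist this record instead of just logging it
    console.log(event.did, event.kind);
  });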


The middle ground you're looking for is impossible in the AT protocol; it is, however, what the Nostr protocol is aiming towards.


Speculatively, I think Mastodon being AGPL could have played some role here


A lot of Electron applications in the Arch Linux package repositories use a system electron package, which is nice. They have to be split by major version, though.


It's needlessly conspiratorial to ascribe intention here, especially when the Servo community have already expressed interest in supporting upcoming (difficult!) standards like CSSOM


> We've shown that many measurements of latency [...] ignore the full capture and playback pipeline

In the repo linked in OP is a screenshot showing a wall clock next to its playback on the streaming site -- that's end-to-end to me. So how is this relevant?


Because latency is a distribution, and these photos are often selected at the best-case P0 end of all the encode/decode processes, whereas what actually matters is the worst-case P99.

A proper implementation will make sure the worst-case latency is accounted for and not cherry-pick the best case.
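
Made-up numbers, but to illustrate the point with a nearest-rank percentile:

  // Nearest-rank percentile over already-sorted samples
  function percentile(sortedMs: number[], p: number): number {
    const idx = Math.ceil((p / 100) * sortedMs.length) - 1;
    return sortedMs[Math.min(sortedMs.length - 1, Math.max(0, idx))];
  }

  const samples = [180, 190, 195, 210, 240, 260, 410].sort((a, b) => a - b); // ms, invented
  console.log("p50:", percentile(samples, 50), "p99:", percentile(samples, 99));
  // p50 is 210 ms here, but p99 is the 410 ms outlier a cherry-picked photo never shows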


The methodology of the linked keyboard latency article, which includes the physical travel time of the key, has always irked me a little.


Yet as a lover of Cherry Reds and Blues, in my opinion that time should most definitely be included. I am not a gamer, but I do notice the difference when I'm on a Red keyboard and when I'm on a Blue keyboard.


My initial gut reaction to this was - yeah, of course. But after reading https://danluu.com/keyboard-latency/ - I'm not so sure. Why exactly should physical travel time not matter? If a keyboard has a particularly late switch, that _does_ affect the effective latency, does it not?

I can sort of see the argument for post actuation latency in some specific cases, but as a general rule, I'm struggling to come up with a reason to exclude delays due to physical design.


It's a personal choice of input mechanism that you can add to the measured number. Also, the activation point is extremely repeatable. You become fully aware of that activation point, so it shouldn't contribute to the perceived latency, since that activation point is where you see yourself as hitting the button. This is the reason I don't use mechanical keyboards; I can't activate the key in a reasonable time.


>This is the reason I don't use mechanical keyboards; I can't activate the key in a reasonable time.

From what I understand, non-mechanical keyboards need the key to bottom out to actuate, whereas mechanical switches have a separate actuation point and do not need to be fully pressed down. In other words mechanical switches activate earlier and more easily. What you said seems to imply something else entirely.


If you're comparing a mechanical key switch with 4mm travel to a low-profile rubber dome with 2mm or less of travel, the rubber dome will probably feel like it actuates sooner—especially if the mechanical switch is one of the varieties that doesn't provide a distinct bump at the actuation point.


No, I’m speaking only of the travel required to activate the key. There’s still travel to the activation point for mechanical keyboards. I’ve yet to find a mechanical switch with an activation distance as small as, say, a MacBook’s (1 mm). Low-travel mechanical switches, like the Kailh Choc (as others have mentioned), are 1.3 mm. Something like a Cherry Red is 2 mm.


There are mechanical switches with near 1 mm travel, comparable to laptop keyboards. E.g. Kailh Choc switches have 1.3 mm travel.

(I would love to see scissors-action keys available to build custom keyboards, but I haven't seen any.)


We don’t all press keys in exactly the same way. How would you control for the human element?


Variance isn't a reason to simply ignore part of the problem.


A lot of modern keyboards allow you to swap out switches, which means switch latency is not inherently linked to a keyboard.

It also completely ignores ergonomics. A capacitive-touch keyboard would have near-zero switch latency, but be slower to use in practice due to the lack of tactile feedback. And if we're going down this rabbit hole, shouldn't we also include finger travel time? Maybe a smartphone touch screen is actually the "best" keyboard!


Latency isn't everything, but that doesn't mean it's irrelevant either. I'm OK with a metric that accurately represents latency, with the caveat that feel or other factors may be more important. If key and/or switch design impacts latency in practice, shouldn't we measure that?

I guess that is an open question - perhaps virtually all the variance in latency due to physical design is tied up with fundamental tradeoffs between feel, feedback, sound, and preference. If so - then sure: measuring the pre-activation latency is pointless. On the other hand, if there are design choices that meaningfully affect latency without meaningfully impacting other priorities, or even where gains in latency are perhaps more important than (hypothetically) small losses elsewhere - then measuring that would be helpful.

I get the impression that we're still in the phase that this isn't actually a trivially solved problem; i.e. where at least having the data and only _then_ perhaps choosing how much we care (and how to interpret whatever patterns arise) is worth it.

Ideally of course we'd have both post-activation-only and physical-activation-included metrics, and we could compare.


I'm fine with wanting to measure travel time of keyboards but that really shouldn't be hidden in the latency measurement. Each measure (travel time and latency) is part of the overall experience (as well as many other things) but they are two separate things and wanting to optimize one for delay isn't necessarily the same thing as wanting to optimize both for delay.

I.e. I can want a particular feel to a keyboard which prioritizes comfort over optimizing travel distance, independent of wanting the keyboard to have a low latency when it comes to sending the triggered signal. I can also type differently than the tester, and that should change the travel times in comparisons, not the latencies.


Because starting to measure input latency before the input happens is flat-out wrong. It would be just as sensible to start the measurement from when your finger starts moving, or from when the nerve impulse that will start your finger moving leaves your brainstem, or from when you first decide to press the key. These are all potentially relevant things, but they aren't part of the keypress-to-screen input latency.


Yeah that is definitely something that shouldn’t be included


My favorite thing about the modern landscape of desktop apps is that… Electron apps are tinkerable! You go to the app's resources folder, extract an asar with one command, and then you can edit the Node.js-side files. Then, you can just change what the BrowserWindow instance is loading to resources you control, and you have end-to-end control over the entire application.
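
For instance, after something like `npx asar extract app.asar app-src` (the package name and paths may differ by app), the main-process entry point usually boils down to a BrowserWindow you can repoint; the localhost URL below is made up:

  import { app, BrowserWindow } from "electron";

  app.whenReady().then(() => {
    const win = new BrowserWindow({ width: 1200, height: 800 });
    // originally something like win.loadFile("dist/index.html");
    win.loadURL("http://localhost:3000"); // point it at assets you control instead
  });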


I'm working on an additive synthesizer right now and it's so fun to make plucks where the partials have different envelopes
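
A toy version of the idea (all the numbers are arbitrary): give each partial its own exponential decay, so the upper harmonics die away faster than the fundamental.

  const sampleRate = 44100;
  const f0 = 220; // fundamental in Hz

  function sampleAt(t: number): number {
    let out = 0;
    for (let n = 1; n <= 8; n++) {
      const envelope = Math.exp(-t * (2 + 3 * n)); // higher partials decay faster
      out += (1 / n) * envelope * Math.sin(2 * Math.PI * n * f0 * t);
    }
    return out;
  }

  // one second of the pluck, ready to hand to a WAV writer or an AudioBuffer
  const buffer = Float32Array.from({ length: sampleRate }, (_, i) => sampleAt(i / sampleRate));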


Which dark mode are you talking about? As far as I can tell the page (& demos) don't ship any styles for prefers-color-scheme: dark at all


Reminds me of the meme of the person shoving a stick into their bike's wheel and then complaining about ______. We used to get this a lot here, with people complaining about sites being broken because they were hard-blocking essentially all JavaScript; the issues are of your own making here...


In my case Firefox used the OS's (Windows 10) dark mode preference.

In about:config the layout.css.prefers-color-scheme.content-override setting was on 2 (system). Setting it to 3 (browser) turned the dark mode off.


a whole script? surely it's just something like `find . -name '*.aiff' | parallel ffmpeg -i {} {}.mp3`

