
Can someone also comment on how secure the built-in password manager in Firefox is against unsophisticated malware attacks that simply copy your browser extension data and such, compared to Bitwarden, which requires a password to unlock it and, as I understand it, stores everything encrypted on disk.


If you don't use a master password, it's unsafe. And even with a master password, I vaguely remember it's not that safe either, but that might be outdated info.

This has been going around in the last few days: https://github.com/Sohimaster/Firefox-Passwords-Decryptor


I recently experimented with running llama-3.1-8b-instruct locally on consumer hardware, namely my Nvidia RTX 4060 with 8GB VRAM, as I wanted to experiment with prompting PDFs with a large context, which is extremely expensive with how LLMs are priced.

I was able to fit the model with decent speeds (30 tokens/second) and a 20k token context completely on the GPU.

For summarization, the performance of these models is decent enough. Unfortunately, in my use case I felt that using Gemini's free tier, with its multimodal capabilities and much better quality output, made running local LLMs not really worth it as of right now, at least for consumers.
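For anyone wanting to reproduce a setup like this, here's a minimal sketch using llama-cpp-python; the GGUF file name, quantization level and context size are assumptions for an 8GB card, not an exact recipe:

    from llama_cpp import Llama

    # Assumption: a 4-bit GGUF quantization of Llama 3.1 8B Instruct, which is
    # roughly what fits in 8 GB of VRAM next to a ~20k-token KV cache.
    llm = Llama(
        model_path="Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
        n_gpu_layers=-1,   # offload every layer to the GPU
        n_ctx=20480,       # ~20k-token context window
    )

    pdf_text = open("paper.txt").read()  # text extracted from the PDF beforehand

    resp = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "Summarize the provided document."},
            {"role": "user", "content": pdf_text},
        ],
        max_tokens=512,
    )
    print(resp["choices"][0]["message"]["content"])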


You moved the goalposts when you added 'multimodal' there. Another point: no one reads PDF tables and illustrations perfectly, at any price, AFAIK.


Supposedly, submitting screenshots of PDFs (at a large enough zoom per tile/page) to OpenAI's GPT-4o or Google's equivalent is currently the best way of handling charts and tables.
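For what it's worth, here's a minimal sketch of that screenshot approach using the OpenAI Python client; the page image path and the prompt are just placeholders:

    import base64
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # One rendered PDF page as a PNG, zoomed enough that table text is legible.
    with open("page_001.png", "rb") as f:
        page_b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract any tables on this page as CSV."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{page_b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)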


Where was the off by one error? I read the article and didn't find any mention of it.


I haven't seen it reported anywhere. It was my machine that found the key. To commemorate the event a plaque was made and permanently attached to the case. If it is still in the archives somewhere, the plaque lists the key reported as found by the client, which you would see is one off from the actual key that was the solution.

I never asked the distributed.net team, but I always suspected this was intentional to maybe thwart people from front-running a submission, so not truly a bug. If Adam, Jeff or David read this, maybe they could chime in.


Why is low-latency livestreaming so hard, while at the same time cloud gaming tech like Nvidia GameStream can deliver such a flawless experience?

I've used Moonlight + Nvidia Gamestream with ~40ms RTT and couldn't feel a difference in competitive shooters, so total latency must be pretty low.

Does it have something to do with the bandwidth requirements? (1 stream vs. potentially hundreds)


There's no way people can't tell the difference; I can, with various streaming methods from my PC to my Shield/TV, wired, in the same house. Mouse-to-photon latency of a good PC will be in the range of 10-20 ms; best case you're doubling or tripling that.


I can feel 40ms for sure and there’s no way you play a competitive shooter with a 40ms delay. Hell even a Bluetooth mouse gets annoying.

Maybe if you’re playing Microsoft Office it’s ok.


Nah, to be fair it's fine for a lot of games, which are also played on old-gen consoles with terrible gamepad-to-TV latency. Sure, twitchy multiplayer games are definitely not among them. I'm not big on competitive multiplayer, only Rocket League, and I can't do that over local streaming. Pretty much anything else I play is OK though.


You, my dear Internet friend, are confidently expressing your lack of experience. No one who has played multiplayer LAN games, or low-latency Internet games, could or would ever say that streamed gaming, such as the dead Stadia, or Moonlight, whatever, is comparable to the alternative. Nah, they couldn't.


You're conflating local streaming with internet streaming, and I specifically excluded twitchy multiplayer games...


I don't think that I could feel the difference between 40ms and 10ms RTT when playing something like DOTA2 or AoE2.


Most online games use client-side prediction, so any input made by the client happens almost instantly on the client and feels really good, and it can be rolled back if the server disagrees. If you stream your game remotely with 40 ms, it will add 40 ms to your input, and that just feels bad (not to mention jitter, especially if you're on semi-congested wifi), but it's not unplayable or even that noticeable in many games. Would I play some casual Dota like that? Sure. But not high-ranked games.
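Roughly, the prediction/reconciliation loop looks something like this (a toy sketch with a 1D position standing in for the whole game state; every name here is made up):

    # Minimal sketch of client-side prediction with server reconciliation.
    def apply_input(pos, move):           # toy simulation step
        return pos + move

    predicted_pos = 0.0
    pending = []                          # (sequence number, input) not yet acked

    def on_local_input(seq, move):
        global predicted_pos
        pending.append((seq, move))
        predicted_pos = apply_input(predicted_pos, move)   # applied instantly

    def on_server_update(server_pos, last_acked_seq):
        # Snap to the authoritative state, then replay inputs the server hasn't
        # processed yet; if the prediction was right, the 40 ms trip is invisible.
        global predicted_pos
        pending[:] = [(s, m) for s, m in pending if s > last_acked_seq]
        predicted_pos = server_pos
        for _, m in pending:
            predicted_pos = apply_input(predicted_pos, m)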


> Maybe if you’re playing Microsoft Office it’s ok.

You're not going to be able to do the best combos with that kind of latency, but I guess it's ok for mid-level play.


Yeah, I just feel the lag and everything. Even on monitors claiming 1 ms I can feel it while playing FPS games, and it's really annoying to me; if a game isn't fluid, I will not play it.


Cloud gaming is streaming from a server in a data center to one nearby client. Twitch-style live streaming is from a client, to a data center, to a CDN, to multiple clients.


TL;DR: there are a lot of moving pieces, but people are working on it at the moment. I'll try to summarize some of the challenges below.

Bandwidth requirements are a big one. For broadcasts you want your assets to be cacheable in the CDN and on the device, and without custom edge code + client code + a custom media package, that means traditional URLs, each pointing to a short (e.g. 2 s) mp4 segment of the stream.

The container format used is typically mp4, and you cannot write the mp4 metadata without knowing the size of each frame, which you don't know until encoding finishes. Let's call this "segment packaging latency".

To avoid this, it's necessary to use (typically invent) a new protocol other than DASH/HLS + mp4. You also need cache logic on the CDN to handle this new format.

For smooth playback without interruptions, devices want to buffer as much as possible, especially for unreliable connections. Let's call this "playback buffer latency".

Playback buffer latency can be minimized by writing a custom playback client, it's just a lot of work.

Then there is the ABR part, where a manifest containing a list of all available bitrates is fetched. This needs to be updated, and devices need to fetch it and then fetch the next piece of content. Let's call this "manifest RTT latency".

Lastly (?) there is the latency from video encoding itself. For the most efficient encoding / highest quality, B-frames should be used. But those are "lookahead" frames, and a typical 3 frame lookahead already adds ~50 ms at 60fps. Not to mention the milliseconds spent doing the encoding calculations themselves.

Big players are rewriting large parts of the stack to have lower latency, including inventing new protocols other than DASH/HLS for streaming, to avoid the manifest RTT latency hit.
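To put rough numbers on those pieces, here is a toy back-of-the-envelope budget; every figure is an illustrative assumption, not a measurement of any real service:

    # Illustrative glass-to-glass latency budget for a traditional HLS/DASH setup.
    fps = 60
    budget_ms = {
        "encode + B-frame lookahead (3 frames)": 3 / fps * 1000,  # ~50 ms
        "segment packaging (2 s segments)": 2000,   # wait for the segment to finish
        "manifest RTT": 100,
        "CDN + access network": 100,
        "player buffer (3 segments)": 3 * 2000,
    }
    for stage, ms in budget_ms.items():
        print(f"{stage:42s} {ms:7.0f} ms")
    print(f"{'total':42s} {sum(budget_ms.values()):7.0f} ms")  # ~8 s, typical for vanilla HLS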


For HLS you can use MPEG-TS, but mp4 is also an option (with the problem you talk about).

IMO one of the issues is that transcoding to lower resolutions usually happens on the server side. That takes time. If the client transcoded, that latency would (mostly) go away.


A lot of it is buffering to work around crappy connections. Cloud gaming requires low latency so buffering is kept to a minimum.


Because there are so many middlemen in series buffering frames, and also because the access circuit between the user terminal and the nearest CDN node is jittery too. The latency must be a few times the maximum jitter for a par-for-the-course user experience.


Slightly tangential: do you think Moonlight (with, say, Sunshine) is good enough for proper work? I've used a few "second screen" apps like spacedesk on my iPad, but generally when the resolution is good enough for text, it's too laggy for scrolling (and vice versa).

(For more details, I'm planning to stream from an old laptop after turning it into a hackintosh. I'm hoping staying on the home network's going to help with latency.)


Absolutely yes, especially if you use a GPU with a recent video encoder.

I have all my PCs connected to each other via Moonlight+Sunshine and the latency on the local network is unnoticeable. I code on my Linux workstation from my laptop, play games on the gaming PC from my workstation, etcetera and it is all basically perfect.


Thank you! When you say GPU with a recent video encoder, do you mean them separately (i.e. don't use software/cpu streaming; and use an efficient encoder), or do you mean use a GPU that supports recent encoders? I'm afraid my Intel HD 520 isn't particularly new and likely doesn't support modern encoders.


A GPU with a dedicated encoder for x264 or preferably x265/AV1. You can do without it, but you'll spend a core or two on software encoding and the overhead will add a few tens of ms of lag.

With full hardware capture and encode (default on Windows, can require tweaking on Linux) it's virtually free resource-wise.


You're off by one letter: the codecs are H.264/H.265. x264 and x265 are CPU software encoders for those codecs.


Without a doubt! I use Moonlight for ten+ hours a week. I never use it for gaming and it has never failed once.


Thanks, that's great to hear/know! Would you be okay sharing your hardware (CPU/GPU) setup on the server (Sunshine) side? Thanks!


Yes. I host both on my NixOS desktop that has a 13900KF and an RTX4080 (using nvenc and AV1), as well as from my MacBook Pro M3 Pro.


In my head the answer is simple: Moonlight is a one-to-one stream, while broadcasting to many clients at once is a whole different setup.


The GPU would do the transcoding, build the network packets and copy the data over PCIe, all in hardware, avoiding memory copies.

OBS + WebRTC is mostly software doing the heavy lifting.

Imagine if the camera built WebRTC UDP packets directly and zero-copied them to the NIC; that would lower latency quite a bit.


I wouldn't be surprised to learn that Nvidia is doing exactly that on their cloud: compressing the video on the GPU using NVENC, building a packet around it and then passing it to a NIC under the same PCIe switch (Mellanox used to call that PeerDirect) and sending it on its way.

The tech is all there, it just requires some arcane knowledge.


"arcane knowledge" is too strong of a phrase. You need someone who is familiar with Nvidia hardware and is willing to write software that only works on Nvidia hardware.


It is arcane in the sense that information about how all of this works on their specific hardware is not publicly available, but it's probably widespread within the company.


This is premature optimisation. The bus bandwidth and latency needed to get a few Mbps of compressed video to the PC is microscopic. It's completely unnecessary to lock yourself into NVIDIA just to create some UDP packets.


I was talking about Nvidia's Cloud gaming offer (GeForce Now). For them it's certainly not a premature optimization.


Exactly this, with "...NVIDIA GPUDirect for Video, IO devices are fully synchronized with the GPU and the CPU to minimize wasting cycles copying data between device drivers". [1]

1. https://developer.nvidia.com/gpudirectforvideo


Was the laptop in question from Hewlett-Packard (HP)? Because I swear I've seen this exact behaviour on an HP laptop.


Well there are a couple of ways one can do this!

1. Recursively look up DNS yourself, so domains have to be blocked at the registrar level; since DNS is unencrypted, it can also be blocked at the ISP level (see the sketch after this list).

2. Use an alternative protocol to DNS; a good, mature example is GNS. It aims to replace DNS with a built-from-the-ground-up, modernish protocol using a DHT and public-key cryptography.

3. There are blockchain solutions to the whole domain problem; look at Handshake, ENS, etc.
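For option 1, here's a rough sketch of walking the delegation chain yourself with dnspython (no TCP fallback, retries, or DNSSEC; the root server IP is real, everything else is illustrative):

    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT = "198.41.0.4"  # a.root-servers.net

    def iterative_resolve(name, server=ROOT, depth=0):
        # Ask the given server directly and follow referrals downward,
        # instead of trusting the ISP's recursive resolver.
        q = dns.message.make_query(name, dns.rdatatype.A)
        resp = dns.query.udp(q, server, timeout=3)
        for rrset in resp.answer:                      # final answer reached
            if rrset.rdtype == dns.rdatatype.A:
                return [r.address for r in rrset]
        if depth > 15 or not resp.authority:
            return []
        for rrset in resp.additional:                  # use glue records if present
            if rrset.rdtype == dns.rdatatype.A:
                return iterative_resolve(name, rrset[0].address, depth + 1)
        if resp.authority[0].rdtype != dns.rdatatype.NS:
            return []
        ns = resp.authority[0][0].target.to_text()     # otherwise resolve the NS first
        ns_ips = iterative_resolve(ns)
        return iterative_resolve(name, ns_ips[0], depth + 1) if ns_ips else []

    print(iterative_resolve("example.com"))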


I wanted to play around with this! But too bad I have a Samsung Snapdragon :(

And it doesn't expose ADR/carrier phase.


Are there any observable effects of such events that I can see on everyday equipment? Something like increased Bit Flips caused by Cosmic Rays or such?


The solar inverter here on the farm dropped off-grid 18 times between 11:30 and 16:15 here in western Sweden. On closer observation, this seems to have been caused by the voltage running too high (up to 250 V AC where 230 V is normal). This happened while the Swedish electricity networks had shifted down their production to make sure they have spare capacity in case of damage due to the current geomagnetic storm.


I really shouldn't be the one to doubt, considering the sizes we've seen computers go from and to. But is it even possible to fit all that in that size? As I understand it, optics limits a lot about how far away the screen has to be, even with the use of lenses. And it wouldn't be fully passthrough if they used the Google Glass-like approach, as I understand it?


Well it's at least probably not impossible? I think they can avoid using a screen at all by instead using multiple low-power lasers embedded in the goggle frames to directly image on the wearer's retinas. Then the main lenses only have to act as selective shutters to block out parts of the real world. But as of today this concept is mostly science fiction; several major breakthroughs would be needed.


For people who might be wondering why git hasn't moved to SHA-256 yet, here's an LWN article on it: https://lwn.net/Articles/898522/


For a more recent bit of info on it (I remembered this thread from the latter half of last year on the mailing list):

https://lore.kernel.org/git/2f5de416-04ba-c23d-1e0b-83bb6558...

The important snippet from this thread is:

> On 6/29/23 07:59, Junio C Hamano wrote:

> > Adam Majer <adamm@zombino.com> writes:

> > > Is sha256 still considered experimental or can it be assumed to be stable?

> > I do not think we would officially label SHA-256 support as "stable" until we have good interoperability with SHA-1 repositories, but the expectation is that we will make reasonable effort to keep migration path for the current SHA-256 repositories, even if it turns out that its on-disk format need to be updated, to keep the end-user data safe.

> That could be a different definition of stable. But I'm satisfied that current sha256 repositories will not end up incompatible with some future version of git without migration path (talking about on-disk format).

> So maybe my question should be reworded to "is sha256 still considered early stage, for testing purposes only with possible data-loss or can it be relied on for actual long lived repositories?"

> > So while "no-longer-experimental" patch is probably a bit premature, the warning in flashing red letters to caution against any use other than testing may want to be toned down.

> Agreed. I think it should be clear that SHA256 and SHA1 repositories cannot share data at this point. The scary wording should be removed though, as currently it sounds like "data loss incoming and it's your fault" if one chooses sha256

> - Adam

TLDR: SHA256 works well and can be considered "beta". It's viable but you can't really share data between a SHA1 and SHA256 repo. So if your repo doesn't need to merge to/from existing sha1 repos and you aren't dealing with subtrees or anything fancy like that, you should be able to use SHA256. It is not expected to break your repo, it's reasonably well supported, and in the event a migration needs to occur, there will be support for that as well.

So if you want to jump on the SHA256 train you absolutely can provided you are following a pretty normal/boring use case.
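If you do want to try it, creating a SHA-256 repository is a one-liner. Here's a small sketch driving git from Python (the repo name is just an example):

    import subprocess

    # Create a repository that uses the SHA-256 object format
    # (available since Git 2.29; recorded as extensions.objectformat in .git/config).
    subprocess.run(["git", "init", "--object-format=sha256", "demo-repo"], check=True)

    # Confirm which object format the new repository uses.
    fmt = subprocess.run(
        ["git", "-C", "demo-repo", "rev-parse", "--show-object-format"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(fmt)  # -> sha256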

