
No, the throttling is much more aggressive than real time. I suspect it's Google's way of sneakily breaking ancient YouTube clients that won't run arbitrary JS as a countermeasure for downloaders, without really breaking them all the way, so users of e.g. ancient smart TVs just think their network is (unusably) slow instead of seeing an outright error, so they don't complain en masse that their TV no longer works properly.

Typical video bitrates for bog standard 1080p30 will be in the 1MB/s range, so the throttling is around 20x slower than real time.


Right: I forgot about the higher end encodings. 50kB/s is a speed similar to that of the "base" 360p ("format 18") - ~100MB every half hour.


Sorry, one detail (cannot edit the former reply): 1080p30 will be in the 1Mbit/s range (not «1MB/s»). That's maybe ~2.5x slower (not 20x).
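
To make the disagreement concrete: the slowdown factor is just the assumed video bitrate divided by the throttled speed, so it hinges entirely on which 1080p30 bitrate you assume. A quick sketch (Python, using the ~50kB/s throttle figure from this thread):

    # Rough throttling arithmetic; the ~50 kB/s throttle figure is taken
    # from this thread, the bitrates are the two assumptions being debated.
    THROTTLE_BYTES_PER_S = 50 * 1000

    def slowdown(bitrate_bits_per_s):
        """How many times slower than real time the throttled download is."""
        video_bytes_per_s = bitrate_bits_per_s / 8
        return video_bytes_per_s / THROTTLE_BYTES_PER_S

    print(slowdown(8_000_000))  # 1 MB/s  (8 Mbit/s) assumption -> 20.0x
    print(slowdown(1_000_000))  # 1 Mbit/s assumption           -> 2.5x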


This is complete nonsense. The overwhelming majority of video download time is spent transferring the actual video data, limited by throttling or network speed. The time it takes for these tools to start up and deal with the API/JS stuff is inconsequential for any video longer than a minute or so. And even then most of the time is network latency for the API stuff, not local processing. Netcat isn't going to go any faster.
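
To put a rough number on "inconsequential", here's a back-of-the-envelope sketch (Python; the ~2s of startup/API overhead and ~1MB/s effective network speed are illustrative assumptions, not measurements):

    # Share of total time spent on per-video overhead vs. raw transfer.
    # Both constants below are assumptions for illustration only.
    OVERHEAD_S = 2.0               # startup + API/JS handling per video
    SPEED_BYTES_PER_S = 1_000_000  # effective download speed (~1 MB/s)

    def overhead_fraction(video_size_mb):
        transfer_s = video_size_mb * 1_000_000 / SPEED_BYTES_PER_S
        return OVERHEAD_S / (OVERHEAD_S + transfer_s)

    for size_mb in (10, 100, 1000):
        print(f"{size_mb:>4} MB video: overhead is "
              f"{overhead_fraction(size_mb):.1%} of total time")
    # 10 MB -> ~16.7%, 100 MB -> ~2.0%, 1000 MB -> ~0.2%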

Absolutely nobody thinks optimizing the meta/API processing of yt-dlp & co is worth it. This is exactly why we have high level programming languages that make all of this much easier, instead of trying to write HTML and JS parsing in plain C. Keep in mind these tools support dozens or hundreds of websites, not just YouTube.

If you think rewriting yt-dlp in C is worth it, go right ahead, but you're not going to make it significantly faster; you're just going to make maintenance a much bigger pain for yourself. Pick the right tool for the job. Python is absolutely (one of) the right tools for this job.

(For the record: I use C and Python on a daily basis, and will use whatever language is appropriate for each situation.)


"Python is absolutely (one of) the right tools for this job."

This opinion assumes it is a single job. I see multiple jobs. The number of "options" provided by yt-dl(p) gives us a clue.

There is nothing wrong with preferring to use larger, more complicated, "multi-purpose" utilities. There will always be plenty to choose from.

However, the idea of using smaller, less complicated, single-purpose utilities is not "nonsense". It makes sense in many cases and some users may prefer it.

The statements I make about speed are from day-to-day experience, not conjecture.


Of course, that requires tenants trust Intel's security.

As a security researcher and given past showings from Intel, I wouldn't put much faith in SGX, even if they try to fix past flaws. SGX as a concept for tenant-provider isolation requires strong local attacker security, which is something off the shelf x86 has never had (not up to contemporary standards, ever) and certainly not in anything Intel has put out. They've demonstrated they have neither the culture nor the security chops to actually engineer a system that could be trusted, IMO. Plus there are all the microarchitectural leak vectors with a shared-CPU approach like that, and we know Intel have utterly failed there (not just Spectre; there was absolutely no excuse for L1TF and some of the others, and those really showed us just how security-oblivious Intel's design teams are).

Right now, the x86 world would probably do well to listen to Microsoft, since their Xbox division managed to coax AMD into actually putting out secure silicon (they're one of the two big companies doing proper silicon security at the consumer level, the other being Apple, with Google trying to catch up as a distant third). But given the muted response to Pluton from the industry, and the poor way in which this is all being marketed and explained, I'm not sure I have much hope right now...


> Of course, that requires tenants trust Intel's security.

I generally agree with you. But I recently realized there might be one use case, and it's pretty much what Signal is doing. They're processing address books in SGX so that they can't see them. I don't have much faith in the system because I don't trust SGX, of course.

But there is one interesting aspect to this. If anyone comes knocking and tells them to start logging all address books and hand them over, they can say that it's not possible for them to do so.

Anyone wanting to do that covertly would at least need to bring their own SGX exploits, meaning it probably offers SOME level of protection. Certainly not if the NSA wants the data or some LEA is chasing something high-profile enough that they're willing to buy exploits and get a court order allowing them to use them. But it does allow them to respond with "we don't have this kind of data".


Secure enclave as legal defense is an interesting angle, thanks for sharing.

It's become a moral cause to make a lot of big-data computing deniable, to be data-oblivious. This is a responsible way to build an application and well-built security, and I like it a lot.


I think “responsible” is a bit too strong a word, when most of these computations could just run on the client.

I agree this work is important and enclaves are better than nothing though.


I want to spew curse words, because, from what I have been able to comprehend, all the web crypto systems contravene what you & I seem to agree is a moral, logical goal. All are designed to give the host site access to security keys, & to ensure the user-agent/browser has the fewest rights possible. We have secure cryptography, but only as long as it's out of the user-agent/client's control.

We've literally built a new web crypto platform where we favor 100% the vile fucking cloud fuckers for all computation rather than the client, which seems as fucked up horseshit backwards trash city dystopia as could be possible. Everything is backwards & terrible.

That said, we 100% cannot trust most user-agent sessions, which are infected with vast vast spyware systems. The web is so toxic about data sharing that we have to assume the client is the most toxic agent, & make just the host/server responsible. This is just epically fucking fucked up wrong, & pushes us completely backwards from what a respectable security paradigm should be.


Hi, chiming in to double down on this, as the downvotes keep slowly creeping downward even still.

In most places, end-to-end security is the goal. But we've literally built the web crypto model to ensure the end user reaps no end-to-end benefit from web cryptography.

The alternative would be to trust the user-agent, to allow end-to-end security. But we don't allow this. We primarily use crypto to uniquely identify users, as an alternative to passwords.

This is a busted jank ass sorry sad limited piece of shit way for the web to allow cryptography in the platform. This is rank.

The Nitrokey security key people saw this huge gap, & created a prototype/draft set of technologies to enable end-to-end web encryption & secure storage with their security keys. https://github.com/Nitrokey/nitrokey-webcrypt


This paper convinced me it will be at least a decade before SGX or similar have any semblance of security:

https://www.usenix.org/system/files/conference/usenixsecurit...

The basic idea is that you can play with the clockspeed and voltage of one ARM core using code running on the other. They used this to make an AES block glitch at the right time. The cool part is that, even though the key is baked into the processor, and there are no data lines to read the key (other than the AES logic), this lets them infer the key.

Hmm. The paper is 5 years old. I still think we are a decade away.


That's one reason why most TrustZone implementations are broken: usually the OS has control over all this clocking stuff. It's also one way the Tegra X1 (Switch SoC)'s last remaining secrets were recently extracted.

It's also how I pulled the keys out of the Wii U main CPU (reset glitch performed from the ARM core). Heh, that was almost a decade ago now.

That's why Apple uses a dedicated SEP instead of trying to play games with trust boundaries in the main CPU. That way, they can engineer it with healthy operating margins and include environmental monitors so that if you try to mess with the power rails or clock, it locks itself out. I believe Microsoft is doing similar stuff with Xbox silicon.

Of course, all that breaks down once you're trying to secure the main CPU a la SGX. At that point the best you can do is move all this power stuff into the trust domain of the CPU manufacturer. Apple have largely done this with the M1s too; I've yet to find a way to put the main cores out of their operating envelope, though I don't think it's quite up to security standards there yet (but Apple aren't really selling something like SGX either).


You trust the security of your CPU vendor in all cases. SGX doesn't change that. If Intel wanted to, they could release a microcode update that detects a particular code sequence running and then patches it on the fly to create a back door. You'd never even know.

"SGX as a concept for tenant-provider isolation requires strong local attacker security, which is something off the shelf x86 has never had"

Off the shelf CPUs have never had anything like SGX, period. All other attempts like games consoles rely heavily on establishing a single vendor ecosystem in which all code is signed and the hardware cannot be modified at all. Even then it often took many rounds of break/fix to keep it secure and the vendors often failed (e.g. PS3).

So you're incorrect that Intel is worse than other vendors here. When considering the problem SGX is designed to solve:

- AMD's equivalents have repeatedly suffered class breaks that required replacing the physical CPU almost immediately, due to simple memory management bugs in firmware. SGX has never had anything even close to this.

- ARM never even tried.

SGX was designed to be re-sealable, as all security systems must be, and that has more or less worked. It's been repeatedly patched in the field, despite coming out before micro-architectural side channel/Spectre attacks were even known about at all. That makes it the best effort yet, by far. I haven't worked with it for a year or so, but by the time I stopped, the state-of-the-art attacks from the research community were filled with severe caveats (often not really admitted to in the papers, sigh), were often unreliable, and were getting patched with microcode updates quite quickly. The other vendors weren't even in the race at all.

"there was absolutely no excuse for L1TF and some of the others, and those really showed us just how security-oblivious Intel's design teams are"

No excuse? And yet all CPU vendors were subject to speculation attacks of various kinds. I lost track of how many specex papers I read that said "demonstrating this attack on AMD is left for future work" i.e. they couldn't be bothered trying to attack second-tier vendors and often ARM wasn't even mentioned.

I've seen some security researchers who unfortunately seemed to believe that absence of evidence = evidence of absence and argued what you're arguing above: that Intel was uniquely bad at this stuff. When studied carefully these claims don't hold water.

Frankly I think the self-proclaimed security community is shooting us all in the foot here. What Intel is learning from this stuff is that the security world:

a. Lacks imagination. The tech is general purpose but instead of coming up with interesting use cases (of which there are many), too many people just say "but it could be used for DRM so it must die".

b. Demands perfection from day one, including against attack classes that don't exist yet. This is unreasonable and no real world security technology meets this standard, but if even trying generates a constant stream of aggressive PR hits by researchers who are often over-egging what their attacks can do, then why even bother? Especially if your competitors aren't trying, this industry attitude can create a perverse incentive to not even attempt to improve security.

"the x86 world would probably do well to listen to Microsoft, since their Xbox division managed to coax AMD into actually putting out secure silicon"

SGX is hard because it's trying to preserve the open nature of the platform. Given how badly AMD fared with SEV, it's clear that they are not actually better at this. Securing a games console i.e. a totally closed world is a different problem with different strategies.


"SGX is hard because it's trying to preserve the open nature of the platform"

Except that was an afterthought. Originally only whitelisted developers were allowed to use SGX at all, back when DRM was the only use-case they had in mind.


It clearly wasn't an afterthought, I don't think anyone familiar with the design could possibly say that. It's intended to allow any arbitrary OS to use it, and in fact support on Linux has always been better than on Windows, largely because Intel could and did implement support for themselves. It pays a heavy price for this compared with the simpler and more obvious (and older) chain-of-trust approach that games consoles and phones use.

The whitelisting was annoying but gone now. The justification was (iirc) a mix of commercial imperatives and fear that people would use it to make un-reversable ransomware/malware. SGX was never really a great fit for copy protection because content vendors weren't willing to sell their content only to people with the latest Intel CPUs.


Indeed, it remains to be seen whether or not SGX2 will be trustworthy; the proof is in the pudding. However, other vendors have their own solutions to the same problem, and at least AMD's approach is radically different, so one hopes that at least one of them will stand up to scrutiny.


This is how Jamulus works; it delays your own monitoring feed by your latency, so it is in sync with everyone else. If your ping to the server is 30ms then you'll hear yourself 30ms late (plus processing/audio buffer latency).
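
A toy sketch of the idea (not Jamulus code, just an illustration with an assumed 30ms ping and 48kHz sample rate): keep your own signal in a delay line sized to the measured latency, and monitor the delayed copy so it lines up with the mix coming back from the server.

    # Toy delay-line monitoring sketch; PING_MS and SAMPLE_RATE are assumed.
    from collections import deque

    SAMPLE_RATE = 48_000   # Hz
    PING_MS = 30           # measured latency to the server

    delay_samples = SAMPLE_RATE * PING_MS // 1000
    delay_line = deque([0.0] * delay_samples, maxlen=delay_samples)

    def monitor(local_sample):
        """Return your own signal delayed by the network latency."""
        delayed = delay_line[0]          # oldest sample, PING_MS ago
        delay_line.append(local_sample)  # newest sample pushes it out
        return delayed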


The AGX GPU MMU (variously called "GART" and "UAT", no relation to AGP GART) uses the ARM64 page table format (the GPU coprocessor literally uses it as an ARM page table among other things), so I'd expect it to support huge pages just fine since ARM64 does too. I don't know if macOS supports them, though.

The page size is indeed 16K. I don't know if 4K is supported at all in the GPU.

I agree though, the article reads like a rambly piece that doesn't follow and really just boils down to "my code runs slow on this machine and I'm going to blame the architecture" without going into proper details.


Gold plated connectors are absolutely normal and completely standard and essential. The vast majority of durable data/signal connectors used today are gold-plated. Hack open any random decent microUSB connector if you don't believe me. You need a solid connection for signal integrity. Corrosion messes that up. Of course, with modern cost optimization, usually only the contact surface is gold plated these days, not the entire pin, and the plating is quite thin and cheap. Connectors intended for higher connect/disconnect cycles will be more expensive in part because they need a thicker gold plating to avoid wearing out.

Gold plated PCBs are also completely standard (google ENIG). In this case the plating is even thinner and just serves to ensure solderability and avoid corrosion. It's so thin that it immediately dissolves in the solder during the solder reflow process; it's just there to allow that process to happen properly, and to avoid corrosion in non-soldered areas. Almost every modern high tech PCB uses a finish like this.

Gold containing solder, on the other hand, is nonsense.


> Gold containing solder, on the other hand, is nonsense.

Well, actually™, it turns out gold-tin alloys (which is what this probably is) get used a lot for solder, particularly in optoelectronic and microelectronic device packaging. You may already know this, but I didn't until tonight, when I thought, "Hey, is this really a thing?" and looked it up. :)

(I can't see anything that suggests it helps with "clearer sound," of course. I also searched for "audio-grade solder" out of curiosity and it seems there are some companies that market solder that way, but they seem to generally be (a) silver alloy, not gold, and (b) regarded rather skeptically.)


I actually didn't know about that one. Sounds like it's mostly for specialty applications though (e.g. ceramic packages). Looks like it's something like 80% gold, so that would get expensive really fast if you attempted to use it like regular solder :)


None of them are ever willing to do the experiment (double blind).


Low quality audio hardware absolutely exists and is rampant at the cheap end. And some standards, like USB2.0 (when used for data), are terrible and will cause audible noise if lots of care isn't taken to filter things and avoid ground loops.

But if you're spending $2500 on a headphone amp instead of $250 on a professional quality audio interface, or $1500 on jewel encrusted RCA cables instead of $30 on balanced XLR cables, or you think 192kHz sounds better than 48kHz because you have superhuman hearing and can somehow hear ultrasound, you're wasting your money and you have no idea what you're doing when it comes to audio quality.
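
The 192kHz point is just Nyquist arithmetic: a sample rate of f captures content up to f/2, and human hearing tops out around 20kHz even for young listeners, so 48kHz already has headroom to spare. A quick sketch:

    # Nyquist arithmetic: sample rate f captures frequencies up to f/2.
    HUMAN_HEARING_LIMIT_HZ = 20_000  # roughly, for a young listener

    for rate in (44_100, 48_000, 96_000, 192_000):
        nyquist = rate / 2
        headroom = nyquist - HUMAN_HEARING_LIMIT_HZ
        print(f"{rate} Hz -> captures up to {nyquist:.0f} Hz "
              f"({headroom:+.0f} Hz beyond the audible range)")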


Apple GPUs have little to no IMG technology left. You can tell there is some remaining influence in the design, but Apple's is much better and cleaner, and is what IMG's GPUs should have been.


>Apple GPUs have little to no IMG technology left

They still use PVRTC and are still licensing and paying IMG royalties. Surely if there were little to no IMG tech left, Apple could have simply stopped paying them and not renewed the contract. They are not paying substantially less in royalties than they were previously, either.


What the lawyers do and how the technology actually works are not necessarily related.

The shaders are completely different, the controller is completely different, the command submission structures are different. Sure, it's still a tiled architecture and they probably have to keep PVRTC for compatibility and pay the royalty for that, and you can see some remaining PVR influence in the design. But it's not PVR, and quite possibly has zero actual PVR technology left (as in silicon IP from IMG). They're probably only paying patent royalties.

We know this because we're reverse engineering AGX and IMG just dumped an upstream submission to Mesa and now anyone can compare them. PVR's architecture is, for starters, a lot more insane than Apple's.


And plenty of evidence that the journalists had no idea what they were doing and were grasping at straws to construct a story out of nothing. See e.g. https://twitter.com/marcan42/status/1049512159481217025 https://twitter.com/marcan42/status/1049687546945392640

