The history of RAD is a history of media read speeds - playing back full-motion video from a CD on a PS2 with its near-zero available CPU resources was a superhuman effort.
So low-overhead (fast) decompression is essential for AAA titles. But recently I guess developers think disks are fast enough now, because PC games have been using more uncompressed assets to free up CPU time, massively bloating install size. Current-gen consoles also all have NVMe SSDs.
Given this trend, RAD's heyday was definitely in the days before consoles had NVMe disks (Bink in particular let games punch far above their weight) so this might be a nice forever-home.
Blame Intel for not making faster CPUs, and consumers for tolerating massive install sizes, I guess.
The other angle to this story is that the PS5 has a hardware decode unit for RAD Kraken. To get the best use out of the PS5 hardware, it's essential that the encoder is still made available. This is a huge competitive moat. (PCs probably won't get a comparable decompression accelerator card and such a card wouldn't get enough market penetration anyway.)
While the codecs have been central to RAD nearly from the start (Miles was the first product, closely followed by Smacker), the talent pool they have is exceptional across many other categories relevant to Epic - so there is an element of acquihire here, even considering the IP. It's probably a good time to exit, since the alternative would mean coming up with a sufficiently awesome next-gen thing again, and even with experienced hands, that can be a moonshot.
I heard that the main issue with decompressed assets was audio, not video (granted, video is image and audio so one is a strict superset of the other). One game - Titanfall or a CoD game, IIRC - had something like 20 GB of uncompressed audio in a 35GB installation footprint. Rationale was saving CPU cycles.
Meanwhile here I am with a sound card in my desktop PC for no real reason anymore :\
That doesn't really compute. Audio decompression is pretty light on CPUs these days.
It takes about half a second on a single core to decompress a 4-minute MP3 to WAV on my 2012 MacBook Air, including writing to disk. On a gaming machine it will be way less. If anything, the audio could be decompressed and cached on demand when loading levels, or even during installation.
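A rough way to reproduce that kind of measurement, assuming ffmpeg is installed (the filename is just a placeholder):

    import subprocess, time

    # decode one compressed file to WAV and time the whole thing,
    # including the disk write; "track.ogg" is a made-up test file
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-loglevel", "error", "-i", "track.ogg", "track.wav"],
        check=True,
    )
    print(f"decode + write took {time.perf_counter() - start:.2f} s")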
Also, sound cards do not participate in the decompression process. It's been the CPU from the start, barring rare exceptions. Sound cards just receive raw PCM. There used to be hardware codecs, but they're not really common, especially in consumer/gamer-grade sound cards.
A couple of things to note: MP3 is not appropriate for real-time use due to a variable amount of silence added to the start of the sample, intrinsic to the compression. You can sometimes use it for music, but if there are any changes based on game events, MP3 is unusable. A lot of work has been put into optimizing MP3, including at the OS and hardware levels, but that's not usable in games. Commonly it's some custom spin on Vorbis.
Additionally, there can easily be 20-40 sounds playing at once, more if you haven't optimized yet (which typically happens in the last few months before release). These also need to be preloaded slightly and, once playing, stream from disk, so source starvation needs to be handled and the codec needs to not glitch if it misses some packets.
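To make "handle starvation" concrete, here's a toy sketch of the shape of the problem - nothing here is any real engine's API, just an illustration:

    from collections import deque

    BLOCK = 512  # samples handed to the mixer per audio callback

    class StreamingVoice:
        """One playing sound whose compressed data streams from disk."""
        def __init__(self):
            self.ready = deque()      # decoded blocks waiting to be mixed
            self.starved_blocks = 0   # diagnostic: how often we ran dry

        def feed(self, block):        # called from the decode/streaming thread
            self.ready.append(block)

        def pull(self):               # called from the real-time mixer
            if self.ready:
                return self.ready.popleft()
            self.starved_blocks += 1  # decoder fell behind: output silence,
            return [0.0] * BLOCK      # never block the audio callback

    def mix(voices):
        out = [0.0] * BLOCK
        for v in voices:              # easily 20-40 of these at once
            for i, s in enumerate(v.pull()):
                out[i] += s
        return out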
It's also happening in a system that needs to be real-time and keep each frame time on the order of milliseconds, though Moore's law moving to parallelization has helped a lot. You'd be surprised how underpowered the consoles are in this regard (caveat: I haven't developed on the upcoming gen, which is getting better).
As for loading and caching on demand, that's limited by memory; given the sheer number of samples used in games, it's just not practical. For a specific example, in a very well known game there are over 1600 samples for boxes hitting stuff (impact sounds). What I'm building right now is meant to make generative audio easier and reduce the number of samples needed, so more tools to process sound could make this naive caching approach practical.
> For a specific example, in a very well known game there are over 1600 samples for boxes hitting stuff
That almost sounds as if it could be worthwhile to synthesize on demand from a very small set of (offset/overlaid) base samples through a deep chain of parameterized FX. With 1600 FX parameter preset tuples as the MVP, bonus points for involving game state context.
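Something like this minimal numpy sketch - base samples layered with per-preset offset/gain/pitch tuples; every name, base sample, and preset value here is invented purely for illustration:

    import numpy as np

    RATE = 48000  # samples per second

    def render_variant(base_samples, preset):
        """Mix a few base recordings into one impact-sound variant.

        preset is a list of (sample_index, gain, pitch_ratio, offset_ms)
        layers; in practice these could be authored or driven by game
        state (material, impact velocity, object size).
        """
        out = np.zeros(RATE, dtype=np.float32)           # 1 s scratch buffer
        for idx, gain, pitch, offset_ms in preset:
            src = base_samples[idx]
            n = max(1, int(len(src) / pitch))            # crude pitch shift by
            resampled = np.interp(                       # linear resampling
                np.linspace(0, len(src) - 1, n), np.arange(len(src)), src)
            start = int(offset_ms * RATE / 1000)
            end = min(start + len(resampled), len(out))
            out[start:end] += gain * resampled[: end - start]
        peak = float(np.max(np.abs(out))) or 1.0         # normalize to avoid clipping
        return out / peak

    # toy stand-ins for a small set of recorded base samples
    t = np.linspace(0, 1, 2000)
    click = (np.exp(-8 * t) * np.sin(600 * t)).astype(np.float32)
    thud = (np.exp(-6 * np.linspace(0, 1, 6000))
            * np.random.default_rng(0).standard_normal(6000)).astype(np.float32)

    # one preset tuple per variation instead of one recorded sample each
    variant = render_variant([click, thud], [(0, 0.8, 1.2, 0), (1, 0.5, 0.9, 15)])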
That's literally my startup. I won't get deep into the reasons why good tools for this don't exist yet, but if you imagine that game development is like regular development but with orders of magnitude more chaos, you can understand how difficult it is to build stuff for reuse. After 15 years in the industry, my approach is just the same as yours.
> You’d be surprised how under powered the consoles are in this regard
As another commenter mentioned, these games shipped with compressed audio on consoles. Also, that generation of consoles has pretty good hardware codecs for audio (320 channels on the Xbox).
And MP3 was just an example of what I had at my disposal. But as an exercise I converted my 4-minute MP3 to Vorbis. Decoding it and converting to WAV took the same amount of time as before: about half a second on a very old and underpowered MacBook Air. Most of this time is spent writing 50 MB to disk.
Yeah, it is curious that consoles shipped with compressed audio but PC didn't. The prevailing wisdom on PC is that codecs are easier to deal with due to the dedicated audio thread. Decisions like that are not made lightly, so now I'm curious what the reason was.
Edit: reasoning is here: https://www.rockpapershotgun.com/2014/03/12/respawn-actually...
Min spec is a 2-core PC, probably to support a large player base, and as noted before there can be 20-40 audio files all streaming from disk and decoding at once. Sure, one file might decode fast, but 40 of them, all also streaming from disk while keeping the frame rate playable? Just impossible.
Good points. But there's still the possibility to decompress during installation, which shouldn't be too hard even for 2-core computers, and is probably faster than downloading.
Also, according to the article they're packing all the locales. To me this seems like a bigger issue.
If it's any similar to the hardware decoders on iPhones, they're probably limited to n streams. That might be good for playing some background music or ambient sounds, but it gets tricky really quickly when you have multiple sounds playing at once.
Not really: The Xbox has 320 independent decompression channels according to its marketing material, which is kinda powerful. The PS3 had similar tech in its Motion Decoder, but they didn't disclose how powerful it was.
And even if it had just a single decoder, there's always the possibility to pre-decode audio. Or just decode the music/film.
All the console versions shipped with compressed audio due to the anemic hard drive and disk speeds, as well as the tiny amount of RAM.
In general it doesn't make sense to use WAVs. Games have been using compressed audio since the late 90s without any significant trouble, and MP3s are aurally transparent at 320kbps.
The rumour I heard was that they realised 20% of their customers, on much older PCs, wouldn't be able to decode fast enough, so they distributed the uncompressed audio to everyone because no distribution platform allowed "optional" downloads.
I'm sure Sony didn't burn Kraken decompression into hardware without a license saying all their third-party devs can use the compressor for Sony's PlayStation console (and future ones, for backwards compatibility at least) in perpetuity.
>playing back full-motion video from a CD on a PS2 with its near-zero available CPU resources
Eh, a PS2 with a ~300MHz MIPS CPU is more than able to play VCD with MPEG-1 encoded video. You must be young, because the PS2 is on par with a Pentium MMX/Pentium II, and you could certainly play MPEG videos back in the day.
If that's all it was doing while the video played, sure. That's not all they were doing while videos played. Often, videos were shown while assets were unloaded from RAM and others were loaded from disk.
The PS2 had hardware to decompress DVD-quality video with very low CPU overhead. RAD's Bink video compression (at the time of the PS2) was slower and more DVD-bandwidth heavy; the only reason to use it was if you didn't want a separate set of video files for your PS2 game SKU.
On the PC Bink was a great choice for FMV as it had a really clean API and 'just worked'.
That's nothing when the PS2 is able to play MPEG-2 video (DVDs), something only a late Pentium II could do, often with a hardware decoder or a good video card.
And the MPEG-1 spec is compatible with MPEG-2-based decoders on the PS2, so the effort would be almost nil.
As I said, Gen-Zers underestimate late-90s/early-00s hardware.
Also, the PS2 GPU was a data monster, refilling the VRAM like nothing. Ask the PCSX2 developers about that.
Another thing that's changed from, say, 10 years ago: some games have better launchers and ways to configure them.
What I'm hoping to see on this front, since install sizes are bloating, is that installers/configuration become slightly more advanced.
When I install a game I want to be able to choose the following at install-time:
Texture Packs (the main blobs that take up a lot of space - why download 50GB if you only need the 10GB version)
Language Packs
Voice/Audio Packs
Mode Packs (Single Player Campaigns, Multi player, maps)
This way you can take a game that currently costs, say, 80GB for everyone and average it out to 30GB to 50GB for most players. On the low end the same game needs to work at 10GB, and at the high end it can consume the whole 80GB for the person with hardware that can take it. Obviously console players just want to play and get on with it and maybe won't enjoy the choices mentioned above, but PC tweakers/benchmarkers/modders should enjoy it.
A 1TB SSD drive costs around 100 (€/ $). A person who enjoys tweaking their PC likely can easily get terabytes of storage space. I don't really see the utility of fiddling with 10GB here and there...
Okay, imagine you have 300+ games on Steam/Epic/GOG etc... if you were to install 50 of them, that alone would easily eat up 1TB. If you want to install more of them, a 1TB NVMe drive is not going to cut it; maybe not even 10TB. So what do most people do? They install/uninstall frequently. But for those with crappy internet, where 30GB takes 2 days to download, it's better to make a backup before uninstalling/reinstalling. If the backups are smaller, that helps too.
So yes, 10GB here and there isn't much... but collectively, if you have 1,000,000 users downloading 300GB less per month (I know, silly numbers; it's way more in reality), it would make a huge difference to the gaming industry's network load as a whole. Plug in some real numbers (which I don't have): players × average number of games downloaded per year × average game size.
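Even with just the silly numbers above, the back-of-envelope math is striking:

    users = 1_000_000          # hypothetical player count from above
    saved_gb_per_month = 300   # hypothetical per-user savings from above
    total_pb = users * saved_gb_per_month / 1_000_000
    print(f"{total_pb:.0f} PB less transferred per month")   # -> 300 PB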
Other industries like streaming/music sites optimize their transmission sizes, and even small gains are often worth it (10 million people downloading 2MB less per song on Spotify while playing 50 songs per day... the numbers add up quickly). Somebody will pay for the transmission - either the consumer or the business. The business only needs to ask whether it's cheaper to pay a few devs to optimize their code or to pay for a fatter data pipe. I think long term, engineering effort always wins vs throwing money at the problem.
I don't have any stats, so I don't know how large the population is that wants to rotate through their Steam library at high cadence but lacks fast internet. About the lack of fast internet - it feels like streaming and remote work have normalized "fast enough" connections of 100 Mbit and up, but I might be completely wrong here.
I wonder if it would be possible to get 250 GB SSDs for like $25 or less. You could then literally sell your game on an SSD. According to this [0] Nintendo charges $8-20 per cartridge to third parties.
Given the popularity of digital sales it's hard to see the business model where this would make sense.
I have a bog-standard internet connection with a download speed of 1000 Mbit. Most games download faster than it would take me to go to the store or order a package.
Yeah, this is basically exactly what Nintendo is doing. I guess for PC and larger consoles you could use larger memory cards that might be cheaper to make. But anyway, it's still a problematic approach since it eats into the margins unless you increase the price of the game, which most game publishers try to avoid.
I really like Nintendo Switch cartridges, I prefer to buy physical copies when I can, but I don't think there's a future in them. I kind of wish someone would make "smart cards" that just gives you access to the game online. Since there's so many essential updates to games these days, that's almost what cartridges are anyway.
Yes, https://www.aliexpress.com/item/4000266629080.html - guessing that the AliExpress price is 2x the Taobao price which is ?x the quantity price. I do wonder how fast the controller/flash in this is, though (this is limited to SATA3) - that's now many times slower than the Series X / PS5 speed.
That would be terrible from an archival perspective. Flash memory is subject to charge leakage that causes data to be lost over time. High-density flash, which has the lowest cost per byte, also has the poorest data retention lifetime.
Not everybody has gigabit FTTH. On a 25 Mbit connection waiting for humongous downloads really sucked when I wanted to play a bit of GTA 5 (not even online!) after not touching it for a few weeks.
So in Steam, when you go into the properties of a game, there is a DLC tab. When you select/unselect DLCs, it will often trigger a (small) download. With Dota, they deliver support for OpenGL & Vulkan basically as DLCs.
So the mechanism for modular game delivery already exists in Steam; they can just repurpose it and give it a different label, or bake the options into the install dialog so you can choose the parts you want upfront.
And yeah, other launchers are a pain (with proton), I agree.
Yeah, it's like most of HN must be kids, because, FFS, it's a PS2, not a Nintendo from the '80s.
They are so used to badly implemented Electron software with humongous CPU and RAM requirements that basic, performant tools are mind-blowing to them.
Heck, any Pentium II on par with a PS2's CPU, or better, could play DivX videos in MPlayer under Linux/NetBSD with ease.
420p, ok, but video.
Every AAA game still uses RAD tools. Compression is still a thing for various reasons: fitting more textures in GPU memory/RAM/etc., for instance, with decode happening on the GPU.
It seems weird to use specialized video decode hardware when videos are almost always played in a full-screen exclusive fashion. Like, other than loading, what is the CPU doing during those times?
I worked on COD:AW and we used video in game often. The lead UI artist would have used it significantly more if performance was better. Might be an exception but it's not uncommon, I've worked on multiple titles with full 3D and video playback at the same time
Video used as textures, either in-world on 3D meshes, or as part of UI (HUD) elements. It could be prohibitively expensive to play a single video in some scenes, and we didn't support multiple videos. Not because it wasn't possible or anything, mostly because of performance, although there would have been an engineering cost to doing so as well (i.e. I would have had to add support to the engine).
Also remember "expensive" is relative. I think it added a little over 1ms to total frame time per video, and in a 60fps console game that's significant. Our budget was about 1ms or less for the entire UI during gameplay (including scripting, layout and rendering), so a video could more than double that. 4 simultaneous videos would be like 1/4 of the entire frame :).
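For reference, the rough arithmetic behind that, using the numbers above:

    frame_ms = 1000 / 60         # ~16.7 ms of frame budget at 60 fps
    per_video_ms = 1.0           # rough per-video cost quoted above
    print(4 * per_video_ms / frame_ms)   # ~0.24 -> about a quarter of the frame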
Videos can be used for a lot of different things. You can animate textures, obviously, but you can also animate things like normal maps to get really interesting effects. I recently read an article that described how one of the more recent Assassin's Creed games used animated normal maps to make rainfall patterns on the ground.
Likely the performance of playing small video elements as part of the UI; oftentimes it's faster to keep a whole uncompressed texture atlas in memory for animations and such.
Videos have been used frequently for all sorts of other stuff in games since the PlayStation, if not earlier. In StarCraft the portrait of your selected unit was a movie file (RAD Smacker, I think? But I don't have a copy of StarCraft lying around to double-check).
Supergiant Games' titles since Bastion all use RAD's Bink codec to store offline renders of their character models (as movies) instead of traditional sprite sheets or realtime rendered 3d models, so they're playing back dozens of movies at all times.
As another reply mentioned, it's also standard at this point for loading screens to be movies.
Encoding videos across consoles in H.264 is a pain and unreliable. I did it, and so I can see why a big-budget title would use Bink. Each console has its own odd and undocumented requirements for the encoding specification. Think 1079p at 29.99fps on one console while vanilla 1080p 30fps on another. Get it wrong and your video is blank with only a "failed to decode" in the logs.
Absolutely. Massaging video codec settings to exactly match the console hardware requirements and feeding video data in/out of each console's proprietary video API is a royal PITA.
My feeling is that big budget games are prepared to jump through the hoops in order to wring the last drop of video quality (or add hours more 'content') but when timelines are short and engineers are thinly spread Bink is a great way to just get it done and move along.
> PCs probably won't get a comparable decompression accelerator card and such a card wouldn't get enough market penetration anyway
I doubt this particular moat means much since PC has practically unlimited disk space. This recent install size hype is basically irrelevant if you're on PC. Just get a nice $100 SSD and continue. It matters only to PS/Xbox because they do not provide easy/cheap expansion.
I have stopped playing games because the updates were too big and I got sick of waiting for them to download. I'm sure the games companies are tracking this stuff, so I'm likely in the minority.
No I was thinking of modern filesystem compression solutions. They offer relatively poor compression ratios, but computationally efficient decompression.