I hate to say it, but this SSD story has been plastered all over the news by the PR machine, mainly because the new consoles have no other innovations whatsoever...
No, it is not going to usher in a new era of gaming, just make games have fewer 'loading' screens, and perhaps larger open worlds... but it is nowhere near the innovation and changes we experienced in the 90s and early 2000s...
Let me put it in plainer language: history has demonstrated that gamers don't give a f@ck about how fast the SSD is...
The PS1 won over the N64 even though it had huge loading times compared to the N64's near-instantaneous loads...
I mean, personally, I think it is great, as PS4 loading times are bad, but fast loading times alone are not going to make me buy a game.
Anyway, just a sad PR piece trying to get people excited for something that looks like a small (but good) evolutionary improvement over the current PS4 Pro/Xbox One X.
Now if we had custom fast/ray-casting technology, then it would have been exciting. But it looks like, if enabled, a game will have to drop resolution and cap the frame rate at a barely playable 30fps.
I think you missed the real point: textures stored on the SSD are now byte-addressable by the texturing unit on the GPU. When the texturing unit issues a read for texture data, the read can go directly to the SSD controller, which issues NVMe commands to fetch the data. The texture decompression block sits between the GPU and the SSD controller and decompresses the texture at line speed. This way, the GPU can read the texture at the exact detail level given by the number of pixels the texture covers; it effectively gives a texture infinite mip levels. The GPU can also place the texture data needed for the current frame in GPU memory, so GPU memory becomes more like a cache for textures, which reduces GPU memory usage.

Imagine you want to render a camera flying through a forest. Before, you needed the texture for the trees, plus multiple versions of it for far-away trees and nearby trees. As the camera flies over, you transition from the low-detail texture to the most detailed one; that is why you see textures pop in in today's games. Going forward, you just need the most detailed texture and have the GPU automatically sample it according to the pixels being rendered. There will be no stepping in between; it will be continuous. This solves texture pop-in at a fundamental level.
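As a toy sketch of that continuous sampling (my own illustration, not Sony's API; "footprint" here means texels per pixel along one axis), the sampler just picks a fractional detail level instead of snapping between pre-baked texture versions:

```python
import math

def mip_level(texels_per_pixel, max_level):
    """Continuous LOD: log2 of the per-axis texel footprint of one pixel.
    1 texel/pixel samples the full-res texture (level 0); a distant tree
    covering 16x16 texels per pixel samples level 4; fractional levels
    blend between neighbours, so there is no visible stepping."""
    lod = math.log2(max(texels_per_pixel, 1.0))
    return min(lod, max_level)

print(mip_level(1.0, 10))   # nearby surface: full detail, level 0
print(mip_level(16.0, 10))  # far away: coarse level 4
print(mip_level(2.5, 10))   # fractional level -> smooth transition
```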
Another benefit is reduced CPU involvement. Textures are fundamentally stored on disk, and the GPU needs that data for rendering. Before, the CPU would use the OS's filesystem storage stack to read file blocks, decode the texture in CPU memory, copy the texture to GPU memory over PCIe, and ask the GPU to render with that data. This method lets the GPU read directly from the SSD, bypassing all of the steps in between. As SSDs get faster and faster, the OS's filesystem stack is becoming the bottleneck.
> GPU directly read from a SSD, bypassing all of the steps in between
Ehh... unless you have more specific knowledge that’s still under NDA, the reasonable assumption is that the CPU still mmap()es the file into the GPU’s address space, and then the CPU pages in data from the SSD as the GPU generates page misses. Which is technically possible on PCs today, but isn’t done because you can’t assume fast SSDs and you definitely can’t assume shared memory. (actually I’m pretty sure discrete GPUs can generate page misses for mapped CPU memory, but I’m not certain which graphics APIs really let that happen)
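For what it's worth, the mechanism being described is ordinary demand paging; here's a minimal CPU-side illustration in Python (the GPU case would add an IOMMU, but the idea is the same: no explicit read() calls, the OS faults pages in from storage on first touch):

```python
import mmap, os, tempfile

# Create a file to stand in for a texture asset on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"texture-data" * 1024)
    path = f.name

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Slicing the mapping faults the backing page in transparently;
        # we never call read() ourselves.
        first = m[:12]
        print(first)

os.unlink(path)
```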
Better be at least 2 MB pages then. 5.5M (max. 22 GB/s compression output on PS5) page faults per second for 4 kB pages might otherwise ruin this scheme a bit...
2 MB page size would reduce this to "just" 11k page faults per second.
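Checking those figures (binary page sizes land a touch under the round decimal numbers quoted, but the order of magnitude holds):

```python
stream = 22e9  # bytes/s: the max decompressed output cited for PS5

faults_4k = stream / 4096           # 4 KiB pages
faults_2m = stream / (2 * 1024**2)  # 2 MiB pages

print(f"{faults_4k:,.0f} faults/s at 4 KiB")  # ~5.4 million
print(f"{faults_2m:,.0f} faults/s at 2 MiB")  # ~10.5 thousand
```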
I know nothing about this, but wouldn't most textures be quite small? With a large page size you might end up reading in much more than you need, often making it worse than useless.
That's 113 pages, does this really cover the issue of reading too much? If so, I'll try and find the time.
I mean, I'm not complaining about TLB pressure, which large pages obviously help with; thrashing the TLB is likely to be a lot cheaper than reading too much from the disk.
> the reasonable assumption is that the CPU still mmap()es the file into the GPU’s address space, and then the CPU pages in data from the SSD as the GPU generates page misses.
You seem to know quite a bit about this - just wondering - how does this work? The GPU can generate PCIe bus traffic to do RAM reads or writes, but how does it cause a page fault in the CPU? Is this some kind of IOMMU? Is there any place I can read more about this?
I don't think the GPU can generate page faults in the traditional sense (which core would #PF be delivered to?). The CPU has to pre-fetch data before the GPU tries to use it. On the PS5, the GPU may be able to issue read requests directly to the IO coprocessor and have it load data off the SSD, run it through the decompressor, place the data into the requested DRAM addresses, and invalidate the necessary cache lines in any GPU or CPU cores. But I'm not sure that the PS5 can actually do that with zero CPU involvement.
You do realize the GPU can just read it from the physical address space and not via the CPU's MMU.
It's not an innovation. There are still a lot of level-3 bus transactions. If the SSD had two buses, with one going directly to the GPU (read-only), there would be less pressure on the level-3 bus, and that would speed things up.
You can tell someone hasn't watched the Ratchet & Clank demo from the other day[0]. The SSDs in these consoles go far beyond short load times. They can radically change how games are made and designed.
Ratchet & Clank is the only demo with new gameplay based on the SSD.
But it's overused, giving each game world less weight. The technological imperative, preoccupied with whether they could, etc.
In contrast, artificial constraints add depth to gameplay, e.g. limited movement speed/duration ("sprint").
All that said, keeping processors fed with data is a central problem of CS. The innovations here are not just the SSD itself, but elimination of bottlenecks in the architecture (e.g. direct placement in GPU RAM).
Typically, storage is 1000x slower than RAM. On PS5, it's 50x. That has got to be a revolution in algorithmic space-time tradeoffs... which has got to be reflected in gameplay, somehow, somewhen.
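Sanity-checking that 50x figure against Sony's published numbers (448 GB/s GDDR6 bandwidth, 5.5 GB/s raw SSD reads, up to ~9 GB/s after decompression; this compares sequential bandwidth only, latency is a different story):

```python
ram_bw = 448e9          # bytes/s, PS5 GDDR6 memory bandwidth
ssd_raw = 5.5e9         # bytes/s, raw SSD reads
ssd_decompressed = 9e9  # bytes/s, typical decompressed output

print(f"{ram_bw / ssd_raw:.0f}x")           # ~81x for raw reads
print(f"{ram_bw / ssd_decompressed:.0f}x")  # ~50x after decompression
```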
Because execs will only be seeing demos after they’re loaded, and there’s no way some 55 year old is going to spend more than 5 minutes playing some Star Wars game and therefore trigger no world loading, games that do something special with the SSD will not be financed by anyone other than Sony. The technology is basically DOA for third parties. The stuff that third parties are saying is basically moot.
Conversely when Nintendo makes new hardware, it’s the same deal - they’re the only ones financing games that use the balance board or gesture controls or labo or whatever. They just put up a lot more money and have internal studios with more autonomy. SSDs are a Sony problem not a game design/engineering problem.
That portal transition effect likely hides the load screen - it's still there, nearly half a second, it's just shorter. This is comparable to current load times with an SSD on PC!
Or a PC game can just slap a minimum RAM requirement on and be done with it.
I'm impressed with the tech, but it seems like the end goal was to keep console manufacturing costs down. Now it's being sold as a gameplay-enabling feature and a reason to upgrade. The upgrade only looks impressive because the PS4 by now is so old.
Relying on cheap SSD storage instead of expensive RAM, and relieving CPU effort via the storage streaming chip is a cool trick. But that tech alone enables absolutely zero gameplay experiences.
It hasn't been proven to us whether a typical gaming PC's larger memory simply overcomes the need for this tech. If I have a PC with 32GB of RAM and a GPU with 8GB of its own, I'm not convinced that a PS5 with 16GB of shared RAM will do anything that setup can't.
Desktop computers eclipsed the performance of current-gen consoles so long ago that I remain skeptical: my prediction is that a decent mid-range gaming computer will be completely capable of playing any PS5 game.
Most tests I've seen of real-time game asset loading between the various types of SSDs on PC are incredibly inconsistent - for most games it is hardly noticeable. I'd be excited if I was proven wrong but this really feels like the typical console hype ramp up to black friday that South Park portrays so well...
This is not great logic, as those games are not designed to take advantage of faster SSD storage the way new games will be after the launch of the new consoles. Most games are built with an HDD in mind, and thus the SSD is not the bottleneck.
All AAA games need to be cross platform to maximize revenue with a long tail, so they target the lowest common denominator for hardware requirements.
No one is going to design gameplay for a special hardware constraint unless the gameplay can degrade to lowest common denominator. Which of course, makes needing the special hardware optional.
There are few exceptions to this rule. Some platforms pay for exclusivity, effectively covering the lost revenue from other platforms. And Nintendo alone makes a profit on hardware, so they can produce platform exclusives to drive additional revenue from hardware sales.
Special SSD pipelines, while PC gamers are still using 7200rpm HDDs, are about as appetizing to game devs as waggle controls or Kinect sensor games.
The new consoles include these SSDs not to make something possible now, but to remain relevant in ten years time when PCs may have caught up.
This is the game industry's equivalent of supporting IE 11.
This is simply not true. That's like saying no game on PC is possible because not everyone has a good enough graphics card. Just have a minimum spec for required storage speed and you're golden.
So imagine this tech in a simulated war mmo, where each server of 64 players represents a part of a battlefield and you instantly can transition to a new server representing the next part of the battlefield when you walk there. All of WW2 would be a collection of servers that represent parts of Europe.
Takes a little imagination, but when you look at the ingenuity of something like F-Zero on the SNES (2.5D graphics), it's pretty clear that commodity tech really can be used in amazing ways.
Assuming the maps and assets are static, the only thing needing to be downloaded is the current state of the game and entities, which is a handful of megabytes. I agree that an SSD is completely irrelevant here, but so is the network. The hard problem here is how do you sync state between the different servers in real time so that transitioning between servers is seamless (and how do you handle reconciliation after an eventual net split).
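A back-of-envelope for the "handful of megabytes" claim (every number here is a made-up but plausible assumption):

```python
# Hypothetical per-entity snapshot: position + velocity (6 floats),
# orientation (4 floats), plus ~24 bytes of health/ammo/ids.
bytes_per_entity = (6 + 4) * 4 + 24  # 64 bytes

players = 64
per_player_entities = 50   # projectiles, vehicles, dropped items
world_entities = 20_000    # dynamic props across the region

total = (players * per_player_entities + world_entities) * bytes_per_entity
print(f"{total / 1e6:.1f} MB of live state")
```

So even a generous dynamic-entity count fits in a couple of megabytes, well under what either an SSD or a decent network link would struggle with.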
From what I know, World of Warcraft does something like that on a conceptual level. Obviously syncing game state in a non-action game like WoW is probably a little easier than in a full-blown action FPS.
Planetside 2 is a great and current example of multiple massive 100+ player battles occurring simultaneously in the same server/map.
Obviously its graphical detail is nowhere near what was shown here, but the precedent exists, and the only thing that would be different is the geometry and texture resolution.
Wow, I'm not going to believe that's realtime until it's in my hands. I started the video thinking "what's so special about this", and when they started going interdimensional I realized this is either very cool or just hype.
That seems to be progressing on rails, so how does the SSD help here? Maybe you can postpone preloading the assets until later before the transition, and thus free up memory slightly later for the next scene's asset decode/load buffers. Though nothing here looks like two or more scenes' worth of assets couldn't be in memory at the same time.
That is impressive if each of those worlds is loaded as you go, but I feel like both Microsoft and Sony were holding off on this next generation until graphics tech got better.
We still are nowhere near photo realistic, cinema type game play (and yes I realize that such animation has to be scaled down a bit because it can disturb people if it's too real, but we're not even at that point yet).
Real-time ray tracing was going to be Nvidia's whole push into a new generation of 3D graphics, but that turned out to be not all that impressive (and with huge performance costs).
It's been forever since I played the first Prey, but I don't remember anything like this aside from perhaps the single usable portal in the game? It was pretty cool at the time, and then Valve explored the concept properly in Portal 1 & 2.
Having said that, the original Prey looks pretty rough these days. And I don't see how a single portal is "the same thing" at all.
In fact, I think the only thing that I can think of to compare to is the ending of Portal 2 where you shoot a portal to the moon, but even that was just a single portal to a fairly low detail environment.
Are you referring to Prey's usage of that portal? Or something else.
FYI, the new Prey game released a few years ago is amazing. It has basically nothing to do with the original because they apparently just wanted to use their rights to the name on a flagship title. But it's amazing if you're into System Shock/BioShock type games.
It just means faster asset loading, so even if SSDs become as fast as RAM one day, is there anything interesting here at all? Same goes for ray tracing: things look more realistic with control of light...
Nothing special here, no need for the hype
edit:
You know what really makes a difference?
You're playing a VR open-world game where every AI NPC behaves in every possible way, leading to different gameplay and outcomes. And you, as a character, can just grab anything you want and throw it at any monster that you CREATED in that game yourself.
Or you can control that monster you created in your own way, time-travelling to another open-world game to say hello to your friend playing in his house, back and forth.
Together with some AI NPC friends you made in that game, you live in that dimensional space for as long as you want, even after your human body dies; your consciousness stays in electronic form.
"SSD as RAM" is a bad way to think about it. What you need to realize is that the standard for games has been to treat RAM as storage, because the hard drive was too slow to use for loading data on the fly. SSDs mean games can use storage as storage, but they still have to fit the working set in RAM.
And that's really what the hype is about in terms of better game experiences on these new systems — we should be able to have larger working sets because you don't need to waste RAM as storage.
To get specific about what this enables, I think we will see many more indie games with great looking graphics. The combination of high res asset scans, automatic resolution scaling, automatic texture compression, generally less tight performance budgets that don't need teams to do optimization work (next gen consoles), and a financial model around tools to take advantage of all of this (Unreal + Quixel as the leader here) should make this next generation of games pretty awesome.
It's not just load times of maps. Think load times of assets in real time. Traditionally you'd store everything in RAM because disk storage was slow. This leads to obvious caps on how much you can have loaded at a time. But with fast enough storage you can rely on pulling assets from storage during gameplay, without having to preload. This opens up all sorts of flexibility that wasn't possible before.
Watch the UE5 demo. Realize what they're doing. They're loading the high-detail geometry into the GPU and letting the GPU do the mesh reduction. For that to work, huge amounts of geometry have to be loaded into memory the GPU can see. The PS5 is built for that. The GPU has access to the same memory (16GB of unified GDDR6) the CPU does, and, presumably, so does the SSD controller. So you can put vertex buffers in files and load them directly into the GPU if you want.
On the PS5, a Blu-ray drive is optional. Are you going to have 100 GB downloads over the network? That's going to take a while: even with a gigabit connection, a large game takes 15 minutes or so to download, and few people have that much bandwidth. Red Dead Redemption 2 needs 150GB of disk space now, so games are already that big.
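The download arithmetic behind that "15 minutes or so" estimate (link speeds are illustrative; real-world throughput adds protocol overhead on top):

```python
game_bytes = 100e9  # the 100 GB download mentioned above

for mbps in (50, 100, 1000):
    seconds = game_bytes * 8 / (mbps * 1e6)
    print(f"{mbps:>4} Mbps: {seconds / 60:.0f} minutes")
```

At gigabit that's roughly 13 minutes in theory; at a more common 100 Mbps it's over two hours.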
Games will have to have some starting level that keeps the user busy while the downloader pulls in content. But after that, performance should be impressive.
The whole storage architecture (not the SSDs alone) is a generational leap in gaming because it's a generational leap in game production economics. Asset cost has been skyrocketing for modern games, and the slate of storage technologies in these consoles directly addresses that. These improvements make assets reusable. SSDs + built in compression / storage co-processing + dynamic resolution makes it possible for small teams to plonk high res assets from libraries into games without spending crazy amounts of time doing asset performance optimization.
It's also going to be a driver of 1440 and 4k gaming, because there will be more games that look good enough to make high res gaming worth it.
Yeah. What's frustrating is I feel like a lot of ports have long load times just coded into them. Sometimes it barely matters if you run them on an NVMe or even a RAMDisk.
I'm sure the cost of speeding up loading isn't worth it after you've written it with one set of expectations in mind.
I just hope an unintended side effect here is that load times for big studio PC games speed up too.
This is exactly it. With most games being designed around HDD speeds, storage is rarely the bottleneck. It's why testing game loading times with different SSDs is so inconsistent among different games.
This will change as the new lowest common denominator eventually becomes the new consoles with their faster NVMe drives.
I'm reminded of the PC version of Mass Effect 2, where the loading time was significantly shorter than the video played during loading, but the video always played until the end.
i think you have become way too cynical. of course we don’t have the huge leaps we used to. the marginal utility of twice as many polygons or pixels is much less now. the effort that goes into these advances we do get is much more than it was nonetheless, and borders on magic.
amazing time to be alive for computing hardware power and great to see the consoles letting us fully leverage it
> I hate to say it but this SSD story has been plastered all over the news by the PR machine, mainly because the new consoles have no other inovations whatsoever...
These new SSDs are more of an upgrade than we've had in any console over multiple generations. The Xbox One, Xbox 360, and PS4 were all pretty standard "mini-PC" consoles. The PS3 had a different architecture that didn't really pay off until much later in the console's life cycle and still didn't have a huge impact, hence the PS4 dropping Cell.
Umm - streaming data into working memory at radically improved speed has the potential to provide techniques way more interesting/impressive than *tracing techniques. Who knows what geometry representation and handling this will enable.
Yes, gamers don't give a toss about the tech details [0], and they shouldn't. Sony gave a presentation about all of the internals of the PS5 and all they got in response was "ZZzzZZZZZZ".
They only want the console for the games, man, nothing else.
It was originally intended for the Game Developers Conference which was cancelled (postponed?). Sony didn't do the best job setting expectations ahead of time, but either way people were hungry for info. At the time, Sony had been fairly quiet.
The N64 looked way better, loaded faster, had a better controller, and to me had a better game library as a whole. The PS1 made comparatively horrible architectural choices, with no built-in z-buffer, texture filtering, or floating-point processing. Apparently the N64 sold fewer units per year on the market, though. What did everyone see in the PS1?
Storage space for CD-quality audio, storage space for full-motion video, on-board full-motion video decoding, cheaper media costs, media that was easier to deal with on the development side, and most importantly a game library that catered more towards 'adultish' fans.
I remember reading, back when these consoles were current, that the biggest draw for most consumers was the fact that full-motion video was a first-class citizen on the PS1, and it was a huge technical wow for people. FMV was difficult on the N64 due to size and codec constraints, so it was rarely done -- and when it was, it was usually subpar compared to the PS1 version of the same game. The Resident Evil franchise was notorious for this phenomenon.
Games. Nintendo lost Square to Sony, and Dragon Quest from Enix kinda petered out and eventually shipped on PS1. They effectively had a huge catalog of RPGs that the N64 was sorely lacking. There was also a comparative lack of 'mature'-themed games, and the catalog was smaller, by about 5x. Even Zelda was not a launch title. Probably because the cartridges were substantially more expensive than CDs.
There was also a supply issue early on in the N64's lifecycle; I seem to recall it being just shy of Tickle-Me-Elmo hard to find.
The PS1 was also easier to pirate, double-edged sword though that was.
Sony marketed the device to an older generation, e.g. they had PlayStation demo stations in night clubs. Until Sony entered the gaming market, consoles were largely considered a children's toy.
Personally, watching a friend roam around the world map of Final Fantasy VII, I had never seen a game like that in my life. I couldn't stop thinking about it. Even GoldenEye couldn't talk me out of getting a PS1 after that.
Maybe N64 had a better library of games to you, but for others PS1 had Silent Hill, MGS, FFVII, Gran Turismo, Tomb Raider, Tekken, all games that define their genre even 20 years later.
The PS1 also had much more memory available for texture data; there was a set of trade-offs for both consoles. Personally I loved both, but the PS1 turned out some really impressive games that weren't possible on the N64 (at least not without looking like blurry messes), even discounting the benefits a CD-ROM had for storing large amounts of data.
I seem to recall PS1 games being cheaper than N64 games, which allowed kids to purchase more. Allowances in the 90s averaged between $5 and $20 a week. PS1 games were more like $40, compared to N64 games which were usually $60.
> Now if we had custom fast/ray-casting technology
No offence but I just cannot understand this viewpoint. A game is about fun, about gameplay. Pretty graphics do little for that (IMO) once the novelty's worn off, and can easily start to get in the way. A game's tech is mostly decoration (I said mostly, sometimes it can help). The real breakthrough in games would be in their depth of interaction, and that would take some kind of comprehensive model of humans and human experience, and more. We're not getting that soon.
I used to think like you. But the way a game looks definitely has an impact on the way the gameplay feels. Smoke particles out of a gun, the slight tilt of your viewport, the flash that reflects off the walls. These things have an incredible impact on gameplay even if mechanically they make no difference.
Vision is one of our most important senses, and discounting it by saying it shouldn't matter because gameplay is the most important thing is shortsighted. Graphics are a huge part of gameplay.
Some of it can get in your way. I usually turn off bloom and blur: I get a higher FPS, and my peripheral vision is already running a native blur function, so why waste resources on another process doing the same thing? A lot of technological innovation ends up being spent on style points for the sake of making a sexy trailer and getting pre-orders. When Battlefield 3 came out with all its engine improvements, many users ended up hating the blue tint, the shiny flashes on everything, and the blur and bloom that got in the way of actual gameplay.
Sometimes people running lower specs might even play better than someone running higher specs, if that means the higher spec has more grass textures that better hide character models or something like that. Those sort of improvements are improvements for the sake of vanity, and are generally panned by a game's core players.
There is a really critical component of having an SSD, and knowing you'll have an SSD (as a game developer) which is easily missed (the article may cover this, but just to reiterate).
Let's say you only have 8GB of memory. Most importantly: you know you'll have 8GB (not more, not less). You build your game around having 8GB; textures are a specific size and quality, levels are designed around this, etc.
With a hard drive (or disc), games don't just store what is "on-screen" in memory; they have to store everything that could reasonably be expected to appear on-screen within the next, let's say, 10 seconds. This is because disks are slow: if the player violently whips the camera around, they can't wait 30 frames to page in the textures for the wall behind them. The data has to be resident, even if the player isn't using it, and may never use it.
But, with SSDs, this changes entirely: new textures can be paged in within just a couple frames. Now, that paging can happen more on-demand.
So, wait: this means that 8GB of memory, which previously held N% textures used for what's being rendered and M% textures that weren't being used, can now hold even more textures for what's actually on-screen. Those textures can be bigger! Higher detail. Bigger open worlds. The like.
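To put rough numbers on "within just a couple frames" (the bandwidth figures are my assumptions: ~100 MB/s for a typical HDD, 5.5 GB/s raw for the PS5 SSD):

```python
def frames_to_load(megabytes, drive_mb_per_s, fps=60):
    """How many frames at the given fps it takes to stream a burst of assets."""
    return (megabytes / drive_mb_per_s) * fps

burst = 256  # MB of textures suddenly needed when the camera whips around

print(frames_to_load(burst, 100))   # HDD: ~154 frames, a 2.5 second stall
print(frames_to_load(burst, 5500))  # PS5-class SSD: under 3 frames
```

Which is exactly the difference between "keep it resident just in case" and "page it in when the player turns around."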
Second point: Series X and PS5 do have ray tracing (not just ray casting, which you seem to suggest they don't have?) The Ratchet & Clank video we saw during the PS5 press conference seemed to showcase this, as it had multiple scenes showcasing reflections that didn't seem possible with just screen-space reflections. These consoles can absolutely, 100%, do super-1080p rendering at 60fps with ray tracing. Maybe not 4k60, we'll see, but 1440p60 w/ ray tracing is clearly attainable, at launch.
Third point, and this is critical, possibly the most critical: These consoles are all built with the same underlying technology that powered the previous generation; they just turned the dial to 12. This has, in the history of Playstation at least, never happened before. PS2 was very different from PS1. PS3 was a clusterfuck of technology that took developers years to get their heads around. But PS4 to PS5? Same platform tech. Developers know these platforms; the ramp up will be startlingly quick, and they'll be able to re-use and improve many of the very powerful game engines already there on PS4, like Decima, to make use of the new tech instead of starting closer to scratch like they did with PS4, PS3, and PS2.
I really don't think you could be more wrong than your assertion that this generation isn't going to be big. I'll go one further: the media is underhyping the impact these technological advancements are going to have, and this generation will be the biggest single-generation leap in technical performance since the Nintendo 64. Only one AAA cutting-edge title that I'm aware of was capable of consistently hitting 4K60 on a previous-generation console (Gears 5 on X1X), and really, most games looked more like Control or Fallen Order (barely hitting 1080p30, no impressive technical achievements on consoles). We're now entering an era where practically every game will be 4K60, with substantially increased graphical fidelity, with ray tracing, with 3D audio, and it wasn't enabled by some crazy technology like IBM Cell; it's the same shit that everyone already knows.
Yes, a lot of people commenting here should really go watch the talk Mark Cerny gave [0].
It's clear there's no "big technological breakthrough". SSDs are great but have been around for a while already, we simply had to wait for the technology to mature enough.
What we are seeing is that we can't just keep slapping better CPUs and GPUs into the machines, because the improvements nowadays are not that impressive anymore. So, at least for the PS5, they have switched focus to the architecture: removing bottlenecks, removing major pain points, and using additional hardware to handle tasks like compression, freeing resources for other purposes.
Some people might say these improvements happen because they were lazy before and never focused properly on dev needs and bottlenecks and the specific needs of game development. I partially agree, but the SSDs really don't just improve disk speed, they allow a completely different set of architectural optimizations.
Right; this isn't a Nintendo 64 situation where the jump is so technologically huge that it enables fundamentally new experiences on its own. It's a situation where all the pieces were already out there, and now they're integrated in a way that is developer-focused and standardized, such that products can be built to rely on those pieces.
Putting an SSD in a PS4 is not interesting. You can go do it today. Hardcore gamers and streamers will do it, because it does reduce load times by maybe 20% or so. But, until the entire pipeline, from SSD to hardware decompress to system memory to platform libraries to everything, is aware of that SSD and built around it, we wouldn't get the 10x benefits. This generation is where that's happening.
When I think about the PS3-to-PS4 generation jump, I didn't get that excited. Sure, games looked better, but they were still 30fps by and large, they were still using rendering tricks all over to speed things up instead of truly awesome new tech like ray tracing, and we were still waiting minutes for many games to load. The PS4-to-PS5 jump (and the similar jump on Xbox, don't want to leave out those players) will be far more transformative, in ways that actually matter to players and developers alike. It sucks that some people are still so jaded as to think "I have an SSD in my computer, it's nice but it's not huge"; they're missing the bigger picture, and that bigger picture is what people like Mark Cerny and Phil Spencer spend their entire lives thinking about.
"But, with SSDs, this changes entirely: new textures can be paged in within just a couple frames. Now, that paging can happen more on-demand."
Erm, texture loading from RAM to the GPU memory already is critiqued in games, since players can see the slowdown when it happens.
Either this is going to be using the system RAM as a cache for the SSD data, (and then it will have the same problem, plus the developers will have to make sure that scenes can stay completely in the system RAM they have available, so this will be the limitation, not the size of the SSD), or they are getting the GPU to pull data directly from the SSD, and SSDs are very slow compared to RAM.
> Erm, texture loading from RAM to the GPU memory already is critiqued in games, since players can see the slowdown when it happens.
Well then, it's good that the PS5 doesn't have separate system memory; like the PS4 before it, it uses a unified pool of GDDR memory for everything.
> or they are getting the GPU to pull data directly from the SSD, and SSDs are very slow compared to RAM.
Yes, this is what they're doing.
SSDs are slow compared to memory. But, that's not the important metric. The important metric is, are they fast enough to meet a specific rendering target, given each game's specific asset quality. But, actually, even that's not accurate: Games will change their asset quality to match the speed of the SSD such that they're capable of doing this, not the other way around. That's the special sauce of consoles; developers optimize specifically for the hardware available, because everything has the same hardware.
On PC, game developers say "alright, we'll bin 8 different asset qualities the player can select from based on their PC, hope they pick right, ship it." On consoles, developers say "hmm, if the player turns around in this segment, the framerate drops from 30fps to 20fps; let's dig in there and pre-load some of those assets." You cannot, fundamentally, at any level, compare PC gaming to console gaming, because the optimizations go all the way back to the overworked engineers at the development companies making sure the experience is great for the hardware console gamers have available. On PC, developers say "gamers can theoretically see the whole world, but we'll layer in some adjustable fog in case their hardware can't handle it." On consoles, developers say "we know the system can't handle it, so let's block some sightlines with a mountain range and a few buildings."
The PS5 SSD can load data at upward of 9GB/s. This means the SSD is capable of loading 150MB of data within one frame at 60fps. If you develop games, you'd know how much data that is (it's a ton). Hell, you can drop ten whole frames (60fps down to 50fps for a second), most gamers short of Digital Foundry wouldn't even notice, and you can literally replace 10% of the entire GPU memory. Even at 60fps, the SSD is fast enough to become a pretty dang suitable hot cache for GPU memory.
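As a sanity check on those numbers (assuming the roughly 9 GB/s effective compressed rate; the raw sequential rate is lower), the arithmetic works out like this:

```python
# Back-of-envelope math for SSD bandwidth per rendered frame.
# The ~9 GB/s effective (compressed) rate is the figure quoted above,
# taken as an assumption here; raw sequential rate is lower.
GB = 1024 ** 3
MB = 1024 ** 2

ssd_rate = 9 * GB            # bytes per second (assumed effective rate)
fps = 60

per_frame = ssd_rate / fps   # budget for a single ~16.7 ms frame
print(f"{per_frame / MB:.0f} MB per frame")                 # ~154 MB

# Dropping from 60fps to 50fps frees ten frames' worth of time per
# second (10/60 s), enough extra streaming budget for:
extra = ssd_rate * (10 / 60)
print(f"{extra / GB:.2f} GB across those dropped frames")   # 1.50 GB
```

That ~1.5 GB is roughly 10% of a 16 GB unified memory pool, which is where the "replace 10% of the entire GPU memory" figure comes from.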
In your last paragraph you are conflating throughput with latency. What good is "loading 150MB of data within one frame at 60fps" when your game freezes while doing it? You can't simply count on an MMU page-table fault being handled and the missing texture being loaded "just in time" from a compressed SSD. You can't even do that _today_ on a modern PC with uncompressed textures in main RAM; the latency is still too high.
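To make the latency-vs-throughput distinction concrete, here is a toy model; the per-request latency and the 64 KB tile size are illustrative assumptions, not measured PS5 numbers:

```python
# Why throughput != latency: a hypothetical demand-paging scenario.
# All figures below are illustrative assumptions.
KB = 1024
MB = 1024 ** 2

bandwidth = 9 * 1024 * MB   # 9 GB/s peak sequential throughput
read_latency = 100e-6       # ~100 microseconds per serial request (assumed)
page = 64 * KB              # size of each on-demand texture tile (assumed)

# Streaming 150 MB as one big, deeply queued sequential batch:
batch_time = 150 * MB / bandwidth
print(f"sequential batch: {batch_time * 1000:.1f} ms")    # ~16 ms, one frame

# The same 150 MB fetched as serial, dependent 64 KB page faults,
# paying the request latency every single time:
pages = 150 * MB // page
fault_time = pages * (read_latency + page / bandwidth)
print(f"serial page faults: {fault_time * 1000:.0f} ms")  # ~256 ms
```

Same hardware, same total bytes, roughly 16x slower when access is latency-bound; this is the gap a dedicated I/O pipeline with deep queues is meant to hide.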
Plus, this isn't exactly a new idea. Carmack's 13-year-old MegaTexture was one of the first widely known implementations. AMD shipped hardware support for Partially Resident Textures almost 10 years ago https://www.anandtech.com/show/5261/amd-radeon-hd-7970-revie.... We can go a further 20 years back to the GameCube's Flipper GPU virtual texturing (a specialized texture-unit MMU) http://www.nintendoworldreport.com/guide/1786/gamecube-faq-n...
At the end of the day the PS5 will still precache streamed geometry/textures like the PS4 does now (COD https://www.youtube.com/watch?v=mvdTtl27TpM), like the PS3 did (GTA4), like the PS2 did (GTA3), and like the PS1 did (Crash Bandicoot). Sadly there is no magic breakthrough coming with the next-gen consoles, and this SSD thing was the best they could muster to build the hype :(
Predictively caching assets will absolutely still happen, but the critical point is that it has to happen less; where developers had to "predict" the next 20 seconds of viewports a player might see on a hard drive, an SSD may drop that to around 1 second, give or take depending on what's expected to happen next in the scene. Not only is this strictly fewer assets, but it's far, far easier to predict, which improves the accuracy of the predictions and thus lowers the cached asset volume further.
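To put rough numbers on that shrinking prediction window (the asset consumption rate here is purely an illustrative assumption):

```python
# Toy model: size of the speculative asset cache for a moving player.
MB = 1024 ** 2

asset_rate = 50 * MB   # new unique assets exposed per second (assumed)

hdd_window = 20        # seconds of viewports to predict on a hard drive
ssd_window = 1         # seconds of viewports to predict on an SSD

print(f"HDD speculative cache: {hdd_window * asset_rate / MB:.0f} MB")  # 1000 MB
print(f"SSD speculative cache: {ssd_window * asset_rate / MB:.0f} MB")  # 50 MB
```

A 20x smaller window is also a 20x smaller space of "things the player might do next", which is why the predictions themselves get more accurate.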
Which brings me back to that 150MB-in-1-frame number; that's not saying nothing is being predictively cached; it's saying that some N frames are being predictively cached, but if the developers take an intentional or unintentional cache miss, recovery from that miss is far, far faster.
Imagine you're walking down a grimy hallway; the player opens a door and walks out into a sun-drenched paradise. 98% of players will just walk down the hallway and open the door, so the developers have a problem: they should definitely be predictively loading the assets for the paradise outside, but should they also keep the assets for the hallway behind the player? Most players won't turn around. If they throw out some of those assets behind the player, that makes more room for the paradise that's coming. They could have more particles, more bloom, higher-res textures, all good things. Today, the answer to that question is usually Yes, keep them; if those hallway assets get unloaded, the frame drops would be too huge if a player decides to turn around. Or, more commonly, it's Yes (But): maybe we could build a bend in the hallway, or put a second door behind the player, so if they turn around the immediate assets we'd need to load would be fewer, and we'd have more time to load what's further behind them if we need to.
With an SSD, that answer may be No; if they can load at 8 GB/s, they know the maximum turn speed of the player, the math is more likely to work out that they can actually just throw away those assets, or never build that bend in the hallway. If they end up needing them, maybe the framerate will drop from 60fps to 45fps for a couple seconds; that's still faster than nearly every game from current gen.
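That "the math is more likely to work out" claim can be sketched directly; every number here (asset size, turn speed) is a hypothetical chosen for illustration:

```python
# Sketch: can we evict the assets behind the player?  If reloading
# them fits inside the time a 180-degree turn takes, the answer may
# be "yes".  All figures are hypothetical.
MB = 1024 ** 2
GB = 1024 ** 3

ssd_rate = 8 * GB              # bytes/second (the 8 GB/s figure above)
hallway_assets = 400 * MB      # evicted assets behind the player (assumed)

turn_speed = 360               # max camera turn rate, degrees/second (assumed)
turn_time = 180 / turn_speed   # 0.5 s for the player to face backward

reload_time = hallway_assets / ssd_rate
print(f"reload takes {reload_time * 1000:.0f} ms, "
      f"turn takes {turn_time * 1000:.0f} ms")   # 49 ms vs 500 ms
print("safe to evict" if reload_time < turn_time else "keep cached")
```

On a 100 MB/s hard drive the same reload would take about 4 seconds, which is exactly why the answer used to be "keep cached" (or "build a bend in the hallway").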
Technical limitations like IO bandwidth and latency have guided the development of games for generations, all the way down to the very artistic design we take for granted as being the "vision" of some creative leader on the development team. You're playing Halo; you drop down a ledge; you can't climb back up the ledge to go backward. It feels real, we don't question it, but really that ledge is only there because an engineer needed a checkpoint to make sure the player couldn't go backward, so they could unload some assets. Simple as that. It's not driven by artistry (though it could be); it's driven by technology and its limitations. (Learning about these "checkpoint ledges" is an "I can't un-see that" moment for many people; checkpoints are everywhere in boundary-pushing games. Even the hyper-recent FF7R uses them all over, often as "duck underneath this pipe" or "squeeze through these boxes"; they usually let you go back, but it takes several seconds in a confined area, which gives the game more time to load the next area. Same concept.)
With an SSD, maybe a creative designer says "let's put a ledge there, we want to keep the player moving forward". But it'll be an artist making that call, not an engineer.
This is also why this generation is significant even for PC gamers! What we're talking about isn't merely technical; it's not faster load times, it's not higher framerates, it's fundamental changes to game design itself, changes that were forced on games because they had to run on consoles. It didn't matter that your $3,000 PC had a 970 Pro NVMe SSD; Assassin's Creed Odyssey was designed to run on a PS4, and your SSD couldn't change that. But now that everyone has an SSD, even PC games will benefit, because games will be designed under the assumption that it's alright to load multiple gigabytes of assets per second from cold storage. This "SSD-by-default" mindset will get built into the game engines that power games across every platform. And sure, we'll see the biggest benefits on the PS5 and its exclusives, as its SSD and hardware decompression pipeline are far beyond anything PC gamers or the Series X have, but everyone will benefit nonetheless.
But there is nothing new here. It's not like developers avoided streaming assets before and will somehow start now. We have 7 years of full hardware support in the current-gen consoles and plenty of games using it: PS4 - Partially Resident Textures, X1 - Tiled Resources https://blogs.windows.com/windowsexperience/2014/01/30/direc...
Then why do even modern games still use these map and character movement tricks? According to you, this problem should be solved!
Because it's not solved. Games have only gotten bigger; assets are bigger than ever. Having a hard drive in every console actually helped a ton, because hard-drive-to-memory bandwidth is larger than optical-disc-to-memory bandwidth, but it didn't help enough.
There's no way around the raw bandwidth of the hardware: if a game needs a gigabyte of assets for the next major scene transition, it will probably need tricks to make sure that transition takes at least, say, 10 seconds (50-100MB/s is a typical hard drive read rate). Checkpoints; stall sections that slow the player down; limiting the top speed of the character or (especially) vehicles; slow-opening doors (think Dark Souls/Bloodborne); vestibules between two doors, with a length tuned to the expected load time... Developers also use technical tricks on the hard drive itself, copying commonly used assets to multiple spots on the platter to reduce seek times. I've even heard a developer say "we'd be fine in this transition if the game gets installed to the outer edge of the platter, but on the inner edge the load will take too long"; that's how close modern games run against hardware limitations.
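The arithmetic driving all of those tricks is simple; the figures below are just the typical hard drive rates mentioned above:

```python
# The hard-drive math behind checkpoints, vestibules, and slow doors.
MB = 1024 ** 2
GB = 1024 ** 3

scene_assets = 1 * GB    # assets needed for the next major scene

# Outer platter tracks move faster under the head, so they read
# faster than inner tracks; both rates here are typical, assumed values.
for zone, rate in [("outer platter", 100 * MB),
                   ("inner platter", 50 * MB)]:
    print(f"{zone}: {scene_assets / rate:.0f} s")   # 10 s / 20 s
```

So the vestibule, slow door, or checkpoint has to buy the game somewhere between 10 and 20 seconds, depending on where the installer happened to put the files.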
The software in play hasn't changed. Duh. We're not talking about the software, we're talking about the SSD.
I see a huge change in what Epic Games showed with UE5 and its geometry streaming.
For me, this next gen basically solves load times, SSD load prioritization, ray tracing/lighting, detailed geometry with lighting, etc.
It feels very obvious, don't get me wrong, but it still wasn't available until now. It is new.
I personally like the independent I/O chip very much. I have seen enough latency issues on current operating systems caused by buses saturated by super-fast NVMe drives.
I was used to SSD speed; then I added an NVMe drive, and holy shit, it is so much faster.
No, it is not going to usher in a new era of gaming; it will just give games fewer 'loading' screens, and perhaps larger open worlds... but it is nowhere near the innovation and change we experienced in the 90s and early 2000s...
Let me put it in more popular language: history has demonstrated that gamers don't give a f@ck about how fast the SSD is... The PS1 won over the N64 even though it had huge loading times compared to the N64's near-instantaneous loads...
I mean, personally, I think it is great, as PS4 loading times are bad, but fast loading alone is not going to make me buy a game.
Anyway, it's just a sad PR piece trying to get people excited about what looks like a small (but good) evolutionary improvement over the current PS4 Pro/Xbox One X.
Now if we had custom, fast ray-tracing hardware, that would have been exciting. But it looks like, if enabled, a game will have to drop resolution and cap frame rates at a barely playable 30fps.
But hey, your SSD will be fast.....