Interesting write-up, but I feel like it would be helpful to put an image of the final render up front so the reader has a better point of reference for what the intended effect of all the steps is.
That image was taken before post-processing, e.g. screen-space ambient occlusion and bloom, both of which can visually glue things together. Especially the SSAO would likely add some smooth darkness around the person's feet.
Yes, the blog post explains that this is how the game works. Some lights do cast shadows, but they are expensive, so artists may only place a few per scene. There seem to be none here that could make him cast any. Also note how his face is all red even though that red light source is way off behind him in the periphery.
Seems strange that there is nothing in its place. Even a round shadow blob beneath him, like PS2-era games used, would make the character look much more grounded in the picture.
Meanwhile Sony just announced they will be refunding purchases for this game and taking it off their virtual store due to it being extremely buggy and seemingly unfinished on consoles. This is an unprecedented move for a game of this stature and is a monumental failure from CDPR.
I suspect the Sony decision came on the tail of CDPR asking upset fans to request a refund from Sony. For the last few days people have been posting screenshots of Sony denying refunds, making Sony look bad. I feel like this is Sony saying to CDPR, "If it's broken enough that you're sending people to us for refunds, it's too broken to sell."
You're right this is about avoiding bad press on Sony (or the need for Sony to revise their return policy), but it's also about Sony as a publisher enforcing their relationship with studios.
Sony's return policy for digital purchases isn't consumer-friendly, and it's not something I want to defend; many call it a "no refunds" policy, since if you start the game even once you can't get a refund. But as the publisher it's their right, and within their interest, to enforce it. It's not up to studios to unilaterally change it.
On Dec 13th, CDPR issued the tweet/statement, "For copies purchased digitally, please use the refund system of PSN or Xbox respectively."[0]
The next day at an investor call, investors asked them if they had a special refund agreement with Sony and Microsoft. They said they didn't. [1]
Then the next public statement from either party was Sony saying the game was being removed. The next statement from CDPR was to investors but was leaked, essentially saying they met with Sony and Sony told them it was being removed.[2] Oh, to be a fly on the wall for that meeting.
Microsoft has also said full refunds for anyone who requests, matching Sony.[3]
This game should not have passed certification, and the way it did makes clear to the public what has long bugged me: how much you can get away with, and how many (or how serious) issues you get a cert waiver for, is directly proportional to how high-profile your game is. This reinforces the problem of big titles being released in a bad state.
Which is weird, because my impression was that it was only truly broken on last-gen consoles (i.e. PS4, not 5), where it didn't really belong in the first place. It's buggy on all platforms, but it's the normal open-world-game level of buggy (remember when Skyrim launched?).
I've been watching my girlfriend play it on PC and it's been generally fine
There is controversy around this idea. The game is certainly buggy and many similar open world games have also been buggy on release. However, a lot of the problems in Cyberpunk 2077 don't appear to be bugs. They appear to be active design decisions that were made as a shortcut on promised features. Here is a Twitter thread with some examples of design decisions that might appear as bugs to an untrained eye.[1]
It seems like CDPR knew they had an unfinished game and couldn't deliver it close to its release date. Management decided to do whatever had to be done to release a game in time for Christmas. This "open world games are always buggy on release" narrative plus some shady behavior surrounding how they handled reviews[2] allowed them to get largely great reviews and a huge influx of cash. Hopefully that provides enough capital to justify a long-term commitment to finishing this game, because it doesn't appear to be just a few bug fixes or performance improvements away from being the game that was promised.
I'm loving the game so far (on a PC with a 3080), but it is certainly wide instead of deep. It feels really, really shallow. There is a lot of content but very little feels meaningful. Outside of the missions with the girlfriend there is very little relationship building. Even that comes up short compared to most other RPGs. I have also had a few insane bugs, but nothing game-breaking. For example, for a while my character was standing naked on his bike, and in another instance a lady was standing like the Rio Christ statue for a whole mission.
I don’t really mind the examples in the Twitter thread, but I do mind that you really cannot interact with the world in any meaningful way.
Sounds like the eternal issue of open world games though. Olympic size wading pools is how I like to describe them. Many things to do, but with minimal depth and little, if any, impact on the game world.
Skyrim itself is one of the best examples of this.
That's why I actually like that Cyberpunk 2077 seems to have a fairly focused mission structure (I haven't played very far, mind you). It always bothered me in e.g. Bethesda's games that I was supposed to be on some all-important world-saving quest, but it's no problem if I want to just spelunk in ancient dwarven ruins for a few months instead.
Yes, playing it as anything but a rails game seems to be wildly out of character once the story has progressed beyond the initial handshake. The open world backdrop makes the railed story far more immersive than an actual rails tunnel that you cannot leave could ever be, but only if you mostly ignore it. And the main story track is pulling much stronger than in something like Skyrim.
With this game it's probably a generational thing on more than one level: not only do most people tend to grow out of side-quest completionism at some point, I suspect that older parts of the audience are also drawn into this particular main story more strongly than their younger peers. It has such a perfect 1990s vibe that you almost forget that people did not in fact have mind-machine interfaces and fancy prosthetic limbs when they sent their first email on AOL. The game represents exactly how people born around 1977 remember the movies of their youth (entirely unlike how you'd perceive them if you saw them today).
Sure, but cyberpunk is based on a pen and paper RPG so I really hoped it had a little more.
But yeah your description is pretty good.
For me it was great at the beginning; the mood and the graphics are pretty good. Some missions are great as well. But now I'm not really sure if I will even finish it. It doesn't really feel meaningful.
Also, even though there are some explicit sex scenes (but so few I'm not sure why they are there), it is really not edgy at all. It's like a corporation designed the game, which is everything cyberpunk is not supposed to be.
There are a ton of great ideas in there, but then someone realized it would take too long so they half assed everything.
If you role-play your character there is zero room for toying around with side missions once the story kicks in. CP77 simply isn't a do-whatever-you-like game; it's a story game with a backdrop of a somewhat interactive surrounding. Compared to earlier cityscape story backdrops, e.g. in the Deus Ex series or the endless Assassin's Creed iterations, the backdrop is far more interactive and even a little gamified. But it shouldn't be considered the game; it's a backdrop.
> There are a ton of great ideas in there, but then someone realized it would take too long so they half assed everything.
Totally agree. Take character backgrounds, for example. They are meaningless. You get a custom five-minute introduction for your background and then you are playing the same game as everybody else, with the addition of just a few extra dialogue options.
They could have just focused on one background and made the side quests more relevant to it.
Not sure of that. The most critically acclaimed games e.g. RDR2, Horizon Zero Dawn etc. have very deep and intriguing stories while maintaining the world as “open” as possible. Even the Witcher is another example. Cyberpunk sounds like a case of massive overhyping and under delivering. Some fanboys here and there are unwilling to face the reality, but most reviews I’ve read seem to be sober and corroborate the above point.
I disagree. Maybe don't skip the dialogues :) I have quite a contrary impression - the game is very deep and their world is not so wide as it could be.
It's buggy, and some parts seem to be missing. Like, can cops/enemies even track you at all?
But at the same time it gives you more than a well-built game would: yesterday I tried to make easy money by running over MaxTac cops and taking their level 50 stuff.
I can't kill them with guns, but hitting them with a car seems to do the job.
In a normal game, high-level enemies would kill you instantly, but in Cyberpunk you can just take their stuff, run a couple of blocks until everybody forgets that you just killed two cops, then go back and do it again.
That is a great example of an actual bug that affected ALL players, yet many didn't notice it at all.
The police seemingly don't have any AI; people actually believed they didn't.
But a player happened to see a police car with cops inside (extremely rare), saved his game nearby, and ran several tests.
He found out that if you piss off the cops while using a car (on foot they just exit their car and never climb back in), and then hit their car with yours (shooting them doesn't work, they just drive away following a pre-programmed path), they start chasing you GTA-style.
So this means they DID code the cop cars to chase the player's car, but a sequence of bugs means the only way to see this content is to go out of your way to trigger it (you need to spontaneously come across cops in a car while you are in a car too, then run over a pedestrian, then hit the cop car with your car).
Really, NPCs are stupid in Skyrim, GTA, Cyberpunk, whatever. And it doesn't matter. Look at Watch Dogs: it gets very repetitive anyway; you watch a few NPCs and discover that you don't actually care about them.
I'm playing Cyberpunk right now for the story they built, and so far it's a very good story.
They can keep the NPCs as they are, honestly; fine-tune them perhaps, but meh.
You are right that NPCs are generally stupid across the board. However, in CP2077 they are outright broken. If you look at an NPC, turn 180 degrees and then look back, the NPC is either gone or replaced with another NPC entirely. There is stuff like that, not related to "stupidity", that breaks the immersion for me.
PS. I'm also enjoying the game a lot, but soft locks and general brokenness are starting to take their toll.
Both GTA V and Red Dead 2 have pretty broken and uninteresting NPCs as well unless you start causing havoc in immersion breaking and unrealistic ways. I think the fact that in Cyberpunk 2077 you can't have fun just randomly blowing up shit and causing havoc is a good thing. It is very unsatisfactory to randomly kill people or run them over, because there is nothing interesting about it. The art direction and ambiance works very well as a "movie set" but not so much as a sandbox.
GTA 5 does that too, but it may be observed more often when you're low on RAM/VRAM. When I played GTA 4 on a microwave, it spawned the same car over and over until you got far enough from your current location. Not saying that it is okay for CP2077 in 2020, but this behavior is not unique. Maybe they overestimated where the hardware industry would be in 2020Q4.
I played a lot of GTAV, and I never noticed it. Perhaps it preserved actors within a certain radius of the player? Or maybe the third-person view made it less apparent. Dynamically spawning/despawning actors is a necessity, I suppose, but I've never seen it handled so poorly and sloppily as in CP2077.
GTA generally is pretty good about keeping things close to you around, e.g. looking backwards won't usually remove traffic in front of you. The most obvious case is NPCs that aren't "relevant" anymore, e.g. people that have places around the city that depend on the time of day. When their "active" time ends, they'll start wandering off from their location. You can follow one for a long time, but if you then look away for even a split second they'll be removed, presumably because the game tracks them as something it wants to unload after their useful time has ended, but won't do it while you look.
In earlier GTAs like San Andreas there is an explicit difference between "traffic in the distance" and "traffic close by" - the former will be removed if out of sight, doesn't have full-fidelity 3D models, apparently doesn't have physics applied to it, ... Once they get close enough, they get more complete and sticky. (GTA V quite possibly does the same, I just know less about the details there and it's a lot less obvious.)
Yeah, I think GTA has been doing a good job at handling these things gracefully.
Regarding your second point, I'm not sure if you've seen how it's handled in CP? Basically all roads in the distance have a string of car/headlight sprites (yes, 2D) moving on them. Makes the world look amazingly populated and alive.
However, the density of the sprite cars is much higher than that of the actual cars (depending on your settings). So when driving on a long road you always seem to be chasing a large group of cars that you can never reach. There seems to be no connection between actual cars spawning in and the sprite-based crowd illusion.
And if you use a scope with good zoom to observe the sprite cars, it looks very funny. Basically you have very obvious Doom faux-3D sprites in an otherwise stunning environment.
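For anyone curious how that kind of impostor system fits together in general (this is just an illustrative C++ sketch of distance-based sprite impostors, not how REDengine actually does it; all the names are made up):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Hypothetical entity, purely for illustration.
    struct RealCar { Vec3 position; /* full mesh, physics, AI ... */ };

    static float dist(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Stand-in renderer hooks (empty so the sketch compiles).
    static void drawBillboardSprite(const Vec3&) { /* cheap camera-facing quad with a headlight texture */ }
    static void drawFullCarModel(const RealCar&) { /* full 3D mesh with lighting and simulation */ }

    static void drawTraffic(const Vec3& camera,
                            const std::vector<RealCar>& simulatedCars,
                            const std::vector<Vec3>& fakeTrafficPoints,
                            float impostorDistance /* e.g. 300 m */) {
        // Real, simulated cars only exist in a bubble around the player.
        for (const RealCar& car : simulatedCars)
            drawFullCarModel(car);

        // Past that bubble, "traffic" is just sprites animated along the road.
        // If nothing ever converts between the two sets, you get exactly the effect
        // described above: a distant pack of cars you can never catch up to.
        for (const Vec3& p : fakeTrafficPoints)
            if (dist(camera, p) > impostorDistance)
                drawBillboardSprite(p);
    }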
Still doesn't matter to me and has not had any real influence on me whatsoever. I'm not doing anything with them. They are there for the feeling not for relevance.
100% agree. I think the people that harp on about this are expecting the game to be something it's just not. Why not focus on the aspects of it that are good, instead - like, literally anything else? Visuals? Story? Tactics? Ambience? Different character builds? The game gives you plenty of freedom to decide how you want to experience it. It seems like a lot of people just want to focus on the uninteresting/minor parts for some reason.
People have focused on the poor story and gameplay too.
That said, why do we have to remain purely positive for an expensive AAA game that promised so much and sold at a high price tag? Why should they be excused?
Reading the Twitter thread I wonder if people just have unrealistic expectations.
Self driving cars are a billion dollar industry, and yet you expect a game dev studio to come up with a realistic AI for drivers?
It's a game. People know that it's not real. Sure, it's fun to discover the tricks that the devs had to pull off to make it work. But man, don't blame a game dev studio that they haven't come up with an algorithm for self driving cars.
I don't mean to be a jerk, but I don't think you know what you are talking about here. Driver AI in a video game isn't nearly as complex as self driving cars in the real world. Plenty of games have been able to create a more realistic driver AI.
For example take a look at this video[1], specifically the part in which the user shoots at AI driven cars. The AI in GTA 5 responds in a much more realistic fashion. This has a huge impact on immersion and creating a lifelike world. GTA 5 came out 7 years ago. There is no technical reason for Cyberpunk's AI being this bad. CDPR simply didn't want to invest the time in making it better.
You're right, I had no idea how realistic drivers behave in newer games. Would be interesting to see what tricks GTA pulls off to hide the limits of their AI. Obviously they do a better job.
(Sidenote: When the guy started beating up pedestrians I just couldn't continue watching that video. Maybe I'm getting old.)
Self-driving cars are not comparable to a driver AI. The AI exists within a world where it can have full knowledge of everything, and will have significantly fewer edge cases to worry about. Many racing and open-world games do a great job at this, so it's not that it can't be done.
edit: Also hitting a pedestrian isn't a huge deal in a videogame.
FWIW, I got hit by cars a lot in my cyberpunk playthrough.
Fits the dystopian narrative I guess, but even in a dystopia people worry about scratching their car or an insurance nightmare, so it's not a very realistic experience.
Other than that, the game is very enjoyable on a PC, with some really enjoyable NPC action, interesting side quests, and a great main story line. Also great value compared to buying a movie or going to the cinema for a similar price - I've sunk 50h into the game so far.
> Which is weird, because my impression was that it was only truly broken on last-gen consoles (i.e. PS4, not 5), where it didn't really belong in the first place.
The game was announced 8 years ago and the last-gen consoles were the main target. What do you mean, it didn't belong there in the first place? The game came out now because it was delayed; otherwise it would have been out when we ONLY had the consoles where it "didn't really belong in the first place".
The main target was PC. You can tell just by looking at the controls. They were specifically designed to be played with a mouse and a keyboard (and they are terrible with a gamepad)
As a PC player I have the opposite reaction. Menus and input feels very console ported. I guess they decided on a compromise that would leave everyone feeling equally alienated.
This doesn't excuse the unplayable experience on consoles. If you're making a PC game, release it on PC. Don't advertise it or sell it as a console game.
When you watch footage of it on last-gen consoles, it looks nearly unplayable. It clearly wasn't optimized for those consoles at all, and definitely deserves to be refunded/pulled (not passing judgement on CDPR, just pointing out that it was very clearly developed for modern hardware and then shoe-horned backward)
I played it on PC. Honestly it didn't feel buggy at all. Especially compared to Skyrim or similar. You can get rare weird physics glitches, I had one situation where I seemingly teleported/blacked out (not sure if that is a quest) but nothing outstanding, and sometimes bodies don't fall to the ground in a proper way but generally it felt very smooth and bug free.
I am running it on a high end PC, maybe the bugs manifest more often on a lower spec PC?
RDR2 manages to offer that level of fidelity and detail at a larger scale, and runs on these systems just fine. I don't think it's unreasonable for consumers to expect a game in development for so long to live up to at least the level of that game.
I honestly don't know what you're on about. What is detailed about RDR2? I played it, and its world felt mostly empty and devoid of variation. Even in cities, it severely lacks details, partly owing to the "mundane" historical period it is set in.
Cyberpunk has vastly more variation in NPCs, models in general, textures, light sources, and so on. In my playthrough, the largest crowd present in RDR2 was maybe 10 people on screen at the same time. Twenty at best. In my mind, the level of ambition doesn't compare, unless you were to limit your Cyberpunk experience to the Badlands.
To me it's not so much the bugs but the AI in general that seems to be worse than gta3's AI. It's an AAA game that's been hyped to death and delayed 3 times to deliver sub par results.
I like Rockstar's approach much better: very limited hype, a limited number of trailers/videos, no crazy promises, etc. Release on a limited set of platforms first and take your time to port it to others so that everyone gets a polished version (or at least as good as it gets given the hardware). Cyberpunk looks like your typical "over promise, under deliver" approach.
It's glitchy as hell on Ps4, but I'm loving every minute of it. The jankiness really adds to the whole cyberpunk aesthetic. Anyone complaining about buying a broken ass game based on a techno dystopian future will never be punk, let alone cyber.
If I could request one single upgrade though, putting user input fully onto a thread that isn't also responsible for rendering would be great.
I'm just confused why anyone buys day-1 games at all.
What would be the point of putting user input on its own thread, though? Do you mean letting the UI run at 60 fps even if the 3D graphics are janky? Otherwise there's no point collecting input faster if the game isn't going to draw it until the next frame.
> What would be the point of putting user input on its own thread, though?
Shooting things with a console controller becomes much easier. With a high power single shot weapon, you can sweep over your target and pull the trigger at the right moment. Works great for games that accurately record your input timing regardless of whether there's a frame drawn at the right moment or not. But once you learn this technique, playing games that require a frame to process input becomes very painful.
edit: Sure, it doesn't specifically need to be implemented as a thread if the controller events come with a timestamp, but my sibling comment indicates that in CP2077 the input event between frames might actually be lost.
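To sketch the general pattern (this has nothing to do with CP2077's actual engine; the polling call is a stand-in for whatever the platform provides, SDL, XInput, etc.): sample the controller on its own thread at a high rate, timestamp each event, and let the game loop drain the queue once per frame.

    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <queue>
    #include <thread>

    using Clock = std::chrono::steady_clock;

    struct InputEvent {
        int button;               // which control changed
        bool pressed;             // new state
        Clock::time_point time;   // when it was actually sampled
    };

    std::mutex g_inputMutex;
    std::queue<InputEvent> g_inputQueue;
    std::atomic<bool> g_running{true};

    // Stand-in for a real platform call (e.g. SDL/XInput); stubbed so this compiles.
    static bool pollControllerButton(int) { return false; }

    // Input thread: samples at a fixed high rate, independent of the render frame rate.
    static void inputThread() {
        bool last[16] = {};
        while (g_running) {
            for (int b = 0; b < 16; ++b) {
                bool now = pollControllerButton(b);
                if (now != last[b]) {
                    std::lock_guard<std::mutex> lock(g_inputMutex);
                    g_inputQueue.push({b, now, Clock::now()});
                    last[b] = now;
                }
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(1));  // ~1000 Hz polling
        }
    }

    // Game/render thread: drains the queue once per frame, but every event still carries
    // the timestamp it was pressed at, so a trigger pull between frames isn't lost.
    static void consumeInputForFrame() {
        std::lock_guard<std::mutex> lock(g_inputMutex);
        while (!g_inputQueue.empty()) {
            InputEvent e = g_inputQueue.front();
            g_inputQueue.pop();
            // feed e.time into hit detection / aim-sweep logic here
            (void)e;
        }
    }

The same effect can be achieved without a dedicated thread if the OS already delivers timestamped events; the point is only that input timing stops depending on when a frame happens to be drawn.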
Ironically, the released build for all consoles is made primarily for last-gen consoles. CDPR intends to release a patch for enabling "next-gen features" next year. So no, it runs like shit on last-gen because of poor/rushed engineering. Not because last-gen is "too dated".
Not sure if it's comparable with Skyrim. Apparently the graphics are just broken beyond repair, with the frame rate dropping below 15 in a lot of scenes on PS4, which is just insane, and which I simply doubt any post-release patch is going to be able to fix at all. It is simply crazy that they released the game in such an unplayable state on those platforms.
Also bear in mind that this is the PS4 version that’s being pulled. Maybe that’s where your confusion came from. The proper PS5 edition is not released yet. The PS5 runs this PS4 version somehow fine... But that’s definitely not how it’s supposed to work in the first place.
We're mostly hearing from people having trouble, of course, as they're the most vocal, so it's hard to paint a complete picture, but I think it's partly down to how fragmented console performance has become.
So you hear complaints from people with a PS4 Pro while people with a standard PS4 swear they have only a few issues and playable performance, which makes no sense unless the game depends on some less visible spec, like the performance of the media it has been installed to. Which circles back to the original point: consoles are no longer the stable, uniform hardware that's hard to master but easy to support, and the requirement to target every hardware combination is anachronistic.
I've played it for 12 or so hours on my non-pro PS4. Had to restart once because the controls stopped working ("mouse cursor" just moved to the bottom of the screen/huge delay in input) and one time a hint overlay got stuck and covered 1/3 of the screen and couldn't be closed. Savegame reload fixed it (the other bug required a restart).
The gameplay is fun and I don't regret buying it (physical copy). But I'm a sucker for the cyberpunk theme and Witcher 3 is probably my #1 console game of all time so maybe I'm willing to accept too much. Far from unplayable though and I don't really care for maxed out graphics so the look and feel is just fine for me.
This matches my experience. I played it for 50 hours on PC (and I am a very lazy gamer, typically I get tired of everything in one hour), and didn't encounter any breaking bugs. Overall, it's a very enjoyable story-driven game, although "AI is more A than I".
My explanation for the outrage is that in the past few years quite a few people made YouTube careers out of bashing AAA games. Outrage videos are apparently a lucrative business, and the new generation of wannabe game journalists jumped on the train.
I'm constantly wondering who even cares about the bugs, seeing as this "game" has hardly anything worth calling gameplay. Imagine having to drive a tank through narrow back alleys while drunk. That's what the controls feel like…
Why don't people consider the other POV? Maybe Sony forced CDPR to release the PS4 version as a condition for the PS5 release. Like, "CDPR, you want a PS5 release? Then release both versions together."
So you think Sony did not notice the state of the game on PS4, and nobody in Sony management was aware of the problems? For the most anticipated game of the year, nobody at Sony raised a warning? You're kidding me.
The discussion of the SSAO suggests that the game uses GTAO - lots of features suggest at the very least heavy inspiration from that technique (see "Practical Realtime Strategies for Accurate Indirect Occlusion", a relatively accessible paper).
The way the game's rendering is set up seems very carefully thought out. Namely, the relative lack of pre-baking and lighting "tricks." This accomplishes several things at once: It simplifies development of locations and assets - hugely important in a game with a scope this large, it makes the lighting/visuals of the game more robust and consistent in quality, and it also allows the integration of ray tracing effects without changing the artists' and designers' process.
Since even without ray tracing the game uses PBR and broadly physically plausible (though carefully designed) lighting, adding ray-tracing doesn't require a massive amount of extra work. All the information is already there, and it'll be much easier to make it look good.
All in all, reading this article has done just as much to convince me that the game is "next-gen" (at least in terms of graphics) as actually playing it has :)
As a comparison, the ENB mod for Skyrim allows dynamic lighting in that game, but to actually use physically plausible light locations rather than the base game's prebaked light maps essentially requires redoing the lighting for all the locations in the entire game (there are several mods that do indeed do this.)
Before CP2077 I still hadn't seen a game that definitely beat what you could accomplish with modded Skyrim for visuals - even in very recent games it was all still fundamentally done the same way. Good to see that essentially a decade of near stagnation has ended. It's a little vexing that the state of PC gaming is so dependent on the release of new consoles, though.
"It's a little vexing that the state of PC gaming is so dependent on the release of new consoles, though"
That's apparently where the big money is.
Also, there are lots of folks like me who play only occasionally, and therefore don't need to spend so much on a state-of-the-art gaming PC and get by with their 5+ year old one.
But yes, if high-end PC gaming hardware were the standard... that would allow for some serious graphical (and other) improvements.
I am finally back into game development (as a side project) - and I am surprised again at how much hardware constraints limit game development, too. Many things I want to do are just not possible on the average machine (but that's the bonus of developing on average hardware: you find out about it early on).
I still dream of a voxel sandbox game engine with integrated chemistry and physics. Quite a few projects promised that years ago, but apparently it is not yet possible with today's hardware. Maybe soon?
All dynamic lighting, dynamic shadows, no lightmaps. I guess it was a couple generations too early for that to work.
Maybe lightmaps are dying off now because game worlds are getting bigger and denser, and storage isn't getting faster as fast as GPUs are, so it's better to throw together something approximate and dynamic that can be rendered in 2 or 3 frames, than something perfect that can never move and has limited resolution at bake time.
I would say it's a few things at once. GPUs now have pretty much general-purpose compute, and they're typically memory-bandwidth bound more than compute bound. So, as the paper and article briefly discuss, you can do for example temporal reprojection/accumulation in order to get enough samples (see figure 3 in the paper), and apply extra compute (filters) to the information already locally available to increase image quality, with little effect on performance since the memory bandwidth limit leaves spare compute.
You can see what this accumulation looks like for CP2077's SSAO in this Gamer's Nexus video: https://youtu.be/zUVhfD3jpFE?t=960 He attributes it to TAA but to me it looks like a clear example of the exact temporal accumulation technique described by the GTAO paper in order to get samples performance-efficiently.
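To make the accumulation idea concrete, here's a toy CPU-side version of the general technique (not CDPR's implementation; the two helpers are stand-ins for taking a couple of rotated AO samples and following motion vectors):

    #include <cstddef>

    // Toy per-pixel temporal accumulation, written CPU-side for clarity.
    // In a real engine this runs in a shader; the helper names are made up.
    struct Pixel { float ao; };   // accumulated ambient-occlusion term, 0..1

    // Stand-ins: the real versions would take a few rotated GTAO samples per frame
    // and reproject via per-pixel motion vectors, respectively.
    static float computeNoisyAOSample(int, int, int) { return 1.0f; }
    static bool  reproject(int x, int y, int& px, int& py) { px = x; py = y; return true; }

    static void accumulateAO(const Pixel* history, Pixel* current,
                             int width, int height, int frameIndex) {
        const float blend = 0.1f;   // roughly ten frames of effective history
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float fresh = computeNoisyAOSample(x, y, frameIndex);
                int px, py;
                if (reproject(x, y, px, py)) {
                    // Exponential moving average with last frame's reprojected value:
                    // a few cheap samples per frame converge to a smooth result.
                    float prev = history[py * width + px].ao;
                    current[y * width + x].ao = prev + blend * (fresh - prev);
                } else {
                    // History invalid (disocclusion, camera cut): fall back to the noisy
                    // sample, which is why fast movement briefly looks grainy before it settles.
                    current[y * width + x].ao = fresh;
                }
            }
        }
    }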
This ("free" compute due to memory bottlenecks) is true for the last-gen consoles as well as modern cards.
So rather than a storage limitation, it's more a limitation of being able to get the data from system memory or even on-card GPU memory to the GPU cores fast enough since the cores now process it so quickly. A general increase in resolution of textures and towards 1440p and 4K displays has not helped with that issue.
Going along with that is a general trend towards PBR (physically based rendering) in all fields that use computer graphics - film/tv, 3D modelling, design, games.
The PBR philosophy is generally against tricks and hacks where possible. The reasons why are the reasons PBR has been so widely (almost universally) adopted: it makes it easy for artists to create consistent and robust results, using understandable methods that mostly work how you would intuitively think they should, and the results automatically look good and are interoperable since there is a clear target - emulating real light transport, materials, and so on.
It's simply not feasible (or at the very least extremely costly in artist/dev labour) to be manually designing bespoke tricks to get each individual scene to look right. It's a "let the computer do the work" type of philosophy, which Cyberpunk seems to follow since they didn't even seem to include manually placed occlusion planes, for example (going by the original article).
So it's a combination of the hardware now being capable of it, having gradual software/algorithmic improvements to get better approximations to correct lighting, and a relatively wholesale overhaul of the entire graphics pipeline toward PBR, due to its substantial benefits. Then, since the PBR approach is highly compatible with ray-tracing inherently, ray tracing is the cherry on top, for now still only usable on top end hardware for realtime games.
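To put some math behind "emulate real light transport": these are the textbook GGX/Schlick microfacet terms that most PBR pipelines standardise on (a generic scalar sketch, not anything specific to Cyberpunk's engine):

    #include <algorithm>
    #include <cmath>

    const float PI = 3.14159265358979f;

    // GGX normal distribution: how microfacets are oriented for a given roughness.
    static float distributionGGX(float NdotH, float roughness) {
        float a  = roughness * roughness;
        float a2 = a * a;
        float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
        return a2 / (PI * d * d);
    }

    // Schlick-GGX geometry term (self-shadowing of microfacets), direct-lighting remap.
    static float geometrySchlickGGX(float NdotX, float roughness) {
        float r = roughness + 1.0f;
        float k = (r * r) / 8.0f;
        return NdotX / (NdotX * (1.0f - k) + k);
    }

    // Schlick Fresnel: reflectance grows toward grazing angles.
    static float fresnelSchlick(float cosTheta, float F0) {
        return F0 + (1.0f - F0) * std::pow(1.0f - cosTheta, 5.0f);
    }

    // Outgoing radiance for one light, given the usual dot products and material parameters.
    static float shade(float NdotL, float NdotV, float NdotH, float HdotV,
                       float albedo, float roughness, float metallic) {
        float F0 = metallic > 0.5f ? albedo : 0.04f;   // dielectrics reflect roughly 4%
        float D  = distributionGGX(NdotH, roughness);
        float G  = geometrySchlickGGX(NdotV, roughness) * geometrySchlickGGX(NdotL, roughness);
        float F  = fresnelSchlick(HdotV, F0);

        float specular = (D * G * F) / std::max(4.0f * NdotV * NdotL, 1e-4f);
        float diffuse  = (1.0f - F) * (1.0f - metallic) * albedo / PI;
        return (diffuse + specular) * NdotL;           // multiply by light color/intensity in practice
    }

The appeal is exactly the "clear target" point: artists only author albedo, roughness and metallic, and the same material looks plausible under any lighting, rasterized or ray traced.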
Couldn't most textures have a lot of procedurally generated elements? That way you wouldn't need to store all the information.
Artists would just need to specify the procedural part, i.e. their editors would need to support that as well.
I think the way artists currently work, they already use a lot of layers, and some of them are procedurally generated. I know people at least used to use something like procedurally generated "clouds" in Photoshop, and apply that to the texture for effects like wear or staining. But in the end they "burn" the texture down to bitmaps with far fewer layers. Maybe that could instead be done on the GPU. Then the base texture becomes mostly flat and easily compressible, and the clouds can be procedurally generated, requiring no storage at all. It can still be deterministic; you just store the seed.
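As a toy illustration of the store-just-a-seed idea (plain hash-based value noise, not anything a real texture pipeline necessarily uses), the same seed always regenerates the same "clouds", so only the seed needs shipping:

    #include <cmath>
    #include <cstdint>

    // Deterministic lattice hash: same (x, y, seed) always gives the same value in [0, 1].
    static float hashToUnit(uint32_t x, uint32_t y, uint32_t seed) {
        uint32_t h = x * 374761393u + y * 668265263u + seed * 2246822519u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (h ^ (h >> 16)) / 4294967295.0f;
    }

    static float smooth01(float t) { return t * t * (3.0f - 2.0f * t); }

    // Bilinearly interpolated value noise, one octave.
    static float valueNoise(float x, float y, uint32_t seed) {
        int xi = (int)std::floor(x), yi = (int)std::floor(y);
        float tx = smooth01(x - xi), ty = smooth01(y - yi);
        float c00 = hashToUnit(xi,     yi,     seed);
        float c10 = hashToUnit(xi + 1, yi,     seed);
        float c01 = hashToUnit(xi,     yi + 1, seed);
        float c11 = hashToUnit(xi + 1, yi + 1, seed);
        float top = c00 + (c10 - c00) * tx;
        float bot = c01 + (c11 - c01) * tx;
        return top + (bot - top) * ty;
    }

    // Sum a few octaves for the Photoshop-"clouds" look; use the result at runtime
    // to modulate a base texture (wear, staining) instead of baking it into the bitmap.
    static float clouds(float x, float y, uint32_t seed) {
        float sum = 0.0f, amp = 0.5f, freq = 1.0f;
        for (int octave = 0; octave < 4; ++octave) {
            sum  += amp * valueNoise(x * freq, y * freq, seed + octave);
            amp  *= 0.5f;
            freq *= 2.0f;
        }
        return sum;
    }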
That's essentially what PBR game engines do. They take in base textures/resources (NB: a "texture" doesn't necessarily correspond in any straightforward way to the final image, and shouldn't be thought of as an "image" except in the mathematical sense) and apply all sorts of processes to get a good looking final result.
Not everything can be procedurally generated on the fly (at least with a small algorithm that would run quickly enough on a GPU). At a certain point just providing compressed bitmaps becomes more efficient, especially since GPUs are highly specialised to work with exactly those. Everything is a tradeoff in real time graphics.
> Namely, the relative lack of pre-baking and lighting "tricks."
Possibly this explains the issues on ps4/xb1 hardware. If cp2077 were designed for the new generation of GPUs where those tricks are unnecessary, it clearly wouldn't backport well onto hardware that requires those tricks.
I do think it contributes, however the techniques for rendering these have also advanced in quality and performance over the years, so it's probably possible for them to get the game to a decent state even on last gen consoles.
> As a comparison, the ENB mod for Skyrim allows dynamic lighting in that game, but to actually use physically plausible light locations rather than the base game's prebaked light maps essentially requires redoing the lighting for all the locations in the entire game (there are several mods that do indeed do this.)
Skyrim doesn't use prebaked lights and shadows at all. That's why you generally only have a handful of active light sources per scene with everything else being filled in with ambient (interiors) or sky (exterior) light.
If you want to see a game whose graphics are unbounded by consoles, take a look at Star Citizen. Yes, it's incomplete. Yes, it's buggy. Yes, it might be a microtransaction hoax. But damn, it looks good.
To me you've kinda proved my point. Nothing in that video strikes me as being "next gen" in terms of graphics. They have a decent filmic post processing filter and high res textures, I guess? CP2077 looks much, much better than that.
I am stunned how good CP2077 looks. I feel like any scene is a wallpaper. Especially with how smooth and connected everything is. I am running on RTX3080 on max settings at 1440p so your mileage may vary.
I don't think it's subjective. It's moments like this: https://youtu.be/nJ_9kli9gwU?t=961 that reveal the limitations of SC's engine. Coming from playing CP2077 the lighting looks badly off.
I do agree it does some things better, but it overall still looks "current gen."
That was a really fun read, I would like to know more about the RTX/DLSS influence on the game and how much of it just layers on top versus being fundamentally different ways of building a game.
One thing I feel it does in Cyberpunk is adds a layer of lighting correctness that really sells the environment. Sometimes it doesn't look perfect but it always feels correct and closer to true to life.
I have long held that lighting, more than textures, was the missing realism in games, and if you have ever played raytraced Minecraft I think you get what I mean. Even the pixelated textures and block world of Minecraft feel like a real world when rendered with correct lighting.
I've been meaning to get deeper into 3D rendering. I always thought it was fascinating. I took a shot at creating my own 3D engine in the browser from scratch using WebGL and it's kinda hard. Back then I was between jobs so I had a lot of free time.
How did you get into CG? Do you fiddle with other areas, like AI, ML...? I'm asking because I think you really have to concentrate on one field to get to the level of the guys behind Cyberpunk. They probably never had to learn React/Redux in order to get a job.
It's honestly super fun to go from zero to one with rendering just because it opens your eyes to how that second processor on your machine works.
Writing vertex and fragment shaders blew my mind the first time I did it. "Wait, this code will be run for EACH PIXEL ON THE SCREEN?" It's obvious in principle, but the actual implementation felt awesome to experience first-hand. Like the feeling of accomplishment that you get in re-deriving a famous theorem yourself. It's not groundbreaking or novel, but it unlocks a much deeper understanding for why we need GPUs at all.
You can take a look at the humble repo I used to both learn C++ and explore 3D rendering [1]. If by some chance you are interested in learning more about how this stuff works, I highly encourage you to check out TheCherno's youtube channel [2]. I'm not a huge fan of his newer stuff (Hazel engine), but his older series implemented 3D graphics from scratch in Java. Taught me a huge amount about how this stuff works and really ignited my interest in programming.
Thanks, I'll take a look. I used to watch TheCherno's videos a long time ago.
Funny story: following a video (ThinMatrix) about creating a 3D engine in Java (I was translating everything to JavaScript and working around the old OpenGL concepts), I learned that you can't have an object as a key in a JavaScript dictionary. Live and learn.
ThinMatrix is great too. Although, I've come to like Dave Frampton's channel even more than TM [1]. The perfect balance of technical detail, artistry, game design, and calm steady progress.
The rendering engineers on Cyberpunk are definitely career experts. But, hobbyists can get impressive results in their spare time. IMHO, a good target to shoot for is to write your own https://www.khronos.org/gltf/ viewer. It's a scalable goal that you can take as far as you like.
> I don't know jack about animations / bones / skin deformation
I only know the basics, but as far as I know that stuff is generally pretty straightforward (at least compared to actual rendering)
Basically you have an invisible vertex floating inside the mesh, and vertices on the visible mesh are "weighted" to it (with a floating point weight), so that when the bone moves, it "pulls" vertices towards it based on the weight strength. A mesh vertex can be weighted to multiple bone vertices at once. Bones typically consist of two vertices, where one is treated as an anchor and the other rotates around it. They're then arranged in a hierarchy, so that a parent bone's transform (translate/rotate/scale) applies to its children, and so on. So for example, rotating an upper arm bone causes the forearm bone to rotate with it.
An animation is then a series of keyframes on those bones (translate/rotate/scale to a particular x/y/z at time t). Those keyframes are interpolated when the animation is applied, and the mesh vertices are dragged along accordingly.
Motion-capture is just taking position data from a live actor and mapping it to those bone keyframes.
Things get slightly more complicated when it comes to blending between multiple animations (for example, running + swinging a sword; you basically average the keyframes). And then there's IK or Inverse Kinematics, which basically means that instead of driving an animation purely from explicit keyframes, you procedurally place the "end" of the limb (a character's foot goes all the way to the ground, for example) and then procedurally work backwards adjusting the bones up the hierarchy to satisfy that position. This is pretty common these days and leads to dramatically more natural-looking animation for characters and creatures (you can see it in action by walking on some stairs in a recent game). I don't know exactly how it's implemented.
But that's about it from a technical perspective. Of course the artistry of getting all of this just right is much harder - from the bone-weighting ("rigging") to the keyframes to the motion capture data cleanup - but the technical aspect isn't nearly as complicated as rendering (to my understanding).
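If it helps to see the moving parts, here's a bare-bones sketch of the hierarchy plus keyframe interpolation. It's deliberately simplified (translation-only keys, made-up struct names); real skeletons carry full rotation/scale transforms, but the shape is the same:

    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
        return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
    }

    // One keyframe: "this bone should sit at offset `localOffset` (relative to its parent) at time t".
    struct Keyframe { float t; Vec3 localOffset; };

    struct Bone {
        int parent = -1;              // index into the bone array, -1 for the root
        std::vector<Keyframe> keys;   // assumed non-empty and sorted by time
    };

    // Sample a bone's local offset at `time` by interpolating the surrounding keyframes.
    static Vec3 sampleLocal(const Bone& bone, float time) {
        const auto& k = bone.keys;
        if (time <= k.front().t) return k.front().localOffset;
        if (time >= k.back().t)  return k.back().localOffset;
        for (size_t i = 1; i < k.size(); ++i)
            if (time < k[i].t)
                return lerp(k[i - 1].localOffset, k[i].localOffset,
                            (time - k[i - 1].t) / (k[i].t - k[i - 1].t));
        return k.back().localOffset;
    }

    // Walk the hierarchy: a child's world position is its parent's plus its own local offset,
    // which is why rotating an upper arm drags the forearm along with it.
    static std::vector<Vec3> poseSkeleton(const std::vector<Bone>& bones, float time) {
        std::vector<Vec3> world(bones.size());
        for (size_t i = 0; i < bones.size(); ++i) {   // assumes parents come before children
            Vec3 local = sampleLocal(bones[i], time);
            if (bones[i].parent >= 0) {
                const Vec3& p = world[bones[i].parent];
                world[i] = { p.x + local.x, p.y + local.y, p.z + local.z };
            } else {
                world[i] = local;
            }
        }
        return world;
    }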
I get the concept because I used to be a 3D artist... but when it comes to programming this...
For a static model I just copy its vertex positions to a buffer, right? So when there's a bone animation, do I just update the vertices in my model and then copy them to the buffer again, every frame?
That can work, but it's slow. The way this is usually done is to not modify the buffers and instead do the work in the vertex shader. The vertex shader sends the skinned verts straight to the triangle rasterizer and then they are never needed again. So there's no need to read-modify-write the buffer just to have the rasterizer read it back again.
Your vertex buffer should hold indices for which bones influence the position of each vertex and by what fraction. Each vertex could, for example, have 4 indices and 4 floats to indicate these values, and these only need to be uploaded to the gpu once.
The transformation matrices are interpolated for each rendered frame using the hierarchical animation data, and the computed matrices are uploaded to the GPU in a buffer. The matrices are accessed in the vertex shader by indexing into the matrix buffer to calculate the vertex position.
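A stripped-down sketch of that per-vertex data and the skinning math, written in C++ for readability (it mirrors the computation the vertex shader would do with the matrix buffer; the struct names are just for illustration):

    #include <array>
    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Mat4 { float m[16]; };   // column-major 4x4 matrix

    // Static per-vertex data, uploaded to the GPU once.
    struct SkinnedVertex {
        Vec3 position;                      // bind-pose position
        std::array<uint8_t, 4> boneIndex;   // which bones influence this vertex
        std::array<float, 4>   boneWeight;  // by how much (weights sum to 1)
    };

    // Transform a point by a column-major 4x4 matrix (w = 1).
    static Vec3 transformPoint(const Mat4& M, const Vec3& p) {
        return {
            M.m[0] * p.x + M.m[4] * p.y + M.m[8]  * p.z + M.m[12],
            M.m[1] * p.x + M.m[5] * p.y + M.m[9]  * p.z + M.m[13],
            M.m[2] * p.x + M.m[6] * p.y + M.m[10] * p.z + M.m[14],
        };
    }

    // Linear blend skinning for one vertex. `boneMatrices` is the small per-frame buffer
    // of interpolated (animation * inverse-bind-pose) matrices uploaded each frame;
    // the vertex buffer itself never changes.
    static Vec3 skinVertex(const SkinnedVertex& v, const std::vector<Mat4>& boneMatrices) {
        Vec3 out{0, 0, 0};
        for (int i = 0; i < 4; ++i) {
            Vec3 p = transformPoint(boneMatrices[v.boneIndex[i]], v.position);
            out.x += v.boneWeight[i] * p.x;
            out.y += v.boneWeight[i] * p.y;
            out.z += v.boneWeight[i] * p.z;
        }
        return out;
    }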
> They're then arranged in a hierarchy, so that a parent bone's transform (translate/rotate/scale) applies to its children, and so on. So for example, rotating an upper arm bone causes the forearm bone to rotate with it.
How do you reconcile noise in motion capture data to bone transformations? For example, isn't it possible that the motion capture data slightly elongates or shortens the forearm in a way that makes it impossible to maintain the relationship with the upper arm bone?
In my head, it feels like you would have to pick the closest point on the sphere that's created if you rotate the forearm all around the elbow joint, but that seems expensive to do everywhere all the time.
You can have bones that are parented but don't have one of their vertices directly attached to one of the parent's vertices. It's really just a hierarchy of transformation (child's overall transform = its transform relative to parent + parent's transform). Put differently, the "attached at the joints" notion is often approximated just because that's how human skeletons work, but it's by no means required. And "bones" in this sense don't always even correspond to actual skeletal bones; they're really just a tool for moving around groups of vertices without having to mess with hundreds or thousands of them individually.
That said - and I haven't done this myself, I'm just speculating - if you're using motion capture data directly maybe it makes sense to ditch the hierarchy altogether and have each bone be independent? Though, that could be problematic if you wanted to blend some "artificial" animations together with the recorded ones. I'm not sure what the standard practice is.
Raw motion capture data almost always requires some editing after it has been captured. This is done by the animator, and they can't work without their rigs. Rigging is an entire craft... I've tried it enough to know that I suck at it. Rigging quadrupeds is a god-level skill.
You can learn quite a lot by writing a toy raytracer with things like textures, shadows, bump mapping and the like.
Of course raytracers don't work like polygonal hardware rendering, but a lot of the maths and concepts do carry over, I've found. If you understand how normal mapping and reflections work in a software raytracer, you shouldn't have too much trouble implementing the same algorithms in a hardware-accelerated pixel shader, for instance.
In general I think implementing a raytracer is one of those projects that's just too cool and simple not to do, so I don't think you'll be wasting your time anyway.
It's a lifelong journey of passion and learning :) I put together a list of resources here; just start somewhere and keep going. There are so many options nowadays that really the best thing is to choose a project/path and go from there: http://c0de517e.blogspot.com/2020/11/baking-realistic-render...
I would recommend checking out other GPU APIs, OpenGL is kind of terrible. DirectX and Metal are nice (and also have a nice debuggers) or WebGPU. WGPU is a nice WebGPU implementation https://github.com/gfx-rs/wgpu
Learn basic CG theory. Implement ray tracing and rasterization. Understand the math; it'll come up quite often.
Learn how the GPU works. Learn a graphics API, like OpenGL.
For the level you see in Cyberpunk, you need to learn modern, physically-based techniques. This requires a bit more math, though nothing you wouldn't see in a CS degree. My favorite reference on PBR techniques is the PBR book, which is free online. Applying some of these techniques in a real-time setting is of course another challenge.
However, all this probably takes a ton of time and math and low-level programming knowledge for a newcomer.
There are plenty of job openings in the game development industry outside of the top-of-the-field roles on the best AAA projects. These openings require less expertise and experience, yet will still allow you to get your hands dirty with various areas, 3D rendering included, although in smaller studios working on smaller projects you most likely will not be part of a dedicated rendering team and will be involved in different areas of the project.
Peter Shirley's "Ray Tracing in One Weekend" is probably the best zero-to-one introduction to graphics but it doesn't have much to do with the tooling you'd need in industry (and probably better for it!):
There are two types of rendering now in common use for games.
In forward rendering you allocate one RGB color buffer for the screen, then rasterize triangles to this buffer, directly producing final RGB color values at each pixel.
In deferred rendering instead of one RGB color buffer you allocate a set of screen-sized buffers to hold various attributes of your choosing for each pixel, which can include colors but also things like the surface normal, material type, roughness, velocity, etc. These buffers are collectively referred to as the G-buffer. When you rasterize triangles you fill in all the attributes for each pixel instead of just a final color. Then in a second full screen pass, for each pixel you read all the surface attributes you wrote earlier and combine them with other data such as the positions of nearby lights, do some calculations and output the final color.
By deferring lighting calculations and separating them from the geometry rasterization, deferred rendering can offer more flexibility and performance in some cases. However it is tough to use with MSAA and/or transparent objects, and can be a memory bandwidth hog. Modern rendering is all about tradeoffs.
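A skeletal sketch of that second pass, with simplified CPU-style structures rather than any real graphics API (the texel layout here is illustrative; real engines pack attributes tightly and usually store depth, reconstructing position later):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static Vec3  operator*(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
    static Vec3& operator+=(Vec3& a, Vec3 b) { a.x += b.x; a.y += b.y; a.z += b.z; return a; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static float length(Vec3 a) { return std::sqrt(dot(a, a)); }

    // One G-buffer texel: surface attributes instead of a final color.
    struct GBufferTexel {
        Vec3  albedo;
        Vec3  normal;
        Vec3  worldPos;
        float roughness;
    };

    struct Light { Vec3 position, color; float radius; };

    // Pass 1: rasterize all geometry, writing attributes per pixel (body omitted).
    static void geometryPass(std::vector<GBufferTexel>& gbuffer) { (void)gbuffer; }

    // Pass 2: a full-screen pass that lights every pixel from its stored attributes.
    // The triangles are gone; only the G-buffer and the light list are needed.
    static void lightingPass(const std::vector<GBufferTexel>& g,
                             const std::vector<Light>& lights,
                             std::vector<Vec3>& out) {
        for (std::size_t i = 0; i < g.size(); ++i) {
            Vec3 sum{0, 0, 0};
            for (const Light& l : lights) {
                Vec3 toLight = l.position - g[i].worldPos;
                float d = length(toLight);
                if (d > l.radius) continue;                        // light can't reach this pixel
                Vec3 dir = toLight * (1.0f / d);
                float ndotl = std::max(0.0f, dot(g[i].normal, dir)); // simple Lambert term
                float atten = 1.0f - d / l.radius;
                sum += g[i].albedo * l.color * (ndotl * atten);    // swap in a full BRDF in practice
            }
            out[i] = sum;
        }
    }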
3D objects are drawn using 3D triangles projected and interpolated onto the 2D screen (a 2D array of colors).
Classically, you draw the whole triangle, start to finish in one go. Interpolate the triangle, look up any colors from textures, do the lighting math, write it all out. Simple and straightforward.
For complicated reasons, it often actually works out better for current games on current hardware (games/hardware of the past decade) to defer the lighting math and do it in a separate pass. So: interpolate the triangle, look up the textures, and write all of the inputs to the lighting math (other than the lights themselves) to a multi-layer 2D array of parameters (surface color, normal, roughness, etc.). Then later determine which lights affect which pixels and accumulate the lighting using only the 2D parameter arrays, without needing the triangle information.
The multi-layer 2D array of material parameters is called the "G-buffer". I think it originally stood for "Geometry Buffer", but the name has taken on a life of its own.
It stands for geometry buffer. It looks like none of the replies led with that. If you render positions into a pixel buffer and normals into another pixel buffer, you can shade the pixels and avoid shading lots of fragments that will be hidden. It gets more complicated obviously (material IDs, reflection roughness, etc.), but those are the basics.
> If you render positions into a pixel buffer and normals into another pixel buffer, you can shade the pixels and avoid shading lots of fragments that will be hidden.
Avoiding unnecessary shading is only one small reason to use a G-buffer - you could just do a depth pre-pass for that. The bigger advantage of deferred shading is that it gives you a lot more flexibility with lighting, letting you decouple lighting complexity from screen complexity. This is especially useful with many tiny lights that only affect a small portion of the screen.
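One common way to exploit that flexibility is tiled light culling; a toy version of building per-tile light lists might look like this (purely illustrative, not any particular engine's scheme, and it assumes the lights' screen-space bounds have already been computed):

    #include <algorithm>
    #include <vector>

    struct Rect { int x0, y0, x1, y1; };                    // pixel-space bounds
    struct LightBounds { Rect screenRect; int lightIndex; };

    // Divide the screen into tiles and record, per tile, which lights can touch it.
    // The shading pass then evaluates only those lights for pixels in that tile,
    // so per-pixel cost tracks local light density, not total scene light count.
    static std::vector<std::vector<int>> buildTileLightLists(
            const std::vector<LightBounds>& lights, int width, int height, int tileSize) {
        int tilesX = (width + tileSize - 1) / tileSize;
        int tilesY = (height + tileSize - 1) / tileSize;
        std::vector<std::vector<int>> tiles(tilesX * tilesY);

        for (const LightBounds& l : lights) {
            int tx0 = std::max(l.screenRect.x0 / tileSize, 0);
            int ty0 = std::max(l.screenRect.y0 / tileSize, 0);
            int tx1 = std::min(l.screenRect.x1 / tileSize, tilesX - 1);
            int ty1 = std::min(l.screenRect.y1 / tileSize, tilesY - 1);
            for (int ty = ty0; ty <= ty1; ++ty)
                for (int tx = tx0; tx <= tx1; ++tx)
                    tiles[ty * tilesX + tx].push_back(l.lightIndex);
        }
        return tiles;
    }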
I didn't say it was the only reason, but it is a big reason.
Even if you use a depth map to exit early on fragments you are still going through all your geometry twice, once to get your depth map and again to do all your shading.
The history of RenderMan (prman) went a similar way, though not through using actual G-buffers. Originally it would shade and then hide, because of memory constraints and because displacement of micropolygons happened in the surface shader and not in a separate displacement shader. Eventually the architecture shifted to do as much hiding as possible first, and displacement shaders were split out into a separate type.
Another thing that can be done with g-buffers is using mip-mapping and filtering of the g-buffer to do some sampling or effects on lower resolutions. Pixels can also be combined into distributions instead of just averages, though I'm not sure how much this is done currently.
It is a component of a deferred shading rendering pipeline.
"A screen space representation of geometry and material information, generated by an intermediate rendering pass in deferred shading rendering pipelines."
https://en.wikipedia.org/wiki/Deferred_shading
As an aside, I worked in visual effects for film for years and never heard the term g-buffer. I am sure we had some other term that we thought was common, but was only used inside our studio.
AOVs, render passes, whatever. What they're doing in the final pass in games is fancy comp with all of that. We've played with re-lighting in comp as well. You ought to be familiar with that.
The game itself is so far away from "really good" I can't even begin to explain.
It's a "film critic who doesn't believe games are art"'s idea of a game - a pretty world with few meaningful choices, dumb AI, huge swathes of features obviously ripped out, very little aesthetic customisation etc.
The world is quite nice, but I feel more interested in (and more engaged by) Fallout 4 (and that's something because Fallout 4 isn't particularly good either and barely an RPG).
There is more to its problems than just performance.
I've completed the main story and am on my second play-through now. The first play-through was basically main quests only; for the second I'm doing a few main quests, then every single non-main quest I can get at, and only then the main quests. It is by far the best-made open-world game I have ever played, and while there are a lot of bugs, it is nowhere near "Bethesda-level bugginess". Fallout and The Elder Scrolls - some of my favorite games - were worse than this even after the first 5-10 updates, since at least 90% of the bugs I saw were game-breaking, save-game-corrupting, or crash-to-desktop stuff.
I have had zero crashes and zero bugs that required me to reload. 100% of the bugs have been cars floating a few centimeters above the ground, a weapon falling a few centimeters below the ground (but I can still pick it up), the GPS not taking the fastest route, etc. Basically not a single bug that is cause for all the yelling and shouting I'm seeing. In a few patches it will be everything GTA could only dream of, and I'm sure this will be a cash cow. A cash cow for the only anti-DRM and customer-friendly big developer out there is a huge win.
That I can play it at high settings (using Digital Foundry's recommended settings) on my not-at-all-new PC is just icing on the cake.
I agree that the complaints are at a fever pitch, and I'm certainly enjoying the game, but I balk at comparing it to GTA, which I believe is simply in another league of immersion.
This may be because I'm not very far into the main quest (without spoiling anything, I got past the prologue and interlude about 3 hours ago).
But just off the top of my head, stark differences between gta and Cyberpunk:
Vehicle types. GTA has cars, motorcycles, helicopters, planes, blimps, ATVs, and IIRC bicycles. Cyberpunk has cars. I've seen motorcycles but I haven't seen any indication that I can ride them yet. I've seen flying-car-type things too, but again no evidence I can drive them. Same for blimps.
GTA lets me enter buildings and do things like bowl, play darts, pool, whatever. Get a drink at a bar, IIRC. Strip clubs. Again, I'm not too far into the game, but I tried going to a bar in Cyberpunk and it was like buying a drink in Skyrim: it simply appeared in my inventory and that's that. I visited a hooker and he fucked me and that was kinda fun I guess, but not as interactive as a GTA strip club.
Cops. I've seen a funny video in Cyberpunk where someone sniped someone from a roof and cops teleported behind him (nothing personel, kid). In GTA the cops have to actually arrive in some form of transportation to respond to your crimes.
So those are my main complaints, but only because we're comparing to gta here. On its own I actually enjoy Cyberpunk more than gta, but I do so fully cognizant of its flaws. I still have fun though.
I tried to play GTA V, but found the story very off-putting and uninteresting. I couldn't relate to any of the characters and found some of them frankly disgusting. All the missions in Cyberpunk I've played so far are well made, some of them really good like the parade scene in Japan town.
Exactly, I wasn't saying it was objectively bad it's just clearly not up to the standard that it set for itself.
If CDPR have the balls to keep adding as much as they can over the next few years, they could have a very nice game in the end, but currently it's just a bit mediocre.
That is 100% what happened. They aren't Bethesda, i.e. they aren't quite cynical enough to funnel out shit without at least trying.
The game has already made its money back, and I hope they bring it up to spec. I want to like it; it's just a bit dull to me (and it's probably a bad sign when the most recent CoD game has a more interesting plot).
But it's a homogenous platform instead of the mad set of different driver versions/-implementations, desktop environments, windowing protocols, audio protocols, etc. that is colloquially referred to as the GNU/Linux desktop. If Google brought out a stadia console with one consistent OS image, hardware, and driver, it might become easier for devs to support it.
Indeed, good point. I too heard that the consoles are buggy. I think it's mainly a problem of the team not having spent time on fixing them. Note that consoles also are a much larger share of the market than GNU/Linux, so they are more likely to put focus on fixing the bugs in the future. For GNU/Linux though, such a task is insurmountable.
Last-generation consoles are really far behind today's PCs. Like, not even comparable. Sure you can optimize for them a ton since they are all mostly the same, but at some point you hit the limits.
All abstracted by available open source libraries, notably libSDL. Besides, Windowing is a laughably trivial part of a game and audio isn't that much more complex when it comes to the OS interface.
The decision to port/not port to desktop Linux is not a technical one when the renderer has already been ported to Vulkan and the rest of the game already compiles on a POSIX-like platform.
On AMD the GPU driver stack is unlikely to be materially different from Stadia, I doubt Google is using anything else than (possibly customized/patched) amdgpu+mesa.
This year alone I had two different bugs with my GPU drivers. First on the laptop with AMD graphics, where the screen would be black on resuming after suspend. After updating I'm not getting the bug anymore.
Then I got another bug on the desktop with a built-in Intel GPU. Those are usually regarded as having the best driver support on Linux, but that didn't save me. After waking from sleep, sometimes (which means often), parts of the screen would flicker. It happened right after the update from Ubuntu 20.04 to 20.10, or kernel 5.4 to 5.8. Now I've manually installed 5.10 and the bug is gone, but without my manual intervention the bug would have continued for months.
Nothing is bug free. But overall experience with AMD is very good for me. And you should always use the latest kernel if you want to avoid stuff like that.
I've logged about 20 hours in it so far, I've seen just as many bugs of the same severity when I played GTA5, and that was multiple years after launch.
Maybe it does not run on some potato console, and the real failure of CDPR here is launching on console at all; that extra effort could have gone into squashing some more bugs and getting the PC version more polished before launch.
Just like people buy consoles to play a certain game, they buy PCs to play a certain game, and Cyberstalker is definitely a good reason to upgrade if you're behind the curve.
I have a question for video game graphics experts:
Assuming a game has stealth mechanics, why do some games let you shoot out almost every light (Ghost Recon Wildlands, GTA 5), while others don’t (Far Cry 5, Cyberpunk)?
Is it a technical decision that needs to be made early on in the development process?
Perhaps being able to shoot any light out involves more work for world and mission designers?
As the other comment alluded to, lighting is traditionally static: all the lighting is calculated during development and "baked into" the level. Until dynamic lighting became feasible (doing it in real time is computationally very expensive), games that had shootable lights had to resort to clever programming, at the very least, to get it working.
And in any case it takes additional logic to make a light destroyable, on top of the proper flags being set during the level design process. I can't think of very many games where all lighting is truly dynamic in the way you're imagining, and until recently a lot of graphics cards couldn't handle more than a few dynamic lights at a time.
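For illustration, here's a toy sketch of that bookkeeping, with made-up names rather than any real engine's API: the level designer marks the light destructible, and gameplay code has to zero out its dynamic contribution and swap the visual when it's hit.

    // Hypothetical component: a shootable light needs a designer-set flag plus
    // gameplay logic that removes it from the dynamic light list when destroyed.
    #include <iostream>
    #include <string>

    struct LightFixture {
        bool destructible = false;      // flag set in the level editor
        bool alive = true;
        float intensity = 800.0f;       // dynamic light contribution
        std::string material = "bulb_emissive";
    };

    void on_bullet_hit(LightFixture& light) {
        if (!light.destructible || !light.alive)
            return;                     // baked/indestructible lights ignore the hit
        light.alive = false;
        light.intensity = 0.0f;         // drop it from the dynamic light list
        light.material = "bulb_broken"; // visual feedback: dark, cracked glass
        // ...spawn glass particles, play a sound, notify the stealth/AI system...
    }

    int main() {
        LightFixture lamp;
        lamp.destructible = true;
        on_bullet_hit(lamp);
        std::cout << "intensity after hit: " << lamp.intensity << "\n";
    }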
When DOOM 3 released, part of its technical wow factor was how it got dynamic lighting working on early-2000s graphics tech. I remember being blown away watching the light change as an overhead lamp was knocked around by physics.
I think Ghost Recon Wildlands and GTA 5 do it well in the sense that lights that would clearly be destroyable by a bullet are indeed destroyable, from lamp posts to car headlamps to light bulbs. If a light needs to stay intact, it at least helps for the fixture to be designed to look like it has a metal grating around it or something.
I remember DOOM 3 blew me away, too. Those zombies you could only see with the flashlight in the first level. Scary! I played it to the end with lights off with my brother and we both got jump-scares regularly!
I'm an amateur but I'd have to guess it's the difference between static lights (often baked into the level itself) and dynamic lights (flashlights, moving lights, etc) that are computed at play time.
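To make that concrete, here's a rough illustration (plain C++ standing in for shader code, with made-up numbers): the baked part is a precomputed lightmap sample that can never react to gameplay, while the dynamic part is summed every frame, so a destroyed light simply drops out of the loop.

    #include <iostream>

    struct DynamicLight { float x, y, z, intensity; bool alive; };

    float shade_point(float px, float py, float pz,
                      float baked_lightmap_sample,          // computed offline, frozen into the level
                      const DynamicLight* lights, int count) {
        float total = baked_lightmap_sample;                // static part: can't change at runtime
        for (int i = 0; i < count; ++i) {
            if (!lights[i].alive) continue;                 // shot-out lights vanish here
            float dx = lights[i].x - px, dy = lights[i].y - py, dz = lights[i].z - pz;
            float dist2 = dx * dx + dy * dy + dz * dz + 1e-4f;
            total += lights[i].intensity / dist2;           // simple inverse-square falloff
        }
        return total;
    }

    int main() {
        DynamicLight lamp{0.f, 3.f, 0.f, 50.f, true};
        std::cout << shade_point(0.f, 0.f, 0.f, 0.2f, &lamp, 1) << "\n";  // lamp on
        lamp.alive = false;
        std::cout << shade_point(0.f, 0.f, 0.f, 0.2f, &lamp, 1) << "\n";  // lamp shot out
    }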
Depending on the engine, yes, it might need to be decided relatively early. But in practice it is, like everything, a design call. Some games want that element in their gameplay and some don't, and it changes things a lot: lighting is used to make things visible (obviously) and to steer attention, and it's a fundamental part of level design. So the game you end up playing changes considerably depending on whether you allow lighting to change or not.
Very interesting analysis. Now I understand why current hardware struggles so much to compute graphics that look similar to what Unreal Engine 3 could do years ago on smaller worlds: no lightmaps and probably no pre-baking at all.
Current games look much better than anything ever done on Unreal Engine 3. It doesn't have much to do with smaller worlds, and lightmaps are still used all the time. Look at the most recent Doom and compare it to anything done with UE3.
You are right, looking back at screenshots of these old UE3 games they don't look as good as I remembered.
However, Cyberpunk (without raytracing) is not the most beautiful game I've played. It has a few nice scenes, and the one discussed in the article looks good, but there are also a lot of areas that are nothing special and perform very badly.
So that's how gamedevs see games :) It was interesting. It would also be interesting to read something like this about the RTX parts; their description sounds so great, and I hope they will some day show us a new horizon in rendering. Hope it's not just cheap marketing trash like it was with DLSS.
I have an RTX card and I see a (pretty noticeable) difference when I turn on RTX reflections and shadows in Cyberpunk, but I can't see what the difference is with RTX lighting.
DLSS is game-changingly brilliant if it continues to improve.
"DLSS is just [an approximation]" is a common comment, but it's bollocks because everything in a modern game engine is an approximation to some underlying phenomenon. Even raytracing is basically a bunch of noise to our eyes before the magic makes it viewable.
Is it perfect? Of course not, but being able to fake detail that our eyes don't take issue with could be enormously helpful to smaller game studios who can't achieve the same results otherwise. I'm specifically thinking of flight simulators, which often don't run very well and where 60% of the pixels on screen are effectively filler (terrain, clouds, etc.).
You really said nothing to change my opinion. Calling this pile of garbage "brilliant" is just ridiculous. I don't care how complicated the technology is, I care about the result. I see results from the RTX reflections: they are great, and they're not some kind of trick or simplification or "optimization for performance". I see the opposite with DLSS: a huge FPS boost, but the price is upscaling that looks like cheap upscaling. You can do the same in many games, even old ones, using a "rendering resolution" setting or something named like that. I would be happy to jump on the bandwagon of those who love the new shiny tech called DLSS, but for that I need to see the opposite results. Right now it's a disaster.
It does look like cheap blurry upscaling. I was trying to be respectful of the opinion of people who love it, but you can't be respectful of mine. You are saying "doesn't look cheap" and "can't afford" in the same comment; think about that. You don't need to be rich to buy a 3070/3080; smartphones are less affordable these days.
DLSS is not the final answer to rendering (well, nothing is), but it's an AMAZING piece of technology. All these temporal and dynamic-resolution techniques are here to stay, as they -improve- the look of games no matter the HW.
What do I mean? Obviously dynamic resolution and temporal reprojection are worse than, say, a fixed 8K render at 240 Hz! Yes, true! But that's not the right math.
The more correct math is: on given hardware, say a 3080, would you rather spend the power to render every single pixel exactly, or "skip" some pixels and recover them with smart techniques for a fraction of the price, almost equal to the real deal, so now you have extra power to spend somewhere else?
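Some back-of-the-envelope numbers for that trade, assuming shading cost scales roughly with the number of pixels actually rendered (a simplification, and the DLSS pass itself isn't free):

    #include <cstdio>

    int main() {
        const double native_4k  = 3840.0 * 2160.0;   // ~8.3 M pixels per frame
        const double quality_in = 2560.0 * 1440.0;   // ~3.7 M pixels (typical "Quality" input at 4K)
        const double shaded_fraction = quality_in / native_4k;
        std::printf("pixels shaded vs native 4K: ~%.0f%%\n", shaded_fraction * 100.0);
        std::printf("budget freed for other work: ~%.0f%%\n", (1.0 - shaded_fraction) * 100.0);
        return 0;
    }

That's roughly 44% of the pixels shaded and ~56% of the per-pixel budget freed up, which is the pool the engine can spend on more lights, geometry, or ray-traced effects.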
Of course, if you just do less work with DLSS or similar technologies, you're losing something; that part is bad. But that's never the whole equation. The real equation is that no matter how powerful the HW, the HW is a fixed resource. So if you spend power to do X, you cannot do Y, and you have to choose whether or not X is more valuable than Y.
Makes sense?
Now, all that said, it's also true that sometimes you max out everything in a game: you cannot have more of anything because that's literally all the game has to render. At that point, sure, it's reasonable to spend power even on things that aren't great bang-for-the-buck, because you literally cannot do anything else anyway! So on the very top PC HW you end up doing silly things, like rendering at native 4K, because you cannot use that power in any other way.
But that's "bad" in a way; it's a silly thing we have to do because there is no other option! If we had the option, it would be much better even on a 3080 to render at, say, 2K or 1080p upscaled to 4K via DLSS, and use the remaining power to have, say, 2-3 times the detail in textures or geometry, or more shadow-casting lights, etc.
Here is our disagreement: to my eyes DLSS is unacceptably blurry (even in "Quality" mode). You can look at it and think it's okay, but that's only one opinion. You can spend one minute on Google and find quite contrary opinions. Your whole long comment is based on the idea that upscaling is an optimization; you're forgetting that upscaling is a tradeoff.
My main complaint about the DLSS "hype" marketing is exactly this: do not promise "incredible quality" when under the hood it's just pitiful upscaling. Some HW isn't good enough, some games aren't optimized enough; that's fair and okay, there are things to sacrifice and there are workarounds. Just don't lie.
Respectfully, I don't care about "your eyes"; all people are different and that's ok. Nor do I care -specifically- about DLSS; obviously technologies are always evolving, and it's not as if DLSS is the best this concept can ever be. Its implementation also varies among games.
What I meant to say is that we live at a time where rendering every single pixel every time is simply a waste of resources that could be better spent somewhere else.
And you're still saying it's "blurry" - that's not the point. Certainly temporal reprojection will -always- be blurrier than not using it. But you're not considering what you're -gaining- from that blur. The real question is: would it be better to have, say, a world with 1 million objects at 4K, a bit blurry, or a perfectly sharp image at 2K with 100k objects...
Temporal reprojection saves time that then can be invested in other things.
Lastly: CP2077 ALWAYS uses temporal reprojection. ALWAYS. If you disable DLSS it uses its own TAA instead. If you disable TAA (which cannot be done in the settings menu, but there are hacks to force it) it STILL USES temporal techniques for most of its rendering before the final image.
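For anyone wondering what "temporal" buys and costs here, the core of any temporal accumulation scheme is a per-pixel blend between the current sample and a history buffer reprojected via motion vectors; a generic sketch (not CDPR's implementation, which adds jitter, history clamping, disocclusion handling, etc.):

    #include <cstdio>

    // Most of each output pixel comes from accumulated history; that's what
    // smooths aliasing and noise, and also where the perceived blur comes from
    // when the history is stale or mis-reprojected.
    float taa_resolve(float current_sample, float reprojected_history, float blend = 0.1f) {
        return blend * current_sample + (1.0f - blend) * reprojected_history;
    }

    int main() {
        float history = 0.0f;
        // A stable white (1.0) signal converges toward 1.0 over a few frames.
        for (int frame = 0; frame < 8; ++frame) {
            history = taa_resolve(1.0f, history);
            std::printf("frame %d: %.3f\n", frame, history);
        }
        return 0;
    }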
I'm happy to see at least one person on HN can agree with this.
After re-reading your comments and mine, I've concluded that my real issue is the lack of settings in CP2077. I do respect the opinion of people who want to use TAA and DLSS; I'm just upset that I cannot pick something I like, can't decide what to sacrifice and what to prioritize.
Your article is quite interesting and I’m grateful for it. Please keep writing things like this.
To help you understand me better: I feel the urge to vomit when I remember how the TAA+DLSS image looks. I just can't force myself to use it, it's like torture. An FPS drop or aliasing are much smaller problems.
Disagree about DLSS being cheap marketing trash. Especially on the quality or balanced settings, it produces a significant framerate improvement in Cyberpunk without any noticeable quality loss.
I'm going to have to side with the other poster here as well. Enabling DLSS on auto significantly improved framerate without noticeable quality loss (to me). I'm on a 2070 Super running "high"-ish settings @ 1440p.
You probably won't be convinced by what anyone here is saying, so just watch this video from Gamers Nexus where they nitpick everything about DLSS, and their conclusion is that you should run it. https://www.youtube.com/watch?v=zUVhfD3jpFE
In some instances it adds detail, in other instances it removes detail.
https://1.bp.blogspot.com/-gO2R-e5TmHs/X9umb0DXKFI/AAAAAAAAC...