That's great, and for a project like yours (Voxel Quest), it'll definitely help.
I'm wondering, though, whether the demo also uses the CPU for other things: physics, audio, collision, path-finding or some other form of AI, state machines, game scripts, game logic. My point is that 10x might be possible (on a 10-core CPU) if the CPUs are used only for graphics, but there are other workloads that come into play... Even then, if only half the CPUs are used for graphics, it's still a win.
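To make that split concrete, here is a minimal sketch of what I mean; the job names (record_render_commands, update_physics, update_ai_and_scripts) are made up for illustration, and only part of the core count goes to recording graphics work because the rest of the frame still needs CPU time:

    #include <thread>
    #include <vector>

    // Hypothetical per-frame work; bodies omitted to keep the sketch short.
    void record_render_commands(int chunk) { /* build one chunk of render work */ }
    void update_physics()                  { /* step the simulation */ }
    void update_ai_and_scripts()           { /* AI, state machines, game script */ }

    void run_frame(unsigned render_threads)
    {
        std::vector<std::thread> workers;

        // Only a share of the cores records graphics commands...
        for (unsigned i = 0; i < render_threads; ++i)
            workers.emplace_back(record_render_commands, static_cast<int>(i));

        // ...because the rest of the frame still needs CPU time.
        workers.emplace_back(update_physics);
        workers.emplace_back(update_ai_and_scripts);

        for (auto& t : workers)
            t.join();
    }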
The bigger question to me is how game developers on the PC market (OSX/Linux included) would scale their games. You would need different assets (levels of detail? mip-mapped texture levels? meshes?), but tuning this to work flawlessly on many different configurations is hard...
Especially if there are applications still running behind your back. E.g., you've allocated all the CPUs for your job, only to have some of them taken by a background application: often a browser, a chat client, your bitcoin miner, or who knows what else.
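A rough sketch of the kind of headroom I mean; the reserved count here is an arbitrary guess, not a recommendation:

    #include <thread>

    // Size the worker pool from the detected core count, but leave a core
    // free for whatever else is running (browser, chat client, miner, ...).
    unsigned pick_worker_count()
    {
        unsigned cores = std::thread::hardware_concurrency(); // may report 0
        if (cores == 0)
            cores = 4;               // fallback guess when detection fails
        const unsigned reserved = 1; // headroom for background applications
        return cores > reserved ? cores - reserved : 1;
    }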
The bigger question to me is how game developers on the PC market (OSX/Linux included) would scale their games. You would need different assets (levels of detail? mip-mapped texture levels? meshes?), but tuning this to work flawlessly on many different configurations is hard...
This isn't really any different from how it has been until now. All AAA games have different levels of detail for meshes/textures/post-processing, etc. Even when not exposed to the user as options in a menu, these different levels of detail exist to speed up rendering of, for example, distant objects or shadows where less detail is needed. DX12/Vulkan is not going to change anything in that regard.
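For what it's worth, the scaling mechanism itself is old and simple. A minimal sketch of distance-based LOD selection (the thresholds and the three-level chain are made up for illustration):

    // Pick a cheaper mesh/texture set as an object gets farther from the
    // camera. Returns an index into the asset's LOD chain (0 = full detail).
    int select_lod(float distance_to_camera)
    {
        if (distance_to_camera < 25.0f)  return 0; // full-detail mesh, full mips
        if (distance_to_camera < 100.0f) return 1; // reduced mesh, capped mips
        return 2;                                  // far away: coarse proxy
    }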
Doing a good PC port is not as easy as it may seem at first glance. Different hardware setups and little control over the system create lots of concerns that simply don't exist on consoles, which means nobody bothered taking them into account when the game was originally built. These new APIs will help, though; the slow draw calls on PC are a pain compared to the lightning-fast APIs on consoles (even the Xbox 360/PS3!).
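Conceptually, the win the new APIs give you looks something like the sketch below; the function names are placeholders, not actual DX12/Vulkan calls (the real setup is far more involved). Each thread records its own command list, and everything is submitted in one go instead of thousands of expensive per-call trips through the driver:

    #include <thread>
    #include <vector>

    struct CommandList { /* recorded GPU commands, omitted */ };

    CommandList record_scene_chunk(int chunk) { return {}; } // hypothetical
    void submit_to_gpu(const std::vector<CommandList>&) {}   // hypothetical

    void render_frame(int thread_count)
    {
        std::vector<CommandList> lists(thread_count);
        std::vector<std::thread> threads;

        // Command recording is spread across cores instead of being
        // serialized behind a single driver thread.
        for (int i = 0; i < thread_count; ++i)
            threads.emplace_back([&lists, i] { lists[i] = record_scene_chunk(i); });
        for (auto& t : threads)
            t.join();

        // One submission at the end, rather than paying overhead per call.
        submit_to_gpu(lists);
    }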
The demo they were showing was as basic as it gets: drawing a whole lot of textured boxes (probably in individual draw calls) to the screen. From what I gathered, you could not send large jobs to the GPU without blocking smaller jobs (i.e., there is no thread priority), at least according to one engineer from NVIDIA. That is something I was hoping they might implement, as it would benefit applications like VQ that try to generate things while the game is running.
It's possible that these perf gains are actually achievable from a single thread, with the gains coming from eliminating the default driver overhead that would otherwise sit in those draw calls. To some extent this could be mitigated by batching, but it is still ideal to have the option to issue far more draw calls per frame.
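The batching mitigation usually boils down to something like this sketch; draw() and draw_instanced() are placeholders for whatever the actual API exposes, not real calls:

    #include <cstddef>

    struct Box { float transform[16]; };

    void draw(const Box&) {}                                    // hypothetical
    void draw_instanced(const Box* boxes, std::size_t count) {} // hypothetical

    void render_boxes(const Box* boxes, std::size_t count)
    {
        // Naive path: one driver round-trip per box, which is exactly the
        // per-draw-call overhead being discussed.
        // for (std::size_t i = 0; i < count; ++i) draw(boxes[i]);

        // Batched path: the per-call overhead is paid once for the whole set.
        draw_instanced(boxes, count);
    }

Batching like this helps, but it constrains how you organize state and materials, which is why cheap individual draw calls are still attractive.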