Accurate N64 emulation gets resolution upscaling powered by Vulkan (RetroArch) (libretro.com)
280 points by libretro on June 14, 2020 | 71 comments



I'd love to see how it does in Perfect Dark a few seconds after that screenshot was taken.

There's a part in the hallway to the left where you fly a little drone into a laboratory to take a picture. It's one of the traditionally hard places to emulate for emulation strategies that abstract the graphics output from the memory pathway of the real system. That's because the effect that makes it look like a security camera, which these days would be a post-process fragment shader, is instead done on the main CPU, just reading and writing the framebuffer. So the GPU emulation actually needs to write out the framebuffer in the correct format/resolution. And then the emulator has to read the framebuffer back from memory, not short-circuit it straight out of the GPU, which is the core of how this design gets its benefits.
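A rough sketch of what that pathway forces on an upscaling emulator (all names here are illustrative, not paraLLEl-RDP's actual code): the upscaled GPU output has to be filtered back down into a native-format framebuffer in emulated RAM before the CPU-side effect can run on it.

```python
import numpy as np

NATIVE_W, NATIVE_H = 320, 240  # typical N64 output resolution
SCALE = 4                      # hypothetical internal upscale factor

def gpu_render(scale, seed=0):
    """Stand-in for the emulated RDP's output at an upscaled resolution."""
    rng = np.random.default_rng(seed)
    return rng.random((NATIVE_H * scale, NATIVE_W * scale))

def write_back_to_ram(hires):
    """Box-filter the upscaled frame down to the native framebuffer, so
    CPU code that reads emulated RDRAM sees a frame in the expected
    resolution/format instead of nothing at all."""
    s = hires.shape[0] // NATIVE_H
    return hires.reshape(NATIVE_H, s, NATIVE_W, s).mean(axis=(1, 3))

def cpu_camera_effect(fb):
    """Stand-in for the game's CPU-side effect: it reads and rewrites the
    native-resolution framebuffer in main memory."""
    return np.clip(fb * 0.5 + 0.25, 0.0, 1.0)

native_fb = write_back_to_ram(gpu_render(SCALE))
processed = cpu_camera_effect(native_fb)
```

The expensive part in practice is that round trip: the GPU-side renderer can no longer keep the frame to itself at high resolution, it has to produce (and later consume) the native-format copy the CPU expects.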


I would hug someone if I could have a playable Vigilante 8 experience. Explosions are black squares. The menu does weird things. Many other smaller bugs.

I played that game so much as a teen. It's possible the PS version is more playable but it would feel off playing the wrong version.


Have you tried the Dreamcast version?


Nope. How does it compare?


Similar to Pokémon Snap then.


And Mario Kart! Greets to all of my 64-heads! Man, I still remember the late '90s... not a care in the world, saving up all my money to buy the next cartridge.


The Mario Kart example is interesting, because you can achieve its effect by simply teeing off the GPU output both to RAM (scaled down to native) and directly to scan-out.

The neat thing about the Perfect Dark effect is that they read-modify-write the entire framebuffer, and it represents the full scanned-out frame, so in those cases you'd need to drop the scan-out resolution back down dynamically, whereas in the Mario Kart example you can keep the higher scan-out resolution.


I think pokemon snap does it all on the GPU side, but I'm not 100% on that.


I wish people would stop referring to increased internal resolution in emulators as “upscaling”. There is no upscaling/resampling/interpolation/filtering happening here, it's just rendering at a higher resolution than on the original console. Referring to it as “upscaling” makes it sound like post-processing applied to a low-resolution image, like on an upscaling DVD or Blu-ray player or NVidia's DLSS, which is not what is happening here.


This. I read upscaling and my interest dropped. If I increase the resolution the game renders at, it's not upscaling; it's rendering at a higher resolution.


If you happen to have an old N64 lying around, or can find one to buy, there are also great options for getting it working with a new TV.

The RAD2X is an easy way to get started: https://www.retrogamingcables.co.uk/RAD2X-CABLES In combination with refurbishing the original controllers with some new plastic (https://store.kitsch-bent.com/product/n64-joystick-gears) and maybe an EverDrive, you can enjoy the original experience without too much work.

Still, it can't increase the original resolution like this. This is just gorgeous... and so sharp.


One of the most important things those cables do is bypass the N64's built-in blur filter. This was basically unique to the N64 in that generation, and it really is just a blurring of the video output, like you've smeared vaseline on your screen.

Maybe it made sense on the already kinda distorted consumer TVs of the day as a kind of primitive anti-aliasing; I think it's horrible though.

https://www.youtube.com/watch?v=QDiHgKil8AQ


This is interesting! It sounds from the video like the cable isn't bypassing the filter, but instead applying a deconvolution filter to the final image to reverse the blur.

Deconvolution is super cool. You can use some deconvolution algorithms in Gimp using the G'MIC plugin. There are a few different ones in the Details section under Sharpening, for example Richardson-Lucy [1] or Gold-Meinel. You can play with blurring an image and then using the deconvolution to remove the blur - it's surprising how much of a Gaussian blur can be removed. I've used it in the past to remove blur from some deliberately blurred 'preview' images. Try the different algorithms as some produce much better results than others, but I forget which.

[1] https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconv...
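For the curious, Richardson-Lucy is only a few lines. Here is a minimal 1-D NumPy sketch (the textbook iteration, not G'MIC's implementation), assuming the blur kernel (PSF) is known:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """Textbook Richardson-Lucy: repeatedly re-blur the current estimate
    with the PSF, compare against the observed signal, and scale the
    estimate by the back-projected ratio."""
    estimate = np.full_like(observed, 0.5)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a spike with a known kernel, then undo most of the blur.
signal = np.zeros(21) + 0.01
signal[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(signal, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

The iteration preserves non-negativity, which is part of why it behaves so well on images; the catch, as with any deconvolution, is that you need a decent estimate of the PSF.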


The N64 really is blur central, partly because of the post-processing you point out, but also because the severe texture limitations (and the primitive kinda-bilinear-but-not-really filtering it used) meant that you had small-resolution textures stretched onto massive polygons. The fact that cartridge ROM space was also expensive didn't help, since few cartridges went above 32MB (I think the biggest official N64 ROMs use 64MB cartridges, but those were pretty rare). Meanwhile a CD could hold an order of magnitude more data at no additional cost.

As a result I subjectively find that, despite significantly weaker hardware, no perspective correction and no subpixel precision, PSX games often end up looking a lot more impressive. And that's got a lot to do with the incredible texture work these games use: https://www.mobygames.com/images/shots/l/243918-vagrant-stor...


> As a result I subjectively find that, despite significantly weaker hardware, no perspective correction and no subpixel precision, PSX games often end up looking a lot more impressive. And that's got a lot to do with the incredible texture work these games use: https://www.mobygames.com/images/shots/l/243918-vagrant-stor...

OTOH, Vagrant Story came very late in the PS1's life (the PS2 came out like a month later), that's from an in-engine cinematic scene, the texture swimming issues aren't really noticeable in a screenshot, and it's hard to say for sure, but that may be a screenshot from an emulator.

The real experience was a bit more like https://www.youtube.com/watch?v=5GVE4a8ULww&t=912 (screenshot is from around 15:13).

Still pushing the hardware to the max, but games in a similar part of the n64's lifecycle (especially the ones requiring the expansion pak, Majora's Mask, Banjo-Tooie, Perfect Dark) also look pretty remarkable for the time.


I'm pretty certain that first screenshot is from an emulator - it's too high res (and too clean) to be a capture from real hardware.


> no additional cost

The additional cost was an enormous step up in seek time and read time, which for some games manifested as load times everywhere, and for others meant a herculean effort in managing asset streaming, per the fascinating Andy Gavin talk that Ars published a few months ago:

https://www.youtube.com/watch?v=izxXGuVL21o


That's fair, but as long as you packed your assets correctly it was bearable for the time IMO. So much available storage meant that you could duplicate textures instead of seeking all over the disc for instance.

Also, streaming assets was rather uncommon at the time; Naughty Dog was really pushing the envelope there. It was especially uncommon because most games streamed the background music in real time straight from the CD, so if you wanted to side-load assets on demand you had to be very clever about it lest the audio get interrupted.

As a result you generally had long loading times at the start of levels, but that's about it. Some games were really bad about it, though, and had long loading times all over the place (some even when you did something as trivial as opening a menu), but that could generally be attributed to shoddy programming, not a weakness of the console per se. Overall I think in hindsight the decision to use a disc drive was the right one, and cartridges ended up being a rather severe liability for Nintendo at that time, although of course it's far from the only factor at play when comparing the successes of both consoles.


I never owned a PSX, but my impression is that a lot of games did that thing where the actual game content was a few dozen MB at most, and the rest of the disc was either empty or used for background music, FMVs, etc.


The RDP's texture memory (that the RSP had to dma a texture in to before it could be drawn) was 4kb. Yup, with a k.

There were a few texture formats you could use. 16-bit RGB was the lazy choice, but you could squeeze out more resolution if you used 4- or 8-bit greyscale (single channel) and put colors in the vertices. You could also use palettized textures, with either 8- or 4-bit lookups (e.g. 256 or 16 unique colors per texture). Unfortunately that split the texture memory in half: the palette was 2kb and the lookup was 2kb. If you were doing things right you spent a lot of time tweaking palettes by hand and writing code to best choose palette colors.

tl;dr: 48x48 16-bit RGBA, 48x48 palettized 256-color RGBA, 64x64 palettized 16-color RGBA, 96x96 4-bit intensity.


And if you enabled mipmapping, that practically cut it down to 2KB. 32x32x16bpp.


OMG, no. That's simply atrocious. It makes your beautiful N64 games look like PS1 ones! My guess in this debate has always been that those ugly sharp pixels must be an acquired taste for those on the side of the console war of the time who had to rationalize their preference for the look of games on their less capable platform of choice. ;)


I'm a fan of the result achieved with just the HDMI cable in that video - the first pass of processing that's removed by the GameShark hides the dithering, so removing this seems to make things look worse in a lot of cases (and as you point out makes it feel less like an N64 and more like a PS1 - just add some polygon jitter and you'd be all the way there). But the second layer of processing that the HDMI cable removes just adds more blur for no reason.

Edit: actually watched the whole video and realized the HDMI cable is actually doing post hoc image processing to reverse the blur, which is why the dithering is still wiped out but the blur is gone, which is pretty neat.


I'm not 100% sure, but I think the final output blur was there just to map the output buffer resolution to the hardware output. If you ran the N64 in 640x480 resolution (with AA enabled) it looked gorgeous. Unfortunately the RDP wasn't fast enough to update more than around 1/3 of the screen before you saw tearing in that mode - so it was really only practical on mostly static screens.

The standard output mode was 320x240, but developers realized you could reduce the buffer sizes and play with the screen borders to try to render fewer pixels per frame. Dropping resolution was a quick way to get the frame rate up, and when your target TV was an NTSC CRT it didn't seem so bad.

The antialiasing (which the GameShark can disable) was what stopped the nasty pixel crawl and jaggies that PlayStation games of that era suffered from. It was cutting-edge for the time - it used the 'extra' bit in the 9-bit Rambus RAM (which would have been for ECC in serious applications) to store coverage bits and blend edge pixels while maintaining crispness on interior edges.
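A toy model of that coverage-based edge blend (illustrative only; the real VI/blender pipeline has more stages, and the 3-bit coverage range is an assumption here):

```python
def edge_blend(pixel, behind, coverage, max_coverage=7):
    """Blend a partially covered edge pixel against what's behind it,
    weighted by the per-pixel coverage value. Fully covered interior
    pixels pass through untouched, so only silhouettes get softened."""
    a = coverage / max_coverage
    return tuple(a * p + (1 - a) * b for p, b in zip(pixel, behind))

red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
interior = edge_blend(red, blue, coverage=7)  # stays fully red
edge = edge_blend(red, blue, coverage=3)      # partially blended toward blue
```

That "only silhouettes" property is the key difference from a full-screen blur: interior detail stays crisp.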


N64 consoles are quite rare and expensive these days AFAIK


I ordered a refurbished one from a specialist shop in my country for Christmas; they go for €80 here including adapter, AV cables and SCART converter, delivered neat, clean, well packaged, and working just fine. I see similar prices on eBay. According to Wikipedia, a total of about 33 million have been sold worldwide - ranking "only" #17 in the list of most-sold consoles, but still a respectable amount. That said, if you're interested in getting an N64 at some point but don't yet want to play it or whatever, get a good refurbished one now; they will only go up in price.


They hardly ever sold them here in South Africa, so I guess that explains my experience.


I guess it depends on your definition of expensive. A brief look at eBay shows plain ol' gray N64s selling for around $50-75 several times a day, usually including cables and a controller. Some games are a bit pricier, but there are good flash carts available.


Can someone explain the advantage of increasing the native resolution vs upscaling after rendering?

For something like a photograph, you want as high a 'native resolution' as possible, so that there is as much information as possible in the original image. But for old video games, the assets and textures are still the same. Is this a bit like taking a high megapixel photo of a low quality print? Or is my understanding wrong?

I'm sure things like AA work better in the native renderer. Are there other advantages?


The N64's "GPU" is the Reality Display Processor (RDP) chip which is specialized hardware built by Silicon Graphics specifically for the N64 console. The RDP runs a custom program that is part of each game that tells it how to take high-level commands from the game code and render them. For this accurate N64 emulation, it is being emulated at a lower level than before.

Previously, N64 emulators just used the commands sent to the RDP to tell the host system's GPU what to do via something higher-level like OpenGL or DirectX (of course, this meant a lot of game-specific "hacks" in the emulator), rather than emulating the RDP itself and sending lower-level commands directly to the host GPU with something like Vulkan. This is so-called high-level emulation (HLE), and it's a massive shortcut compared to emulating the whole RDP - which is why the N64 could even be "emulated" on a PC from 1999.

Lower-level emulation of the RDP itself has recently been made possible, and now it can also be "up-scaled" to arbitrary resolutions -- instead of just sticking to high-level emulation and telling OpenGL or DirectX to render at a larger resolution -- or, even worse, scaling the rasterized output frames by treating them as images.

In practice, Mario 64 was just telling the RDP to render triangles anyway, so it mapped nicely to OpenGL in the HLE case, but for more "accurate" emulation (this is like getting from 95% to 99%), the RDP itself needs to be emulated as well, for things like the Perfect Dark drone camera mentioned elsewhere.
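A deliberately cartoonish sketch of the two approaches (none of this is real emulator code; the "FILL_RECT" command and the host API are made up for illustration):

```python
class HostGPU:
    """Stand-in for a host graphics API like OpenGL/DirectX/Vulkan."""
    def __init__(self):
        self.calls = []
    def fill_rect(self, rect):
        self.calls.append(("fill_rect", rect))

def hle_execute(display_list, host):
    """HLE: recognize high-level commands and forward them to the host
    API, letting the host GPU rasterize however it likes. Fast, but the
    emulated framebuffer never sees the real RDP's pixels."""
    for cmd, args in display_list:
        if cmd == "FILL_RECT":
            host.fill_rect(args)

def lle_execute(display_list, fb):
    """LLE: compute what the RDP itself would write, pixel by pixel, so
    CPU readbacks and odd framebuffer effects come out right."""
    for cmd, args in display_list:
        if cmd == "FILL_RECT":
            x0, y0, x1, y1 = args
            for y in range(y0, y1):
                for x in range(x0, x1):
                    fb[y][x] = 1

display_list = [("FILL_RECT", (1, 1, 3, 3))]
host = HostGPU()
hle_execute(display_list, host)

fb = [[0] * 4 for _ in range(4)]
lle_execute(display_list, fb)
```

The trade-off falls out of the sketch: in the HLE path the emulated RAM holds nothing, which is exactly why framebuffer-reading effects break, while the LLE path pays per-pixel cost to keep memory accurate.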


To be pedantic, the RDP was the rasterizer (fixed function), which was fed by the RSP processor. The RSP ran the display-list commands from the CPU/memory (very similar to a Vulkan command list) and did vertex transforms/clipping before feeding its output to the RDP.

Most games used one of a few Nintendo-provided RSP programs, although later in the machine's lifetime they opened up the RSP compiler and tools to developers.


Yes, I was not going to go into that much detail :)


For 2D stuff you'd be correct - but with 3D you can scale the polygons infinitely, just like vector art. This works especially well on the N64 because it has a lot of games that make heavy use of flat shading (see Super Mario 64; Perfect Dark isn't one of these - it uses a lot of pretty low-res textures).

I'm a (sort of) purist who has an emulation (hence the sort of) PC hooked up to a 240p CRT TV for older games, but N64 running at higher res does look pretty nice in some games, and anything looks better than native 240p output with blurry bilinear upscaling on an LCD.


"240p" is not a meaningful statement when referring to CRTs. CRT phosphors are not pixels.

For more on that: https://www.youtube.com/watch?v=Ea6tw-gulnQ


The 'p' in '240p' doesn't stand for pixels, it stands for progressive scan. Also 240p/480i/480p are the standard accepted terms for these low resolution video signals[0], nitpicking technical details as a 'gotcha' when people use standard terminology isn't helpful.

[0] https://en.wikipedia.org/wiki/Low-definition_television


Phosphors may not be pixels, but 240p doesn't say anything about pixels. The number tells us how many lines, and the p tells us that each screenful of lines covers the whole picture (the p is for progressive, vs i for interlaced). The whole phrase 240p CRT TV tells us it's a normalish NTSC tv, not a hi-res tv with fancier electronics to work with digital tv and which would likely have more processing delays.


Interestingly the '240p' signal sent out by video game consoles of that era is really a hack, as 240p wasn't a standard signal supported by TVs of the time.

It's actually a 480i signal with the timing fiddled with so that the alternate lines still strike the same part of the screen (this is why games from that era had such noticeable scanlines - the CRT beam is only lighting up alternate horizontal lines).

This also means that a lot of more modern TVs (and even some upscalers marketed for retro gaming) do an extra terrible job of upscaling 240p signals because they run the same logic that they would if it was normal 480i, resulting in unnecessary flickering or dropped frames.


Wait, what kind of analog TV signal allows for that kind of control? Are you sure that they didn't just scan out the same framebuffer twice?


The Analog TV doesn't have a framebuffer, and neither do most consoles, until you get into the 3d era.

My understanding is that the timing of the vblank signalling that comes between fields determines whether the next field is an even field or an odd field. If the vblank signalling comes in the middle of the last scanline, the next field is an even field; if the vblank comes aligned with the end of the last scanline, the next field is an odd field.

If you always start vblank signalling in the middle of a scanline, you get all even fields, if you always start vblank signalling at the end of a scanline, you get all odd fields.
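A toy model of that description (names and numbers are illustrative; the vblank start position is expressed as a fraction of a scanline):

```python
def next_field_even(vblank_start_in_line):
    """Per the description above: vblank beginning mid-scanline signals an
    even field next; vblank aligned with the end of a line signals odd."""
    return vblank_start_in_line < 1.0  # mid-line (e.g. 0.5) vs end-of-line (1.0)

# "240p" hack: the console always times vblank the same way, so every
# field has the same parity and repaints the same set of scanlines.
progressive = [next_field_even(0.5) for _ in range(4)]

# True 480i: the timing alternates, so even and odd fields interleave.
interlaced = [next_field_even(0.5 if i % 2 == 0 else 1.0) for i in range(4)]
```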


Essentially all pre-HD analog TVs allow for that kind of control.

https://www.hdretrovision.com/240p


There are also two halves to this: sure, the TV itself might not be made up of clean square pixels like an LCD, but the source image being sent to it absolutely does have a discrete horizontal and vertical resolution in square/rectangular pixels.


I always thought of the intersection of NTSC and computer monitors as being 320x200, not 320x240. The latter is more like quarter VGA or something.


Often times, the native resolution of the console isn't high enough to show the full detail of the textures. This is most strikingly apparent on Super Nintendo Mode 7 graphics: https://arstechnica.com/gaming/2019/04/hd-emulation-mod-make...

PlayStation 1 games suffer from a similar problem, and rendering at higher resolution in an emulator also often reveals a surprisingly high polygon count for eg. character models that look like badly-drawn sprites at native resolution.

> I'm sure things like AA work better in the native renderer.

Rendering at higher resolution is the highest-quality form of AA possible.
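That claim is easy to demonstrate with a toy renderer: supersampling (render at N times the target resolution, then box-filter down) turns hard staircase edges into fractional-coverage pixels. Everything below is a made-up example, not emulator code:

```python
import numpy as np

def scene(x, y):
    """Toy scene in unit coordinates: a diagonal half-plane edge."""
    return 1.0 if x + y < 1.0 else 0.0

def render(width, height, scale=1):
    """Sample the scene at `scale` times the target resolution, then
    average each scale x scale block: brute-force supersampling AA."""
    w, h = width * scale, height * scale
    img = np.array([[scene((i + 0.5) / w, (j + 0.5) / h)
                     for i in range(w)] for j in range(h)])
    return img.reshape(height, scale, width, scale).mean(axis=(1, 3))

aliased = render(8, 8)            # every pixel is a hard 0 or 1
smoothed = render(8, 8, scale=4)  # edge pixels take in-between values
```

Displaying the higher-resolution render directly, instead of filtering it down, is the same idea taken one step further: you keep all of the extra samples.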


There are some projects that swap out textures for higher res versions.

But I think the bigger advantage is for distant objects - if you zoom in on the GoldenEye screenshot, you can see the face texture of the guy at the other end of the hall. If the regular native resolution had been used and upscaled afterwards, you wouldn't see the face at all, just a face-coloured blur.


With ML the textures can be upscaled too, though no one has attempted to do this as an emulator plugin yet, as far as I know.

Image superscaling is a well-known problem with real-world solutions, so it's only a matter of time and interest.

ML will draw in details that never existed to begin with. It's quite amazing, but most versions of image superscaling that exist today are trained on drawn art, like from DeviantArt, so the ML makes the images look subtly pastel. It's great if you want a 4K desktop background.

Heck, I could make one if I cared enough. A friend of mine makes a popular emulator. Maybe she'd appreciate the functionality.


The difference is especially noticeable on distant objects in games where you need to see ahead: e.g. in Gran Turismo you can't tell which way chevron turn signs are pointing unless you have higher internal resolution.

(Not suggesting that one should drive by signs in GT, but they're still fine as a cue.)


Generally, details will still be better the closer to native display resolution you can render, rather than rendering at native source resolution and then upscaling.

Even if the textures are blurry (they can be swapped too), the polygon details will be easier to see.


>the [...] textures are still the same

I wonder if someone has already thought of using one of the available pixel art upscalers to improve the texture resolutions. If not, it's probably only a matter of time.


Yes, they have. You see this on a lot of Super Mario 64 videos.

I personally think it looks awful.


Rendering at a higher resolution makes a giant difference, even for low poly games. Old games were not only low res, but didn't have a lot of antialiasing either. On an emulator you can have resolution, antialiasing and better texture filtering.


I don't know the details of this project, but I have seen emulators use processes more complex than a basic image upscale to get really nice results out of low-resolution textures. I think it's similar to those anime upscaling tools where a model is trained on a dataset of textures and is able to redraw them at higher resolutions.


This emulator prioritizes accuracy, not fancy graphical options; that's why it lacks these kinds of features.


Sadly, those techniques aren’t nearly fast enough to work for emulation.


You'd be surprised at what image processing algorithms are fast enough to work in realtime when they have optimised GPU implementations. Anime4K is one of those CNN-based anime upscaling tools, and it can run in realtime as a shader in mpv. (I'm not a fan of those upscaling tools myself, but I don't see why they couldn't be used in emulators, for the people who like them.)


Could they not run once on all of the textures and then save them to a file for use in game?


Are there examples of before/after shots? My memory of these games is a little fuzzy and tinged with nostalgia. The images feel crisper than I remember the games being, but it would be great to see a side by side example.


HLE has had increased internal resolution for many years now. What's the advantage of implementing it in an LLE fashion?


Higher accuracy, mostly. The closer you get to the behavior of the real hardware the less game specific hacks you have to have to make everything look correct.


Unfortunately, doesn't look like there's a Mac version in the works. It's awesome that people are still working on this kind of stuff though. I somewhat wish there were equivalent screenshots of the games as seen with the original resolution for comparative purposes.


RetroArch and Vulkan based projects in general are supported on Macs, since RetroArch has integrated MoltenVK.

For this particular project, it's up in the air whether it actually works. Parallel RDP is not a "standard Vulkan game", and if I understand correctly behaves more like a compute shader program written in Vulkan. As a result, it requires the presence of certain more niche Vulkan extensions. MoltenVK by its nature is not as "feature rich" a Vulkan driver as bare metal Vulkan drivers are, so it might be missing extensions required for Parallel to work. In Parallel RDP's first iteration, it required a Vulkan extension that allows GPUs to use system memory, which was only present in certain Windows/Linux GPU drivers but not in MoltenVK. There's already a workaround for this with a minor performance impact.

Parallel RDP now seems to work on a few mobile GPU Vulkan implementations [1], which is encouraging for MoltenVK, as those drivers also tend to be lower quality and have less coverage. In fact, maybe Parallel RDP already works on MoltenVK and just requires some testing [2].

[1]: https://www.libretro.com/index.php/parallel-n64-rdp-android-...

[2]: Worth noting it has only been released for Windows and Linux so this would require some building yourself: https://www.libretro.com/index.php/parallel-rdp-rewritten-fr...


Request your OS developer to support the modern graphics API that the rest of the industry already does, instead of lodging their heads firmly up their own asses to justify re-inventing this wheel.


OSX does support Vulkan.


You mean MoltenVK? It's not an official Apple project and it lags pretty far behind Vulkan on other platforms.



edit: I was wrong. My apologies.


That's because there is currently no Vulkan implementation in macOS. The closest thing is MoltenVK, and I'm not sure how well it performs.


I think there’s a Dolphin branch using it that does reasonably well.


The effect is kind of strange. Some things get upscaled and others don't. The icons retain their original jaggedness. Low-detail background objects are not sharpened. Only the main character seems to get a full resolution upgrade. Does this use some anime character specific algorithm?


The textures are unchanged, so texture details will stay blurry. However, Mario mostly has flat textures with detail given by Gouraud shading, which is more amenable to upscaling.


There's been a Super Mario 64 texture pack [0] for years now. Presumably it would be possible, but not trivial, to combine it with this new project.

[0] https://youtu.be/a1vnSHMjuuA?t=72


Star Wars Rogue Squadron runs choppy in the previous emulators. I wonder if this is the fix for it?



