It's that old consoles (up until the Sega Saturn, Nintendo 64, and Sony PlayStation era) didn't draw to frame buffers. They didn't have enough RAM for a real frame buffer.
You configured the graphics chip with what you wanted it to draw, and as the scanline scanned across the screen, it output whatever color was appropriate given the background/sprite/palette settings. You could change where a sprite was while the beam was halfway across a scanline, and it might immediately, in the middle of drawing that scanline, output a different color. There were no delays anywhere in the system, only the propagation delay of electricity through wires and transistors. The various HW registers that controlled what the sprites were and where they sat were connected via a small pile of non-clocked digital logic gates to a DAC on the analog output pins of the graphics chip, which was connected to the electron guns directly if you used component video, or via a simple, no-latency analog circuit if you had composite video.
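Here's a minimal sketch of what that style of programming looked like. The register names and addresses (SCANLINE, SPRITE0_X) are hypothetical stand-ins, not any real console's memory map:

    #include <stdint.h>

    /* Hypothetical memory-mapped video registers -- addresses and names
       are illustrative, not from any real console's memory map. */
    #define SCANLINE   (*(volatile uint8_t *)0x4000)  /* current raster line, read-only */
    #define SPRITE0_X  (*(volatile uint8_t *)0x4001)  /* sprite 0 horizontal position */

    /* Move sprite 0 partway down the frame: because the chip reads its
       registers live as the beam scans, the write takes effect on the
       very next pixel drawn -- no frame buffer, no frame of delay. */
    void raster_split(void)
    {
        while (SCANLINE != 100) { }   /* busy-wait ("race the beam") until line 100 */
        SPRITE0_X = 80;               /* visible immediately, even mid-frame */
    }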
These days, you program the graphics card to draw what you want it to draw. Once you've submitted everything for it to draw, it renders that into the backbuffer. Once the current frame has been sent to the display, the backbuffer is swapped with the frontbuffer. When the next frame begins to be sent to the display, your changes finally go out across the wire. Depending on the display (motion-smoothing TV screens are especially bad here) there may be even more delay. Only then are your changes finally visible on the screen.
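A minimal sketch of that modern pipeline, using GLFW/OpenGL (assumes GLFW is installed; the vsync'd swap is where the extra latency lives):

    #include <GLFW/glfw3.h>

    int main(void)
    {
        if (!glfwInit())
            return 1;

        GLFWwindow *win = glfwCreateWindow(640, 480, "double buffered", NULL, NULL);
        if (!win) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(win);
        glfwSwapInterval(1);            /* vsync: swaps wait for the display's refresh */

        while (!glfwWindowShouldClose(win)) {
            /* 1. Draw into the backbuffer -- nothing is visible yet. */
            glClearColor(0.2f, 0.3f, 0.8f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);

            /* 2. Swap: the backbuffer becomes the frontbuffer, and only on the
               next scanout does this frame start travelling down the wire. */
            glfwSwapBuffers(win);
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }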
It's kinda weird. Old video game consoles (up until the SNES and Genesis) had extremely low latency. And that's been gone for 25 years. Not only is it gone, but it will likely be gone forever; we don't even make the technology anymore to show the new generation what it was like. On the one hand, the new technology is "better": there's no way to do today's advanced graphics without a deep drawing pipeline that outputs to a frame buffer. But it's also somehow worse. We can make it less bad with technology like 144Hz displays and FreeSync, but the old era is gone.
I started on the Apple II and it's been downhill since then. If I remember correctly, the Apple II has one of the lowest latencies between pressing a key and seeing the result on the screen.
While this is true of LCDs, OLEDs have basically no draw latency, so conceivably they could update directly as the frame buffer was populating, like a CRT, if the GPU driver and OLED driver allowed it. This would still produce tearing for a modern game, so it would likely still need to buffer and draw whole frames at a time, which could get you down to only one frame of latency. Most older games only updated sprite positions once per frame anyway (mid-frame scanline tricks were mostly used for raster effects), so there was already a frame of latency on input updates.
The N64 had a unified memory architecture, so the frame buffer was just a region of memory within the unified system RAM that was drawn into. The Z buffer worked the same way (assuming the programmer chose to enable the Z buffer). It was still a frame buffer, though.
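As a rough illustration of what "just a region of memory" means in practice, here's a sketch of plotting a pixel into an N64-style 16-bit RGBA5551 frame buffer. The buffer address and dimensions here are hypothetical; real code would use the pointer handed out by the boot/OS setup rather than hardcoding one:

    #include <stdint.h>

    #define FB_WIDTH  320
    #define FB_HEIGHT 240

    /* Hypothetical frame buffer pointer -- on real hardware you'd take the
       address provided by the OS setup, typically through the uncached KSEG1
       segment so CPU writes bypass the cache. */
    static volatile uint16_t *framebuffer = (volatile uint16_t *)0xA0100000;

    /* Pack 5-bit R, G, B plus a 1-bit alpha into the RGBA5551 format. */
    static uint16_t rgba5551(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint16_t)(((r >> 3) << 11) | ((g >> 3) << 6) | ((b >> 3) << 1) | 1);
    }

    /* The frame buffer is ordinary RAM: plotting a pixel is just an array write. */
    static void put_pixel(int x, int y, uint16_t color)
    {
        framebuffer[y * FB_WIDTH + x] = color;
    }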