Were normal maps new in 2007? I feel I learned about normal maps around 2005, as a high schooler just hobbying around in Maya. Did they maybe get a head start in PC games, or in raytracing? Maybe my memory is just mixing things up. Mario Galaxy did feel like a state-of-the-art game when it came out, although I wouldn't associate the cartoony style with normal maps; maybe that's part of why it doesn't have them.
Normal maps were not new in 2007. I wondered if the Wii might have lacked support, but apparently the PS2 was the last home console to lack normal mapping hardware (you had to use the vector units to simulate them), and the Dreamcast was the first to have it though games made little use of it (that bloomed with the Xbox).
It’s still possible EAD Tokyo didn’t put much stock in the technique though, or that their workflow was not set up for that at this point, or that hardware limitations otherwise made rendering coins using normal mapping worse than fully modelling them.
There are tests of normal map assets found in Super Mario Galaxy, but they're unused. The GameCube "supported" limited normal mapping through its indirect unit, assuming the lighting response is baked. The BLMAP tech seen in Skyward Sword can generate lighting response textures at runtime and could support normal mapping, but I don't think the studio used it outside of a few special effects.
The bumpmapping hardware most wikis mention as "supporting normal mapping" is pretty much irrelevant.
The first time I saw them used in gaming was when John Carmack showed off Doom 3 running on Geforce 3 at a Macworld keynote in 2001. It was very impressive and felt like a huge leap into the future.
The Matrox G400 (also 1999) had dot3 bump mapping as well. IIRC, they advertised it as a first in the PC 3D accelerator space, so I'd guess they shipped earlier in '99.
The G400 was generally lackluster in 3D performance, though, so the feature didn't really get used by games.
Super Mario Galaxy ran on the Wii, which had a GPU that is very different from modern GPUs (and very different from even the GPUs of other consoles in that generation i.e. Xbox 360 and PS3). I'm not sure whether using normal maps was feasible/efficient on it.
Quite often it’s not even a question of the GPU itself but of the development pipeline and the tooling used, as well as, of course, whether the engine itself supports it.
Also, at the end of the day your GPU has a finite amount of compute resources, many of which are shared across various stages of the rendering pipeline, even back in the 6th and 7th console generations when fixed-function units were far more common.
In fact, in many cases even back then, if you used a more traditional “fixed function pipeline”, the driver would still convert things into shaders; e.g. hardware T&L even on the Wii was probably done by a vertex shader.
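To make concrete what “hardware T&L as a vertex shader” amounts to: per vertex, the fixed-function stage transforms the position by a combined model-view-projection matrix and evaluates a simple per-vertex diffuse light. A minimal sketch, with an illustrative matrix layout and light model (not taken from any console SDK):

```python
# Sketch of fixed-function "hardware T&L" expressed as a per-vertex program:
# project the position and evaluate a simple diffuse term. The function
# names and the 4x4 row-major matrix layout here are illustrative.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major, nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transform_and_light(mvp, position, normal, light_dir, base_color):
    """One vertex's worth of T&L: clip-space position + diffuse-shaded color."""
    clip_pos = mat_vec(mvp, position)
    diffuse = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    color = [c * diffuse for c in base_color]
    return clip_pos, color

# Identity MVP, light aligned with the normal: the vertex passes through
# unchanged and keeps its full base color.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
pos, col = transform_and_light(identity, [1.0, 2.0, 3.0, 1.0],
                               [0.0, 0.0, 1.0], [0.0, 0.0, 1.0],
                               [0.9, 0.5, 0.2])
print(pos)  # [1.0, 2.0, 3.0, 1.0]
print(col)  # [0.9, 0.5, 0.2]
```

A driver translating fixed-function state into a shader is essentially generating a small program like this from the current matrix and lighting settings.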
Many teams, especially in companies that focus far more on gameplay than visuals, opted for simpler pipelines with fewer variables to manage.
It’s much easier to manage the frame budget if you only need to care about how many polygons and textures you use: if performance is too low, you can reduce the complexity of models or the number/size of textures.
Shaders and more advanced techniques require far more fine-grained optimization, especially once you get into deferred or multi-pass rendering.
So this isn’t about hardware; it’s that the team behind it either didn’t want or didn’t need to leverage normal mapping to achieve their design goals.
According to the wiki, the PS2 was the only console of the 6th gen to lack hardware normal mapping (and that’s including the Dreamcast), and all the 7th gen supported it.
The Wii's GPU did indeed have a fixed function pipeline, but that fixed function pipeline did support normal mapping. ("dot3 bump mapping" as was the style at the time.) However, the Wii's GPU only had 1MB of texture cache. So it was generally preferable to do more work with geometry than with textures.
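For context on what that fixed-function "dot3 bump mapping" stage actually computes: per pixel, it takes a clamped dot product between a tangent-space normal decoded from an RGB texel and a light vector, which gives the diffuse term. A minimal sketch, assuming the common 8-bit encoding where 128/128/255 is a "flat" normal (names here are illustrative, not from any console SDK):

```python
# Sketch of the dot3 combiner operation: per-pixel diffuse = clamp(N . L),
# with the tangent-space normal decoded from an 8-bit RGB normal-map texel.
# The decoding convention (c / 127.5 - 1) is the usual one, but is an
# assumption here, not taken from actual Wii/GameCube hardware docs.

def decode_normal(rgb):
    """Map an 8-bit RGB texel (0..255 per channel) to a [-1, 1] vector."""
    return tuple(c / 127.5 - 1.0 for c in rgb)

def dot3(n, l):
    """Clamped dot product, as a DOT3 combiner stage produces."""
    return max(0.0, sum(a * b for a, b in zip(n, l)))

light = (0.0, 0.0, 1.0)           # light along the surface normal
flat = decode_normal((128, 128, 255))    # texel pointing straight out
tilted = decode_normal((200, 128, 180))  # texel tilted toward +x

print(dot3(flat, light))    # ~1.0: fully lit
print(dot3(tilted, light))  # < 1.0: tilted texel catches less light
```

The appeal for fixed-function hardware is that this is one texture fetch plus one dot product per pixel, with no programmable shader needed; the cost is that extra normal-map texture competing for the Wii's small texture cache.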
Some exec from Nintendo is on record as saying something along the lines that texture mapping looks bad and should be avoided. My conspiracy theory is that it was a conscious decision at Nintendo to limit the ability of its consoles to do stuff with textures. The N64 was even worse, a lot worse: it only had 4kB of texture cache.
I don't think they were. Maybe they were new in gaming, but for the 3D/animation scene they were not new at that point. I remember using them earlier than that, and also using bump and displacement maps, which are related concepts.
And on the other hand, Unreal 2 from 2003 was not using them IIRC, and I felt back then that it was a missed opportunity that immediately made it look a bit dated next to the other cool-looking stuff coming out around then.
I remember The Chronicles of Riddick: Escape from Butcher Bay (2004) used them quite a lot. I remember the effect was so obvious that I noticed it despite not knowing what it was or what it was called. Good looking game.