I don't see why that would be particularly difficult to accomplish. A dataset made up of 3D assets would actually give an algorithm more information to work with. The question is whether it could generate a usable model rather than an Eldritch horror of disconnected triangles and non-manifold geometry.
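And "usable" is at least partly checkable automatically. A rough sketch of the kind of sanity check I mean, in plain NumPy; the face array being an (F, 3) integer array is my assumption about whatever the generator spits out:

```python
import numpy as np

def mesh_sanity_check(faces: np.ndarray) -> dict:
    """Flag non-manifold edges and disconnected triangle islands.

    faces: (F, 3) integer array of vertex indices per triangle.
    """
    # Build every triangle edge, sorted so (a, b) and (b, a) count as the same edge.
    edges = np.sort(faces[:, [0, 1, 1, 2, 2, 0]].reshape(-1, 2), axis=1)
    _, counts = np.unique(edges, axis=0, return_counts=True)

    # A manifold edge is shared by at most two faces.
    non_manifold_edges = int((counts > 2).sum())
    boundary_edges = int((counts == 1).sum())

    # Union-find over vertices to count disconnected pieces.
    parent = {v: v for v in np.unique(faces)}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b, c in faces:
        for u, v in ((a, b), (b, c)):
            parent[find(u)] = find(v)

    components = len({find(v) for v in parent})

    return {
        "non_manifold_edges": non_manifold_edges,
        "boundary_edges": boundary_edges,
        "connected_components": components,
    }
```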
The easiest win would probably be to have an algorithm pick from predetermined types of assets (heads, appendages, clothes, etc.), reshape them without actually adding new geometry, and then do essentially what the linked page does with skins and shaders.
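That "reshape without adding geometry" step is basically morph-target / shape-key blending over a fixed topology. A minimal sketch; the asset library and weights here are made-up placeholders, not any real pipeline:

```python
import numpy as np

def blend_shape_keys(base_verts: np.ndarray,
                     shape_keys: dict[str, np.ndarray],
                     weights: dict[str, float]) -> np.ndarray:
    """Deform a base mesh with weighted shape keys.

    The vertex count and face indices never change, so downstream
    UVs and skinning stay valid; only vertex positions move.
    """
    result = base_verts.copy()
    for name, weight in weights.items():
        result += weight * (shape_keys[name] - base_verts)
    return result

# Hypothetical usage: the generator only has to pick weights (and textures),
# not invent new triangles.
base = np.random.rand(5000, 3)                      # stand-in base head mesh
keys = {"jaw_wide": base + 0.1, "brow_heavy": base - 0.05}
new_head = blend_shape_keys(base, keys, {"jaw_wide": 0.7, "brow_heavy": 0.3})
```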
Sure, but a shape-key character creator paired with something that can instantly imagine original textures and apply different design styles to the existing base meshes would be extremely useful and time-saving. You wouldn't necessarily need as many riggers, or maybe any manual rigging at all, because a base character rig from, say, Cloudy With A Chance of Meatballs could immediately be used for a completely different character design in Book Of Life just by training the algorithm on hand-drawn sketches.
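The rig reuse falls out of the same trick: if the new character is just the old base mesh pushed around by shape keys, the per-vertex skin weights still line up one-to-one, so the existing skeleton drives it unchanged. A toy linear-blend-skinning sketch to show why; the bone transforms and weight matrix are illustrative, not from any actual studio rig:

```python
import numpy as np

def linear_blend_skin(verts, bone_mats, skin_weights):
    """Pose vertices with linear blend skinning.

    verts:        (V, 3) rest-pose vertex positions
    bone_mats:    (B, 4, 4) bone transforms for the current pose
    skin_weights: (V, B) per-vertex bone weights, rows summing to 1
    """
    verts_h = np.hstack([verts, np.ones((len(verts), 1))])          # homogeneous coords
    per_bone = np.einsum("bij,vj->bvi", bone_mats, verts_h)         # (B, V, 4)
    posed = np.einsum("vb,bvi->vi", skin_weights, per_bone)         # weighted blend
    return posed[:, :3]

# Because shape keys keep vertex order intact, the *same* skin_weights matrix
# works for the reshaped character -- only `verts` changes.
```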
I mean, that's about as close as we may get to the holonovels from Star Trek in our lifetime.
Say "computer, delete crowd and replace with cowboys" and it just does it using imagined designs consistent with the rest of the movie/game.