Checking the schema and upgrade scripts into source control is extremely important, and that is how I have managed it in the past (with names like table.alter.12.sql). I regret not setting up this kind of versioning table from the beginning, though: when you have multiple environments (dev, QA, staging, etc.) it gets very confusing in a hurry as to which schema changes have been applied to which databases. With a versioning table, and maybe even some deploy-time scripts to sanity-check versions between the release and the DB, mismatches become immediately apparent.
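A minimal sketch of that deploy-time sanity check, using sqlite3 in memory. The `schema_version` table and column names here are illustrative, not from any particular tool:

```python
import sqlite3

EXPECTED_SCHEMA_VERSION = 12  # the version this release was built against (illustrative)

def check_schema_version(conn, expected):
    """Fail fast at deploy time if the DB schema doesn't match the release."""
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    actual = row[0]
    if actual != expected:
        raise RuntimeError(
            f"Schema mismatch: database is at version {actual}, "
            f"release expects {expected}"
        )

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema_version (version INTEGER, applied_at TEXT)")
conn.execute("INSERT INTO schema_version VALUES (12, '2011-06-01')")
check_schema_version(conn, EXPECTED_SCHEMA_VERSION)  # passes silently at 12
```

Running this check as the first step of every deployment turns a silent mismatch into an immediate, loud failure.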
The team behind the iBatis OSS project (now known as MyBatis) has a good tool named "migrations" which does exactly this. The original OSS project is a SQL mapping library for .NET and Java, but the tool itself is independent of platform/language/OS.
Each change is stored as a script with a name like "<timestamp>_<brief description>.sql", and the timestamps also serve as IDs in the "changelog" table in the DB. Creating the changelog table is always the first change.
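The pending-change computation that follows from this naming scheme is easy to sketch: compare the timestamp IDs parsed from the script filenames against the IDs already recorded in the changelog table. The filenames and the ID format below are illustrative, not the tool's exact layout:

```python
def pending_migrations(script_names, applied_ids):
    """Return scripts not yet applied, in timestamp (ID) order.

    Names follow the '<timestamp>_<description>.sql' convention, so
    lexicographic order of the timestamp prefix is application order.
    """
    def script_id(name):
        return name.split("_", 1)[0]  # the leading timestamp is the ID
    return sorted(
        (s for s in script_names if script_id(s) not in applied_ids),
        key=script_id,
    )

scripts = [
    "20110601120000_create_changelog.sql",
    "20110602093000_create_users.sql",
    "20110605110000_add_email_index.sql",
]
applied = {"20110601120000", "20110602093000"}
print(pending_migrations(scripts, applied))
# ['20110605110000_add_email_index.sql']
```

This is why the timestamps double nicely as IDs: sorting and set-difference are all the bookkeeping you need.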
The "migrate" command of the tool then has options like "status", "up", "down", etc., and features to create scripts to go from v1 -> v2.
Another nice concept they add to the mix is an "undo" section in each script, so that you can roll back changes and downgrade schemas just as easily as you upgrade them.
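The idea is that a single script carries both directions, separated by a marker comment. A minimal sketch of splitting such a script, assuming a `-- //@UNDO` delimiter line (the marker string and script contents here are illustrative):

```python
def split_migration(script_text, undo_marker="-- //@UNDO"):
    """Split a migration script into its apply (up) and undo (down) halves.

    The undo marker is assumed to sit on its own line; everything after
    it is what gets run when downgrading.
    """
    lines = script_text.splitlines()
    for i, line in enumerate(lines):
        if line.strip() == undo_marker:
            up = "\n".join(lines[:i]).strip()
            down = "\n".join(lines[i + 1:]).strip()
            return up, down
    return script_text.strip(), ""  # no undo section present

script = """\
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);

-- //@UNDO
DROP TABLE users;
"""
up, down = split_migration(script)
print(down)
# DROP TABLE users;
```

Keeping the up and down halves in one file makes it hard for them to drift apart, which is exactly what you want when a rollback happens under pressure.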
Yes, this is what we usually did at a previous job. The only benefit of storing the version in the database is that the code can check the schema version if that's a necessary component. I could see some value there.
Nowadays I'm primarily working with non-relational datastores, where this seems to be much less of an issue. There we version the entities we store instead of a schema, and that is done entirely in source control as well.
And resist the temptation to hack the schema directly on your servers; it ends up being a larger effort :)