I'm not here to say the author is right. They're making a lot of assumptions, and as you can see in this thread, everyone is saying "it depends."
But one fact is that reducing redundancy in a database is a GOOD thing.
To make a comparison, it's like maintaining a large codebase where a constant (say, ERR_STATUS = 1) is used by functions spread across various folders.
I wouldn't want to have to define it in every single file. You'd want it in a header, in a library, or in a file that defines static values somewhere. Anywhere, as long as there's exactly one place that owns it.
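Something like this, just as a sketch (filenames made up):

    /* errcodes.h -- the one place ERR_STATUS is defined */
    #ifndef ERRCODES_H
    #define ERRCODES_H
    #define ERR_STATUS 1
    #endif

    /* handler.c -- every other file includes the header instead of redefining the value */
    #include "errcodes.h"

    int failed(int rc) {
        return rc == ERR_STATUS;  /* same value everywhere; change it in one spot */
    }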
A database is a bit like a dumping ground. Once the business gets big enough, a lot of other applications are going to plunge in and do things with the data. I wouldn't want inconsistent data. As a programmer, if you have inconsistencies in your source code, you can deal with them. But data is different: data is produced by users, by interactions, and so on. Once you start to have inconsistent data, you can't trust it anymore until you find out exactly what caused it. That's a lot of wasted time, and it leads to a lot of other problems.
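To make that concrete, here's a toy sketch (in C, all names made up) of what redundancy does to data: the same email is stored in two places, an update touches only one of them, and now you don't know which copy to trust.

    #include <stdio.h>
    #include <string.h>

    /* Two records that each carry their own copy of the same fact. */
    struct Customer { char email[64]; };
    struct Order    { char customer_email[64]; };  /* redundant copy */

    int main(void) {
        struct Customer c;
        struct Order o;
        strcpy(c.email, "old@example.com");
        strcpy(o.customer_email, c.email);

        /* One application "fixes" the address in one place only... */
        strcpy(c.email, "new@example.com");

        /* ...and the two copies now silently disagree. */
        printf("customer: %s / order: %s\n", c.email, o.customer_email);
        return 0;
    }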
But I agree with your point on performance. Still, by organising the data correctly, putting indexes on the things that matter, and joining in the right way, you're using business knowledge and programming knowledge to optimise the database. And that's not something that can be 100% automated (as of now).