There's no need to burden yourself with making a system scale more than 10x further than it needs to, as long as you're confident that you'll be able to scale it as you grow.
It should be noted that the 10x quote comes from Jeff Dean. Not only does it make sense from a business point of view; it's also nearly impossible to design a system to scale 100x: you simply have no idea what the traffic patterns and bottlenecks will be at 100x the volume.
In other words, going from 60k uniques to 600k uniques may well be possible with the same architecture. Thinking you can go from 60k to 6,000,000 on the same architecture is hubris (as is planning for the 6,000,000 mark while you're at 60k).
It's also extremely annoying to see what started as people looking into the right tools for their jobs dissolve into a hype bandwagon (but I guess this happens with any technology, and I'll be a happier person if I ignore it :-)). First the hammer for all persistence was MySQL + an ORM (ignoring both other RDBMSs, e.g., Postgres, and approaches such as BerkeleyDB), then it was CouchDB, and now it seems to be Mongo or Cassandra. These are entirely different systems; each has a use case where it shines. Why are people all of a sudden demanding that others use a "NoSQL solution"?
That being said, MySQL isn't that hot in terms of reliability (of course, starting off with Oracle is a recipe for technical debt and vendor lock-in). There's no substitute for proper backups and administration. This is more of an argument for MySQL (or my personal favourite, Postgres) than against it: with MySQL, the backup and administration solutions are very well known.
Some of the points the article makes about distributed systems are also fairly confusing and incorrect. CouchDB and Mongo are (scalability-wise) essentially the same as MySQL: "potentially consistent" replication (no ability to perform quorum reads, no guarantee of eventual consistency, no version vectors) and not truly distributed (a 1000-node cluster consisting of 500 "shards" of two replicas each is really 500 separate clusters). MySQL already has very relaxed semantics (a master-slave replication setup makes absolutely _no_ guarantees that commits made to the master will be consistently readable from the slaves, or even persisted on a slave if the master crashes).
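To make the stale-read point concrete, here's a minimal sketch of that read-after-write race (Python with mysql-connector-python; the hostnames, credentials, and table are made up): the write commits on the master, but an immediate read from a slave may not see it.

    import mysql.connector

    # Hypothetical hosts and credentials; the point is the topology, not the config.
    master = mysql.connector.connect(host="db-master", user="app", password="secret", database="app")
    replica = mysql.connector.connect(host="db-slave", user="app", password="secret", database="app")

    # The write goes to the master and commits there.
    cur = master.cursor()
    cur.execute("INSERT INTO answers (question_id, body) VALUES (%s, %s)", (42, "hello"))
    master.commit()

    # The read goes to the slave. With asynchronous replication nothing
    # guarantees the row has been applied yet (or ever will be, if the
    # master crashes before the event is shipped to the slave).
    cur = replica.cursor()
    cur.execute("SELECT body FROM answers WHERE question_id = %s", (42,))
    row = cur.fetchone()  # may legitimately be None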
Dynamo- and BigTable-inspired systems are radically different. However, as with any other system under active development (which also includes RDBMSs such as VoltDB), when you use one of these systems as a primary datastore you're taking on a technical risk, unless you're ready to ask a committer to institute a special QA cycle and support for your use case (or you are a committer yourself, in which case you know what you're doing). That's a risk you don't need to take to persist data for 60k monthly uniques: I can do that just fine with any system. Depending on the usage patterns of the application, you can also do it for 60,000,000 monthly uniques while sticking with MySQL (though most other parts of the application, including the data access layer to MySQL, would have to be rewritten many times over).
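To illustrate how different the Dynamo-inspired systems are on this front, here's a rough sketch of a quorum read in Cassandra via the DataStax Python driver (contact points, keyspace, and table are made up): consistency is tunable per request instead of being an accident of replication lag.

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    # Hypothetical contact points and keyspace.
    cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    session = cluster.connect("app")

    # QUORUM requires a majority of the replicas for this row to answer,
    # so a read following a QUORUM write is guaranteed to observe it --
    # unlike the "potentially consistent" replication described above.
    stmt = SimpleStatement(
        "SELECT body FROM answers WHERE question_id = %s",
        consistency_level=ConsistencyLevel.QUORUM,
    )
    row = session.execute(stmt, (42,)).one()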
In the end, Adam's decision is probably the right one (I don't work for Quora, so I can't tell), but the technical reasons sound slightly ignorant. It's perfectly fine to be ignorant of these topics; it's not, however, okay to speak authoritatively on a topic you're ignorant of (e.g., I know nothing about machine learning, so I am not going to speculate on how Quora should be using machine learning to make their homepage more relevant -- which they probably are -- since I'd add nothing to the conversation).