NoSQL is What? (zawodny.com)
137 points by timf on July 24, 2011 | 60 comments



"threw up in my mouth a little." "Gee, let me get this straight." "Bullshit." "Seriously?"

I have to say that one of my regrets about growing up in programmer circles is seeing stuff like this held up as an acceptable example of how adults communicate with other adults.

It took me a long time to realize that this style of communication is not necessary, is not effective, and reflects poorly on the speaker. C'mon, this guy appears to be in his 30s or 40s and has written books, so why does he write like he's an angsty teen? (I know Linus does it. I think it's lame when he does it too.)

There's still room for humor and snark; here are three of my favorite blog postings/articles ever, all very snarky but not embarrassingly juvenile:

http://wanderingbarque.com/nonintersecting/2006/11/15/the-s-...

http://diveintomark.org/archives/2004/01/14/thought_experime...

http://www.info.ucl.ac.be/~pvr/decon.html


If you're a programmer of a certain vintage, you've been reading writing like this on mailing lists for years. It's easy to think programmers in particular produce embarrassing writing. However, most blogs are written in this style, not just blogs by programmers. This issue is not confined to the internet, or to the contemporary age. The editorial page in my hometown's daily paper was not any better. If you read historical letters to the editor in regional newspapers, you'll find that adults have been communicating in poorly written, juvenile language for centuries.


Not even just regional papers; lower-level stuff too.

It is very, very important to remember here that language which sounds dignified and mature to us today is in many cases just the vulgar "crap talk" of a previous era. It sounds so fancy and polished to us because it is old, more than because it is good.


This is how adults communicate with one another. You are longing for some era of erudite euphemism that never was and never will be.

Blogging is vernacular. Just because you see it in print doesn't mean we have to judge it like an essay from the Saturday Evening Post.

If Zawodny's piece were only snark, you might have a point, but he backs it up with solid argumentation. If you paid more attention to his vinegary interjections than to the main point, it says more about you than it does about him.


I suspect this is something you and I will never agree on. You see, I think it reflects poorly on someone to see bullshit and not call the bullshitter out. That doesn't give someone unlimited license to say what they want. It's just that if someone is spewing bullshit, they deserve to be told they're spewing bullshit. And more importantly, people on the receiving end of said bullshit deserve to be told it's bullshit as well.


All three of the links I posted were calling out bullshit. But they did it in a way that didn't sound like it came off Jerry Springer. I have a million times more respect for those guys, and they had far more substance to their criticism too.

Just calling "bullshit" rant-style is a cult-of-personality sort of move: people are responding more to your machismo than the actual substance of your argument.


There's something to be said for being straightforward though. If you can subtly build a flowing narrative that eloquently describes your beliefs without stating them, good for you. I generally do better just by calling a spade a spade, even if it doesn't like being called a spade. Sometimes people don't like that, and I accept it. But in general, I think people trust me because they know there aren't multiple levels of meaning to everything I say.

And besides that, what's wrong with cult-of-personality moves? Generally, strictly logical arguments don't get anywhere. I know because I've spent years losing arguments because I couldn't focus on anything but the logic. :-)


It's a stylistic flourish and is likely not representative of how he behaves in person. The same can be said for many journalists.


Clearly, we have to identify the qualities of NoSQL that are unrelated to scaling or performance for the debate to make any sense. I don't think it is possible in general to define those qualities, because NoSQL systems don't have much in common. Using a negation to name the category is telling in itself.

You mention schemaless, but none of the BigTable-derived systems are schemaless. Key-value stores are schemaless, but an RDBMS can do key-value storage just fine, as can file systems.

I think this whole debate boils down to whether or not you need to normalize data. If you normalize, you need joins and that's the weak spot of most NoSQL systems. Doing joins in procedural code requires all data to be transferred into application process memory, which is only viable for modest amounts of data. (I'm not saying that only RDBMS can ever do proper joins, just that the popular NoSQL solutions in use today don't)

Normalization is also what mandates ACID because normalization means you're losing what I would call the "physical unit of consistency". Normalization, joins and ACID go together. It's all or nothing. (Of course pragmatically it's never all or nothing but it's useful to highlight the general point)
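
To make the joins/normalization/ACID bundle concrete, here is a minimal sketch with hypothetical orders tables: in the normalized layout one logical order spans two tables, so reads need a join and writes need a transaction, while in the denormalized layout the whole order is a single self-contained row (or document) that is read and written in one shot.

  -- Normalized: one logical order spans two tables (hypothetical schema)
  BEGIN;
  INSERT INTO orders (id, customer_id, total) VALUES (42, 7, 59.90);
  INSERT INTO order_items (order_id, sku, qty)
       VALUES (42, 'A-1', 2), (42, 'B-9', 1);
  COMMIT;  -- ACID keeps the two tables consistent

  -- Read side needs the join
  SELECT o.id, o.total, i.sku, i.qty
    FROM orders o
    JOIN order_items i ON i.order_id = o.id
   WHERE o.id = 42;

  -- Denormalized: the whole order is one row/document,
  -- written atomically and fetched with a single key lookup
  SELECT doc FROM orders_docs WHERE id = 42;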

So, my conclusion is this: use an RDBMS, or don't normalize (much). All the debates around RDBMS or NoSQL being simpler or more complicated turn out to be implicit debates about the need for normalization. When people say this or that model is simpler, they are implicitly assuming either that normalization is needed or that it isn't.

In my view, whether or not you need to normalize depends primarily on whether or not the data is single purpose or multi purpose. If it's one app and its own private data island, then not normalizing often makes sense for simplicity and performance reasons.

If the data has its own separate life cycle, independent of any individual app, then not normalizing is a terrible mistake that brings down everyone's productivity, no matter how simple it may appear initially.

Having worked on data integration and analytics projects for many years, I'm leaning towards the view that most data is multi-purpose even if it's not initially expected to be. But that may well be survivorship bias, as apps that die young never cause integration issues. That doesn't mean they haven't fulfilled their original purpose.


If the data has its own separate life cycle, independent of any individual app

this is the conclusion i've been heading towards too. when you use an RDBMS there's typically a layer of abstraction between the base schema and the domain model (the concepts your application "works with"). it may be nothing more than the queries used to extract data, or it may be a complex set of views and triggers. but deciding whether or not that layer is "a good thing" helps choose between relations and nosql approaches. if your data have their own logic, separate from your application, then this layer helps you match your (often evolving) application to the (frequently more static) data. but if your data are closely tied to your application then it simply "gets in the way".
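
as a concrete sketch (hypothetical names, plain sql), the thinnest version of that layer is just a view: the application queries app_user while the base schema stays put.

  -- hypothetical base schema owned by the data:
  --   person(id, family_name, given_name, born_on)
  -- a view presenting the shape the application currently wants:
  CREATE VIEW app_user AS
  SELECT id,
         given_name || ' ' || family_name AS display_name,
         born_on
    FROM person;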

and i agree, too, that data typically do have their "own" logic. on the other hand, i think people could argue that there are cases where one application becomes so large, and so dominant, that it can make sense for that to "drive" the data. which helps explain the idea that nosql and scaling go together (when you really, desperately, need to scale, it could be because one particular thing is so huge that it drives everything else).


I really like this idea of basing your decision on the need to normalize or not. It certainly fits document-oriented databases and key-value stores, but I'm less sure about column-oriented databases (I have no experience with them, except many hours trying to understand them...).

The single-purpose vs. multi-purpose distinction that maps to denormalization vs. normalization would explain why certain companies stick to an RDBMS for their main data and use NoSQL only for certain specific scenarios.


I agree. And using both has been the status quo for ages if you count the file system as a kind of NoSQL data store.

As far as I can tell, there are no significant qualities in the BigTable clones that are not related to scalability.


> Key-value stores are schemaless but RDBMS can do key-value storage just fine as can file systems.

Moreover, any serious RDBMS has an XML data type for efficient storage of unstructured/semi-structured data. Some even include index mechanisms to handle XPath/XQuery expressions efficiently, so handling tree-structured data isn't an issue, either.
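
For example, something along these lines (Oracle-flavored SQL/XML with a hypothetical docs table; exact syntax varies by vendor) shreds XML nodes into relational columns at query time:

  -- Hypothetical table: docs(id, doc XMLTYPE)
  SELECT d.id, x.name, x.age
    FROM docs d,
         XMLTABLE('/person' PASSING d.doc
                  COLUMNS name VARCHAR2(50) PATH 'name',
                          age  NUMBER       PATH 'age') x
   WHERE x.age < 18;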


I've used XQuery in Oracle quite a bit and I wouldn't exactly say it isn't an issue.

My use case is weird - thousands of xml files that have to be uploaded and then queryable/joinable to an existing relational model. We found that the only way to pull this off with decent performance is to pull out the "most queried" xml nodes and node values in materialized views with indexes.
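
Roughly the shape of that workaround, sketched with hypothetical names (Oracle-style extractValue; not our actual schema): pull the hot nodes out into a materialized view, then index them like ordinary columns.

  CREATE MATERIALIZED VIEW doc_hot_fields AS
  SELECT d.id,
         extractValue(d.doc, '/order/status')      AS status,
         extractValue(d.doc, '/order/customer/id') AS customer_id
    FROM docs d;

  CREATE INDEX doc_hot_fields_cust_idx ON doc_hot_fields (customer_id);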

I've also experimented with nested sets, which work very well for our situation.

Regardless of how we store the XML and query/join it to our relational data, I continue to struggle with a decently performing "pivot" step. The XML we have is really messy, with each independent XML tree having lots of 1..N nodes for a specific element. This makes "flattening" the XML out with relational data messy. If it was only a single XML tree, it wouldn't be a big deal. But our user queries tend to hit many of the XML trees and we wind up with lots of "empty" node elements in the total result set.

Plus, node order from our XML is extremely important to users, so node A containing node B containing node C with node order represented by depth in the tree must now swap over to left-to-right order and preserve the relationships between A->B->C.

It's actually fun... the whole ordering/"pivot" mess is really a presentation-layer thing; our users want all this cranked out in grids with Excel export capability. From a pure programmatic standpoint, cranking on the combined XML and relational data model is not difficult.


Ok, whenever someone opens with something to the effect of "I use MySQL, so I have experience with relational databases and can make a comparison with NoSQL," all credibility is lost.

MySQL is a 'relational database', but one in which JOIN is so expensive and poorly optimized that you almost have to use it as a key-value store, looking everything up directly with synthetic primary keys.
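
What "using MySQL as a key-value store" boils down to in practice, sketched with a hypothetical table: every read is a point lookup by synthetic primary key, and anything join-shaped gets pre-computed elsewhere.

  CREATE TABLE kv (
    id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    value BLOB
  ) ENGINE=InnoDB;

  SELECT value FROM kv WHERE id = 12345;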

I've had this discussion several times. Some startup guys say 'we should look at NoSQL', and I ask questions to get to the bottom of why they think that. They will say something like 'we have this huge join we have to do, but it's too expensive, so we pre-compute it'. I ask more questions, and the 'huge join' is not huge at all, in fact it is just a reasonable join, something that you could expect to do on every page view without difficulty. Well, except they are using MySQL, and it can't join for shit. The MySQL query planner is disgusting.

So, although I don't expect to persuade the world to stop using MySQL (to be honest, I love that it is the go-to thing; those of us who use a decent database like PostgreSQL end up with a huge competitive advantage: better performance, more features, more scalability, an amazing query planner, top-shelf performance analysis), I think we should at least admit that in practice, to get any performance out of it, you have to effectively use it as a key-value store anyway. And when comparing MySQL, which is a shitty key-value store, against real key-value stores, you can make a case for some NoSQL thing.


Totally agree


I have to agree with one of the comments. All you did was rant; you didn't say anything useful. Perhaps you could tell your readers about your experiences to convince them that NoSQL is useful to implement in their projects (of course, I am NOT saying it isn't)?


[deleted]


A detailed post about his experiences or some tricks that he learned is much more useful than saying "bullshit".


However, saying "bullshit" with his disclaimer at the bottom, I'm also the original author of "High Performance MySQL" published by O'Reilly Media certainly lends more credibility to his statements than many other people (Bob Warfield for example).


It's called "Argument from authority" and he will probably succeed with his argument.


If someone is known to have a high level of knowledge on or experience with a topic, then giving weight to that person's opinions on the topic is not a bad thing to do.

(also, I'm tempted to label comments like yours "argument from informal fallacy" -- you've offered no actual critique of any actual arguments, you've just named an informal fallacy, apparently in hope that people will take that in itself as an argument)


Perhaps the videos and slides I linked in the comments (all previously published on my blog as well) help in that department?


I did find this point from the original article to be very dubious:

In fact, I would argue that starting with NoSQL because you think you might someday have enough traffic and scale to warrant it is a premature optimization, and as such, should be avoided by smaller and even medium sized organizations. You will have plenty of time to switch to NoSQL as and if it becomes helpful. Until that time, NoSQL is an expensive distraction you don’t need.

Consider:

- how hard most organizations find it to refactor, rewrite, retest, especially in systems that are online 24x7

- when would you prefer to climb the learning curve with an immature technology: when you are small and starting out, or when you are a large company with a large set of users and under "mission critical" constraints (and possibly stockholders and the like)?

My guess is that ongoing companies find it extremely difficult and expensive (and wanting for talent) to switch from one sql database to another, much less switch from sql to nosql.


It's not a matter of SQL vs. NoSQL. They are complementary.

The fact of the matter is, there are some components of systems for which Redis, Riak, etc may be better suited than SQL in the long term. Starting out, keeping everything in SQL provides less friction, but as time goes on it may be necessary to scale the component separately from typical relational data storage, and that's the point at which these switches are evaluated. These companies would be replacing SQL only in these components — not across the board.

The myth of a silver bullet datastore solution is just that: a myth. Different data stores have different strengths and weaknesses and it becomes necessary to mix and match at scale.

To quote Benjamin Black: "Scale is pain, princess. Anyone who tells you different is selling something."


When you're small and starting out is definitely not the time to be mucking around on a learning curve.

Learning a new tech in a startup means doing a lot of "busy" work that only benefits you, because you're learning something; it doesn't benefit the business, it slows it down. You're also more likely to make fundamental mistakes in your implementation because you don't know the tech.

And switching when you're running is not as hard as you'd think, as you already have the domain knowledge of how the solution actually needs to work.

It's all a balancing act, if the new tech is a fundamental selling point (for example your program's 10x faster than incumbents) I can understand it. If it's to deal with future scalability problems, well, that'll be a good problem to deal with later.


You hit the nail right on the head.


This is opinion versus opinion. I'm sorry to say there's no real content here. The author went from Yahoo to Craigslist, so there's no such thing as premature optimization at that scale, and with the small staff at CL you can be sure that chasing NoSQL as a fad could ruin the company. Obviously he doesn't fit the bill of the essay he's criticizing, but most devs don't experience problems at his scale.

You can't do the topic of NoSQL vs. SQL justice with an essay, because it would just be semantics; we're talking about different theoretical representations of data structure. You might as well scream "Better taste!" "Less filling!"


Agreed. This is opinion.

Mine is based on years of experience, but I got the impression that the original article was written based on some cherry-picked reports of what a few companies said (as opposed to actually being there and doing it).

Maybe I'm being overly critical?


One thing that bothers me is people who talk about "SQL databases vs. NoSQL databases." That's like framing a debate on transportation as "Cars vs. Not Cars," where "Not Cars" includes bicycles, planes, buses, subways, boats, zeppelins, etc. etc.

If you take CouchDB, Redis, MongoDB, and all the other "NoSQL" databases and compare them, the only thing they share in common is that they do not use a relational data model or SQL. The way the word "NoSQL" is used, however, implies that they are some kind of united front against SQL databases, which is not the case at all. (It's why I am not a big fan of the term.)

Just like you would not use bicycles, planes, subways, and boats for the same things, you would not use CouchDB, Redis, MongoDB, and Cassandra for the same things. If you're choosing a database just because it's "NoSQL," then you are completely missing the point.


I think the problem is the term NoSQL itself, which was originally coined as "Not Only SQL." But everyone now reads the "No" as a literal "no" in relation to SQL, as if there is some war between SQL and not...SQL. I think that alone is causing more heartburn than needed between the two camps.


Firstly, I can attest that migrating the datastore of an application which has scaled to require a NoSQL solution is no trivial task.

Secondly, I believe the author of the original posting really meant that "premature optimization is the root of all evil." Like this post points out, NoSQL solutions vary wildly in their abilities and usefulness. A relational database is a good place to start on the path to an MVP. And if you need features that a NoSQL solution can provide, and you understand the problem you're trying to solve, then use a NoSQL solution.


I think that most people who argue that such a migration "isn't that bad" haven't actually done it. Or at least they haven't done it for anything sizable.


It matters little how bad the migration is, when you ain't gonna need it in 99.9% of cases. When you are big enough to need the migration, your company has enough resources to roll your own Hadoop distribution.


Obviously this doesn't really have any bearing on points the author is making, but a small nit for posterity's sake - I think the point Clayton Christensen was making in The Innovator’s Dilemma was not that people should adopt inferior technologies to gain leverage later.

I think the point in that book was more that new technologies are often inferior in many ways to existing technologies when they first start out, and the way these new technologies survive/grow is by appealing to niches that value the ways in which the new technology is superior to the existing ones. Then, when the new technology matures a little more, the market to which it appeals grows a little larger, and this repeats.


The problem is that NoSQL is such a broad term for datastores. Some of them are simple (like redis) and some more complex (like Cassandra/HBase). They also have different targets for data types. Using one just because it is a NoSQL can be a premature optimization just like using a RDBMS can be a premature optimization. You really need to understand the data and how it will be used. Before you know what you want to build, it is easy to prematurely optimize for something you don't need.

Start simple, then iterate...


Undertaking "optimization", in this case selecting and developing with a NoSQL datastore early in the process, should only be considered premature if the costs of doing so (which will be mainly represented by developer-hours spent) are greater than the value provided by having a datastore that can accommodate well the needs of the application itself, development team, and end-users.

Adaptability, flexibility (with regard to schema/key structure migration and maturation), as well as ease of partitioning data intelligently ahead of demand are all hugely important factors that can and often should inform the process of selecting a datastore.

If the datastore selected for use:

- shortens development time,

- provides improved performance for anticipated scale,

- better represents the data model needing to be captured,

- avoids re-work and "post"-mature optimization of data models & datastores,

- or accomplishes any combination of the above...

...then the selection of that datastore should not be considered premature optimization.

Finding that your traditional RDBMS does not well support the data models you have developed, especially once the product is out of the gate, will not be fun. Having to engage in a refactor and data migration to move to a more appropriate or more performant datastore will be a time- and resource-consuming process.

As soon as the initial synthesis phase of development can begin, it may be well worth the effort to experiment with multiple datastores as a means of evaluating their performance and suitability. Depending on the scope and potential for the project to scale, modularizing distinct pieces of core functionality into separate services, each with their own most-suitable datastore, can also provide great benefit in flexibility of development processes, as well as adaptability of the product to the demands of the end-users.


When there are arguments, like in the Jeremy post, commenting about tone and formalities is a huge FAIL. It is part of everyone's freedom of expression to use the words and tone they wish, as long as no one is going to be offended (if you are super-sensitive, that is your problem). One thing I always see as a problem is that the programming community here on HN is a bit too middle-class-ish, and this is annoying: you are off topic, you are not polite, respect the fact that I don't understand, blablabla. Hacking is, in my vision, connected with cultural freedom, and not being polite is not the only but one of the possible expressions of that. So reply to the arguments and stop being so childish.


He's only experienced with MySQL? How can he judge the SQL vs NoSQL battle when he's never used a proper SQL system? NoSQL does not 'save development time' in general, it's just a different tool. A much younger and less refined one at that. Real RDBMSs do a whole lot more than execute your SQL queries for you.


I don't think there's a SQL versus NoSQL battle, whatever that means.

SQL refers to relational databases, which are databases using the "relational model" of representing data: http://en.wikipedia.org/wiki/Relational_model

This means that any SQL database is very flexible with regard to what you can store in it, not to mention that it is based on proven theory and battle-tested implementations of various features, like ACID.

But the relational model also breaks down when you want to work with data structures that don't blend well with it -- like graph data. It also breaks down when you want to spread your data across many servers. And it is not well suited to storing and querying billions of records -- sooner or later, your indexes are going to grow beyond whatever storage / RAM capacity your servers have.
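
To illustrate the graph point with a hypothetical edges table: each additional hop of a traversal costs another self-join, which is exactly where the relational model starts to hurt.

  -- Hypothetical edge table: edges(src, dst)
  -- Friends-of-friends is already two self-joins; every extra hop adds another
  SELECT DISTINCT e2.dst
    FROM edges e1
    JOIN edges e2 ON e2.src = e1.dst
   WHERE e1.src = 42
     AND e2.dst <> 42;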

Btw, MySQL is a real RDBMS. Even if it lacks some features, it doesn't lack anything essential to calling it "relational" and talking about the advantages or disadvantages of RDBMSs versus key-value stores or other NoSQL types.


Facebook seems to be doing just fine with their graph data on SQL


There has been talk for some time that their MySQL issues are crippling progress and development due to the complexity of management and upkeep.

http://gigaom.com/cloud/facebook-trapped-in-mysql-fate-worse...


Stonebraker has something to sell them, though.


Very true.


He spearheaded the adoption of MongoDB and probably Redis at Craigslist. That's more action than most commenters will see in their entire careers.


He was also a major contributor to one of the best books on MySQL. The one that Monty worked on.

The guy knows databases, period.


Proper SQL? Nice No True Scotsman.

Everybody has a pet feature in their preferred SQL DB that they think makes it a "real" SQL database, Postgres and Oracle people in particular. I agree that MySQL is a bit janky, but get real.


Well, the 'pet feature' that MySQL lacks is 'the ability to efficiently join between tables'. I think that is pretty high up on the list of important features for a relational database.

Saying this is a 'No True Scotsman' is a 'straw man' on your part. Lots of people only have experience with MySQL and think that it is somehow representative of the quirks and performance characteristics of all relational databases, which is simply not true. MySQL is popular, and that is the entirety of the list of positive things I can say about it. It's buggy (known segfault bugs accepted into final release versions), slow (I've seen identical, fairly simple queries run side by side on PostgreSQL and MySQL, on identical schemas with identical indexing and identical data, back when we were moving away from MySQL; in PostgreSQL the query in question ran in about 4 seconds, on MySQL the query never finished, even after allowing hours to pass), bizarre (many table engines parse and silently ignore all FK constraints), non-transactional (unless you use one of the transactional table engines, and then it is slow), poor with concurrency (several table engines have table-level locks, the ones that are MVCC have silly concurrency limitations; until recently InnoDB could only do 4-core concurrency, due to some weird implementation), and will fuck you in sneaky ways (a VARCHAR(N) can hold N bytes, so if you store Unicode data you can't necessarily store N characters; that's not the only absurd and horrifying wart, there are literally dozens).


> In Postgresql the query in question ran in about 4 seconds,

For someone accustomed to the response times and behaviors of things like MongoDB and Riak, the fact that you consider this a high mark of your comparison is borderline comical and really only proves my point.

JOINs, when used prolifically as they tend to be with RDBMS, are an abstraction that doesn't work in a distributed environment without aggressive data locality and eventual consistency. The model is broken for a range of needs, regardless of whatever quibbling the SQL fans want to do over their pet database.


I agree that JOIN starts to fall apart in a distributed environment. However, with replication, and proper understanding of the level of consistency, or lack of, your application can tolerate, a single database can scale quite far.

In fact, you will only run into three sorts of major deal breakers with the big-db + replication model. The first is if your data is just too large to fit in a single db to begin with. This is quite a bit of data, but certainly big datasets are all over the place. The second is when the write load is so high that the master database cannot keep up, or the slaves are using a lot of resources just to keep up with the replication. The last is that you need very fast access to the data, faster than a general purpose SQL engine can satisfy (think kayak, search engines, etc).

However, NoSQL isn't a solution to any of these problems. NoSQL datastores provide the storage component which you can use to build a system that can handle large datasets with huge churn. I don't mean this in the trivial 'the database isn't the application' sense, I mean it in the 'you are going to end up implementing a JOIN like construct in application code with NoSQL' sense.

The query in question, the one that took 4 seconds, simply could not be implemented in a NoSQL system, except by implementing all the logic of the query in application code, and even then there is no assurance it would actually be as fast or faster unless your implementation is smarter than the query planner of whatever relational DB. Relational databases are a general-purpose tool, and therefore in theory you can always implement a custom solution that will outperform any given query. However, your solution is at that point fast but inflexible: if you need to slice the data a different way, or produce reports that join or group the data differently than you had anticipated, a NoSQL solution will require you to write far more code that is harder to understand and verify than a SQL query.

NoSQL solutions also push a lot of data consistency tasks into the application code. In a 'good' relational database, you can put lots of controls in place to ensure that the data is completely consistent, and you can even put some 'sniff test' constraints on, to, for example, ensure that the data at least conforms to what you expect it to look like. There is nothing that scares me more as an application developer than data quietly being corrupted. Such a scenario can cost months of developer hours to correct. Really robust constraints don't completely eliminate the possibility, but they at least cause some reasonable percentage of bugs that would cause silent data corruption to be caught early, and without causing any change in the underlying data (transaction rolls back). Your application code is therefore much simpler, easier to test, and faster to implement.
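
A sketch of the kind of "sniff test" constraints I mean, with hypothetical names (standard-ish SQL): obviously bad rows get rejected at the door instead of quietly corrupting the dataset.

  CREATE TABLE payments (
    id           BIGINT PRIMARY KEY,
    order_id     BIGINT NOT NULL REFERENCES orders (id),
    amount_cents BIGINT NOT NULL CHECK (amount_cents > 0)
  );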


>The query in question, the one that took 4 seconds, simply could not be implemented in a NoSQL system, except for implementing all the logic of the query in application code

I sincerely doubt that.

>NoSQL solutions also push a lot of data consistency tasks into the application code.

Sometimes, not necessarily. Depends on the consistency model and how much you care about consistency in each case. SQL also forces you to implement eventual/less-than-immediate consistency in various forms just so your application doesn't fall apart.

Let's readdress the first thing I responded to by probing how aware you are of what is out there.

If one were to implement, and I mean this in the simplest possible terms, a join and a group by in Riak, what would be the most straightforward means of doing so?


I'm certainly not a Riak expert, however I did find a blog post which claims that this:

  SELECT addresses.state, COUNT(*)
    FROM people
    JOIN addresses 
         ON people.id = addresses.person_id
   WHERE people.age < 18
  GROUP BY addresses.state

translates into this Riak abortion:

  { "input":"people",
  "query":[
    {"map":{"language":"javascript", "name":"Riak.mapValuesJson"}},
    {"map":{
       "language":"javascript",
       "source":"function(value, keyData, arg){ 
         var data = Riak.mapValuesJson(value);
         if(data.age && data.age < 18) 
           return [[value.bucket, value.key]]; 
         else 
           return []; 
        }"
      }
    },
    {"map":{
       "language":"javascript",
       "source":"function(value, keyData, arg){ 
         var data = Riak.mapValuesJson(value);
         if(data.address && data.address.state) { 
           var counts = {}; 
           counts[data.address.state] = 1; 
           return [counts]; 
         } else 
           return []; 
        }"
      }
    },
    {"reduce":{
       "language":"javascript",
       "source":"function(values, arg){ 
         return values.reduce(function(acc, item){
           for(state in item){
             if(acc[state])
              acc[state] += item[state];
             else
              acc[state] = item[state];
           }
           return acc;
         });
        }"
      }
    }
  ]
  }
Can you do that same SQL query in Riak in a simpler way? Will the Riak query ever benefit from indexing, or does it map against the entire dataset every time?

Also, in the Riak query (and the example SQL) there is a hardcoded 18. In SQL you can use bind parameters to ensure that there is no chance of an injection attack. How would you do this in Riak? It seems like you would have to escape the value into the JS code you are sending to the server? Code generation is a huge pain, and string concatenation isn't going to cut it, but that sad state of affairs is what I see here: https://github.com/basho/riaktant/blob/master/examples/log-h...

That code is linked from the Riak documentation, and is ugly as sin. Do you have to specify queries against indexes based on what was indexed? In Postgresql, the query planner will make use of whatever indexes are available if they will help performance, but the SQL will execute with or without those indexes. In Riak it appears that it will simply map-reduce all data unless you specify a 'search' something or other to narrow it down, but that 'search' functionality is on the outside of the map-reduce, so the developer has to know which fields are indexed and which are not, and there isn't much hope of more complex index-enabled joins.
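
To make both of those contrasts concrete on the SQL side (PostgreSQL shown for illustration, same hypothetical people/addresses schema as the query above): the threshold travels as a bound parameter rather than being spliced into the query text, and the indexes are optional -- the query runs with or without them, the planner just uses them if present.

  -- Bound parameter instead of string concatenation
  PREPARE minors_by_state (int) AS
    SELECT addresses.state, COUNT(*)
      FROM people
      JOIN addresses ON people.id = addresses.person_id
     WHERE people.age < $1
     GROUP BY addresses.state;

  EXECUTE minors_by_state(18);

  -- Optional indexes; the planner picks them up automatically if they help
  CREATE INDEX people_age_idx       ON people (age);
  CREATE INDEX addresses_person_idx ON addresses (person_id);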


Just LOOK at all the dev time you save not creating DB tables! /sarcasm


Meaningless copypasta, I was asking for a simple description of what it provides that would enable this.


Sorry, Professor!

I feel like I went out of my way to satisfy your arbitrary demand, and throughout this entire 'debate' you are just insisting I am wrong, but not backing it up with any facts, evidence, or even arguments.


"Again, I think we need to talk about the best tool for the job, not the best tool for every job. Relational databases are not the best tool for every data storage job."

Pretty much disqualifies him as moron. Hell, he doesn't say anything.


Does this mean that you think that relational databases are the best tool for every data storage job?


Uhm, what?

I was trying to counterbalance some of the crap I read in the original posting--was that really not clear?


The above comment by myself is in response to 'i_crusade', not you...


Ugh. Reply fail. Sorry. I was intending to reply to the parent comment, not yours.


no worries



