Based on his experience building a document database, RavenDB, Ayende writes about data normalization:
If you think about it, normalization in RDBMS had such a major role because storage was expensive. It made sense to try to optimize this with normalization. In essence, normalization is compressing the data, by taking the repeated patterns and substituting them with a marker. There is also another issue, when normalization came out, the applications being built were far different than the type of applications we build today. In terms of number of users, time that you had to process a single request, concurrent requests, amount of data that you had to deal with, etc.
While I do think he’s wrong about the rationale of normalization (briefly: the main reason for normalizing data is to guarantee data integrity, not to save storage), working with a document database will offer you a different perspective on organizing data. Just like with programming languages: even if you don’t use every programming language you know or learn, each of them will hopefully give you a different perspective on how to deal with problems.
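The data-integrity point can be made concrete with the classic update anomaly. Below is a minimal Python sketch (all names and values are illustrative, not from either post) contrasting denormalized data, where a fact is repeated across records, with normalized data, where it lives in one place and is referenced:

```python
# Denormalized: each order repeats the customer's email, so an
# update must touch every copy -- missing one leaves stale data.
denormalized_orders = [
    {"order_id": 1, "customer": "acme", "email": "old@acme.com"},
    {"order_id": 2, "customer": "acme", "email": "old@acme.com"},
]
denormalized_orders[0]["email"] = "new@acme.com"  # forgot order 2
emails_seen = {o["email"] for o in denormalized_orders}
print(len(emails_seen))  # 2 -> two conflicting "truths" for one customer

# Normalized: the email is stored once; orders merely reference it.
customers = {"acme": {"email": "old@acme.com"}}
normalized_orders = [
    {"order_id": 1, "customer": "acme"},
    {"order_id": 2, "customer": "acme"},
]
customers["acme"]["email"] = "new@acme.com"  # one update, no anomaly
emails_resolved = {customers[o["customer"]]["email"] for o in normalized_orders}
print(len(emails_resolved))  # 1 -> a single consistent value
```

This is exactly the trade-off the quote glosses over: denormalization may save joins, but it shifts the burden of keeping copies consistent onto the application.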
Original title and link for this post: Normalization is from the devil (published on the NoSQL blog: myNoSQL)