Skrenta dubs their screed "The sound of disruption."
Basically, they misunderstand the purpose of the thing. One thing Map / Reduce is great at is processing log files. Databases aren't so hot when you have 100M records a day or more to look at.
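To make that concrete, here's a minimal sketch of a log-crunching job in the Hadoop Streaming style: two small Python scripts that read stdin and write tab-separated key/value pairs. The log format is my assumption - I'm treating the request path as the seventh whitespace-separated field, Apache-style - so adjust the index for your own logs.

```python
#!/usr/bin/env python
# mapper.py - emit one "url <tab> 1" pair per access-log line.
# Assumes the request path is the 7th whitespace-separated field
# (Apache-style logs); adjust the index for other formats.
import sys

for line in sys.stdin:
    fields = line.split()
    if len(fields) > 6:
        print("%s\t1" % fields[6])
```

```python
#!/usr/bin/env python
# reducer.py - sum the counts for each url. Hadoop sorts the map
# output by key before the reduce phase, so identical urls arrive
# as a consecutive run on stdin.
import sys

current_url, count = None, 0
for line in sys.stdin:
    url, sep, n = line.rstrip("\n").partition("\t")
    if url != current_url and current_url is not None:
        print("%s\t%d" % (current_url, count))
        count = 0
    current_url = url
    count += int(n)
if current_url is not None:
    print("%s\t%d" % (current_url, count))
```

You'd ship both scripts to the cluster with the streaming jar (its path varies by Hadoop install), along the lines of `hadoop jar hadoop-streaming.jar -input logs/ -output hits/ -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py`. The point is that neither script knows or cares whether it's fed one log file or ten thousand.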
As I wrote earlier, Google is making efforts to get college kids to learn to think in map / reduce ways. Now they are offering certain universities free access to scientific datasets hosted on MapReduce clusters.
The upshot is that the web requires parallel processing. No one has really extracted much knowledge from the terabytes of web usage data that flow by every day.
But the data is out there, and paradigms like map / reduce are how it's gonna be dissected. So if you want to work in the consumer web, with billions of users doing stuff every day, leaving data tracks, you should spend some time learning map / reduce.
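If you want a feel for the model before touching a cluster, the whole paradigm fits on one screen. Here's a toy single-machine version (all the names here are mine, not any real library's API): map each record to key/value pairs, shuffle by sorting and grouping on the key, then reduce each group.

```python
from itertools import groupby
from operator import itemgetter

def map_reduce(records, mapper, reducer):
    """Toy in-process MapReduce: map, shuffle (sort + group by key),
    then reduce each key's values. Real clusters distribute each phase."""
    pairs = [kv for record in records for kv in mapper(record)]
    pairs.sort(key=itemgetter(0))  # stand-in for the shuffle phase
    return {key: reducer(key, [v for _, v in group])
            for key, group in groupby(pairs, key=itemgetter(0))}

# Count clicks per user from a stream of (user, action) data tracks.
events = [("alice", "click"), ("bob", "click"), ("alice", "view"),
          ("alice", "click"), ("bob", "view")]
clicks = map_reduce(
    events,
    mapper=lambda e: [(e[0], 1)] if e[1] == "click" else [],
    reducer=lambda user, ones: sum(ones),
)
print(clicks)  # {'alice': 2, 'bob': 1}
```

The framework's job is everything this toy hides: splitting the input across machines, moving the shuffled pairs over the network, and rerunning tasks that fail. Your job is just the two functions.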
Since we're being geeky anyway, to collect these in a single place, here are some key papers to read if you want to understand the Google architecture:
And a bit of bonus inspiration - the story of how a New York Times blogger converted 70 years of archives (over 11 million articles) to PDF in under a day using Hadoop on Amazon EC2.