Not strictly a NoSQL or even data-related series of posts from Google Research, but a very interesting read about Moore's Law: its current state, what kind of research is happening in this space, and what we need to do to sustain the same pace of advancement over the next 10-15 years:
This series quotes major sources about Moore's Law and explores how they believe Moore's Law will likely continue over the course of the next several years. We will also explore whether there are fields other than digital electronics that either have an emerging Moore's Law situation, or the promise of such a law that would drive their future performance.
- Brief history of Moore’s Law and current state
- More Moore and More than Moore
- Possible extrapolations over the next 15 years and impact
- Moore’s Law in other domains
As we look at the years 2020–2025, we can see that the physical dimensions of CMOS manufacture are expected to be crossing below the 10 nanometer threshold. It is expected that as dimensions approach the 5–7 nanometer range it will be difficult to operate any transistor structure that utilizes metal-oxide semiconductor (MOS) physics as the basic principle of operation. Of course, we expect that new devices, like the very promising tunnel transistors, will allow a smooth transition from traditional CMOS to this new class of devices to reach these new levels of miniaturization. However, it is becoming clear that fundamental geometrical limits will be reached in the above timeframe. By fully utilizing the vertical dimension, it will be possible to stack layers of transistors on top of each other, and this 3D approach will continue to increase the number of components per square millimeter even when horizontal physical dimensions are no longer amenable to any further reduction. It seems important, then, that we ask ourselves a fundamental question: "How will we be able to increase the computation and memory capacity when the device physical limits are reached?" It becomes necessary to re-examine how we can get more information into a finite amount of space.
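The geometric argument in the quote can be made concrete with a back-of-the-envelope model (mine, not the series'): planar transistor density scales with the inverse square of the feature size $L$, so

$$\text{density} \propto \frac{1}{L^2}, \qquad \frac{\text{density at } 5\,\text{nm}}{\text{density at } 10\,\text{nm}} = \left(\frac{10}{5}\right)^2 = 4.$$

Once $L$ bottoms out in the 5–7 nanometer range that ratio stops growing, and stacking $N$ layers vertically is what keeps multiplying components per square millimeter, by a factor of $N$, which is exactly the point of the 3D approach.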
Original title and link: Google Research: Moore’s Law series ( ©myNoSQL)
A 3-part article, a bit too high-level for my taste, about what is to be gained (and lost) when using Riak instead of a relational database:
What I always like about Basho’s posts is that they don’t shy away from covering the tradeoffs.
Original title and link: Relational to Riak ( ©myNoSQL)
The Oracle version of HealthCare.gov. Let’s see:
“Oracle was contracted to deliver the exchange,” Merkley said, “they promised it would be fairly delivered on time and it’s in complete dysfunction.”
Oregon has spent more than $40 million to build its own online health care exchange. It gave that money to a Silicon Valley titan, Oracle, but the result has been a disaster of missed deadlines, a nonworking website and a state forced to process thousands of insurance applications on paper.
Some Oregon officials were sounding alarms about the tech company’s work on the state’s online health care exchange as early as last spring. Oracle was behind schedule and, worse, didn’t seem able to offer an estimate of what it would take to get the state’s online exchange up and running.
The biggest reason Cover Oregon's website lags behind is that Oracle didn't meet its deadline and should have begun testing last May, rather than delaying until this summer when it was too late to resolve the problems it encountered, King said. Oracle has been paid handsomely by Cover Oregon for its consulting and software development. It's received $43.2 million this year – accounting for $11.1 million for hardware, $9.5 million for software and $22.6 million for consulting.
So even if you hand Oracle everything (hardware, software, and consulting) and pay a paltry $43.2 million in 2013, you can still fail? What a surprise!
✚ Who'd take the blame if HealthCare.gov and Cover Oregon just switched contractors?
✚ Could we have these played on repeat for those blaming MarkLogic for HealthCare.gov's failure? And for those who accepted that excuse?
Original title and link: Blame it on… Oracle style ( ©myNoSQL)
As you can see, the SSL overhead is clearly visible: SSL connections are about 0.05ms slower than plain ones. The median for inserts with SSL is 0.28ms, while plain connections have a median of around 0.23ms, so there is a performance loss of about 25%. These are all just rough numbers. Your mileage may vary.
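For the record, the arithmetic behind the quoted medians is

$$\frac{0.28\,\text{ms} - 0.23\,\text{ms}}{0.23\,\text{ms}} \approx 22\%,$$

which the author rounds up to "about 25%", in line with the "rough numbers" caveat.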
Some of you may recall my security webinar from back in mid-August; one of the follow-up questions that I was asked was about the performance impact of enabling SSL connections. My answer was 25%, based on some 2011 data that I had seen over on yaSSL’s website, but I included the caveat that it is workload-dependent, because the most expensive part of using SSL is establishing the connection.
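That last point, that the handshake dominates the cost, suggests the standard mitigation: connect once and amortize the TLS setup over many statements. Below is a minimal sketch using mysql-connector-python; the host, credentials, table, and certificate path are hypothetical stand-ins, not anything from the articles:

```python
import mysql.connector

# Pay the TLS handshake once, at connect time. Passing ssl_ca makes the
# connector encrypt the session and verify the server against this CA.
conn = mysql.connector.connect(
    host="db.example.com",      # hypothetical host
    user="app",
    password="secret",
    database="test",
    ssl_ca="/path/to/ca.pem",   # hypothetical CA certificate path
)

cur = conn.cursor()
# Amortize that handshake over many statements on the same connection;
# the per-statement cost is now only the symmetric encryption of the stream.
for i in range(1000):
    cur.execute("INSERT INTO t (v) VALUES (%s)", (i,))
conn.commit()

cur.close()
conn.close()
```

For short-lived workers that can't hold a connection open, a connection pool achieves the same amortization.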
These two articles dive much deeper, and more scientifically, into the impact of using SSL with MongoDB and MySQL. The results are interesting and the recommendations are well worth your time.
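If you want to reproduce this kind of measurement yourself, here is a minimal sketch of the methodology with pymongo. It is not the harness from the articles; the ports, the certificate path, and the second, TLS-enabled mongod are assumptions:

```python
import time
from pymongo import MongoClient

def median_insert_ms(client, n=1000):
    """Time n single-document inserts and return the median latency in ms."""
    coll = client.test.ssl_bench
    coll.drop()
    samples = []
    for i in range(n):
        start = time.perf_counter()
        coll.insert_one({"i": i})
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[n // 2]

# One mongod running without TLS and one with TLS enabled (hypothetical
# ports and CA path; adjust for your own deployment).
plain = MongoClient("mongodb://localhost:27017")
secure = MongoClient("mongodb://localhost:27018",
                     tls=True, tlsCAFile="/path/to/ca.pem")

for label, client in (("plain", plain), ("tls", secure)):
    print(f"{label}: {median_insert_ms(client):.2f} ms median insert")
```

Medians are the right summary here, since a handful of slow outliers (connection setup, flushes) would skew an average.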
Original title and link: The SSL performance overhead in MongoDB and MySQL ( ©myNoSQL)