The Oracle version of HealthCare.gov. Let’s see:
“Oracle was contracted to deliver the exchange,” Merkley said, “they promised it would be fairly delivered on time and it’s in complete dysfunction.”
Oregon has spent more than $40 million to build its own online health care exchange. It gave that money to a Silicon Valley titan, Oracle, but the result has been a disaster of missed deadlines, a nonworking website and a state forced to process thousands of insurance applications on paper.
Some Oregon officials were sounding alarms about the tech company’s work on the state’s online health care exchange as early as last spring. Oracle was behind schedule and, worse, didn’t seem able to offer an estimate of what it would take to get the state’s online exchange up and running.
The biggest reason Cover Oregon’s website lags behind is that Oracle didn’t meet its deadline and should have begun testing last May, rather than delaying until this summer, when it was too late to resolve the problems it encountered, King said. Oracle has been paid handsomely by Cover Oregon for its consulting and software development. It’s received $43.2 million this year – accounting for $11.1 million for hardware, $9.5 million for software and $22.6 million for consulting.
So even if you use Oracle for everything (hardware, software, and consulting, paid a paltry $43.2 million in 2013), you can still fail? What a surprise!
✚ Who’ll take the blame if HealthCare.gov and Cover Oregon were to just switch contractors?
✚ Could we have these played on repeat for those blaming MarkLogic for HealthCare.gov’s failure? And for those who accepted that excuse?
Original title and link: Blame it on… Oracle style ( ©myNoSQL)
This HadoopSphere post lists 8 data-related products that Oracle has in its portfolio. I’m not sure it’s complete though, as I didn’t see TimesTen, Coherence, etc.
It’s nice to be able to tell your customers, potential and existing, that you have tools for everything. The tricky part is integrating these, making them work seamlessly together, and being able to offer a clear picture to every user. Or, if you are Oracle, you could charge customers for this part too.
Original title and link: Oracle’s Big Data Components ( ©myNoSQL)
Based on ESG’s modeling of a medium-sized Hadoop-oriented big data project, the preconfigured Oracle Big Data Appliance is 39% less costly than a “build” equivalent do-it-yourself infrastructure. And using Oracle Big Data Appliance will cut the project length by about one-third. For most enterprises planning to take big data beyond experimentation and proof-of-concept, ESG suggests skipping the idea of in-house development, on-going management, and expansion of your own big data infrastructure, to instead look to purpose-built infrastructure solutions such as Oracle Big Data Appliance.
This is an extract from Oracle’s whitepaper “Getting Real about Big Data: Build Versus Buy”. It’s a nice reading exercise to better understand how the database leader is positioning its Oracle Big Data Appliance against Hadoop’s commodity-hardware clusters.
I’d love to see the equivalent paper from Hortonworks1.
The only reason I’m referring directly to Hortonworks and not also Cloudera is that the Hadoop part of Oracle Big Data Appliance is offered by Cloudera. ↩
Original title and link: Oracle Paper: The Cost of Do-It-Yourself Hadoop vs Oracle Big Data Appliance ( ©myNoSQL)
MariaDB-5.5.21-beta is the first MariaDB release featuring the new thread pool. Oracle offers a commercial thread pool plugin for MySQL Enterprise, but now MariaDB brings a thread pool implementation to the community!
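For reference, the thread pool in MariaDB 5.5 is switched on through the server’s thread-handling mode. A minimal my.cnf sketch is below; the tuning values shown are illustrative assumptions, not recommendations:

```ini
# my.cnf — enabling the MariaDB 5.5 thread pool (Unix builds)
[mysqld]
# Default is one-thread-per-connection; pool-of-threads activates the pool.
thread_handling = pool-of-threads

# Optional tuning knobs (example values only; tune per workload):
# thread_pool_size = 8            # number of thread groups; defaults to CPU count
# thread_pool_max_threads = 500   # upper bound on threads in the pool
```

The appeal of a pool over one-thread-per-connection is that throughput degrades more gracefully under many concurrent connections, since the server multiplexes work over a bounded set of threads.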
Original title and link: MariaDB 5.5 Connection Thread Pool ( ©myNoSQL)
Teradata sells software, hardware, and services for data warehouses and analytic applications. Part of the Teradata portfolio is also the Teradata Aster MapReduce Platform, a massively parallel processing infrastructure with a software solution that embeds both SQL and MapReduce analytic processing for deeper analytic insights on multi-structured data and new analytic capabilities driven by data science.
Hortonworks offers services around the 100% Apache-licensed, open source Hortonworks Data Platform, an integrated solution built around Hadoop.
The interesting bits from the announcement and media coverage:
Teradata and Hortonworks will join forces to provide technologies and strategic guidance to help businesses build integrated, transparent, enterprise-class big data analytic solutions that leverage Apache Hadoop. The partnership will focus on enabling businesses to use Apache Hadoop to harness the value from new sources of data. Businesses will be able to quickly load and refine multi-structured data, some of which is being discarded today, for discovery and analytics. The resulting insights will enable analysts and front line users to make the best business decision possible.
For example, each day websites generate many terabytes of raw, complex data about customers’ viewing and buying habits. These web logs can be directly loaded into Teradata Aster or Apache Hadoop where they can be stored, transformed, and refined in preparation for analysis by the Teradata Aster MapReduce platform (nb: my emphasis).
The company [Teradata] has already worked with Hortonworks’ competitor Cloudera on a connector between the Teradata Database and Cloudera’s Hadoop distribution, but the Hortonworks deal appears a little deeper and more strategic.
The alliance between Teradata and Hortonworks means that companies can get strategic advice about how to get into the new analytics game from Teradata, and have practical help on running the systems from Hortonworks.
However, there are two important challenges that need to be addressed before broad enterprise adoption can occur:
- Understanding the right use cases in which to utilize Apache Hadoop.
- Integrating Apache Hadoop with existing data architectures in an appropriate manner to get better value from existing investments.
My sense of excitement about the Teradata/Hortonworks partnership is amplified by the fact that it addresses these two core challenges for Apache Hadoop:
- We will be rolling out a reference architecture that provides guidance to enterprises that want to understand the best use cases for which to apply Hadoop. As part of that, we will be helping Teradata customers use Hadoop in conjunction with their Teradata and Teradata Aster analytic data solutions investments.
- We will also be working closely with the Teradata engineering teams on jointly engineered solutions that optimize the integration points with Apache Hadoop.
From Hortonworks’ perspective, this deal is weaker than the Oracle-Cloudera deal.
New Teradata sales do not necessarily result in new Hortonworks Data Platform installations, whereas in the Oracle-Cloudera partnership every sale results in new business for Cloudera.
From Teradata’s perspective, this partnership gives them a ready answer, and a solution, for clients asking about unstructured data scenarios.
Depending on the level of integration the two teams pull together, this partnership might result in one of the most complete and powerful structured and unstructured data warehouse and analytics platforms.
I’m looking forward to seeing the proposed architecture blueprint once it’s finalized.
- teradata.com: Teradata-Hortonworks Partnership to Accelerate Business Value from Big Data Technologies
- hortonworks.com: The Importance of the Teradata & Hortonworks Partnership
- Aster Data Blog: Perspectives on Teradata-Hortonworks Partnership
- NYTimes.com Bits: Teradata and Hortonworks Join Forces for a Big Data Boost
- GigaOM: Teradata taps Hortonworks to improve Hadoop story
- ServicesANGLE: Hortonworks Announces Partnership with Teradata
Original title and link: Teradata and Hortonworks Partnership and What It Means ( ©myNoSQL)