New features in the 15.4 release include a native MapReduce programming interface that uses standard SQL; a Hadoop integration that provides various ways to tie together data from Sybase and Hadoop; a Java interface and additional extensions for existing C++ interfaces for running in-database algorithms; support for PMML (Predictive Model Markup Language) via a partnership with Zementis; and a data mining and statistics library from Fuzzy Logix for use in conjunction with MapReduce.
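For readers less familiar with the model behind that SQL-based interface, the classic MapReduce word count can be sketched in a few lines of plain Python. This is a conceptual illustration only, not Sybase IQ's actual SQL API; all function names are made up for the sketch:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big queries", "big results"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'big': 3, 'data': 1, 'queries': 1, 'results': 1}
```

The appeal of exposing this through standard SQL, as 15.4 does, is that the map and reduce steps run inside the database next to the data instead of in a separate cluster.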
Sybase is also now offering an Express edition of IQ, which can be used indefinitely, but for development purposes only and with a 5GB database size limit.
If you take a look at the last releases I’ve covered—take, for example, MarkLogic 5—you’ll notice a clear trend these days:
- every data-related tool integrates with Hadoop
- and/or it offers some form of parallel processing support
- there’s a (usually limited) edition for developers
Original title and link: Upcoming Sybase IQ Features Big Data, Support for Hadoop and MapReduce (© myNoSQL)
Using an MPP shared-everything architecture, the Sybase IQ 15.3 PlexQ Distributed Query Platform surpasses typical shared-nothing MPP architectures with better concurrency, self-service ad hoc queries, and independent scale-out of compute and storage resources. With this architecture, PlexQ can exceed Service Level Agreements (SLAs) through simple and flexible resource provisioning that allows nodes to be grouped together as unified images that can be assigned to different application profiles.
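A rough way to picture the difference between the two MPP styles is the sketch below. It is a minimal model under my own assumptions, not PlexQ's actual implementation; all node and row names are hypothetical:

```python
# Shared-nothing: each compute node owns a private data partition.
# A query must fan out to every node holding relevant data, and
# adding a node implies repartitioning the data.
shared_nothing = {
    "node1": ["rowA", "rowB"],
    "node2": ["rowC", "rowD"],
}

def query_shared_nothing(predicate):
    # Scatter the query to all partitions, then gather and merge.
    return sorted(row for partition in shared_nothing.values()
                  for row in partition if predicate(row))

# Shared-everything: all compute nodes see one shared store, so any
# node (or group of nodes assigned to an application profile) can
# serve any query, and compute scales independently of storage.
shared_storage = ["rowA", "rowB", "rowC", "rowD"]
compute_nodes = ["node1", "node2", "node3"]

def query_shared_everything(node, predicate):
    # Any single node can scan the shared store directly.
    assert node in compute_nodes
    return sorted(row for row in shared_storage if predicate(row))
```

In this toy model both paths return the same rows; the difference is operational — in the shared-everything case, adding `node3` required no data movement.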
Is this going against what the web, MapReduce, Hadoop, and (some) NoSQL databases are teaching us?
Update: I realized that my question above can be misinterpreted, so here are my real questions:
- How does this shared-everything model work?
- What are the pros/cons of this shared-everything approach?
Markus ‘maol’ Perdrizat
Original title and link: Sybase: Distributed Shared-everything MPP Query Processing Architecture (NoSQL databases © myNoSQL)