Caltech and the University of Victoria have broken the world record for sustained computer-to-computer data transfer over a network. Between the SuperComputing 2011 (SC11) convention in Seattle and the University of Victoria Computer Centre in Canada (a distance of 134 miles, or 217 km), a transfer rate of 186 gigabits per second was achieved over a 100 Gbps bidirectional fiber optic link: 98 Gbps in one direction, 88 Gbps in the other.
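To put that rate in perspective, a quick back-of-the-envelope calculation (my arithmetic, not a figure from the announcement): sustained for a full day,

$$186\ \text{Gbit/s} \times 86{,}400\ \text{s} \approx 1.6 \times 10^{16}\ \text{bits} \approx 2\ \text{petabytes per day}.$$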
From Stanford, lightning-fast, efficient data transmission:
A team at Stanford’s School of Engineering has demonstrated an ultrafast nanoscale light-emitting diode (LED) that consumes orders of magnitude less power than today’s laser-based systems (“Our device is some 2,000 times more energy efficient than the best devices in use today,” said Jelena Vuckovic) and can transmit data at 10 billion bits per second.
Then from Intel, the one-teraflops Knights Corner chip:
The Knights Corner chip acts as a co-processor, taking over some of the most complicated tasks from the computer’s central processing unit (CPU). It packs more than 50 cores, or individual processors, onto a single piece of silicon.
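Conceptually this is the familiar offload pattern: the main program keeps running on the CPU and hands the heavy, parallelizable kernels to a pool of simpler workers. A loose analogy in Python (standard library only; the kernel, the workload, and the 50-way split are made up to mirror the chip’s core count, and this is not Intel’s actual programming model):

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_kernel(chunk):
    # Stand-in for the compute-intensive work a co-processor
    # would take over from the CPU.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the work 50 ways, echoing Knights Corner's 50+ cores.
    chunks = [data[i::50] for i in range(50)]

    # The main process (the "CPU") offloads the chunks to the
    # worker pool (the "co-processor") and collects the results.
    with ProcessPoolExecutor(max_workers=50) as pool:
        total = sum(pool.map(heavy_kernel, chunks))

    print(total)
```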
And again from Stanford, a new battery electrode:
A team of researchers from Stanford has developed a new battery electrode that can survive 40,000 charge cycles. That’s about a hundred times more than a normal lithium-ion battery, and enough to make it usable for somewhere between 10 and 30 years.
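That lifetime estimate hinges on how often the electrode is cycled; assuming a few full charge cycles per day (my assumption, not a figure from the paper), the arithmetic is

$$\text{lifetime} \approx \frac{40{,}000\ \text{cycles}}{c \times 365\ \text{cycles per year}},$$

which gives roughly 27 years at $c = 4$ cycles a day and roughly 11 years at $c = 10$.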
Last, but not least, IBM is looking into GPU-accelerated databases.
And if that’s where computing is headed, then I have a couple of concerns related to the distribution of big data:
Who decides/regulates data ownership?
While you might have granted one company rights to your data, I’m pretty sure that in most cases details like selling it for profit have not been agreed upon.
Who decides/regulates the levels of privacy on the data set?
As Facebook’s history proves, privacy means different things to different entities. And while some ‘anonymization’ might seem sufficient at a fine-grained level, with large data sets things may be completely different (see the sketch at the end of this post).
Who can quantify and/or guarantee the quality of the data sets?
Leaving aside the various ‘anonymization’ filters applied to ‘clean’ data, there can be other causes that lower the quality of the data. Who can clarify, detail, and measure the quality of such big data sets?
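To make the anonymization point concrete: a standard way to quantify how exposed a ‘cleaned’ data set still is, is k-anonymity, the size of the smallest group of records sharing the same quasi-identifiers (ZIP code, birth year, gender, and the like). A minimal sketch; the field names and records are hypothetical:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return k: every combination of quasi-identifier values occurs
    in at least k records. A small k means individual rows are easy
    to single out, however 'anonymized' the data set looks."""
    groups = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return min(groups.values())

# Hypothetical 'anonymized' records: names stripped, yet the
# remaining columns can still single a person out.
records = [
    {"zip": "98101", "birth_year": 1975, "gender": "F", "diagnosis": "flu"},
    {"zip": "98101", "birth_year": 1975, "gender": "F", "diagnosis": "asthma"},
    {"zip": "98102", "birth_year": 1980, "gender": "M", "diagnosis": "flu"},
]

# Prints 1: the last record is unique on (zip, birth_year, gender),
# so anyone who knows those three facts about a person can
# re-identify their row.
print(k_anonymity(records, ("zip", "birth_year", "gender")))
```

And this is exactly the scale effect: on a handful of rows almost any grouping looks safe, but the more attributes a large data set carries, the more combinations become unique and the faster k collapses to 1.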