From the Open Data Manual:
Open data needs to be ‘technically’ open as well as legally open. Specifically, the data needs to be:
- Available — at no more than a reasonable cost of reproduction, preferably for free download on the Internet. Summary: publish your information on the Internet wherever possible.
- In bulk. The data should be available as a whole (a web API or service may also be very useful but is not a substitute for bulk access)
- In an open, machine-readable format. Machine-readability is important because it facilitates reuse: for example, tables of figures in a PDF can be read easily by humans but are very hard for a computer to use, which greatly limits the ability to reuse that data.
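The machine-readability point can be made concrete: a table published as CSV is directly usable with a few lines of standard-library code, whereas the same table in a PDF would require fragile text extraction. A minimal sketch (the data here is illustrative, not from the manual):

```python
import csv
import io

# A small table published as CSV -- an open, machine-readable format.
# (These figures are fabricated for illustration.)
csv_text = """city,population
Berlin,3644826
Madrid,3223334
"""

# One call to the standard library turns the text into usable records.
rows = list(csv.DictReader(io.StringIO(csv_text)))
total = sum(int(r["population"]) for r in rows)
```

Extracting the same two rows from a PDF rendering of that table would need a layout-aware parser and still be error-prone, which is exactly the reuse barrier the manual describes.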
Sir Tim Berners-Lee’s linked open data star scheme provides an unambiguous way to categorize open data. And while I’m on the subject of open data, there’s also the Open Data Protocol (OData), which is meant to enable the creation of HTTP-based data services.
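The Open Data Protocol exposes data sets as addressable HTTP resources, with system query options such as `$filter`, `$orderby`, and `$top` encoded in the URL. A sketch of constructing such a request URL, assuming a hypothetical service root and entity set (no real endpoint is implied):

```python
from urllib.parse import urlencode

# Hypothetical OData service root and entity set -- illustrative only.
service_root = "https://example.com/odata"
entity_set = "Cities"

# Standard OData system query options: filter, sort, and limit results.
options = {
    "$filter": "Population gt 1000000",
    "$orderby": "Population desc",
    "$top": "10",
}
url = f"{service_root}/{entity_set}?{urlencode(options)}"
```

The point of the protocol is that any HTTP client can compose queries like this against a conforming service, without a bespoke API for each data set.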
One stop shop for Data. Get all data you need for your insights: trusted commercial and premium public domain data.
Original title and link: Big Data Marketplace: Windows Azure Marketplace DataMarket (NoSQL databases © myNoSQL)
And if that will be the case, then I have a couple of concerns related to the distribution of big data:
who decides/regulates data ownership?
While you might have granted one company rights to your data, I’m pretty sure that in most cases details like reselling it for profit have not been agreed upon.
who decides/regulates the levels of privacy on the data set?
As proven by Facebook’s history, privacy means different things to different entities. And while some ‘anonymization’ might seem sufficient at a fine-grained level, things may be completely different when talking about large data sets.
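The classic illustration of why record-level ‘anonymization’ fails at scale is re-identification through quasi-identifiers: even with names removed, combining a few innocuous attributes can single out one record. A toy sketch (all records here are fabricated):

```python
# 'Anonymized' records: names stripped, but quasi-identifiers retained.
# All data is fabricated for illustration.
records = [
    {"zip": "02138", "birth_year": 1980, "sex": "F", "diagnosis": "flu"},
    {"zip": "02138", "birth_year": 1980, "sex": "M", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1975, "sex": "F", "diagnosis": "flu"},
]

def matches(record, zip_code, birth_year, sex):
    """True if the record matches all three quasi-identifiers."""
    return (record["zip"], record["birth_year"], record["sex"]) == (
        zip_code, birth_year, sex)

# Public knowledge about one person (e.g. from a voter roll) is enough
# to isolate their 'anonymous' record when the combination is unique.
hits = [r for r in records if matches(r, "02138", 1980, "F")]
```

At the scale of a marketplace data set, the number of such unique attribute combinations grows, which is why anonymization that looks adequate on a small sample can break down completely.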
who can quantify and/or guarantee the quality of the data sets?
Leaving aside the different ‘anonymization’ filters applied to ‘clean’ data, there can be other causes that lower the quality of the data. Who can clarify, detail, and measure the quality of such big data sets?
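One dimension of data quality that at least can be quantified is completeness: the fraction of non-missing values per field. A sketch of such a check, with illustrative field names and records:

```python
def completeness(rows, field):
    """Fraction of rows where `field` is present and non-empty."""
    filled = sum(1 for r in rows if r.get(field) not in (None, ""))
    return filled / len(rows)

# Illustrative records with some missing values.
rows = [
    {"name": "a", "price": 10},
    {"name": "b", "price": None},
    {"name": "", "price": 12},
    {"name": "d", "price": 9},
]
```

Metrics like this answer the easy part of the question; accuracy and freshness are much harder to guarantee, which is exactly why a marketplace would need someone accountable for them.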