Key questions when selecting an analytic RDBMS
March 21st, 2013
Key questions when selecting an analytic RDBMS:
Assuming you know that you really want to manage your analytic database with a relational DBMS, the first questions you ask yourself could be:
- How big is your database? How big is your budget?
- How do you feel about appliances?
- How do you feel about the cloud?
- What are the size and shape of your workload?
- How fresh does the data need to be?
I know I’m late, but these questions will still be valid even a year from now.
DBSeer: Making Database Cloud Computing more Efficient
March 15th, 2013
Making cloud computing more efficient:
For database-driven applications, new software could reduce hardware requirements by 95 percent while actually improving performance.
DBSeer – an open source machine learning algorithm to improve DB performance in VMs, being developed at MIT.
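To make the idea a bit more concrete, here’s a minimal toy sketch of the general approach — learn a model from workload statistics to resource usage, then use it to decide how many database tenants can share one machine. The names, numbers, and packing heuristic below are all invented for illustration; this is not DBSeer’s actual code or model.

```python
# Toy illustration of model-driven database consolidation (NOT DBSeer's actual code).
# Idea: fit a simple model from workload statistics (transactions/sec by type) to CPU
# usage, then use the model to predict how many tenant workloads fit on one machine.
import numpy as np

# Hypothetical per-tenant measurements taken in isolation:
# columns = [new_order_tps, payment_tps, report_tps], target = CPU%.
workload_stats = np.array([
    [120.0,  80.0, 2.0],
    [ 60.0,  40.0, 1.0],
    [200.0, 150.0, 5.0],
    [ 30.0,  20.0, 0.5],
])
cpu_percent = np.array([35.0, 18.0, 60.0, 9.0])

# Least-squares fit: cpu ≈ workload_stats @ coeffs
coeffs, *_ = np.linalg.lstsq(workload_stats, cpu_percent, rcond=None)

def predicted_cpu(tenant_rows):
    """Predicted combined CPU% if these tenant workloads share one VM."""
    return float(np.sum(np.vstack(tenant_rows) @ coeffs))

def consolidate(tenants, cpu_cap=80.0):
    """First-fit packing: put each tenant on the first VM whose predicted CPU stays under the cap."""
    vms = []
    for t in tenants:
        for vm in vms:
            if predicted_cpu(vm + [t]) <= cpu_cap:
                vm.append(t)
                break
        else:
            vms.append([t])
    return vms

tenants = [row for row in workload_stats]
print(f"{len(tenants)} tenants packed onto {len(consolidate(tenants))} VM(s)")
```

Presumably the real system models far more than CPU (I/O, memory, latency, contention), but packing workloads more tightly based on learned models is the sort of thing behind the headline hardware-reduction number.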
One database to rule them all?
February 22nd, 2013
One database to rule them all?:
Perhaps the single toughest question in all database technology is: Which different purposes can a single data store serve well? — or to phrase it more technically — Which different usage patterns can a single data store support efficiently?
Wouldn’t it be great…
Cloud databases, or database on the cloud?
January 29th, 2013
Cloud databases, or database on the cloud?:
NuoDB has today kicked off that debate with the launch of its Cloud Data Management System and 12 rules for a 21st century cloud database. NuoDB’s 12 rules appear pretty sound to me – in fact you could argue they are somewhat obvious.
Monash has a less favorable view of the 12 rules, or maybe just voices his disagreement in stronger words… an interesting read anyway, as he compares them against Codd’s 12 rules for relational databases.
The key piece, however, is Matt Aslett’s note about cloud databases; the distinction he draws is one I’ve been struggling to sell in the enterprise:
Either way, I believe that this is the right time to be debating what constitutes a “cloud database”. Databases on the cloud are nothing new, but these are existing relational database products configured to run on the cloud.
In other words, they are databases on the cloud, not databases of the cloud. There is a significant difference between spinning up a relational database in a VMI on the cloud versus deploying a database designed to take advantage of, enable, and be part of, the cloud.
To me, a true cloud database would be one designed to take advantage of and enable elastic, distributed architecture. NuoDB is one of those, but it won’t be the only one. Many NoSQL databases could also make a claim, albeit not for SQL and ACID workloads.
That’s the thing worth thinking about: how much of the technology making a DBMS a cloud database is actually new, and how much is just old technology put to new uses in mainstream DBMSs?
Updated Database Landscape Map
January 5th, 2013
Updated database landscape graphic:
I recently published an updated version but noted that there were a group of database vendors that had emerged in 2012 that didn’t easily fit into the segments we’d created.
The updated version is a big improvement… a good overview for anybody interested in understanding the DB world beyond the one or two products or technologies they already know.
Oracle 12c Speculation
August 6th, 2012
Oracle RDBMS 12c speculation by Curt Monash: Thoughts on the next releases of Oracle and Exadata. Curious to see how much they’ll really be able to pull off to make a credible cloud offering… or whether Oracle’s architecture and code base are just too much Enterprise and not enough Cloud.
Why the days are numbered for Hadoop as we know it
July 12th, 2012
GigaOm kicked off some good discussion in Why the days are numbered for Hadoop as we know it:
Hadoop is everywhere. For better or worse, it has become synonymous with big data. In just a few years it has gone from a fringe technology to the de facto standard. Want to be big data or enterprise analytics or BI-compliant? You better play well with Hadoop.
It’s therefore far from controversial to say that Hadoop is firmly planted in the enterprise as the big data standard and will likely remain firmly entrenched for at least another decade. But, building on some previous discussion, I’m going to go out on a limb and ask, “Is the enterprise buying into a technology whose best day has already passed?”
Realtime is king. Low latency is queen. Or the other way around 😉 I don’t think anybody would refuse getting their results faster; they’re just not complaining about Hadoop and MapReduce today because they don’t know better. Or rather, because better solutions aren’t available at the right price point yet.
What will the Internet look like in 2020?
July 12th, 2012
Not sure how I suddenly stumbled upon this one from early 2011, but it’s interesting nevertheless. A database developer’s take on the future internet, from Couchbase co-founder J Chris Anderson:
Remember “web accelerators”? They’ll be back with a vengeance. So when you pull out your screen thingy, it’ll already have a copy of Hacker News and all the articles it links to and all the articles they link to, that it fetched in the background the night before. New updates will trickle in in real time.
It’ll be interesting, because the further you get from your habitual browsing patterns, the slower the net will get. As you start using a new site more frequently, your browser will up its fetch priority, so that it will already be on your device too, with updates streamed in in real time.
The upshot is that everyone’s phones will have a copy of the slice of the internet they care about on it. The good news is that the interoperability required to make this happen will make web-app vendor lock-in a thing of the past. Eg: once you have all of your flickr photos on all your devices, and they are synchronized around in a standard way, if flickr were to shut down, you could just sync them to photobucket instead. (Or directly to your friends, if you prefer.)
via J Chris Anderson’s answer to Future of Internet: What will the Internet look like in 2020? – Quora.
Realtime incremental updates to data stores are coming big time, whether that’s caches as above, or analytic data stores as in Monash’s ELT example that I wrote about the other day.
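Just to sketch the kind of mechanism Anderson describes, here’s a minimal, hypothetical illustration of frequency-driven prefetching: sites you open more often get a higher background-refresh priority. The class, the scoring rule, and the site names are all invented for the example — this is not any real browser or Couchbase API.

```python
# Hypothetical sketch of frequency-driven background prefetching: the more often a
# site is visited, the higher its priority for background refresh.
from collections import Counter

class PrefetchQueue:
    def __init__(self):
        self.visits = Counter()   # how often the user opens each site
        self.cache = {}           # site -> locally stored copy (stale or fresh)

    def record_visit(self, site):
        """Called whenever the user opens a site; bumps its prefetch priority."""
        self.visits[site] += 1

    def background_refresh(self, fetch, budget=3):
        """Refresh the `budget` most-visited sites using the given fetch function.

        `fetch` stands in for the real network call (e.g. an HTTP GET)."""
        for site, _count in self.visits.most_common(budget):
            self.cache[site] = fetch(site)

# Usage: visits drive priority; the nightly refresh only touches the top sites.
q = PrefetchQueue()
for site in ["news.ycombinator.com"] * 5 + ["flickr.com"] * 2 + ["example.org"]:
    q.record_visit(site)
q.background_refresh(fetch=lambda site: f"<cached copy of {site}>", budget=2)
print(sorted(q.cache))   # ['flickr.com', 'news.ycombinator.com']
```

A real implementation would presumably stream deltas into the local copies rather than refetch whole pages, which is exactly the “realtime incremental updates” point above.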
Database Diversity Revisited
July 10th, 2012
According to Monash:
every sufficiently large enterprise needs to be cognizant of at least 7 kinds of database challenge.
via Database diversity revisited. I can relate to that for the Financial Industry (not having had much exposure to other industries).
I wonder if maybe there should be an 8th challenge: to provide a cost-effective (relational) database service for the 60–80% of (OLTP) databases that don’t have any special requirements, where the traditional RDBMSs require way too much DBA overhead, and your typical Oracle or SQL Server license costs way too much. I’m hoping that the likes of NuoDB and Xeround (AFAIK we still can’t get them for our internal cloud, but there’s still hope), but also SQL Azure or database.com (external cloud), can eventually fill that gap.
NuoDB Closes $10 Million Series B
July 9th, 2012
NuoDB Closes $10 Million Series B With Gary Morgenthaler. These guys have long been on my “Cloud Database” watchlist; now’s the time for them to reach out and grow. GigaOm has some background on their roots with database industry veterans (turned venture capitalists…).