About Amazon as price dictator in IaaS:

It is also interesting [...] to observe the degree to which pricing strategies are clearly built around, or in relation to, Amazon. The data also suggests that exceptions to this pattern are generally enterprise market providers, most obviously IBM, which tend to target customers who historically have had different sensitivities to cost. It will be interesting to see if the public providers like Amazon and those clearly attempting to compete with them on a cost basis, like Google, have a longer term impact on more enterprise-oriented cloud providers with respect to price.

via IaaS Pricing Patterns and Trends.

Rackspace vs. Amazon

August 1st, 2012

Rackspace vs. Amazon:

“it’s not Rackspace’s goal to compete with Amazon for sheer number of developers or market revenue share.”

There’s a large segment of cloud users that want to pay for peace of mind, he said, which is where Rackspace excels. It will always remain competitive on price, but it doesn’t expect to be the low-price leader. “If somebody wants to … get the rock-bottom cheapest price, we believe there are better options than us.”

via Rackspace CEO: ‘We’re playing a different game’ than Amazon.

PaaS on Hadoop Yarn

July 25th, 2012

PaaS on Hadoop Yarn – Idea and Prototype looks at what’s missing to offer a PaaS on top of Hadoop YARN:

YARN is the next generation Hadoop MapReduce architecture. It is designed to be more flexible architecturally, improve scalability, and achieve a higher resource utilization rate, among other things. Although YARN remains similar to the old Hadoop MapReduce architecture (let’s call it Hadoop MR1), the two are different enough that most components have been rewritten and the same terminology can no longer be used for both.

In short, the Hadoop YARN architecture (also called MR2) splits the two major functions of the JobTracker of MR1 into two separate components – a central Resource Manager and a per-application Application Master. This new architecture not only supports the old MapReduce programming model, but also opens up the possibility of new data processing models.

And Monash also recently looked at Yarn.
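
To make that split a bit more concrete: a client asks the central Resource Manager for a new application and hands it a container spec for the per-application Application Master, which in turn negotiates containers for the actual workload. Below is a minimal sketch against the YARN client API of Hadoop 2.x; the Application Master command is only a placeholder, and a PaaS prototype would plug in its own AM there.

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.util.Records;

    public class YarnSubmitSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new YarnConfiguration();
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(conf);
            yarnClient.start();

            // Ask the central Resource Manager for a new application.
            YarnClientApplication app = yarnClient.createApplication();
            ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
            appContext.setApplicationName("paas-runtime-sketch");

            // Container spec for the per-application Application Master.
            // The command is a placeholder; a PaaS would launch its own AM here.
            ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
            amContainer.setCommands(
                    Collections.singletonList("/bin/echo placeholder-application-master"));

            Resource amResource = Records.newRecord(Resource.class);
            amResource.setMemory(512);      // MB for the AM container
            amResource.setVirtualCores(1);

            appContext.setAMContainerSpec(amContainer);
            appContext.setResource(amResource);

            // The Resource Manager schedules the AM; the AM then negotiates
            // further containers for the actual workload.
            ApplicationId appId = yarnClient.submitApplication(appContext);
            System.out.println("Submitted application " + appId);
        }
    }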

What would tech news be without rumours and speculation? Anyway, this could make sense from multiple angles, be it risk mitigation or simply a new setup that lets EMC and VMware each focus on the core infrastructure assets they sell to the enterprise, while moving much of the rest into a new cloud provider.

GigaOM has learned that VMware hopes to spin out some of its cloud assets, including its Cloud Foundry platform-as-a-service division and parent company EMC’s Greenplum assets into a separate company, according to sources close to the deal. The new company will also include assets of Project Rubicon, an infrastructure-as-a-service joint venture between VMware and EMC.

via VMware plans cloud spin out to keep up with Microsoft, Amazon and Google.

Cool idea, but difficult to pull off at enterprise quality if the data source providers don’t cooperate and offer stable APIs (and even then…).

It’s not easy to get Salesforce.com data to work with NetSuite data, for example. Connection Cloud is working on connectors for those popular applications as well as for Intacct, Facebook, Eloqua, Google Spreadsheets, Zuora and others, to connect that data to front ends like Jaspersoft, Tableau, Yellowfin, Microsoft Excel and Access, Zendesk scripting, Appcelerator mobile app building tools and Google Appscript. The goal is to let businesses funnel their data from SaaS repositories into their analytical tool of choice which they can use to parse it and combine with other data as needed.

via Startup Connection Cloud aims to free your SaaS data.
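
None of this is Connection Cloud’s actual API, but the pattern itself is easy to sketch: each connector normalizes its source into plain tabular rows, and a small export step funnels those rows to whatever front end the business already uses. A hypothetical sketch, with all names invented:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Hypothetical connector layer; names are invented for illustration only.
    interface SaasConnector {
        // e.g. "salesforce", "netsuite", "zuora"
        String sourceName();

        // Pull a normalized, tabular slice of the source (rows of column -> value),
        // so front ends like Tableau or Excel can treat every source the same way.
        List<Map<String, Object>> fetch(String entity, Map<String, String> filter);
    }

    class CsvExport {
        // Funnel normalized rows into something any analytical tool can open.
        static String toCsv(List<Map<String, Object>> rows) {
            if (rows.isEmpty()) {
                return "";
            }
            List<String> columns = new ArrayList<>(rows.get(0).keySet());
            StringBuilder sb = new StringBuilder(String.join(",", columns)).append('\n');
            for (Map<String, Object> row : rows) {
                for (int i = 0; i < columns.size(); i++) {
                    if (i > 0) {
                        sb.append(',');
                    }
                    sb.append(String.valueOf(row.get(columns.get(i))));
                }
                sb.append('\n');
            }
            return sb.toString();
        }
    }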

According to Monash:

every sufficiently large enterprise needs to be cognizant of at least 7 kinds of database challenge.

via Database diversity revisited. I can relate to that for the Financial Industry (not having had much exposure to other industries).

I wonder if there should be an 8th challenge: providing a cost-effective (relational) database service for the 60-80% of (OLTP) databases that don’t have any special requirements, where the traditional RDBMSs require way too much DBA overhead and your typical Oracle or SQL Server license costs way too much. I’m hoping that the likes of NuoDB and Xeround (AFAIK we still can’t get them for our internal cloud, but there’s still hope), but also SQL Azure or database.com (external cloud), can fill that gap eventually.
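
Part of the appeal of such a service is that the application side stays completely boring: a plain JDBC connection to a managed endpoint, with the DBA overhead on the provider’s side. A minimal sketch, assuming a MySQL-compatible service (Xeround speaks the MySQL protocol); the hostname, schema and credentials are invented, and the MySQL Connector/J driver is assumed to be on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CloudDbSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical managed MySQL-compatible endpoint; only the URL would
            // change if the database moved from an internal to an external cloud.
            String url = "jdbc:mysql://example-instance.cloud-db.example.com:3306/app";
            try (Connection con = DriverManager.getConnection(url, "app_user", "secret");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT id, name FROM customers WHERE region = ?")) {
                ps.setString(1, "EMEA");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }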

NuoDB Closes $10 Million Series B With Gary Morgenthaler. These guys have long been on my “Cloud Database” watchlist; now’s the time for them to reach out and grow. GigaOM has some background on their roots among database industry veterans (turned venture capitalists…).

Cool and innovative idea; it makes me want to play with it… if only I had an idea what to use it for.

Once we connect 50 billion devices to the web [...], what will those devices talk to? Chicago Startup Tempo hopes those sensors will take to its database as a service — depositing their tiny bits of time series data inside its custom database. The company [...] has built a specialty database for data that consists of two items, time and a data point.

via Tempo wants to be the database at the center of the Internet of things.
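
The data model really is that narrow: a series identifier plus (timestamp, value) pairs. This is not Tempo’s actual API, just a sketch of what a sensor write might look like against a hypothetical HTTP/JSON endpoint:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class SensorWriteSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint for appending points to a series; not Tempo's real API.
            URL url = new URL("https://api.example-tsdb.com/v1/series/thermostat-42/data");
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type", "application/json");
            con.setDoOutput(true);

            // The entire record: a timestamp and a data point.
            String point = "[{\"t\": \"2012-07-25T12:00:00Z\", \"v\": 21.5}]";
            try (OutputStream out = con.getOutputStream()) {
                out.write(point.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + con.getResponseCode());
        }
    }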

It was about time the global outsourcers got serious about diversifying into IaaS and PaaS.

“Last year we began offering multi-tenant, virtual server hosting to our existing clients as part of large, integrated infrastructure management engagements. Clients have quickly adopted the service, and today, 15% of the servers hosted in Wipro’s data centers are delivered in an as-a-service model,” stated Michael Wilczak, SVP – Strategy, Datacenter Services, Wipro Technologies. “Now, we are expanding the service portfolio, deploying the platform throughout the US, Europe and India, and marketing the solutions as discrete service offerings under the Wipro iStructure service line.”

via Wipro Building Global Utility Computing Platform for Enterprise Clients.

Inspired by Amazon’s recent downtime:

As relates to disaster recovery of databases, public cloud customers need three things:

  • Safe Data Guarantees: To have live, fully up to date and fully consistent copies of all your databases in a location of your own choice. That might be your corporate datacenter, a portable USB drive or an archive facility in a bunker under Nebraska. It might be more than one location.
  • Continuity of Service: To have a database system that runs concurrently in multiple datacenters and/or cloud availability zones with guarantees of consistency in all locations, and resilience to failure of any of those locations.
  • Capacity Recovery: To have the ability to add computers to a running database system to rebuild capacity that may have been lost due to a datacenter or region going down. And to have a database that can restart rapidly in a new location from a Safe Copy of the database (see point 1), should all datacenters fail.

via the NuoDB blog: Amazon Downtime – Designing for Failure.
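
The “Continuity of Service” point above usually implies a client that knows about more than one location. This is not NuoDB-specific, just a generic sketch of client-side failover across database endpoints in different datacenters; URLs and credentials are invented:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.Arrays;
    import java.util.List;

    public class FailoverConnectSketch {
        // Hypothetical endpoints in separate datacenters / availability zones.
        private static final List<String> ENDPOINTS = Arrays.asList(
                "jdbc:mysql://db.us-east.example.com:3306/app",
                "jdbc:mysql://db.eu-west.example.com:3306/app");

        // Try each location in turn: resilience to the failure of any single one.
        static Connection connectToAnyLocation(String user, String pass) throws SQLException {
            SQLException last = null;
            for (String url : ENDPOINTS) {
                try {
                    return DriverManager.getConnection(url, user, pass);
                } catch (SQLException e) {
                    last = e; // this location is down, try the next one
                }
            }
            throw last;
        }

        public static void main(String[] args) throws SQLException {
            try (Connection con = connectToAnyLocation("app_user", "secret")) {
                System.out.println("Connected to " + con.getMetaData().getURL());
            }
        }
    }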