January 25th, 2014
I’m going to disrupt the Silicon Valley script. You know the one. Every talk or article coming out of Silicon Valley follows the prescribed template: start with a dazzling description of awesome new digital technologies and then proceed to explore all the wonderful benefits and opportunities that these technologies will bring to us.
I’m going to do something different. I want to explore the dark side of these technologies. The side that very few tech evangelists want to acknowledge, much less talk about.
What do I mean? It’s the fact that all of these amazing digital technologies are coming together to create a world of mounting performance pressure for all of us, one where the performance pressure will continue to grow and expand on a global basis for the foreseeable future, rather than plateau and recede. Let me repeat: this pressure is not going away. Far from it. It will continue to intensify. If we make the mistake of standing still, we will fall farther and farther behind.
Put it all together and it spells out a growing challenge. How do we keep up? How do we learn faster? How do we prepare ourselves for the cascades of unexpected events coming our way? How do we avoid mounting anxiety and the looming risk of marginalization and burn-out?
My last post on The Dark Side of Technology definitely seems to have hit a responsive chord. Many of us see evidence of this dark side of technology every day in the world around us. But it doesn’t have to be that way.
In my post last week, I made the case that we’ll only be able to overcome the dark side of technology by re-integrating passion and profession. But this is only the first step. If this is all we do, we’ll be doomed to lives of frustration and discontent. Here’s the problem.
It’s fitting that we reach this third installment of my “What Is To Be Done?” series on Martin Luther King day. He’s an icon of the power of narrative and its role in building movements that can fundamentally change how we live and work.
Something to think about over the weekend (or longer).
May 28th, 2013
I had earmarked this series to write about and just wanted to wait for part four to be published… then somehow missed it entirely in my feed, until I remembered and decided to check back today. So, without further ado, here you go!
May 27th, 2013
GCE explained quick and dirty: The Google Cloud Platform Q&A:
While the bulk of the attention at Google I/O last week, at least in terms of keynote airtime, was devoted to improvements to user-facing projects like Android and Chrome, the Cloud team had announcements of their own. Most obviously, the fact that the Google Compute Engine (GCE) had graduated to general availability. Both because it’s Google and because the stakes in the market for cloud services are high, there are many questions being asked concerning Google’s official entrance to the market. To address these, let’s turn to the Q&A.
At RightScale Compute last month, Evan Anderson, a technical lead on the Google Compute Engine (GCE) team, gave an introduction to the Google Cloud Platform, the company’s flagship cloud computing offering, and talked about how the RightScale cloud management platform complements GCE’s functionality. Anderson focused on two of the core components of Google Cloud Platform: Compute and Storage. The Compute component includes GCE, which is an IaaS platform, and App Engine, a platform for developing and hosting web applications. The Storage offering includes Cloud Storage and Cloud SQL.
Ignore the RightScale marketing…
May 25th, 2013
Fascinating interview with a (former) blackhat:
One ‘blackhat,’ who asked to be called Adam and whom I have spoken to a lot, says he has recently decided to go legit. During this life-changing transition, he offered to give an interview so that the rest of the security community could learn from his point of view. Not every blackhat wants to talk, for obvious reasons, so this is a rare opportunity to see the world through his eyes, even if we’re unable to verify any of the claims made. [...]
“I like to watch the news; especially the financial side of it. Say if a target just started up and it suddenly sky rocketed in online sales that’ll become a target. Most of these websites have admins behind them who have no practical experience of being the bad guy and how the bad guys think. This leaves them hugely vulnerable.”
“One thing that did hugely affect bot infection rates was the mass removal of Java. When news of a java 0-day gets published people panic (rightly so) and un-install it or patch but as we all know java never stays secure for long.”
“It’s super hard to gather evidence for the crime, and even so the money is impossible to find. Ten or eleven mil over 10-13 years for a 10-15 year sentence. I can’t really say what it’d be like without freedom as I’ve always had it so I can’t imagine losing it.”
May 16th, 2013
I couldn’t help but think of these two together, because I happened to read them within hours.
I think that the data revolution is just getting started. Datasets are currently being, or have already been, collected that contain, hidden in their complexity, important truths waiting to be discovered. These discoveries will increase the scientific understanding of our world. Statisticians should be excited and ready to play an important role in the new scientific renaissance driven by the measurement revolution.
And Stephen Few ranting about the term “Big Data” in A More Thoughtful but No More Convincing View of Big Data:
I have a problem with Big Data. As someone who makes his living working with data and helping others do the same as effectively as possible, my objection doesn’t stem from a problem with data itself, but instead from the misleading claims that people often make about data when they refer to it as Big Data. I have frequently described Big Data as nothing more than a marketing campaign cooked up by companies that sell information technologies either directly (software and hardware vendors) or indirectly (analyst groups such as Gartner and Forrester).
Isn’t Big Data just a (marketing) term for the category of data sets that are difficult to store and analyze with traditional tools? Obviously what size and tools we’re talking about is changing over time…
May 6th, 2013
I am lazy. If there is a shortcut I will take it. I love feeling accomplished, but I don’t always love the hard work it takes to get there.
In the past, those hurricane days were the exception. And when those days were over, I would be amazed at my achievements and ponder: “If only I could do this every day, how organized/successful/happy would I be?” I would find my weekends had gone by and I hadn’t really done anything. I needed to make a change.
Here’s your May dose of productivity and anti-procrastination posts…
May 5th, 2013
Good wrap up of the big iron storage industry.
EMC has been gaining marketshare over the last several years. The world’s largest data storage company is getting larger. [...] EMC’s position is analogous to IBM’s in the 70s: EMC has the most successful scale-up OLTP arrays; offers better support; and keeps adding useful features. [...] Expect to see several of the dwarves leave the big iron storage array business. Let’s look at each of the competitors in turn.
EMC vs. Oracle, Hitachi, Dell, NetApp, HP, IBM and Fujitsu.
May 2nd, 2013
Eric Brewer about CAP Twelve Years Later: How the “Rules” Have Changed:
The CAP theorem asserts that any networked shared-data system can have only two of three desirable properties. However, by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three. In the decade since its introduction, designers and researchers have used (and sometimes abused) the CAP theorem as a reason to explore a wide variety of novel distributed systems. The NoSQL movement also has applied it as an argument against traditional databases. [...]
The “2 of 3” formulation was always misleading because it tended to oversimplify the tensions among properties. Now such nuances matter. CAP prohibits only a tiny part of the design space: perfect availability and consistency in the presence of partitions, which are rare. Although designers still need to choose between consistency and availability when partitions are present, there is an incredible range of flexibility for handling partitions and recovering from them. The modern CAP goal should be to maximize combinations of consistency and availability that make sense for the specific application. Such an approach incorporates plans for operation during a partition and for recovery afterward, thus helping designers think about CAP beyond its historically perceived limitations.
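Brewer’s point — that partitions are rare but must be planned for, both during the partition and in recovery afterward — can be illustrated with a toy replicated register. This is purely a hypothetical sketch (none of these names come from the article): a CP-style write refuses service on a partitioned replica, an AP-style write stays available and defers reconciliation, and recovery merges the divergent state.

```python
class Replica:
    """A trivially small replica: one value plus a log of partitioned writes."""
    def __init__(self, name):
        self.name = name
        self.value = 0
        self.log = []  # writes accepted while partitioned, kept for recovery

def cp_write(replica, value, partitioned):
    # CP choice: sacrifice availability -- reject writes during a partition,
    # so replicas can never diverge.
    if partitioned:
        return False
    replica.value = value
    return True

def ap_write(replica, value, partitioned):
    # AP choice: stay available -- accept the write and remember it,
    # accepting temporary inconsistency between replicas.
    replica.value = value
    if partitioned:
        replica.log.append(value)
    return True

def recover(a, b):
    # The partition-recovery step Brewer emphasizes: merge divergent state.
    # Toy merge rule (an assumption for this sketch): keep the larger value.
    merged = max(a.value, b.value)
    a.value = b.value = merged
    a.log.clear()
    b.log.clear()
    return merged
```

The interesting part is not the merge rule (real systems use version vectors, CRDTs, or application-level reconciliation) but the shape of the trade-off: the CP path returns an error during the partition, while the AP path returns success and pushes the cost into `recover`.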
And Todd Hoff recently wrote about a later presentation Brewer gave, which motivated me to finally blog about the above article… Myth: Eric Brewer on Why Banks are BASE Not ACID – Availability Is Revenue:
In NoSQL: Past, Present, Future, Eric Brewer has a particularly fine section explaining the often hard-to-understand ideas of BASE (Basically Available, Soft state, Eventually consistent), ACID (Atomicity, Consistency, Isolation, Durability), and CAP (Consistency, Availability, Partition tolerance), in terms of a pernicious, long-standing myth about the sanctity of consistency in banking.
Some good examples about banking and ACID requirements… or the lack thereof, and how that risk is contained.
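The classic illustration of how that risk is contained is the ATM: during a partition the machine cannot see the true balance, so rather than refusing service (losing revenue), it stays available but caps its exposure, and the bank compensates afterward — an overdraft fee turns the consistency violation into a business rule. A minimal sketch of that idea; the limit and fee amounts are invented for illustration:

```python
PARTITION_WITHDRAWAL_LIMIT = 200  # assumed risk cap while disconnected
OVERDRAFT_FEE = 25                # assumed compensating action

def atm_withdraw(balance, amount, partitioned):
    """Return (approved, new_balance).

    During a partition the ATM cannot check the real balance, so it stays
    available but limits how much it can lose; the balance may go negative.
    """
    if partitioned:
        if amount > PARTITION_WITHDRAWAL_LIMIT:
            return False, balance
        return True, balance - amount  # may overdraw -- fixed up later
    if amount > balance:
        return False, balance
    return True, balance - amount

def reconcile(balance):
    """After the partition heals: compensate instead of forbid.

    If the available-first policy overdrew the account, apply an
    overdraft fee rather than rejecting the already-completed withdrawal.
    """
    if balance < 0:
        return balance - OVERDRAFT_FEE
    return balance
```

The design choice mirrors Brewer’s slogan that availability is revenue: the error path (declining the withdrawal) costs the bank a transaction, while the inconsistency path has a bounded, priced cost.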
April 28th, 2013
Few people in Silicon Valley wear as many hats as Aneel Bhusri. Currently known primarily for his role as co-CEO of Workday, the cloud-based human resources software company that floated in an IPO last year, he also maintains an active role as a partner at venture capital firm Greylock Partners.
On leveraging your architecture and your customers’ data to expand into new markets: Financials and Big Data.
April 28th, 2013
Not everybody knows How to Tell a Story with Data:
So how does a visual designer tell a story with a visualization? The analysis has to find the story that the data supports. Traditional journalism does this all the time, and journalists have become very good at storytelling with visualization via infographics. In that vein, here are some journalistic strategies on telling a good story that apply to data visualizations as well.
Stephen Wolfram is one of them, after a year of collecting Facebook users’ data. Data Science of the Facebook World:
More than a million people have now used our Wolfram|Alpha Personal Analytics for Facebook. And as part of our latest update, in addition to collecting some anonymized statistics, we launched a Data Donor program that allows people to contribute detailed data to us for research purposes.
Well done, and interesting read.