For about as long as there have been computer networks, administrators have fought to keep those networks up and running. It is a continuous battle against faulty hardware, poorly written software, unreliable connectivity and random acts of God. With the emergence of cloud computing, we are for the first time close to a computing environment where we can focus less on keeping our applications up and more on making them run efficiently and effectively.
In the era of cloud computing, uptime guarantees and service level agreements (SLAs) have started to become standard requirements for most cloud providers. Google, Amazon, and Microsoft have all begun to implement some form of SLA, in an attempt to give their cloud users the confidence to use these systems in place of more common in-house alternatives. The common goal for most of these cloud platforms is to build for what I consider the myth of five nines. (Five nines meaning 99.999% availability, which translates to a total downtime of approximately five minutes and fifteen seconds per year.) The problem with five nines is that it's a meaningless goal which can be manipulated to mean whatever you need it to mean.
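To put that figure in perspective, here is a minimal sketch (my own, not taken from any provider's SLA) that works out the yearly downtime budget for a given number of nines:

```python
# Rough downtime budget per year for a given availability level.
# Using a 365.25-day year for illustration.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_budget(nines: int) -> float:
    """Allowed downtime in minutes per year for N nines of availability."""
    availability = 1 - 10 ** (-nines)      # e.g. 5 nines -> 0.99999
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: about {downtime_budget(n):.1f} minutes of downtime per year")

# Five nines works out to roughly 5.3 minutes per year --
# the "five minutes and fifteen seconds" mentioned above.
```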
In the case of a physical failure such as Flexiscale's recent outage, the hardware downtime might be small, but the time to restore from a backup might be considerably longer. A minor cloud failure could trigger a cascading series of software failures, causing further application outages of hours or even days for those who depend on the availability of the given cloud. In other words, your cloud may achieve five nines, but the application hosted on it doesn't.
Lately, a number of people in the cloud computing community have started to discuss alternatives to the dreaded five nines concept and to look at ways that cloud based infrastructures could be configured and deployed in a manner that is proactive rather than reactive to disasters. There is a growing consensus that cloud based disaster recovery may very well be the "killer app" for cloud computing. To achieve this, we need to start creating reference architectures and models that assume failure. Ones that don't need to worry about when the next disaster will happen, just that it will happen, and that when it does, it's going to be business as usual.
In a recent conversation with Alan Gin, founder of a super secret stealth firm called Zeronines, Alan described an interesting philosophy. He said the problem with most disaster recovery plans is that the recovery is reactive: it is what happens after a disaster has already harmed your business. On its face, this is an unsound strategy. He went on to say that current disaster recovery architecture, often going by the synonym "failover," is based on the cutover archetype: a system's primary component fails, damaging operations; then failover to a secondary component is attempted to resume operations. The problem with current cutover approaches is that they view unplanned downtime as inevitable and acceptable, and so require that business halt.
I really liked this quote from an executive at EMC, a leading computer storage equipment firm: "current failover infrastructures are failures waiting to happen."
To be competitive in today's always connected, always available world, we need to reinvent the fundamental idea of disaster recovery. One of the major benefits of using cloud computing is that you can make these failure assumptions well before anything breaks, using an emerging global toolset of cloud components. It's not a matter of if, but a matter of when. Once you accept that application components will fail, you can build an application that features "failure as a service": one that is always available, one with Zero Nines.
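As a rough illustration of what building for failure can look like (a minimal sketch of my own; the endpoint URLs are hypothetical and this is not a description of Zeronines' actual architecture), an application can treat every replica as active and simply move on to the next one when a request fails, rather than halting and cutting over to a standby:

```python
# Minimal sketch: treat several active replicas as equals and tolerate
# individual failures, instead of halting and cutting over.
# The endpoint URLs below are hypothetical placeholders.
import urllib.request

REPLICAS = [
    "https://us-east.example.com/api/orders",
    "https://eu-west.example.com/api/orders",
    "https://ap-south.example.com/api/orders",
]

def fetch(path_suffix: str = "", timeout: float = 2.0) -> bytes:
    """Try each active replica in turn; a single failure is routine, not a disaster."""
    errors = []
    for endpoint in REPLICAS:
        try:
            with urllib.request.urlopen(endpoint + path_suffix, timeout=timeout) as resp:
                return resp.read()
        except OSError as exc:      # connection refused, timeout, DNS failure...
            errors.append((endpoint, exc))
            continue                # failure is assumed; just move on
    raise RuntimeError(f"all replicas failed: {errors}")
```

In a design like this, the loss of any one endpoint looks like normal operation rather than a disaster to recover from.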
Reuven Cohen, founder & chief technologist for Toronto-based Enomaly.