More than 40 years ago, a guy named Douglas Parkhill described the concept of utility computing. He described it as including features such as:
- Essentially simultaneous use of the system by many remote users.
- Concurrent running of multiple different programs.
- Availability of at least the same range of facilities and capabilities at the remote stations as the user would expect if he were the sole operator of a private computer.
- A system of charging based upon a flat service charge and a variable charge based on usage.
- Capacity for indefinite growth, so that as the customer load increases, the system can be expanded without limit by various means.
Fast forward 40 years, and we now call pretty much this same concept Cloud Computing, and everyone is very excited about the possibilities that exist within this new world. Different companies are pushing this idea in different ways. One of the pioneers in that area is of course Amazon, which managed to create quite a good public cloud offering through their Amazon Web Services product.
This kind of publicly consumable infrastructure is very interesting, because it allows people to do exactly what Douglas Parkhill described 40 years ago: individuals and organizations can rent computing resources with minimal initial investment, and pay for as much as they need, no more, no less.
This is all good, but one of the details is that not every organization can afford to send data or computations to a public cloud like Amazon’s AWS. There are many potential reasons for this, from legal regulations to volume cost. Out of these issues the term Private Cloud was coined. It represents exactly the same ideas that Douglas Parkhill described, but rather than using third-party infrastructure, some organizations opt to deploy the same kind of technology, such as the Eucalyptus project, on their own private infrastructure, so that teams within the organization can still benefit from the features mentioned above.
So we have the Public Cloud and the Private Cloud. Now, what would a Virtual Private Cloud be?
Well, it turns out that this is just a marketing term, purposefully coined to blur the line between a Private and a Public Cloud.
The term was used in the announcement Amazon made yesterday:
Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, (…)
So, what is interesting here is that this is actually not a Private Cloud, because the resources on the other side of the VPN are in fact public infrastructure, and as such it doesn’t solve any of the problems which private clouds were created to solve in the first place.
Not only that, but it creates the false impression that organizations would have their own isolated resources. What isolated resources? A physical computer? Storage? Network? Of course, isolating these is not economically viable if you are charging 10 cents an hour per computer instance:
Each month, you pay for VPN Connection-hours and the amount of data transferred via the VPN connections. VPCs, subnets, VPN gateways, customer gateways, and data transferred between subnets within the same VPC are free. Charges for other AWS services, including Amazon EC2, are billed separately at published standard rates.
That doesn’t quite fit together, does it?
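For the curious, here is roughly what "building" one of these Virtual Private Clouds amounts to in practice. This is just an illustrative sketch using the modern boto3 Python library (which obviously postdates this announcement), and the CIDR blocks, IP address, and ASN are made up; but the pieces are exactly the ones listed in the pricing above: a VPC, a subnet, a VPN gateway, a customer gateway, and the VPN connection that ties your network to Amazon's shared infrastructure, all created through the same EC2 API everyone else uses.

```python
# Illustrative sketch only: parameters (CIDR blocks, public IP, ASN, region)
# are hypothetical, and boto3 is a modern library used here purely to show
# the moving parts named in the pricing text above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The "isolated" address space on Amazon's side.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# Amazon's end of the IPsec tunnel, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])

# Your end of the tunnel: the public IP of your own router or firewall.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.12", BgpAsn=65000
)["CustomerGateway"]

# The VPN connection itself, billed per VPN Connection-hour.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    VpnGatewayId=vgw["VpnGatewayId"],
    CustomerGatewayId=cgw["CustomerGatewayId"],
)["VpnConnection"]

print("VPN connection into the 'private' cloud:", vpn["VpnConnectionId"])
```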
To complete the plot, Werner Vogels runs to his blog and screams out loud “Private Cloud is not the Cloud”, while announcing the Virtual Private Cloud, which is actually a VPN into his Public Cloud, on infrastructure shared with the world.
Sure. What can I say? Well, maybe that Virtual Private Cloud is not the Private Cloud.