So, is Amazon’s EC2 right for you?

I’ve been asked this and similar questions quite a bit lately. But before I delve into the answer, I want to lay the foundation and ask you a question of my own. This one question should play a large part in your final assessment of whether or not to go with EC2. The question you should ask yourself is:

How quickly do you actually need to scale either up or down?

The answer will likely point you toward the correct solution to your problem. The following list is how I classify levels of scalability. Each one comes with its own pros and cons, but generally the quicker you need something, the more expensive it is going to be.

  • Immediate – within minutes – EC2 or other cloud computing networks
  • Fast – within days to a week – Managed Hosting, Rackspace, The Planet, etc
  • Average – within weeks to a month – Own your own hardware, Dell, HP, IBM, etc
  • Corporate – within months/years – Good Luck

With this in mind, everyone hears the hype around EC2, with its scalability, fully managed hardware and virtualization, but there really aren’t that many people out there describing their experiences with it. When we made the decision to go with EC2 we did our research and due diligence before making the switch. There wasn’t much to go on, but the few articles and blog posts we did read were all positive. I guess we got caught up in the hype as well.

Even after all our research, it turns out that going with EC2 was one of the poorer IT decisions we have made. EC2 has been more expensive, more difficult to implement and slower than we had expected, even against our worst-case estimates. To top it all off, we never really needed the main benefit of going with EC2, which is immediate scalability. Our traffic is relatively predictable, grows and shrinks in manageable percentages, and can be handled by capacity added within days instead of minutes. We never have massive spikes in traffic, up or down, and even if we did we would be limited by our MySQL cluster.

While we had to rethink a lot of our architecture to create a more horizontal platform instead of relying on traditional vertical scaling, MySQL was by far our biggest bottleneck. The root of the problem is Amazon’s preset machine sizes. While they have done an adequate job of offering different instance types, with more memory in one line and more computational power in the other, you are still limited to what they offer. With the large database we have and the latency between instances and their persistent storage, we were forced to keep as much of our database as possible cached in RAM. That shouldn’t have been a big deal: just get a machine with a ton of RAM. Unfortunately, Amazon’s biggest instance offered us a maximum of 15GB. Needless to say this was not sufficient, and it forced us to adopt a cluster solution. That in and of itself is not ideal, especially when you should be able to run off a single box with 32GB of RAM and access to fast local disks. In the end it took us twelve (12) m1.xlarge instances to reach the level of performance and availability we desired, with the network I/O latency between node and disk storage, and from node to node, adding insult to injury.
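
To make the sizing problem concrete, here is the back-of-the-envelope arithmetic that pushes you toward a cluster once RAM per box is capped. Every number below is a hypothetical placeholder, not our actual working-set size:

```python
import math

# Hypothetical numbers for illustration only (not our real figures).
working_set_gb = 40            # hot data we would like resident in RAM
instance_ram_gb = 15           # largest EC2 instance available to us
os_and_mysql_overhead_gb = 3   # RAM reserved for the OS, buffers, connections
replication_factor = 2         # keep each shard on two nodes for availability

usable_ram_per_node = instance_ram_gb - os_and_mysql_overhead_gb
nodes_for_capacity = math.ceil(working_set_gb / usable_ram_per_node)
nodes_total = nodes_for_capacity * replication_factor

print(f"usable cache per node: {usable_ram_per_node} GB")
print(f"nodes needed for capacity alone: {nodes_for_capacity}")
print(f"nodes needed with redundancy: {nodes_total}")

# A single 32-64GB box would hold the same working set outright, which is
# exactly the option the 15GB ceiling takes off the table.
```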

While the speed and size of the cluster were not desirable, it worked. However, we had to completely forfeit any sort of scalability to achieve a working database. To my knowledge there is no way to quickly and easily boot up more MySQL instances to supplement a live cluster. For us to add more capacity, we would have to perform a rolling reboot of every machine in the cluster. It’s unfortunate that databases were not designed with EC2 in mind.

However, there are companies trying to address this pain point. We looked very intently at a company called Continuent, which produces a MySQL cluster monitoring and management tool. Unfortunately, as of January 2009 the product was still in private beta and unavailable to us. It would have allowed us to add nodes to the cluster on the fly without taking it down in the process, although even with this extra tool, which wasn’t cheap, you still couldn’t scale the cluster down without taking it offline. As far as I am concerned, once you are already using the largest instance available (an m1.xlarge or c1.xlarge), there is no way to vertically scale up a database on EC2. Instead you are forced into a less than ideal environment for hosting a horizontal architecture, which can have serious consequences for your code base and SQL queries.

To be honest, EC2 offers a lot of benefits that are hard to come by with other solutions. EC2 is great for companies doing lots of non-real-time work such as batch and queued processing. Companies with a small database that can be cached in RAM and replicated easily will also benefit from EC2: just boot up a bunch of instances and go to town. The bottom line, however, is that if you have fairly consistent usage patterns and your applications are performance sensitive, there are much faster and more cost-effective ways of abstracting your hardware requirements. We at CitySquares are in the process of moving off of EC2 and onto a managed hosting platform. We still enjoy the benefits of leased hardware, like we had with EC2, and the ability to quickly add new hardware. Granted, more servers aren’t available at the drop of a hat, but a couple of days’ lead time to get another box up and running is more than sufficient for us. Not only that, but we also have a whole team of IT people working with us to help alleviate the burden of supporting the entire hardware/software stack. We can now focus on what we do best: our application.

Keep in mind that there is no concrete answer as to whether EC2, or cloud computing in general, will work for you. You need to determine whether the capacity and latencies of the predetermined instance sizes will meet your growing infrastructure needs. For us, the bitter answer was a resounding no. We were able to spec out a solution in a fully managed hosting environment for about half the monthly cost of EC2 while significantly increasing the performance of our application.

Just as with any platform, EC2 has limitations of its own. These limitations are often different from, and harder to overcome than, what you might find while running your own hardware. Without proper planning and development, they can wind up being extremely detrimental to the well-being and scalability of your website or service.

There are quite a few blogs, articles and reviews out there that mention all the positive aspects of EC2 and I have written a few of them myself. However, I think users need to be informed of the negative aspects of a particular platform as well as the positive. I will be brief with this post as my next will focus on designing an architecture around these limitations.

The biggest limitations of Amazon’s EC2 at the moment, as I have experienced them, are the latency between instances, the latency between instances and storage (both local and EBS), and the lack of powerful instances with more than 15GB of RAM and 4 virtual CPUs.

All of the latency issues can be traced back to the same root cause: a shared LAN with thousands of non-localized instances competing for bandwidth. Normally one would think a LAN would be quick, and they generally are, especially when the servers are sitting right next to each other with a single switch between them. However, Amazon’s network is much more extensive than most local LANs, and chances are your packets are hitting multiple switches and routers on their way from one instance to another. Every extra hop between instances adds another few milliseconds to the packet’s round-trip time. You can think of Amazon’s LAN as a really small Internet. Its layout is very similar to that of the Internet: there is no cohesiveness or localization of instances in relation to one another, so lots of data has to travel from one end of the LAN to the other, just like on the Internet. Data ends up traveling much farther than it needs to, and all of the congestion problems found on the Internet can be found on Amazon’s LAN as well.

For computationally intensive tasks this really isn’t too big a deal, but for anyone who relies on speedy database calls, every millisecond added per request starts to add up if you have lots of queries per page. When the CitySquares site moved from our own local servers to EC2 we noticed a 4-10x increase in query times, which we attribute mainly to the high latency of the LAN. Since our servers are no longer within feet of each other, we have to contend with longer distances between instances and congestion on the LAN.
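
Here is a rough sketch of why those milliseconds matter; the per-query times, hop latencies and query counts are hypothetical, not measured CitySquares numbers:

```python
# Hypothetical illustration of how per-query network latency compounds per page.
queries_per_page = 30       # a page that touches the database many times
local_rtt_ms = 0.3          # servers feet apart behind a single switch
cloud_rtt_ms = 2.5          # several switch/router hops across a large shared LAN

def db_time_per_page(network_rtt_ms, query_exec_ms=1.0):
    """Total database time per page: execution plus network round trip per query."""
    return queries_per_page * (query_exec_ms + network_rtt_ms)

local = db_time_per_page(local_rtt_ms)
cloud = db_time_per_page(cloud_rtt_ms)
print(f"local colo: {local:.0f} ms of database time per page")
print(f"shared cloud LAN: {cloud:.0f} ms per page ({cloud / local:.1f}x slower)")
```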

Another thing to take into consideration is the network latency of Amazon’s EBS. For applications that move around a lot of data, EBS is probably a godsend, since it has plenty of bandwidth. In CitySquares’ case, however, we wind up doing a lot of small file transfers to and from our NFS server as well as our EBS volumes. So while there is a lot of bandwidth available to us, we can’t really take advantage of it, because we have to contend with the latency and overhead of transferring many small files. Small files are not our only issue: we also run our MySQL database off of an EBS volume. Swapping to disk has always been a critical issue for databases, but the added overhead of network traffic can wreak havoc on your database load far more than normal disk swapping. You can think of the difference in access times between a local disk and a disk over the network as a book on your bookcase versus a book somewhere down the hall in storage room B. Clearly the second option takes far longer to find what you are looking for, and that is what you have to work with if you want the peace of mind of persistent storage.
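
If you want to see the effect for yourself, a quick-and-dirty check is to time a burst of small-file writes and reads against a local path and a network-backed mount. The mount points below are placeholders; point them at whatever local and EBS/NFS paths you actually have:

```python
import os
import time

def time_small_files(directory, count=500, size=4096):
    """Write, fsync, read back and delete `count` small files; return elapsed seconds."""
    os.makedirs(directory, exist_ok=True)
    payload = b"x" * size
    start = time.time()
    for i in range(count):
        path = os.path.join(directory, f"probe_{i}.tmp")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())   # force the write through to the device
        with open(path, "rb") as f:
            f.read()
        os.remove(path)
    return time.time() - start

# Placeholder mount points: substitute your own local disk and EBS/NFS paths.
for label, mount in [("local disk", "/mnt/local-test"), ("network volume", "/vol/ebs-test")]:
    print(f"{label}: {time_small_files(mount):.2f}s for 500 small files")
```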

The last and most important limitation for us at CitySquares is the lack of a truly powerful machine. The largest instance Amazon has to offer comes with just 15GB of RAM and 4 virtual CPUs. In a day and age when you can easily find machines with 64GB of RAM and 16 CPUs, you are definitely limited by Amazon. In our case it would be much easier to throw hardware at our database to scale up, but the most we have at our disposal is a paltry 15GB of RAM. How can this be the biggest machine they offer? Instead of dividing one of those machines into quarters, just give me the whole thing. It seems ludicrous to me that the largest machine they offer is not much more powerful than the computer I’m using right now.

Long story short, just because you start using Amazon’s AWS doesn’t mean you can scale. Make sure your architecture is tolerant of higher latencies and can scale with lots of little machines because that’s all you have to work with.

I wrote just yesterday about running your own hardware vs. using EC2 and RightScale, and one of the major issues I found with EC2 was the lack of a persistent storage medium. Well, I knew the folks over at Amazon were hard at work on a new service that would provide persistent storage, and it turns out I received this email in my inbox this morning:

Dear AWS Developer,

We are pleased to announce the release of a significant new Amazon EC2 feature, Amazon Elastic Block Store (EBS), which provides persistent storage for your Amazon EC2 instances. With Amazon EBS, storage volumes can be programmatically created, attached to Amazon EC2 instances, and if even more durability is desired, can be backed with a snapshot to the Amazon Simple Storage Service (Amazon S3).

Prior to Amazon EBS, block storage within an Amazon EC2 instance was tied to the instance itself so that when the instance was terminated, the data within the instance was lost. Now with Amazon EBS, users can choose to allocate storage volumes that persist reliably and independently from Amazon EC2 instances. Amazon EBS volumes can be created in any size between 1 GB and 1 TB, and multiple volumes can be attached to a single instance. Additionally, for even more durable backups and an easy way to create new volumes, Amazon EBS provides the ability to create point-in-time, consistent snapshots of volumes that are then stored to Amazon S3.

Amazon EBS is well suited for databases, as well as many other applications that require running a file system or access to raw block-level storage. As Amazon EC2 instances are started and stopped, the information saved in your database or application is preserved in much the same way it is with traditional physical servers. Amazon EBS can be accessed through the latest Amazon EC2 APIs, and is now available in public beta.

We hope you enjoy this new feature and we look forward to your feedback.

Sincerely,

The Amazon EC2 team

So this is indeed good news, and it removes the biggest con I mentioned about the EC2 platform!
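
For anyone curious what working with EBS looks like from code, here is a minimal sketch of the create/attach/snapshot flow the email describes, written with the boto3 Python SDK; the region, zone, instance ID, device name and volume size are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

# 1. Create a persistent volume in the same availability zone as the instance.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100)  # size in GB
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# 2. Attach it to a running instance as a block device.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Device="/dev/sdf",
)

# 3. Take a point-in-time snapshot, stored durably in S3, for backups.
snapshot = ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="nightly backup of the database volume",
)
print("created snapshot", snapshot["SnapshotId"])
```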

A couple of weeks ago I began working with EC2 and RightScale in preparation for our big IT infrastructure changeover. I’ll start by giving a brief overview of our hardware infrastructure. Currently we’re running the CitySquares website on our own hardware in a Somerville co-location facility, not too far from our headquarters in Boston’s trendy South End neighborhood.

From the very beginning our contract IT guy set us up with an extremely robust and flexible IT infrastructure. It consists of a few machines running the Xen hypervisor with Gentoo as the host OS. Running Gentoo allows us to be as efficient as possible by compiling and optimizing only the things we need. While that is a good step, it is Xen that really makes the big difference. It allows us to shuffle resources around as we see fit: more memory here, more virtual CPUs there, all on the fly. For a startup, or any company with limited resources, this is essential. You never know where you are going to need to allocate resources in the months to come.
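
As a rough illustration of what “on the fly” means here, the kind of reallocation Xen allows looks something like the following, sketched with the libvirt Python bindings; the domain name and sizes are hypothetical, and we drive our own hosts with Xen’s native tools rather than this exact code:

```python
import libvirt  # Python bindings for libvirt, which can manage Xen domains

# Hypothetical guest name and sizes; illustration only, not our actual tooling.
conn = libvirt.open("xen:///")        # connect to the local Xen hypervisor
dom = conn.lookupByName("web01")      # an existing running guest (domU)

# Balloon the running guest up to 4GB of RAM (libvirt takes KiB)...
dom.setMemory(4 * 1024 * 1024)

# ...and give it another virtual CPU, all without a reboot.
dom.setVcpus(3)

state, max_mem_kib, mem_kib, vcpus, _ = dom.info()
print(f"{dom.name()}: {mem_kib // 1024} MiB RAM, {vcpus} vCPUs")
conn.close()
```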

While this is all well and good, we are still limited when it comes to scaling with increasing traffic or adding resource-intensive features. We have a set amount of hardware available, and adding more is an expensive upfront capital investment. On top of that, in order for us to really take advantage of Xen and use it to its full potential, we were presented with an expensive option: purchasing a SAN and more servers. For those in the industry, I don’t think I need to mention that these get expensive in a hurry. This would have been a huge upfront cost for us, one we didn’t want to budget for. The second option, which is the one we eventually went with, was to drop our current hardware solution and take the plunge into cloud computing with Amazon’s EC2.

Here I am now, a couple of weeks into the switch with a lot of lessons learned. There are definitely pros and cons to each platform, whether you go with EC2 or roll your own architecture. Before I get into the details, I want to make clear that there are many factors involved in choosing a technology platform. I am only going to scratch the surface, touching on the major pros and cons as I see them, with CitySquares’ best interests in mind.

Let me start with the pros of running your own hardware:

  • The biggest pro is most definitely persistence across reboots. I cannot stress the importance of this one enough. You really take for granted the ability to edit a file and expect it to be there the next time the machine is restarted.

    • You only need to configure the software once. Once it’s running you don’t really care what you did to make it work. It just works, every time you reboot.

    • UPDATE 8/21/08: Amazon releases persistent storage.
  • Complete and utter control over everything that is running. This extends from the OS to the amount of RAM, CPU specs, hard drive specs, NICs, etc. Whether to build an economy server or a performance server is entirely up to you.

  • A rather stable and unchanging architecture. Server host keys stay the same, and the same number of servers are running today as there were yesterday and as there will be tomorrow.

  • Reboot times. For those times when something is just AFU you can hit the reset button and be back up and running in a few minutes.

  • You can physically touch it… Its not just in the cloud somewhere.

Some cons for running your own hardware:

  • Companies with limited resources usually end up with architectures that exhibit single points of failure.

    • As an aside, you can be plagued by hardware failures at any time, usually accompanied by angry emails, texts and calls at 3am on a Saturday morning.

  • Limited scalability options. For a rapidly growing website, the couple of weeks it takes to order and install new hardware can be detrimental to your potential traffic and revenue stream.

  • Management of physical hardware. It’s a royal pain to have to go to the co-location facility to upgrade or fix anything that needs maintenance, not to mention the potential downtime.

    • Also, there are many hidden costs associated with IT maintenance.

  • Upfront capital expenditures can be quite costly, especially from a cash flow perspective.

  • Servers and other supporting hardware are rendered obsolete every few years, requiring the purchase of new equipment.

These pros and cons of running your own hardware are pretty straightforward. Some people might mention managed hosting solutions, which would eliminate some of the cons related to server maintenance and hardware failures. However, that added service comes with an added price tag for the hosting, and whether it is right for you or your company is something to look into. We decided to skip this intermediate solution and go straight to the latest and greatest: cloud computing. To be specific, we went with Amazon’s EC2 (Elastic Compute Cloud) using RightScale as our management tool.

Some of the pros for using EC2 in conjunction with the RightScale dashboard are as follows:

  • Near-infinite resources (server instances, Amazon’s S3 storage, etc.) available nearly instantaneously. No more Slashdot-effect outages if everything is properly configured and set to introduce more servers automatically. (RightScale Benefit)

  • No upfront costs; everything is usage-based. If you are only utilizing one server in the middle of the night, that’s all you pay for. Likewise, if you’re running twenty servers during peak hours, you pay for those twenty servers. (Amazon Benefit; RightScale is a monthly service)

  • No hardware to think about. If fifty servers go down at Amazon we won’t even know about it. No more angry calls at 3am. (Amazon Benefit)

  • Multiple availability zones. This allows us to run our master database in one zone, completely separate from our slave database, so if there is a fire or power outage in one zone the others will theoretically be unaffected. The single points of failure mentioned before are a thing of the past, and this is just one example (see the sketch after this list). (Amazon Benefit)

  • The ability to clone whole deployments to create testing and development environments that exactly mirror current production whenever you need them. (RightScale Benefit)

  • Security updates are taken care of for the most part. RightScale provides base server images which are customized upon boot with the latest software updates. (RightScale Benefit)

  • Monitoring and alerting tools are very good and highly customizable. (RightScale Benefit)
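
To give a concrete flavor of the availability zone point above, here is a minimal sketch of launching the two database servers into separate zones, written with the boto3 Python SDK rather than the RightScale dashboard we actually use; the AMI ID, zone names and instance type are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
AMI_ID = "ami-0123456789abcdef0"   # placeholder machine image

def launch_db(zone, role):
    """Launch one database instance pinned to a specific availability zone."""
    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="m1.xlarge",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": role}],
        }],
    )
    return response["Instances"][0]["InstanceId"]

# Master and slave live in different zones, so a fire or power outage in one
# data center cannot take out both copies of the database.
master_id = launch_db("us-east-1a", "mysql-master")
slave_id = launch_db("us-east-1b", "mysql-slave")
print("master:", master_id, "slave:", slave_id)
```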

Some of the cons for using EC2 and RightScale:

  • No persistence after reboot. I can’t stress this one enough! All local changes will be wiped and you’ll start with a blank slate!

    • All user-contributed changes must be backed up to a persistent storage medium or they will be lost! We back up incrementally every 15 minutes, with a full backup every night.

    • UPDATE 8/21/08: Amazon releases persistent storage.
  • Writing scripts to configure everything upon boot is a time-consuming and tedious process requiring a lot of trial and error.

  • Every reboot takes approximately 10-20 minutes depending on the number and complexity of packages installed at boot, making the previous bullet point that much more painful.

  • A few of the pre-configured scripts are written quite well. The one for MySQL is as good as they get: you upload a config file sprinkled with special tags that are swapped out on the fly with regular expressions (see the sketch after this list). The Apache scripts, on the other hand, are about as bad as they get; everything must be configured after the fact.

    • With Apache, however, you’ll be writing regular expressions to match other regular expressions. Needless to say, this is a royal pain and you usually end up with unreadable gibberish.
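
For what it’s worth, the tag-substitution approach the MySQL script takes boils down to something like the following; the tag names, values and file path are made up for illustration, not RightScale’s actual templates:

```python
import re

# Hypothetical template with placeholder tags (not RightScale's real tag names).
template = """
[mysqld]
innodb_buffer_pool_size = @@BUFFER_POOL@@
max_connections         = @@MAX_CONNECTIONS@@
server-id               = @@SERVER_ID@@
"""

boot_time_values = {
    "BUFFER_POOL": "12G",
    "MAX_CONNECTIONS": "500",
    "SERVER_ID": "3",
}

def render(template_text, values):
    """Swap every @@TAG@@ placeholder for the value chosen at boot time."""
    return re.sub(r"@@(\w+)@@", lambda m: values[m.group(1)], template_text)

# Hypothetical target path; a real boot script would write the live my.cnf.
with open("/tmp/my.cnf", "w") as conf:
    conf.write(render(template, boot_time_values))
```

The Apache scripts give you no such tags, which is why you end up writing regular expressions against the stock config instead.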

So there you have it; take it as you wish. For CitySquares, EC2 and RightScale were the best option. They allow us to scale nearly effortlessly once configured. It is also a much cheaper option up front, whereas owning your own hardware is generally cheaper in the long run. We did trade away a lot of the pros of owning your own hardware to get the scalability and hardware abstraction of EC2. It was a tough decision to switch away from our existing architecture, but in the end it will most likely be the best decision we’ve made. The flexibility and scalability of the EC2 and RightScale platform are by far the biggest advantages of switching, and in the end that’s what CitySquares needs.