By Hari Doredla
The Star Interactive platform is a web application that enables interactive message exchange between a celebrity and fans. Celebrities tweet messages that reach their millions of fans across the globe, and the fans tweet messages back to the celebrity. As fans began tweeting back, the load on the web server spiked sharply, and the server stopped responding to requests from the massive number of fans.
We chose the AWS cloud as the platform to build the web application, to get better performance at minimal cost with high availability (fault tolerance), rather than scaling up our own web servers using Squid as a load balancer.
AWS: Amazon Web Services is a collection of remote computing services that make up a cloud computing platform, offered over the Internet by Amazon.com.
Steps followed to implement the solution:
- Launched an Amazon EC2 instance in a VPC, in the region nearest to our location.
- Created an Amazon RDS instance with a MySQL database and used a separate security group to allow traffic only from our subnet.
- Installed the application's dependent packages on the server and deployed the application code. Tested the app, then created an AMI (Amazon Machine Image).
- The application lets fans upload videos and photos as they choose. The issue here is that we cannot keep the required persistent storage on the EC2 instances, so we asked the developers to modify the application code to use Amazon S3 as the data store.
- Created an Amazon ELB to balance load among the EC2 instances and tagged it “sip-elb”.
- Created an alias (A) record that maps our domain name to the ELB's domain name, so users can reach the site through our own domain.
- Launched an Auto Scaling group with the following configuration:
- Minimum size: 1, desired capacity: 2, maximum capacity: 10
- ELB: sip-elb
- Availability Zones: all Availability Zones in the region
- Implemented scaling policies to dynamically scale the Auto Scaling group out and in based on CloudWatch metrics (alarms).
- Created an SNS topic with the support team's email address and referenced it in the CloudWatch alarms to send email notifications about metrics and about Auto Scaling scale-out/scale-in actions.
- Amazon CloudFront provides a way to distribute content to end users with low latency and high data transfer speeds. Created a distribution with our ELB (sip-elb) as the origin to improve performance and reduce latency.
- Reserved two instances under the heavy-utilization one-year term and saved up to 31% with the partial-upfront payment option.
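The load-balancing, Auto Scaling, and alarm steps above can be sketched with the AWS CLI. This is a minimal sketch under assumptions, not our exact commands: the AMI ID, security group, Availability Zones, account ID, email address, and the `<scale-out-policy-arn>` value are placeholders you would substitute with your own.

```shell
#!/bin/sh
# Sketch of the provisioning steps. All IDs, ARNs, zones, and the email
# address below are placeholders, not values from the actual deployment.

# Classic load balancer named "sip-elb", listening on HTTP port 80.
aws elb create-load-balancer \
  --load-balancer-name sip-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --availability-zones us-east-1a us-east-1b

# Launch configuration built from the AMI created after testing the app.
aws autoscaling create-launch-configuration \
  --launch-configuration-name sip-lc \
  --image-id ami-12345678 \
  --instance-type m1.small \
  --security-groups sg-12345678

# Auto Scaling group: min 1, desired 2, max 10, attached to sip-elb.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name sip-asg \
  --launch-configuration-name sip-lc \
  --min-size 1 --desired-capacity 2 --max-size 10 \
  --load-balancer-names sip-elb \
  --availability-zones us-east-1a us-east-1b

# Scale-out policy: add one instance. The command returns a policy ARN,
# which the CloudWatch alarm below references.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name sip-asg \
  --policy-name sip-scale-out \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1

# SNS topic for the support team; the alarm notifies this topic as well.
aws sns create-topic --name sip-alerts
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:sip-alerts \
  --protocol email --notification-endpoint support@example.com

# Alarm: average CPU above 70% across the group for 10 minutes triggers
# the scale-out policy and emails the support team via SNS.
aws cloudwatch put-metric-alarm \
  --alarm-name sip-high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=sip-asg \
  --statistic Average --period 300 --evaluation-periods 2 \
  --comparison-operator GreaterThanThreshold --threshold 70 \
  --alarm-actions "<scale-out-policy-arn>" \
    arn:aws:sns:us-east-1:123456789012:sip-alerts
```

A matching scale-in policy (with `--scaling-adjustment -1` and a low-CPU alarm) completes the pair, so the group shrinks again when the fan traffic subsides.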
Technical benefits of using AWS cloud computing:
- Automation – “Scriptable infrastructure”: You can create repeatable build and deployment systems by leveraging programmable (API-driven) infrastructure.
- Auto-scaling: You can scale your application up or down to match unexpected demand without any human intervention. Auto-scaling supports automation and drives greater efficiency.
- Proactive Scaling: Scale your application up or down to match anticipated demand, with proper planning based on traffic patterns, so that costs stay low.
- Efficient Development Lifecycle: Production systems can be easily cloned for use as development and test environments. Staging environments are easily promoted to production.
- Improved Testability: Never run out of hardware for testing. Inject and automate testing at every stage of the development process. You can spawn an “instant test lab” with pre-configured environments for only the duration of the testing phase.
- Disaster Recovery and Business Continuity: The cloud provides a lower-cost option for maintaining a fleet of DR servers and data storage. With the cloud, you can take advantage of geo-distribution and replicate the environment in other locations in minutes.
- “Overflow” the traffic to the cloud: With effective load-balancing tactics, you can build an overflow-proof application by routing excess traffic to the cloud.
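The environment-cloning benefit above can be sketched with two CLI calls: snapshot a production instance into an AMI, then launch a test copy from it. This is a hypothetical sketch; the instance ID and AMI ID are placeholders, and `ami-0def67890` stands in for the ImageId that `create-image` actually returns.

```shell
#!/bin/sh
# Hypothetical sketch: clone a production web server into a test copy.
# i-0abc12345 and ami-0def67890 are placeholder IDs.

# Snapshot the production instance into an AMI without rebooting it.
aws ec2 create-image \
  --instance-id i-0abc12345 \
  --name "sip-test-clone" \
  --no-reboot

# Launch one test instance from the new AMI (use the ImageId returned
# by create-image in place of the placeholder).
aws ec2 run-instances \
  --image-id ami-0def67890 \
  --instance-type m1.small \
  --count 1
```

When testing finishes, terminating the clone (`aws ec2 terminate-instances`) means the “instant test lab” costs nothing while idle.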