
Choose the Right AWS Compute Service to Run Your Application and Workload

Updated: Mar 5

Amazon Web Services (AWS) provides many ways to run your application and workload: Compute services, Container services, and Serverless paradigms. The right choice also depends on where you want to run your software:

  • In the Cloud, on AWS infrastructure (hardware and network) managed by an AWS-installed software stack.

  • On-premises, on your local infrastructure (hardware and network) but managed by an AWS-installed software stack.

  • On AWS-provided hardware that resides in your local network and is managed by an AWS-installed software stack.

  • On AWS-provided hardware that runs in an Internet provider's network and is managed by an AWS-installed software stack.

  • Or in a hybrid environment.

A compute instance is hardware that provides the CPU and memory to execute the software or service you want to run. A compute instance can be a bare-metal server or a Virtual Machine (VM). Sometimes you can configure what kind of processor and memory you want and provision a compute instance from the AWS Management Console; sometimes AWS presents you with a few predefined T-shirt sizes to choose from. This article focuses on VMs, not on the container and serverless paradigms. Depending on which service you choose, you manage the VMs completely, partially, or not at all. In this article, I explain what your options are in terms of Compute services and which one you may choose depending on your needs.


 

Contents

Amazon Elastic Compute Cloud (EC2)

AWS Elastic Beanstalk

Amazon LightSail

AWS Outposts

AWS Wavelength

AWS Snowball Family

VMware Cloud on AWS

Understand AWS Batch

Understand Amazon EMR

 

Amazon Elastic Compute Cloud (EC2)

You can provision an AWS EC2 instance in any AWS Region and any Availability Zone in a Region. An EC2 instance is a compute resource. You can specify how many virtual CPUs (vCPUs) and how much memory you want while provisioning an EC2 instance. You also specify which operating system image, called an Amazon Machine Image (AMI), you want to use. You can use AMIs that come with third-party software pre-installed or create your own customized AMI. For Windows you can also bring your own license. You can attach different kinds of storage volumes, for example AWS EFS and AWS EBS volumes. You can also associate other AWS services like AWS S3 buckets, AWS databases, Elastic Network Interfaces (ENIs), Elastic IP addresses, and many more with the EC2 instance.
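As a minimal sketch of what such provisioning looks like programmatically, the snippet below launches a single EC2 instance with boto3 (the AWS SDK for Python). The AMI ID, key pair, subnet, and security group are placeholder values for illustration only; you would replace them with ones from your own account.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Launch one t3.micro instance from a placeholder AMI with an extra 20 GiB EBS volume.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID (pick one in your Region)
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                # placeholder key pair name
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet ID
    SecurityGroupIds=["sg-0123456789abcdef0"],
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvdb",
            "Ebs": {"VolumeSize": 20, "VolumeType": "gp3", "DeleteOnTermination": True},
        }
    ],
    TagSpecifications=[
        {"ResourceType": "instance", "Tags": [{"Key": "Name", "Value": "demo-instance"}]}
    ],
)

print(response["Instances"][0]["InstanceId"])
```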

You can choose different types of instances based on cost models and capacity. In terms of cost model EC2 provides these choices:

  • On-Demand instances: You can get these instances any time you want and for any period you need them. No commitment is required, but of course this comes with the highest price.

  • Dedicated Hosts: You get a dedicated physical server for a long period, with its capacity reserved only for you. With a long-term Dedicated Host reservation you can save up to 70% compared to the On-Demand price.

  • Spot instances: You can order instances at the current spot discount price, but AWS can reclaim the instances from you when the spot price increases. These instances are generally used for workloads that run for a short time, are not critical, and can be rerun (see the sketch after this list).

  • Reserved Instances with Savings Plans: Using a savings plan you commit to AWS that you will reserve a number of instances (of your choice), and AWS will provide them whenever you need them. Because you pre-order and reserve for a long time, you get them at a lower price. But unlike a Dedicated Host, you don't get to know or control which physical server will be used. Even if you don't use the reserved instances, you still pay AWS.

  • Scheduled Reserved Instances: These allow you to reserve instances on a recurring schedule, e.g., for particular hours of a day, week, or month. Workloads that need to run on a specific schedule and do not need 24/7 availability are ideal for this type of instance.
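As a rough illustration of the Spot option above, the sketch below requests a Spot instance through the regular run_instances call by adding InstanceMarketOptions. The AMI ID and maximum price are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# Request a single Spot instance; AWS may reclaim it when the spot price rises
# above the maximum price or the capacity is needed elsewhere.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.05",                        # max hourly price in USD (placeholder)
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)

print(response["Instances"][0]["InstanceLifecycle"])   # prints "spot"
```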

In terms of capacity, EC2 has these variations, which are expressed by EC2 instance families and their sizes (a sketch for inspecting instance types programmatically follows the list):

  • General purpose: These instances provide a balance of compute, memory, and network resources. Use them when you are not sure which type is best for you or simply want to host applications that use these resources in roughly equal proportion.

  • Compute Optimized: These instances give you more CPU power and should be used for more compute intensive workloads.

  • Memory Optimized: These instances give you more memory for workloads that need it.

  • Storage Optimized: Use these instances when you need to store huge data sets on local storage and need high sequential read and write performance.

  • Accelerated Computing: These instances come with hardware accelerators or co-processors, typically GPUs, optimized for graphics or multimedia-related workloads. You can also use them for ML workloads.
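As mentioned before the list, you can compare the vCPU and memory profile of candidate instance families programmatically. A minimal boto3 sketch, using a hand-picked set of example instance types:

```python
import boto3

ec2 = boto3.client("ec2")

# Compare vCPU and memory for example types from different families:
# general purpose (m5), compute optimized (c5), memory optimized (r5).
response = ec2.describe_instance_types(
    InstanceTypes=["m5.xlarge", "c5.xlarge", "r5.xlarge"]
)

for it in response["InstanceTypes"]:
    name = it["InstanceType"]
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f"{name}: {vcpus} vCPUs, {mem_gib:.0f} GiB memory")
```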

When you launch multiple EC2 instances for horizontal scalability of your application or workload, you can place them into different Availability Zones (AZs), into partitions within an AZ, or onto different hardware racks in an AZ with the help of Placement Groups. You can achieve high compute and network performance by using High Performance Computing instances. EC2 also provides a mechanism called Auto Scaling, where you define metric-based or predictive configurations that scale your EC2 instances in and out dynamically. You can use an EC2 launch template to create and save the configuration of an EC2 instance that you provision in AWS, and EC2 Fleet to configure the capacity and cost mix for a group of EC2 instances. The EC2 instances can be configured for automatic health checks. Amazon CloudWatch can be used to monitor the instances and AWS CloudTrail to track the API calls made to Amazon EC2.
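A minimal sketch of the launch template and Auto Scaling pieces described above, using boto3. All names, IDs, and the target CPU value are placeholder assumptions, not a definitive setup.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# 1. Save the instance configuration as a launch template.
ec2.create_launch_template(
    LaunchTemplateName="web-app-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI ID
        "InstanceType": "t3.small",
    },
)

# 2. Create an Auto Scaling group from the template, spread across two subnets
#    (typically in different Availability Zones).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchTemplate={"LaunchTemplateName": "web-app-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa0aaa0aaa0aaa0,subnet-0bbb0bbb0bbb0bbb0",  # placeholders
    HealthCheckType="EC2",
)

# 3. Add a metric-based (target tracking) scaling policy on average CPU utilization.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```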


For EC2 instances, AWS manages the hardware and the OS by providing patches. But if you install any other third-party software, deploy any kind of application, or run any workload, that must be managed by you. You decide how to deploy your code on those EC2 instances, how to run the application or workload, how to manage its lifecycle, how to prepare the environment your application and workload need, how to update that environment, and how to scale. Even if the EC2 instance crashes, you manage how to replace it and redeploy your code.


AWS Elastic Beanstalk

When deploying application code on EC2, you need to design and build the environment yourself. For example, if you need to run a Java application, you first need to install Java on the EC2 instance. Elastic Beanstalk provides predefined programming language environments and environment configurations. You choose which environment to use, starting with a web server environment tier or a worker environment tier. Then you choose which kind of runtime environment you need, e.g., containerized or non-containerized; if non-containerized, which programming language; if a web server, which application server and which reverse proxy to use; and whether you would like to use other AWS services (for example, Amazon RDS, DynamoDB, ElastiCache). You choose how many EC2 instances you want, whether you want load balancing, and if you want a separate VPC you configure the VPC, security groups, and more. Most importantly, you specify where to fetch the code that needs to be deployed.

Then AWS creates the resources with the specified configuration and environment. For example, AWS creates load balancers and associates them with the EC2 instances, applies auto-scaling rules, and sets up health monitoring of the resources. AWS also deploys your code on those EC2 instances. At the same time, all the resources created are visible to you and you have full control over them. Without Elastic Beanstalk you would have to configure and launch those AWS resources, associate them with each other to build the environment, and then deploy the application server and your code. Elastic Beanstalk asks you for all the environment configurations and details needed to build the environment and does the rest of the job. At the end, it deploys your code. AWS does not charge you anything to use Elastic Beanstalk; you pay only for the underlying Cloud resources that you use.
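To make that flow concrete, here is a rough boto3 sketch of creating a Beanstalk application and environment. The application name, the S3 bucket and key holding the code bundle, and the solution stack name are placeholders; the exact solution stack names change over time and can be listed with list_available_solution_stacks.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# 1. Create the application and register a version that points to a code bundle in S3.
eb.create_application(ApplicationName="demo-app")
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-code-bucket", "S3Key": "demo-app-v1.zip"},  # placeholders
)

# 2. Create the environment; Beanstalk provisions EC2 instances, a load balancer,
#    auto scaling, and health monitoring, then deploys the registered version.
eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-prod",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2 v3.4.1 running Corretto 11",  # example name
)

# The currently available stack names can be listed with:
# eb.list_available_solution_stacks()
```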


Amazon LightSail

If you look closely, in AWS Elastic Beanstalk you still need to configure the resources and build your application stack. In Amazon LightSail a predefined technology stack is provided: you choose which application stack you want to use, e.g., WordPress, LAMP, MEAN, or even fully configured MySQL and PostgreSQL database plans with predefined processing, memory, storage, and data transfer rates. You choose which OS the application stack should run on and pick a pre-configured hardware stack (memory, disk, processor, network transfer) that comes with load balancing and network configuration. When you want to run your business application on the most common hardware and software stack, it becomes quicker to deploy the code and start implementing business logic, and easier to manage your application. The entire hardware and software stack is managed by AWS. AWS uses EC2 to host the application that you deploy, but this is invisible to you. As you can see, LightSail does a lot of tasks for you but is also heavily opinionated: it does not give you many options to configure and manage the hardware and software stack, which you sometimes need. Web applications are the most suitable workloads to host on Amazon LightSail. AWS does not charge separately for using LightSail; you pay the price of the bundle (plan) you choose.
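A minimal sketch of launching a preconfigured LightSail stack with boto3. The blueprint and bundle IDs below are example values; the valid ones can be listed with get_blueprints and get_bundles.

```python
import boto3

lightsail = boto3.client("lightsail")

# Print a few available blueprint IDs (application stacks) and bundle IDs (hardware plans).
print([b["blueprintId"] for b in lightsail.get_blueprints()["blueprints"][:5]])
print([b["bundleId"] for b in lightsail.get_bundles()["bundles"][:5]])

# Launch a WordPress instance on a small bundle (IDs are examples; pick real ones
# from the listings above).
lightsail.create_instances(
    instanceNames=["my-wordpress-site"],
    availabilityZone="us-east-1a",
    blueprintId="wordpress",
    bundleId="small_2_0",
)
```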


AWS Outposts

In the case of EC2, Elastic Beanstalk, and LightSail, the hardware resources like CPU, memory, storage, and network devices reside with AWS in their datacenters. With AWS Outposts, AWS delivers AWS servers and racks to your location. The servers and racks follow industry standards, and certified personnel install the hardware in your datacenter. You can configure the hardware you need and order it to be delivered to your location. The main difference is that the hardware stays in your datacenter, so data processing done by the applications deployed on these servers happens locally and data does not leave your datacenter. However, the Outpost servers and racks need to be connected to a parent AWS Region via an AWS Direct Connect private connection, a public virtual interface, or the public Internet of your choice. You can use the same AWS tools, such as the Management Console, to manage the resources and workloads that run in the Outpost, and a local gateway to connect Outposts to local networks. Along with ensuring data does not leave your private datacenter, you can take advantage of the same AWS APIs, tools, and development methods to run and manage resources and deploy applications in Outposts just as in the AWS cloud. After you connect the Outpost to an AWS Region, you can extend a Virtual Private Cloud (VPC) in the Region to include the servers and racks in the Outpost by adding an Outpost subnet to the VPC. The EC2 instances in the Outpost subnets communicate via private IP addresses with the EC2 instances in other subnets in the same VPC in the Region. You can associate multiple Outpost subnets with a VPC, and Outpost subnets with multiple VPCs. Keep in mind that AWS is only responsible for the hardware and the software that runs AWS services; you as the AWS customer are responsible for securing the Outposts. Though not all AWS services are available, many common services are available on AWS Outposts.
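As a sketch of extending a VPC to an Outpost as described above, the snippet below creates an Outpost subnet with boto3. The Outpost ARN, VPC ID, CIDR block, and Availability Zone are placeholders for your own Outpost's values.

```python
import boto3

outposts = boto3.client("outposts")
ec2 = boto3.client("ec2")

# Look up the Outposts available in the account.
print(outposts.list_outposts()["Outposts"])

# Extend an existing VPC to the Outpost by creating a subnet anchored to it.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC in the parent Region
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="eu-west-1a",   # the AZ the Outpost is homed to (placeholder)
    OutpostArn="arn:aws:outposts:eu-west-1:111122223333:outpost/op-0123456789abcdef0",
)

print(subnet["Subnet"]["SubnetId"])
```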


AWS Wavelength

Unlike AWS Outposts, where the AWS servers and resources are in your local network and you need to secure the resources in the Outpost, with AWS Wavelength AWS installs the AWS resources in the locations of telecom and internet providers, which are called Wavelength Zones. Wavelength Zones are AWS infrastructure where compute and storage services are deployed within the 5G networks of AWS partner Communication Service Providers (CSPs). These are Edge locations where AWS resources and services are available for you to develop, deploy, run, and scale applications that need ultra-low latency network communication. This enables you to use mobile edge computing infrastructure at Edge locations. You can host applications that provide high-resolution video streaming to your mobile app users, or run AI or ML workloads close to the Edge location in your IoT network. Wavelength Zones support a specific set of EC2 instances and EBS volumes along with VPCs and subnets. Other supported AWS services are Amazon EKS clusters, Amazon ECS clusters, AWS Systems Manager, Amazon CloudWatch, AWS CloudTrail, AWS CloudFormation, and the AWS Application Load Balancer (ALB). A network component called the Carrier Gateway is used to connect the customer subnets in Wavelength Zones to the CSP's network, the internet, or the AWS Region through the CSP's network. You can use the AWS Management Console to manage the resources and applications that run in the Wavelength Zones. EC2 instances in differ