AWS
Interview Questions & Preparation Notes
- Global infrastructure: Regions, Availability Zones (AZs), Edge locations.
- Shared Responsibility Model: AWS secures the cloud; customers secure what they put in the cloud.
- Well-Architected Framework: Operational Excellence, Security, Reliability, Performance, Cost Optimization, Sustainability.
Compute services in Amazon Web Services (AWS) provide the processing power required to run applications, host websites, execute backend logic, and process large-scale workloads. AWS offers multiple compute options depending on the application architecture, ranging from traditional virtual machines to serverless computing and container orchestration platforms. These services allow developers to build scalable, flexible, and highly available applications without managing physical hardware.
-
Amazon EC2 (Elastic Compute Cloud):
Amazon EC2 provides virtual machine instances in the cloud that allow users to run applications on
customizable operating systems and configurations. EC2 instances come in multiple instance families
designed for different workloads:
- General Purpose: Balanced CPU, memory, and networking (e.g., t3, m5).
- Compute Optimized: High-performance processors for compute-intensive tasks.
- Memory Optimized: Suitable for large in-memory databases and analytics.
- GPU Instances: Designed for machine learning, graphics processing, and AI workloads.
Pricing options:
- On-Demand Instances: Pay only for the compute time used with no long-term commitment.
- Reserved Instances: Long-term reservations that provide significant cost savings.
- Savings Plans: Flexible pricing models offering discounted compute usage.
- Spot Instances: Low-cost instances that use unused AWS capacity but can be interrupted.
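The trade-offs between these pricing options come down to simple arithmetic. A minimal sketch, using hypothetical hourly rates rather than actual AWS prices (always check the current pricing pages):

```python
# Rough cost comparison of EC2 purchasing options for one instance
# running 24/7 for a 30-day month. All rates are hypothetical,
# chosen only to illustrate the typical discount ordering.
HOURS = 24 * 30

on_demand_rate = 0.10  # $/hour, hypothetical baseline
reserved_rate = 0.06   # $/hour, hypothetical ~40% discount for commitment
spot_rate = 0.03       # $/hour, hypothetical ~70% discount, interruptible

on_demand = on_demand_rate * HOURS
reserved = reserved_rate * HOURS
spot = spot_rate * HOURS

print(f"On-Demand: ${on_demand:.2f}")
print(f"Reserved:  ${reserved:.2f}")
print(f"Spot:      ${spot:.2f}")
```

The ordering (Spot < Reserved < On-Demand) is the usual pattern; the exact discount depends on instance family, Region, and commitment term.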
-
Serverless Computing and Functions:
Serverless computing is a cloud computing model in which developers can run application code without managing
servers, operating systems, or infrastructure. In a traditional architecture, organizations must provision
servers, configure operating systems, install software, and maintain infrastructure. In contrast, serverless
platforms automatically handle server provisioning, scaling, maintenance, and fault tolerance.
In this model, developers focus only on writing the application logic while the cloud provider manages the
underlying infrastructure. Serverless services automatically scale based on demand and charge users only
for the actual execution time of the code rather than for idle server capacity. This makes serverless
architectures cost-efficient, highly scalable, and easier to maintain.
Common use cases of serverless computing include building APIs, automating workflows, processing files,
real-time data processing, and backend services for web and mobile applications.
Services like AWS Lambda are stateless, meaning:
- The function does not permanently store data between executions.
- Each request is treated independently.
- The runtime environment may be destroyed after execution.
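The stateless, per-request model above maps directly onto the handler shape Lambda expects. A minimal Python sketch (the event payload and return value here are illustrative):

```python
# Minimal sketch of a Python Lambda handler. Lambda calls the function
# with an event (the trigger payload) and a context object; nothing
# assigned inside the handler is guaranteed to survive between invocations.
import json

def handler(event, context):
    # Each request is handled independently; no state is carried over.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local simulation of a single invocation (context is unused here):
print(handler({"name": "AWS"}, None))
```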
-
AWS Lambda:
AWS Lambda is Amazon’s serverless compute service that allows developers to execute code in response
to specific events without provisioning or managing servers. Developers upload their function code,
and Lambda automatically runs it whenever a triggering event occurs.
Lambda functions support multiple programming languages such as Python, Node.js, Java, Go, and C#.
The platform automatically handles scaling, load balancing, monitoring, and high availability.
- Triggers: Lambda functions are event-driven and can be triggered by various AWS services such as API Gateway for HTTP requests, Amazon S3 for file uploads, EventBridge for scheduled tasks, DynamoDB Streams for database changes, and many other AWS services.
- Cold Starts: When a Lambda function is invoked after a period of inactivity, AWS may need to initialize the execution environment. This initialization delay is known as a cold start and may slightly increase the response time for the first request.
- Concurrency: Concurrency defines how many instances of a Lambda function can run simultaneously. AWS automatically scales Lambda functions to handle multiple requests in parallel, ensuring that applications can handle sudden increases in traffic.
- Lambda Layers: Lambda Layers allow developers to share common libraries, dependencies, or runtime components across multiple Lambda functions. This helps reduce code duplication and simplifies function management.
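For the S3 trigger mentioned above, the handler receives an event in the S3 event-notification format, with one record per object. A sketch of extracting the bucket and key (the sample event below is a simulated payload, not captured from AWS):

```python
# Sketch of a Lambda handler triggered by S3 uploads. Each record in
# the event carries the bucket name and the object key that fired the
# notification; keys arrive URL-encoded (spaces become '+').
from urllib.parse import unquote_plus

def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        results.append((bucket, key))
    return results

# Simulated S3 put event with one record:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/report+2024.csv"}}}
    ]
}
print(handler(sample_event, None))
```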
-
Amazon ECS and Amazon EKS:
AWS provides container orchestration platforms to manage containerized applications efficiently.
- Amazon ECS (Elastic Container Service): A fully managed container orchestration service that allows users to deploy and manage Docker containers. ECS integrates with other AWS services and can run containers using either EC2 instances or AWS Fargate.
- AWS Fargate: A serverless compute engine for containers that eliminates the need to manage underlying servers.
- Amazon EKS (Elastic Kubernetes Service): A managed Kubernetes service that allows organizations to run Kubernetes workloads in AWS without managing the Kubernetes control plane.
- Auto Scaling Groups (ASG): Auto Scaling Groups automatically adjust the number of EC2 instances based on application demand. This ensures that applications maintain high availability and optimal performance during traffic spikes while minimizing costs during low usage periods. Auto Scaling works together with Launch Templates, which define instance configurations such as instance type, AMI, storage, and networking.
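Target-tracking scaling, one of the ASG policy types, adjusts desired capacity so a metric (such as average CPU) moves toward a target value. A conceptual sketch of the idea; this approximates the behavior and is not AWS's exact algorithm:

```python
# Conceptual sketch of target-tracking scaling: scale capacity in
# proportion to how far the observed metric is from the target,
# clamped to the group's min/max size.
import math

def desired_capacity(current_capacity, current_metric, target_metric,
                     min_size=1, max_size=10):
    desired = math.ceil(current_capacity * current_metric / target_metric)
    return max(min_size, min(max_size, desired))

# 4 instances at 80% average CPU with a 50% target -> scale out:
print(desired_capacity(4, 80, 50))  # 7
# 4 instances at 20% average CPU -> scale in:
print(desired_capacity(4, 20, 50))  # 2
```

Rounding up and clamping to the group bounds mirrors why ASGs scale out aggressively but never below the configured minimum.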
# Example: start/stop EC2 via AWS CLI
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
Networking in AWS is primarily managed through the Amazon Virtual Private Cloud (VPC) service. A VPC allows organizations to create a logically isolated network environment within the AWS cloud where they can launch resources such as EC2 instances, databases, and containers. By configuring subnets, routing rules, and security controls, administrators can design secure and scalable network architectures similar to traditional on-premises data centers.
-
Virtual Private Cloud (VPC):
A VPC is a virtual network dedicated to your AWS account. It enables you to define your own IP
address range, create subnets, configure routing, and control network access. When creating a
VPC, administrators typically specify a CIDR (Classless Inter-Domain Routing) block, such as
10.0.0.0/16, to allocate private IP addresses.
Within a VPC, resources can communicate securely with each other while remaining isolated from other networks. VPCs are commonly divided into multiple subnets to organize workloads and enhance security.
-
Subnets:
Subnets are smaller network segments created within a VPC. Each subnet resides within a specific
Availability Zone to ensure high availability and fault tolerance.
- Public Subnets: Subnets that allow direct access to the internet through an Internet Gateway. Resources such as web servers are usually placed here.
- Private Subnets: Subnets without direct internet access. Internal services such as databases and backend applications are typically deployed here for better security.
- Security Groups: Security Groups act as virtual firewalls that control inbound and outbound traffic for individual AWS resources such as EC2 instances. They operate at the instance level and are stateful, meaning that if an inbound request is allowed, the response traffic is automatically permitted. Security groups are commonly used to restrict access based on IP addresses, ports, and protocols.
- Network Access Control Lists (NACLs): NACLs provide an additional layer of security at the subnet level. Unlike security groups, NACLs are stateless, meaning both inbound and outbound rules must be explicitly defined. NACLs are often used to enforce broader network policies and block unwanted traffic before it reaches individual resources within the subnet.
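The subnet layout described above can be planned with Python's stdlib ipaddress module. A sketch that splits a 10.0.0.0/16 VPC into /24 subnets and assigns a few to public/private roles across two AZs (the names and index choices are illustrative):

```python
# Carve a VPC CIDR into /24 subnets and assign roles. ipaddress
# enumerates subnets in address order: 10.0.0.0/24, 10.0.1.0/24, ...
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # 256 possible /24 subnets

layout = {
    "public-a":  subnets[0],   # 10.0.0.0/24
    "public-b":  subnets[1],   # 10.0.1.0/24
    "private-a": subnets[10],  # 10.0.10.0/24
    "private-b": subnets[11],  # 10.0.11.0/24
}
for name, net in layout.items():
    print(name, net)
```

Leaving gaps between the public and private ranges (indexes 2-9 here) is a common convention that keeps room to grow each tier.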
-
Gateways and Routing:
AWS provides multiple networking components that allow resources in a VPC to communicate with
external networks and other AWS environments.
- Internet Gateway (IGW): Enables communication between resources in a public subnet and the internet.
- NAT Gateway: Allows instances in private subnets to access the internet for updates while preventing inbound connections.
- Route Tables: Define how network traffic is directed within the VPC.
-
Advanced Connectivity Options:
AWS also provides services for connecting multiple networks and services securely:
- VPC Peering: Connects two VPCs so resources can communicate privately.
- Transit Gateway: Acts as a central hub that connects multiple VPCs and on-premises networks.
- AWS PrivateLink: Enables secure private access to AWS services and third-party services without exposing traffic to the public internet.
# Example: VPC configuration (input for aws ec2 create-vpc --cli-input-json)
{
  "CidrBlock": "10.0.0.0/16",
  "InstanceTenancy": "default",
  "TagSpecifications": [
    {
      "ResourceType": "vpc",
      "Tags": [
        { "Key": "Name", "Value": "prod-vpc" }
      ]
    }
  ]
}
- S3: Object storage; classes (Standard, IA, One Zone-IA, Intelligent-Tiering, Glacier).
- EBS: Block storage for EC2; gp3/io1/io2; snapshots.
- EFS: Managed NFS file system; multi-AZ; throughput/bursting.
- Lifecycle policies, versioning, replication (CRR/SRR), bucket policies, encryption (SSE-S3, SSE-KMS).
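Lifecycle policies and storage classes work together. A configuration in the shape accepted by aws s3api put-bucket-lifecycle-configuration can express the tiering; the prefix, day counts, and rule name below are illustrative:

```json
{
  "Rules": [
    {
      "ID": "archive-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```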
# Upload to S3
aws s3 cp file.txt s3://my-bucket/path/file.txt --sse AES256
- RDS (MySQL, PostgreSQL, SQL Server, Oracle), Aurora (MySQL/Postgres-compatible), DynamoDB (NoSQL), ElastiCache (Redis/Memcached), OpenSearch.
- Backups, multi-AZ, read replicas, Global Tables (DynamoDB).
- Serverless options: Aurora Serverless v2, DynamoDB on-demand.
- Principals: Users, Roles, Federated identities; Policies (identity, resource-based).
- Least privilege, permission boundaries, SCPs (AWS Organizations), MFA, access analyzer.
- KMS for encryption keys; CloudTrail for auditing API calls.
# Example: IAM identity policy allowing S3 actions on a bucket's objects
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
- CloudWatch metrics/logs/alarms/dashboards; X-Ray tracing.
- CloudTrail for API logs; Config for resource state/history and rules.
- GuardDuty, Security Hub, Inspector for threat detection and posture.
- Load balancing with ALB/NLB/GLB; multi-AZ architectures.
- Auto Scaling Groups; health checks; blue/green and canary deployments.
- Disaster recovery strategies: Backup/Restore, Pilot Light, Warm Standby, Multi-site.
- Use Cost Explorer, Budgets, Trusted Advisor checks.
- Right-size instances, pick storage classes, use Savings Plans/Reserved Instances, Spot where appropriate.
- Turn off idle resources; use lifecycle policies and data tiering.
- CodeCommit, CodeBuild, CodeDeploy, CodePipeline; integrate with GitHub and 3rd parties.
- Security: build-time scans (SAST/SCA), container/image scans, secrets from Parameter Store/Secrets Manager.
- Deploy to Lambda, ECS/EKS, EC2, or Elastic Beanstalk.
# CodeBuild buildspec snippet
version: 0.2
phases:
  install:
    runtime-versions: { nodejs: 20 }
  build:
    commands:
      - npm ci
      - npm test -- --ci
artifacts:
  files:
    - '**/*'
- Decouple with SQS/SNS/EventBridge; cache with CloudFront/ElastiCache.
- Use multi-account strategy with AWS Organizations and SCPs.
- Design for failure; apply least privilege; encrypt everywhere.
- Design a highly available web app across multiple AZs with ALB, ASG, RDS Multi-AZ, S3 static assets, CloudFront.
- Migrate on-prem app to AWS: landing zone, networking (VPC/Direct Connect/VPN), data migration (DMS/Snowball), cutover plan.
- Secure serverless API: API Gateway + Lambda + Cognito, WAF, logging/metrics, throttling, and IAM authorizers.