Amazon EKS
Overview
Amazon EKS is AWS's managed Kubernetes service. It automates Kubernetes cluster operations and, through deep integration with other AWS services, enables enterprise-grade container operations.
Details
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that AWS made generally available in 2018. AWS operates the Kubernetes control plane, including patching, backup, and disaster recovery, so users can focus on worker nodes and applications. The control plane runs in a highly available configuration across multiple Availability Zones, and tight integration with AWS IAM, VPC, Security Groups, ELB, EBS, EFS, and other services provides enterprise-grade security and operability. Fargate support for serverless pods, GPU-enabled worker nodes, Auto Scaling, and Spot Instances cover a wide range of workloads. Integration with the broader AWS service suite, including Amazon ECR, CloudWatch, CloudTrail, and Systems Manager, enables unified management from container images through monitoring and logging. Widely adopted by enterprises as core infrastructure for cloud-native application development, it has become the standard choice for running Kubernetes on AWS.
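As a small illustration of the IAM integration, access to a cluster is mediated entirely by AWS credentials; a minimal sketch with the AWS CLI (region and cluster names are placeholders):
# Show the IAM identity that kubectl will authenticate as
aws sts get-caller-identity
# List the EKS clusters visible to that identity in a region
aws eks list-clusters --region us-west-2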
Advantages and Disadvantages
Advantages
- Fully managed: No control plane operation required
- High availability: Automatic redundancy with multi-AZ configuration
- AWS integration: Deep integration with IAM, VPC, CloudWatch, etc.
- Security: Inherits AWS standard security features
- Scalability: Auto-scaling and Spot Instance utilization
- Fargate support: Serverless Kubernetes implementation
- Enterprise features: Audit logs, encryption, compliance support
- Operational load reduction: Automated patching and backup
Disadvantages
- Cost: Per-cluster control plane fee plus EC2/Fargate node costs
- AWS dependency: Constraints in multi-cloud strategy
- Customization limitations: No access to control plane
- Learning curve: Requires knowledge of both Kubernetes and AWS services
- Network complexity: Importance of VPC and subnet design
- Regional limitations: Service availability constraints in specific regions
- Version management: Need to track EKS-supported Kubernetes versions and upgrade regularly (see the version-check sketch after this list)
- Failure impact: Availability is affected by AWS regional outages
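A hedged sketch of the version-management point above: checking the control-plane version and upgrading it one minor version at a time with eksctl, using the cluster name from the examples below.
# Current control-plane Kubernetes version
aws eks describe-cluster --name production-cluster --region us-west-2 --query cluster.version --output text
# Upgrade the control plane by one minor version
eksctl upgrade cluster --name production-cluster --region us-west-2 --approve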
Key Links
- Amazon EKS Official Website: https://aws.amazon.com/eks/
- Amazon EKS Official Documentation: https://docs.aws.amazon.com/eks/
- EKS Workshop: https://www.eksworkshop.com/
- AWS Container Blog: https://aws.amazon.com/blogs/containers/
- eksctl Official Website: https://eksctl.io/
- AWS Load Balancer Controller: https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Code Examples
Cluster Creation with eksctl
# Install eksctl
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
# Create cluster (basic)
eksctl create cluster \
--name production-cluster \
--version 1.28 \
--region us-west-2 \
--nodegroup-name standard-workers \
--node-type m5.large \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--managed
# Update kubectl configuration
aws eks update-kubeconfig --region us-west-2 --name production-cluster
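A quick post-creation check (not part of the original steps, but a reasonable next move) to confirm nodes registered and system pods are healthy:
# Worker nodes should report Ready
kubectl get nodes
# CoreDNS, kube-proxy, and the VPC CNI (aws-node) pods should be Running
kubectl get pods -n kube-system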
Using Configuration File (cluster.yaml)
# cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: production-cluster
  region: us-west-2
  version: "1.28"
vpc:
  subnets:
    private:
      us-west-2a: { id: subnet-0123456789abcdef0 }
      us-west-2b: { id: subnet-0123456789abcdef1 }
      us-west-2c: { id: subnet-0123456789abcdef2 }
managedNodeGroups:
  - name: standard-workers
    instanceType: m5.large
    minSize: 1
    maxSize: 10
    desiredCapacity: 3
    privateNetworking: true
    ssh:
      allow: true
      publicKeyName: my-key-pair
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        ebs: true
        efs: true
        albIngress: true
    tags:
      Environment: production
      Team: platform
  - name: spot-workers
    instanceTypes: ["m5.large", "m5.xlarge", "m4.large"]
    spot: true
    minSize: 0
    maxSize: 5
    desiredCapacity: 2
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default
      - namespace: kube-system
addons:
  - name: vpc-cni
    version: latest
  - name: coredns
    version: latest
  - name: kube-proxy
    version: latest
  - name: aws-ebs-csi-driver
    version: latest
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
# Create cluster with configuration file
eksctl create cluster -f cluster.yaml
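The same file can drive later changes; a sketch, assuming the cluster.yaml above, of adding nodegroups declaratively and scaling one imperatively:
# Create any nodegroups declared in the file
eksctl create nodegroup -f cluster.yaml
# Scale an existing managed nodegroup
eksctl scale nodegroup --cluster production-cluster --name standard-workers --nodes 5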
AWS Load Balancer Controller Setup
# Create IAM OIDC Identity Provider
eksctl utils associate-iam-oidc-provider \
--region us-west-2 \
--cluster production-cluster \
--approve
# Create ServiceAccount with an IAM role (AWS recommends a dedicated AWSLoadBalancerControllerIAMPolicy built from the published iam_policy.json; ElasticLoadBalancingFullAccess is used here for brevity)
eksctl create iamserviceaccount \
--cluster production-cluster \
--namespace kube-system \
--name aws-load-balancer-controller \
--attach-policy-arn arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess \
--override-existing-serviceaccounts \
--approve
# Install Controller via Helm
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=production-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
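To confirm the controller came up before creating any Ingress resources:
# The controller runs as a Deployment in kube-system
kubectl get deployment -n kube-system aws-load-balancer-controller
# Inspect logs if it fails to become Ready
kubectl logs -n kube-system deployment/aws-load-balancer-controller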
Ingress Configuration (Using ALB)
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:123456789012:certificate/abc123
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-path: /health
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-service
                port:
                  number: 80
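Applying the manifest and reading back the provisioned load balancer hostname (web-app-service is assumed to exist already):
# Create the Ingress; the controller provisions an internet-facing ALB
kubectl apply -f ingress.yaml
# The ADDRESS column shows the ALB DNS name once it is ready
kubectl get ingress web-app-ingress -n default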
EFS StatefulSet Configuration
# efs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "0755"
---
# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-app
spec:
  serviceName: web-app
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:latest
          volumeMounts:
            - name: efs-storage
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: efs-storage
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: efs-sc
        resources:
          requests:
            storage: 1Gi
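Note that the efs-sc StorageClass assumes the Amazon EFS CSI driver is present; the addon list in cluster.yaml above installs only the EBS driver. A sketch of installing it as an EKS add-on and applying the manifests (in practice the driver also needs IAM permissions for EFS access points, typically granted via IRSA):
# Install the EFS CSI driver as a managed add-on (Helm is an alternative)
aws eks create-addon --cluster-name production-cluster --addon-name aws-efs-csi-driver --region us-west-2
# Apply the StorageClass and StatefulSet, then watch the PVCs bind
kubectl apply -f efs-storageclass.yaml -f statefulset.yaml
kubectl get pvc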
Cluster Autoscaler Configuration
# cluster-autoscaler.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        # Use a Cluster Autoscaler release whose minor version matches the cluster (1.28 here)
        - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/production-cluster
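The Deployment references a cluster-autoscaler ServiceAccount that is not created here; a sketch of creating it with IRSA via eksctl. The policy ARN is a placeholder for a customer-managed policy granting the Auto Scaling describe/scale permissions the autoscaler needs:
# Create an IAM-backed service account for the autoscaler (policy ARN is illustrative)
eksctl create iamserviceaccount \
--cluster production-cluster \
--namespace kube-system \
--name cluster-autoscaler \
--attach-policy-arn arn:aws:iam::123456789012:policy/ClusterAutoscalerPolicy \
--approve
# Deploy the autoscaler manifest above
kubectl apply -f cluster-autoscaler.yaml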
Fargate Profile Configuration
# Create Fargate Profile (the label selector matches pod labels, so it must match the Deployment below)
eksctl create fargateprofile \
--cluster production-cluster \
--name fargate-profile \
--namespace fargate-ns \
--labels app=fargate-app
# Create the target namespace
kubectl create namespace fargate-ns
# fargate-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fargate-app
  namespace: fargate-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fargate-app
  template:
    metadata:
      labels:
        app: fargate-app
    spec:
      containers:
        - name: app
          image: nginx:alpine
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
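Pods are placed on Fargate only when they match the profile's namespace and labels; a quick check after deploying:
kubectl apply -f fargate-deployment.yaml
# Each Fargate pod is backed by its own fargate-* node
kubectl get pods -n fargate-ns -o wide
kubectl get nodes -l eks.amazonaws.com/compute-type=fargate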
Monitoring Setup (CloudWatch Container Insights)
# CloudWatch agent configuration
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/production-cluster/;s/{{region_name}}/us-west-2/" | kubectl apply -f -
# Prometheus configuration
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks.yaml
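The Container Insights manifests deploy into the amazon-cloudwatch namespace; a quick way to confirm the agents are running before looking for data under CloudWatch Container Insights:
# CloudWatch agent and log forwarder pods
kubectl get pods -n amazon-cloudwatch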
AWS CLI & kubectl Operations
# Get cluster information
aws eks describe-cluster --name production-cluster --region us-west-2
# Node group information
aws eks describe-nodegroup \
--cluster-name production-cluster \
--nodegroup-name standard-workers \
--region us-west-2
# Update kubeconfig
aws eks update-kubeconfig \
--region us-west-2 \
--name production-cluster
# Addon management
aws eks list-addons --cluster-name production-cluster --region us-west-2
aws eks describe-addon --cluster-name production-cluster --addon-name vpc-cni --region us-west-2
# Delete cluster
eksctl delete cluster --name production-cluster --region us-west-2