GitHub Actions Tutorial for Cloud Engineers: CI/CD From Zero to Production (2026)
If you're building cloud infrastructure but still deploying manually — SSH into servers, running scripts by hand, clicking buttons in the AWS console — you're not working like a modern cloud engineer. Continuous Integration and Continuous Deployment (CI/CD) is not a nice-to-have anymore. It's the baseline.
GitHub Actions is the most widely adopted CI/CD platform in 2026. It's deeply integrated with GitHub (where most teams store their code), it's free for public repositories, and it has a massive ecosystem of reusable actions. Most importantly, it's what you'll encounter at the companies paying $115K–$165K for cloud engineers.
This tutorial builds a complete, production-grade GitHub Actions pipeline from scratch. By the end, you'll have a workflow that: runs tests, lints code, builds a Docker image, pushes it to Amazon ECR, and deploys it to AWS ECS Fargate. That's the real production stack.
What is GitHub Actions?
GitHub Actions is a CI/CD and automation platform built into GitHub. When events happen in your repository — a pull request is opened, code is pushed to main, a release is tagged — GitHub Actions runs automated workflows.
A workflow is a YAML file that defines:
- When to run (on push, on PR, on schedule, manually)
- What to run (jobs made up of steps)
- Where to run (GitHub-hosted runners: Ubuntu, macOS, Windows)
Workflows live in your repository at .github/workflows/. GitHub detects them automatically.
Your First Workflow: Linting and Testing
Let's start with the most common workflow: running tests on every push.
```yaml
# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  test:
    name: Test & Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "22"
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint

      - name: Run tests
        run: npm test
```
This workflow:
- Triggers on pushes to main/develop and on PRs to main
- Checks out your code
- Sets up Node.js 22 with npm caching (faster subsequent runs)
- Runs your linter and tests
GitHub shows a green checkmark on your commit if everything passes and a red X if anything fails. Add a branch protection rule that requires this check, and pull requests are gated automatically: broken code can't merge.
Key Concepts: Triggers, Jobs, and Steps
Triggers (on:) define when the workflow runs:
```yaml
on:
  push:              # Any push to any branch
  pull_request:      # PR opened, synchronized, or reopened
  schedule:          # Cron syntax: every day at midnight UTC
    - cron: "0 0 * * *"
  workflow_dispatch: # Manual trigger with optional inputs
```
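As a sketch of that last trigger: `workflow_dispatch` can declare typed inputs that appear as form fields in the GitHub UI (the `environment` input and echo step below are hypothetical, not part of the pipeline built in this tutorial):

```yaml
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Target environment"
        type: choice
        options: [staging, production]
        default: staging

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # The chosen value is available through the inputs context
      - run: echo "Deploying to ${{ inputs.environment }}"
```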
Jobs run in parallel by default. Use needs: to sequence them:
```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps: [...]

  test:
    runs-on: ubuntu-latest
    needs: lint  # Only runs after lint passes
    steps: [...]

  deploy:
    runs-on: ubuntu-latest
    needs: [lint, test]  # Requires both to pass
    steps: [...]
```
Steps run sequentially within a job. A step either runs a shell command (run:) or uses a pre-built action (uses:):
```yaml
steps:
  - name: Install AWS CLI
    run: |
      curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
      unzip awscliv2.zip
      sudo ./aws/install

  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v4
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: us-east-1
```
Secrets Management in GitHub Actions
Never hardcode credentials in workflows. GitHub provides encrypted secrets:
1. Go to your repo → Settings → Secrets and variables → Actions
2. Add secrets: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, etc.
3. Reference them in workflows: `${{ secrets.MY_SECRET }}`
Secrets are masked in logs automatically.
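A common pattern is to expose a secret to exactly one step via `env:` instead of interpolating it into a command line (where it could leak into process listings). The secret name and script path below are hypothetical:

```yaml
- name: Call an external API
  env:
    API_TOKEN: ${{ secrets.API_TOKEN }}  # Hypothetical secret; visible only to this step
  run: ./scripts/call-api.sh             # Script reads $API_TOKEN from the environment
```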
Building a Docker CI/CD Pipeline for AWS ECS
Here's a production-grade pipeline that builds a Docker image and deploys to ECS Fargate:
```yaml
# .github/workflows/deploy.yml
name: Build and Deploy to ECS

on:
  push:
    branches: [main]

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: my-app
  ECS_SERVICE: my-app-service
  ECS_CLUSTER: my-cluster
  CONTAINER_NAME: my-app

jobs:
  deploy:
    name: Build, Push, Deploy
    runs-on: ubuntu-latest
    environment: production  # Optional: requires manual approval

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build, tag, and push image to ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Download current task definition
        run: |
          aws ecs describe-task-definition --task-definition my-app-task \
            --query taskDefinition > task-definition.json

      - name: Update ECS task definition with new image
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy to ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true  # Waits for deployment to complete
```
This is the standard ECS deployment pattern — you'll find close variants of it in production pipelines at most AWS shops. When code merges to main:
- AWS credentials are configured
- Docker image is built and pushed to ECR with the commit SHA as the tag
- The ECS task definition is updated with the new image
- ECS rolling deployment is triggered and waits for stability
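If image builds become a bottleneck, one option (an addition, not something the pipeline above requires) is to replace the plain `docker build`/`docker push` step with `docker/build-push-action` and GitHub Actions layer caching. A sketch, reusing the registry output from the ECR login step:

```yaml
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Build and push with layer caching
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: ${{ steps.login-ecr.outputs.registry }}/my-app:${{ github.sha }}
    cache-from: type=gha         # Pull previously cached layers from the Actions cache
    cache-to: type=gha,mode=max  # Write all layers back to the cache
```

Unchanged layers (base image, dependency installs) are then reused across runs instead of rebuilt every time.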
Environment-Specific Deployments
Production pipelines deploy differently based on branch or tag:
```yaml
jobs:
  deploy-staging:
    if: github.ref == 'refs/heads/develop'
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.STAGING_AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.STAGING_AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

  deploy-production:
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production  # Requires manual approval gate
    needs: test  # Don't depend on deploy-staging: it's skipped on main, and skipped dependencies skip this job too
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.PROD_AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.PROD_AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
```
The environment: production setting allows you to require manual reviewer approval before the job runs — a critical safety gate for production deployments.
Useful Patterns for Cloud Engineers
Matrix builds — test against multiple versions:
```yaml
strategy:
  matrix:
    node-version: [18, 20, 22]
    os: [ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
  - uses: actions/setup-node@v4
    with:
      node-version: ${{ matrix.node-version }}
```
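Two matrix options worth knowing: `fail-fast: false` lets the remaining combinations finish when one fails, and `exclude` drops specific combinations (the excluded pair below is illustrative):

```yaml
strategy:
  fail-fast: false  # Don't cancel the other jobs when one combination fails
  matrix:
    node-version: [18, 20, 22]
    os: [ubuntu-latest, macos-latest]
    exclude:
      - os: macos-latest
        node-version: 18  # Skip this single combination
```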
Caching — speed up CI by caching dependencies:
```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-
```
Conditional steps — run only on certain conditions:
```yaml
- name: Deploy to production
  if: github.ref == 'refs/heads/main' && github.event_name == 'push'
  run: ./deploy.sh
```
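Conditions can also use status functions such as `failure()` and `always()`. For example, a step that runs only when an earlier step in the job failed (the echo is a placeholder for a real notification):

```yaml
- name: Report failure
  if: failure()  # Runs only if a previous step in this job failed
  run: echo "Build failed for ${{ github.sha }} on ${{ github.ref }}"
```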
Artifacts — save build outputs:
```yaml
- name: Upload test results
  uses: actions/upload-artifact@v4
  with:
    name: test-results
    path: coverage/
    retention-days: 7
```
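A later job in the same workflow can retrieve what was saved with the counterpart action, `actions/download-artifact`:

```yaml
- name: Download test results
  uses: actions/download-artifact@v4
  with:
    name: test-results  # Must match the name used at upload
    path: coverage/     # Where to place the downloaded files
```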
Terraform CI/CD with GitHub Actions
Infrastructure-as-Code pipelines follow a specific pattern:
```yaml
name: Terraform

on:
  pull_request:
    paths: ["terraform/**"]
  push:
    branches: [main]
    paths: ["terraform/**"]

jobs:
  terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: terraform
    steps:
      - uses: actions/checkout@v4

      - uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init

      - name: Terraform Format Check
        run: terraform fmt -check

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        run: terraform plan -no-color
        env:
          TF_VAR_db_password: ${{ secrets.DB_PASSWORD }}

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -auto-approve
        env:
          TF_VAR_db_password: ${{ secrets.DB_PASSWORD }}
```
The pattern: PRs get a terraform plan (shows what would change), main branch merges trigger terraform apply (applies the change). This prevents infrastructure surprises.
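One guard worth adding on top of the example above (an assumption, not part of it): a `concurrency` group at the workflow level, so two applies never run against the same Terraform state at once:

```yaml
concurrency:
  group: terraform-${{ github.ref }}  # One run per branch at a time
  cancel-in-progress: false           # Queue new runs rather than cancel an in-flight apply
```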
Security Best Practices for GitHub Actions
Use pinned action versions:
```yaml
# Vulnerable — a branch reference can be changed underneath you
uses: actions/checkout@main

# Better — a specific version tag
uses: actions/checkout@v4

# Best — pinned to an exact, immutable commit SHA
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
```
Minimal IAM permissions: Create IAM roles with only the permissions your pipeline needs. Don't use admin credentials. For ECS deployments, you need: ECR push/pull, ECS task registration, and ECS service update.
OIDC instead of long-lived credentials: AWS supports GitHub's OIDC provider, which means your workflow can assume an IAM role directly without storing AWS keys as secrets:
```yaml
permissions:
  id-token: write  # Required: lets the job request a GitHub OIDC token
  contents: read

# ...

- name: Configure AWS credentials via OIDC
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/github-actions-role
    aws-region: us-east-1
```

Note the `permissions` block: without `id-token: write`, the job can't request the OIDC token and the role assumption fails.
This is the modern, most secure approach — no long-lived credentials that can be compromised.
What Interviewers Look For
When you're interviewing for cloud engineering or DevOps roles, interviewers want to know you can build and reason about CI/CD pipelines. Common questions:
- "Walk me through your CI/CD pipeline"
- "How would you prevent broken code from reaching production?"
- "How do you handle secrets in your pipelines?"
- "Explain blue/green vs. rolling deployments"
Candidates who have actually built pipelines — not just watched tutorials — can answer these specifically and credibly. A portfolio with a real GitHub Actions workflow that deploys to ECS is worth more than any certification.
The Bottom Line
GitHub Actions is the standard CI/CD tool for cloud engineers in 2026. If you understand: triggers, jobs, steps, secrets, AWS credential configuration, Docker build/push, and ECS deployment — you're ready for the CI/CD component of any cloud engineering interview.
The entire workflow above is production-ready. It's the kind of thing you'll build and maintain as a cloud engineer from day one.
*CloudPath Academy's hands-on curriculum includes building complete CI/CD pipelines in GitHub Actions, deploying to ECS Fargate, and Terraform automation — the exact skills that cloud engineering employers test in technical interviews.*