Building, pushing, and deploying a Laravel + Next.js app with Docker, ECR, and a self-hosted runner.
I set up a pipeline that builds Docker images for a Laravel backend and a Next.js frontend, pushes them to AWS ECR, and deploys with Docker Compose on a self-hosted server. The goal was a simple and repeatable setup.
Below is the setup, plus a few issues I ran into and what I learned.
The stack: a Laravel backend (run in dev with `php artisan serve`) and a Next.js frontend (`npm start`), each built from its own Dockerfile.

The CI workflow (`.github/workflows/ci.yml`) runs on pushes/PRs to `master`. It:

- Builds Docker images for the backend and frontend
- Tags each image with the commit SHA and with `latest`
- Pushes both tags to ECR

Key bits from my workflow:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-north-1

      - uses: aws-actions/amazon-ecr-login@v1
        id: login-ecr

      - name: Build and Push Backend Docker Images
        run: |
          docker build -t 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-backend:${{ github.sha }} ./back-end
          docker build -t 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-backend:latest ./back-end
          docker push 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-backend:${{ github.sha }}
          docker push 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-backend:latest

      - name: Build and Push Frontend Docker Images
        run: |
          docker build -t 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-frontend:${{ github.sha }} ./front-end
          docker build -t 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-frontend:latest ./front-end
          docker push 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-frontend:${{ github.sha }}
          docker push 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-frontend:latest
```
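Both ECR repositories have to exist before the first push. If you're reproducing this, a one-time sketch with the AWS CLI (same repository names and region as my setup; the account ID would be yours):

```bash
aws ecr create-repository --repository-name azubi-cls-backend --region eu-north-1
aws ecr create-repository --repository-name azubi-cls-frontend --region eu-north-1
```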
Notes on tagging:
- Every build is pushed with two tags: the commit SHA and `latest`. The SHA gives an immutable reference per build; `latest` is convenient for default deploys.

The deployment workflow (`.github/workflows/cd.yml`) triggers on CI completion. It runs on my self-hosted runner (the same machine that runs Docker Compose) and:

- Pulls the `latest` images from ECR
- Restarts the Compose services

```yaml
name: Deployment Pipeline

on:
  workflow_run:
    workflows: ["CI/CD Pipeline"]
    types: [ completed ]

jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-north-1

      - uses: aws-actions/amazon-ecr-login@v1

      - name: Pull latest images
        run: |
          docker pull 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-backend:latest
          docker pull 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-frontend:latest

      - name: Restart Docker Compose services
        working-directory: /home/basit/Desktop/github_projects/CI-CD Pipeline
        run: |
          docker compose down
          docker compose up -d

      - name: Prune old images
        run: docker image prune -af
```
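One thing I may change: `docker compose down` briefly takes everything offline. A sketch of a gentler variant (not what I currently run): skip `down` entirely, since `up -d` only recreates services whose image or configuration changed.

```bash
# Pull new images, then recreate only the services whose image changed;
# skipping `down` keeps the network and unchanged containers in place.
docker compose pull
docker compose up -d
```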
A few details that matter here:

- `runs-on: self-hosted`: I installed the GitHub Actions runner on the same server that runs Docker and Docker Compose.
- `working-directory` is set to the Compose folder, `/home/basit/Desktop/github_projects/CI-CD Pipeline`, so `docker compose` runs where the `docker-compose.yaml` file lives.
- The runner's user can use Docker directly (it's in the `docker` group), so the deploy step can pull images and restart services; see the sketch below.
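For that last bullet, the one-time setup is small (a sketch, assuming the runner runs as `basit`, the user from the path above):

```bash
# Let the runner's user talk to the Docker daemon without sudo
sudo usermod -aG docker basit
# Membership applies on next login; restarting the runner service picks it up
```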
The Compose file (`docker-compose.yaml`) defines the app network and services. The key is that everything runs on the same internal bridge network so services can talk to each other by service name.

Highlights:

- All services join `app-network`
- The backend is reachable at `http://backend:8000` from other containers
- Postgres is reachable as `postgres:5432`, Redis as `redis:6379`

```yaml
services:
  backend:
    image: 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-backend:latest
    environment:
      DB_HOST: postgres
      REDIS_HOST: redis
    networks: [ app-network ]

  frontend:
    image: 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-frontend:latest
    environment:
      BACKEND_API_HOST: http://backend:8000
    depends_on: [ backend ]
    networks: [ app-network ]

networks:
  app-network:
    driver: bridge
```
Note: In containers, prefer service names (internal DNS) over host IPs.
I initially pointed the frontend to `http://172.17.0.1:8000`. It “worked” in some cases, but it's unreliable and depends on Docker's host networking details. The fix was to use the internal Docker DNS name: `http://backend:8000`. Once I switched to service-to-service calls over the bridge network, requests were reliable.
Lesson: Containers should communicate on the Docker internal network using service names, not host IPs.
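A quick way to verify the internal DNS path (a sketch; it assumes `curl` exists in the frontend image, which slim images often lack):

```bash
# Run from the Compose project directory: hit the backend by service name
# from inside the frontend container.
docker compose exec frontend curl -s http://backend:8000
```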
I used GitHub Secrets for AWS credentials in both CI and CD. This kept sensitive values out of the repo and logs. It also made it easy to rotate keys without changing code.
Lesson: Put anything sensitive (AWS keys, registry creds, DB passwords) in GitHub Secrets or the runner’s secret store—don’t commit them.
Possible improvement: Switch to GitHub OIDC for AWS (no long-lived keys) and use GitHub Environments for environment-specific protection.
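For reference, the OIDC variant would look roughly like this. This is a sketch: `AWS_ROLE_ARN` is a hypothetical secret pointing at an IAM role you'd create with ECR push permissions and a GitHub OIDC trust policy.

```yaml
permissions:
  id-token: write   # lets the job request an OIDC token
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v2
    with:
      role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
      aws-region: eu-north-1
```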
Pushing both latest and ${{ github.sha }} gives me a fast default and a precise rollback target. If latest misbehaves, I can pin Compose to a known-good SHA tag and redeploy.
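A rollback sketch (the SHA is a placeholder for a real tag from a past CI run): add an override file next to `docker-compose.yaml` that pins the image, then redeploy.

```yaml
# docker-compose.override.yaml — Compose merges this over the base file automatically
services:
  backend:
    image: 303759786442.dkr.ecr.eu-north-1.amazonaws.com/azubi-cls-backend:<known-good-sha>
```

After that, `docker compose up -d` recreates the backend from the pinned tag.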
Next steps:

- Move `POSTGRES_PASSWORD` into secrets/`.env` (sketched below)
- Set `BACKEND_API_HOST` per environment (dev/stage/prod)
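For the Postgres item, a minimal sketch of the Compose side; my actual Postgres service isn't shown above, so the image tag here is an assumption. The `.env` file sits next to `docker-compose.yaml` and stays out of git.

```yaml
services:
  postgres:
    image: postgres:16   # assumed; match whatever the real service uses
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}   # substituted from .env at compose time
    networks: [ app-network ]
```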