A practical, repeatable deployment workflow using Bash, SSH, Docker, Docker Compose, and Nginx.
I recently took on a project to write a Bash script that deploys applications to remote servers, installs Docker and Docker Compose, configures Nginx as a reverse proxy, builds and runs containers, and verifies the app is actually reachable over HTTP. The goal was to go from a fresh VM to a working, proxied app without manual tweaking.
Along the way I hit a few bumps, mostly on the Nginx side while wiring the reverse proxy. This write-up walks through what I built, what went wrong, and the exact pieces that made it reliable.
At a high level, the script:
There are several guardrails to keep deploys safe and repeatable:
You provide:
The script checks access first using git ls-remote, then clones using the branch you specify. The token is stripped from any command output before it is logged.
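That access check can be sketched as a small function (the function name is mine; the `--exit-code` flag makes `ls-remote` return non-zero when no matching ref exists, which is what lets the check fail on a bad branch as well as a bad token or URL):

```shell
# Hypothetical helper: confirm the URL and branch are reachable before cloning.
# --exit-code makes git exit non-zero when no matching ref is found.
check_repo_access() {
    local url="$1" branch="$2"
    git ls-remote --exit-code --heads "$url" "$branch" >/dev/null 2>&1
}
```

Called as `check_repo_access "$AUTH_URL" "$BRANCH"` before the clone, this fails fast on a bad token, URL, or branch name without writing anything to disk.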
# Clone while masking the token from logs. pipefail makes the `if` reflect
# git's exit status rather than tee's, and sed masks the token instead of
# dropping whole output lines.
set -o pipefail
if git clone -b "$BRANCH" "$AUTH_URL" repo_temp 2>&1 \
    | sed "s|$ACCESS_TOKEN|***|g" | tee -a "$LOG_FILE"; then
    echo "Repository cloned successfully"
fi
The deploy stops early if the repository does not include any of these files:
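Based on the Dockerfile/compose detection the deploy step performs later, the guard presumably looks something like this (a sketch; the exact required-file list is an assumption):

```shell
# Abort early when the checkout has nothing we know how to deploy
# (the required-file list here mirrors the deploy step's checks)
has_deployable_files() {
    local dir="$1"
    [ -f "$dir/Dockerfile" ] || \
    [ -f "$dir/docker-compose.yml" ] || \
    [ -f "$dir/docker-compose.yaml" ]
}

# In the script: has_deployable_files repo_temp || exit 1
```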
The script verifies SSH connectivity and then prepares the remote machine:
SSH connectivity test:
ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o BatchMode=yes \
-i "$SSH_KEY_PATH" "$USERNAME@$SERVER_IP" "echo 'SSH connection successful'"
The repository is archived locally with common build and cache folders excluded, then copied to the server and extracted in the home directory.
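A sketch of that packaging step (the exclusion list and function name are assumptions; adjust the excludes for your stack):

```shell
# Package the checkout, leaving out VCS data and common build/cache folders
archive_repo() {
    local src="$1" out="$2"
    tar -czf "$out" \
        --exclude='.git' \
        --exclude='node_modules' \
        --exclude='__pycache__' \
        -C "$src" .
}

# Then copy and extract on the server, roughly:
#   scp -i "$SSH_KEY_PATH" app.tar.gz "$USERNAME@$SERVER_IP:~/"
#   ssh -i "$SSH_KEY_PATH" "$USERNAME@$SERVER_IP" "tar -xzf ~/app.tar.gz"
```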
Two paths are supported:
The script waits briefly for startup, prints container status and recent logs, and checks that something is actually running.
Snippet from the deploy step:
if [ -f "docker-compose.yml" ] || [ -f "docker-compose.yaml" ]; then
    $COMPOSE_CMD up -d --build
else
    $DOCKER_CMD build -t my_app_image .
    $DOCKER_CMD run -d -p "$APP_PORT:$APP_PORT" --name my_app_container \
        --restart unless-stopped my_app_image
fi
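The `$COMPOSE_CMD` variable above has to account for both the Compose v2 plugin (`docker compose`) and the legacy standalone binary (`docker-compose`). A small probe handles it (the function name is mine):

```shell
# Prefer the Compose v2 plugin; fall back to the legacy standalone binary
detect_compose_cmd() {
    if docker compose version >/dev/null 2>&1; then
        echo "docker compose"
    elif command -v docker-compose >/dev/null 2>&1; then
        echo "docker-compose"
    else
        return 1
    fi
}

# In the script: COMPOSE_CMD=$(detect_compose_cmd) || exit 1
```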
The script:
Health check loop:
max_attempts=5
attempt=1
while [ "$attempt" -le "$max_attempts" ]; do
    echo "Attempt $attempt/$max_attempts: Testing http://localhost:$APP_PORT"
    if curl -f -m 10 "http://localhost:$APP_PORT" >/dev/null 2>&1; then
        echo "Application is responding on port $APP_PORT"
        break
    else
        [ "$attempt" -lt "$max_attempts" ] && sleep 5
    fi
    attempt=$((attempt + 1))
done
The script removes the default site and writes a minimal server block that proxies to the app container port. It tests the Nginx configuration and reloads it if valid.
Result:
On your local machine:
chmod +x deploy.sh
./deploy.sh
Follow the prompts. Provide the repository URL, token, branch, SSH details, and the application port. The script will do the rest and print two URLs at the end.
These are easy to adapt if your setup is different:
Getting the reverse proxy right took a few iterations. Here are the issues I ran into and how I fixed them:
This is the server block the script writes on the remote host:
server {
    listen 80;
    server_name SERVER_PUBLIC_IP _;

    location / {
        proxy_pass http://localhost:APP_PORT;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
And the enable flow the script uses:
sudo rm -f /etc/nginx/sites-enabled/default
sudo ln -sf /etc/nginx/sites-available/app /etc/nginx/sites-enabled/app
sudo nginx -t && sudo systemctl reload nginx
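The `SERVER_PUBLIC_IP` and `APP_PORT` placeholders in that server block have to be substituted before the file lands in `sites-available`. A sed-based sketch (the function name and template path are assumptions):

```shell
# Render the Nginx template with the real IP and port
render_site_conf() {
    local template="$1" server_ip="$2" port="$3"
    sed -e "s/SERVER_PUBLIC_IP/$server_ip/g" \
        -e "s/APP_PORT/$port/g" "$template"
}

# In the script, roughly:
#   render_site_conf app.conf.template "$SERVER_IP" "$APP_PORT" \
#       | sudo tee /etc/nginx/sites-available/app >/dev/null
```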
A candid note: even after these fixes, I still could not get the reverse proxy to behave exactly the way I wanted on the first pass. I did not ship the perfect config, but I learned a lot about how Nginx prioritizes sites, how headers impact WebSockets, and why testing with nginx -t before reload saves time.
This script is readable, and it does one thing well: it turns a fresh server into a running, proxied container deployment in a few minutes while making every step visible. If you prefer predictable deploys you can reason about, this approach works.