A CLI tool that recreates AWS VPC functionality using Linux network namespaces, bridges, and iptables. Build isolated networks, public/private subnets, NAT gateways, and VPC peering on bare Linux.
I built a command-line tool called vpcctl that recreates AWS VPC functionality on Linux using only native primitives: network namespaces, bridges, veth pairs, and iptables. The goal was to make it easy to spin up isolated VPC topologies locally for learning, testing, and demos without touching any cloud APIs.
The tool lets you create VPCs (bridges), add public and private subnets (namespaces), configure NAT gateways for internet access, apply firewall policies, and even set up VPC peering between separate networks. Everything runs on a single Linux host or across multiple hosts with proper routing.
Below is how I built it, the commands it exposes, and the problems I solved along the way.
The tool replicates AWS VPC concepts using Linux primitives:
                  +----------+
                  | Internet |
                  +-----+----+
                        |
                 NAT (MASQUERADE)
                        |
+-----------------------+----------------------------+
| TestVPC (10.0.0.0/16)                              |
| Bridge: br-TestVPC                                 |
| Gateway: 10.0.0.1                                  |
+---------+--------------------+---------------------+
          |                    |
+---------+--------+  +--------+---------+   +------------+
| Public Subnet    |  | Private Subnet   |---| Peering    |
| 10.0.1.0/24      |  | 10.0.2.0/24      |   | Connection |
|                  |  |                  |   +-----+------+
| - NAT Gateway    |  | - No Internet    |         |
| - HTTP/HTTPS     |  | - MySQL (3306)   |         |
| - SSH (VPC)      |  | - SSH (Public)   |         |
+------------------+  +------------------+         |
                                                    |
                                    VPC2 (192.168.0.0/16)
                                    Bridge: br-VPC2
                                         |
                                         +--> 192.168.1.0/24 (web subnet)
The tool is organized as a main CLI (vpcctl) and small Bash modules in lib/:
- common.sh: logging, argument parsing, sudo handling
- vpc.sh: create/destroy VPC bridges
- subnet.sh: create/delete subnets (namespaces + veths)
- routing.sh: configure IP forwarding and routes
- firewall.sh: apply iptables/nftables rules
- peering.sh: set up VPC peering with static routes

Each operation is idempotent: check if the resource exists, create it only if missing, record state in config/vpc.conf. This makes commands safe to re-run.
Initially, I created namespaces but forgot to set default routes inside them. Ping to the gateway worked, but pinging other subnets failed. The fix was to add a default route in each namespace pointing to the bridge gateway.
Lesson: Namespaces have completely isolated routing tables. You must explicitly configure routes even if interfaces are connected.
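For reference, the fix is a single route added inside each namespace. This sketch uses the namespace and gateway names from the walkthrough later in the post:

# add a default route inside the namespace pointing at the bridge gateway
sudo ip netns exec AppVPC-web ip route add default via 10.0.0.1

# confirm the routing table inside the namespace
sudo ip netns exec AppVPC-web ip route show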
When I ran add-subnet multiple times, duplicate NAT rules piled up in iptables. This caused performance issues and a confusing rule list.
Fix: Use iptables -C to check if a rule exists before adding it:
iptables -t nat -C POSTROUTING -s 10.0.1.0/24 -j MASQUERADE 2>/dev/null || \
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -j MASQUERADE
This pattern made all operations idempotent: safe to re-run without side effects.
Different Linux distros use different firewall backends. Newer systems use nftables, older ones use iptables. Commands are not compatible.
Fix: Detect which is available at runtime:
if command -v nft >/dev/null 2>&1; then
# use nft commands
else
# use iptables commands
fi
This made the tool portable across Ubuntu, Fedora, and Arch.
When peering two VPCs with overlapping CIDRs (e.g., both using 10.0.0.0/16), routing broke. One VPC's routes shadowed the other's.
Fix: I enforced CIDR uniqueness checks in the peering logic. The tool now refuses to peer VPCs with overlapping address spaces.
Lesson: AWS prevents overlapping CIDRs in VPC peering for good reason. I replicated that validation.
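Here is a minimal sketch of the kind of overlap check the peering logic performs. It is illustrative only, not the exact code from vpcctl:

#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    local IFS=. a b c d
    read -r a b c d <<< "$1"
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Return success (0) if the two CIDR blocks overlap.
cidrs_overlap() {
    local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
    local start1 end1 start2 end2
    start1=$(( $(ip_to_int "$net1") & (0xFFFFFFFF << (32 - len1)) & 0xFFFFFFFF ))
    end1=$(( start1 + (1 << (32 - len1)) - 1 ))
    start2=$(( $(ip_to_int "$net2") & (0xFFFFFFFF << (32 - len2)) & 0xFFFFFFFF ))
    end2=$(( start2 + (1 << (32 - len2)) - 1 ))
    (( start1 <= end2 && start2 <= end1 ))
}

# both VPCs use 10.0.0.0/16, so peering must be refused
cidrs_overlap 10.0.0.0/16 10.0.0.0/16 && echo "CIDRs overlap: refusing to peer"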
If the tool crashed mid-operation, vpc.conf sometimes had partial entries or duplicate lines. This broke subsequent list operations.
Fix: Use atomic writes. Write to a temp file, then mv it over the original:
echo "$new_entry" >> /tmp/vpc.conf.tmp
mv /tmp/vpc.conf.tmp config/vpc.conf
This ensured state updates were all-or-nothing.
Multi-host peering with tunnels: Right now cross-host peering requires manual static routes. I'd add automatic VPN/tunnel setup (WireGuard or VXLAN) so VPCs on different hosts can communicate seamlessly.
DHCP for subnets: Currently, IPs are statically assigned (10.0.1.10). Adding a lightweight DHCP server would make it more cloud-like where instances get IPs dynamically.
Web UI or TUI: A terminal UI (using something like dialog or whiptail) would make it easier to visualize VPC topologies and manage resources interactively.
Kubernetes CNI mode: Package the tool as a CNI plugin so it can provision network namespaces for Kubernetes pods, replicating VPC-style isolation in K8s.
Logging and metrics: Add structured logging and export metrics (namespace count, packet counters from iptables) to Prometheus for monitoring.
Building vpcctl taught me more about Linux networking than any tutorial could. Implementing VPC concepts from scratch (namespaces, bridges, veth pairs, NAT, routing, and firewalling) forced me to understand what each piece does and why it matters.
The most valuable lesson: cloud abstractions like AWS VPC aren't magic. They're thin layers over well-understood primitives that have existed in Linux for decades. Once you understand the primitives, you can build your own abstractions.
If you're learning networking or DevOps, I highly recommend building something like this. It's messy, you'll hit weird bugs (RTNETLINK errors, routing loops, firewall drops), but solving them gives you intuition that's hard to get from reading docs.
The tool is open source and available on GitHub. Try it out, break it, improve it. That's how I learned.
Below is a walkthrough of how to use vpcctl to build a complete VPC topology with public and private subnets, a NAT gateway, and VPC peering. I explain what each command does and what happens under the hood.
First, I create a VPC with a CIDR block. This sets up the bridge and gateway.
sudo ./vpcctl create-vpc --name AppVPC --cidr 10.0.0.0/16
What this does:
- Creates a bridge named br-AppVPC
- Assigns the gateway IP 10.0.0.1/16 to the bridge
- Records the VPC in config/vpc.conf
- Enables IP forwarding (sysctl net.ipv4.ip_forward=1)

The bridge acts as the VPC router. All subnets will attach to this bridge via veth pairs. Without the bridge, namespaces can't communicate with each other.
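You can confirm the result with standard ip and sysctl commands (exact output varies by system):

ip link show br-AppVPC          # bridge exists and is UP
ip addr show br-AppVPC          # carries the gateway address 10.0.0.1/16
sysctl net.ipv4.ip_forward      # should print 1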
Next, I add a public subnet that will have internet access via NAT.
sudo ./vpcctl add-subnet \
--vpc AppVPC \
--name web \
--cidr 10.0.1.0/24 \
--type public
What this does:
- Creates a namespace named AppVPC-web
- Creates a veth pair, moves one end into the namespace, and attaches the other end to the bridge
- Assigns 10.0.1.10/24 inside the namespace and adds a default route to the gateway
- Adds a NAT rule: iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -j MASQUERADE

The --type public flag is key: it adds the MASQUERADE rule so traffic from this subnet can reach the internet. The source IP is rewritten to the host's public IP, just like an AWS NAT gateway.
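A quick check that NAT is working (assuming the host itself has internet access):

# the public subnet should reach the internet through the MASQUERADE rule
sudo ip netns exec AppVPC-web ping -c 1 8.8.8.8

# and the NAT rule should be visible on the host
sudo iptables -t nat -L POSTROUTING -n | grep 10.0.1.0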
Now I add a private subnet without direct internet access.
sudo ./vpcctl add-subnet \
--vpc AppVPC \
--name database \
--cidr 10.0.2.0/24 \
--type private
What this does:
- Creates a namespace named AppVPC-database
- Creates a veth pair and attaches it to the same bridge
- Assigns 10.0.2.10/24 inside the namespace
- Does not add a MASQUERADE rule

Private subnets can still talk to public subnets (both are on the same bridge), but they have no MASQUERADE rule. If you try to ping 8.8.8.8 from the private subnet, it will fail.
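You can verify both halves of that claim (IPs follow the walkthrough's conventions):

# intra-VPC traffic works: the private subnet reaches the public subnet
sudo ip netns exec AppVPC-database ping -c 1 10.0.1.10

# internet traffic fails: there is no MASQUERADE rule for 10.0.2.0/24
sudo ip netns exec AppVPC-database ping -c 1 -W 2 8.8.8.8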
Check what's been created:
sudo ./vpcctl list-vpcs
sudo ./vpcctl list-subnets
This reads config/vpc.conf and shows all VPCs, their CIDR blocks, bridges, and subnets. The state file looks like:
VPC:AppVPC:10.0.0.0/16:br-AppVPC:1762964516
SUBNET:AppVPC:web:10.0.1.0/24:public:AppVPC-web:1762964605
SUBNET:AppVPC:database:10.0.2.0/24:private:AppVPC-database:1762964738
This makes it easy to track resources and avoid name collisions.
To verify the VPC works, I execute commands inside the namespaces:
# Start a web server in the database subnet
sudo ip netns exec AppVPC-database python3 -m http.server 8000 &
# From the web subnet, curl the database subnet
sudo ip netns exec AppVPC-web curl -f http://10.0.2.10:8000
ip netns exec AppVPC-web runs the command inside the AppVPC-web namespace. If the curl succeeds, traffic is flowing between subnets via the bridge.
This confirms:
- Both namespaces are up and attached to the bridge through their veth pairs
- The bridge forwards traffic between the two subnets
- Routing inside each namespace points at the gateway correctly
I can define security groups in config/security-groups.json and apply them:
sudo ./vpcctl apply-firewall --strict
What this does:
- Reads the policies from config/security-groups.json
- With --strict, sets the default policy to DROP, then allows only the specified ports

Sample policy:
{
"policies": [
{
"subnet": "10.0.1.0/24",
"ingress": [
{
"port": 80,
"protocol": "tcp",
"source": "0.0.0.0/0",
"action": "allow"
}
]
}
]
}
The tool detects whether the system uses iptables or nftables and applies the right commands.
To connect two VPCs, I use peering:
# Create a second VPC
sudo ./vpcctl create-vpc --name PartnerVPC --cidr 192.168.0.0/16
sudo ./vpcctl add-subnet --vpc PartnerVPC --name api --cidr 192.168.1.0/24 --type public
# Peer the two VPCs
sudo ./vpcctl peer-vpcs --vpc1 AppVPC --vpc2 PartnerVPC
What this does:
- Creates a veth pair (vp-AppVPC and vp-PartnerVPC) and attaches one end to each bridge
- Adds static routes so each VPC knows how to reach the other's CIDR block

Now subnets in AppVPC can reach subnets in PartnerVPC and vice versa.
To test:
# From AppVPC-web, ping PartnerVPC-api
sudo ip netns exec AppVPC-web ping -c 3 192.168.1.10
If the ping succeeds, VPC peering is working.
To tear down a VPC:
sudo ./vpcctl delete-vpc --name AppVPC
What this does:
- Deletes every subnet in the VPC (namespaces and their veth pairs)
- Removes the NAT and firewall rules associated with those subnets
- Deletes the bridge
- Removes the matching entries from config/vpc.conf

The tool handles cleanup in the right order to avoid errors.
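The underlying commands look roughly like this (a sketch, not the exact vpcctl code; the ordering matters because the rules and namespaces reference the bridge):

# delete namespaces first; removing a namespace also destroys its veth pair
sudo ip netns del AppVPC-web
sudo ip netns del AppVPC-database

# remove the NAT rule for the public subnet (ignore errors if it is already gone)
sudo iptables -t nat -D POSTROUTING -s 10.0.1.0/24 -j MASQUERADE 2>/dev/null || true

# finally remove the bridge itself
sudo ip link del br-AppVPC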
Below is how I actually implemented the tool: the technical decisions, the low-level Linux commands it wraps, and the patterns I used to make it reliable.
I organized the code into small, focused Bash modules:
lib/
├── common.sh    # logging, argument parsing, sudo wrapper
├── vpc.sh       # VPC (bridge) management
├── subnet.sh    # subnet (namespace + veth) management
├── routing.sh   # IP forwarding and route configuration
├── firewall.sh  # iptables/nftables rule management
└── peering.sh   # VPC peering with static routes
The main vpcctl script sources these modules and routes commands to the right functions. This kept each module under 200 lines and made testing easier.
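The dispatch pattern looks roughly like this. It is a sketch: apart from create_vpc, which appears later in this post, the function names are illustrative:

#!/usr/bin/env bash
# Sketch of the vpcctl entry point: source the modules, then dispatch.
set -euo pipefail

for module in lib/*.sh; do
    source "$module"
done

cmd=${1:-help}
shift || true

case "$cmd" in
    create-vpc)  create_vpc "$@" ;;
    add-subnet)  add_subnet "$@" ;;
    peer-vpcs)   peer_vpcs "$@" ;;
    delete-vpc)  delete_vpc "$@" ;;
    *)           echo "Usage: vpcctl <command> [options]" >&2; exit 1 ;;
esac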
When you run create-vpc --name AppVPC --cidr 10.0.0.0/16, the tool runs:
# Create the bridge
ip link add br-AppVPC type bridge
# Assign gateway IP
ip addr add 10.0.0.1/16 dev br-AppVPC
# Bring it up
ip link set br-AppVPC up
# Enable IP forwarding globally
sysctl -w net.ipv4.ip_forward=1
# Record state
echo "VPC:AppVPC:10.0.0.0/16:br-AppVPC:$(date +%s)" >> config/vpc.conf
The bridge is the VPC router. All subnets attach to this bridge, and it forwards packets between them.
When you run add-subnet --vpc AppVPC --name web --cidr 10.0.1.0/24 --type public, the tool does this:
ip netns add AppVPC-web
ip link add veth-web type veth peer name veth-web-br
ip link set veth-web netns AppVPC-web
ip netns exec AppVPC-web ip addr add 10.0.1.10/24 dev veth-web
ip netns exec AppVPC-web ip link set veth-web up
ip netns exec AppVPC-web ip link set lo up
ip link set veth-web-br master br-AppVPC
ip link set veth-web-br up
ip netns exec AppVPC-web ip route add default via 10.0.0.1
This points all traffic from the namespace to the bridge gateway. Because the subnet is public, the tool then adds the NAT rule:
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -o eth0 -j MASQUERADE
This rewrites the source IP for outgoing packets so the subnet can reach the internet.
When you run add-subnet --vpc AppVPC --name db --cidr 10.0.2.0/24 --type private, the process is almost identical:
ip netns add AppVPC-db
ip link add veth-db type veth peer name veth-db-br
ip link set veth-db netns AppVPC-db
ip netns exec AppVPC-db ip addr add 10.0.2.10/24 dev veth-db
ip netns exec AppVPC-db ip link set veth-db up
ip netns exec AppVPC-db ip link set lo up
ip link set veth-db-br master br-AppVPC
ip link set veth-db-br up
ip netns exec AppVPC-db ip route add default via 10.0.0.1
The key difference: I do NOT add the MASQUERADE rule. This means the private subnet can communicate with other subnets in the VPC (via the bridge), but it cannot reach the internet directly.
If you want private subnets to reach the internet, you'd route their traffic through a NAT instance in a public subnet. I left that as an optional feature.
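If I were to add it, the plumbing would look roughly like this: the private namespace routes through the public namespace's address (both sit on the same bridge segment), and the public namespace forwards and masquerades that traffic. This is a sketch of the idea, not code that ships with vpcctl:

# 1. In the private namespace, send all traffic to the public "NAT instance".
#    onlink is needed because 10.0.1.10 is outside 10.0.2.0/24, but both
#    namespaces share the same bridge segment.
ip netns exec AppVPC-db ip route replace default via 10.0.1.10 dev veth-db onlink

# 2. In the public namespace, forward and masquerade the private range.
ip netns exec AppVPC-web sysctl -w net.ipv4.ip_forward=1
ip netns exec AppVPC-web iptables -t nat -A POSTROUTING -s 10.0.2.0/24 -j MASQUERADE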
I needed the tool to be safe to re-run. If you run create-vpc twice, it shouldn't fail or create duplicates.
Pattern I used:
create_vpc() {
local name=$1
# Check if bridge already exists
if ip link show br-$name >/dev/null 2>&1; then
log_info "VPC $name already exists"
return 0
fi
# Create it
ip link add br-$name type bridge
# ... rest of setup
}
For iptables, I check before adding:
iptables -t nat -C POSTROUTING -s 10.0.1.0/24 -j MASQUERADE 2>/dev/null || \
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -j MASQUERADE
The -C checks if the rule exists. If it does, the || short-circuits and we don't add a duplicate.
Newer distros use nftables, older ones use iptables. I detect which is available:
if command -v nft >/dev/null 2>&1; then
FW_BACKEND=nft
else
FW_BACKEND=iptables
fi
Then I route through wrapper functions:
add_nat_rule() {
local subnet=$1
local interface=$2
if [ "$FW_BACKEND" = nft ]; then
nft add rule ip nat POSTROUTING ip saddr $subnet oifname "$interface" masquerade
else
iptables -t nat -C POSTROUTING -s "$subnet" -o "$interface" -j MASQUERADE 2>/dev/null || \
iptables -t nat -A POSTROUTING -s "$subnet" -o "$interface" -j MASQUERADE
fi
}
This kept the high-level code simple and portable.
When you run peer-vpcs --vpc1 AppVPC --vpc2 PartnerVPC, the tool connects the two bridges:
ip link add vp-AppVPC type veth peer name vp-PartnerVPC
ip link set vp-AppVPC master br-AppVPC
ip link set vp-PartnerVPC master br-PartnerVPC
ip link set vp-AppVPC up
ip link set vp-PartnerVPC up
# From AppVPC (10.0.0.0/16), route to PartnerVPC (192.168.0.0/16)
ip route add 192.168.0.0/16 via 10.0.0.1 dev br-AppVPC
# From PartnerVPC, route back to AppVPC
ip route add 10.0.0.0/16 via 192.168.0.1 dev br-PartnerVPC
Now subnets in both VPCs can talk to each other via the peering connection.
I track everything in config/vpc.conf:
VPC:AppVPC:10.0.0.0/16:br-AppVPC:1762964516
SUBNET:AppVPC:web:10.0.1.0/24:public:AppVPC-web:1762964605
PEERING:AppVPC:PartnerVPC:vp-AppVPC:vp-PartnerVPC:1762939717
Format: TYPE:field1:field2:...:timestamp
When you run list-vpcs, the tool parses this file. When you delete resources, it removes the matching lines. I use grep -v for deletion and atomic writes (temp file + mv) to avoid corruption.
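A sketch of that deletion pattern (the helper name is illustrative):

remove_state_entries() {
    local pattern=$1
    # grep -v exits non-zero when nothing is left, so tolerate that
    grep -v "$pattern" config/vpc.conf > config/vpc.conf.tmp || true
    mv config/vpc.conf.tmp config/vpc.conf   # atomic rename
}

# drop every record that belongs to AppVPC
remove_state_entries ":AppVPC:"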
I defined security groups in JSON:
{
"policies": [
{
"subnet": "10.0.1.0/24",
"ingress": [
{ "port": 80, "protocol": "tcp", "source": "0.0.0.0/0", "action": "allow" },
{ "port": 443, "protocol": "tcp", "source": "0.0.0.0/0", "action": "allow" }
]
}
]
}
The apply-firewall command reads this with jq and generates iptables rules:
# Set default DROP policy
ip netns exec AppVPC-web iptables -P INPUT DROP
# Allow specific ports
ip netns exec AppVPC-web iptables -A INPUT -p tcp --dport 80 -j ACCEPT
ip netns exec AppVPC-web iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Allow established connections
ip netns exec AppVPC-web iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
This replicates AWS security groups.
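The jq-driven loop looks roughly like this. It is a sketch of the approach rather than the exact firewall.sh code, and it only prints what it would do:

policy_file=config/security-groups.json

jq -c '.policies[]' "$policy_file" | while read -r policy; do
    subnet=$(jq -r '.subnet' <<< "$policy")
    jq -c '.ingress[]' <<< "$policy" | while read -r rule; do
        port=$(jq -r '.port' <<< "$rule")
        proto=$(jq -r '.protocol' <<< "$rule")
        src=$(jq -r '.source' <<< "$rule")
        # firewall.sh would translate this into an iptables or nft rule
        echo "allow ${proto}/${port} from ${src} into ${subnet}"
    done
done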
Every function logs to both console and logs/vpcctl.log:
log_info() {
echo "[INFO] $1" | tee -a logs/vpcctl.log
}
log_error() {
echo "[ERROR] $1" >&2 | tee -a logs/vpcctl.log
}
I use set -euo pipefail at the top of each script so any error stops execution immediately. This prevents partial state.
Most operations require root. I added a wrapper in common.sh:
require_root() {
if [ "$EUID" -ne 0 ]; then
log_error "This command requires root privileges. Run with sudo."
exit 1
fi
}
Each command calls require_root at the start.
State is also recorded under stage4/.state/ so scripts can show what exists without parsing ip output every time.