Linux Networking Mastery Series Part 10: Container and Virtualization Networking


Welcome to the final technical installment of Linux Networking Mastery!
By now you have a comprehensive toolkit:

  • Part 1 – network stack basics and inspection tools
  • Part 2 – interface and IP configuration (temporary + persistent via Netplan, nmcli, systemd-networkd)
  • Part 3 – routing tables, static/policy routing, namespaces, simple router setup
  • Part 4 – name resolution, systemd-resolved, per-link/global DNS, troubleshooting
  • Part 5 – firewalls with nftables, firewalld, ufw, stateful rules
  • Part 6 – services (hardened SSH, Nginx basics, NFS/Samba shares, DHCP with dnsmasq)
  • Part 7 – monitoring (ss, tcpdump, iperf3, iftop), troubleshooting workflows
  • Part 8 – bonding, VLANs, bridges, WireGuard
  • Part 9 – wireless client & AP (nmcli, hostapd)

In Part 10 we tie everything together by exploring how containers and virtual machines handle networking — one of the most common real-world applications of the concepts we’ve covered.

We’ll look at:

  • Docker and Podman networking modes
  • Bridge, host, macvlan, ipvlan, overlay networks
  • libvirt / QEMU bridge networking
  • Basic Kubernetes networking concepts
  • A capstone hands-on project combining multiple techniques

1. Docker Networking Basics

Docker (still widely used in 2026) creates its own bridge network by default.

Common modes:

# Default bridge network (NAT + port publishing)
docker run -d -p 8080:80 nginx

# Host network (shares host’s network namespace)
docker run --network host nginx

# No network (completely isolated)
docker run --network none busybox

# Custom user-defined bridge
docker network create mybridge
docker run --network mybridge nginx

Inspect:

docker network ls
docker network inspect bridge

Under the hood: Docker uses a Linux bridge (docker0), iptables/nftables NAT rules, and veth pairs connecting each container to the bridge.
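You can inspect this plumbing yourself. A sketch of the usual commands (interface names vary per host, `<container>` is a placeholder, and the NAT rules may live in iptables or nftables depending on the distribution):

```shell
# The default bridge and the veth interfaces attached to it
ip link show docker0
bridge link show

# A running container's IP address on its bridge network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>

# The NAT rules Docker installed for published ports
sudo iptables -t nat -L DOCKER -n
# or, on nftables-native hosts:
sudo nft list ruleset | grep -A5 DOCKER
```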

Important 2026 note: Many distributions and new deployments prefer Podman (daemonless, rootless by default).

2. Podman Networking (Rootless & Modern Preference)

Modern Podman (4.0 and later) uses its own Netavark network backend with aardvark-dns for container name resolution; older releases used the same CNI (Container Network Interface) plugins as Kubernetes.

Default (bridge-like) behavior:

podman run -d -p 8080:80 docker.io/library/nginx

Rootless networking uses a user-mode driver (pasta on recent releases, slirp4netns on older ones):

podman network ls
podman network inspect podman
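To confirm which backend and rootless driver your installation actually uses (field names below are from `podman info`; output varies by version):

```shell
# netavark (modern) or cni (legacy)
podman info --format '{{.Host.NetworkBackend}}'

# Path of the slirp4netns binary, if that driver is in use
podman info --format '{{.Host.Slirp4NetNS.Executable}}'
```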

Create custom network:

podman network create mynet
podman run --network mynet -d nginx

Advanced modes (same as Docker):

podman run --network host ...
podman run --network mymacvlan ...   # requires a macvlan network created first (see section 3)

3. Specialized Network Drivers

macvlan
Container gets its own MAC address and appears directly on the parent network (no NAT).

ip link add macvlan0 link enp0s3 type macvlan mode bridge
podman network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=enp0s3 mymacvlan
podman run --network mymacvlan --ip 192.168.1.200 nginx
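One macvlan caveat worth knowing: the host cannot reach its own macvlan containers through the parent interface. A common workaround is a small macvlan "shim" interface on the host (a sketch reusing the 192.168.1.0/24 example above; pick a free IP for the shim):

```shell
# Host-side macvlan interface on the same parent NIC
ip link add macvlan-shim link enp0s3 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up

# Route the container's address via the shim instead of enp0s3
ip route add 192.168.1.200/32 dev macvlan-shim
```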

ipvlan
Similar to macvlan, but all containers share the parent interface's MAC address. It supports L2 and L3 modes; useful when the upstream switch or Wi-Fi link limits the number of MAC addresses per port.
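Mirroring the macvlan example above, a sketch of the equivalent ipvlan setup (enp0s3 and the subnet are placeholders; the Podman variant assumes the Netavark backend):

```shell
# Low-level ipvlan interface in L3 mode
ip link add ipvlan0 link enp0s3 type ipvlan mode l3

# Or as a Podman network
podman network create -d ipvlan --subnet 192.168.1.0/24 \
  -o parent=enp0s3 -o mode=l3 myipvlan
```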

overlay (multi-host)
Used for multi-host networking with Docker Swarm or Kubernetes (typically VXLAN-based; requires Swarm's built-in control plane, an external key-value store, or a Kubernetes CNI plugin).

4. libvirt / QEMU Virtual Machine Networking

libvirt (used by virt-manager, GNOME Boxes, etc.) supports:

  • NAT (default) — similar to Docker bridge
  • bridge — connect VM directly to physical network (recommended for servers)

Create persistent bridge (from Part 8):

# Already created br0 with IP on it
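If br0 doesn't exist yet, here is a sketch using nmcli in the style of Part 8 (enp0s3 stands in for your physical NIC; adjust connection names to taste):

```shell
# Bridge connection; the bridge itself gets the IP (DHCP by default)
nmcli con add type bridge con-name br0 ifname br0

# Enslave the physical NIC to the bridge
nmcli con add type ethernet con-name br0-port ifname enp0s3 master br0

# Activate both
nmcli con up br0
nmcli con up br0-port
```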

Attach VM to bridge (edit XML or use virt-manager):

<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>

VM gets IP from the same subnet as the host bridge interface (via DHCP or static).
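Instead of editing XML by hand, virsh can attach the interface directly (`myvm` is a placeholder domain name; `--config` persists the change, `--live` applies it to the running VM):

```shell
virsh attach-interface --domain myvm --type bridge \
  --source br0 --model virtio --config --live

# Verify the VM's interfaces
virsh domiflist myvm
```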

5. Kubernetes Networking Concepts (High-Level)

Kubernetes uses CNI plugins (Calico, Flannel, Cilium, Weave, etc.).

Core abstractions (2026 perspective):

  • Pod → gets its own IP (usually from a large overlay or underlay subnet)
  • ClusterIP Service → internal VIP, kube-proxy NAT/rules
  • NodePort → exposes the service on every node’s IP at a high port (30000–32767 by default)
  • LoadBalancer → integrates with cloud LB or MetalLB
  • Ingress → HTTP/HTTPS routing (nginx-ingress, Traefik, etc.)

Most users in 2026 run lightweight distributions (k3s, MicroK8s, kind) where networking is pre-configured.
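Even on a pre-configured distribution, it's worth seeing these abstractions for yourself (commands assume a working kubectl context):

```shell
# Pod IPs and the nodes they landed on
kubectl get pods -o wide

# Service types and their ClusterIP / NodePort / LoadBalancer addresses
kubectl get svc -A

# Spot the CNI plugin, usually a DaemonSet in kube-system
kubectl get pods -n kube-system -o wide | grep -Ei 'calico|flannel|cilium|weave'
```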

Capstone Project: Multi-Container Routed Application

Goal: Deploy a small web app with backend, database, and reverse proxy — using custom networking, routing, firewall, DNS, and monitoring.

Scenario

  • Container A: Nginx reverse proxy (port 80 → backend)
  • Container B: Simple Python/Flask API
  • Container C: PostgreSQL
  • All on custom bridge network
  • Expose only proxy to host/external
  • Optional: macvlan for direct backend access, or WireGuard tunnel to another host

Steps outline (detailed commands in your lab VM):

  1. Create custom bridge network
    podman network create app-net

  2. Run Postgres
    podman run -d --name db --network app-net -e POSTGRES_PASSWORD=secret postgres:16

  3. Run backend API (example image or your own)
    podman run -d --name api --network app-net my-flask-api

  4. Run Nginx proxy
    podman run -d --name proxy -p 8080:80 --network app-net -v "$PWD/nginx.conf":/etc/nginx/conf.d/default.conf:ro,Z nginx
    (bind mounts need an absolute path; :Z relabels the file on SELinux hosts)

  5. Sample nginx.conf (proxy_pass http://api:5000;)
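For step 5, a minimal default.conf might look like this (the name api resolves via the network's built-in DNS on a user-defined network like app-net; port 5000 is Flask's default):

```
server {
    listen 80;

    location / {
        # "api" is the backend container's name on app-net
        proxy_pass http://api:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```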

  6. Firewall: allow only 8080/tcp inbound (nftables/firewalld/ufw)

  7. Test & monitor:
    ss -tnlp
    podman logs proxy
    tcpdump -i any port 8080
    curl http://localhost:8080

Extensions

  • Add healthchecks & restarts
  • Use macvlan for API to get real LAN IP
  • Add WireGuard tunnel → access from remote host
  • Move to Kubernetes (kind cluster) and use Services/Ingress

Hands-On Exercises

  1. Compare default bridge vs host vs macvlan performance with iperf3 between containers/host.
  2. Set up a Podman pod (shared namespace) with sidecar pattern.
  3. Create libvirt VM attached to custom bridge → ping between VM and container.
  4. Build the capstone project — intentionally break connectivity, then debug using tools from Part 7.
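For exercise 1, one possible starting point (networkstatic/iperf3 is a public image whose entrypoint is iperf3; any image with iperf3 installed works, and `<host-ip>` is your host's LAN address):

```shell
# Server container on the custom bridge
podman run -d --name iperf-server --network app-net docker.io/networkstatic/iperf3 -s

# Client measuring throughput to the server by name
podman run --rm --network app-net docker.io/networkstatic/iperf3 -c iperf-server

# Compare against host networking
podman run --rm --network host docker.io/networkstatic/iperf3 -c <host-ip>
```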

Congratulations! You now have production-grade Linux networking knowledge.

