Four tools, four different jobs
The "web server" decision is mostly a "front-of-stack" decision in 2026. The actual content server (Apache or Nginx serving files) matters less; the reverse proxy / load balancer / TLS terminator that sits at the edge matters more. Four mature options, each with a niche.

Nginx — the broad default
Strengths: battle-tested, broadly understood, fast, low memory, runs on everything. The config language is its own dialect but well-documented. Module ecosystem (rate limiting, OpenResty/Lua, caching) is rich.
Trade-offs: config quirks. Hot-reload works but isn't perfect. Dynamic upstream changes require workarounds (or a paid Plus version).
Use when: reverse proxy + static content + simple routing. Most websites and APIs land here. The default if you don't have a reason to pick differently.
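That "reverse proxy + static content" shape looks like this in practice — a minimal sketch, assuming a backend app on port 8080 and hypothetical cert paths and domain:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # Static assets served directly from disk...
    location /static/ {
        root /var/www/example;
    }

    # ...everything else proxied to the app server.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

A dozen lines covers most websites and APIs, which is exactly why Nginx stays the default.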
Caddy — the modern entrant
Strengths: HTTPS by default with automatic Let's Encrypt. Config in JSON or Caddyfile (much simpler than Nginx for typical cases). Single static binary, easy to deploy.
Trade-offs: smaller community. Some advanced features lag Nginx. Heavier memory footprint per connection (workable, not best-in-class).
Use when: simple sites, small services, you value automatic HTTPS without the operational overhead, you don't need bleeding-edge perf at scale.
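The "much simpler than Nginx" claim is easy to show. The equivalent of the Nginx block above is a few Caddyfile lines, with certificate issuance and renewal handled automatically (hypothetical domain and backend address):

```caddyfile
# Caddy obtains and renews the Let's Encrypt certificate for this site on its own.
example.com {
    root * /var/www/example
    file_server

    reverse_proxy /api/* 127.0.0.1:8080
}
```

No `ssl_certificate` directives, no certbot: the domain name in the site block is enough to trigger automatic HTTPS.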
HAProxy — the load balancer specialist
Strengths: depth in load balancing — health checks, sticky sessions, queue management, layer-4 + layer-7. Extremely fast under load. Mature ecosystem (Stats UI, Runtime API, Data Plane API).
Trade-offs: not a content server. Not designed for static file serving. Different operational model than Nginx (often runs alongside Nginx, not instead of).
Use when: you need real load balancing — multi-backend, smart routing, advanced health checking. Common at the front of a database cluster or API server pool.
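A sketch of what "real load balancing" means in HAProxy terms — active health checks plus cookie-based sticky sessions, assuming hypothetical backend addresses and a `/healthz` endpoint:

```haproxy
frontend fe_api
    bind :443 ssl crt /etc/haproxy/certs/example.pem
    default_backend be_api

backend be_api
    balance roundrobin
    option httpchk GET /healthz              # active health check per server
    cookie SRV insert indirect nocache       # sticky sessions via cookie
    server app1 10.0.0.11:8080 check cookie app1
    server app2 10.0.0.12:8080 check cookie app2
```

The `check` keyword is the point: unhealthy backends are pulled from rotation automatically, which is the feature Nginx open source only approximates.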
Envoy — the service-mesh-era frontend
Strengths: designed for cloud-native, dynamic-config-by-default. xDS API for runtime configuration. Strong observability built-in. Foundation for Istio, Consul Connect, AWS App Mesh.
Trade-offs: steeper learning curve. Operational complexity is higher than Nginx's, which matters for small teams. Not the best choice if you're not in a service-mesh world.
Use when: Kubernetes, dynamic backend topology, service mesh in play. Or as Edge / API gateway in modern microservices stacks.
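The "dynamic-config-by-default" point cuts both ways: even a static Envoy config is verbose compared to the others. A minimal static sketch (in real deployments the listeners and clusters usually arrive via xDS instead; `app.internal` is a hypothetical service name):

```yaml
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: app }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: app
    type: STRICT_DNS
    load_assignment:
      cluster_name: app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: app.internal, port_value: 8080 }
```

The payoff for the verbosity is that every one of those resources can be swapped at runtime over xDS without a restart — the thing Nginx fundamentally can't do.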
The decision matrix
- Single-server / small fleet, classical web app → Nginx
- Need HTTPS automation without paying attention → Caddy
- Multi-backend load balancing with health-aware routing → HAProxy
- Kubernetes / service mesh / dynamic discovery → Envoy
- API gateway role with traffic shaping → either Envoy or a dedicated gateway (Kong, Tyk)
TLS and HTTP/2 / HTTP/3
All four support TLS termination, HTTP/2, and increasingly HTTP/3 (QUIC). HTTP/3 is no longer experimental — adoption is real, and for high-latency / mobile-heavy clients it pays off. Default-on where the platform supports it.
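"Default-on" is usually a couple of lines. In Nginx (1.25+ with QUIC support built in) the sketch looks like this, assuming hypothetical cert paths:

```nginx
server {
    listen 443 quic reuseport;   # HTTP/3 over QUIC (UDP)
    listen 443 ssl;              # HTTP/2 and HTTP/1.1 fallback (TCP)
    http2 on;

    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # Advertise HTTP/3 to clients that connected over TCP.
    add_header Alt-Svc 'h3=":443"; ma=86400' always;
}
```

The one operational gotcha worth flagging: QUIC is UDP/443, so firewalls and cloud security groups that only allow TCP/443 will silently force clients back to HTTP/2.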
Caching at the edge
- Nginx — built-in proxy cache, well-understood
- Caddy — Souin caching plugin, less mature than Nginx's
- Varnish — the dedicated HTTP cache, purpose-built, faster than reverse-proxy caches
- CDN-as-cache — Cloudflare, Fastly, Bunny — when you want global edge caching with minimal ops
For most projects in 2026, a CDN in front + a thin reverse proxy on origin is the simplest cache architecture.
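The "thin reverse proxy on origin" half of that architecture is Nginx's built-in proxy cache from the list above. A minimal sketch, with a hypothetical cache zone name, sizes, and backend (`proxy_cache_path` belongs at the `http` level of the config):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=edge_cache:10m max_size=1g inactive=60m;

server {
    location / {
        proxy_cache edge_cache;
        proxy_cache_valid 200 301 10m;
        proxy_cache_use_stale error timeout updating;   # serve stale if the backend is struggling
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The `X-Cache-Status` header is the debugging tool: `HIT`, `MISS`, or `STALE` on every response tells you whether the cache is actually doing anything.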
One pattern we'd warn about
Hand-crafting Nginx config in production. Use config management (Ansible, Chef) or templated configs in a deployment system. The "ssh in and edit nginx.conf" workflow loses to drift.
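The templated alternative is unglamorous but short. An Ansible-flavored sketch (hypothetical template name and handler; the `validate` step is the important part — a config that fails `nginx -t` never reaches the live file):

```yaml
- name: Deploy nginx config from template
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    validate: nginx -t -c %s
  notify: Reload nginx
```

Every change flows through the template in version control, so "what is actually running on that box" always has an answer.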
One pattern that always pays off
Automated TLS certificate handling — Let's Encrypt + cert-manager (k8s) or certbot + cron (VMs). Manual cert renewal is the kind of operational debt that produces 3 AM outages.
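On the VM path, the whole fix can be one crontab line (certbot's packaged systemd timer usually does this for you already; the `--deploy-hook` reload only fires when a certificate was actually renewed):

```shell
# crontab entry: attempt renewal nightly, reload Nginx only on actual renewal
0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

One line of cron versus a 3 AM expired-cert page is about as lopsided as operational trade-offs get.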
What's your edge stack? And — for the HTTP/3 folks — has the move to QUIC been smooth or are there gotchas in production?