
Docker, Podman, and Kubernetes: when to step up the abstraction, and when to stop


Aior · Administrator · Staff member


The container abstraction ladder

The "should we use Kubernetes" question gets asked in absolutes. In practice, it's a ladder, and most teams are happiest one or two rungs below where the architecture diagrams suggest. Below are the actual rungs and the questions that pick between them.

Rung 1: Docker Compose on a single host

What it is: one machine, Docker (or Podman) running compose stacks. Often a single docker-compose.yml file per service.

Where it fits: small to medium internal applications, dev environments, single-node production where downtime is acceptable, the early years of any company.

When to leave it: when you have more than one node and the operational pain of "deploy to each node" exceeds the cost of stepping up.

We have customers who have been running production single-node Docker Compose stacks for years, perfectly happily. The "you must use Kubernetes" pressure is usually wrong.
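
For concreteness, the whole rung usually fits in one file. A minimal sketch of such a stack; the web and worker images, the ports, and the Postgres password handling are placeholders, not recommendations:

```yaml
# Hypothetical single-host stack: web app, background worker, database.
# Brought up with `docker compose up -d` (or podman-compose) on one machine.
services:
  web:
    image: registry.example.com/team/web:1.4.2     # placeholder image
    ports:
      - "80:8080"
    env_file: .env
    depends_on:
      - db
    restart: unless-stopped
  worker:
    image: registry.example.com/team/worker:1.4.2
    env_file: .env
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me                 # use a secret or env file in practice
    volumes:
      - db-data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  db-data:
```

Backups, monitoring, and a deploy script that amounts to git pull plus docker compose up -d are the whole platform at this rung, which is exactly the appeal.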

Rung 2: Docker Swarm or Nomad

What it is: simple multi-node orchestration. Swarm is "Docker Compose, but across multiple nodes"; HashiCorp's Nomad is similar, with more polish.

Where it fits: 3-10 nodes, simple service patterns (web app + database + worker), teams that don't want Kubernetes complexity but need multi-node deployment.

When to leave it: when the ecosystem stops keeping pace. Swarm is alive but not actively expanding. Nomad has a smaller community than k8s but is well-maintained.

This rung is the one most teams skip. They jump from compose to k8s, accumulating complexity they don't need.
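
To make the "compose, but across multiple nodes" claim concrete: the same compose-format file grows a deploy: section, and Swarm schedules it across the cluster. A hedged sketch with made-up image names and placement rules:

```yaml
# Hypothetical stack file, deployed with:
#   docker swarm init              # once, on the first manager
#   docker stack deploy -c stack.yml app
services:
  web:
    image: registry.example.com/team/web:1.4.2
    ports:
      - "80:8080"                  # published through Swarm's routing mesh
    deploy:
      replicas: 3                  # spread tasks across the cluster
      update_config:
        order: start-first         # start the new task before stopping the old one
      placement:
        constraints:
          - node.role == worker    # keep app tasks off the managers
```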

Rung 3: managed Kubernetes (EKS, AKS, GKE, k3s)

What it is: full Kubernetes with a managed control plane (so you don't run etcd yourself) and worker nodes you can scale. k3s sits on the same rung: you host it yourself, but it's packaged to need far less care than a full self-managed cluster.

Where it fits: dozens to thousands of services, complex networking (service mesh, intelligent load balancing), polyglot stacks, multi-tenancy, dev/stage/prod parity, large team.

When to leave it: you don't, usually. The next step is multi-cluster federation, which is rarely worth it.

Rung 4: self-hosted Kubernetes

What it is: full k8s on your own hardware or VMs. You run the control plane. You handle upgrades, certs, etcd backups.

Where it fits: regulatory or cost reasons that exclude managed offerings, large enough team to justify dedicated platform engineers, on-prem requirement.

When to leave it: when you realise the platform team's time is more expensive than the managed cluster's price. For most companies, that's most of the time.

The Docker vs Podman conversation

  • Docker — the default. Mature, ubiquitous, well-supported.
  • Podman — daemonless, rootless-by-default, drop-in compatible CLI. Strong choice on RHEL-family systems where Docker isn't supported.
  • containerd / nerdctl — containerd is what actually runs containers underneath Docker (and on most Kubernetes nodes); nerdctl is its Docker-compatible CLI. Direct use is rare outside Kubernetes-internal contexts.

For development, either is fine. For production on Kubernetes, the runtime is containerd or CRI-O regardless of what built the image.

Image discipline that pays off

  • Multi-stage builds — build artefact in one stage, copy into a minimal runtime image (see the Dockerfile sketch after this list)
  • Distroless or alpine base images — smaller, fewer CVEs
  • Pin base image digest, not just tag — :latest is not reproducible, and neither is :v1.20
  • Build with BuildKit — caching is meaningfully better
  • SBOM (software bill of materials) generated at build time — increasingly required by enterprise customers
  • Vulnerability scanning at build time + at registry pull time
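
The Dockerfile sketch referenced above, covering the first three bullets; the Go toolchain, the paths, and the pinned digest are placeholders rather than real values:

```dockerfile
# syntax=docker/dockerfile:1
# Build stage: full toolchain, never shipped in the final image.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Runtime stage: distroless, pinned by digest. The digest below is a placeholder;
# resolve the real one, e.g. with `docker buildx imagetools inspect`.
FROM gcr.io/distroless/static-debian12@sha256:000000000000000000000000000000000000000000000000000000000000dead
COPY --from=build /out/app /app
USER nonroot
ENTRYPOINT ["/app"]
```

BuildKit is the default builder in current Docker releases, so the caching point mostly comes for free; SBOM generation and scanning then hang off the same CI job rather than the Dockerfile itself.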

The thing nobody plans

Image storage cost. A team without image lifecycle policies accumulates GB of historical builds in their registry. Multiply by the number of builds, the number of services, and the number of environments, and the registry bill is no longer trivial.

Image retention policies — keep last N versions per service, plus all production-tagged versions, garbage-collect the rest weekly.
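
One way to implement that as a weekly job, sketched with crane from go-containerregistry; the repo name and the prod- tag convention are made up, it assumes tags sort oldest-to-newest under sort -V, and it assumes the registry allows deleting by tag (several only accept digests):

```sh
#!/usr/bin/env sh
# Hypothetical weekly cleanup: keep the 10 newest tags per repo plus anything
# tagged prod-*, delete the rest. head -n -N needs GNU coreutils.
REPO="registry.example.com/team/api"
KEEP=10

crane ls "$REPO" \
  | grep -v '^prod-' \
  | sort -V \
  | head -n "-$KEEP" \
  | while read -r tag; do
      echo "deleting $REPO:$tag"
      crane delete "$REPO:$tag"
    done
```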

Operating Kubernetes — the parts that hurt

  • Upgrades — cluster + nodes + workloads, all on different cadences. Plan a quarterly cycle, not a one-time effort.
  • Storage — stateful workloads on Kubernetes are a real engineering investment. Operators help; they don't make it trivial.
  • Networking — CNI choice (Calico, Cilium, others) is consequential. Service mesh (Istio, Linkerd) is its own engineering project.
  • Secrets — base k8s secrets are not encrypted at rest by default. SOPS, sealed-secrets, or external KMS integration is required, not optional (a sketch of the at-rest encryption config follows this list).
  • RBAC — easy to misconfigure into "everyone is an admin". Audit periodically.
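
On the secrets bullet: for self-hosted control planes, encryption at rest is an API-server setting rather than an add-on. A minimal sketch of the upstream EncryptionConfiguration format, with an obvious placeholder key (managed providers typically expose a KMS/envelope-encryption option instead):

```yaml
# Referenced by the API server's --encryption-provider-config flag.
# The key is a placeholder: generate one with `head -c 32 /dev/urandom | base64`
# and treat this file itself as sensitive.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: REPLACE_WITH_32_BYTE_BASE64_KEY
      - identity: {}    # fallback so existing, unencrypted secrets stay readable
```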

One pattern we'd warn about

Adopting service mesh in the first year of Kubernetes adoption. The complexity multiplier is real. Get application reliability working on plain k8s services first; add mesh when you have a specific problem only mesh solves.

One pattern that pays off

Standard Helm charts for the company's services. One chart, all services use it, one place to fix patterns. The team that maintains 50 hand-crafted manifests is the team that drowns in maintenance.
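
Concretely, that usually means each service keeps only a small values file and the shared chart owns the Deployment/Service/Ingress templates. A sketch with a hypothetical chart repo (company-charts/web-service) and service name:

```yaml
# payments-api/values.yaml, consumed by the shared chart:
#   helm upgrade --install payments-api company-charts/web-service -f values.yaml
image:
  repository: registry.example.com/team/payments-api
  tag: "1.8.3"
replicaCount: 3
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    memory: 256Mi
ingress:
  enabled: true
  host: payments.internal.example.com
```

The payoff shows up the first time a default has to change everywhere, say a pod securityContext: one chart release instead of fifty manifest edits.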

What's your container stack? And — for the on-prem folks — has anyone fully replaced VM-based deployments with Kubernetes in production in industrial settings?
 
