Over the last few weeks, while revisiting how secrets are handled in Kubernetes platforms, one pattern stood out clearly: secret rotation is often treated as solved once the infrastructure rotates the value. In practice, rotation stops at the infrastructure boundary: rotated secrets reach the cluster, but running applications never adopt them on their own.

This distinction sounds subtle, but it has real operational consequences.

The fundamentals are usually not the issue

Most mature platforms already get the basics right:

  • Secrets are externalized into a managed system (for example, AWS Secrets Manager)
  • Workloads authenticate via identity (IRSA), not static credentials
  • Kubernetes acts as a delivery plane
  • GitOps governs declared intent and change history

These choices answer an important question: who may access secrets.
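
On EKS, for instance, that answer is expressed as identity: a ServiceAccount annotated with an IAM role, so pods authenticate via IRSA instead of static keys. A minimal sketch (all names and the role ARN are illustrative):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: payments-api               # illustrative workload identity
      namespace: payments
      annotations:
        # IRSA: pods using this ServiceAccount assume the IAM role below;
        # the role's policy, not the cluster, decides which secrets it may read.
        eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/payments-secrets-reader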

What they do not define is how a running application adopts a changed value at runtime.

That gap is where confusion starts.

Where the boundary actually is

Tools such as External Secrets Operator are best understood as synchronization mechanisms. They ensure that rotated secrets are fetched from the source of truth and made available inside the cluster as native Kubernetes Secrets.
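
A minimal ExternalSecret makes that synchronization role visible (store name, secret names, and keys below are illustrative):

    apiVersion: external-secrets.io/v1beta1
    kind: ExternalSecret
    metadata:
      name: db-credentials
      namespace: payments
    spec:
      refreshInterval: 1h              # re-fetch cadence from the source of truth
      secretStoreRef:
        kind: ClusterSecretStore
        name: aws-secrets-manager      # illustrative store name
      target:
        name: db-credentials           # Kubernetes Secret kept in sync in-cluster
      data:
        - secretKey: password
          remoteRef:
            key: prod/payments/db      # illustrative Secrets Manager entry
            property: password

Note the scope: this keeps the in-cluster Secret current after rotation; nothing here restarts the pods that consume it.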

Most applications, Spring Boot included, resolve configuration once at startup. Even when secrets are delivered as files via mechanisms like Spring Boot's config tree import, those values are not re-bound dynamically unless the deployment lifecycle explicitly enforces it.
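
For reference, the config tree import in Spring Boot (2.4+) looks like this; the mount path is illustrative:

    # application.yml (Spring Boot 2.4+)
    spring:
      config:
        import: "configtree:/mnt/secrets/"
    # Each file under /mnt/secrets becomes a property, e.g.
    # /mnt/secrets/db/password -> db.password, resolved once at startup.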

As a result, adopting rotated values for most critical secrets (database credentials, API keys, signing keys) still requires a pod restart.

Configuration delivery is not configuration adoption

Using file-based configuration (for example, Spring Boot's config tree mechanism) aligns well with container platforms; a mount sketch follows the list:

  • Secrets are delivered as files
  • Updates are atomic
  • Values are not exposed via environment variables or other process metadata
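
The delivery side is a plain Secret volume mount; names are illustrative, and the Secret is the one External Secrets Operator maintains above:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: payments-api
      namespace: payments
    spec:
      selector:
        matchLabels:
          app: payments-api
      template:
        metadata:
          labels:
            app: payments-api
        spec:
          serviceAccountName: payments-api   # the IRSA identity from earlier
          containers:
            - name: app
              image: registry.example.com/payments-api:1.4.2   # illustrative
              volumeMounts:
                - name: secrets
                  mountPath: /mnt/secrets    # read via the configtree import
                  readOnly: true
          volumes:
            - name: secrets
              secret:
                secretName: db-credentials   # kept in sync by External Secrets Operator

The kubelet swaps the projected files atomically when the Secret changes, which is what makes updates atomic above; Spring still reads them only once.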

This is a solid baseline for production systems. But it does not change the underlying constraint:

Secret rotation only becomes meaningful when paired with a controlled restart or redeployment strategy.

Without that, rotation exists only at the infrastructure layer.
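
An imperative kubectl rollout restart is the minimal version of that pairing, but it leaves no trace in Git. A versioned alternative is a pod-template annotation that is bumped whenever a rotation should be adopted (the annotation key is illustrative):

    # Excerpt from the Deployment above: bumping this value changes the
    # pod template, which triggers a rolling restart on the next sync.
    spec:
      template:
        metadata:
          annotations:
            secrets.example.com/rotation-revision: "2026-01-15"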

GitOps makes the boundary explicit by design

GitOps reinforces this separation.

Controllers reconcile declared intent, not runtime drift. They do not observe secret value changes and restart workloads implicitly, and that is a strength, not a limitation.

Different teams operationalize these restarts using tools like Argo CD, Flux CD, or workflow-driven CD platforms, but the specific tool is secondary. What matters is that restart semantics are intentional, versioned, and auditable.
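
Assuming Argo CD as the controller, the contract looks like this: the annotation bump above lands in Git, the Application below reconciles it, and the rollout is recorded as a commit (repo details are illustrative):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: payments-api
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/platform/deploy.git   # illustrative repo
        targetRevision: main
        path: payments-api
      destination:
        server: https://kubernetes.default.svc
        namespace: payments
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
      # Reconciles declared intent only; it will never restart pods
      # just because a Secret's value changed in the cluster.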

At scale, this boundary stops being an implementation detail and becomes a platform contract.

What holds up over time

The platforms that remain predictable under change tend to draw clear responsibility boundaries:

  • IAM / IRSA defines who may access secrets
  • Secret systems own storage, rotation, and audit
  • Kubernetes handles delivery
  • Applications consume secrets as configuration
  • Deployments define when rotation is adopted
  • Git captures intent and accountability

Final thought

Revisiting this area was a useful reminder that many production risks don’t come from missing tools, but from unclear ownership boundaries.

Good secret management isn’t about hiding values or adding abstraction layers. It’s about identity-first access, explicit adoption boundaries, and predictable behavior under change.

Curious how others have formalized this boundary in their platforms, especially where automatic rotation meets operational predictability.