Kubernetes attributes promoted to release candidate in OTel Semantic Conventions
The OpenTelemetry Semantic Conventions working group has promoted Kubernetes attributes to release candidate status, which matters more than it might sound. If you're running the k8sattributes or resourcedetection processors in your collector pipeline, this stabilization directly affects how your telemetry gets enriched with pod, namespace, and cluster metadata.
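For context, a minimal pipeline wiring both processors might look like the sketch below. This is illustrative, not a drop-in config: the receivers and exporters are placeholders, and the pod association setup should be checked against your environment.

```yaml
processors:
  k8sattributes:
    # Associate incoming telemetry with pods. Matching on a resource
    # attribute like k8s.pod.ip is a common setup, but verify it fits
    # how your workloads report their identity.
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
  resourcedetection:
    detectors: [env, system]
  batch: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, resourcedetection, batch]
      exporters: [otlp]
```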
The practical impact is straightforward: these attributes are now locked in enough that you can start testing them in production without expecting major schema changes before the stable release. The feature gate mechanism lets you opt into the RC schema while keeping your existing setup as a fallback. This is particularly relevant if you've been dealing with inconsistent attribute naming across different collector versions or if you've built custom processors that depend on specific k8s metadata fields.
What actually changed? The SIG focused on standardizing the attribute names and cardinality expectations for common Kubernetes resource types. Things like k8s.pod.name, k8s.namespace.name, and k8s.deployment.name now have defined semantics that won't shift underneath you. The k8sattributes processor, which queries the Kubernetes API to enrich spans and metrics with cluster context, will use these conventions consistently. Same goes for resourcedetection, which pulls similar metadata but at the resource level rather than per-span.
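In practice that means you can pin your k8sattributes config to the RC attribute names. A sketch, assuming the processor's standard `extract.metadata` syntax:

```yaml
processors:
  k8sattributes:
    extract:
      metadata:
        # Attribute names covered by the RC conventions
        - k8s.pod.name
        - k8s.namespace.name
        - k8s.deployment.name
        - k8s.node.name
```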
The cardinality question is worth attention here. Kubernetes metadata can explode your metric and trace cardinality if you're not careful about which attributes you're actually attaching. Pod UIDs, for instance, are useful for correlation but deadly if you're creating metric series from them. The RC conventions include guidance on which attributes are safe for aggregation and which should be treated as high-cardinality identifiers. If you're currently populating the extract.metadata list in k8sattributes with a blanket "grab everything" config, now's a good time to audit what you're actually using downstream.
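One way to act on that audit is to keep high-cardinality identifiers out of metric pipelines while retaining them for traces. The sketch below uses the resource processor to strip k8s.pod.uid on the metrics path only; the pipeline wiring and receiver/exporter names are illustrative.

```yaml
processors:
  resource/drop_pod_uid:
    attributes:
      # k8s.pod.uid is handy for trace correlation but creates a new
      # metric series per pod restart, so drop it before aggregation.
      - key: k8s.pod.uid
        action: delete

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [k8sattributes, resource/drop_pod_uid]
      exporters: [otlp]
```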
Testing the RC schema is straightforward. You enable it per collector instance via the --feature-gates command-line flag, which means you can run old and new schemas side by side across a fleet. This matters for teams running multiple collector instances or doing gradual rollouts. You can validate that your downstream systems, whether that's Prometheus, Jaeger, or a commercial observability backend, handle the new attribute names correctly before committing.
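The invocation looks roughly like this. The --feature-gates flag itself is the collector's standard mechanism, but the gate identifier below is a placeholder; look up the actual name in the release notes or changelog of your collector version.

```shell
# --feature-gates is the standard collector flag; the gate name here is
# hypothetical -- check your collector version's changelog for the real one.
otelcol --config=collector.yaml \
  --feature-gates=semconv.k8s.rcSchema
```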
The stabilization also reduces future breaking changes, which is the real win. If you've been in the OpenTelemetry ecosystem for a while, you've probably dealt with attribute renames or restructured conventions that required updating dashboards, alerts, and queries. Getting Kubernetes attributes to RC means the community has converged on a schema that works across different deployment patterns, from simple single-cluster setups to multi-tenant platforms with complex namespace isolation.
What's next is providing feedback if you spot gaps. The RC period exists specifically to catch issues before the stable release locks everything down. If your setup uses custom Kubernetes resources or you're enriching telemetry with non-standard labels that don't map cleanly to the conventions, now's the time to raise it with the SIG. The stable release will be harder to change once it ships.
For most platform teams, the action item is simple: enable the RC schema in a non-critical environment, validate your existing queries and dashboards still work, and report any issues. The sooner this stabilizes, the sooner we all stop dealing with semantic convention drift across collector versions.