Key Responsibilities:
Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry (metrics, logs, traces) at scale.
Stream OpenShift telemetry via Kafka (producers, topics, schemas) and build resilient consumer services for transformation and enrichment.
Engineer data models and routing for multi-tenant observability; ensure lineage, quality, and SLAs across the stream layer.
Integrate processed telemetry into Splunk for visualization, dashboards, alerting, and analytics to achieve Observability Level 4 (proactive insights).
Implement schema management (Avro/Protobuf), governance, and versioning for telemetry events.
Build automated validation, replay, and backfill mechanisms for data reliability and recovery.
Instrument services with OpenTelemetry; standardize tracing, metrics, and structured logging across platforms.
Use LLMs to enhance observability capabilities (e.g., query assistance, anomaly summarization, runbook generation).
Collaborate with platform, SRE, and application teams to integrate telemetry, alerts, and SLOs.
Ensure security, compliance, and best practices for data pipelines and observability platforms.
Document data flows, schemas, dashboards, and operational runbooks.
For over ten years, we have been matching companies with the candidates best suited to their needs. As a recruitment company, we take the time to understand each candidate's capabilities and personality, and we use that knowledge to place them in the right position.