Senior Data Engineer (Virtual, US)
JOB SUMMARY
Roles
Data Engineer
Skills & Technologies
Languages: Python, SQL, Java, JavaScript, Go
ML/AI: MLflow
Big Data: Spark, Kafka, Airflow, Databricks, Flink
Cloud/DevOps: AWS, Azure, GCP, Docker, Kubernetes, Terraform
Tools: Git, CI/CD
Job details
About Avaya
Avaya is an enterprise software leader that helps the world’s largest organizations and government agencies forge unbreakable connections.
The Avaya Infinity™ platform unifies fragmented customer experiences, connecting the channels, insights, technologies, and workflows that together create enduring customer and employee relationships.
We believe success is built through strong connections – with each other, with our work, and with our mission. At Avaya, you'll find a community that values your contributions and supports your growth every step of the way. Learn more at https://www.avaya.com
Overview
You’ll build and scale the real-time and batch data platform that powers a large enterprise contact center solution.
Our products demand ultra-low-latency decisioning for live interactions and cost-efficient big-data analytics for historical insights.
We’re primarily on Azure today and expanding to GCP and AWS. Data is the backbone for our AI features and product intelligence. Our primary charter is complex contact center analytics and operational intelligence for an AI-enabled enterprise contact center.
Our vision is a flexible AI-enabled data platform that unifies contact center KPIs, customer/business outcomes, and AI quality/performance, and pervasively applies AI to deliver advanced features that help users easily leverage rich contact center data alongside business data and AI performance monitoring to drive decisions end-to-end.
Job Description
Team Tech
Cloud: Azure (primary), expanding to GCP/AWS
Platform: Databricks, Spark (batch + streaming), Airflow, Apache Superset, Kafka
Data governance: Delta Lake, Unity Catalog, Delta Sharing
Infra delivery: Terraform, Docker/Kubernetes, CI/CD (GitHub Actions/Azure DevOps)
Interfaces: REST/gRPC; schemas with Avro/Protobuf
Processing alternatives: Apache Flink/Apache Beam where appropriate; custom processors/services in Go for specialized low-latency needs
App stack: React + TypeScript (front-end), Go (preferred) and Java (back-end)
Focus: Real-time streaming, lakehouse analytics, reliability, and cost efficiency
Experimentation metrics: MLflow for experiment tracking and AI quality/performance metrics
Tooling integration: MCP (Model Context Protocol) to expose/consume data tools for agents
What you’ll do
Design, build, and operate low-latency streaming pipelines (Kafka, Spark Structured Streaming) and robust batch ETL/ELT on Databricks Lakehouse.
Establish reliable orchestration and dependency management (Airflow), with strong SLAs and on-call readiness for business-critical data flows.
Model, optimize, and document curated datasets and interfaces that serve analytics, product features, and AI workloads.
Implement data quality checks, observability, and backfills; drive root-cause analysis and incident prevention.
Partner with application teams (Go/Java), analytics, and ML/AI to ship data products into production.
Build and maintain datasets and services that power RAG pipelines and agentic AI workflows (tool-use/function calling).
When Spark/Databricks isn’t optimal, design and operate custom processors/services in Go to meet strict latency or specialized transformation requirements.
Instrument prompt/response and token usage telemetry to support LLMOps evaluation and cost optimization; provide datasets for labeling and golden sets.
Improve performance and cost (storage/compute), review code, and raise engineering standards.
Security & Compliance
Design data solutions aligned to enterprise security, privacy, and compliance requirements (e.g., SOC 2, ISO 27001, GDPR/CCPA as applicable), partnering with Security/Legal.
Implement RBAC/ABAC and least-privilege access; manage service principals, secrets, and key rotation; enforce encryption in transit and at rest.
Govern sensitive data: classification, PII handling, masking/tokenization, retention/archival, lineage, and audit logging across pipelines and storage.
Build observability for data security and quality; support incident response, access reviews, and audit readiness.
Embed controls in CI/CD (policy checks, dependency vulnerability scanning) and ensure infrastructure-as-code adheres to guardrails.
Partner with security engineering on penetration tests, threat modeling, and red-team exercises; remediate findings and document controls.
Contribute to compliance audits (e.g., SOC 2/ISO 27001) with evidence collection and continuous control monitoring; support DPIAs/PIAs where required.
Qualifications
6+ years building production-grade data pipelines at scale (streaming and batch).
Deep proficiency in Python and SQL; strong Spark experience on Databricks (or similar).
Advanced SQL: window functions, CTEs, partitioning/Z-ordering, query planning and tuning in lakehouse environments.
Hands-on with Kafka (or equivalent) and an orchestrator (Airflow preferred).
Strong data modeling skills and performance tuning for low latency and high throughput.
Production mindset: SLAs, monitoring, alerting, CI/CD, and on-call participation.
Proficient using AI coding assistants (Cursor, Claude Code) as part of daily development.
Proficiency building data services/processors in Go (or willingness to ramp quickly); familiarity with alternative frameworks (e.g., Flink/Beam) is a plus.
Preferred qualifications
Experience in multi-cloud or cloud migration (Azure plus either GCP or AWS).
Exposure to building data for AI/RAG, LLM-powered features, and agentic AI patterns (tool-use/function calling, planning/execution, memory).
Familiarity with LLMOps telemetry (prompt/response logs, token budgets) and agent evaluation pipelines.
Background in high-scale product engineering (vs. internal IT-only projects).
Contact center or CRM data familiarity (nice-to-have, not required).
Bachelor’s or Master’s in CS/EE/Math or similar; strong academic background and/or top-tier companies.
The pay range for this opportunity is from $128,200 to $157,000, plus performance-related bonus and benefits.
This range represents the anticipated low and high end of the salary for this position.
This role is also eligible to receive an annual bonus that aligns with individual and company performance. Actual salaries will vary and are based on factors such as a candidate’s qualifications, skills, and competencies.
Experience
4 - 6 Years of Experience
Education
Bachelor’s degree or equivalent experience; advanced degree preferred
Preferred Certifications
Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future. Avaya is an Equal Opportunity employer and a U.S. Federal Contractor.
Our commitment to equality is a core value of Avaya. All qualified applicants and employees receive equal treatment without consideration for race, religion, sex, age, sexual orientation, gender identity, national origin, disability, status as a protected veteran or any other protected characteristic.
In general, positions at Avaya require the ability to communicate and use office technology effectively. Physical requirements may vary by assigned work location.
This job brief/description is subject to change. Nothing in this job description restricts Avaya’s right to alter the duties and responsibilities of this position at any time for any reason.