
Sr. Data Engineer

JOB SUMMARY

United States · Posted on 1/30/2026

Skills & Technologies

Languages: Python
Big Data: Kafka, Airflow
Cloud/DevOps: AWS, Docker, Terraform
Tools: Git, CI/CD

Job details

We are seeking a Senior Data Engineer (REMOTE) for a full-time, direct-hire role with one of our amazing partners on the East Coast.

You can live in the ET or CT time zones, but you must be available during their core working hours, Mon-Fri from 9:00am-12:00pm ET.

This Sr. Data Engineer will be key in developing and managing the data infrastructure, enhancing data engineering processes, and helping optimize and scale their data systems.

Our client is a technology company that focuses on customer acquisition within the insurance market, and they create personalized experiences for both their clients and customers.

They develop scalable products that enhance the shopping journey for consumers and drive customer growth for their clients.

Responsibilities:

- Work and collaborate closely with BI, Product Engineering, and cross-functional stakeholders to gather requirements, define data models, and deliver actionable data products.
- Architect and manage the OLAP data platform (Redshift and related components) and partner closely with Product Engineers on OLTP data DevOps to ensure smooth cross-system integrations and data flows.
- Design, build, and maintain scalable ETL/ELT pipelines across batch and real-time environments.
- Build and optimize reliable, observable, and maintainable pipelines using AWS Glue, Kafka/Kinesis, and Python.
- Own and evolve data models that support analytics, product usage tracking, and real-time decision-making.
- Develop and enforce best practices for automated testing, data validation, quality checks, and CI/CD workflows for data systems.
- Improve data reliability, governance, lineage, and observability across the stack.
- Mentor other engineers and help set strong engineering and data platform standards.

Requirements:

- 5+ years of experience as a Data Engineer working with large-scale data systems.
- Expert skills using Python for building and orchestrating data pipelines.
- Strong experience with AWS Redshift, including performance tuning, modeling, and workload management.
- Hands-on experience with AWS Glue, Kafka/Kinesis, and streaming data patterns.
- Proven experience with automated testing frameworks and CI/CD for data.
- Strong collaboration skills with both BI teams and Product/Feature teams, translating data needs into system designs.
- Experience with Data DevOps practices, including:
  - Managing OLAP data infrastructure
  - Coordinating OLTP data integration and operational workflows with product engineering
  - Using tools like Terraform, CloudFormation, GitHub Actions, and CodePipeline
- Experience with ClickHouse or other OLAP/analytical databases is a nice-to-have.
- Exposure to workflow orchestrators (Airflow, etc.) is a nice-to-have.
- Experience with Docker, ECS, EKS, or similar container platforms is a nice-to-have.
- Familiarity with data observability tooling is a bonus.
- Experience with LLMs and MCPs is a nice-to-have.

All qualified applicants will receive consideration for employment without regard to race, color, national origin, age, ancestry, religion, sex, sexual orientation, gender identity, gender expression, marital status, disability, medical condition, genetic information, pregnancy, or military or veteran status.

You must be legally authorized to work in the United States without current or future sponsorship.