
Senior Data Engineer

JOB SUMMARY

India
Posted on 2/12/2026

Skills & Technologies

Languages: Python, SQL
Big Data: Airflow
Databases: PostgreSQL, MySQL

Job details

About the Company

At Delta, we are reimagining and rebuilding the financial system. Join our team to make a positive impact on the future of finance.

🎯 Mission driven: re-imagine and rebuild the future of finance.
💡 The most innovative cryptocurrency derivatives exchange. With a daily traded volume of ~$3.5 billion and increasing, Delta is bigger than all the Indian crypto exchanges combined.
📈 We offer the widest range of derivative products and have been serving traders all over the globe since 2018, growing fast.
💪🏻 The founding team comprises IIT and ISB graduates. The business co-founders have previously worked with Citibank, UBS, and GIC; our tech co-founder is a serial entrepreneur who previously co-founded TinyOwl and Housing.com.
💰 Funded by top crypto funds (Sino Global Capital, CoinFund, Gumi Cryptos) and crypto projects (Aave and Kyber Network).

Role Summary:

Support our analytics team by owning the full ETL lifecycle, from master data to analytics-ready datasets.

You will build and maintain daily batch pipelines that process 1–10 million master-data rows per run (and scale up to tens or hundreds of millions of rows), all within sub-hourly SLAs. Extract from OLTP and time-series sources, apply SQL/stored-procedure logic or Python transformations, then load into partitioned, indexed analytics tables. Reads run exclusively on read-only replicas to guarantee zero impact on the production master DB.
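A minimal sketch of one such incremental batch run, assuming a PostgreSQL read replica, an updated_at watermark column, and a partitioned analytics.trades_daily target table (connection strings, table, and column names are illustrative, not the team's actual setup):

# Sketch: one incremental batch run from a read-only replica into a
# partitioned analytics table. All connection strings, table names, and
# column names (updated_at watermark, analytics.trades_daily) are assumptions.
import pandas as pd
from sqlalchemy import create_engine, text

REPLICA_URL = "postgresql+psycopg2://readonly@replica-host/master_db"      # read replica only
ANALYTICS_URL = "postgresql+psycopg2://etl@analytics-host/analytics_db"

def run_incremental_batch(last_watermark: str) -> str:
    replica = create_engine(REPLICA_URL)
    analytics = create_engine(ANALYTICS_URL)

    # Extract: only rows changed since the previous run (CDC-style watermark).
    df = pd.read_sql(
        text("SELECT * FROM trades WHERE updated_at > :wm"),
        replica,
        params={"wm": last_watermark},
    )
    if df.empty:
        return last_watermark

    # Transform: derive analytics-ready columns in pandas.
    df["trade_date"] = pd.to_datetime(df["executed_at"]).dt.date
    df["notional"] = df["price"] * df["quantity"]

    # Load: append into the partitioned, indexed analytics table.
    df.to_sql("trades_daily", analytics, schema="analytics",
              if_exists="append", index=False, method="multi", chunksize=10_000)

    # Return the new watermark for the next run.
    return pd.to_datetime(df["updated_at"]).max().isoformat()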

You’ll also implement monitoring, alerting, retries, and robust error handling to ensure near-real-time dashboard refreshes.
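As a hedged illustration of that orchestration and resilience layer, a nightly Airflow DAG with retries, a completion SLA, and a failure alert might look like the following; the DAG id, schedule, and alert hook are assumptions, not the actual pipeline:

# Sketch: nightly Airflow DAG with retries, an SLA, and a failure alert.
# dag_id, schedule, and the alerting hook are illustrative assumptions.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def alert_on_failure(context):
    # Stand-in for a real Slack/PagerDuty/Grafana alert integration.
    print(f"ETL task failed: {context['task_instance'].task_id}")

def run_incremental_batch():
    # Placeholder: would call the extract/transform/load routine sketched above.
    pass

with DAG(
    dag_id="nightly_master_data_etl",
    start_date=datetime(2026, 1, 1),
    schedule_interval="0 1 * * *",      # nightly run
    catchup=False,
    default_args={
        "retries": 3,
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": alert_on_failure,
        "sla": timedelta(hours=1),      # sub-hourly end-to-end target
    },
) as dag:
    etl = PythonOperator(
        task_id="run_incremental_batch",
        python_callable=run_incremental_batch,
    )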

Requirements

Required Skills & Experience:

4+ years in data engineering or analytics roles, building daily batch ETL pipelines at 1–10 M rows/run scale (and up to 100 M+).
Expert SQL skills, including stored procedures and query optimisation on PostgreSQL, MySQL, or similar RDBMS.
Proficient in Python for data transformation (pandas, NumPy, SQLAlchemy, psycopg2).
Hands-on with CDC/incremental load patterns and batch schedulers (Airflow, cron).
Deep understanding of replicas, partitioning, and indexing strategies.
Strong computer-science fundamentals and deep knowledge of database internals, including storage engines, indexing mechanisms, query execution plans, and optimisers for MySQL and time-series DBs such as TimescaleDB.
Experience setting up monitoring and alerting (Prometheus, Grafana, etc.).
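To illustrate the query-plan fluency this list asks for, a minimal, hypothetical check that a dashboard query prunes partitions and hits an index might look like this (query, schema, and connection details are assumptions):

# Sketch: inspect the execution plan of an analytics query to confirm
# partition pruning and index usage. Query and names are illustrative.
import psycopg2

QUERY = """
EXPLAIN (ANALYZE, BUFFERS)
SELECT symbol, SUM(notional)
FROM analytics.trades_daily
WHERE trade_date = CURRENT_DATE - 1
GROUP BY symbol;
"""

with psycopg2.connect("dbname=analytics_db user=etl host=analytics-host") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for (line,) in cur.fetchall():
            print(line)   # expect an index scan on a single partition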

Key Responsibilities:

1. Nightly Batch Jobs: Schedule and execute ETL runs.

2. In-Database Transformations: Write optimised SQL and stored procedures.

3. Python Orchestration: Develop Python scripts for more complex analytics transformations.

4. Data Loading & Modelling: Load cleansed data into partitioned, indexed analytics schemas designed for fast querying (see the schema sketch after this list).

5. Performance SLAs: Deliver end-to-end sub-hourly runtimes.

6. Monitoring & Resilience: Implement pipeline health checks, metrics, alerting, automatic retries, and robust error handling.

7. Stakeholder Collaboration: Work closely with analysts to validate data quality and ensure timely delivery of analytics-ready datasets.
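For responsibility 4, a minimal sketch of the kind of partitioned, indexed analytics schema described above, expressed as PostgreSQL DDL issued from Python; the schema, table, column names, and partition boundaries are illustrative assumptions, not the team's actual data model:

# Sketch: create a range-partitioned, indexed analytics table in Postgres.
# Schema, table, and column names are illustrative assumptions.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS analytics.trades_daily (
    trade_id     BIGINT       NOT NULL,
    trade_date   DATE         NOT NULL,
    symbol       TEXT         NOT NULL,
    notional     NUMERIC      NOT NULL,
    updated_at   TIMESTAMPTZ  NOT NULL
) PARTITION BY RANGE (trade_date);

CREATE TABLE IF NOT EXISTS analytics.trades_daily_2026_02
    PARTITION OF analytics.trades_daily
    FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');

-- A per-partition index keeps dashboard lookups by symbol and day fast.
CREATE INDEX IF NOT EXISTS idx_trades_daily_2026_02_symbol_date
    ON analytics.trades_daily_2026_02 (symbol, trade_date);
"""

with psycopg2.connect("dbname=analytics_db user=etl host=analytics-host") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()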