Data Engineer Sr
JOB SUMMARY
Role Overview:
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support data analytics, reporting, and business intelligence. This role ensures data is accessible, reliable, and optimized for performance across various systems.

Key Responsibilities:
- Design, develop, and maintain ETL/ELT pipelines for ingesting and transforming data from multiple sources.
- Build and optimize data models for analytics and reporting.
- Implement and manage data storage solutions (e.g., relational databases, data lakes, cloud storage).
- Ensure data quality, integrity, and security across all systems.
- Collaborate with data scientists, analysts, and business teams to understand requirements and deliver solutions.
- Monitor and improve data pipeline performance and troubleshoot issues.
- Stay current with emerging technologies and best practices in data engineering and cloud platforms.

Required Skills and Qualifications:
- Proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, SQL Server).
- Strong programming skills in Python, PL/SQL, Java, or Scala.
- Experience with big data technologies (e.g., Hadoop, Spark, Databricks) and cloud platforms (AWS, Azure, GCP).
- Hands-on experience with OpenShift or other container orchestration platforms (e.g., Kubernetes).
- Knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with workflow orchestration tools (e.g., Airflow, Luigi).
- Understanding of data governance, security, and compliance.

Preferred Qualifications:
- Experience with streaming data technologies (Kafka, Kinesis).
- Background in DevOps practices for data pipelines.
- Knowledge of machine learning workflows and their integration with data pipelines.
Valce Talent Solutions, United States