

Company: CloudIngest

Location: Remote

Posted on: December 02

Job Title: Data Engineer (Databricks, SQL)


Rate Cap: $55/Hr. (on W2) / $60/Hr. (on C2C)

Job Summary:

We're looking for a skilled Data Engineer with strong expertise in Databricks and SQL to join our data analytics team. You will work as part of a cross-functional team to design, build, and optimize data pipelines, frameworks, and warehouses that support business-critical analytics and reporting. The role requires hands-on experience with SQL-based transformations, Databricks, and modern data engineering practices.

Experience:

  • 6+ years of relevant experience (or equivalent education) in ETL processing, data engineering, or a related field.
  • 4+ years of experience with SQL development (complex queries, performance tuning, stored procedures).
  • 3+ years of experience building data pipelines on Databricks/Spark (Python/Scala).
  • Exposure to cloud data platforms (Azure/AWS/GCP) is a plus.

Roles & Responsibilities:

  • Design, develop, and maintain data pipelines and ETL processes using Databricks and SQL.
  • Write optimized SQL queries for data extraction, transformation, and loading across large-scale datasets.
  • Monitor, validate, and optimize data movement, cleansing, normalization, and updating processes to ensure data quality, consistency, and reliability.
  • Collaborate with business and analytics teams to define data models, schemas, and frameworks within the data warehouse.
  • Document source-to-target mapping and transformation logic.
  • Build data frameworks and visualizations to support analytics and reporting.
  • Ensure compliance with data governance, security, and regulatory standards.
  • Communicate effectively with internal and external stakeholders to understand and deliver on data needs.
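The SQL-based extract-transform-load work described above can be sketched with a minimal, self-contained example. This uses Python's built-in sqlite3 as a stand-in for a warehouse engine; the table and column names are hypothetical illustrations, not CloudIngest's actual schema:

```python
import sqlite3

# In-memory database as a stand-in for a data warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a raw landing table with duplicate rows and inconsistent casing.
cur.execute("CREATE TABLE raw_orders (order_id INTEGER, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "east", 100.0), (1, "EAST", 100.0), (2, "West", 250.0)],
)

# Transform + Load: deduplicate and normalize casing into a clean table,
# the kind of cleansing/normalization step the responsibilities describe.
cur.execute("""
    CREATE TABLE clean_orders AS
    SELECT order_id, LOWER(region) AS region, MAX(amount) AS amount
    FROM raw_orders
    GROUP BY order_id, LOWER(region)
""")

rows = cur.execute(
    "SELECT order_id, region, amount FROM clean_orders ORDER BY order_id"
).fetchall()
print(rows)  # → [(1, 'east', 100.0), (2, 'west', 250.0)]
```

In practice the same pattern (dedup, normalization, aggregation in SQL) would run on Databricks via Spark SQL or Delta tables rather than SQLite, but the transformation logic reads the same.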

Qualifications & Experience:

  • Bachelor's degree in Computer Science, Data Engineering, or a related field.
  • 6+ years of hands-on experience in ETL/data engineering.
  • Proficiency in the Python programming language.
  • Strong SQL development experience (query optimization, indexing strategies, stored procedures).
  • 3+ years of experience in Spark.
  • 3+ years of Databricks experience with Python/Scala.
  • Experience with cloud platforms (Azure/AWS/GCP) preferred.
  • Databricks or cloud certifications (SQL, Databricks, Azure Data Engineer) are a strong plus.