O3 Technology Solutions

Solutions Consultant - Data Engineering Generalist

Company: O3 Technology Solutions

Location: California, United States

Posted on: May 29

Job Title: Solutions Consultant - Data Engineering Generalist

Client Type: Product-Based Company

Location: California (Remote; must work in PST time zone)

Duration: 10 Months

Visa Status: Open to all visa types

Position Overview:

We are seeking two experienced Data Engineering Solutions Consultants to support a leading product-based client. These roles focus on delivering high-quality data engineering solutions using Databricks, Apache Spark, and related tools. The ideal candidates will have at least two years of data engineering experience, a solid consulting background, and strong hands-on exposure to Databricks environments.

Key Responsibilities:

  • Deliver hands-on implementation and consulting support for Databricks-based data engineering solutions.
  • Build, optimize, and deploy data pipelines using Apache Spark and Databricks on cloud platforms.
  • Work with large datasets and contribute to data processing, transformation, and performance tuning.
  • Collaborate with clients to understand requirements and deliver scalable solutions aligned with business goals.
  • Support development and enhancement of SQL-based data models and ETL pipelines across platforms like BigQuery, Synapse, or Redshift.
  • Use Python and other Databricks-supported languages (such as Scala) to develop data workflows and transformations.
  • Apply best practices in CI/CD, code versioning, and testing for pipeline deployments.
  • Participate in solution design discussions and contribute to data architecture and process improvements.
  • Work with cross-functional teams in an Agile, consultative environment, often engaging directly with stakeholders.

Required Qualifications:

  • 2+ years of Data Engineering experience.
  • 3+ years of consulting experience in a professional services or product environment.
  • Completed the Databricks Data Engineer Associate certification.
  • At least one project delivered with hands-on Databricks implementation experience.
  • Strong understanding of Apache Spark programming on Databricks.
  • Familiarity with SQL and common data warehouse tools (e.g., BigQuery, Synapse, Redshift).
  • Proficiency in Python and working knowledge of Scala or other Databricks-compatible languages.
  • Experience working in remote environments and across multiple time zones, especially PST.

Preferred Skills:

  • Completion of, or progress toward, the Databricks Data Engineering and Optimizing Apache Spark courses.
  • Experience with data lakes, Delta Lake, and multi-hop pipeline architectures.
  • Familiarity with cloud platforms (Azure, AWS, or GCP).
