Job Summary
Senior Data Engineer
Teema
Contract
Hybrid | Beaverton, OR, United States
Founded in 2020 in Portland, Oregon, my client has grown to over 75 employees and serves more than 100 customers. They are proud to be ranked #26 on Deloitte's 2024 Technology Fast 500 list, recognizing the fastest-growing tech companies across North America, driven by exceptional revenue growth over the past three years.
Data Engineer (Healthcare Domain)
Join a company at the forefront of data innovation and AI.
Role Overview:
We are seeking a Data Engineer with strong experience in healthcare to help leverage our data assets effectively. Our platform handles billions of data rows monthly, impacting millions of users. Your healthcare industry expertise will support our strategic data usage for meaningful outcomes.
What You'll Work On:
50% Building, scaling, and maintaining data pipelines.
20% Assisting in the implementation of DataOps methodologies.
20% Writing, optimizing, and tuning queries and algorithms.
10% Supporting, monitoring, and maintaining data pipelines.
We foster a collaborative environment that values proactive communication, continuous learning, and teamwork. Expect strong leadership support when encountering and communicating roadblocks.
Who Thrives in This Role?
Healthcare domain expertise: You have a practical understanding of, and experience in, the healthcare industry, including knowledge of healthcare data standards and regulations.
Self-Starter & Curious Learner: You're proactive in solving problems, stay up to date on industry trends, and continuously seek to enhance your skills (yes, listening to Databricks and data engineering podcasts counts!).
Databricks Familiarity: Hands-on experience with Databricks technologies across Azure, AWS, or GCP.
Python/Scala Proficiency: Strong skills in Python or Scala.
Spark Experience: Familiarity with Spark (Databricks), Delta Lakehouse, Delta Live Tables, and Unity Catalog is beneficial.
Skills & Qualifications Required:
6+ years of relevant industry experience.
Practical knowledge of Spark (Scala or Python) and Databricks.
Solid backend engineering skills for data processing, focusing on scalability, availability, and performance optimization.
Familiarity with algorithms and data structures.
Experience with workflow orchestration tools such as Databricks DLT, Azure Data Factory (ADF), Azure Synapse, Airflow, etc.
Understanding of distributed systems architecture.
Proficiency with REST APIs.
Experience working with NoSQL databases.
Recent professional experience involving Scala/Python and Spark (Databricks).
Education:
B.E./B.Tech in Computer Science & Engineering.