

Senior Data Engineer

Company: 84.51

Location: Cincinnati, OH (Onsite), United States

Job Type: Full Time

Posted on: March 19

84.51 Overview:

84.51 is a retail data science, insights and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers and affiliated partners create more personalized and valuable experiences for shoppers across the path to purchase.

Powered by cutting-edge science, we leverage first-party retail data from nearly one in two US households and more than 2 billion transactions to fuel a more customer-centric journey using 84.51 Insights, 84.51 Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.

Join us at 84.51!

As a Senior Data Engineer, you will have the opportunity to build solutions that ingest, store, and distribute our big data to be consumed by data scientists and our products.

Our data engineers use Python, Hadoop, PySpark, Hive, and other data engineering technologies and visualization tools to deliver data capabilities and services to our scientists, products, and tools.
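
For illustration, the following is a minimal PySpark sketch of the kind of ingest-and-publish work this stack supports; the input path, column names, and target Hive table are hypothetical assumptions for the example, not actual 84.51 datasets.

    # Illustrative sketch only, not 84.51 production code.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("example-transaction-ingest")  # hypothetical application name
        .enableHiveSupport()                    # required for writing Hive tables
        .getOrCreate()
    )

    # Read a raw extract of transaction data (assumed Parquet layout and schema).
    raw = spark.read.parquet("/data/raw/transactions/")

    # Light transformation: derive a transaction date and a daily spend aggregate.
    daily_spend = (
        raw
        .withColumn("txn_date", F.to_date("txn_timestamp"))
        .groupBy("household_id", "txn_date")
        .agg(F.sum("basket_value").alias("daily_spend"))
    )

    # Publish to a Hive-managed table for data scientists and downstream products.
    daily_spend.write.mode("overwrite").saveAsTable("analytics.daily_household_spend")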

Responsibilities

Take ownership of features and drive them to completion through all phases of the 84.51 SDLC. This includes internal and external-facing applications as well as process improvement activities:
  • Participate in design and development of Hadoop and Cloud-based solutions
  • Perform unit and integration testing
  • Participate in implementation of BI visualizations
  • Collaborate with architecture and lead engineers to ensure consistent development practices
  • Provide mentoring to junior engineers
  • Participate in retrospective reviews
  • Participate in the estimation process for new work and releases
  • Collaborate with other engineers to solve and bring new perspectives to complex problems
  • Drive improvements in people, practices, and procedures
  • Embrace new technologies and an ever-changing environment
Requirements
  • 5+ years of professional data development experience
  • 3+ years of experience developing with Hadoop/HDFS and SQL (Oracle, SQL Server)
  • 3+ years of experience with PySpark/Spark
  • 3+ years of experience developing with either Python, Java, or Scala
  • Full understanding of ETL and data warehousing concepts
  • Exposure to VCS (Git, SVN)
  • Strong understanding of Agile Principles (Scrum)
  • Bachelor's Degree (Computer Science, Management Information Systems, Mathematics, Business Analytics, or STEM)
Preferred Skills
  • Experience with Azure
  • Exposure to NoSQL (Mongo, Cassandra)
  • Experience with Databricks
  • Exposure to Service Oriented Architecture
  • Exposure to BI Tooling (Tableau, Power BI, Cognos, etc.)
  • Proficient with Relational Data Modeling and/or Data Mesh principles
  • Experience with CI/CD (Continuous Integration/Continuous Delivery)

#LI-DOLF #LI-REMOTE


Job ID: 2308378997
