The Sr. Data Engineer will lead development of critical data pipelines for the Data Foundation platform for the Smartwool and Altra brands and will be responsible for the design, implementation, and quality of technical deliverables.
Interact as needed with Product Owners to understand business requirements and translate them into the technical stories needed to implement end-to-end solutions.
Work in coordination with the data scientists, data analysts and business partners to implement and test advanced data analytics pipelines and applications.
Build and maintain architecture diagrams, technical documentation, and best practices for the data engineering team, drawing on best practices from across industries and striving for innovation and efficiency.
Understand and contribute to the evolution of the enterprise data architecture including the application of current and emerging data frameworks and tools, driving adoption of agile methodology, release management and DevOps processes.
Provide help and support to business and analytics users as well as data scientists, working in coordination to ensure seamless integration with Data Science models, BI tools, and reporting. Drive activities related to architecture design, DevOps, CI/CD pipelines, and code reviews.
This position requires being within driving distance of the Denver metro area or Fort Worth, preferably Denver.
HOW YOU WILL MAKE A DIFFERENCE
YEARS OF PROFESSIONAL EXPERIENCE: 5-8
- Design and build data ingestion workflows/pipelines, physical data schemas, extracts, data transformations, and data integrations and/or designs using ETL and API microservices in AWS Cloud
- Build data architecture and applications that enable reporting, analytics, data science, and data management and improve accessibility, efficiency, governance, processing, and quality of data.
- Be the point of reference for the Business, Architecture, and Data Science teams whenever a Big Data technology is required
- Coach and mentor junior team members; actively participate in and often lead peer development and code reviews within each Agile sprint, with a focus on test-driven development and Continuous Integration and Continuous Delivery (CI/CD).
- Evaluate and recommend new technology patterns for the Analytics platform
- Collaborate with AWS Solution Architects to ensure technical direction
- Enable development best practices, re-usability of code, QA and release management processes
- Help to coordinate agile scrum processes, meetings and backlog management
EDUCATIONAL/ POSITION REQUIREMENTS
- 3+ years overall software development experience
- A deep understanding of Data Engineering and related technologies, with 2+ years on the AWS cloud platform programming in Python
- Previous experience in leading Cloud/Big Data Engineering projects
- Excellent knowledge of Glue, Lambda, Redshift and other AWS services required to develop efficient data pipelines
- Streaming pipeline experience using Kinesis, Kafka, and similar tools
- Ability to evaluate and improve technical design and engineering patterns to increase software reusability
- Familiarity with JIRA & Confluence or similar tracking and management tools
- Familiarity with BI Tools like Tableau, DOMO, PowerBI or similar
- Excellent organizational, verbal, and written communication skills, and the ability to present information in a clear, concise, and complete manner
- Self-starter, creative, enthusiastic, innovative and collaborative attitude
- Ability to prioritize tasks based on urgency and accuracy
- Performing work with a high degree of independence and self-management across a large variety of tasks in a matrixed organization
- Communicating verbally and in writing to business customers with various levels of technical knowledge, educating them about our tools and data products
- Develop and provide development support for performant pipelines as part of your team's quarterly deliverables
- Consolidation of data from different sources (APIs, SQL databases, CSV, S3, and FTP files, etc.) into a centralized data store
- Agile development of a Data Lake / Data Warehouse on AWS, including serverless patterns (Lambda, Glue, Spark, REST APIs, and Docker)
- Redshift and BigQuery as data warehouse systems
- Experience with RDBMS (MySQL, Oracle, DB2, etc.) and SQL/DDL, and with NoSQL (DynamoDB)
- Development of message-queue-driven systems (Amazon SQS, SNS, and Lambda-based functions)
- Development of streaming systems (e.g., Kinesis Data Streams, Kinesis Data Firehose)
- Python development, PySpark development
- Lambda development in Python and Node.js
- Ability to perform code review and technical design review
- Knowledge of AWS services is mandatory
- Knowledge of Terraform and Infrastructure-as-Code principles
$76,000.00 USD - $114,000.00 USD annually
This position is eligible for additional compensation awards that may include an annual incentive plan, sales incentive, or commission potential. Specific details of the additional compensation eligibility for this position will be provided during the recruiting and interview process.
Benefits at VF Corporation:
You can review a general overview of each benefit program offered, including this year's medical plan rates, on www.MyVFbenefits.com by clicking "Looking to Join VF?"
Please note, our pay ranges are determined and built from market pay data. In determining the specific compensation for this position, we comply with all local, state, and federal laws.