About Bitus Labs
Bitus Labs is a cutting-edge AI gaming company dedicated to revolutionizing the gaming industry through innovative artificial intelligence technologies. As we continue to grow, we are building a robust data platform to support our ambitious projects and power our AI-driven solutions.
About the Role
We are looking for a highly skilled Data Engineer to join our team and play a key role in designing, building, and optimizing large-scale data processing architectures. The ideal candidate will have extensive experience with AWS cloud services and a deep understanding of big data frameworks. You will be responsible for developing scalable data pipelines, ensuring data reliability, and enabling advanced analytics capabilities. We move quickly and precisely, and we keep privacy at the forefront of everything we do.
Key Responsibilities
- Design and implement scalable and efficient data architectures to process large volumes of structured and unstructured data
- Develop, optimize, and maintain real-time and batch ETL/ELT pipelines using modern data engineering tools
- Leverage AWS services (S3, Redshift, Glue, Lambda, EMR, Kinesis, etc.) to build and manage cloud-based data infrastructure
- Work with big data technologies such as Spark, Hadoop, and Kafka to support real-time and batch processing
- Ensure data quality, security, and governance across the entire data lifecycle
- Monitor and troubleshoot data pipelines and infrastructure issues, ensuring high availability and reliability
- Optimize database performance and design data models that support business intelligence and analytics
- Collaborate with data scientists, analysts, and software engineers to enable data-driven decision-making
Requirements
- 3+ years of experience in data engineering, big data processing, or cloud data solutions
- Proficiency in Python or Scala for data processing and automation
- Solid understanding of data modeling, warehousing, and performance tuning
- Strong proficiency in AWS data services (S3, Redshift, Glue, Lambda, EMR, Kinesis, etc.)
- Hands-on experience with big data frameworks (Apache Spark, Hadoop, Kafka, etc.)
- Experience building and maintaining data models using dbt
- Advanced SQL skills and experience with distributed databases (Redshift, Snowflake, BigQuery, etc.)
- Experience with orchestration tools (Apache Airflow, AWS Step Functions, etc.)
- Familiarity with CI/CD pipelines and infrastructure-as-code (Terraform, CloudFormation, etc.)
- Experience working in an agile development environment
- Fluency in Mandarin Chinese is required
Preferred Qualifications
- Experience with real-time data streaming solutions (Apache Flink, Kinesis, Kafka Streams)
- Knowledge of machine learning model deployment in a production environment
- Familiarity with data governance frameworks and security best practices
- AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect certification is a plus
Job Type: Full-time
Pay: From $80,000.00 per year
Benefits:
- 401(k)
- 401(k) matching
- Dental insurance
- Health insurance
- Life insurance
- Paid time off
- Vision insurance
Application Question(s):
- Do you need an H-1B transfer?
Experience:
- Big data: 3 years (Required)
Language:
- Mandarin (Required)
Ability to Commute:
- Irvine, CA 92618 (Required)
Ability to Relocate:
- Irvine, CA 92618: Relocate before starting work (Required)
Work Location: In person