

ML Engineer with AI Deployment Experience

Company: MokshaaLLC

Location: Remote

Posted on: November 11

Job Title: Machine Learning Engineer, AI Deployments on AWS Cloud

Location: Remote (must be authorized to work in the USA)

Contract (W2/C2C/1099)

Rate: $70/hr to $75/hr

Overview:

We are seeking a Machine Learning Engineer experienced in developing, deploying, and optimizing AI/ML solutions on AWS Cloud. The ideal candidate will take end-to-end ownership of the ML lifecycle, from data ingestion and model training to scalable deployment, monitoring, and continuous improvement using AWS-native services.

Key Responsibilities:

Model Development & Training

  • Design, develop, and optimize machine learning and deep learning models using frameworks such as TensorFlow, PyTorch, or Scikit-learn.
  • Perform data preprocessing, feature engineering, and model evaluation using AWS data and analytics services.
  • Collaborate with data scientists to productionize research models into scalable, reliable cloud-based AI solutions.

AWS Cloud AI Deployments

  • Deploy and manage ML models in AWS SageMaker (training jobs, endpoints, pipelines, and model registry).
  • Build serverless inference APIs using AWS Lambda, API Gateway, or ECS/Fargate.
  • Implement real-time or batch inference pipelines with AWS Step Functions, Kinesis, or AWS Batch.
  • Manage containerized workloads for ML inference using Docker and Amazon EKS (Kubernetes).

MLOps & Automation

  • Develop CI/CD pipelines for ML using AWS CodePipeline, CodeBuild, and CodeCommit (or GitHub Actions).
  • Automate data versioning, model versioning, and model retraining using SageMaker Pipelines, MLflow, or DVC.
  • Monitor model performance, data drift, and prediction accuracy using Amazon CloudWatch, SageMaker Model Monitor, or Evidently AI.

Data Engineering Collaboration

  • Work closely with data engineers to design scalable data ingestion and transformation pipelines using AWS Glue, AWS Glue DataBrew, AWS Data Pipeline, or Apache Spark on EMR.
  • Ensure data lineage, quality, and compliance within AWS data lakes and Redshift environments.

Optimization & Scaling

  • Optimize model performance, latency, and cost efficiency using AWS Inferentia, Elastic Inference, and Auto Scaling.
  • Leverage GPU-based and AWS Trainium instances for high-performance training and fine-tuning tasks.