- Quantitative or engineering background (e.g., Bachelor's degree in Computer Science, Physics, Economics, Biochemistry, Mathematics, or Statistics; or 5+ years of relevant experience)
- 3+ years of experience with descriptive statistics (e.g., measures of distribution) and inferential statistics (e.g., hypothesis testing, confidence intervals), and an understanding of when such methods are appropriate
Project Kuiper is an initiative to launch a constellation of Low Earth Orbit satellites that will provide low-latency, high-speed broadband connectivity to unserved and underserved communities around the world.
Be part of the team integrating and verifying the system-level architecture of Project Kuiper's broadband wireless system, from customer endpoints to terrestrial gateways via Low Earth Orbit satellites.
This is a unique opportunity to define a groundbreaking wireless solution with few legacy constraints. The team works with customer requirements and wireless implementation teams to define, integrate, test, and document the (i) throughput / availability of network resources, (ii) assignment of resources to each customer area, (iii) coordination of spectrum with regulatory organizations and other users of the spectrum, (iv) optimization of link budgets and system trade-offs, and (v) top-level architectural documents for the wireless system.
As a Payload Integration and Test Data Analyst, you will integrate and analyze the communication system components on the satellites and in the ground stations. You will ensure that the satellites and the ground network provide the best possible communications system performance while maintaining compliance with our operating licenses.
The team is looking for an accomplished Data Analyst to build and manage the data engineering pipeline and to help define, build, and verify the analytics roadmap for validating the Kuiper system. This work will help stakeholders across the Amazon Kuiper organization use the data in our systems to make well-informed decisions. The insights will play a key role in decisions around resource allocation, program effectiveness, productivity analysis, and business impact.
The ideal candidate has strong analytical acumen and data engineering skills, a high degree of customer obsession, and a track record of delivering results. You'll be building things from the ground up, so enthusiasm for a start-up/builder role is a must.
- Build data pipelines that feed real-time decision making and support more efficient ad-hoc queries and analysis.
- Work closely with the development team, product managers, and stakeholders.
- Ensure consistency across the various operational and analytic data sources and systems to enable faster, more efficient detection and resolution of issues.
- Support the teams through our multi-year journey, helping design our future data analysis architecture.
Export Control Requirement:
Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum.
- Master's degree in computer science, physics, mathematics, statistics, economics, or another quantitative field
- Experience with descriptive statistics (e.g., measures of distribution) and inferential statistics (e.g., hypothesis testing, confidence intervals), and an understanding of when such methods are appropriate
- Experience automating processes with Python and SQL
- Track record of processing, extracting, visualizing, and communicating insights from large datasets
- Strong understanding of data modeling and building ETL pipelines
- Experience with data lakes and data warehouses
- Familiarity with big data technologies (e.g., Hadoop, Hive, Spark, EMR)
- Proven success in communicating with users, other technical teams, and senior management to collect requirements, describe data modeling decisions and data engineering strategy
- Knowledge of software engineering best practices across the development lifecycle, coding standards, code reviews, source management, build processes, testing, and operations
- Experience driving large-scale analytics projects.
- Experience with AWS big data, ML, and analytics services such as Timestream, Redshift, S3, Athena, Glue, SageMaker, and Kinesis
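As a concrete illustration of the inferential-statistics skills listed above, the sketch below compares two sets of link-throughput measurements using a normal-approximation confidence interval. It is a minimal example, not part of the role description: the sample values, the metric, and the helper function are all hypothetical.

```python
from statistics import NormalDist, mean, stdev

# Hypothetical link-throughput samples (Mbps) from two test configurations.
baseline = [92.1, 95.4, 90.8, 93.7, 94.2, 91.5, 96.0, 92.9]
candidate = [97.3, 99.1, 96.5, 98.8, 97.9, 100.2, 98.4, 99.6]

def mean_ci(sample, confidence=0.95):
    """Normal-approximation confidence interval for the sample mean."""
    m = mean(sample)
    se = stdev(sample) / len(sample) ** 0.5
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return m - z * se, m + z * se

base_lo, base_hi = mean_ci(baseline)
cand_lo, cand_hi = mean_ci(candidate)
# Non-overlapping intervals suggest a real throughput difference; with
# samples this small, a formal test (e.g., Welch's t-test) and a
# t-distribution interval would be the more appropriate choices.
```

Knowing when the normal approximation is adequate versus when a t-based method is required is exactly the "understand when such methods are appropriate" judgment the qualifications call for.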
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.