
Amazon's Global Procurement Organization manages the indirect supply chain for World Wide Operations and Delivery Services. We work with massive amounts of data and leverage it for advanced applications.
We are seeking Data Engineers to build the facilities that enable the Global Procurement Organization to gain deep insights into all facets of indirect supply management and to apply advanced data science for automation and decision making.
You are someone who is customer-focused, relentless, and driven to help us build out a brand-new initiative. You will collaborate closely with product managers, developers, and leaders across the organization. To be successful in this role, you should have broad skills in database design, be comfortable working with large and complex data sets, and be adept at modeling and designing a strong data warehouse and building self-service data platforms that let stakeholders use the data we manage.
Key job responsibilities
- Own and design end-to-end data pipelines and models for complex procurement domains, including spend analytics, supplier risk, contract compliance, and catalog management
- Drive technical design reviews and author architecture documents for new data systems and platform components
- Lead scalability and reliability improvements: identify bottlenecks in existing infrastructure, define solutions, and execute with or without a team behind you
- Build frameworks and reusable components that improve engineering productivity and standardize data patterns across GPT data products
- Partner with applied scientists and BIEs to define the data infrastructure needed to power ML models, forecasting engines, and self-service analytics platforms
- Define and enforce data quality standards, including data contracts, freshness SLAs, lineage tracking, and observability instrumentation
- Translate ambiguous business requirements from procurement operations, finance, and leadership into well-scoped technical solutions
- Mentor and develop junior data engineers through code reviews, design feedback, and pair programming
- Operate across multiple concurrent workstreams, managing tradeoffs between speed, correctness, and technical debt
Basic qualifications
- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing and building ETL pipelines
- 2+ years of experience developing and operating large-scale data structures for business intelligence analytics, using data modeling
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Job ID: 145643747