
Are you passionate about data and code? Does the prospect of dealing with mission-critical data excite you? Do you want to build data engineering solutions that process a broad range of business and customer data? Do you want to continuously improve the systems that enable annual worldwide revenue of hundreds of billions of dollars? If so, then the eCommerce Services (eCS) team is for you!
In eCommerce Services (eCS), we build systems that span the full range of eCommerce functionality, from Privacy, Identity, Purchase Experience and Ordering to Shipping, Tax and Financial integration. eCS manages several aspects of the customer life cycle, from account creation and sign-in, through placing items in the shopping cart and checkout, to order processing, order history, and post-fulfillment actions such as refunds and tax invoices. eCS services determine sales tax and shipping charges, and we ensure the privacy of our customers. Our mission is to provide a commerce foundation that accelerates business innovation and delivers a secure, available, performant, and reliable shopping experience to Amazon's customers.
The goal of the eCS Data Engineering and Analytics team is to provide high-quality, on-time reports to Amazon business teams, enabling them to expand globally at scale. Our team has a direct impact on retail customer experience (CX), a key component of the Amazon flywheel.
As a Data Engineer, you will own the architecture of data warehouse solutions for the enterprise across multiple platforms. You will have the opportunity to lead the design, creation, and management of extremely large datasets, working backwards from business use cases. You will use your strong business and communication skills to work with business analysts and engineers to determine how best to design the data warehouse for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes in the data warehouse platform to support rapidly growing and dynamic business demand for data, and for delivering data as a service that has an immediate influence on day-to-day decision making.
Key job responsibilities
- Design, implement, and support a platform providing ad-hoc access to large datasets
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, Redshift, and OLAP technologies
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (a minimal, illustrative sketch follows this list)
- Build and deliver high-quality datasets to support business analysts, data scientists, and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
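To give a concrete flavor of the ETL pipeline work mentioned above, here is a minimal PySpark sketch of one extract-transform-load step. It is only an illustration of the general technique; the bucket paths, table names, columns, and schema are assumptions for the example and do not describe any actual eCS system.

# Minimal, illustrative PySpark ETL sketch: read raw order events,
# aggregate daily revenue per marketplace, and write a reporting table.
# All paths, table names, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_revenue_etl").getOrCreate()

# Extract: load raw order events (hypothetical S3 location and schema).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: keep completed orders and aggregate revenue by day and marketplace.
daily_revenue = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "marketplace_id")
    .agg(
        F.sum("order_amount").alias("gross_revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Load: write the result partitioned by date for downstream reporting.
(
    daily_revenue
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_revenue/")
)

spark.stop()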
- 2+ years of data engineering experience
- Experience with SQL
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with data modeling, warehousing and building ETL pipelines
- Bachelor's degree or equivalent
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Job ID: 147380577
Skills:
Hadoop, Scala, PL/SQL, EMR, SparkSQL, Data Modeling, DDL, BODI, Informatica, SSIS, SQL, Hive, ODI, DataStage, HiveQL, Spark, Python, MDX, Warehousing, KornShell, ETL pipelines