Amazon Music

Data Engineer I, Amazon

2-4 Years
  • Posted 3 hours ago
Job Description

Are you passionate about data and code? Does the prospect of dealing with mission-critical data excite you? Do you want to build data engineering solutions that process a broad range of business and customer data? Do you want to continuously improve the systems that enable annual worldwide revenue of hundreds of billions of dollars? If so, then the eCommerce Services (eCS) team is for you!

In eCommerce Services (eCS), we build systems that span the full range of eCommerce functionality, from Privacy, Identity, Purchase Experience, and Ordering to Shipping, Tax, and Financial integration. eCommerce Services manages several aspects of the customer life cycle, starting with account creation and sign-in, then placing items in the shopping cart, proceeding through checkout and order processing, and continuing to order history and post-fulfillment actions such as refunds and tax invoices. eCS services determine sales tax and shipping charges, and we ensure the privacy of our customers. Our mission is to provide a commerce foundation that accelerates business innovation and delivers a secure, available, performant, and reliable shopping experience to Amazon's customers.

The goal of the eCS Data Engineering and Analytics team is to provide high-quality, on-time reports to Amazon business teams, enabling them to expand globally at scale. Our team has a direct impact on retail CX, a key component of the Amazon flywheel.

As a Data Engineer, you will own the architecture of DW solutions for the Enterprise using multiple platforms. You will have the opportunity to lead the design, creation, and management of extremely large datasets, working backwards from business use cases. You will use your strong business and communication skills to work with business analysts and engineers to determine how best to design the data warehouse for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes in the data warehouse platform to support the rapidly growing and dynamic business demand for data, delivering data as a service that has an immediate influence on day-to-day decision making.

Key job responsibilities
- Design, implement, and support a platform providing ad-hoc access to large data sets
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Redshift, and OLAP technologies
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
- Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

Basic Qualifications

- 2+ years of data engineering experience
- Experience with SQL
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with data modeling, warehousing and building ETL pipelines
- Bachelor's degree or equivalent

Preferred Qualifications

- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Job ID: 147380577