Grid Dynamics

Bigdata Developer

  • Posted a day ago
  • Be among the first 10 applicants

Job Description

  • Experience creating data-driven business solutions and solving data problems using a wide variety of technologies such as Hadoop, MapReduce, Hive, Scala, Spark, and NoSQL, as well as traditional relational databases like MySQL
  • Experience building real-time streaming data pipelines using technologies like Kafka, Spark Streaming, and ELK
  • Experience building ETL/ELT data pipelines, data quality checks, and data anomaly detection and notification systems
  • Experience with development in programming languages such as Java or Python
  • Experience with agile development incorporating continuous integration and continuous delivery
  • Knowledge of any AI assistant (ChatGPT, Copilot, GitHub Copilot, Claude) is an added advantage

Responsibilities

  • Design and Develop Data Pipelines: Build and maintain scalable data pipelines to process large volumes of structured and unstructured data.
  • Big Data Framework Development: Develop and optimize applications using big data technologies like Apache Hadoop, Apache Spark, and Apache Kafka.
  • Data Processing and Transformation: Implement ETL (Extract, Transform, Load) processes to ingest and transform large datasets from multiple data sources.
  • Database and Data Storage Management: Work with distributed storage systems such as Apache Hive, Apache HBase, and cloud data platforms.
  • Performance Optimization: Optimize big data applications and queries for high performance and scalability.
  • Collaboration with Data Teams: Collaborate with data engineers and data scientists.

Requirements

Minimum Requirements: Big Data Developer

  • Experience: 5-7 years of experience in Big Data / Data Engineering / Data Processing
  • Programming Skills: Strong programming skills in Java, Python, or Scala
  • Big Data Technologies: Hands-on experience with big data frameworks such as:
    • Apache Hadoop
    • Apache Spark
  • Data Processing & ETL:
    • Experience building ETL pipelines for large-scale data processing
    • Knowledge of batch and real-time data processing
  • Data Storage Technologies: Experience working with tools such as:
    • Apache Hive
    • Apache HBase
    • Distributed file systems like HDFS
  • Streaming Tools (Good to Have): Exposure to real-time streaming tools such as Apache Kafka
  • Database Knowledge: Strong knowledge of SQL and experience with NoSQL databases

We offer

  • Opportunity to work on bleeding-edge projects
  • Work with a highly motivated and dedicated team
  • Competitive salary
  • Flexible schedule
  • Benefits package - medical insurance, sports
  • Corporate social events
  • Professional development opportunities
  • Well-equipped office

About Us

Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.

Job ID: 144181141
