We are seeking a skilled Python Developer to join our team and help design, develop, and maintain robust and efficient data processing pipelines using AWS Serverless technologies. The ideal candidate will have a strong background in Python programming, experience with serverless computing, and a passion for building scalable solutions for processing and analyzing large volumes of data.
Key Responsibilities
Develop, test, and deploy data processing pipelines using AWS Serverless technologies such as AWS Lambda, Step Functions, DynamoDB, and S3.
Implement ETL processes to transform and process structured and unstructured data efficiently.
Collaborate with business analysts and other developers to understand requirements and deliver solutions that meet business needs.
Write clean, maintainable, and well-documented code following best practices.
Monitor and optimize the performance and cost of serverless applications.
Ensure high availability and reliability of pipelines through sound design and robust error-handling mechanisms.
Troubleshoot and debug issues in serverless applications and data workflows.
Stay up to date with emerging technologies in the AWS and serverless ecosystem and recommend improvements.
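To give candidates a flavor of the day-to-day work described above, here is a minimal, illustrative sketch of the kind of S3-triggered Lambda handler this role builds. It is not part of the role description: the bucket layout is hypothetical, and the boto3 fetch-and-transform step is left as a comment so the sketch runs without AWS credentials.

```python
import json
import urllib.parse


def extract_s3_objects(event):
    """Pull (bucket, key) pairs out of an S3 event notification.

    Object keys arrive URL-encoded in the event payload, so decode them
    before passing them to boto3's get_object (omitted here to keep the
    sketch self-contained).
    """
    objects = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        objects.append((bucket, key))
    return objects


def lambda_handler(event, context):
    """Entry point Lambda invokes for each S3 upload notification."""
    objects = extract_s3_objects(event)
    # In a real pipeline: fetch each object with boto3, transform it,
    # and write results to DynamoDB or another S3 prefix.
    return {"statusCode": 200, "body": json.dumps([key for _, key in objects])}
```

Keeping the event-parsing logic in a pure function like `extract_s3_objects` makes the handler unit-testable without mocking AWS services.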
Required Skills and Experience
3-5 years of hands-on Python development experience, including libraries such as boto3, pandas, or similar data processing tools.
Strong knowledge of AWS services, especially Lambda, S3, DynamoDB, Step Functions, SNS, SQS, and API Gateway.
Experience building data pipelines or workflows to process and transform large datasets.
Familiarity with serverless architecture and event-driven programming.
Knowledge of best practices for designing secure and scalable serverless applications.
Proficiency in version control systems (e.g., Git) and collaboration tools.
Understanding of CI/CD pipelines and DevOps practices.
Strong debugging and problem-solving skills.
Familiarity with both SQL (e.g., Amazon RDS) and NoSQL (e.g., DynamoDB) database systems.
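Several of the skills above (reliability, error handling, debugging) commonly surface as retry logic around AWS calls, since transient throttling from services like DynamoDB or S3 is expected at scale. A minimal sketch of the pattern, with purely illustrative default parameters:

```python
import random
import time


def with_retries(fn, max_attempts=3, base_delay=0.1):
    """Call fn(), retrying on failure with exponential backoff and jitter.

    A common reliability pattern in serverless data pipelines; the
    attempt count and delays here are illustrative defaults, not
    recommendations for any specific AWS service.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            # Double the delay each attempt and add jitter to avoid
            # synchronized retries across concurrent Lambda invocations.
            time.sleep(base_delay * (2 ** (attempt - 1)) * (1 + random.random()))
```

In practice, boto3 also offers built-in retry configuration, so hand-rolled retries like this are usually reserved for application-level operations the SDK cannot retry on its own.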