Why Join Iris
Are you ready to do the best work of your career at one of
India's Top 25 Best Workplaces in the IT industry? Do you want to grow in an award-winning culture that
truly values your talent and ambitions? Join Iris Software, one of the
fastest-growing IT services companies, where
you own and shape your success story.
About Us
At Iris Software, our vision is to be our clients' most trusted technology partner and the first choice for the industry's top professionals to realize their full potential.
With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive through technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services.
Our work covers complex, mission-critical applications built with the latest technologies, spanning high-value Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.
Working with Us
At Iris, every role is more than a job; it's a launchpad for growth.
Our Employee Value Proposition,
"Build Your Future. Own Your Journey.", reflects our belief that people thrive when they have ownership of their career and the right opportunities to shape it.
We foster a culture where your potential is valued, your voice matters, and your work creates real impact. With cutting-edge projects, personalized career development, continuous learning and mentorship, we support you to grow and become your best both personally and professionally.
Curious what it's like to work at Iris? Head to this video for an inside look at the people, the passion, and the possibilities. Watch it here.
Job Description
- In this Senior Data Engineer role, you will be responsible for designing and implementing complex data pipelines, optimizing data transformations, and ensuring the scalability and performance of our data infrastructure.
- The ideal candidate will have a deep technical background and a proven track record in managing data engineering projects and collaborating with cross-functional teams to drive data-driven decisions. This opportunity will provide you with experience working with cloud data platforms such as Snowflake and data transformation tools like dbt. You will gain exposure to AWS infrastructure for building innovative and efficient data solutions and work with various data sources, including Oracle, MS SQL Server, and Postgres.
Basic Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or related field.
- 6+ years of experience in data engineering, with a proven track record of working in large-scale data initiatives.
- Deep expertise in Python and PySpark.
- Strong hands-on experience with Databricks (Spark, Delta Lake, Workflows).
- Strong experience with AWS (S3, IAM, Textract, Bedrock, or equivalent).
- Experience designing and implementing scalable document ingestion pipelines using Databricks Auto Loader and AWS S3.
- Understanding of vector embeddings and semantic search.
- Strong understanding of data governance, privacy, and compliance in regulated industries (healthcare, life sciences).
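As an illustration of the Auto Loader requirement above, here is a minimal PySpark-style sketch of an incremental document-ingestion stream from S3 into a Delta table. The bucket paths, table names, and tuning values are placeholder assumptions, and the stream itself runs only on a Databricks cluster with an active SparkSession:

```python
def autoloader_read_options(source_format="binaryFile",
                            schema_location="s3://my-bucket/_schemas/docs",
                            max_files_per_trigger=500):
    """Assemble cloudFiles options for a Databricks Auto Loader reader.

    The bucket and schema-location paths are illustrative placeholders.
    """
    return {
        "cloudFiles.format": source_format,
        "cloudFiles.schemaLocation": schema_location,
        "cloudFiles.maxFilesPerTrigger": str(max_files_per_trigger),
        # Prefer file notifications over directory listing at large scale.
        "cloudFiles.useNotifications": "true",
    }


def build_ingestion_stream(spark, source_path, target_table):
    """Wire an incremental document-ingestion stream from S3 into Delta.

    `spark` must be an active Databricks SparkSession; this function is a
    sketch and only executes on a cluster, not locally.
    """
    reader = spark.readStream.format("cloudFiles")
    for key, value in autoloader_read_options().items():
        reader = reader.option(key, value)
    return (reader.load(source_path)
                  .writeStream
                  .option("checkpointLocation", f"{source_path}/_checkpoints")
                  .trigger(availableNow=True)  # batch-style incremental run
                  .toTable(target_table))
```

The `availableNow` trigger lets the same stream run as a scheduled incremental batch job, which is a common cost-control choice for document ingestion.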
Good To Have
- Advanced knowledge of data modeling, lakehouse/lake/warehouse design, and performance optimization.
- Familiarity with generative AI platforms and use cases.
- Contributions to open-source projects or thought leadership in data engineering/architecture.
- Experience with Agile methodologies, CI/CD, and DevOps practices.
- Exposure to FastAPI or other API-based ML services.
- Experience evaluating LLM output quality.
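As a small illustration of what evaluating LLM output quality can involve, the sketch below scores model answers by keyword coverage. It is a deliberately simple stand-in, not a prescribed evaluation method, and all names and thresholds are illustrative:

```python
def keyword_coverage(answer: str, required_keywords: list) -> float:
    """Return the fraction of required keywords found in a model answer
    (case-insensitive substring match)."""
    text = answer.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    return hits / len(required_keywords) if required_keywords else 1.0


def evaluate_outputs(samples, threshold=0.8):
    """Score (answer, required_keywords) pairs and flag answers whose
    coverage falls below the threshold."""
    results = []
    for answer, keywords in samples:
        score = keyword_coverage(answer, keywords)
        results.append({"score": score, "passed": score >= threshold})
    return results
```

In practice, teams usually combine cheap rule-based checks like this with human review or model-graded rubrics; this sketch only shows the rule-based layer.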
Key Responsibilities
- Design, develop, and optimize complex data pipelines and transformation processes using Snowflake, dbt, and AWS services.
- Develop and maintain scalable data models and schemas in Snowflake, ensuring they meet performance and business requirements.
- Monitor and fine-tune the performance of data pipelines, queries, and data models to ensure optimal efficiency and cost-effectiveness.
- Utilize Snowflake's features, such as Time Travel, Zero-Copy Cloning, and Data Sharing, to enhance data management and performance.
- Leverage AWS services, such as AWS Lambda, S3, and Glue, to build and manage serverless data processing workflows and data storage solutions.
- Implement data security measures and ensure compliance with data privacy regulations and organizational policies.
- Troubleshoot and resolve complex data issues, including data sync errors, performance bottlenecks, and integration challenges.
- Provide support for data-related incidents and ensure effective resolution of production issues.
- Collaborate with data analysts and other stakeholders to understand data needs and deliver effective solutions.
- Document data processes, models, and workflows, ensuring clear communication and knowledge sharing across teams.
- Independently assess situations, apply sound judgment and discretion, and make decisions on matters of significant impact without direct supervision.
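To illustrate the serverless-processing responsibility above, here is a minimal sketch of an AWS Lambda handler that extracts bucket/key pairs from S3 ObjectCreated event notifications so a downstream job can pick them up. The bucket names and return shape are assumptions for illustration only:

```python
import urllib.parse


def lambda_handler(event, context):
    """Minimal AWS Lambda handler for S3 ObjectCreated notifications.

    Collects (bucket, key) pairs from the event payload; in a real
    workflow these would be handed to a downstream job (e.g. a Glue
    trigger or a Databricks run) rather than just returned.
    """
    processed = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        # S3 URL-encodes object keys in event payloads (spaces become '+').
        key = urllib.parse.unquote_plus(s3.get("object", {}).get("key", ""))
        if bucket and key:
            processed.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "objects": processed}
```

Decoding keys with `unquote_plus` matters because S3 delivers URL-encoded keys in event payloads, a common source of "file not found" bugs in event-driven pipelines.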
Mandatory Competencies
- Cloud - AWS: S3, S3 Glacier, EBS, Lambda, EventBridge, Fargate
- Behavioral: Communication
- Big Data: Spark, PySpark
- Data Science and Machine Learning: Databricks, Python
- Programming Language - Python: Flask, OOP concepts, Python shell
Perks And Benefits For Irisians
Iris provides world-class benefits for a personalized employee experience. These benefits are designed to support the financial, health, and well-being needs of Irisians for holistic professional and personal growth. Click here to view the benefits.