Role Title: Senior Data Engineer - Snowflake
Location: Hyderabad
Who we are
Anblicks is a Data and AI company. We bring value to your data.
Anblicks specializes in data modernization and transformation, helping organizations across industries make decisions better, faster, and at scale by enabling data-driven decision making. Since 2004, Anblicks has been supporting customers across the globe on their digital transformation journeys. Headquartered in Addison, Texas, Anblicks employs more than 550 technology professionals, data analysts, and data science experts in the USA, India, and Australia.
Anblicks is committed to bringing value to a wide range of industries through CloudOps, Data Analytics, and Modern Apps. Global customers benefit from the Anblicks Ignite Enterprise Data Platform and our Accelerators.
https://www.anblicks.com
Why Join Anblicks
Anblicks is a company advancing in leaps and bounds, and it places tremendous emphasis on values: a sound ideology is crucial to steering an organization towards success. Through a stable value system, Anblicks is enabling a transformation that goes beyond the digital, one as expansive as its people and a best-in-class global culture.
Key Facts:
- More than 550 Technology Professionals
- More than 200 Customers Served
- More than 900 Projects Completed
- Trusted by happy clients including Fortune 500 companies
- 16 Books authored by Employees
- Offices in India, USA & Australia
Role Purpose
We are looking for a highly motivated and experienced Data Engineer to join our team of data experts. The ideal candidate will have a strong background in designing, developing, and maintaining data pipelines and ETL processes in a data warehousing environment, using technologies such as Snowflake, DBT, and Matillion. As a Data Engineer, you will work closely with the Lead Data Engineer and Data Architect to implement end-to-end data solutions, build and maintain data pipelines, and ensure the quality and integrity of our organization's data.
Role Responsibilities
- Collaborate with the Lead Data Engineer and Data Architect to design and implement end-to-end data solutions
- Create, test, and deploy enterprise-level applications with Snowflake
- Design and implement features for identity and access management.
- Create authorization frameworks for better access control.
- Implement novel query optimization techniques and key security capabilities, including encryption.
- Resolve performance and scalability issues in the system.
- Manage transactions using distributed data processing algorithms.
- Take ownership of work from start to finish.
- Build, monitor, and optimize ETL and ELT processes and their underlying data models
- Migrate solutions from on-premises setup to cloud-based platforms.
- Understand and implement the latest delivery approaches based on data architecture.
- Document and track project work based on a clear understanding of user requirements.
- Perform data integration with third-party tools including architecting, designing, coding, and testing phases.
- Manage documentation of data models, architecture, and maintenance processes
- Continually review and audit data models for enhancement
- Maintain reliable data pipelines built on ETL tools.
- Coordinate with BI experts and analysts on customized data models and integrations
- Code updates, new code development, and reverse engineering
- Performance tuning, user acceptance training, application support
- Maintain confidentiality of data
- Risk assessment, management, and mitigation plans
- Regular engagement with teams for status reporting and routine activities
- Migration activities from one database to another or on-premises to cloud
Skills & Experience
- Bachelor's or master's degree in Computer Science, Information Systems, or a related field
- 5+ years of experience in data engineering and data architecture
- Experience working with AWS S3 / Azure ADLS storage accounts and Snowflake.
- Strong experience in data engineering fundamentals (SQL, RDBMS, data models, data structures, orchestration, DevOps, etc.)
- Knowledge of SQL language and cloud-based technologies
- Strong experience building data pipelines with Spark and Python/Scala
- Strong experience building ELT pipelines (batch and streaming) in Snowflake cloud warehouse
- Good working knowledge of leveraging DBT (SQL and Python models) to perform transformations in Snowflake
- Able to write structured, efficient queries over large datasets using statistical aggregate and analytical (window) functions, and to build reporting data marts.
- Experience working with Snowflake concepts such as Snowpipe, Streams, Tasks, Cloning, Time Travel, Data Sharing, Data Replication, etc.
- Experience handling large and complex datasets (JSON, ORC, Parquet, CSV files) from sources such as AWS S3 and Azure Data Lake Storage Gen2.
- Ability to understand customer requirements; perform analysis, design, development, and implementation; and gather and define business requirements while enhancing business processes.
- Knowledge of Snowflake tools such as Snowsight and SnowSQL, as well as partner connect integrations.
- Performance tuning and setting up resource monitors
- Snowflake modeling: roles, databases, and schemas
- SQL performance measuring, query tuning, and database tuning
- ETL tools with cloud-driven skills
- SQL-based databases such as Oracle, SQL Server, Teradata, etc.
- Snowflake warehousing, architecture, processing, administration
- Data ingestion into Snowflake
- Enterprise-level technical exposure to Snowflake applications
- Experience with data modeling is a plus.
- Excellent problem-solving and analytical skills
- Ability to work independently and as part of a team.
- Experience working in an Agile environment.
- Skilled in building relationships with clients and in practice development activities.
- Excellent written and oral communication skills; ability to communicate effectively with technical and non-technical staff.
- Must be open to travel.