Data mapping, data modeling, data mining, and data warehousing/data marts; relational databases and SQL query performance tuning.
Hands-on experience with at least one programming language (Python or Java).
Good understanding of Big Data Fundamentals (Hadoop/Spark) and Data Processing Pipelines.
Hands-on experience with at least one cloud platform (GCP, AWS, Azure).
Responsibilities
Design, develop, and deploy scalable data pipelines and data services.
Design, develop, and test data processes per business requirements, following development standards and best practices, and participate in code peer reviews to ensure our applications comply with those practices.
Work with business analysts to gather business requirements from end users and translate them into technical specifications.
Gather requirements to define data definitions, transformation logic, logical and physical data model designs, data flows, and processes.
Provide estimates for development.
Perform data analysis and data profiling against source systems and data warehouse.
Test solutions to validate whether requirements have been met; develop test plans, test scripts, and test conditions based on the business and system requirements.
Provide end-user support in post-deployment phases; assess and evaluate all feedback to ensure that the requirements necessary to correct issues are addressed.
Integrate new data management technologies and software engineering tools into existing infrastructure.
Research opportunities for data acquisition and new uses for existing data.
Collaborate with data scientists and researchers to develop algorithms, including prototypes, predictive models, and proofs of concept.
Work with clients across geographies in an agile team environment.
Constantly learn new technologies, frameworks, and tools.
Experience with Agile development.
Excellent verbal and written communication skills are a must.