
Custom Software Engineer

  • Posted 8 hours ago
  • Be among the first 10 applicants

Job Description

Project Role : Custom Software Engineer

Project Role Description : Develop custom software solutions to design, code, and enhance components across systems or applications. Use modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.

Must have skills : Data Analytics

Good to have skills : Data Modeling Techniques and Methodologies, PySpark, Databricks Unity Catalog

Minimum Experience : 3 year(s)

Educational Qualification : A Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field is required

Summary: As a Custom Software Engineer, you will develop custom software solutions designed to meet specific business needs. Your typical day will involve collaborating with team members to design, code, and enhance components across systems and applications, using modern frameworks and agile practices to deliver scalable, high-performing solutions that contribute to the overall success of your projects.

Roles & Responsibilities:
1) Design and build multi-page, interactive Power BI dashboards featuring drill-throughs, bookmarks, custom visuals, and role-based filters to deliver tailored, actionable insights for diverse business users.
2) Develop complex DAX measures, calculated tables, and time-intelligence functions to support dynamic KPI reporting and scenario modeling directly within the Power BI data model.
3) Use Power Query (M) to perform advanced data transformations such as unpivoting, merging, conditional logic, and parameterization, ensuring source data is cleansed and shaped for optimal dashboard performance.
4) Configure and maintain on-premises or cloud Power BI gateways, automate incremental and full refresh schedules, and troubleshoot connectivity issues to guarantee up-to-date reporting.
5) Optimize report and data-model performance through query folding, aggregation tables, star-schema design patterns, and best-practice visuals to minimize load times and enhance user experience.
6) Partner closely with stakeholders to elicit requirements, design storyboard prototypes, and iterate on layouts and visuals based on user feedback and adoption metrics.
7) Architect and implement end-to-end ETL pipelines in Azure Data Factory, orchestrating data ingestion, transformation, delivery, error handling, and monitoring for reliable data flows.
8) Develop and maintain PySpark notebooks on Azure Databricks to execute complex KPI logic, data cleansing, and aggregation workflows, supplemented by SQL scripts to validate ETL outputs and reconcile key metrics.
9) Provision, secure, and govern data in Azure Data Lake Storage Gen2 via Unity Catalog, managing access controls, data lineage, schema evolution, and audit trails in compliance with organizational policies.
10) Operate within an Agile/Scrum framework, participating in sprint planning, daily stand-ups, backlog grooming, and retrospectives, and perform root-cause analyses to identify data quality and performance issues, implementing preventive measures.

Professional & Technical Skills:
1) Power BI Desktop and Service, DAX, Power Query (M)
2) Python, Apache PySpark, SQL, T-SQL, ANSI SQL
3) Azure Data Factory, Databricks, ADLS Gen2, Unity Catalog; CI/CD tools such as Git and Azure DevOps; data modeling; performance tuning
4) Strong problem-solving skills and attention to detail
5) Excellent communication and collaboration skills
6) Adaptability to changing priorities
7) Collaboration in cross-functional teams
8) Time management and a continuous-learning attitude

Additional Information:
1. The candidate should have a minimum of 3 years of experience in Data Analytics.
2. This position is based at our Bengaluru office.
3. A Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field is required.
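The time-intelligence KPI reporting in responsibility 2 typically means measures such as a year-to-date (YTD) running total. The underlying logic can be sketched in plain Python — a minimal, hypothetical illustration of the semantics (akin to DAX's `TOTALYTD`), not the DAX the role would actually write; all data and names here are invented:

```python
from collections import defaultdict
from datetime import date

# Hypothetical monthly sales figures: (period-end date, amount).
sales = [
    (date(2024, 1, 31), 100.0),
    (date(2024, 2, 29), 150.0),
    (date(2024, 3, 31), 120.0),
    (date(2025, 1, 31), 90.0),
]

def year_to_date(rows):
    """Running total per calendar year: the accumulator resets when
    the year changes, mirroring YTD time-intelligence semantics."""
    totals = defaultdict(float)  # year -> running total
    out = []
    for d, amount in sorted(rows):
        totals[d.year] += amount
        out.append((d, totals[d.year]))
    return out

for d, ytd in year_to_date(sales):
    print(d, ytd)
```

Note how the March 2024 value accumulates the whole quarter while January 2025 starts fresh at its own amount.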
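Responsibility 8 mentions SQL scripts that validate ETL outputs and reconcile key metrics. A common reconciliation pattern is to compare row counts and summed amounts between a source table and the pipeline's output. Below is a self-contained sketch using Python's stdlib `sqlite3` in place of the real warehouse; the table names and data are hypothetical:

```python
import sqlite3

# Hypothetical in-memory tables standing in for an ETL source and target.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (id INTEGER, amount REAL);
CREATE TABLE tgt (id INTEGER, amount REAL);
INSERT INTO src VALUES (1, 10.0), (2, 20.0), (3, 30.0);
INSERT INTO tgt VALUES (1, 10.0), (2, 20.0), (3, 30.0);
""")

def reconcile(conn):
    """Pass only if row counts match and total amounts agree
    within a small tolerance (floating-point sums)."""
    (src_n, src_sum), = conn.execute("SELECT COUNT(*), SUM(amount) FROM src")
    (tgt_n, tgt_sum), = conn.execute("SELECT COUNT(*), SUM(amount) FROM tgt")
    return src_n == tgt_n and abs(src_sum - tgt_sum) < 1e-9

print("reconciled:", reconcile(conn))
```

In practice the same two aggregate queries would run against the warehouse source and the Databricks output, with any mismatch feeding the root-cause analysis described in responsibility 10.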


Job ID: 147463409
