As a Software Engineer, you'll contribute to the design, development, and improvement of software solutions that drive our business, platform, and technology capabilities. This role offers an opportunity to work with cutting-edge technologies, collaborate with cross-functional teams, and play a vital role in the evolution of our digital landscape.
Key Deliverables
- Contribute to the full software development lifecycle, from requirements gathering to deployment and maintenance, ensuring high-quality and scalable solutions.
- Design, develop, and test ETL processes and data warehouse solutions using SQL and PySpark for data validation and transformation.
- Collaborate with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration with business objectives.
- Participate in code reviews, contribute to the development of reusable validation utilities, and promote a culture of code quality and knowledge sharing.
- Ensure code is scalable, maintainable, and optimized for performance, adhering to industry best practices and Barclays internal standards.
- Stay updated on the latest technology trends and advancements in data warehousing, cloud computing, and software engineering, integrating learnings into practical applications.
- Contribute to the continuous improvement of our development processes by suggesting and implementing enhancements to workflows and tools that improve efficiency and quality.
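To illustrate the kind of reusable validation utility the deliverables above describe, here is a minimal sketch in plain Python (standing in for PySpark; all names are hypothetical, not a real API) that reconciles a source extract against a target load:

```python
def validate_load(source_rows, target_rows, key, required_fields):
    """Reconcile a source extract against a target load.

    Returns a list of human-readable issues; an empty list means all
    checks passed. Purely illustrative, not a production utility.
    """
    issues = []

    # Row-count reconciliation: source and target should match.
    if len(source_rows) != len(target_rows):
        issues.append(
            f"row count mismatch: source={len(source_rows)} target={len(target_rows)}"
        )

    # Key completeness: every source key should arrive in the target.
    missing = {r[key] for r in source_rows} - {r[key] for r in target_rows}
    if missing:
        issues.append(f"keys missing from target: {sorted(missing)}")

    # Null checks on mandatory fields in the target.
    for field in required_fields:
        nulls = sum(1 for r in target_rows if r.get(field) is None)
        if nulls:
            issues.append(f"{nulls} null value(s) in required field '{field}'")

    return issues
```

In a PySpark setting the same three checks would map onto `DataFrame.count()`, a left anti-join on the key, and `filter(col(field).isNull()).count()`, so the utility can scale from unit-test fixtures to full loads.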
Essential Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 0-1 years of experience in software development, with a focus on data warehousing and ETL processes.
- Strong proficiency in SQL and data modeling concepts (dimensional/3NF, SCDs, CDC).
- Hands-on experience with PySpark for data validation at scale and building reusable validation utilities.
- Practical experience with Linux/Unix operating systems and shell scripting for orchestration, log analysis, and job automation.
- Familiarity with CI/CD pipelines, Git version control, and build/artifact management.
- Excellent problem-solving and communication skills, and a quality-first mindset.
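To make the data-modeling requirement above concrete: a Type 2 slowly changing dimension (SCD) preserves history by closing the current row and inserting a new version whenever a tracked attribute changes. A minimal in-memory sketch of that logic (plain Python in place of SQL; field names are hypothetical):

```python
def scd2_apply(dimension, incoming, key, tracked, as_of):
    """Apply one batch of incoming records to a Type 2 dimension.

    `dimension` is a list of row dicts carrying `valid_from`, `valid_to`
    and `is_current` markers; `tracked` lists the attributes whose change
    triggers a new version. Purely illustrative, not a real API.
    """
    current = {r[key]: r for r in dimension if r["is_current"]}
    for rec in incoming:
        old = current.get(rec[key])
        if old is not None:
            if all(old[a] == rec[a] for a in tracked):
                continue  # no change in tracked attributes: keep the row
            # Close the existing version as of this batch date.
            old["valid_to"] = as_of
            old["is_current"] = False
        # Insert the new (or first-seen) version as the current row.
        dimension.append({
            **rec,
            "valid_from": as_of,
            "valid_to": None,
            "is_current": True,
        })
    return dimension
```

In a warehouse this is typically expressed as a `MERGE`/upsert keyed on the business key plus `is_current`, with CDC feeds supplying the `incoming` batch.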
Preferred Qualifications
- Experience with Ab Initio (GDE graphs, plans, psets, EME object/version management).
- Exposure to job schedulers (Autosys/TWS or similar).
- Experience validating outputs of decisioning/rules engines (e.g., FICO Blaze or similar).
- Familiarity with SOX/KCFC evidencing and control testing processes.
Skills
Must-Have Skills
- Technical: Strong SQL expertise for data manipulation, querying, and optimization; proficiency in PySpark for large-scale data validation; experience with Linux/Unix environments and shell scripting.
- Domain Knowledge: Understanding of data warehousing concepts, ETL processes, and data modeling techniques.
- Behavioral & Interpersonal: Excellent communication skills; ability to collaborate effectively in cross-functional teams; proactive approach to problem-solving.
- Process & SOP: Experience with CI/CD pipelines, Git version control, and build/artifact management.
- Analytical & Problem-Solving: Strong debugging, troubleshooting, and analytical skills; ability to identify and resolve performance bottlenecks.
Good-to-Have Skills
- Advanced Technical: Practical Ab Initio experience; exposure to cloud data stacks (AWS or Azure); familiarity with containerization technologies (Docker/K8s).
- Additional Certifications: AWS Certified Developer, Azure Developer Associate, or similar certifications.
- Cross-Functional Exposure: Experience working with product managers, designers, and other engineering teams.
- Leadership Traits: Mentoring junior team members, facilitating knowledge sharing, and contributing to team growth.
- Continuous Improvement: Familiarity with Agile methodologies (Scrum) and continuous-improvement frameworks such as Lean/Kaizen and Six Sigma.