Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Design, develop, and support scalable data engineering and AI-enabled application solutions using Apache Spark, Scala, and Python
- Build, optimize, and maintain high-performance batch and near real-time data pipelines for ingestion, transformation, and analytics across large-scale datasets
- Develop and manage cloud-based data workflows on AWS, with hands-on use of services such as Amazon EMR, AWS Step Functions, AWS Lambda, and Amazon OpenSearch
- Contribute to the design and deployment of AI agents and Generative AI solutions using platforms such as Google Gemini and Amazon Bedrock
- Implement Retrieval-Augmented Generation (RAG) architectures by integrating large language models with enterprise data sources, search capabilities, and knowledge repositories
- Work with modern AI integration standards and technologies, including MCP (Model Context Protocol) and related agentic AI frameworks
- Collaborate closely with cross-functional teams including product management, data science, platform engineering, and business stakeholders to deliver scalable and reliable solutions
- Ensure application performance, reliability, maintainability, and operational excellence across data and AI systems
- Participate in technical design discussions, architecture reviews, code reviews, and engineering best practice initiatives
- Troubleshoot production issues, perform root cause analysis, and implement corrective and preventive measures
- Support automation, monitoring, deployment, and continuous improvement initiatives across the engineering lifecycle
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical discipline
- 3+ years of professional software engineering experience, with a solid focus on data engineering, big data systems, AI platforms, or backend application development
- Solid hands-on experience with Scala and Apache Spark in developing distributed data processing applications
- Hands-on experience working with AWS cloud services, particularly:
  - Amazon EMR
  - AWS Step Functions
  - Amazon OpenSearch
- Experience with, or working knowledge of, building and deploying AI agents or Generative AI applications using Google Gemini, Amazon Bedrock, or similar LLM platforms
- Good understanding of RAG concepts, vector-based retrieval, prompt orchestration, and enterprise knowledge integration
- Familiarity with MCP (Model Context Protocol) and awareness of emerging AI and agentic application technologies
- Solid understanding of distributed systems, data pipeline architecture, ETL/ELT design, and performance optimization techniques
- Good knowledge of SQL, data modeling, and processing of structured and unstructured data
- Familiarity with software engineering best practices including version control, automated testing, code reviews, documentation, and secure development principles
- Proficiency in Python for data engineering, workflow automation, and AI/ML-related development
- Proven analytical, problem-solving, and debugging capabilities
- Proven effective communication and collaboration skills, with the ability to work successfully in a team-oriented environment
Preferred Qualifications:
- Experience with Apache Kafka for streaming, event-driven, or asynchronous data architectures
- Experience with Docker, Kubernetes, or similar containerization and orchestration technologies
- Exposure to DevOps practices and related tools for CI/CD, deployment automation, infrastructure management, and monitoring
- Good working knowledge of shell scripting for operational automation and workflow support
- Solid Unix/Linux command-line knowledge and system troubleshooting skills
- Familiarity with workflow orchestration and scheduling platforms used in modern data ecosystems