
Snowflake Cortex

  • Posted 8 hours ago

Job Description

Role Summary

We are looking for a highly skilled Snowflake AI Engineer to design, develop, and optimize AI-driven applications using Snowflake Cortex. In this role, you will be the bridge between traditional data engineering and modern Generative AI, building production-ready Retrieval-Augmented Generation (RAG) systems and conversational analytics interfaces that allow users to interact with data using natural language.

Key Responsibilities

AI Application Development: Build and scale RAG-based workflows and conversational analytics experiences using Cortex Analyst and Cortex Search.
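The RAG pattern behind this responsibility can be sketched in a few lines. This is a minimal, in-memory illustration only: keyword-overlap scoring stands in for Cortex Search retrieval, and the `generate` stub stands in for a Cortex LLM call; all names here are hypothetical.

```python
# Minimal RAG sketch. In production, retrieval would be backed by
# Cortex Search and generation by a Cortex LLM function; both are
# stubbed in-memory here to illustrate the retrieve-then-generate flow.
def retrieve(question, documents, top_k=2):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(question, context):
    """Stub for an LLM call: just assembles the augmented prompt."""
    joined = "\n".join(context)
    return f"Answer '{question}' using:\n{joined}"

docs = [
    "Quarterly revenue grew 12% in APAC.",
    "The support team resolved 95% of tickets within SLA.",
    "Revenue in EMEA was flat quarter over quarter.",
]
context = retrieve("How did revenue grow?", docs)
print(generate("How did revenue grow?", context))
```

The key design point the sketch shows is that generation only ever sees retrieved context, which is what grounds the model's answers in governed data.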

Cortex Implementation: Operationalize Snowflake-native LLM functions (e.g., CORTEX.COMPLETE) for forecasting, anomaly detection, and automated insight generation.
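`SNOWFLAKE.CORTEX.COMPLETE` is invoked as a SQL function. The sketch below only builds such a statement as a string (with single-quote escaping); in practice it would be executed through a Snowpark session or the Snowflake connector, and the model name shown is an assumption, not a recommendation.

```python
def cortex_complete_sql(model: str, prompt: str) -> str:
    """Build a SQL statement invoking Snowflake's CORTEX.COMPLETE.

    The statement would be executed via a Snowpark session or the
    Snowflake connector; here we only construct it, escaping single
    quotes so the prompt is a valid SQL string literal.
    """
    escaped = prompt.replace("'", "''")
    return (
        f"SELECT SNOWFLAKE.CORTEX.COMPLETE('{model}', '{escaped}') "
        "AS response"
    )

sql = cortex_complete_sql(
    "mistral-large",
    "Summarize last week's anomalies in the orders table.",
)
print(sql)
```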

Semantic Modeling: Design and maintain semantic models in YAML to ensure high-accuracy (90%+) text-to-SQL translation for business users.
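A Cortex Analyst semantic model of the kind described is authored in YAML. The fragment below is illustrative only: table, column, and synonym names are invented, and the field layout follows the general shape of the semantic model spec, which should be verified against current Snowflake documentation.

```yaml
# Illustrative fragment of a Cortex Analyst semantic model.
# All names are hypothetical; verify field layout against the
# current Snowflake semantic model specification before use.
name: sales_analytics
tables:
  - name: orders
    base_table:
      database: PROD_DB
      schema: SALES
      table: ORDERS
    dimensions:
      - name: region
        expr: REGION
        data_type: TEXT
        synonyms: ["territory", "market"]
    measures:
      - name: total_revenue
        expr: ORDER_AMOUNT
        default_aggregation: sum
```

Synonyms are what lift text-to-SQL accuracy: they let business phrasing like "market" resolve to the governed `REGION` column.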

Pipeline Engineering: Develop scalable data pipelines using Python and Snowpark to process structured and unstructured data for AI consumption.
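A recurring step in preparing unstructured data for AI consumption is chunking text before embedding. The pure-Python sketch below shows the idea with overlapping word-based chunks; in a real pipeline this logic would typically run as a Snowpark UDF over a document table, and the parameters are assumptions.

```python
def chunk_text(text: str, max_words: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping word-based chunks for embedding.

    Overlap preserves context across chunk boundaries; in a real
    pipeline this would run inside Snowpark rather than locally.
    """
    words = text.split()
    if not words:
        return []
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

doc = " ".join(f"word{i}" for i in range(120))
chunks = chunk_text(doc, max_words=50, overlap=10)
print(len(chunks))
```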

Application Interfaces: Create interactive data applications using Streamlit to showcase AI-driven insights to stakeholders.

Governance & Security: Implement Role-Based Access Control (RBAC), masking policies, and secure data handling to ensure responsible AI deployment.
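In Snowflake, masking is defined as a SQL `MASKING POLICY` whose `CASE` expression branches on `CURRENT_ROLE()`. The Python function below only mirrors that branching logic for illustration; the role names and masking rule are hypothetical.

```python
def mask_email(value: str, role: str) -> str:
    """Illustrative masking logic: privileged roles see the raw value,
    everyone else sees only the domain.

    In Snowflake this would be a MASKING POLICY attached to the column,
    with CURRENT_ROLE() driving the CASE expression.
    """
    if role in {"SECURITY_ADMIN", "DATA_OWNER"}:
        return value
    _local, _, domain = value.partition("@")
    return "***@" + domain

print(mask_email("jane.doe@example.com", "ANALYST"))
```

Keeping the policy column-attached rather than query-side means every consumer, including Cortex-powered applications, sees consistently masked data.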

Optimization: Monitor and tune LLM inference layers, vector indexes, and embedding models for performance and cost-efficiency.
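The ranking criterion a vector index optimizes is cosine similarity between embeddings. The NumPy sketch below shows brute-force cosine ranking; production indexes use approximate nearest-neighbor methods rather than this exhaustive scan, and the data here is synthetic.

```python
import numpy as np

def top_k_cosine(query: np.ndarray, index: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k vectors in `index` most similar to `query`.

    Brute-force cosine similarity: normalize everything, take dot
    products, sort descending. Production vector indexes approximate
    this, but the ranking criterion is the same.
    """
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    sims = m @ q
    return np.argsort(sims)[::-1][:k]

rng = np.random.default_rng(0)
index = rng.normal(size=(100, 8))
query = index[42] + rng.normal(scale=0.01, size=8)  # slightly perturbed copy of vector 42
print(top_k_cosine(query, index, k=1))
```

Profiling this step against index size and embedding dimensionality is a useful baseline when tuning cost versus recall.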

Required Skills & Qualifications

Experience: 3–6 years in data engineering, AI/ML analytics, or application development with a strong focus on the Snowflake platform.

Snowflake Expertise: Deep knowledge of Snowflake architecture, including dynamic tables, streams, tasks, and warehouse optimization.

AI/ML Proficiency: Hands-on experience with Generative AI concepts, including LLM prompting, tokenization, and vector databases.

Programming: Advanced proficiency in Python (Pandas, NumPy) and SQL (window functions, semi-structured data handling).
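The Python and SQL skills listed above overlap directly: most SQL window functions have pandas analogues. As one small sketch (with invented data), a dense rank per partition, i.e. `RANK() OVER (PARTITION BY region ORDER BY revenue DESC)`, maps onto `groupby(...).rank()`:

```python
import pandas as pd

# Pandas analogue of a SQL window function:
# DENSE_RANK() OVER (PARTITION BY region ORDER BY revenue DESC)
df = pd.DataFrame({
    "region": ["EMEA", "EMEA", "APAC", "APAC", "APAC"],
    "revenue": [100, 250, 300, 150, 300],
})
df["rank_in_region"] = (
    df.groupby("region")["revenue"].rank(method="dense", ascending=False)
)
print(df)
```

Ties within a partition share a rank under `method="dense"`, matching SQL's `DENSE_RANK` semantics.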

Tools: Experience with dbt for modular modeling and Git-based CI/CD workflows for data applications.

Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related technical field.

Job ID: 145453311