
TEGNA

Generative AI Engineer

  • Posted a day ago

Job Description

About TEGNA

TEGNA Inc. (NYSE: TGNA) helps people thrive in their local communities by providing the trusted local news and services that matter most. With 64 television stations in 51 U.S. markets, TEGNA reaches more than 100 million people monthly across the web, mobile apps, streaming, and linear television. Together, we are building a sustainable future for local news.


Position Overview

TEGNA is seeking a Generative AI Engineer to design, develop, and deploy scalable, production-grade LLM-powered solutions leveraging AWS cloud services, Snowflake, and modern Generative AI frameworks. This role focuses on building enterprise-ready AI systems, optimizing LLM inference, and integrating large-scale data platforms with advanced AI technologies.

The ideal candidate combines strong cloud engineering expertise with hands-on experience in prompt engineering, foundation models, agentic AI systems, and enterprise data pipelines within Snowflake and AWS ecosystems.

What You'll Do

AI & Generative AI Development
Design, develop, and deploy LLM-powered applications and agentic AI systems in production environments. Implement advanced prompt engineering strategies including prompt chaining, multi-turn orchestration, few-shot and in-context learning, Chain-of-Thought (CoT) and Tree-of-Thought (ToT) prompting, function calling, tool use optimization, and structured output generation (JSON/XML schemas).
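The structured-output side of this work can be sketched in a few lines: a function-calling schema plus a validator that rejects malformed model replies before they reach downstream systems. The schema and field names below are purely illustrative, not from any actual TEGNA system.

```python
import json

# Hypothetical tool schema for structured output; field names are
# illustrative only.
ARTICLE_TAG_SCHEMA = {
    "name": "tag_article",
    "parameters": {
        "required": ["topic", "sentiment", "entities"],
        "types": {"topic": str, "sentiment": str, "entities": list},
    },
}

def validate_structured_output(raw_reply: str, schema: dict) -> dict:
    """Parse an LLM's JSON reply and enforce the schema before any
    downstream consumer trusts it."""
    data = json.loads(raw_reply)
    params = schema["parameters"]
    for field in params["required"]:
        if field not in data:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(data[field], params["types"][field]):
            raise TypeError(f"wrong type for field: {field}")
    return data

# A well-formed reply passes validation; a partial one raises.
reply = '{"topic": "local news", "sentiment": "positive", "entities": ["TEGNA"]}'
parsed = validate_structured_output(reply, ARTICLE_TAG_SCHEMA)
```

In production the raw reply would come from a model's function-calling response rather than a literal string, but the validation step stays the same.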

Retrieval-Augmented Generation (RAG) & Model Optimization
Build and optimize RAG systems integrating Snowflake data with LLMs. Evaluate and fine-tune foundation models using AWS Bedrock or other managed AI services. Develop guardrails to mitigate hallucinations, ensure grounding, and implement safety controls.
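The RAG loop described here can be sketched end to end. The bag-of-words "embedding" and the example documents below are illustrative stand-ins for a real embedding model (e.g., via Bedrock) and Snowflake-sourced data; the grounding instruction in the prompt is one simple form of hallucination guardrail.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production system would call a
    # real embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    # Ground the model in retrieved context and instruct it not to
    # answer beyond that context.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer ONLY from the context below; say 'unknown' otherwise.\n"
        f"Context:\n{context}\nQuestion: {query}"
    )

docs = [
    "TEGNA operates 64 television stations in 51 U.S. markets.",
    "Snowflake stores audience analytics tables.",
    "The cafeteria menu changes weekly.",
]
prompt = build_grounded_prompt("How many stations does TEGNA operate?", docs)
```

The assembled prompt would then be sent to the LLM; evaluation pipelines can check whether answers stay within the retrieved context.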

LLMOps & Observability
Implement LLMOps best practices including model versioning, deployment and rollback strategies, prompt versioning, experimentation frameworks, and evaluation pipelines. Monitor and optimize LLM application performance using observability and logging tools.
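Prompt versioning with rollback, one of the practices listed above, can be illustrated with a minimal registry; real deployments would back this with a database or a dedicated prompt-management tool, and the class and prompt names here are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class PromptRegistry:
    """Minimal prompt-versioning store: each registered prompt gets a
    content hash so deployments and rollbacks reference exact versions."""
    versions: dict = field(default_factory=dict)  # name -> list of (hash, text)

    def register(self, name: str, text: str) -> str:
        digest = hashlib.sha256(text.encode()).hexdigest()[:12]
        self.versions.setdefault(name, []).append((digest, text))
        return digest

    def latest(self, name: str) -> tuple[str, str]:
        return self.versions[name][-1]

    def rollback(self, name: str) -> tuple[str, str]:
        # Drop the newest version and fall back to the previous one.
        self.versions[name].pop()
        return self.versions[name][-1]

reg = PromptRegistry()
v1 = reg.register("summarizer", "Summarize the article in two sentences.")
v2 = reg.register("summarizer", "Summarize the article in one sentence.")
```

Logging the prompt hash alongside each model response makes experiments reproducible: any observed regression can be traced to the exact prompt version that produced it.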

Cloud & Platform Engineering (AWS)
Architect scalable AI solutions using AWS services such as Bedrock and SageMaker for foundation models, Lambda for serverless deployments, EC2 for GPU-accelerated inference, Step Functions for orchestration of complex workflows and agentic pipelines, and CloudWatch for monitoring and alerting.
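The state-passing orchestration model that Step Functions provides for agentic pipelines can be sketched in plain Python: each step receives the previous step's output and enriches it. This is a simplified illustration only; a real workflow adds retries, branching, and error states, and the pipeline stages below are hypothetical.

```python
def run_pipeline(steps, payload):
    """Sequential orchestration sketch: each step receives the prior
    step's output, mirroring the state-passing model of a Step
    Functions workflow (greatly simplified)."""
    for step in steps:
        payload = step(payload)
    return payload

# Hypothetical agentic pipeline stages: retrieve, generate, guard.
steps = [
    lambda s: {**s, "retrieved": ["doc-1", "doc-2"]},
    lambda s: {**s, "draft": f"answer using {len(s['retrieved'])} docs"},
    lambda s: {**s, "approved": "answer" in s["draft"]},
]
result = run_pipeline(steps, {"query": "q"})
```

In the managed equivalent, each stage would be a Lambda or Bedrock task state and CloudWatch would capture per-stage metrics.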

Data Engineering & Snowflake Integration
Build and optimize data pipelines between Snowflake and AI services. Design feature stores and embeddings pipelines, leverage Snowflake Cortex LLM functions for in-database AI operations, implement vector and semantic search capabilities, and ensure data quality, governance, and cost efficiency for AI workloads.
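Vector search, the capability named above, reduces to cosine similarity over normalized embeddings. A tiny in-memory index makes the mechanics concrete; it stands in for what a platform like Snowflake provides natively, and the keys and vectors below are made up for illustration.

```python
import math
import heapq

class VectorIndex:
    """Tiny in-memory vector index; a stand-in for platform-native
    vector search. Vectors are normalized on insert so the dot
    product at query time equals cosine similarity."""

    def __init__(self):
        self.items: list[tuple[str, list[float]]] = []

    @staticmethod
    def _normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v] if n else v

    def add(self, key: str, vector: list[float]) -> None:
        self.items.append((key, self._normalize(vector)))

    def search(self, vector: list[float], k: int = 1) -> list[str]:
        q = self._normalize(vector)
        scored = ((sum(a * b for a, b in zip(q, v)), key)
                  for key, v in self.items)
        return [key for _, key in heapq.nlargest(k, scored)]

idx = VectorIndex()
idx.add("station-profile", [0.9, 0.1, 0.0])  # hypothetical embeddings
idx.add("ad-metrics", [0.0, 0.2, 0.9])
top = idx.search([1.0, 0.0, 0.1], k=1)
```

An embeddings pipeline would populate such an index (or a vector column) from Snowflake tables on a schedule, keeping search results in sync with governed source data.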

AI Application Development & Integration
Develop APIs and backend services to operationalize AI solutions. Integrate LLM systems into internal platforms, sales tools, and analytics environments. Implement real-time and streaming inference for low-latency applications and collaborate with stakeholders to translate business use cases into production-ready AI systems.
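The core of such a backend service is a handler that parses a request, invokes the model, and returns a structured response. The sketch below stubs the model call with a lambda; in practice it would hit a managed endpoint, and all names here are illustrative.

```python
import json

def handle_generate(request_body: bytes,
                    llm_call=lambda p: f"[stub reply to: {p}]") -> bytes:
    """Minimal inference-endpoint handler: parse the JSON request,
    invoke the model (stubbed here), and return a JSON response.
    Injecting llm_call keeps the handler testable without a live model."""
    payload = json.loads(request_body)
    prompt = payload["prompt"]
    reply = llm_call(prompt)
    return json.dumps({"prompt": prompt, "completion": reply}).encode()

resp = handle_generate(b'{"prompt": "Hello"}')
```

Wrapped in a web framework route, the same handler serves internal platforms and sales tools; streaming variants would yield tokens incrementally instead of returning one JSON body.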

What You Bring

5+ years of experience in AI/ML, software engineering, or data engineering roles.

Strong proficiency in Python with a solid understanding of machine learning fundamentals.

Hands-on experience with AWS cloud services, APIs, and microservices-based architectures.

Experience integrating AI solutions with enterprise data platforms such as Snowflake.

Practical experience in prompt engineering and LLM orchestration frameworks (e.g., LangChain, LlamaIndex, Semantic Kernel, or similar).

Experience working with agentic AI frameworks (e.g., AutoGen, CrewAI, or equivalent).

Strong analytical, problem-solving, and system design skills with the ability to build scalable and reliable AI solutions.

Effective communication skills with the ability to collaborate across engineering, data, and business teams.

Preferred Qualifications

Experience building enterprise-grade RAG pipelines.

Knowledge of MLOps and LLMOps best practices.

Hands-on experience with vector databases and embeddings.

Familiarity with LLM evaluation frameworks and model performance metrics.

Experience implementing AI governance, responsible AI, and safety best practices.

Background in sales, media, marketing analytics, or enterprise data platforms.

Why TEGNA

At TEGNA, our values guide everything we do. We Demand the Truth by delivering accurate, trusted information. We Work Smarter by finding better ways to solve problems and move quickly. We Do the Right Thing by holding ourselves to the highest standards. And We Win by working together to deliver results. If you're ready to build innovative AI solutions that power the future of local media and be part of a team that lives these values every day, we'd love to hear from you.


Job ID: 143305673
