
Bahwan Cybertek

RPA / AI Engineer

  • Posted 5 hours ago

Job Description


Role description

We are looking for a technically sharp and forward-thinking RPA & AI Engineer who sits at the intersection of intelligent process automation and applied AI. This is not a traditional RPA role — the ideal candidate has hands-on experience building production-grade agentic AI applications, LLM-powered workflows, and autonomous bot solutions that go far beyond rule-based automation. You will design, build, and operate intelligent automation systems that combine classical RPA tooling with modern AI agent frameworks, evaluation pipelines, and observability practices — enabling the business to move faster and smarter.

Key Responsibilities

·        Design and build production-grade agentic AI systems using frameworks such as LangChain, LangGraph, CrewAI, OpenAI Agent SDK, Gemini ADK — with a clear understanding of when to use single-agent vs multi-agent architectures.


·        Implement tool-calling, function-calling, and ReAct-style reasoning loops; manage agent memory (short-term, long-term, episodic) and context window strategies.
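For illustration, the reasoning loop this bullet describes can be sketched in plain Python. Everything here is a hypothetical stand-in: `fake_model` replaces a real LLM call, and the single-tool registry replaces a production tool-calling SDK.

```python
# Minimal sketch of a ReAct-style tool-calling loop (illustrative only;
# the model stub and tool registry are hypothetical, not a real SDK).

def calculator(expression: str) -> str:
    """A toy tool the agent can call."""
    return str(eval(expression, {"__builtins__": {}}))  # demo only, unsafe in prod

TOOLS = {"calculator": calculator}

def fake_model(history):
    """Stand-in for an LLM: decides whether to call a tool or answer."""
    last = history[-1]
    if last["role"] == "user":
        return {"action": "call_tool", "tool": "calculator",
                "input": last["content"]}
    # After seeing a tool observation, produce a final answer.
    return {"action": "final", "content": f"The result is {last['content']}."}

def react_loop(question: str, max_steps: int = 5) -> str:
    """Reason -> act -> observe until the model emits a final answer."""
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = fake_model(history)
        if step["action"] == "final":
            return step["content"]
        observation = TOOLS[step["tool"]](step["input"])
        history.append({"role": "tool", "content": observation})
    raise RuntimeError("agent did not converge")

print(react_loop("2 + 3"))  # -> The result is 5.
```

A real implementation would also persist the `history` list as short-term memory and bound `max_steps` as a safety checkpoint.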


·        Build autonomous workflows that chain LLMs, APIs, databases, and RPA bots into end-to-end pipelines with appropriate human-in-the-loop checkpoints.


·        Architect hybrid automation solutions that combine RPA bots with AI agents for tasks requiring perception, reasoning, and decision-making beyond rule-based logic.


·        Automate web, desktop, and API-based workflows; manage bot scheduling, orchestration, queues, and exception handling at scale.


·        Integrate large language models (OpenAI GPT, Anthropic Claude, Google Gemini, or open-source models via Ollama) into production applications via APIs and SDKs.


·        Build and optimise Retrieval-Augmented Generation (RAG) pipelines — vector store selection (Pinecone, Weaviate, Qdrant, pgvector), chunking strategies, embedding models, and hybrid search (BM25 + semantic).
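The hybrid-search idea in this bullet can be sketched with toy components: a keyword-overlap score standing in for BM25, and a character-frequency vector standing in for a real embedding model. The corpus, `embed()`, and the 0.5/0.5 weighting are illustrative assumptions only.

```python
# Hedged sketch of hybrid retrieval: blend a lexical score with a
# semantic (cosine) score. Real systems use BM25 and learned embeddings.
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def embed(text: str) -> list:
    """Toy 'embedding': letter-frequency vector (a model in production)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, docs: list, alpha: float = 0.5) -> list:
    """Rank docs by alpha * lexical score + (1 - alpha) * semantic score."""
    qv = embed(query)
    scored = [(alpha * keyword_score(query, d)
               + (1 - alpha) * cosine(qv, embed(d)), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)]

docs = ["invoice processing bot", "holiday calendar", "invoice approval agent"]
print(hybrid_search("invoice bot", docs)[0])  # -> invoice processing bot
```

Chunking strategy and the lexical/semantic weighting are exactly the tuning knobs the role calls out.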


·        Implement semantic caching, context compression, and query rewriting to manage latency and cost in high-throughput LLM workflows.
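A semantic cache, as named here, can be reduced to a small sketch: reuse a stored answer when a new query's embedding is close enough to a cached one. The `embed()` function and the similarity threshold are toy assumptions; production caches use real embedding models and a vector index.

```python
# Illustrative semantic cache keyed on embedding similarity.
import math

def embed(text: str) -> list:
    """Toy letter-frequency 'embedding' (a real model in production)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, query: str):
        """Return a cached answer for a near-duplicate query, else None."""
        qv = embed(query)
        for ev, answer in self.entries:
            if cosine(qv, ev) >= self.threshold:
                return answer
        return None

    def put(self, query: str, answer: str) -> None:
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("What is our refund policy?", "Refunds within 30 days.")
print(cache.get("What is our refund policy"))  # cache hit, no LLM call needed
```

Every cache hit skips an LLM round trip, which is precisely how this pattern cuts both latency and token spend.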


·        Manage LLM gateway patterns — rate limiting, fallback routing, model versioning, and cost tracking across providers using tools like LiteLLM or Portkey.


·        Design and implement LLM evaluation pipelines using frameworks such as RAGAS, DeepEval, LangSmith Evals, or custom grader chains to assess accuracy, faithfulness, relevance, and groundedness.


·        Build automated graders — both LLM-as-a-judge and deterministic metrics — to continuously measure output quality across regression and canary deployments.
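The deterministic half of this bullet can be sketched as a simple keyword-coverage grader. The required-facts list, pass threshold, and sample answer are all illustrative assumptions, not a prescribed metric.

```python
# Sketch of a deterministic grader: checks that a model answer
# covers a set of required facts, with a configurable pass threshold.

def keyword_coverage(answer: str, required: list) -> float:
    """Fraction of required phrases present in the answer (case-insensitive)."""
    text = answer.lower()
    hits = sum(1 for phrase in required if phrase.lower() in text)
    return hits / len(required) if required else 1.0

def grade(answer: str, required: list, threshold: float = 0.8) -> dict:
    """Return a score plus a pass/fail verdict for regression dashboards."""
    score = keyword_coverage(answer, required)
    return {"score": round(score, 2), "passed": score >= threshold}

result = grade("The bot retries 3 times, then routes to a human queue.",
               ["retries", "human"])
print(result)  # -> {'score': 1.0, 'passed': True}
```

Deterministic checks like this run on every regression and canary build; the complementary LLM-as-a-judge graders handle the qualities (faithfulness, tone) that string matching cannot.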


·        Define and track eval datasets, golden sets, and adversarial test cases; maintain evaluation leaderboards and regression baselines.


·        Implement red-teaming, jailbreak testing, and safety evaluation suites for AI outputs before production rollout.


·        Profile and optimise end-to-end latency in LLM-powered pipelines — from retrieval and embedding to inference and response streaming — targeting p50/p95/p99 SLAs.


·        Apply model quantisation, prompt compression, and speculative decoding strategies for self-hosted models to balance quality vs cost vs speed.


·        Monitor token consumption, cost-per-request, and throughput metrics; implement budget guardrails and alerting for LLM API spend.

Job ID: 147480905