Equity Only | Pre-Seed Stage Startup | India (Preferred, Remote-Friendly)
About Us
We're building a Robinhood-class mobile app powered by AI that turns massive, messy data into actionable trade ideas and auto-executes via users' connected brokers. We are a startup based in the US with a registered office in India. We need an investment of a minimum of 20 hours per week (part-time), which offers significant returns through equity: you continue working while we work to secure funding and onboard you as a full-time employee within the next 6-8 months.
You'll partner with our AI engineers, quantitative research team, and frontend and backend teams.
The Role
You will lead LLM model selection and adaptation for financial applications, comparing open-source (OSS) and off-the-shelf models (e.g., the Llama 4 family, Mistral, DeepSeek) on our tasks and data.
You'll design a decision framework for RAG vs. fine-tuning vs. full/continued pretraining, and implement efficient methods (e.g., LoRA/QLoRA) to hit latency, quality, and cost targets. You'll also stand up objective evaluations (HELM/MTEB-style) and production guardrails appropriate for a fintech app.
What You'll Do
- Model selection & benchmarking
  - Compare open and off-the-shelf LLMs on finance tasks, build scorecards (quality, latency, context, cost), and run holistic evaluations inspired by HELM and MTEB.
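To give a flavor of the scorecard-and-leaderboard idea, here is a minimal sketch; the model names, metric values, and weighting below are hypothetical placeholders, not benchmark results:

```python
from dataclasses import dataclass

@dataclass
class Scorecard:
    model: str
    quality: float          # accuracy on an internal finance eval, 0-1
    p50_latency_ms: float   # median end-to-end latency
    context_k: int          # context window, in thousands of tokens
    usd_per_1k_tok: float   # blended inference cost

def composite(s: Scorecard) -> float:
    # Example weighting only: reward quality, penalize latency and cost.
    # Real weights would come from product latency/cost SLOs.
    return s.quality - 0.10 * (s.p50_latency_ms / 1000) - 5.0 * s.usd_per_1k_tok

# Hypothetical entries, for illustration only.
cards = [
    Scorecard("model-a", quality=0.81, p50_latency_ms=900, context_k=128, usd_per_1k_tok=0.02),
    Scorecard("model-b", quality=0.77, p50_latency_ms=400, context_k=32, usd_per_1k_tok=0.004),
]
leaderboard = sorted(cards, key=composite, reverse=True)
for s in leaderboard:
    print(f"{s.model}: {composite(s):.3f}")
```

Note how a weighted composite can flip the ranking: the cheaper, faster model wins here despite lower raw quality, which is exactly the trade-off a scorecard is meant to surface.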
- Adaptation strategy: fine-tuning vs. pretraining
  - Propose a decision tree for when to use fine-tuning (stable domain tasks) vs. continued pretraining (deep domain needs).
- Justify trade-offs for cost, safety, and maintenance.
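One way such a decision tree could be encoded; the heuristics below are illustrative assumptions, not our actual framework:

```python
def adaptation_strategy(knowledge_is_volatile: bool,
                        needs_domain_style: bool,
                        domain_gap_is_deep: bool) -> str:
    # Placeholder heuristics: fast-changing facts favor retrieval (RAG);
    # a deep domain gap (vocabulary, distribution shift) favors continued
    # pretraining; stable task/style needs favor fine-tuning.
    if knowledge_is_volatile:
        return "RAG"
    if domain_gap_is_deep:
        return "continued pretraining"
    if needs_domain_style:
        return "fine-tuning"
    return "prompting only"

print(adaptation_strategy(knowledge_is_volatile=True,
                          needs_domain_style=False,
                          domain_gap_is_deep=False))
```

The real version would also weigh cost, safety, and maintenance, per the trade-offs above.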
- Efficient fine-tuning & training
  - Implement LoRA/QLoRA pipelines.
- Quantify gains vs. full fine-tune.
  - Document compute/memory budgets and inference impacts.
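To make "quantify gains vs. full fine-tune" concrete, a back-of-the-envelope sketch of trainable-parameter counts; the layer shape and rank are illustrative, not tied to any specific model:

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA learns a low-rank update to a frozen d_in x d_out weight:
    # two factors A (d_in x r) and B (r x d_out).
    return d_in * r + r * d_out

def full_params(d_in: int, d_out: int) -> int:
    # Full fine-tune: every weight in the layer is trainable.
    return d_in * d_out

# Illustrative 4096 x 4096 projection layer at rank 16.
d, r = 4096, 16
full = full_params(d, d)
lora = lora_params(d, d, r)
print(f"full: {full:,}  lora: {lora:,}  reduction: {full / lora:.0f}x")
```

Summing this over the adapted layers is the starting point for the compute/memory budget documentation mentioned above.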
- Finance-aware modeling
  - Evaluate domain LLMs and datasets (e.g., FinGPT, BloombergGPT) and codify finance-specific evaluation sets (tickers, filings, news, risk language).
  - Own a benchmark pack (e.g., FinanceBench, FinQA, FiQA, and, if relevant, FinBen tasks) with pass/fail gates and a living leaderboard.
- Risk, reliability & governance
  - Define offline/online evals, hallucination tests, and failure modes for advice/explanations; partner with compliance on clear disclosures consistent with robo-adviser guidance.
- Prod integration
- Ship models with tracing, prompts/versioning, and A/B or champion-vs-challenger evaluation.
- Monitor latency/cost/quality SLOs in production.
What We're Looking For
- 8-12+ years in data science/ML with strong Python (PyTorch/Transformers) and LLM ops.
- Hands-on with LLM evaluation, prompt/program design, and RAG stacks; proven LoRA/QLoRA experience.
- Prior bake-offs among open-weight models (e.g., Llama/Mistral/DeepSeek) with scorecards (quality, latency, context, $/1k tokens).
- Experience choosing between fine-tuning and continued pretraining, with clear cost/quality trade-offs.
- Finance domain familiarity (equities/options) strongly preferred; ability to craft domain evals (EDGAR/filings, news, corporate actions).