how do you work?

I'm Orris, an AI assistant. Here's how I work:
What I can do
  • Answer questions across all domains — science, history, technology, medicine, law, math, writing, and more
  • Search the web for current information (my knowledge has a training cutoff, so I supplement with live searches)
  • Access a curated medical textbook library for clinical and health questions
  • Search PubMed for peer-reviewed medical literature
  • Read and generate files — Word docs, PDFs, PowerPoints, spreadsheets, images
  • Write and execute code in a Linux sandbox environment
  • Break complex tasks into steps and work through them autonomously
How I approach questions
  1. Medical questions → I consult the medical library first, then supplement with PubMed and web search
  2. Research questions → I search the web and/or PubMed in parallel, synthesize results, and cite sources
  3. File/document tasks → I use specialized skills for DOCX, PDF, PPTX, and XLSX files
  4. Code/computation → I write and run code in a sandbox, verify output, and iterate if needed
  5. Ambiguous requests → I ask a clarifying question before diving in, so I don't waste effort on the wrong thing
What I won't do
  • Make up citations or fabricate medical facts
  • Proceed on vague requests without asking a quick clarifying question
  • Give legal, financial, or medical advice as a substitute for a qualified professional
What would you like help with?

### **What is your company going to make? Please describe your product and what it does or will do.**

We're building Orris, an AI-powered medical search copilot. Doctors face a losing battle against information overload: staying up to date in one specialty would require 20 hours of reading every day. A clinician types a question ("best second-line therapy for drug-resistant TB in pregnancy?"). Orris instantly searches authoritative sources like PubMed papers, guideline PDFs, clinical trial registries, specialty textbooks, and peer-reviewed videos/images, and then returns a one-page answer with:
- a ranked evidence summary,
- inline citations, and
- auto-generated visuals (flowcharts, dosage tables, annotated images).

Behind the scenes, we run a domain-tuned RAG pipeline, a multimodal vector index, and evidence-grading heuristics. Doctors find the right answer in under 60 seconds instead of trawling journals for 30 minutes. Our edge is deep user insight (the founders work extensively with clinicians) and the ability to iterate quickly on user feedback; for example, we added the visuals within a week of receiving that feedback.

This was our answer last year for YC. Since then we have majorly upgraded everything about you, and we don't even use a RAG model ... what and how should we answer this question now?

Here's the rewrite:

We're building Orris, an AI medical search copilot for clinicians.
Staying current in a single specialty would require 20 hours of reading every day. When a doctor needs a quick, trustworthy answer mid-shift — best second-line therapy for drug-resistant TB in pregnancy, dosing adjustments in renal failure, conflicting trial results — they don't have 30 minutes to trawl journals. They need the right answer in under 60 seconds.
A clinician types a question. Orris deploys a reasoning agent that searches across PubMed, guideline PDFs, clinical trial registries, and specialty textbooks — then returns a one-page answer with a ranked evidence summary, inline citations, and auto-generated visuals like flowcharts, dosage tables, and annotated images.
What sets Orris apart technically is how it retrieves and reasons. Rather than classic RAG — which chunks text and does a one-shot vector lookup — Orris uses a hybrid agentic retrieval loop. The agent reads sources the way a trained clinician would: navigating a textbook using contextual offsets, scrolling forward or backward as the clinical question evolves, and re-querying when early results change what needs to be asked next. It's a ReAct-style loop that adapts its retrieval strategy with changing context, not a single embedding match.
This matters because medicine is not a keyword search problem. The right answer to a complex clinical question often lives across multiple sources, requires understanding what's missing from an initial result, and demands judgment about evidence quality — things a static retrieval pipeline cannot do.
Our edge is deep clinical insight from founders who work extensively with practicing doctors, which lets us iterate fast on the things that actually change behavior at the bedside.

A few notes on what I changed and why:
  • Dropped "RAG pipeline" language entirely — replaced with an accurate description of the agentic loop that also explains why it's better for medicine
  • Led with the doctor's world (time pressure, mid-shift decisions) before the technology
  • Kept the output description (ranked summary, citations, visuals) because it's concrete and YC readers respond to tangible deliverables
  • The point about the agent reading sources the way a trained clinician would (navigating and scrolling through a textbook) is your real technical moat; I made sure it came through clearly without being jargon-heavy
  • Ended on founder-market fit, which YC weighs heavily
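If it helps to have a concrete mental model, here's a minimal runnable sketch of the kind of agentic retrieval loop described above. Everything in it (the stub corpus and the `search` and `assess_gap` helpers) is a hypothetical stand-in for real PubMed and guideline retrieval:

```python
# Minimal sketch of a ReAct-style agentic retrieval loop.
# The in-memory CORPUS and helper names are hypothetical;
# a real system would query PubMed, guidelines, and textbooks.

CORPUS = {
    "bedaquiline efficacy tb": "Trial A reports bedaquiline efficacy in drug-resistant TB.",
    "bedaquiline safety": "Registry B reports bedaquiline safety data in pregnancy.",
    "linezolid dosing": "Guideline C covers linezolid dosing adjustments.",
}

REQUIRED_FACETS = ["efficacy", "safety"]  # facets the agent wants covered


def search(query):
    """One retrieval step: return passages whose key shares a word with the query."""
    words = set(query.lower().split())
    return [text for key, text in CORPUS.items() if words & set(key.split())]


def assess_gap(evidence):
    """Decide what is still missing; None means the evidence is sufficient."""
    covered = " ".join(evidence).lower()
    for facet in REQUIRED_FACETS:
        if facet not in covered:
            return facet
    return None


def answer(question, max_steps=3):
    """Retrieve, check for gaps, reformulate, and retrieve again until done."""
    query = question
    evidence = []
    for _ in range(max_steps):
        for passage in search(query):
            if passage not in evidence:
                evidence.append(passage)
        gap = assess_gap(evidence)
        if gap is None:
            break        # evidence now covers every required facet
        query = gap      # re-query for the missing facet
    return evidence
```

Calling `answer("best second-line therapy for drug-resistant TB in pregnancy")` first retrieves the efficacy trial (matched on "tb"), notices safety is still uncovered, re-queries for it, and returns both passages; a single-shot vector lookup would have stopped after the first pass.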
Want me to tighten it further, adjust the word count, or punch up the technical depth section?

What is your company going to make? Please describe your product and what it does or will do? This was the question.

Got it — you want a clean, direct answer to just that question. No headers, no bullet breakdowns after. Here it is:

We're building Orris, an AI medical search copilot for clinicians.
Staying current in a single specialty requires 20 hours of reading every day. When a doctor needs a trustworthy answer mid-shift — best second-line therapy for drug-resistant TB in pregnancy, dosing in renal failure, conflicting trial results — they don't have 30 minutes to trawl journals. Orris gets them there in under 60 seconds.
A clinician types a question. Orris deploys a reasoning agent that searches across PubMed, guideline PDFs, clinical trial registries, and specialty textbooks, then returns a one-page answer with a ranked evidence summary, inline citations, and auto-generated visuals like flowcharts, dosage tables, and annotated images.
What makes this technically different is how Orris retrieves and reasons. Rather than classic RAG — a one-shot vector lookup — Orris uses a hybrid agentic retrieval loop. The agent reads sources the way a trained clinician would: navigating a textbook with contextual offsets, scrolling forward or backward as the question evolves, and re-querying when early results change what needs to be asked next. It adapts its retrieval strategy in real time. This matters because the right answer to a complex clinical question often lives across multiple sources and requires judgment about what's missing from an initial result — something a static pipeline cannot do.
Our edge is deep clinical insight from founders who work extensively with practicing doctors, letting us iterate fast on what actually changes behavior at the bedside.

Tight, answers the question directly, leads with pain, ends with moat. Want any line changed?

We have to describe what we are building and what it does. That's it.


Orris is an AI medical search copilot for clinicians. A doctor types a clinical question — a drug interaction, a second-line therapy, a dosing edge case — and Orris searches across PubMed, clinical guidelines, trial registries, and specialty textbooks, then returns a one-page answer with a ranked evidence summary, inline citations, and auto-generated visuals like dosage tables and flowcharts.
Under the hood, Orris uses a hybrid agentic retrieval loop. Instead of a one-shot vector lookup, the agent reads sources the way a clinician would — navigating across documents, adjusting its search as context changes, and re-querying when early results reveal a gap. The result is a confident, cited answer in under 60 seconds instead of 30 minutes of manual searching.

Want it shorter, or is this length fine?