Ann: Ochat — a toolkit for terminal-based LLM workflows

Hi all — I’d like to share a project I’ve been building: Ochat.

Repo: https://github.com/dakotamurphyucf/ochat
Demo: https://youtu.be/eGgmUdZfnxM

What it is

Ochat is a toolkit for building LLM workflows as plain files. The core format is ChatMarkdown (ChatMD): Markdown plus a small, closed XML vocabulary.

A single ChatMD file is both:

  • the prompt/program (model config, tool allowlist, instructions, context), and

  • the auditable transcript (assistant replies + tool calls + tool outputs)

Because everything is plain text, workflows are reproducible and diffable in git.
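To make that concrete, here is a minimal ChatMD file. This is a sketch assembled only from the elements used in the orchestrator example later in this post (<config>, <tool>, <developer>, <user>); treat the exact attribute values as illustrative:

```xml
<config model="gpt-5.2" reasoning_effort="medium" />

<!-- allowlist a single built-in tool -->
<tool name="read_file"/>

<developer>
Answer questions about this repository. Cite file paths.
</developer>

<user>
Summarize what dune-project declares.
</user>
```

Opening this file in chat_tui runs it, and because the file is also the transcript, the assistant's reply and any read_file traces end up in the same diffable text.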

Prompt packs: building larger workflows from small agents

Ochat supports agent-as-tool: you can mount one ChatMD prompt as a callable tool inside another. This lets you build prompt packs (planner/coder/reviewer/docs/test triage/etc.) and orchestrate them from a top-level prompt, without writing a custom “agent app”.

chat_tui: a Notty-based terminal UI (runtime uses Eio)

Ochat ships an interactive terminal UI called chat_tui.

  • The UI is built with Notty

  • The underlying runtime uses Eio (structured concurrency + I/O)

A ChatMD file can be treated as a “terminal agent”: you author an agent prompt as a .md, open it in chat_tui, and run/iterate on it while the transcript (including tool traces) is persisted.

Some chat_tui features:

  • streaming output (assistant text + tool calls + tool outputs)

  • persistent sessions you can resume/branch/export

  • manual context compaction (:compact) for long histories

  • syntax highlighting (including OCaml)

  • AI completions in the text input (to speed up drafting)

  • Vim-ish editing modes and message selection/yank/edit/resubmit workflows

Useful built-in tools (especially for code workflows)

Ochat includes built-in tools that cover most “agent + repo” loops:

  • read_file / read_dir for safe repo navigation

  • apply_patch for atomic, repo-safe edits

  • webpage_to_markdown for high-signal ingestion (incl. GitHub blob fast-path)

  • indexing/retrieval tools for grounding over docs and code

  • optional vision inputs via import_image

MCP tool import (optional)

Ochat can also mount external tools via MCP (stdio or HTTP). For example, importing Brave Search over stdio:

<tool mcp_server="stdio:npx -y brave-search-mcp" name="brave_web_search" />
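For the HTTP transport mentioned above, I would guess the same attribute takes a URL instead of a stdio command. The exact scheme syntax here is an assumption, not confirmed against the repo docs:

```xml
<!-- hypothetical HTTP mount; verify the mcp_server URL syntax against the Ochat docs -->
<tool mcp_server="https://mcp.example.com/mcp" name="some_remote_tool" />
```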

Quick example: a “custom terminal orchestrator agent”

This is a complete agent definition (and becomes the runnable “app” when opened in chat_tui):

<config model="gpt-5.2" reasoning_effort="medium" />

<!-- core built-ins -->
<tool name="read_dir"/>
<tool name="read_file"/>
<tool name="apply_patch"/>
<tool name="webpage_to_markdown"/>

<!-- optional: import an external tool via MCP -->
<tool mcp_server="stdio:npx -y brave-search-mcp" name="brave_web_search" />

<!-- prompt-pack tools (agents as tools) -->
<tool name="plan"   agent="prompts/pack/plan.md" local/>
<tool name="code"   agent="prompts/pack/code.md" local/>
<tool name="review" agent="prompts/pack/review.md" local/>

<developer>
You are the orchestrator. Call plan first.
Keep edits small. Before apply_patch: explain the diff and wait for confirmation.
</developer>

<user>
Add a Quickstart section to README.md.
</user>

Run it from the repo with:

dune exec chat_tui -- -file prompts/refactor.md

Status / caveats

  • Provider support today is OpenAI-only.

  • The project is moving quickly.

Feedback / contributors

I’d appreciate any feedback or contributions.

If you try it and anything’s confusing or rough, please open an issue.


The video is private.

Project looks cool. But I wonder how big the transcripts get on large runs. Can you end up with GBs of text?

Sorry, the video is public now. A single transcript should never reach the GB range, because it can never be larger than the model’s maximum context size. Also, the TUI stores saved sessions in a more efficient format, with the option of exporting to a raw ChatMD file.
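A quick back-of-envelope check of that claim (the 400k-token window is an illustrative figure, not Ochat’s actual limit, and ~4 bytes of UTF-8 per token is a rough rule of thumb for English text):

```python
# Rough upper bound on a single transcript's size.
# Assumptions: 400k-token context window (illustrative), ~4 bytes/token.
max_context_tokens = 400_000
bytes_per_token = 4

max_bytes = max_context_tokens * bytes_per_token
print(f"{max_bytes / 1_000_000:.1f} MB")  # ~1.6 MB, nowhere near the GB range
```

So even a maxed-out context is on the order of megabytes of text, not gigabytes.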

Here is a link to a conversation with a very large context size: