Chat & Assistant
Talk to your AI assistant with full project context, manage conversations, and work in inline or detached mode.
Chat Interface
The assistant chat panel is your direct line to the AI. It supports two display modes so you can work however fits your layout.
Inline Mode
The default mode. The chat panel sits inside the main application layout alongside other panels. It shares space with the Kanban board, ticket inspector, and terminal views. Resize the panel by dragging its edge.
Detached Mode
Undock the chat into a floating window that you can position and resize independently. The detached window supports all eight resize handles (edges and corners), has a minimum size of 320×240 pixels, and remembers its position between sessions. You can collapse it to a compact title bar (76px tall) when you need screen space.
| Feature | Inline | Detached |
|---|---|---|
| Resize | Drag panel edge | 8 directional handles |
| Reposition | Fixed in layout | Free drag anywhere |
| Collapse | Not available | Collapse to title bar |
| Persists position | Via layout state | Bounds saved per session |
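The detached-window constraints above can be sketched as a small bounds-clamping helper. This is a hypothetical illustration, not the real implementation; only the 320×240 minimum and the 76px collapsed title bar come from the documented behavior.

```typescript
// Sketch: clamp detached-window bounds to the documented 320×240 minimum.
// A collapsed window keeps only the 76px title bar.
interface Bounds { x: number; y: number; width: number; height: number }

const MIN_WIDTH = 320;
const MIN_HEIGHT = 240;
const COLLAPSED_HEIGHT = 76;

function clampBounds(requested: Bounds, collapsed: boolean): Bounds {
  if (collapsed) {
    // Collapsed mode pins the height to the title bar; width still respects the minimum.
    return { ...requested, width: Math.max(requested.width, MIN_WIDTH), height: COLLAPSED_HEIGHT };
  }
  return {
    ...requested,
    width: Math.max(requested.width, MIN_WIDTH),
    height: Math.max(requested.height, MIN_HEIGHT),
  };
}
```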
Conversation Management
Conversations are organized in a tree sidebar (the Chat Explorer) that supports folders and drag-and-drop reordering.
Create a Conversation
Click the + button in the Chat Explorer or in the chat panel header.
A new conversation opens with the title "New Chat" and inherits the current project and engine settings.
Start typing your message. The title auto-updates after the first exchange.
Rename
Double-click a conversation title in the Chat Explorer, or right-click and choose Rename. Titles longer than 30 characters are automatically truncated in the sidebar.
Delete
Right-click a conversation and choose Delete. This permanently removes the conversation, its messages, and any associated engine session. Closing a conversation (click the × on its tab) only hides it from the active view — it remains accessible in the Chat Explorer.
Switch Between Conversations
Click any conversation in the Chat Explorer to activate it. You can also switch by clicking conversation tabs at the top of the chat panel. Each conversation maintains its own engine, model selection, and message history independently.
Folders
Group conversations into folders for organization. Create a folder from the Chat Explorer toolbar, then drag conversations into it. Folders support nesting, renaming, and deletion (deleting a folder moves its conversations to the root level).
Persistence & History
Conversations survive app restarts. AgentsInFlow uses a two-layer persistence strategy to keep your chat history safe.
| Layer | What It Stores | When |
|---|---|---|
| Local snapshot | UI preferences, open tabs, detached window position, selected engine and model | On every state change |
| Database | Conversation metadata, full message history, engine session IDs, folder structure | On conversation create/update/delete |
When you open a conversation that has been idle, AgentsInFlow fetches its message history from the database and marks it as hydrated before rendering. This on-demand loading keeps memory use low even with hundreds of conversations.
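The on-demand loading described above can be sketched as a lazy hydration step. The function and field names here are assumptions for illustration, not the real schema; the point is that the database is queried only the first time an idle conversation is opened.

```typescript
// Sketch of on-demand hydration: message history is fetched from the
// database only when a conversation is first opened, then cached in memory.
interface Conversation {
  id: string;
  hydrated: boolean;
  messages: string[];
}

// Stand-in for the real database query (assumed API).
async function fetchMessages(id: string): Promise<string[]> {
  return [`history for ${id}`];
}

async function openConversation(conv: Conversation): Promise<Conversation> {
  if (conv.hydrated) return conv; // already in memory, nothing to fetch
  const messages = await fetchMessages(conv.id);
  return { ...conv, messages, hydrated: true };
}
```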
Each engine maintains its own session ID per conversation. Switching engines within a conversation starts a fresh session while preserving the visible message history.
Memory & Context Injection
The assistant is aware of your current project and ticket. Before every message, AgentsInFlow injects relevant context into the system prompt so the AI can give grounded, project-specific answers.
What Gets Injected
| Source | Description |
|---|---|
| AIF runtime contract | Core instructions that tell the AI how to work with AgentsInFlow tooling and CLIs |
| Project & ticket scope | Currently selected project ID and ticket ID, used to scope memory recall and give the AI awareness of what you're working on |
| Memory recall | Relevant past decisions, procedures, and notes pulled from the assistant memory store using full-text and semantic search |
How Context Flows
You select a project and optionally a ticket in the sidebar.
When you send a message, AgentsInFlow runs a memory recall query scoped to the active project.
The recall results, runtime contract, and scope context are merged into the system prompt before the request reaches the engine CLI.
The AI responds with full awareness of your project, ticket, and past decisions.
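The merge step above can be sketched as a prompt builder that concatenates the runtime contract, the project/ticket scope, and recalled memory. The structure and field names are illustrative assumptions; the real prompt format may differ.

```typescript
// Sketch: merge the runtime contract, scope context, and memory-recall
// results into a single system prompt before the request reaches the engine.
interface Scope { projectId: string; ticketId?: string }

function buildSystemPrompt(contract: string, scope: Scope, recalled: string[]): string {
  const parts = [
    contract,
    `Active project: ${scope.projectId}` + (scope.ticketId ? `, ticket: ${scope.ticketId}` : ""),
  ];
  if (recalled.length > 0) {
    // Recalled memory entries are listed so the AI can cite past decisions.
    parts.push("Relevant memory:\n" + recalled.map((m) => `- ${m}`).join("\n"));
  }
  return parts.join("\n\n");
}
```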
Switching Context
Change the active project or ticket at any time by selecting a different one in the sidebar. The next message you send automatically uses the new context. Existing messages in the conversation remain unchanged — only new requests pick up the updated scope.
Tip: Start a new conversation when switching to a different project. This keeps each conversation's context clean and avoids confusing the AI with mixed project references.
Engine-Specific Injection
Each engine receives context through its native mechanism. This means the assistant prompt adapts to the CLI you selected.
| Engine | Injection Method |
|---|---|
| Claude Code | Appended to the system prompt |
| Codex | Passed via --developer-instructions flag |
| Cursor | Prepended to the user prompt |
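The per-engine delivery in the table can be sketched as a dispatch over the three injection methods. The request shape here is a hypothetical simplification; only the three mechanisms (system prompt, `--developer-instructions` flag, prompt prepend) come from the documentation.

```typescript
// Sketch: deliver the same context string through each engine's
// native mechanism (assumed request shape, not the real one).
type Engine = "claude-code" | "codex" | "cursor";

interface EngineRequest { args: string[]; systemPrompt?: string; userPrompt: string }

function injectContext(engine: Engine, context: string, userPrompt: string): EngineRequest {
  switch (engine) {
    case "claude-code":
      // Appended to the system prompt.
      return { args: [], systemPrompt: context, userPrompt };
    case "codex":
      // Passed via the --developer-instructions flag.
      return { args: ["--developer-instructions", context], userPrompt };
    case "cursor":
      // Prepended to the user prompt.
      return { args: [], userPrompt: `${context}\n\n${userPrompt}` };
  }
}
```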
For engine and model configuration details, see Engine & Model Config.
Fast Prompt Actions
Fast prompt actions are pre-configured prompts stored in your project settings that act like slash commands. They give you one-click access to common assistant requests.
- Each action has a label and a prompt template
- Actions can be flagged to auto-submit without requiring you to press Enter
- Actions can optionally close the ticket or CLI session after completing
- Actions are defined per project in project settings
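A fast prompt action might look like the shape below. The field names are assumptions for illustration, not the real project-settings schema; they map one-to-one onto the properties listed above.

```typescript
// Hypothetical shape for a fast prompt action (assumed field names).
interface FastPromptAction {
  label: string;                               // shown on the action button
  prompt: string;                              // prompt template sent to the assistant
  autoSubmit: boolean;                         // send immediately, no Enter required
  closeOnComplete?: "ticket" | "cli-session";  // optional post-run cleanup
}

const reviewAction: FastPromptAction = {
  label: "Review diff",
  prompt: "Review the current diff and flag any issues.",
  autoSubmit: true,
};
```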
For workflow automation patterns using fast prompt actions, see Assistant Workflows.
Streaming & Interruption
Responses stream in real time as the engine CLI generates output. You see tokens appear as they arrive, not after the full response completes.
How Streaming Works
The engine CLI runs as a subprocess. Its stdout is parsed as JSON lines, and each line is emitted as a streaming event. The chat panel accumulates text deltas into the active message in real time.
| Event | What It Does |
|---|---|
| assistant:delta | Appends a text chunk to the current assistant message |
| assistant:reasoning | Shows thinking/reasoning tokens (Claude extended thinking) |
| assistant:system | Displays tool calls, progress indicators, and activity metadata |
| run:exit | Marks the response as complete with exit code |
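The JSON-lines parsing loop can be sketched as below. The event shape is a simplified assumption; only the `assistant:delta` event name and the accumulate-into-message behavior come from the documentation.

```typescript
// Sketch: parse each stdout line as JSON and fold assistant:delta
// chunks into the visible message (assumed event shape).
interface StreamEvent { type: string; text?: string; code?: number }

function parseStreamLine(line: string): StreamEvent | null {
  const trimmed = line.trim();
  if (!trimmed) return null;
  try {
    return JSON.parse(trimmed) as StreamEvent;
  } catch {
    return null; // ignore non-JSON output from the CLI
  }
}

// Accumulate text deltas into the active assistant message.
function applyEvent(message: string, event: StreamEvent): string {
  return event.type === "assistant:delta" ? message + (event.text ?? "") : message;
}
```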
Auto-Scroll
The chat panel auto-scrolls to follow new content as it streams in. If you scroll up to review earlier messages, auto-scroll pauses. It resumes when you scroll back to within 50 pixels of the bottom.
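The 50-pixel rule above amounts to a simple distance check. This is an illustrative sketch of the logic, not the actual implementation.

```typescript
// Sketch: follow new content only when the viewport sits within
// 50px of the bottom of the scrollable chat content.
const AUTO_SCROLL_THRESHOLD = 50;

function shouldAutoScroll(scrollTop: number, viewportHeight: number, contentHeight: number): boolean {
  const distanceFromBottom = contentHeight - (scrollTop + viewportHeight);
  return distanceFromBottom <= AUTO_SCROLL_THRESHOLD;
}
```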
Activity Indicators
While the AI is working, inline activity badges show what the engine is doing — reading files, running tools, or orchestrating sub-tasks. Each activity displays its status: running, completed, or failed.
Stopping a Response
Click the Stop button that appears during streaming to interrupt the assistant. This sends a termination signal to the engine CLI subprocess. The partial response up to that point is kept in the conversation. You can then send a follow-up message to continue or redirect the conversation.
Stopping a response does not discard the conversation session. The next message you send resumes in the same engine session with the partial context intact.
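Interruption can be sketched as killing the subprocess while keeping both the partial text and the session ID. The types here are hypothetical stand-ins; the documentation only specifies that a termination signal is sent and that the partial response and session survive.

```typescript
// Sketch: stop the engine CLI subprocess but keep the partial
// message and the session ID for the next request (assumed types).
interface Killable { kill(signal: string): void }

interface ActiveRun {
  proc: Killable;
  partialMessage: string;
  sessionId: string;
}

function stopResponse(run: ActiveRun): { kept: string; sessionId: string } {
  run.proc.kill("SIGTERM"); // termination signal to the engine CLI
  // The partial text stays in the conversation; the session survives
  // so the next message resumes with the partial context intact.
  return { kept: run.partialMessage, sessionId: run.sessionId };
}
```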
Multi-Engine Support
Each conversation is bound to one engine at a time. You can switch engines from the chat panel header dropdown.
- Claude Code — Anthropic's coding CLI with extended thinking support
- Codex — OpenAI's coding agent with thread-based sessions
- Cursor — Cursor's agent with prompt-based interaction
Each engine maintains its own session ID per conversation. Switching engines within the same conversation starts a fresh engine session while keeping the visible message history. For setup and model selection details, see Getting Started → Prerequisites.