# Settings & Configuration
Manage app-wide defaults and per-project overrides from a single unified dialog.
## Opening the Settings Dialog

All configuration lives in one dialog, reachable from three places:

1. **Sidebar header** — click the gear icon in the top-left sidebar.
2. **Project context menu** — right-click a project in the sidebar and choose **Settings**.
3. **Project settings view** — open any project's settings tab directly from the Kanban board or the inspector.
The dialog has a tree sidebar on the left. The top-level App node holds global defaults; each project appears below with its own overridable sections.
## Settings Scopes: App vs Project
Settings are organized into two scopes. App settings provide sensible defaults; project settings override them when you need per-project customization.
| Scope | Applies To | Sections |
|---|---|---|
| App (Global) | Every project unless overridden | General, Engines, Memory, Notifications, Prompts, Statuses, Keyboard, Integrations, Database, About |
| Project | A single project only | Identity, Engines, Memory, Notifications, Prompts, Statuses, Integrations |
App-only sections like Database, Keyboard, and About have no project-level counterpart because they are inherently global.
## Engine Configuration
Engine settings control which AI CLI runs by default and how it behaves. The app-level Engines section sets the global default engine, reasoning effort, and permission mode. Each project can override these under its own Engines section and additionally configure model selection and MCP servers.
### Per-Project Engine Defaults
When you open a project's engine settings, you can configure separate defaults for each installed engine (Claude Code, Codex, Cursor). These values are injected automatically whenever you launch an execution. See Engine & Model Config for model selection details and the 3-level settings cascade.
### Permission Modes
Permission modes control how much autonomy the AI engine has during executions. Set a global default in App settings and override per-project when needed.
| Mode | Behavior |
|---|---|
| `default` | Standard permission checks — the engine prompts before file writes and shell commands. |
| `acceptEdits` | Automatically accept file edits; still prompt for shell commands. |
| `plan` | Read-only analysis mode — the engine plans but does not write files or run commands. |
| `bypassPermissions` | Skip all permission prompts. Use for trusted, well-scoped automation tasks. |
| `dontAsk` | Suppress interactive prompts entirely. Actions that require approval are silently skipped. |
| `delegate` | Delegate permission decisions to an external system or orchestration layer. |

`bypassPermissions` and `dontAsk` give the engine full autonomy. Only use them when you are comfortable with the engine making file and shell changes without review.
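Assuming a hypothetical TypeScript model (the names below are illustrative, not part of AgentsInFlow's API), the split between supervised and fully autonomous modes can be sketched as:

```typescript
// Illustrative sketch only — not AgentsInFlow's actual types.
type PermissionMode =
  | "default"
  | "acceptEdits"
  | "plan"
  | "bypassPermissions"
  | "dontAsk"
  | "delegate";

// The two fully autonomous modes never stop for review;
// every other mode keeps at least some prompts in place.
function runsUnattended(mode: PermissionMode): boolean {
  return mode === "bypassPermissions" || mode === "dontAsk";
}
```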
### Reasoning Effort Levels
Reasoning effort controls how deeply the engine thinks before responding. Higher effort produces more thorough results but uses more tokens and takes longer. The default is medium.
| Level | Behavior |
|---|---|
| Low | Quick answers, minimal chain-of-thought |
| Medium | Balanced — default for most tasks |
| High | Deeper analysis, more tool use |
| Extra High | Maximum thoroughness for complex tasks |
You can set the global default in App → Engines and override it per-project or at execution time from the run dialog.
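That three-level cascade (app default, project override, run-time choice) can be sketched as a tiny resolver. The function and type names here are illustrative, not AgentsInFlow's actual code:

```typescript
type ReasoningEffort = "low" | "medium" | "high" | "extraHigh";

// Hypothetical resolver: the run-time choice wins, then the
// project-level override, then the app-level default.
function resolveEffort(
  appDefault: ReasoningEffort,
  projectOverride?: ReasoningEffort,
  runOverride?: ReasoningEffort,
): ReasoningEffort {
  return runOverride ?? projectOverride ?? appDefault;
}
```

The same resolution order applies to the other cascading engine settings, such as permission mode.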
## MCP Server Management
MCP (Model Context Protocol) servers extend engine capabilities with tools such as browser automation, database access, and external APIs. Server configuration is managed per-project in the engine settings.
### Server Sources
| Source | Description |
|---|---|
| Packaged | Built-in servers bundled with AgentsInFlow (e.g., Chrome browser MCP). |
| Inherited | Servers defined in the CLI's own global config that AgentsInFlow discovers automatically. |
| Custom | User-defined servers added per-project. Specify a command and arguments for `stdio` transport, or a URL for `streamable_http`. |
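A custom entry therefore needs either a command plus arguments or a URL, depending on its transport. A minimal sketch of that shape, using illustrative field names rather than AgentsInFlow's real schema:

```typescript
// Illustrative shape only — not AgentsInFlow's actual config schema.
type McpServer =
  | { name: string; transport: "stdio"; command: string; args: string[] }
  | { name: string; transport: "streamable_http"; url: string };

// Hypothetical examples of each transport.
const servers: McpServer[] = [
  { name: "files", transport: "stdio", command: "npx", args: ["some-mcp-server"] },
  { name: "search", transport: "streamable_http", url: "https://example.com/mcp" },
];
```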
Selected servers are injected into the engine at execution time. For details on Chrome session sharing and browser MCP, see MCP & Browser Integration.
## Environment Variables & API Keys
API keys for third-party services are managed in the Integrations section at both app and project scope.
- App → Integrations — set a global OpenAI API key used for audio transcription and the default embedding provider.
- Project → Integrations — override the key per-project for semantic search embeddings.
Keys are stored securely and persisted immediately when entered — they bypass the draft/save workflow so you don't lose them if you cancel other changes. A status indicator shows whether the key is configured.
Engine CLI credentials (Claude Code, Codex, Cursor logins) are managed by the CLIs themselves. AgentsInFlow detects their auth state but does not store those tokens. See Getting Started for login instructions.
## Context Window Configuration
Context window size determines how much conversation history and project context the engine can process in a single turn. AgentsInFlow passes context window flags to the underlying CLI when supported.
This setting is part of the per-engine configuration. When a model supports multiple context tiers (for example, Claude Code's standard vs. extended context), you can select the tier in the project engine settings. The chosen tier applies to all executions in that project unless overridden at run time.
Extended context windows consume more tokens per turn. Use them for large codebases or complex multi-file tasks where the extra capacity is genuinely needed.
## Draft & Save Workflow
The settings dialog uses a draft/commit pattern. When you modify a value, it is held as a draft until you explicitly save. This lets you preview and validate changes before they take effect.
1. **Edit** — change any setting. The dialog marks itself as dirty with an unsaved-changes indicator.
2. **Save** — click **Save** to commit all draft changes. Only modified fields are persisted.
3. **Cancel** — discard drafts and revert to the last saved state.
API keys are the one exception — they persist immediately on entry so you never lose a key if you cancel other changes.
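The edit/save/cancel cycle follows a standard draft/commit pattern, which can be sketched like this (class and method names are illustrative, not AgentsInFlow's implementation):

```typescript
// Minimal sketch of a draft/commit settings store.
class SettingsDraft {
  private saved: Record<string, string> = {};
  private draft: Record<string, string> = {};

  // Edits accumulate in the draft until save() commits them.
  edit(key: string, value: string): void {
    this.draft[key] = value;
  }

  // Any pending draft entry marks the dialog as dirty.
  get dirty(): boolean {
    return Object.keys(this.draft).length > 0;
  }

  // Commit: only the modified fields are written through.
  save(): void {
    Object.assign(this.saved, this.draft);
    this.draft = {};
  }

  // Discard drafts, reverting to the last saved state.
  cancel(): void {
    this.draft = {};
  }

  get(key: string): string | undefined {
    return this.saved[key];
  }
}
```

In this sketch, API keys would bypass the class entirely and be written through on entry, matching the exception described above.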