A full-featured conversational AI assistant with tool-calling capabilities.
## Usage

```bash
agent-cli chat [OPTIONS]
```
## Description

A persistent, conversational agent that you can have a back-and-forth conversation with:

1. Run the command; it starts listening for your voice.
2. Speak your command or question.
3. The agent transcribes your speech and sends it to the LLM, which can use tools.
4. The response is spoken back to you (if TTS is enabled).
5. The agent immediately starts listening for your next command.

Conversation history is saved between sessions.
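The listen-transcribe-respond cycle above can be sketched as a minimal loop. This is an illustration only: `transcribe`, `ask_llm`, and `speak` are hypothetical placeholders, not agent-cli's actual internals.

```python
# Illustrative sketch of the chat loop; all callables here are placeholders,
# not real agent-cli APIs.
def chat_loop(transcribe, ask_llm, speak, history, tts_enabled=True):
    while True:
        text = transcribe()          # 1. listen and transcribe the user's speech
        if text is None:             # interrupted (e.g., Ctrl+C) -> stop
            break
        history.append({"role": "user", "content": text})
        reply = ask_llm(history)     # 2. LLM responds, possibly calling tools
        history.append({"role": "assistant", "content": reply})
        if tts_enabled:
            speak(reply)             # 3. speak the response back (TTS)
        # 4. loop back and listen for the next command
```
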
## Interaction Controls

- **To interrupt:** press `Ctrl+C` once to stop listening or speaking and return to a listening state.
- **To exit:** press `Ctrl+C` twice in a row to terminate the application.
## Examples

```bash
# Start with TTS
agent-cli chat --input-device-index 1 --tts

# List available devices
agent-cli chat --list-devices

# Custom history settings
agent-cli chat --last-n-messages 100 --history-dir ~/.my-chat-history
```
## Options

### Provider Selection

| Option | Default | Description |
| --- | --- | --- |
| `--asr-provider` | `wyoming` | The ASR provider to use (`wyoming`, `openai`, `gemini`). |
| `--llm-provider` | `ollama` | The LLM provider to use (`ollama`, `openai`, `gemini`). |
| `--tts-provider` | `wyoming` | The TTS provider to use (`wyoming`, `openai`, `kokoro`, `gemini`). |
### Audio Input

| Option | Default | Description |
| --- | --- | --- |
| `--input-device-index` | - | Audio input device index (see `--list-devices`). Uses the system default if omitted. |
| `--input-device-name` | - | Select the input device by name substring (e.g., `MacBook` or `USB`). |
| `--list-devices` | `false` | List available audio devices with their indices and exit. |
### Audio Input: Wyoming

| Option | Default | Description |
| --- | --- | --- |
| `--asr-wyoming-ip` | `localhost` | Wyoming ASR server IP address. |
| `--asr-wyoming-port` | `10300` | Wyoming ASR server port. |
### Audio Input: OpenAI-compatible

| Option | Default | Description |
| --- | --- | --- |
| `--asr-openai-model` | `whisper-1` | The OpenAI model to use for ASR (transcription). |
| `--asr-openai-base-url` | - | Custom base URL for an OpenAI-compatible ASR API (e.g., a custom Whisper server at `http://localhost:9898`). |
| `--asr-openai-prompt` | - | Custom prompt to guide transcription (optional). |
### Audio Input: Gemini

| Option | Default | Description |
| --- | --- | --- |
| `--asr-gemini-model` | `gemini-3-flash-preview` | The Gemini model to use for ASR (transcription). |
### LLM: Ollama

| Option | Default | Description |
| --- | --- | --- |
| `--llm-ollama-model` | `gemma3:4b` | The Ollama model to use. |
| `--llm-ollama-host` | `http://localhost:11434` | The Ollama server host. |
### LLM: OpenAI-compatible

| Option | Default | Description |
| --- | --- | --- |
| `--llm-openai-model` | `gpt-5-mini` | The OpenAI model to use for LLM tasks. |
| `--openai-api-key` | - | Your OpenAI API key. Can also be set with the `OPENAI_API_KEY` environment variable. |
| `--openai-base-url` | - | Custom base URL for an OpenAI-compatible API (e.g., for llama-server: `http://localhost:8080/v1`). |
### LLM: Gemini

| Option | Default | Description |
| --- | --- | --- |
| `--llm-gemini-model` | `gemini-3-flash-preview` | The Gemini model to use for LLM tasks. |
| `--gemini-api-key` | - | Your Gemini API key. Can also be set with the `GEMINI_API_KEY` environment variable. |
### Audio Output

| Option | Default | Description |
| --- | --- | --- |
| `--tts/--no-tts` | `false` | Enable text-to-speech for responses. |
| `--output-device-index` | - | Audio output device index (see `--list-devices` for available devices). |
| `--output-device-name` | - | Select the output device by name substring (e.g., `speakers` or `headphones`). |
### Audio Output: Wyoming

| Option | Default | Description |
| --- | --- | --- |
| `--tts-wyoming-voice` | - | Voice name to use for Wyoming TTS (e.g., `en_US-lessac-medium`). |
| `--tts-wyoming-language` | - | Language for Wyoming TTS (e.g., `en_US`). |
| `--tts-wyoming-speaker` | - | Speaker name for the Wyoming TTS voice. |
### Audio Output: OpenAI-compatible

| Option | Default | Description |
| --- | --- | --- |
| `--tts-openai-model` | `tts-1` | The OpenAI model to use for TTS. |
| `--tts-openai-voice` | `alloy` | Voice for OpenAI TTS (`alloy`, `echo`, `fable`, `onyx`, `nova`, `shimmer`). |
| `--tts-openai-base-url` | - | Custom base URL for an OpenAI-compatible TTS API (e.g., `http://localhost:8000/v1` for a proxy). |
### Audio Output: Kokoro

| Option | Default | Description |
| --- | --- | --- |
| `--tts-kokoro-model` | `kokoro` | The Kokoro model to use for TTS. |
| `--tts-kokoro-voice` | `af_sky` | The voice to use for Kokoro TTS. |
| `--tts-kokoro-host` | `http://localhost:8880/v1` | The base URL for the Kokoro API. |
### Audio Output: Gemini

| Option | Default | Description |
| --- | --- | --- |
| `--tts-gemini-model` | `gemini-2.5-flash-preview-tts` | The Gemini model to use for TTS. |
| `--tts-gemini-voice` | `Kore` | The voice to use for Gemini TTS (e.g., `Kore`, `Puck`, `Charon`, `Fenrir`). |
### Process Management

| Option | Default | Description |
| --- | --- | --- |
| `--stop` | `false` | Stop any running instance of this command. |
| `--status` | `false` | Check if an instance is currently running. |
| `--toggle` | `false` | Start if not running, stop if running. Ideal for hotkey binding. |
### History Options

| Option | Default | Description |
| --- | --- | --- |
| `--history-dir` | `~/.config/agent-cli/history` | Directory for conversation history and long-term memory. Both `conversation.json` and `long_term_memory.json` are stored here. |
| `--last-n-messages` | `50` | Number of past messages to include as context for the LLM. Set to `0` to start fresh each session (memory tools still persist). |
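The `--last-n-messages` windowing amounts to slicing the tail of the stored history before sending it to the LLM. A minimal sketch (illustrative only; agent-cli's actual implementation may differ):

```python
# Illustrative only: build the LLM context from the last N stored messages.
def build_context(messages: list[dict], last_n: int) -> list[dict]:
    if last_n <= 0:
        # 0 means "start fresh each session": no prior context is sent,
        # though memories stored via the memory tools still persist on disk.
        return []
    return messages[-last_n:]
```
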
### General Options

| Option | Default | Description |
| --- | --- | --- |
| `--save-file` | - | Save audio to a WAV file instead of playing it through the speakers. |
| `--log-level` | `warning` | Set the logging level. |
| `--log-file` | - | Path to a file to write logs to. |
| `--quiet`, `-q` | `false` | Suppress console output from rich. |
| `--config` | - | Path to a TOML configuration file. |
| `--print-args` | `false` | Print the command-line arguments, including values taken from the configuration file. |
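Options can also be supplied in the TOML file passed via `--config`. The exact schema depends on your installed version; the key names below are an assumption that mirrors the flag names, so verify them against your setup (for example, with `--print-args`):

```toml
# Hypothetical agent-cli config; key names are assumed to mirror the CLI
# flags and have not been verified against a specific release.
llm-provider = "openai"
llm-openai-model = "gpt-5-mini"
tts = true
tts-provider = "kokoro"
last-n-messages = 100
```
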
## Available Tools

The chat agent has access to tools that let it interact with your system:

> **Note:** The memory tools below use a simple, built-in JSON storage system.
> For the advanced, vector-backed memory system, see the `memory` command.

- `read_file`: Read file contents.
- `execute_code`: Run a single command (no shell features like pipes or redirects).
- `duckduckgo_search`: Search the web via DuckDuckGo.
- `add_memory`: Store information for future conversations.
- `search_memory`: Search stored memories.
- `update_memory`: Update existing memories.
- `list_all_memories`: List all stored memories.
- `list_memory_categories`: Show a summary of memory categories.
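As a rough illustration of what a simple JSON-backed store behind `add_memory` / `search_memory` / `list_all_memories` can look like, here is a minimal sketch. This is an assumption for clarity only: the actual on-disk format of `long_term_memory.json` may differ.

```python
# Illustrative sketch of a JSON-backed memory store; the real agent-cli
# storage format for long_term_memory.json may differ.
import json
from pathlib import Path


class JsonMemory:
    def __init__(self, path: Path):
        self.path = path
        # Load existing memories, or start empty if the file does not exist.
        self.memories = json.loads(path.read_text()) if path.exists() else []

    def add(self, category: str, content: str) -> None:
        """Store a memory and persist the whole list back to disk."""
        self.memories.append({"category": category, "content": content})
        self.path.write_text(json.dumps(self.memories, indent=2))

    def search(self, query: str) -> list[dict]:
        """Naive case-insensitive substring search over memory contents."""
        q = query.lower()
        return [m for m in self.memories if q in m["content"].lower()]

    def list_all(self) -> list[dict]:
        return list(self.memories)
```
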
## Example Conversation

**You:** "Read the pyproject.toml file and tell me the project version."

**AI:** (Uses the `read_file` tool.) "The project version is 0.5.0."

**You:** "What dependencies does it have?"

**AI:** "The project has the following dependencies: typer, pydantic, ..."

**You:** "Thanks!"

**AI:** "You're welcome! Let me know if you need anything else."
## Conversation History

History is stored in `~/.config/agent-cli/history/` and persists between sessions.