This document explains what happens internally from the moment you send a message until the model's output appears.
User
|
| (1) sendMessage
v
ChatPharoPresenter (Spec)
|
| model sendMessage: text
v
ChatPharoChat (conversation)
|
| (2) command parsing? (/help, /reset, ...)
| (3) add user message -> UI + history
| (4) build prompt:
| - history prefix
| - + memory context (optional)
| - + skills context (optional)
| (5) multivers chain (optional) / cache (optional)
v
ChatPharoAgent (backend)
|
| (6) HTTP request via ChatPharoTool (or backend-specific client)
| (7) parse response (content, thinking, tool_calls)
|
| (8) tool loop (bounded):
| while tool_calls and iteration < maximumIterations:
| execute tool -> append tool result -> call model again
v
ChatPharoChat
|
| (9) add assistant message -> UI + history
| (10) notify listeners / update status
v
User sees response
The presenter sends the current input text to the model and records a frontend log entry.
If the text starts with a supported command, ChatPharo executes it locally:

- /help
- /clear
- /reset
- /export
- /history

No model call occurs for these commands.
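A minimal sketch of this local dispatch, written in Python rather than the project's Pharo; the names (`handle_input`, `run_command`, `send_to_model`) are hypothetical stand-ins, not ChatPharo API:

```python
# Hypothetical sketch of local slash-command dispatch. Commands in
# LOCAL_COMMANDS are handled without any model call; everything else
# is forwarded to the model.
LOCAL_COMMANDS = {"/help", "/clear", "/reset", "/export", "/history"}

def handle_input(text, chat):
    # First whitespace-delimited token decides whether this is a command.
    command = text.split(maxsplit=1)[0] if text.strip() else ""
    if command in LOCAL_COMMANDS:
        return chat.run_command(command)   # local, no backend request
    return chat.send_to_model(text)        # normal chat turn
```

The key design point is that the check happens before any prompt building, so commands stay fast and free of backend traffic.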
ChatPharo keeps two synchronized representations:

- ChatPharoMessage for UI rendering
- ChatPharoHistoryMessage for LLM requests (role/content/tool_calls)

ChatPharo builds an effective prompt from multiple sources:

- the history prefix
- memory context (optional)
- skills context (optional)
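The prompt-assembly step (step 4 in the diagram) can be sketched as follows; this is a hypothetical Python illustration, and the section labels and function name are assumptions, not ChatPharo's actual code:

```python
# Hypothetical sketch of effective-prompt assembly: history prefix,
# plus optional memory and skills context, joined into one prompt.
def build_prompt(history, memory=None, skills=None):
    parts = list(history)                      # (4a) history prefix
    if memory:
        parts.append("Memory:\n" + memory)     # (4b) optional memory context
    if skills:
        parts.append("Skills:\n" + skills)     # (4c) optional skills context
    return "\n\n".join(parts)
```

Keeping the optional sections as separate appended blocks means disabling memory or skills changes nothing else about the prompt.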
The selected ChatPharoAgent sends the request to the backend, making the HTTP call through ChatPharoTool (or a backend-specific client). The response may include:

- content
- thinking
- tool_calls

If tool calls exist and tool calling is enabled/supported, the agent executes each tool, appends the tool result to the conversation, and calls the model again. This repeats until no tool calls remain or maximumIterations is reached.
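The bounded tool loop (step 8) can be sketched like this; it is a hypothetical Python illustration in which `call_model` and `execute_tool` stand in for the backend request and tool plumbing:

```python
# Hypothetical sketch of the bounded tool loop: keep executing tools
# and re-calling the model until the reply has no tool calls, or the
# iteration cap is hit.
def run_tool_loop(call_model, execute_tool, messages, maximum_iterations=5):
    reply = call_model(messages)
    iteration = 0
    while reply.get("tool_calls") and iteration < maximum_iterations:
        for call in reply["tool_calls"]:
            result = execute_tool(call)
            # Tool results are appended so the next model call sees them.
            messages.append({"role": "tool", "content": result})
        reply = call_model(messages)
        iteration += 1
    return reply  # final assistant message
```

The iteration cap is what keeps a misbehaving model (one that keeps requesting tools) from looping forever.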
The final assistant message is appended to UI and history, and listeners are notified.
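Steps 9 and 10 amount to an observer pattern; a hypothetical Python sketch (class and method names are illustrative, not ChatPharo's actual API):

```python
# Hypothetical sketch of appending the assistant message and notifying
# registered listeners (e.g. UI refresh, status update).
class Chat:
    def __init__(self):
        self.history = []
        self.listeners = []

    def add_assistant_message(self, text):
        self.history.append({"role": "assistant", "content": text})
        for listener in self.listeners:
            listener(text)  # each listener reacts to the new message
```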
When enabled, ChatPharo can summarize conversations into memory for future context injection.
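The memory mechanism described above can be sketched as a summarize-then-inject pair; this is a hypothetical Python illustration in which `summarize` stands in for a model-backed summarizer and all names are assumptions:

```python
# Hypothetical sketch: summarize a finished conversation into a memory
# list, then inject recent entries ahead of a future prompt's history.
def remember(conversation, memory, summarize):
    memory.append(summarize(conversation))

def with_memory(history, memory, limit=3):
    recent = memory[-limit:]
    prefix = ["Memory:\n" + "\n".join(recent)] if recent else []
    return prefix + list(history)
```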