Complete guide for building, running, and inspecting LLM pipelines using the visual dashboard.
There are three ways to open the visual dashboard in a Pharo Playground or script:
LLMDashboardPresenter open.
Opens an empty dashboard. Use the New Pipeline button to create a pipeline visually.
| pipeline |
pipeline := LLMPipeline new
addStep: (LLMPromptStep template: 'Translate to {language}: {text}');
addStep: (LLMModelStep model: LLMOpenAIModel gpt4o);
addStep: (LLMParserStep parser: LLMStringOutputParser new);
yourself.
pipeline name: 'Translation Pipeline'.
LLMDashboardPresenter openOnPipeline: pipeline.
| pipeline tracer |
pipeline := LLMPipeline new
addStep: (LLMPromptStep template: 'Summarize: {text}');
addStep: (LLMModelStep model: LLMOpenAIModel gpt4oMini);
yourself.
pipeline name: 'Summarizer'.
tracer := LLMPipelineTracer on: pipeline.
tracer openDashboard.
The dashboard window has these regions:
+------------------------------------------------------------------+
| [New Pipeline] [Run] [Re-Run] [Refresh] [Clear] [Export] | <- Toolbar
+------------------+-----------------------------------------------+
| | |
| Pipeline Runs | [ Pipeline Flow | Execution Steps | |
| (left panel) | Conversation | Metrics ] |
| | |
| - Status | (tabbed content area) |
| - Pipeline name | |
| - Steps count | |
| - Duration | |
| - Tokens | |
| | |
+------------------+-----------------------------------------------+
| Status bar messages |
+------------------------------------------------------------------+
Left panel: Lists all pipeline executions (runs), newest first. Click a row to inspect it.
Right panel: Four tabs showing details of the selected run: Pipeline Flow, Execution Steps, Conversation, and Metrics.
Click the New Pipeline button in the toolbar. This opens the Pipeline Builder dialog:
+--------------------------------------------------+
| LLM Studio - Pipeline Builder |
+--------------------------------------------------+
| Pipeline Name: [My Pipeline ] |
| |
| Provider: [OpenAI v] Model: [gpt-4o v] |
| |
| API Key: [sk-... or leave empty for env var ] |
| |
| Ollama Host: [localhost] Ollama Port: [11434] |
| |
| Prompt Template (use {variable} for inputs): |
| +---------------------------------------------+ |
| | Translate the following text to {language}: | |
| | {text} | |
| +---------------------------------------------+ |
| |
| Temperature: [0.7] Max Tokens: [1024] |
| Output Parser: [String v] |
| |
| Status: Configure your pipeline and click Create |
| |
| [Create Pipeline] [Create & Run] [Cancel] |
+--------------------------------------------------+
| Field | Description |
|---|---|
| Pipeline Name | Human-readable name for your pipeline |
| Provider | LLM provider: OpenAI, Anthropic, or Ollama |
| Model | Specific model ID (list updates per provider) |
| API Key | Your API key. Leave empty to use environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY) |
| Ollama Host/Port | Only for Ollama provider. Defaults to localhost:11434 |
| Prompt Template | Your prompt text. Use {variable} syntax for inputs. Example: Translate to {language}: {text} |
| Temperature | Controls randomness (0.0 = deterministic, 1.0 = creative). Default: 0.7 |
| Max Tokens | Maximum tokens in the response. Default: 1024 |
| Output Parser | How to parse the LLM output: String (trim whitespace), JSON (extract JSON), List (parse lists), None (raw) |
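The String and JSON parser options map directly to parser classes used in the programmatic examples in this guide (the List and None options are not shown programmatically here):

```smalltalk
"Parser classes behind the dialog's Output Parser choices."
LLMParserStep parser: LLMStringOutputParser new.  "String: trims whitespace"
LLMParserStep parser: LLMJsonOutputParser new.    "JSON: extracts JSON from the reply"
```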
OpenAI: gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4
Anthropic: claude-3-5-sonnet-20241022, claude-3-opus-20240229, claude-3-haiku-20240307
Ollama (local): llama3, mistral, codellama, phi, gemma
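Several of these models have class-side constructors that appear in this guide's examples (other models may follow the same naming pattern, but only the constructors below are shown here):

```smalltalk
"Model constructors used in this guide's programmatic examples."
LLMOpenAIModel gpt4o.             "OpenAI gpt-4o"
LLMOpenAIModel gpt4oMini.         "OpenAI gpt-4o-mini"
LLMAnthropicModel claude3Sonnet.  "Anthropic Claude 3 Sonnet"
LLMOllamaModel llama3.            "Ollama llama3 (local)"
```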
Click the Run button in the toolbar. If the pipeline has template variables (e.g., {language}, {text}), the Run Input dialog appears:
+------------------------------------------+
| LLM Studio - Run Pipeline |
+------------------------------------------+
| Enter input variables for execution: |
| |
| language: |
| [French ] |
| |
| text: |
| [Hello, how are you today? ] |
| |
| Status: Fill in all fields and click Run |
| |
| [Run Pipeline] [Cancel] |
+------------------------------------------+
The dialog auto-detects variables from {variable} placeholders in your prompt template and creates one input field per variable.
If the pipeline has no variables (e.g., a fixed prompt), it runs immediately without showing the input dialog.
Click the Re-Run button to re-execute the pipeline with the same inputs as the last run. This is useful for comparing outputs across runs (LLM responses are nondeterministic) and for quickly retrying after a transient failure such as a timeout.
If no previous run exists, Re-Run falls back to opening the Run dialog.
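Programmatically, the equivalent of Re-Run is simply sending runWith: again with the same inputs dictionary; a sketch using the tracer API from this guide:

```smalltalk
"Sketch: re-running with identical inputs to compare nondeterministic outputs."
| pipeline tracer inputs |
pipeline := LLMPipeline new
	addStep: (LLMPromptStep template: 'Translate to {language}: {text}');
	addStep: (LLMModelStep model: LLMOpenAIModel gpt4oMini);
	yourself.
tracer := LLMPipelineTracer on: pipeline.
inputs := { #language -> 'French'. #text -> 'Hello' } asDictionary.
tracer runWith: inputs.
tracer runWith: inputs.	"same inputs again; both runs appear in the left panel"
tracer openDashboard.
```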
After a pipeline run, results are shown in the dashboard tabs:
Visual flow diagram with color-coded step boxes:
+-------------+ +-------------+ +-------------+
| PromptStep | --> | ModelStep | --> | ParserStep |
| [OK] 2ms | | [OK] 1523ms | | [OK] 1ms |
| 0 tokens | | 156 tokens | | 0 tokens |
+-------------+ +-------------+ +-------------+
Click a step box to see its details (input, output, timing, tokens, errors) in the panel below.
Table with all steps and their details:
| # | Status | Step Name | Type | Duration | In Tokens | Out Tokens | Output Preview |
|---|---|---|---|---|---|---|---|
| 1 | [OK] | LLMPromptStep | LLMPromptStep | 2ms | 0 | 0 | Translate to French: Hello… |
| 2 | [OK] | LLMModelStep | LLMModelStep | 1523ms | 42 | 114 | Bonjour, comment allez-vous… |
| 3 | [OK] | LLMParserStep | LLMParserStep | 1ms | 0 | 0 | Bonjour, comment allez-vous… |
Click a row to see full input/output in the detail panel below.
Shows the messages exchanged with the LLM:
| Role | Content |
|---|---|
| USER | Translate to French: Hello, how are you today? |
| ASSISTANT | Bonjour, comment allez-vous aujourd’hui ? |
Performance summary:
Pipeline: Translation Pipeline
Status: Success
Total Duration: 1526ms
Steps: 3 (3 succeeded, 0 failed)
Success Rate: 100.0%
Average Step Duration: 508.7ms
Token Usage: 42 input + 114 output = 156 total
Plus a visual token usage bar chart showing input/output tokens per step.
| Button | Icon | Action |
|---|---|---|
| New Pipeline | + | Opens the Pipeline Builder dialog to create a new pipeline |
| Run | ▶ | Runs the current pipeline (prompts for input variables) |
| Re-Run | ↻ | Re-runs with the same inputs as the last execution |
| Refresh | ↺ | Refreshes the dashboard data from the tracer |
| Clear | ✕ | Clears all execution traces from history |
| Export | 💾 | Copies a text report of the selected trace to clipboard |
You can also build and run pipelines programmatically, then inspect them in the dashboard.
| pipeline tracer |
pipeline := LLMPipeline new
addStep: (LLMPromptStep template: 'Write a haiku about {topic}');
addStep: (LLMModelStep model: LLMOpenAIModel gpt4oMini);
addStep: (LLMParserStep parser: LLMStringOutputParser new);
yourself.
pipeline name: 'Haiku Generator'.
tracer := LLMPipelineTracer on: pipeline.
tracer runWith: { #topic -> 'programming' } asDictionary.
tracer openDashboard.
| pipeline tracer |
pipeline := LLMPipeline new
addStep: (LLMPromptStep template: 'Translate to {language}: {text}');
addStep: (LLMModelStep model: LLMAnthropicModel claude3Sonnet);
addStep: (LLMParserStep parser: LLMStringOutputParser new);
yourself.
pipeline name: 'Translator'.
tracer := LLMPipelineTracer on: pipeline.
"Run multiple translations"
tracer runWith: { #language -> 'French'. #text -> 'Hello world' } asDictionary.
tracer runWith: { #language -> 'Spanish'. #text -> 'Good morning' } asDictionary.
tracer runWith: { #language -> 'Japanese'. #text -> 'Thank you' } asDictionary.
"Open dashboard - all 3 runs visible in the left panel"
tracer openDashboard.
"Just open the UI - create everything from buttons"
LLMDashboardPresenter open.
Then use New Pipeline → configure → Create & Run → enter variables → results appear.
| dashboard pipeline |
dashboard := LLMDashboardPresenter open.
"Later, set a pipeline"
pipeline := LLMPipeline new
addStep: (LLMPromptStep template: 'Explain {concept} simply');
addStep: (LLMModelStep model: LLMOllamaModel llama3);
yourself.
pipeline name: 'Explainer'.
dashboard tracer pipeline: pipeline.
Example: Translator (OpenAI)
1. Evaluate: LLMDashboardPresenter open
2. Click New Pipeline. Name: Translator. Provider: OpenAI. Model: gpt-4o-mini. API Key: leave empty (uses the OPENAI_API_KEY env var).
3. Prompt Template: Translate the following text to {language}: {text}. Temperature: 0.3. Max Tokens: 512. Output Parser: String.
4. Click Create & Run, then enter language: French and text: Hello, how are you today?

Example: Code Reviewer (Anthropic)
1. Evaluate: LLMDashboardPresenter open
2. Click New Pipeline. Name: Code Reviewer. Provider: Anthropic. Model: claude-3-5-sonnet-20241022.
3. Prompt Template: Review this code for bugs and improvements:\n\n{code}. Temperature: 0.2. Max Tokens: 2048. Output Parser: String.
4. Click Create & Run, then paste your code into the code field.

Example: Summarizer (Ollama)
1. Start Ollama locally: ollama serve
2. Evaluate: LLMDashboardPresenter open
3. Click New Pipeline. Name: Summarizer. Provider: Ollama. Model: llama3. Host: localhost, Port: 11434.
4. Prompt Template: Summarize the following text in 3 bullet points:\n\n{text}. Temperature: 0.5. Output Parser: String.

Example: Entity Extractor (JSON output), built programmatically:
| pipeline |
pipeline := LLMPipeline new
addStep: (LLMPromptStep template:
'Extract the name, age, and city from: {text}. Return JSON with keys: name, age, city.');
addStep: (LLMModelStep model: LLMOpenAIModel gpt4o);
addStep: (LLMParserStep parser: LLMJsonOutputParser new);
yourself.
pipeline name: 'Entity Extractor'.
LLMDashboardPresenter openOnPipeline: pipeline.
"Then click Run, enter text like: 'John is 30 years old and lives in Paris'"
Set these environment variables before launching Pharo:
# OpenAI
export OPENAI_API_KEY="sk-..."
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# Ollama - no key needed (runs locally)
LLMApiKeyManager default registerKey: 'sk-...' forProvider: #openai.
LLMApiKeyManager default registerKey: 'sk-ant-...' forProvider: #anthropic.
Enter your API key directly in the API Key field of the Pipeline Builder dialog. The key is registered for the session.
You need to create a pipeline first. Click New Pipeline or open the dashboard with openOnPipeline:.
Your prompt template has {variable} placeholders but the Run dialog fields were left empty. Fill in all fields.
Set the environment variable (e.g., OPENAI_API_KEY) or enter the key in the Pipeline Builder or register it via LLMApiKeyManager.
The default HTTP timeout may be too short. You can increase it programmatically:
model := LLMOpenAIModel gpt4o.
model connector timeout: 120. "120 seconds"
Ensure Ollama is running (ollama serve) and listening on the configured host/port (default: localhost:11434).
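A quick way to check that the Ollama server is reachable, assuming the default host and port (the /api/tags endpoint lists locally installed models):

```shell
# Probe the Ollama HTTP API on the default port and report the result.
curl -s http://localhost:11434/api/tags >/dev/null \
  && echo "Ollama reachable" \
  || echo "Ollama not reachable"
```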
Click Refresh in the toolbar. The dashboard auto-refreshes on trace completion, but if the UI thread is blocked, a manual refresh helps.
Re-Run requires at least one previous run. Use Run first to set initial inputs.