POST /v1/ai/workflow/chat/stream
Stream AI chat for workflow generation and updates via SSE
Request Body required
Chat request parameters
application/json
One of:
Option 1
Option 2
agentVersion
string
Optional - agent version toggle (defaults to v1)
Enum: v1, v2, both
apiKeyOverride
string
Optional - override the LLM API key for this request (eval/testing use only)
conversationId
string
Stable conversation identifier (generated client-side, persists across session→workflow transition)
definition
object
Definition is the full workflow graph to execute. Must contain a
"root" node with type "manual". All action nodes referenced in
children arrays must be present.
blockExpansions
object
Set on run snapshots only (not workflow DB)
inputSchema
object[]
Array of:
description
string
key
string
required
boolean
type
string
Enum: "string", "number", "boolean", "object", "array"
nodes
object
REQUIRED
sensitivePropKeys
string[]
Legacy: kept for old runs; no longer populated for new workflows
Array of: string
description
string
Optional - for generating new workflow
images
object[]
Optional - images attached to the message (base64 for LLM, IPFS for storage)
Array of:
base64
string
ipfsUrl
string
lastEventIndex
integer
Last received event index for replay (reconnection)
message
string
Optional on reconnect, required for new sessions
modelOverride
string
Optional - override the LLM model for this request (eval/testing use only)
name
string
Optional - for generating new workflow
preferredMode
string
Optional - user's preferred mode hint; overrides auto-classifier when set
Enum: planning, building, repair, conversation
requestId
string
Client-generated UUID4 for persistent session tracking (optional for legacy/inline mode)
sessionId
string
Optional - metadata: which create-page session started this conversation
temperatureOverride
number
Optional - override the LLM temperature for this request (eval/testing use only)
templateSearchMode
string
Optional - template search mode toggle (defaults to lightweight)
Enum: lightweight, hybrid
timezone
string
Optional - user's IANA timezone (e.g. "America/New_York") for time-aware responses
workflowId
string
Optional - metadata: which workflow this conversation is about
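Taken together, a minimal new-session request might look like the sketch below. All field names come from the schema above; the concrete values (IDs, message text, workflow name) are illustrative placeholders, and the node shape inside `definition.nodes` (a `type` plus a `children` array) is an assumption inferred from the `definition` field's description, since the nested node schema is not spelled out on this page.

```json
{
  "conversationId": "6f1c2d3e-4a5b-6c7d-8e9f-0a1b2c3d4e5f",
  "requestId": "0a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d",
  "message": "Create a workflow that posts a daily summary",
  "name": "Daily summary",
  "description": "Posts a daily summary each morning",
  "preferredMode": "building",
  "timezone": "America/New_York",
  "definition": {
    "nodes": {
      "root": { "type": "manual", "children": [] }
    }
  }
}
```

On reconnect, `message` may be omitted and `lastEventIndex` set instead, so the server can replay events the client missed.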
Responses
200
SSE stream with events: chunk, node, done, error
text/event-stream
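The 200 body is a stream in the standard SSE wire format (`event:` and `data:` lines, with events separated by blank lines), carrying the event names listed above: chunk, node, done, error. A minimal parsing sketch is below; the exact payload inside each event's `data:` line is not specified on this page, so the sample stream is hypothetical.

```python
import json

def parse_sse(text):
    """Parse SSE wire-format text into (event, data) pairs.

    Events are separated by blank lines; each event carries an optional
    'event:' line (the default event name is 'message') and one or more
    'data:' lines, which are joined with newlines.
    """
    events = []
    event, data_lines = "message", []
    for line in text.splitlines() + [""]:
        if line == "":
            if data_lines:
                events.append((event, "\n".join(data_lines)))
            event, data_lines = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
    return events

# Hypothetical two-event stream as it might arrive on the wire.
raw = (
    "event: chunk\n"
    'data: {"text": "Creating workflow"}\n'
    "\n"
    "event: done\n"
    "data: {}\n"
)
for name, data in parse_sse(raw):
    print(name, json.loads(data))
```

In practice you would feed lines from the HTTP response (e.g. `requests` with `stream=True`, or a buffered reader) into a parser like this, and track how many events you have processed so a reconnect can send that count as `lastEventIndex`.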
400
Bad Request
curl -X POST 'https://api.example.com/v1/ai/workflow/chat/stream' \
  -H 'Authorization: Bearer YOUR_API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{}'
const response = await fetch('https://api.example.com/v1/ai/workflow/chat/stream', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({})
});
// The response is an SSE stream (text/event-stream), not JSON,
// so read it incrementally instead of calling response.json().
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value));
}
import requests

headers = {'Authorization': 'Bearer YOUR_API_TOKEN'}
# The response is an SSE stream, so request it with stream=True and
# iterate over lines rather than calling response.json().
response = requests.post(
    'https://api.example.com/v1/ai/workflow/chat/stream',
    headers=headers, json={}, stream=True,
)
for line in response.iter_lines(decode_unicode=True):
    if line:
        print(line)
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	body := strings.NewReader(`{}`)
	req, _ := http.NewRequest("POST", "https://api.example.com/v1/ai/workflow/chat/stream", body)
	req.Header.Set("Authorization", "Bearer YOUR_API_TOKEN")
	req.Header.Set("Content-Type", "application/json")
	resp, _ := http.DefaultClient.Do(req)
	defer resp.Body.Close()
	// Reads the entire SSE stream; this returns only once the server
	// closes the stream after the final event.
	result, _ := io.ReadAll(resp.Body)
	fmt.Println(string(result))
}