# AI Prompt
Prompt an AI model with custom instructions and optional context data via OpenRouter (GPT-4, Claude, Llama, etc.). Supports structured JSON output with schema enforcement via OpenRouter (best with OpenAI models). Pass in data from previous workflow steps as context. Ideal for: AI-driven decisions, data analysis, content generation, automated reasoning in workflows.
Catalog action: Utility
## At a Glance
| Field | Value |
|---|---|
| Action ID | ai-llm-chat |
| Category | Utility |
| Connector | Not required |
| Requires gas | No |
| Funds movement | None declared |
| Tags | ai, llm, chat, openrouter, gpt, utility, transform, text |
## Payload Schema
| Field | Type | Required | Description |
|---|---|---|---|
| prompt | string | Yes | The instruction or question for the AI (e.g., 'Based on the ETH price data, should I buy or sell?') |
| context | string | No | Additional data or context to include with the prompt. Pipe in data from previous workflow steps (e.g., ETH candlestick data, horoscope text, wallet balances). If passing objects or arrays, they must be stringified as JSON. |
| systemPrompt | string | No | System prompt to set the AI's role and behavior (e.g., 'You are a crypto trading advisor. Always respond with a JSON object containing decision and reasoning.') |
| model | string | No | The OpenRouter model to use (default: 'openai/gpt-4o-mini') |
| temperature | number | No | Creativity/randomness (0.0 = deterministic, 2.0 = very creative, default: 0.7) |
| maxTokens | number | No | Maximum tokens in the response (default: 2048, max: 16384) |
| responseFormat | string | No | Response format. Use 'json_schema' for structured JSON output matching your schema (enforced at the API level, best supported by OpenAI models). Requires jsonSchema when selected. Default: 'text' |
| jsonSchema | string | No | Required when responseFormat is 'json_schema'. Define the exact JSON Schema the AI must follow. The AI's output will be structured to match this schema. Best enforced by OpenAI models; other providers may return an error or ignore the schema. All object properties must be listed in 'required' and 'additionalProperties' must be false (OpenAI strict-mode constraint). Example: `{"type": "object", "properties": {"decision": {"type": "string", "enum": ["buy", "sell", "hold"]}, "confidence": {"type": "number"}, "reasoning": {"type": "string"}}, "required": ["decision", "confidence", "reasoning"], "additionalProperties": false}` |
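The strict-mode constraint above (every object property listed in `required`, `additionalProperties` set to `false`) is easy to get wrong when hand-writing a schema string. A small illustrative checker, not part of the action itself, could validate a `jsonSchema` value before submitting the payload:

```python
import json


def check_strict_schema(schema_str: str) -> list[str]:
    """Check a jsonSchema string against the OpenAI strict-mode
    constraints: every object property must appear in 'required',
    and 'additionalProperties' must be false. Returns a list of
    human-readable problems (empty list means the schema passes).
    Illustrative helper only, not part of the ai-llm-chat action."""
    try:
        schema = json.loads(schema_str)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]

    problems: list[str] = []

    def walk(node, path: str = "$") -> None:
        if not isinstance(node, dict):
            return
        if node.get("type") == "object":
            props = set(node.get("properties", {}))
            required = set(node.get("required", []))
            missing = props - required
            if missing:
                problems.append(f"{path}: properties missing from 'required': {sorted(missing)}")
            if node.get("additionalProperties") is not False:
                problems.append(f"{path}: 'additionalProperties' must be false")
            for name, sub in node.get("properties", {}).items():
                walk(sub, f"{path}.{name}")
        elif node.get("type") == "array":
            walk(node.get("items", {}), f"{path}[]")

    walk(schema)
    return problems
```

Running the checker on the example schema from the table returns an empty list; a schema that omits a property from `required` or leaves out `additionalProperties` produces one problem per violation.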
## Result Schema
| Field | Type | Required | Description |
|---|---|---|---|
| output | string \| object | Yes | The AI's response. When responseFormat is 'text', this is a plain string. When responseFormat is 'json_schema', this is a parsed JSON object matching the provided jsonSchema; access fields directly (e.g., output.decision, output.confidence). |
model | string | No | The model that was used |
usage | object | No | Token usage statistics from OpenRouter |
## Examples
```json
{
  "type": "ai-llm-chat",
  "payload": {
    "prompt": "example-prompt"
  },
  "children": []
}
```
```bash
curl -X POST "https://api.b3os.org/v1/actions/ai-llm-chat/test" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": {
      "prompt": "example-prompt"
    }
  }'
```
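A fuller node payload that requests structured output might look like the following sketch. The prompt, context expression, and schema fields are illustrative; note that jsonSchema is passed as a stringified (escaped) JSON Schema, as required by the payload table above:

```json
{
  "type": "ai-llm-chat",
  "payload": {
    "prompt": "Based on the ETH price data, should I buy or sell?",
    "context": "{{$nodes.fetch.result.price}}",
    "model": "openai/gpt-4o-mini",
    "responseFormat": "json_schema",
    "jsonSchema": "{\"type\": \"object\", \"properties\": {\"decision\": {\"type\": \"string\", \"enum\": [\"buy\", \"sell\", \"hold\"]}, \"reasoning\": {\"type\": \"string\"}}, \"required\": [\"decision\", \"reasoning\"], \"additionalProperties\": false}"
  },
  "children": []
}
```

With this configuration the result's output field is a parsed object, so later workflow steps can reference fields such as output.decision directly.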
Payload fields can use workflow expressions such as {{$trigger.body.amount}}, {{$nodes.fetch.result.price}}, and {{$props.asset}} when the value should come from a trigger, prior node, or reusable workflow prop.
