Sessions
Multi-turn conversations with persistent message history
Sessions provide a high-level API for multi-turn conversations. Messages persist across send()/stream() cycles, so the model sees the full conversation history on every turn.
Creating a Session
```typescript
import { createSession } from "@usestratus/sdk/core";

const session = createSession({
  model,
  instructions: "You are a weather assistant.",
  tools: [getWeather],
});
```

Send and Stream
The session API follows a simple two-step pattern:
Queue a message
send(message) queues a user message synchronously - no API call is made.
```typescript
session.send("What's the weather in NYC?");
```

Stream the response
stream() runs the agent loop, yielding streaming events.
```typescript
for await (const event of session.stream()) {
  if (event.type === "content_delta") {
    process.stdout.write(event.content);
  }
}
```

Multi-Turn
Just call send() and stream() again. Previous messages are automatically included:
```typescript
session.send("What's the weather in NYC?");
for await (const event of session.stream()) {
  if (event.type === "content_delta") process.stdout.write(event.content);
}

// The model sees the full conversation so far
session.send("What about London?");
for await (const event of session.stream()) {
  if (event.type === "content_delta") process.stdout.write(event.content);
}
```

Multimodal Messages
send() accepts either a string or a ContentPart[] array for multimodal input:
```typescript
import type { ContentPart } from "@usestratus/sdk/core";

const parts: ContentPart[] = [
  { type: "text", text: "What is in this image?" },
  { type: "image_url", image_url: { url: "https://example.com/photo.png" } },
];

session.send(parts);
for await (const event of session.stream()) {
  if (event.type === "content_delta") process.stdout.write(event.content);
}
```

Content Part Types
TextContentPart - { type: "text", text: string }
ImageContentPart - { type: "image_url", image_url: { url: string, detail?: "auto" | "low" | "high" } }
The prompt() function also accepts ContentPart[]:
```typescript
const result = await prompt(parts, { model });
```

Wait for Result
If you don't need streaming events, wait() drains the stream internally and returns the result directly:
```typescript
session.send("What's the weather in NYC?");
const result = await session.wait();
console.log(result.output);
```

This is equivalent to draining the stream manually, but eliminates the boilerplate. All hooks, guardrails, and persistence still fire as usual.
Accessing Results
After consuming the stream, access the result via session.result:
```typescript
session.send("Summarize our conversation.");
for await (const event of session.stream()) { /* ... */ }

const result = await session.result;
console.log(result.output);       // Raw string output
console.log(result.finishReason); // "stop", "tool_calls", etc.
console.log(result.usage);        // Token usage across this turn
console.log(result.lastAgent);    // Agent that handled this turn
```

Message History
Access the accumulated conversation history at any time:
```typescript
const messages = session.messages;
// Returns a copy - mutating it won't affect the session
```

Messages include all user, assistant, and tool messages from previous turns. System messages are managed internally and not included.
Save, Resume, and Fork
Sessions can be saved to a snapshot and resumed or forked later. This enables persistence, branching conversations, and recovery from failures.
```typescript
const snapshot = session.save();
// snapshot.id - same as session.id
// snapshot.messages - deep copy of the conversation history
```

save() throws if the session is closed or currently streaming.
Resume a session with the same ID and conversation history:
```typescript
import { resumeSession } from "@usestratus/sdk/core";

const session2 = resumeSession(snapshot, {
  model,
  instructions: "You are a helpful assistant.",
});
// session2.id === snapshot.id

session2.send("Continue where we left off.");
for await (const event of session2.stream()) { /* ... */ }
```

Fork creates a new session (new ID) with a copy of the conversation history:
```typescript
import { forkSession } from "@usestratus/sdk/core";

const forked = forkSession(snapshot, {
  model,
  instructions: "You are a helpful assistant.",
});
// forked.id !== snapshot.id

forked.send("Take a different direction.");
for await (const event of forked.stream()) { /* ... */ }
```

Abort Signal
Pass an AbortSignal to stream() to cancel a running turn:
```typescript
import { RunAbortedError } from "@usestratus/sdk/core";

const ac = new AbortController();
session.send("Write a very long essay.");

try {
  for await (const event of session.stream({ signal: ac.signal })) {
    if (event.type === "content_delta") process.stdout.write(event.content);
  }
} catch (error) {
  if (error instanceof RunAbortedError) {
    console.log("Stream was cancelled");
  }
}
```

The signal is per-invocation, not per-session. See Streaming - Abort Signal for more details.
Cleanup
Sessions support both explicit cleanup and await using:
```typescript
await using session = createSession({ model });
// session.close() is called automatically when the block exits
```

```typescript
session.close();
// After closing, send(), stream(), and save() will throw
```

One-Shot with prompt()
For single-turn use cases, prompt() is a convenience that creates a session, sends a message, drains the stream, and returns the result:
```typescript
import { prompt } from "@usestratus/sdk/core";

const result = await prompt("What is 2 + 2?", { model });
console.log(result.output); // "4"
```

Session Config
SessionConfig accepts the same options as AgentConfig (except name), plus context and maxTurns:
| Property | Type | Description |
|---|---|---|
| model | Model | Required. The LLM model |
| instructions | Instructions | System prompt |
| tools | AgentTool[] | Available tools (function tools and built-in tools) |
| subagents | SubAgent[] | Sub-agents that run as tool calls |
| modelSettings | ModelSettings | Temperature, max tokens, etc. |
| outputType | z.ZodType | Structured output schema |
| handoffs | HandoffInput[] | Handoff targets |
| inputGuardrails | InputGuardrail[] | Input guardrails |
| outputGuardrails | OutputGuardrail[] | Output guardrails |
| hooks | AgentHooks | Lifecycle hooks |
| toolUseBehavior | ToolUseBehavior | Post-tool-call behavior |
| context | TContext | Shared context object |
| maxTurns | number | Max model calls per stream() (default: 10) |
| costEstimator | CostEstimator | Function that converts UsageInfo to a dollar cost. Enables totalCostUsd on results |
| maxBudgetUsd | number | Maximum dollar budget per stream(). Throws MaxBudgetExceededError when exceeded |
| runHooks | RunHooks | Run-level hooks that fire across all agents |
| toolErrorFormatter | ToolErrorFormatter | Custom formatter for tool error messages sent to the LLM |
| callModelInputFilter | CallModelInputFilter | Transform model requests before they're sent to the API |
| toolInputGuardrails | ToolInputGuardrail[] | Tool guardrails that run before tool execution |
| toolOutputGuardrails | ToolOutputGuardrail[] | Tool guardrails that run after tool execution |
| resetToolChoice | boolean | Reset toolChoice to "auto" after the first LLM call |
| allowedTools | string[] | Restrict which tools are available. Supports glob wildcards. See Running Agents - Allowed tools |
| canUseTool | CanUseTool | Permission callback invoked before any tool executes. See Running Agents - Tool permissions |
| store | SessionStore | Persistence backend. Auto-saves after each stream. See Persistence |
| sessionId | string | ID for persistence. Auto-generated if not provided |
| onStateChange | SessionStateChangeListener | Callback fired on state mutations. See State Events |
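As a sketch, a session that combines several of these options might look like the following. The per-token rates are illustrative, not real pricing, and the UsageInfo field names used here (inputTokens, outputTokens) are assumptions:

```typescript
import { createSession } from "@usestratus/sdk/core";

const session = createSession({
  model,
  instructions: "You are a weather assistant.",
  tools: [getWeather],
  maxTurns: 5,                // cap model calls per stream()
  context: { userId: "u-1" }, // shared context object available to tools
  // Hypothetical rates; substitute your provider's actual pricing
  costEstimator: (usage) =>
    usage.inputTokens * 0.000003 + usage.outputTokens * 0.000015,
  maxBudgetUsd: 0.5,          // throws MaxBudgetExceededError past $0.50 per stream()
});
```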
Tool Management
Sessions support adding, removing, and replacing tools between turns. This enables hot-swapping MCP tools or dynamically adjusting capabilities.
```typescript
const session = createSession({ model, tools: [getWeather] });

// Add tools mid-session (e.g. after connecting a new MCP server)
const mcpTools = await mcpClient.getTools();
session.addTools(mcpTools);

// Remove tools by name
session.removeTools(["get_weather"]);

// Replace all tools
session.setTools([calculate, searchDocs]);
```

| Method | Description |
|---|---|
| addTools(tools) | Append tools to the session's agent |
| removeTools(names) | Remove tools by name |
| setTools(tools) | Replace all tools on the session's agent |
Tool management methods throw if the session is closed or currently streaming. Modify tools between stream() calls, not during one.
Persistence
Sessions can auto-persist to a pluggable backend via SessionStore:
```typescript
import { createSession, loadSession, MemorySessionStore } from "@usestratus/sdk/core";

const store = new MemorySessionStore();
const session = createSession({
  model,
  instructions: "You are a helpful assistant.",
  store,
  sessionId: "user-123",
});

session.send("Hello!");
for await (const event of session.stream()) { /* ... */ }
// Session auto-saved to store after stream completes
```

Load a previously saved session:
```typescript
const session = await loadSession(store, "user-123", {
  model,
  instructions: "You are a helpful assistant.",
});

if (session) {
  session.send("What did we talk about?");
  for await (const event of session.stream()) { /* ... */ }
}
```

SessionStore interface
```typescript
interface SessionStore {
  save(sessionId: string, snapshot: SessionSnapshot): Promise<void>;
  load(sessionId: string): Promise<SessionSnapshot | undefined>;
  delete(sessionId: string): Promise<void>;
  list?(): Promise<string[]>;
}
```

MemorySessionStore is a built-in in-memory implementation. For production, implement SessionStore with your preferred backend (SQLite, Redis, Postgres, etc.).
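As a minimal sketch of what a custom backend involves, here is a self-contained Map-backed store (functionally similar to the built-in MemorySessionStore). The SessionSnapshot shape used here, an id plus a messages array, is an assumption based on the save() section above:

```typescript
// Assumed snapshot shape; the real SessionSnapshot may carry more fields.
interface SessionSnapshot {
  id: string;
  messages: unknown[];
}

interface SessionStore {
  save(sessionId: string, snapshot: SessionSnapshot): Promise<void>;
  load(sessionId: string): Promise<SessionSnapshot | undefined>;
  delete(sessionId: string): Promise<void>;
  list?(): Promise<string[]>;
}

class MapSessionStore implements SessionStore {
  private store = new Map<string, SessionSnapshot>();

  async save(sessionId: string, snapshot: SessionSnapshot): Promise<void> {
    // Deep-copy so later mutations of the live session don't leak into the store
    this.store.set(sessionId, structuredClone(snapshot));
  }

  async load(sessionId: string): Promise<SessionSnapshot | undefined> {
    return this.store.get(sessionId);
  }

  async delete(sessionId: string): Promise<void> {
    this.store.delete(sessionId);
  }

  async list(): Promise<string[]> {
    return [...this.store.keys()];
  }
}
```

A real backend would serialize the snapshot (e.g. JSON into SQLite or Redis) instead of cloning it in memory, but the contract is the same.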
Auto-save only runs when the stream completes successfully. If the stream errors, no save occurs to prevent persisting incomplete state.
State Events
Track session state changes for UI integration:
```typescript
const session = createSession({
  model,
  onStateChange: (event) => {
    switch (event.type) {
      case "stream_start":
        showLoadingIndicator();
        break;
      case "message_added":
        updateMessageList(event.message);
        break;
      case "saved":
        showSaveConfirmation(event.sessionId);
        break;
      case "stream_end":
        hideLoadingIndicator();
        break;
    }
  },
});
```

| Event | Fields | When |
|---|---|---|
| stream_start | — | Stream begins |
| message_added | message: ChatMessage | A message is added to history |
| stream_end | — | Stream ends (always fires, even on error) |
| saved | sessionId: string | Session persisted to store |