
# Agents

Define AI agents with instructions, tools, and model settings.

An Agent encapsulates a model, system prompt, tools, and behavior configuration. It's the central building block of Stratus.

## Creating an Agent

agent.ts

```ts
import { Agent } from "stratus-sdk/core";

const agent = new Agent({
  name: "assistant",
  model, // a Model instance configured elsewhere
  instructions: "You are a helpful assistant.",
});
```

## Configuration

The `AgentConfig` interface accepts these properties:

| Property | Type | Description |
| --- | --- | --- |
| `name` | `string` | **Required.** Agent name, used in handoff tool names and tracing |
| `instructions` | `string \| (ctx) => string` | System prompt; can be a string or an (async) function |
| `model` | `Model` | LLM model to use |
| `tools` | `FunctionTool[]` | Available tools |
| `subagents` | `SubAgent[]` | Sub-agents that run as tool calls |
| `modelSettings` | `ModelSettings` | Temperature, max tokens, etc. |
| `outputType` | `z.ZodType` | Zod schema for structured output |
| `handoffs` | `HandoffInput[]` | Other agents this agent can hand off to |
| `inputGuardrails` | `InputGuardrail[]` | Pre-execution guardrails |
| `outputGuardrails` | `OutputGuardrail[]` | Post-execution guardrails |
| `hooks` | `AgentHooks` | Lifecycle hooks |
| `toolUseBehavior` | `ToolUseBehavior` | What to do after tool calls |

## Dynamic Instructions

Instructions can be a function that receives the context and returns a string. This lets you customize the system prompt per request:

dynamic-instructions.ts

```ts
const agent = new Agent({
  name: "assistant",
  model,
  instructions: (ctx: { language: string }) =>
    `You are a helpful assistant. Respond in ${ctx.language}.`,
});

await run(agent, "Hello", { context: { language: "Spanish" } });
```

Async functions are also supported:

```ts
instructions: async (ctx) => {
  const rules = await fetchRulesFromDB(ctx.tenantId);
  return `Follow these rules: ${rules}`;
},
```
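A runner that accepts instructions in all three shapes (plain string, sync function, async function) can normalize them with a single `await`. This is an illustrative sketch of that resolution; the `Instructions` type and `resolveInstructions` helper below are hypothetical, not part of stratus-sdk:

```ts
// Illustrative only: these names are not exported by stratus-sdk.
type Instructions<Ctx> = string | ((ctx: Ctx) => string | Promise<string>);

async function resolveInstructions<Ctx>(
  instructions: Instructions<Ctx>,
  ctx: Ctx,
): Promise<string> {
  // A function may return a string or a Promise<string>; awaiting covers both.
  return typeof instructions === "function" ? await instructions(ctx) : instructions;
}
```

Because the result is always awaited, callers never need to distinguish sync from async instruction functions.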

## Model Settings

Fine-tune model behavior with `modelSettings`:

settings.ts

```ts
const agent = new Agent({
  name: "creative-writer",
  model,
  modelSettings: {
    temperature: 0.9,
    maxTokens: 2000,
    topP: 0.95,
  },
});
```
| Setting | Type | Description |
| --- | --- | --- |
| `temperature` | `number` | Sampling temperature (0 to 2) |
| `topP` | `number` | Nucleus sampling threshold |
| `maxTokens` | `number` | Maximum tokens to generate |
| `stop` | `string[]` | Stop sequences |
| `presencePenalty` | `number` | Presence penalty (-2 to 2) |
| `frequencyPenalty` | `number` | Frequency penalty (-2 to 2) |
| `toolChoice` | `ToolChoice` | Control which tools the model calls |
| `parallelToolCalls` | `boolean` | Allow parallel tool execution |
| `seed` | `number` | Deterministic sampling seed |
| `reasoningEffort` | `ReasoningEffort` | Reasoning effort for o1/o3 models |
| `maxCompletionTokens` | `number` | Max completion tokens (including reasoning) |
| `promptCacheKey` | `string` | Prompt cache routing key |
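Several of these settings have documented numeric ranges (temperature 0 to 2, penalties -2 to 2). If you build settings dynamically, it can be worth validating them before the request is sent. The helper below is a hypothetical sketch of such a check, not part of stratus-sdk:

```ts
// Hypothetical validator enforcing the ranges documented above.
function validateSettings(s: {
  temperature?: number;
  presencePenalty?: number;
  frequencyPenalty?: number;
}): string[] {
  const errors: string[] = [];
  if (s.temperature !== undefined && (s.temperature < 0 || s.temperature > 2))
    errors.push("temperature must be in [0, 2]");
  if (s.presencePenalty !== undefined && (s.presencePenalty < -2 || s.presencePenalty > 2))
    errors.push("presencePenalty must be in [-2, 2]");
  if (s.frequencyPenalty !== undefined && (s.frequencyPenalty < -2 || s.frequencyPenalty > 2))
    errors.push("frequencyPenalty must be in [-2, 2]");
  return errors;
}
```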

### Tool Choice

Control how the model uses tools:

```ts
// Let the model decide (default)
modelSettings: { toolChoice: "auto" }

// Force a specific tool
modelSettings: { toolChoice: { type: "function", function: { name: "search" } } }

// Force the model to use at least one tool
modelSettings: { toolChoice: "required" }

// Disable tool use
modelSettings: { toolChoice: "none" }
```
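One way to think about these four values: they determine which tools are exposed to the model for a given turn. A hypothetical sketch of that interpretation (the `ToolChoice` type mirrors the documented shapes, but `toolsExposedToModel` is illustrative, not an SDK function):

```ts
// Illustrative: the union mirrors the four documented toolChoice shapes.
type ToolChoice =
  | "auto"
  | "required"
  | "none"
  | { type: "function"; function: { name: string } };

function toolsExposedToModel(choice: ToolChoice, toolNames: string[]): string[] {
  if (choice === "none") return []; // model may not call any tool
  if (typeof choice === "object") {
    // Forced tool: only that tool is offered.
    const forced = choice.function.name;
    return toolNames.filter((name) => name === forced);
  }
  // "auto" and "required": every tool is available; "required"
  // additionally obliges the model to call at least one of them.
  return toolNames;
}
```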

## Tool Use Behavior

Control what happens after tool calls execute.

Default behavior (send tool results back to the model for another response):

```ts
toolUseBehavior: "run_llm_again"
```

Stop and return the tool output as the final result:

```ts
toolUseBehavior: "stop_on_first_tool"
```

Stop only for specific tools:

```ts
toolUseBehavior: { stopAtToolNames: ["final_answer"] }
```
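The three variants reduce to a single after-each-tool-call decision: stop with this output, or loop back to the model. A hypothetical sketch of that decision (the `ToolUseBehavior` type mirrors the documented shapes; `stopsAfterTool` is illustrative, not an SDK function):

```ts
// Illustrative: the union mirrors the three documented toolUseBehavior shapes.
type ToolUseBehavior =
  | "run_llm_again"
  | "stop_on_first_tool"
  | { stopAtToolNames: string[] };

function stopsAfterTool(behavior: ToolUseBehavior, toolName: string): boolean {
  if (behavior === "stop_on_first_tool") return true; // any tool ends the run
  if (typeof behavior === "object") {
    // Only the listed tools end the run; others loop back to the model.
    return behavior.stopAtToolNames.includes(toolName);
  }
  return false; // "run_llm_again": always send results back to the model
}
```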

## Cloning Agents

Create a modified copy of an agent with `clone()`:

```ts
const spanishAgent = agent.clone({
  instructions: "Respond only in Spanish.",
});
```

All properties not in the override are preserved from the original, including tools, subagents, hooks, guardrails, and handoffs.
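The merge semantics described above behave like a shallow spread: keys in the override win, and every other property carries over unchanged. A hypothetical sketch of those semantics (not the SDK's actual implementation):

```ts
// Illustrative: clone-style merge where overrides win and the rest is preserved.
function cloneConfig<T extends object>(original: T, overrides: Partial<T>): T {
  return { ...original, ...overrides };
}

const base = {
  name: "assistant",
  instructions: "Be helpful.",
  tools: ["search"],
};

// Only `instructions` changes; `name` and `tools` carry over.
const spanish = cloneConfig(base, { instructions: "Respond only in Spanish." });
```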
