pi-agent-core runtime

pi-agent-core is the agent runtime engine within the Pi Monorepo, responsible for managing the lifecycle, state, and event flow of AI agents^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md]. It acts as the core orchestration layer that wraps large language model (LLM) interactions, handling complexities such as tool calling, context transformation, and asynchronous execution^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md].

Designed for TypeScript environments, it bridges the gap between raw LLM APIs (provided by pi-ai) and structured agent behaviors, offering hooks for interception and control flow management^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md].

Core Concepts

The runtime distinguishes between internal agent representation and the data sent to the LLM.

  • AgentMessage vs. LLM Message: The runtime utilizes AgentMessage types which can be extended via declaration merging^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md]. A conversion layer (convertToLlm) handles the filtering and transformation of these messages into the format required by the target LLM provider.
  • Event Stream: The agent lifecycle is modeled as a stream of events, allowing for granular observation and logging^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md].
    • Flow: agent_start → turn_start → message_start → message_update → message_end → turn_end → agent_end.
  • Tool Execution: The runtime supports configurable tool execution strategies, defaulting to parallel execution but allowing for sequential execution if required^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md].
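The conversion layer can be sketched as follows. `convertToLlm` is named in the runtime docs; the concrete message shapes here (including the app-specific `internal-note` role) are illustrative assumptions, not the runtime's actual types.

```typescript
// Internal message shape; the real runtime extends AgentMessage via
// declaration merging, so apps can add roles the LLM never sees.
type AgentMessage =
  | { role: "user"; content: string }
  | { role: "assistant"; content: string }
  | { role: "internal-note"; content: string }; // app-only, assumed for illustration

type LlmMessage = { role: "user" | "assistant"; content: string };

// Filter internal-only messages and map the rest to the provider format.
function convertToLlm(messages: AgentMessage[]): LlmMessage[] {
  const out: LlmMessage[] = [];
  for (const m of messages) {
    if (m.role === "internal-note") continue; // dropped before reaching the LLM
    out.push({ role: m.role, content: m.content });
  }
  return out;
}
```

Keeping the internal representation richer than the wire format is what lets the event stream and hooks operate on data the LLM never needs to see.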

Control Flow & Intervention

pi-agent-core provides mechanisms to intervene in the agent's execution flow dynamically^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md]:

  • Steering: Allows the injection of "steering" messages to interrupt the agent's current trajectory or alter its state mid-process.
  • Follow-up: Allows appending new tasks or messages after the agent has completed its current objective.
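A minimal sketch of the two intervention semantics. The `steer`/`followUp` method names and the queue-draining behavior are assumptions for illustration, not the confirmed pi-agent-core API.

```typescript
type Msg = { role: "user"; content: string };

// Illustrative model: steering messages are consumed mid-turn,
// follow-ups only after the current objective completes.
class SketchAgent {
  private steering: Msg[] = [];
  private followUps: Msg[] = [];

  steer(msg: Msg) { this.steering.push(msg); }       // interrupt current trajectory
  followUp(msg: Msg) { this.followUps.push(msg); }   // append a new task afterwards

  // At each step the runtime would drain steering messages first.
  nextSteering(): Msg | undefined { return this.steering.shift(); }
  nextFollowUp(): Msg | undefined { return this.followUps.shift(); }
}
```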

Lifecycle Hooks

Developers can define custom logic at specific points in the tool execution lifecycle^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md]:

  • beforeToolCall: An interception point before a tool is executed, useful for validation, logging, or modifying inputs.
  • afterToolCall: A post-processing point for handling tool results, transforming outputs, or triggering side effects.
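The hook names above come from the runtime; the signatures and tool shapes below are assumptions, sketching typical uses (input validation/augmentation before the call, output truncation after it).

```typescript
// Assumed shapes for illustration only.
type ToolCall = { name: string; args: Record<string, unknown> };
type ToolResult = { output: string };

// Runs before a tool executes: validate or modify inputs.
const beforeToolCall = (call: ToolCall): ToolCall => {
  if (call.name === "shell") throw new Error("shell tool disabled"); // validation
  return { ...call, args: { ...call.args, traceId: "demo" } };       // e.g. inject metadata
};

// Runs after a tool executes: transform outputs or trigger side effects.
const afterToolCall = (call: ToolCall, result: ToolResult): ToolResult => {
  return { output: result.output.slice(0, 2000) }; // e.g. cap result size
};
```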

API Usage

The runtime is instantiated with an initial state configuration that includes the system prompt, model selection, available tools, and message history^[001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md].

const agent = new Agent({
  initialState: { systemPrompt, model, tools, messages },
  convertToLlm,        // Converts AgentMessage[] to LLM Message[]
  transformContext,    // Handles message trimming/compression
  toolExecution: "parallel",
  beforeToolCall,      // Interceptor
  afterToolCall,       // Post-processor
});

await agent.prompt("Hello"); // Initiate a turn
await agent.continue();      // Resume from current context
agent.abort();               // Cancel execution

Related

  • [[pi-ai]]: The unified LLM API layer that pi-agent-core interacts with.
  • [[Tool Calling]]: A capability managed by the runtime to execute external functions.
  • [[Agent Workflow]]: High-level patterns for structuring agent behavior.

Sources

  • 001-TODO__Pi_Monorepo_-_AI_Agent_开发工具包.md