Tools
Complete guide to the Vel tool system for enabling function calling in agents.
Overview
Tools allow agents to perform actions and retrieve information beyond text generation. The Vel tool system provides:
- JSON Schema Validation: Automatic input/output validation
- Async Support: Both sync and async tool handlers
- Type Safety: Schema-enforced parameter types
- Provider Agnostic: Works with OpenAI, Gemini, and Claude
- Simple Registration: Global tool registry
Quick Start
from vel import Agent, ToolSpec, register_tool
# 1. Define tool handler
def get_weather_handler(input: dict, ctx: dict) -> dict:
city = input['city']
# Your logic here
return {'temp_f': 72, 'condition': 'sunny', 'city': city}
# 2. Create ToolSpec
weather_tool = ToolSpec(
name='get_weather',
input_schema={
'type': 'object',
'properties': {'city': {'type': 'string'}},
'required': ['city']
},
output_schema={
'type': 'object',
'properties': {
'temp_f': {'type': 'number'},
'condition': {'type': 'string'},
'city': {'type': 'string'}
},
'required': ['temp_f', 'condition', 'city']
},
handler=get_weather_handler
)
# 3. Register tool
register_tool(weather_tool)
# 4. Use with agent
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather'] # Tool names
)
# Agent will automatically call tool when needed
answer = await agent.run({'message': 'What is the weather in San Francisco?'})
ToolSpec
Structure
class ToolSpec:
name: str # Unique tool identifier
description: str # Tool description (optional, helps LLM decide when to use)
input_schema: Dict[str, Any] # JSON Schema for input validation
output_schema: Dict[str, Any] # JSON Schema for output validation
handler: Callable # Function to execute (sync or async)
Parameters
name (required)
- Unique identifier for the tool
- Used by agent to reference tool
- Convention: lowercase_with_underscores
description (optional but recommended)
- Human-readable description of what the tool does and when to use it
- Helps the LLM decide when to invoke the tool
- If not provided, falls back to `input_schema['description']`, or defaults to `f'Tool: {name}'`
- Best practice: Be explicit and specific about the tool's purpose and use cases
input_schema (required)
- JSON Schema (Draft 2020-12) defining expected input
- Must include `type`, `properties`, and `required` fields
- Can include a top-level `description` field as a fallback for the tool description
- Automatically validated before calling the handler
output_schema (required)
- JSON Schema defining expected output structure
- Validates handler return value
- Ensures consistent tool behavior
handler (required)
- Function that executes the tool logic
- Signature: `(input: dict, ctx: dict) -> dict`
- Can be sync or async (auto-detected)
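Vel detects whether a handler is a coroutine function and awaits it accordingly. Conceptually, the dispatch resembles this sketch (illustrative only, not Vel's actual source):

```python
import asyncio
import inspect

async def call_handler(handler, input: dict, ctx: dict) -> dict:
    # Await coroutine functions; call plain functions directly.
    if inspect.iscoroutinefunction(handler):
        return await handler(input, ctx)
    return handler(input, ctx)

def sync_handler(input: dict, ctx: dict) -> dict:
    return {'kind': 'sync'}

async def async_handler(input: dict, ctx: dict) -> dict:
    return {'kind': 'async'}

print(asyncio.run(call_handler(sync_handler, {}, {})))   # {'kind': 'sync'}
print(asyncio.run(call_handler(async_handler, {}, {})))  # {'kind': 'async'}
```

Because detection is automatic, the same ToolSpec shape works for both handler styles.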
Creating Tools
Basic Tool
from vel import ToolSpec, register_tool
def add_numbers_handler(input: dict, ctx: dict) -> dict:
a = input['a']
b = input['b']
return {'result': a + b}
add_tool = ToolSpec(
name='add_numbers',
description='Add two numbers together and return the sum', # ← Explicit description
input_schema={
'type': 'object',
'properties': {
'a': {'type': 'number', 'description': 'First number'},
'b': {'type': 'number', 'description': 'Second number'}
},
'required': ['a', 'b']
},
output_schema={
'type': 'object',
'properties': {
'result': {'type': 'number'}
},
'required': ['result']
},
handler=add_numbers_handler
)
register_tool(add_tool)
Alternative: Description in input_schema
# If you don't provide explicit description parameter,
# Vel falls back to input_schema['description']
add_tool = ToolSpec(
name='add_numbers',
input_schema={
'type': 'object',
'description': 'Add two numbers together and return the sum', # ← Fallback description
'properties': {
'a': {'type': 'number'},
'b': {'type': 'number'}
},
'required': ['a', 'b']
},
output_schema={...},
handler=add_numbers_handler
)
Async Tool
import asyncio
from vel import ToolSpec, register_tool
async def fetch_data_handler(input: dict, ctx: dict) -> dict:
"""Async tool with I/O operations"""
url = input['url']
# Simulate async I/O
await asyncio.sleep(0.1)
return {
'status': 200,
'data': f"Fetched from {url}"
}
fetch_tool = ToolSpec(
name='fetch_data',
input_schema={
'type': 'object',
'properties': {
'url': {'type': 'string', 'format': 'uri'}
},
'required': ['url']
},
output_schema={
'type': 'object',
'properties': {
'status': {'type': 'integer'},
'data': {'type': 'string'}
},
'required': ['status', 'data']
},
handler=fetch_data_handler
)
register_tool(fetch_tool)
Tool with Complex Schema
def search_handler(input: dict, ctx: dict) -> dict:
query = input['query']
filters = input.get('filters', {})
limit = input.get('limit', 10)
# Your search logic
results = [
{'title': 'Result 1', 'score': 0.95},
{'title': 'Result 2', 'score': 0.87}
]
return {
'results': results[:limit],
'total': len(results)
}
search_tool = ToolSpec(
name='search',
input_schema={
'type': 'object',
'properties': {
'query': {
'type': 'string',
'description': 'Search query'
},
'filters': {
'type': 'object',
'properties': {
'category': {'type': 'string'},
'date_range': {'type': 'string'}
}
},
'limit': {
'type': 'integer',
'minimum': 1,
'maximum': 100,
'default': 10
}
},
'required': ['query']
},
output_schema={
'type': 'object',
'properties': {
'results': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'title': {'type': 'string'},
'score': {'type': 'number'}
},
'required': ['title', 'score']
}
},
'total': {'type': 'integer'}
},
'required': ['results', 'total']
},
handler=search_handler
)
register_tool(search_tool)
Using Tools
Single Tool
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather'] # Single tool
)
answer = await agent.run({'message': 'What is the weather in Tokyo?'})
Multiple Tools
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather', 'search', 'add_numbers'] # Multiple tools
)
answer = await agent.run({'message': 'Search for weather APIs and add 5 + 3'})
Tools with Streaming
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather']
)
async for event in agent.run_stream({'message': 'Weather in London?'}):
if event['type'] == 'tool-input-available':
print(f"Tool called: {event['tool_name']}")
print(f"Input: {event['input']}")
elif event['type'] == 'tool-output-available':
print(f"Tool result: {event['output']}")
elif event['type'] == 'text-delta':
print(event['delta'], end='', flush=True)
Built-in Tools
Vel includes a default get_weather tool for testing:
# Automatically registered
default_tool = ToolSpec(
name='get_weather',
input_schema={
'type': 'object',
'properties': {'city': {'type': 'string'}},
'required': ['city']
},
output_schema={
'type': 'object',
'properties': {'temp_f': {'type': 'number'}},
'required': ['temp_f']
},
handler=lambda inp, ctx: {'temp_f': 72.0}
)
Note: Override it by registering your own get_weather tool under the same name (the last registration wins).
Tool Context
The ctx parameter provides runtime context to tools. It contains both built-in context (automatically provided by the agent) and custom resources (injected via tool_context).
Built-in Context
Every tool automatically receives runtime metadata:
def context_aware_handler(input: dict, ctx: dict) -> dict:
"""Tool that uses built-in context"""
run_id = ctx.get('run_id') # Current run ID
session_id = ctx.get('session_id') # Session ID (if any)
agent_id = ctx.get('agent_id') # Agent ID
# Use context for logging, tracking, etc.
print(f"Tool called in run {run_id} by agent {agent_id}")
return {'status': 'ok'}
Built-in Context Keys:
- `run_id`: Unique run identifier
- `session_id`: Session ID (if using sessions)
- `agent_id`: Agent identifier
- `input`: Original user input
Custom Resource Injection
Use the tool_context parameter to inject shared resources into tools (dependency injection pattern):
from vel import Agent
# Create shared resources
db_connection = get_database_connection()
storage = MessageBasedStorage(messages)
# Inject resources via tool_context
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['query_database', 'manage_storage'],
tool_context={
'db': db_connection, # Database connection
'storage': storage, # Storage backend
'user_id': 'user_123', # User context
'config': app_config # Configuration
}
)
Tool accesses resources:
async def query_database_handler(input: dict, ctx: dict) -> dict:
# Access injected database connection
db = ctx.get('db')
if not db:
return {'error': 'Database not available'}
results = await db.query(input['table'])
return {'results': results}
Why Use tool_context?
✅ Flexibility
- Same tool works with different backends
- Swap implementations without changing tool code
# Development: Mock database
agent = Agent(tools=['query_db'], tool_context={'db': MockDatabase()})
# Production: Real database
agent = Agent(tools=['query_db'], tool_context={'db': PostgresDatabase()})
✅ Per-Request Isolation
- Different agent instances have different contexts
- Perfect for multi-tenant applications
# User A's agent
agent_a = Agent(
tools=['get_data'],
tool_context={'user_id': 'user_a', 'tenant': 'acme_corp'}
)
# User B's agent (different context)
agent_b = Agent(
tools=['get_data'],
tool_context={'user_id': 'user_b', 'tenant': 'widget_inc'}
)
✅ Testability
- Easy to mock resources in tests
- No global state to manage
# Test with mock
mock_storage = MockStorage()
agent = Agent(tools=['manage_data'], tool_context={'storage': mock_storage})
✅ No Global Variables
- Resources passed explicitly
- Better code organization and thread safety
Common Use Cases
1. Database Connections
db = await get_db_connection()
agent = Agent(
tools=['query_users', 'update_record'],
tool_context={'db': db}
)
async def query_users_handler(input, ctx):
db = ctx.get('db')
return await db.query('users', input['filter'])
2. Storage Backends
from server.llm.artifacts.storage import MessageBasedStorage
storage = MessageBasedStorage(messages)
agent = Agent(
tools=['tableEditor'],
tool_context={'storage': storage}
)
async def table_editor_handler(input, ctx):
storage = ctx.get('storage')
artifact = await storage.get_artifact()
# ... work with artifact
3. API Clients
api_client = ExternalAPIClient(api_key=settings.API_KEY)
agent = Agent(
tools=['fetch_external_data'],
tool_context={'api': api_client}
)
4. User Context & Permissions
agent = Agent(
tools=['delete_file', 'update_settings'],
tool_context={
'user_id': current_user.id,
'permissions': current_user.permissions,
'org_id': current_user.organization_id
}
)
def delete_file_handler(input, ctx):
if 'delete' not in ctx.get('permissions', []):
return {'error': 'Permission denied'}
# ... proceed with deletion
5. Multiple Resources
agent = Agent(
tools=['complex_operation'],
tool_context={
'db': db_connection,
'cache': redis_client,
'storage': s3_client,
'config': app_config,
'user_id': user_id
}
)
Example: Artifact Storage Tool
Real-world example from artifact streaming implementation:
# Endpoint creates storage and injects it
from server.llm.artifacts.storage import MessageBasedStorage
async def generate_answer_endpoint():
messages = chat.messages if chat else []
storage = MessageBasedStorage(messages)
agent = Agent(
id='table-editor-agent:v1',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['tableEditor'],
tool_context={'storage': storage} # Inject storage
)
async for event in agent.run_stream({'messages': messages}):
yield event
# Tool accesses storage
async def table_editor_tool_handler(input, ctx):
storage = ctx.get('storage')
# Get existing artifact
current_artifact = await storage.get_artifact()
# Perform operations
# ...
# Return result
return {'artifact_id': artifact_id, 'status': 'complete'}
Best Practices
1. Check for Required Resources
def my_tool_handler(input, ctx):
db = ctx.get('db')
if not db:
return {'error': 'Database connection required but not provided'}
# ... use db safely
2. Provide Defaults
def my_tool_handler(input, ctx):
config = ctx.get('config', {'env': 'dev', 'debug': False})
# ... use config with defaults
3. Document Dependencies
async def query_database_handler(input, ctx):
"""
Query database tool.
Required ctx keys:
- db: Database connection instance with .query() method
Optional ctx keys:
- timeout: Query timeout in seconds (default: 30)
"""
db = ctx.get('db')
timeout = ctx.get('timeout', 30)
# ...
4. Keep Context Lean
Only pass what tools actually need:
# ❌ Too much
tool_context={'everything': entire_app_state}
# ✅ Specific resources
tool_context={'db': db, 'storage': storage}
See Also:
- Full example: `examples/tool_context_injection.py`
- Artifact storage implementation: `server/llm/artifacts/storage.py`
JSON Schema Validation
Input Validation
Automatic validation before calling handler:
# Schema defines number
input_schema={
'type': 'object',
'properties': {'count': {'type': 'number'}},
'required': ['count']
}
# If LLM provides string, validation fails
# {"count": "five"} ❌ ValidationError
# {"count": 5} ✓ Valid
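What the validator catches can be illustrated with a hand-rolled check for `required` fields and basic types (a simplified sketch; Vel performs full JSON Schema Draft 2020-12 validation internally):

```python
# Minimal illustration of required/type checking, NOT Vel's actual validator.
def check_required_and_types(schema: dict, data: dict) -> list:
    errors = []
    for key in schema.get('required', []):
        if key not in data:
            errors.append(f"'{key}' is a required property")
    type_map = {'number': (int, float), 'integer': int, 'string': str,
                'boolean': bool, 'object': dict, 'array': list}
    for key, sub in schema.get('properties', {}).items():
        if key in data and 'type' in sub:
            expected = type_map.get(sub['type'])
            if expected and not isinstance(data[key], expected):
                errors.append(f"'{key}' is not of type '{sub['type']}'")
    return errors

schema = {
    'type': 'object',
    'properties': {'count': {'type': 'number'}},
    'required': ['count']
}

print(check_required_and_types(schema, {'count': 'five'}))  # type error reported
print(check_required_and_types(schema, {'count': 5}))       # []
```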
Output Validation
Automatic validation of handler return value:
# Schema expects specific structure
output_schema={
'type': 'object',
'properties': {
'success': {'type': 'boolean'},
'message': {'type': 'string'}
},
'required': ['success', 'message']
}
# Handler must return matching structure
return {'success': True} # ❌ Missing 'message'
return {'success': True, 'message': 'OK'} # ✓ Valid
Schema Best Practices
# ✓ Good: Descriptive, constrained schemas
{
'type': 'object',
'properties': {
'temperature': {
'type': 'number',
'description': 'Temperature in Fahrenheit',
'minimum': -100,
'maximum': 200
},
'units': {
'type': 'string',
'enum': ['fahrenheit', 'celsius'],
'default': 'fahrenheit'
}
},
'required': ['temperature']
}
# ✗ Bad: Vague, unconstrained
{
'type': 'object',
'properties': {
'data': {'type': 'string'} # Too generic
}
}
Error Handling
Tool Execution Errors
def safe_divide_handler(input: dict, ctx: dict) -> dict:
try:
a = input['a']
b = input['b']
result = a / b
return {'result': result}
except ZeroDivisionError:
return {'error': 'Division by zero', 'result': None}
except Exception as e:
return {'error': str(e), 'result': None}
# Schema allows error field
output_schema={
'type': 'object',
'properties': {
'result': {'type': ['number', 'null']},
'error': {'type': 'string'}
}
}
Validation Errors
from jsonschema.exceptions import ValidationError
try:
answer = await agent.run({'message': 'Call the tool'})
except ValidationError as e:
print(f"Tool validation failed: {e}")
Advanced Usage
Dynamic Tool Registration
def create_api_tool(api_name: str, endpoint: str) -> ToolSpec:
"""Factory function to create API tools"""
def handler(input: dict, ctx: dict) -> dict:
# Call the API
return {'response': f"Called {endpoint}"}
return ToolSpec(
name=f'call_{api_name}',
input_schema={
'type': 'object',
'properties': {'params': {'type': 'object'}},
'required': []
},
output_schema={
'type': 'object',
'properties': {'response': {'type': 'string'}},
'required': ['response']
},
handler=handler
)
# Register multiple API tools
for api in ['weather', 'maps', 'translate']:
tool = create_api_tool(api, f'https://api.example.com/{api}')
register_tool(tool)
Tool Chaining
Agent automatically chains tools when needed:
# Agent can call multiple tools in sequence
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather', 'search', 'send_email'],
policies={'max_steps': 10} # Allow multi-step execution
)
# Agent might: search weather API → get weather → send email with results
answer = await agent.run({
'message': 'Find the weather in Paris and email it to user@example.com'
})
Tool Policies
Control tool execution with policies:
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather'],
policies={
'max_steps': 5, # Maximum tool calls per run
'timeout': 30, # Timeout in seconds (future)
'retry': True # Retry failed tools (future)
}
)
Tool Use Behavior
Control what happens after a tool executes: either continue to the LLM for a natural language response, or halt and return the raw tool output.
Overview
By default, after a tool executes, Vel calls the LLM again to generate a natural language response. You can change this behavior to:
- Return raw tool output (skip final LLM call)
- Halt on specific tools (useful for routing/intent detection)
- Custom logic per tool call
Per-Tool Behavior (Dict Approach)
Specify behavior for individual tools:
agent = Agent(
id='assistant-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather', 'send_email', 'finalize_report', 'search_docs'],
policies={
'tool_behavior': {
'get_weather': {'stop_on_first_use': True}, # Halt, return raw JSON
'send_email': {'stop_on_first_use': False}, # Continue to LLM
'finalize_report': {'stop_on_first_use': True}, # Halt, return raw JSON
# search_docs defaults to False (continue to LLM)
}
}
)
# If LLM calls get_weather → halts, returns {'temp_f': 72, 'condition': 'sunny'}
# If LLM calls send_email → continues, LLM says "Email sent successfully!"
# If LLM calls finalize_report → halts, returns raw report data
# If LLM calls search_docs → continues, LLM summarizes results
Enum-Based Behavior
Use the ToolUseBehavior enum for cleaner global control:
from vel.core import ToolUseBehavior
agent = Agent(
id='assistant-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather', 'send_email', 'finalize_report', 'search_docs'],
policies={
'tool_use_behavior': ToolUseBehavior.STOP_AT_TOOLS,
'stop_at_tools': ['get_weather', 'finalize_report']
# These two halt; others continue to LLM
}
)
Available Behaviors:
| Enum | Description |
|---|---|
| `RUN_LLM_AGAIN` | Default. Call the LLM after the tool for a natural language response |
| `STOP_AFTER_TOOL` | Halt after ANY tool, return raw output |
| `STOP_AT_TOOLS` | Halt only for tools in the `stop_at_tools` list |
| `CUSTOM_HANDLER` | Use the `custom_tool_handler` callback |
Global Stop After First Tool
Halt after any tool executes:
agent = Agent(
id='routing-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['classify_intent', 'extract_entities'],
policies={
'stop_on_first_tool': True # Halt after ANY tool
}
)
# Returns raw tool output, skips final LLM call
result = await agent.run({'message': 'Book a flight to NYC'})
# result = {'intent': 'booking', 'confidence': 0.95}
Custom Tool Handler
Full control over post-tool behavior:
from vel.core import ToolUseBehavior, ToolUseDecision, ToolUseDirective
def my_tool_handler(event):
"""Custom logic based on tool name and output"""
if event.tool_name == 'get_weather':
# Always return raw weather data
return ToolUseDecision.STOP
elif event.tool_name == 'finalize_report':
# Stop and customize the return value
return ToolUseDirective(
decision=ToolUseDecision.STOP,
final_output={'status': 'complete', 'report': event.output}
)
elif event.tool_name == 'send_email':
# Continue but inject a system message
return ToolUseDirective(
decision=ToolUseDecision.CONTINUE,
add_messages=[{
'role': 'system',
'content': 'Email was sent successfully. Confirm to the user.'
}]
)
elif event.tool_name == 'validate_input' and event.output.get('error'):
# Abort on validation error
return ToolUseDecision.ERROR
# Default: continue to LLM
return ToolUseDecision.CONTINUE
agent = Agent(
id='assistant-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather', 'send_email', 'finalize_report', 'validate_input'],
policies={
'tool_use_behavior': ToolUseBehavior.CUSTOM_HANDLER,
'custom_tool_handler': my_tool_handler
}
)
ToolEvent Properties:
| Property | Description |
|---|---|
| `tool_name` | Name of the tool that was called |
| `args` | Arguments passed to the tool |
| `output` | Tool's return value |
| `step` | Current execution step number |
| `messages` | Current message history |
| `run_id` | Unique run identifier |
| `session_id` | Session ID (if using sessions) |
Return Types:
- `ToolUseDecision.CONTINUE`: Call the LLM again
- `ToolUseDecision.STOP`: Return the tool output
- `ToolUseDecision.ERROR`: Abort with an error
- `ToolUseDirective(...)`: Advanced control with message injection
Reset Tool Choice
Prevent tools from being called repeatedly in loops:
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['search_docs'],
policies={
'reset_tool_choice': True # Inject prompt to reconsider tool selection
}
)
When enabled, after each tool call Vel adds a system message:
“The previous tool did not resolve the request; reconsider tool selection.”
Use Cases
1. Intent Detection / Routing
# Return raw classification, skip prose
policies={'stop_on_first_tool': True}
2. Structured Data Retrieval
# Return raw JSON for specific tools
policies={
'tool_behavior': {
'get_user_data': {'stop_on_first_use': True},
'get_metrics': {'stop_on_first_use': True}
}
}
3. Multi-Step with Final Summary
# Only finalize halts; others continue
policies={
'tool_use_behavior': ToolUseBehavior.STOP_AT_TOOLS,
'stop_at_tools': ['finalize_report']
}
4. Conditional Logic
# Custom handler for complex decisions
def handler(event):
if event.output.get('requires_approval'):
return ToolUseDecision.STOP
return ToolUseDecision.CONTINUE
policies={
'tool_use_behavior': ToolUseBehavior.CUSTOM_HANDLER,
'custom_tool_handler': handler
}
Streaming Behavior
Tool use behavior works the same with streaming. When a tool halts:
async for event in agent.run_stream({'message': 'Get weather'}):
if event['type'] == 'tool-output-available':
print(f"Tool result: {event['output']}")
elif event['type'] == 'finish':
break # Execution halted after tool
Conditional Tool Enablement
Dynamically enable/disable tools based on context:
from vel import ToolSpec, register_tool
tool = ToolSpec(
name='premium_feature',
input_schema={...},
output_schema={...},
handler=premium_handler,
enabled=lambda ctx: ctx.get('user', {}).get('is_premium', False)
)
register_tool(tool)
# Tool only appears in schema for premium users
agent = Agent(
tools=['basic_feature', 'premium_feature'],
tool_context={'user': {'is_premium': True}} # Premium user sees both
)
When enabled returns False, the tool is omitted from the schema sent to the LLM.
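The filtering step can be pictured as a simple predicate check over the registered tools (an illustrative sketch, not Vel's source; a tool with no `enabled` predicate is always exposed):

```python
# Sketch of conditional tool enablement: only tools whose `enabled`
# predicate passes (or is absent) are included in the LLM schema.
def visible_tools(tools: list, ctx: dict) -> list:
    return [t['name'] for t in tools
            if t.get('enabled') is None or t['enabled'](ctx)]

tools = [
    {'name': 'basic_feature'},  # no predicate: always visible
    {'name': 'premium_feature',
     'enabled': lambda ctx: ctx.get('user', {}).get('is_premium', False)},
]

print(visible_tools(tools, {'user': {'is_premium': False}}))  # ['basic_feature']
print(visible_tools(tools, {'user': {'is_premium': True}}))   # ['basic_feature', 'premium_feature']
```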
Per-Tool Policies
Configure timeout, retries, and fallback per tool:
tool = ToolSpec(
name='slow_api_call',
input_schema={...},
output_schema={...},
handler=slow_handler,
timeout=5.0, # Cancel after 5 seconds
retries=2, # Retry up to 2 times
fallback='return_error' # What to do when all retries fail
)
Fallback Options:
- `'return_error'`: Return the error to the LLM
- `'call_other_tool'`: Try an alternative tool (future)
- Custom handler (future)
Tool Organization & Imports
How the Registry Works
Vel uses a global tool registry. When you call register_tool(), the tool is added to this global registry and becomes available to all agents in your application.
from vel import register_tool, ToolSpec
# This registers the tool globally
register_tool(my_tool)
# Now ANY agent can use it by name
agent = Agent(tools=['my_tool'])
Pattern 1: Inline Registration (Simple)
Register tools directly in your main file:
# my_agent.py
from vel import Agent, ToolSpec, register_tool
# Define and register tool inline
weather_tool = ToolSpec(
name='get_weather',
input_schema={...},
output_schema={...},
handler=lambda inp, ctx: {'temp_f': 72}
)
register_tool(weather_tool)
# Create agent
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather'] # Available immediately after registration
)
When to use:
- Simple applications
- Few tools (1-3)
- Quick prototyping
Pattern 2: Separate Module (Recommended)
Organize tools in separate files and import them:
File: tools/weather.py
from vel import ToolSpec, register_tool
def weather_handler(input: dict, ctx: dict) -> dict:
return {'temp_f': 72, 'condition': 'sunny'}
weather_tool = ToolSpec(
name='get_weather',
input_schema={...},
output_schema={...},
handler=weather_handler
)
# Register tool when module is imported
register_tool(weather_tool)
File: tools/__init__.py
# Import all tools to register them
from .weather import weather_tool
from .search import search_tool
from .email import email_tool
# Re-export for convenience
__all__ = ['weather_tool', 'search_tool', 'email_tool']
File: my_agent.py
from vel import Agent
# Import tools module (registers all tools automatically)
import tools
# Or import specific tools
from tools.weather import weather_tool
# Create agent - tools are already registered
agent = Agent(
id='my-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['get_weather', 'search', 'send_email']
)
When to use:
- Production applications
- Multiple tools (4+)
- Team collaboration
- Reusable tool libraries
Pattern 3: Conditional Registration
Register tools only when needed:
# tools/web_search.py
from vel import ToolSpec, register_tool
import os
def create_web_search_tool():
"""Only register if API key is available"""
if not os.getenv('PERPLEXITY_API_KEY'):
return None
tool = ToolSpec(
name='websearch',
input_schema={...},
output_schema={...},
handler=web_search_handler
)
register_tool(tool)
return tool
# Register on import (if key exists)
web_search_tool = create_web_search_tool()
Important Rules
- Import Before Agent Creation: Tools must be imported/registered before creating the agent
# ✓ Good: Import first
from tools.weather import weather_tool
agent = Agent(tools=['get_weather'])
# ✗ Bad: Import after agent creation
agent = Agent(tools=['get_weather']) # KeyError: 'get_weather' not found!
from tools.weather import weather_tool
- Registration Happens Once: Tools are registered when the module is imported
- First import: Tool is registered
- Subsequent imports: Tool already registered (no duplicates)
- Global Registry: All agents share the same tool registry
- Registering a tool makes it available to all agents
- You cannot have agent-specific tools (by design)
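The registry semantics above (global scope, last registration wins, KeyError for unknown names) can be sketched in a few lines of plain Python (illustrative only, not Vel's implementation):

```python
# Minimal sketch of a global tool registry.
_REGISTRY: dict = {}

def register_tool(tool: dict) -> None:
    # Re-registration is allowed; the last registration wins.
    _REGISTRY[tool['name']] = tool

def resolve_tools(names: list) -> list:
    # Unregistered names raise KeyError, just as agent creation does.
    return [_REGISTRY[name] for name in names]

register_tool({'name': 'get_weather', 'handler': lambda inp, ctx: {'temp_f': 72.0}})
register_tool({'name': 'get_weather', 'handler': lambda inp, ctx: {'temp_f': 65.0}})

tool = resolve_tools(['get_weather'])[0]
print(tool['handler']({}, {}))  # {'temp_f': 65.0}  (last registration won)
```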
Real-World Example
Here’s how the Perplexity web search tool is organized:
# examples/multi_step_tools/web_search.py
"""Web Search Tool - Perplexity API Integration"""
from vel import ToolSpec, register_tool
async def web_search_handler(input: dict, ctx: dict) -> dict:
# Implementation here
pass
web_search_tool = ToolSpec(
name='websearch',
input_schema={...},
output_schema={...},
handler=web_search_handler
)
# Register automatically when imported
register_tool(web_search_tool)
# my_research_agent.py
from vel import Agent
# Import tool (registers it)
from examples.multi_step_tools.web_search import web_search_tool
# Create agent
agent = Agent(
id='research-agent',
model={'provider': 'openai', 'model': 'gpt-4o'},
tools=['websearch'] # Tool is already registered
)
# Use agent
result = await agent.run({'message': 'Search for AI trends'})
Troubleshooting Imports
Problem: KeyError: 'my_tool'
Solution:
# Check: Did you import the tool module?
from tools.my_tool import my_tool # This registers it
# Check: Did you import before creating agent?
# imports must be at top of file
# Check: Is tool name spelled correctly?
agent = Agent(tools=['my_tool']) # Must match ToolSpec.name
Problem: Tool registered twice with different implementations
Solution:
# Vel allows re-registration (last one wins)
# To prevent confusion, use unique names or check before registering:
from vel import get_tool_registry
registry = get_tool_registry()
if 'my_tool' not in registry:
register_tool(my_tool)
Examples
See examples/test_both_modes.py for a complete tool usage demonstration:
python examples/test_both_modes.py
See examples/perplexity_web_search_example.py for a real-world tool import example:
python examples/perplexity_web_search_example.py
Troubleshooting
Tool Not Found
Error:
KeyError: 'my_tool'
Solution:
- Ensure the tool is registered before creating the agent: `register_tool(tool)`
- Check tool name spelling in the `tools=[]` parameter
- Verify the tool name matches `ToolSpec.name`
Validation Error
Error:
jsonschema.exceptions.ValidationError: 'city' is a required property
Solution:
- Check LLM is providing all required fields
- Verify schema matches handler expectations
- Add descriptions to help LLM understand parameters
Tool Never Called
Problem: Agent generates text response instead of calling tool.
Solutions:
- Make tool name and schema descriptive
- Add explicit instructions in message: “Use the get_weather tool”
- Verify the tool is in the `tools=[]` parameter
- Check that the provider supports function calling (OpenAI, Gemini, and Claude all do)
Async Tool Hangs
Problem: Async tool handler never completes.
Solutions:
- Ensure all async operations use `await`
- Add timeouts to async I/O operations
- Check for deadlocks in async code
- Use `asyncio.wait_for()` for timeout control
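A timeout guard inside a handler might look like this sketch (`slow_fetch` is a hypothetical stand-in for real I/O, not part of Vel):

```python
import asyncio

async def slow_fetch(url: str) -> dict:
    # Stand-in for real I/O; replace with an actual HTTP call.
    await asyncio.sleep(0.01)
    return {'status': 200, 'data': f'Fetched {url}'}

async def fetch_with_timeout_handler(input: dict, ctx: dict) -> dict:
    try:
        # Bound the I/O so a stalled connection cannot hang the agent run.
        return await asyncio.wait_for(slow_fetch(input['url']), timeout=5.0)
    except asyncio.TimeoutError:
        return {'status': 408, 'data': 'Request timed out'}

result = asyncio.run(fetch_with_timeout_handler({'url': 'https://example.com'}, {}))
print(result['status'])  # 200
```

Declaring the timeout-error shape in the output_schema (as in the Error Handling section) keeps the handler's failure path valid under output validation.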
Best Practices
1. Descriptive Schemas
# ✓ Good: Helps LLM understand tool
input_schema={
'type': 'object',
'properties': {
'city': {
'type': 'string',
'description': 'City name for weather lookup (e.g., "San Francisco")'
}
},
'required': ['city']
}
# ✗ Bad: No context for LLM
input_schema={
'type': 'object',
'properties': {'city': {'type': 'string'}},
'required': ['city']
}
2. Consistent Naming
# ✓ Good: Verb_noun pattern
'get_weather', 'search_products', 'send_email'
# ✗ Bad: Unclear actions
'weather', 'products', 'email'
3. Error Fields
# ✓ Good: Schema allows error responses
output_schema={
'type': 'object',
'properties': {
'result': {'type': ['string', 'null']},
'error': {'type': 'string'},
'success': {'type': 'boolean'}
},
'required': ['success']
}
4. Idempotent Tools
# ✓ Good: Safe to retry
def get_weather_handler(input: dict, ctx: dict) -> dict:
# Read-only operation
return fetch_weather(input['city'])
# ⚠ Caution: Side effects
def send_email_handler(input: dict, ctx: dict) -> dict:
# May send duplicate emails if retried
return send_email(input['to'], input['body'])
Next Steps
- Stream Protocol - Understand tool call events
- API Reference - Complete API documentation
- Providers - Provider-specific tool features