Error Handling
Learn how to handle errors gracefully and build resilient applications.
Overview
Super Agent Stack uses standard HTTP status codes and provides detailed error messages to help you debug issues quickly. All errors follow the OpenAI error format.
Error Response Format
```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
```
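If you want to work with this envelope in typed code, it can be modeled as a small TypeScript interface. This is a sketch inferred from the examples on this page; the API may return additional fields.

```typescript
// Shape of the error envelope shown above (fields inferred from the examples on this page).
interface SuperAgentErrorResponse {
  error: {
    message: string; // Human-readable description
    type: string;    // e.g. 'invalid_request_error', 'authentication_error'
    code: string;    // e.g. 'invalid_api_key', 'rate_limit_exceeded'
  };
}

// Narrow an unknown parsed response body to the error envelope.
function isErrorResponse(body: unknown): body is SuperAgentErrorResponse {
  return (
    typeof body === 'object' &&
    body !== null &&
    'error' in body &&
    typeof (body as any).error?.message === 'string'
  );
}
```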
Common Error Codes

400 - Bad Request
The request was malformed or contains invalid parameters.
```json
{
  "error": {
    "message": "Invalid model specified",
    "type": "invalid_request_error",
    "code": "invalid_model"
  }
}
```

Common causes:
- Invalid model name
- Missing required parameters
- Invalid parameter types
- Malformed JSON
401 - Unauthorized
Authentication failed due to invalid or missing API keys.
```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```

Common causes:
- Missing OpenRouter API key
- Missing Super Agent key
- Expired or revoked keys
- Keys not properly formatted in headers
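The first three causes can be caught at startup, before any request is sent. A minimal sketch, assuming the environment variable names used in the examples below:

```typescript
// Fail fast if either required key is missing from the environment.
// Variable names match the examples in this guide; adjust to your setup.
for (const name of ['OPENROUTER_KEY', 'SUPER_AGENT_KEY']) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}
```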
429 - Too Many Requests
Rate limit exceeded for your plan.
```json
{
  "error": {
    "message": "Rate limit exceeded. Please try again in 60 seconds.",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}
```

Rate limits by plan:
- Free: 10 requests/minute
- Pro: 100 requests/minute
- Premium: 500 requests/minute
- Enterprise: 1000 requests/minute
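If you are running close to your plan's limit, a simple client-side throttle avoids 429s before they happen. A minimal sketch, assuming the Free-plan limit of 10 requests/minute; it is illustrative, not an official SDK feature:

```typescript
// Naive sliding-window throttle: wait until fewer than `limit`
// requests have been sent in the last `windowMs` milliseconds.
class RequestThrottle {
  private timestamps: number[] = [];

  constructor(private limit = 10, private windowMs = 60_000) {}

  async acquire(): Promise<void> {
    for (;;) {
      const now = Date.now();
      // Drop timestamps that have left the window.
      this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
      if (this.timestamps.length < this.limit) {
        this.timestamps.push(now);
        return;
      }
      // Sleep until the oldest request leaves the window.
      const waitMs = this.windowMs - (now - this.timestamps[0]);
      await new Promise(resolve => setTimeout(resolve, waitMs));
    }
  }
}

// Usage: await throttle.acquire() before each API call.
```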
500 - Internal Server Error
An unexpected error occurred on the server.
```json
{
  "error": {
    "message": "Internal server error. Please try again.",
    "type": "server_error",
    "code": "internal_error"
  }
}
```

What to do:
- Retry the request after a short delay
- Check our status page for incidents
- Contact support if the issue persists
Basic Error Handling
TypeScript/JavaScript
error-handling.ts

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://superagentstack.orionixtech.com/api/v1',
  apiKey: process.env.OPENROUTER_KEY,
  defaultHeaders: {
    'superAgentKey': process.env.SUPER_AGENT_KEY,
  },
});

async function makeRequest() {
  try {
    const completion = await client.chat.completions.create({
      model: 'anthropic/claude-3-sonnet',
      messages: [
        { role: 'user', content: 'Hello!' }
      ],
    });
    return completion.choices[0].message.content;
  } catch (error: any) {
    // Handle OpenAI SDK errors by HTTP status
    if (error.status === 401) {
      console.error('Authentication failed. Check your API keys.');
    } else if (error.status === 429) {
      console.error('Rate limit exceeded. Please wait and try again.');
    } else if (error.status === 500) {
      console.error('Server error. Please try again later.');
    } else {
      console.error('Unexpected error:', error.message);
    }
    throw error;
  }
}
```

Python
error_handling.py

```python
from openai import OpenAI, OpenAIError, AuthenticationError, RateLimitError
import os

client = OpenAI(
    base_url="https://superagentstack.orionixtech.com/api/v1",
    api_key=os.environ.get("OPENROUTER_KEY"),
    default_headers={
        "superAgentKey": os.environ.get("SUPER_AGENT_KEY"),
    }
)

def make_request():
    try:
        completion = client.chat.completions.create(
            model="anthropic/claude-3-sonnet",
            messages=[
                {"role": "user", "content": "Hello!"}
            ]
        )
        return completion.choices[0].message.content
    except AuthenticationError:
        print("Authentication failed. Check your API keys.")
        raise
    except RateLimitError:
        print("Rate limit exceeded. Please wait and try again.")
        raise
    except OpenAIError as e:
        print(f"API error: {e}")
        raise
    except Exception as e:
        print(f"Unexpected error: {e}")
        raise
```

Implementing Retry Logic
Implement exponential backoff for transient errors:
retry-logic.ts

```typescript
async function makeRequestWithRetry(
  maxRetries = 3,
  initialDelay = 1000
) {
  let lastError: any;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const completion = await client.chat.completions.create({
        model: 'anthropic/claude-3-sonnet',
        messages: [{ role: 'user', content: 'Hello!' }],
      });
      return completion.choices[0].message.content;
    } catch (error: any) {
      lastError = error;

      // Don't retry on authentication errors
      if (error.status === 401) {
        throw error;
      }

      // Don't retry on bad requests
      if (error.status === 400) {
        throw error;
      }

      // Retry on rate limits and server errors with exponential backoff
      if (error.status === 429 || error.status >= 500) {
        const delay = initialDelay * Math.pow(2, attempt);
        console.log(`Retry attempt ${attempt + 1} after ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }

      throw error;
    }
  }

  throw lastError;
}
```

Handling Streaming Errors
Streaming requires special error handling:
streaming-errors.ts

```typescript
async function streamWithErrorHandling() {
  try {
    const stream = await client.chat.completions.create({
      model: 'anthropic/claude-3-sonnet',
      messages: [{ role: 'user', content: 'Hello!' }],
      stream: true,
    });

    let fullResponse = '';

    for await (const chunk of stream) {
      try {
        const content = chunk.choices[0]?.delta?.content || '';
        fullResponse += content;
        process.stdout.write(content);
      } catch (chunkError) {
        console.error('Error processing chunk:', chunkError);
        // Continue processing other chunks
      }
    }

    return fullResponse;
  } catch (error: any) {
    if (error.name === 'AbortError') {
      console.log('Stream was canceled');
      return null;
    }
    console.error('Stream error:', error.message);
    throw error;
  }
}
```

Common Validation Errors
Invalid Model
```typescript
// ❌ Wrong
model: 'gpt-4' // Not available through OpenRouter

// ✅ Correct
model: 'openai/gpt-4' // Proper OpenRouter format
```

Empty Messages
```typescript
// ❌ Wrong
messages: [] // Must have at least one message

// ✅ Correct
messages: [
  { role: 'user', content: 'Hello!' }
]
```

Invalid Temperature
```typescript
// ❌ Wrong
temperature: 3 // Must be between 0 and 2

// ✅ Correct
temperature: 0.7 // Valid range: 0-2
```
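These checks are easy to run client-side before sending a request. A minimal sketch combining the three validations above; the model-prefix check is a heuristic based on the provider/model naming shown here, not an exhaustive rule:

```typescript
// Validate request parameters locally to avoid avoidable 400s.
function validateChatParams(params: {
  model: string;
  messages: Array<{ role: string; content: string }>;
  temperature?: number;
}): string[] {
  const errors: string[] = [];

  // OpenRouter model names use a 'provider/model' format.
  if (!params.model.includes('/')) {
    errors.push(`Model '${params.model}' is missing the provider prefix (e.g. 'openai/gpt-4').`);
  }

  // Must have at least one message.
  if (params.messages.length === 0) {
    errors.push('messages must contain at least one message.');
  }

  // Temperature, if set, must be between 0 and 2.
  if (params.temperature !== undefined && (params.temperature < 0 || params.temperature > 2)) {
    errors.push(`temperature ${params.temperature} is out of the valid range 0-2.`);
  }

  return errors;
}
```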
Best Practices

- Always use try-catch: Wrap all API calls in error handling
- Implement retries: Use exponential backoff for transient errors
- Log errors properly: Include request IDs and context for debugging
- Validate inputs: Check parameters before making requests
- Handle rate limits: Implement queuing or backoff strategies
- Monitor errors: Track error rates and patterns in production
- Provide user feedback: Show meaningful error messages to users
Production-Ready Error Handler
production-error-handler.ts

```typescript
class SuperAgentClient {
  private client: OpenAI;
  private maxRetries = 3;
  private baseDelay = 1000;

  constructor() {
    this.client = new OpenAI({
      baseURL: 'https://superagentstack.orionixtech.com/api/v1',
      apiKey: process.env.OPENROUTER_KEY,
      defaultHeaders: {
        'superAgentKey': process.env.SUPER_AGENT_KEY,
      },
    });
  }

  async chat(
    messages: Array<{ role: string; content: string }>,
    options: any = {}
  ) {
    let lastError: any;

    for (let attempt = 0; attempt < this.maxRetries; attempt++) {
      try {
        const completion = await this.client.chat.completions.create({
          model: options.model || 'anthropic/claude-3-sonnet',
          messages,
          ...options,
        });

        return {
          success: true,
          data: completion.choices[0].message.content,
          usage: completion.usage,
        };
      } catch (error: any) {
        lastError = error;

        // Log error with context
        console.error('API Error:', {
          attempt: attempt + 1,
          status: error.status,
          message: error.message,
          timestamp: new Date().toISOString(),
        });

        // Don't retry on client errors (except rate limits)
        if (error.status >= 400 && error.status < 500 && error.status !== 429) {
          return {
            success: false,
            error: {
              type: 'client_error',
              message: error.message,
              status: error.status,
            },
          };
        }

        // Retry on server errors and rate limits with exponential backoff
        if (attempt < this.maxRetries - 1) {
          const delay = this.baseDelay * Math.pow(2, attempt);
          await new Promise(resolve => setTimeout(resolve, delay));
          continue;
        }
      }
    }

    return {
      success: false,
      error: {
        type: 'max_retries_exceeded',
        message: lastError?.message || 'Request failed after maximum retries',
        status: lastError?.status,
      },
    };
  }
}

// Usage
const client = new SuperAgentClient();
const result = await client.chat([
  { role: 'user', content: 'Hello!' }
]);

if (result.success) {
  console.log('Response:', result.data);
} else {
  console.error('Error:', result.error);
}
```

Security Note
Never expose detailed error messages to end users in production. Log full errors server-side but show user-friendly messages to clients.
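One way to apply this is to translate error codes into safe, generic messages at the API boundary. A minimal sketch; the code-to-message mapping below is illustrative, not an official list:

```typescript
// Map internal error codes to user-safe messages; log the full error server-side.
const USER_MESSAGES: Record<string, string> = {
  invalid_api_key: 'We could not authenticate your request. Please contact support.',
  rate_limit_exceeded: 'Too many requests right now. Please try again shortly.',
  internal_error: 'Something went wrong on our end. Please try again.',
};

function toUserMessage(error: { code?: string; message?: string }): string {
  // Full details stay in server logs only; the client sees a generic message.
  console.error('Upstream API error:', error);
  return USER_MESSAGES[error.code ?? ''] ?? 'An unexpected error occurred. Please try again.';
}
```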