# Error Handling
Learn how to handle API errors gracefully and implement robust retry logic.
## Error Response Format
When an error occurs, the API returns a JSON object with error details:
```json
{
  "error": {
    "message": "Rate limit exceeded. Please retry after 60 seconds.",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded",
    "param": null
  }
}
```

### Error Object Fields
| Field | Description |
|---|---|
| `message` | Human-readable error description |
| `type` | Error category (e.g., `invalid_request_error`, `rate_limit_error`) |
| `code` | Machine-readable error code |
| `param` | The parameter that caused the error (if applicable) |
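The fields above can be modeled as a TypeScript type with a small guard for narrowing a parsed response body. This is an illustrative sketch (the interface and function names are not part of any SDK):

```typescript
// Shape of the error payload documented above.
interface APIErrorBody {
  error: {
    message: string;
    type: string;
    code: string;
    param: string | null;
  };
}

// Narrow an unknown parsed JSON body to the documented error shape.
function isAPIErrorBody(body: unknown): body is APIErrorBody {
  if (typeof body !== 'object' || body === null) return false;
  const err = (body as Record<string, unknown>).error;
  if (typeof err !== 'object' || err === null) return false;
  const e = err as Record<string, unknown>;
  return typeof e.message === 'string' && typeof e.type === 'string';
}
```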
## HTTP Status Codes
### Client Errors (4xx)
| Status | Error | Description | Retry? |
|---|---|---|---|
| 400 | Bad Request | Invalid request parameters or malformed JSON | No |
| 401 | Unauthorized | Invalid or missing API key | No |
| 403 | Forbidden | API key doesn't have permission for this action | No |
| 404 | Not Found | Model or resource doesn't exist | No |
| 422 | Unprocessable | Request is valid but cannot be processed | No |
| 429 | Rate Limited | Too many requests. Check the `Retry-After` header. | Yes, after waiting |
### Server Errors (5xx)
| Status | Error | Description | Retry? |
|---|---|---|---|
| 500 | Internal Error | Server encountered an unexpected error | Yes, with backoff |
| 502 | Bad Gateway | Upstream provider returned an invalid response | Yes, with backoff |
| 503 | Unavailable | Service temporarily unavailable or overloaded | Yes, with backoff |
| 504 | Timeout | Request timed out waiting for upstream | Yes, with backoff |
## Common Error Codes
| Code | Meaning | Solution |
|---|---|---|
| `invalid_api_key` | API key is invalid | Check your API key |
| `insufficient_credits` | Not enough credits | Add more credits |
| `model_not_found` | Model doesn't exist | Check model ID |
| `context_length_exceeded` | Input too long | Reduce message length |
| `rate_limit_exceeded` | Too many requests | Wait and retry |
| `content_policy_violation` | Content blocked | Modify your prompt |
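One way to act on these codes is a lookup table mapping each one to the suggested fix, with a fallback for codes you don't recognize. This helper is a sketch, not part of any SDK:

```typescript
// Map machine-readable error codes from the table above to short,
// actionable guidance for users or logs.
const ERROR_HINTS: Record<string, string> = {
  invalid_api_key: 'Check your API key',
  insufficient_credits: 'Add more credits',
  model_not_found: 'Check model ID',
  context_length_exceeded: 'Reduce message length',
  rate_limit_exceeded: 'Wait and retry',
  content_policy_violation: 'Modify your prompt',
};

function hintFor(code: string): string {
  // Unknown codes fall back to the error's human-readable message instead.
  return ERROR_HINTS[code] ?? 'Unrecognized code - see the error message';
}
```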
## Handling Errors in Code
Here's how to properly catch and handle API errors:
```typescript
import OpenAI from 'openai';

try {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error('Status:', error.status);
    console.error('Message:', error.message);
    console.error('Code:', error.code);

    switch (error.status) {
      case 400:
        console.error('Bad request - check your parameters');
        break;
      case 401:
        console.error('Invalid API key');
        break;
      case 403:
        console.error('Access forbidden');
        break;
      case 404:
        console.error('Model not found');
        break;
      case 429:
        console.error('Rate limited - slow down');
        break;
      case 500:
      case 502:
      case 503:
        console.error('Server error - retry later');
        break;
    }
  }
}
```

## Automatic Retries
The OpenAI SDK has built-in retry logic for transient errors:
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.llmhub.one/v1',
  apiKey: process.env.LLMHUB_API_KEY,
  maxRetries: 3,   // Retry up to 3 times
  timeout: 30000,  // 30 second timeout
});

// The SDK automatically retries on 429 and 5xx errors
```

## Custom Retry Logic
For more control, implement your own retry logic with exponential backoff:
```typescript
async function chatWithRetry(messages, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await client.chat.completions.create({
        model: 'gpt-4o',
        messages
      });
      return response;
    } catch (error) {
      if (error.status === 429 && attempt < maxRetries - 1) {
        // Rate limited - wait and retry
        const retryAfter = error.headers?.['retry-after'] || 60;
        console.log(`Rate limited. Retrying in ${retryAfter}s...`);
        await new Promise(r => setTimeout(r, retryAfter * 1000));
        continue;
      }
      if (error.status >= 500 && attempt < maxRetries - 1) {
        // Server error - exponential backoff
        const delay = Math.pow(2, attempt) * 1000;
        console.log(`Server error. Retrying in ${delay}ms...`);
        await new Promise(r => setTimeout(r, delay));
        continue;
      }
      throw error;
    }
  }
}
```

## Best Practices
### Always Handle Errors
Wrap all API calls in try-catch blocks. Never assume requests will succeed.
### Use Exponential Backoff
For server errors, wait progressively longer between retries (1s, 2s, 4s, 8s).
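The delay schedule can be computed as a one-liner; adding random jitter (an addition beyond the plain 1s/2s/4s/8s schedule above) helps avoid many clients retrying in lockstep:

```typescript
// Jittered exponential backoff: base delays of 1s, 2s, 4s, 8s, ...
// with up to +50% random jitter to spread out concurrent retries.
function backoffDelayMs(attempt: number, baseMs = 1000): number {
  const exp = baseMs * Math.pow(2, attempt);
  return exp + Math.random() * exp * 0.5;
}
```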
### Respect Retry-After
On 429 errors, check the `Retry-After` header and wait that long before retrying.
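Note that per the HTTP spec, `Retry-After` may be either a delay in seconds or an HTTP date. A small helper (a sketch, not part of any SDK) can normalize both forms to milliseconds:

```typescript
// Normalize a Retry-After header value to a wait time in milliseconds.
// The header may be a delay in seconds ("60") or an HTTP date.
function retryAfterMs(header: string, now: Date = new Date()): number {
  const seconds = Number(header);
  if (header.trim() !== '' && Number.isFinite(seconds)) {
    return Math.max(0, seconds * 1000);
  }
  const at = Date.parse(header);
  if (!Number.isNaN(at)) {
    return Math.max(0, at - now.getTime());
  }
  return 60_000; // unparseable - fall back to a conservative 60s wait
}
```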
### Set Reasonable Timeouts
Large responses can take time. Set a timeout of 30-60 seconds for completions.
### Log Errors for Debugging
Log error details including request ID, status, and message for debugging.

