Error Handling

Learn how to handle API errors gracefully and implement robust retry logic.

Error Response Format

When an error occurs, the API returns a JSON object with error details:

```json
{
  "error": {
    "message": "Rate limit exceeded. Please retry after 60 seconds.",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded",
    "param": null
  }
}
```

Error Object Fields

| Field | Description |
| --- | --- |
| `message` | Human-readable error description |
| `type` | Error category (e.g., `invalid_request_error`, `rate_limit_error`) |
| `code` | Machine-readable error code |
| `param` | The parameter that caused the error (if applicable) |
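
If you call the API without an SDK, these fields can be read straight off the parsed response body. A minimal sketch (the `ApiErrorBody` type and `describeError` helper are illustrative, not part of the API):

```typescript
// Shape of the error body shown above (names mirror the JSON fields).
interface ApiErrorBody {
  error: {
    message: string;
    type: string;
    code: string;
    param: string | null;
  };
}

// Format an error body into a single log-friendly line.
function describeError(body: ApiErrorBody): string {
  const { message, type, code, param } = body.error;
  return `[${type}/${code}]${param ? ` param=${param}` : ''} ${message}`;
}
```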

HTTP Status Codes

Client Errors (4xx)

| Status | Error | Description | Retry? |
| --- | --- | --- | --- |
| 400 | Bad Request | Invalid request parameters or malformed JSON | No |
| 401 | Unauthorized | Invalid or missing API key | No |
| 403 | Forbidden | API key doesn't have permission for this action | No |
| 404 | Not Found | Model or resource doesn't exist | No |
| 422 | Unprocessable | Request is valid but cannot be processed | No |
| 429 | Rate Limited | Too many requests. Check the `Retry-After` header. | Yes |

Server Errors (5xx)

| Status | Error | Description | Retry? |
| --- | --- | --- | --- |
| 500 | Internal Error | Server encountered an unexpected error | Yes |
| 502 | Bad Gateway | Upstream provider returned an invalid response | Yes |
| 503 | Unavailable | Service temporarily unavailable or overloaded | Yes |
| 504 | Timeout | Request timed out waiting for upstream | Yes |

Common Error Codes

| Code | Meaning | Solution |
| --- | --- | --- |
| `invalid_api_key` | API key is invalid | Check your API key |
| `insufficient_credits` | Not enough credits | Add more credits |
| `model_not_found` | Model doesn't exist | Check the model ID |
| `context_length_exceeded` | Input too long | Reduce message length |
| `rate_limit_exceeded` | Too many requests | Wait and retry |
| `content_policy_violation` | Content blocked | Modify your prompt |
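
In code, these error codes split naturally into retryable and non-retryable groups. A sketch of that classification (the grouping is a suggestion, not an API guarantee, and `isRetryable` is a hypothetical helper):

```typescript
// Codes worth retrying after a delay; everything else needs a code
// or account change, so retrying the same request won't help.
const RETRYABLE_CODES = new Set(['rate_limit_exceeded']);

// Decide whether a failed request should be retried, using the
// error code and, when available, the HTTP status.
function isRetryable(code: string, status?: number): boolean {
  if (status !== undefined && status >= 500) return true; // server errors
  return RETRYABLE_CODES.has(code);
}
```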

Handling Errors in Code

Here's how to properly catch and handle API errors:

```typescript
try {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error('Status:', error.status);
    console.error('Message:', error.message);
    console.error('Code:', error.code);

    switch (error.status) {
      case 400:
        console.error('Bad request - check your parameters');
        break;
      case 401:
        console.error('Invalid API key');
        break;
      case 403:
        console.error('Access forbidden');
        break;
      case 404:
        console.error('Model not found');
        break;
      case 429:
        console.error('Rate limited - slow down');
        break;
      case 500:
      case 502:
      case 503:
      case 504:
        console.error('Server error - retry later');
        break;
    }
  }
}
```

Automatic Retries

The OpenAI SDK has built-in retry logic for transient errors:

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.llmhub.one/v1',
  apiKey: process.env.LLMHUB_API_KEY,
  maxRetries: 3,   // Retry up to 3 times
  timeout: 30000,  // 30-second timeout (in milliseconds)
});

// The SDK automatically retries on 429 and 5xx errors
```

Custom Retry Logic

For more control, implement your own retry logic with exponential backoff:

```typescript
async function chatWithRetry(
  messages: OpenAI.ChatCompletionMessageParam[],
  maxRetries = 3
) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await client.chat.completions.create({
        model: 'gpt-4o',
        messages
      });
    } catch (error) {
      if (!(error instanceof OpenAI.APIError)) throw error;

      if (error.status === 429 && attempt < maxRetries - 1) {
        // Rate limited - wait for the server-suggested delay, or 60s
        const retryAfter = Number(error.headers?.['retry-after']) || 60;
        console.log(`Rate limited. Retrying in ${retryAfter}s...`);
        await new Promise(r => setTimeout(r, retryAfter * 1000));
        continue;
      }

      if (error.status && error.status >= 500 && attempt < maxRetries - 1) {
        // Server error - exponential backoff: 1s, 2s, 4s, ...
        const delay = Math.pow(2, attempt) * 1000;
        console.log(`Server error. Retrying in ${delay}ms...`);
        await new Promise(r => setTimeout(r, delay));
        continue;
      }

      throw error;
    }
  }
}
```

Best Practices

Always Handle Errors

Wrap all API calls in try-catch blocks. Never assume requests will succeed.

Use Exponential Backoff

For server errors, wait progressively longer between retries (1s, 2s, 4s, 8s).

Respect Retry-After

On 429 errors, check the Retry-After header and wait that long before retrying.
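
Note that `Retry-After` can carry either a delay in seconds or an HTTP-date. A sketch that handles both forms (`parseRetryAfter` is a hypothetical helper, not an SDK feature):

```typescript
// Convert a Retry-After header value into a delay in milliseconds.
// Accepts delay-seconds ("30") or an HTTP-date; falls back to 60s.
function parseRetryAfter(header: string | null, fallbackMs = 60_000): number {
  if (!header) return fallbackMs;
  const seconds = Number(header);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  const date = Date.parse(header);
  if (!Number.isNaN(date)) return Math.max(0, date - Date.now());
  return fallbackMs;
}
```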

Set Reasonable Timeouts

Large responses can take time. Set a timeout of 30-60 seconds for completions.
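
If your HTTP client doesn't expose a timeout option, one portable approach is to race the request against a timer. A sketch (`withTimeout` is a hypothetical helper, not part of any SDK):

```typescript
// Reject a promise if it hasn't settled within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Request timed out after ${ms}ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```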

Log Errors for Debugging

Log error details including request ID, status, and message for debugging.

Related Guides