# SDKs & Libraries

LLM Hub is compatible with all OpenAI SDKs. Simply change the base URL to start using our API from your preferred language.

## No Special SDK Required

LLM Hub implements the OpenAI API specification, so you can use the official OpenAI SDK for your language. Just set the base URL to `https://api.llmhub.one/v1` and use your LLM Hub API key.
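Compatibility here means the wire format: every SDK call below reduces to an HTTPS request with a `Bearer` token against the base URL. As a sketch of what the SDKs do under the hood (Python standard library only; the request is constructed but not sent, and `gpt-4o` is just an example model):

```python
import json
import os
import urllib.request

BASE_URL = "https://api.llmhub.one/v1"

payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Build the request the same way an OpenAI-compatible SDK would.
# Actually sending it (urllib.request.urlopen(req)) requires a valid key.
req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + os.environ.get("LLMHUB_API_KEY", "your-api-key"),
    },
    method="POST",
)

print(req.full_url)      # https://api.llmhub.one/v1/chat/completions
print(req.get_method())  # POST
```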
## Available SDKs
### JavaScript / TypeScript

Official `openai` package.

Installation:

```shell
npm install openai
```

Configuration:

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.llmhub.one/v1',
  apiKey: process.env.LLMHUB_API_KEY
});
```

### Python
Official `openai` package.

Installation:

```shell
pip install openai
```

Configuration:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmhub.one/v1",
    api_key="your-api-key"
)
```

### Go
Official `github.com/openai/openai-go` package.

Installation:

```shell
go get github.com/openai/openai-go
```

Configuration:

```go
import (
    "os"

    "github.com/openai/openai-go"
    "github.com/openai/openai-go/option"
)

client := openai.NewClient(
    option.WithBaseURL("https://api.llmhub.one/v1"),
    option.WithAPIKey(os.Getenv("LLMHUB_API_KEY")),
)
```

### PHP
Community `openai-php/client` package.

Installation:

```shell
composer require openai-php/client
```

Configuration:

```php
$client = OpenAI::factory()
    ->withBaseUri('https://api.llmhub.one/v1')
    ->withApiKey($_ENV['LLMHUB_API_KEY'])
    ->make();
```

### Ruby
Community `ruby-openai` gem.

Installation:

```shell
gem install ruby-openai
```

Configuration:

```ruby
require "openai"

client = OpenAI::Client.new(
  uri_base: "https://api.llmhub.one/v1",
  access_token: ENV["LLMHUB_API_KEY"]
)
```

### C# / .NET
Official `OpenAI` package.

Installation:

```shell
dotnet add package OpenAI
```

Configuration:

```csharp
using System;
using System.ClientModel;
using OpenAI;

var client = new OpenAIClient(
    new ApiKeyCredential(Environment.GetEnvironmentVariable("LLMHUB_API_KEY")),
    new OpenAIClientOptions { Endpoint = new Uri("https://api.llmhub.one/v1") }
);
```

### Java / Kotlin
Official `com.openai:openai-java` package.

Installation: add the `com.openai:openai-java` dependency to your `build.gradle` or `pom.xml`.

Configuration:

```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .baseUrl("https://api.llmhub.one/v1")
    .apiKey(System.getenv("LLMHUB_API_KEY"))
    .build();
```

### Rust
Community `async-openai` crate.

Installation:

```shell
cargo add async-openai
```

Configuration:

```rust
use async_openai::{config::OpenAIConfig, Client};

let config = OpenAIConfig::new()
    .with_api_base("https://api.llmhub.one/v1")
    .with_api_key(std::env::var("LLMHUB_API_KEY")?);
let client = Client::with_config(config);
```

## Direct HTTP / REST
If there's no SDK for your language, you can make direct HTTP requests:
```shell
curl https://api.llmhub.one/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLMHUB_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

## API Endpoints
- `/v1/chat/completions` — Chat & completions
- `/v1/images/generations` — Image generation
- `/v1/embeddings` — Text embeddings
- `/v1/models` — List available models

## Framework Integrations
LLM Hub works with popular AI frameworks:
### LangChain
Use our endpoint with LangChain's ChatOpenAI:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.llmhub.one/v1",
    api_key="your-key",
    model="gpt-4o"
)
```

### Vercel AI SDK
Works with the OpenAI provider:
```typescript
import { createOpenAI } from '@ai-sdk/openai';

const llmhub = createOpenAI({
  baseURL: 'https://api.llmhub.one/v1',
  apiKey: process.env.LLMHUB_API_KEY
});
```

### LlamaIndex
Configure as an OpenAI-compatible endpoint:
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    api_base="https://api.llmhub.one/v1",
    api_key="your-key",
    model="gpt-4o"
)
```

### AutoGen
Use in your AutoGen config:
```python
config_list = [{
    "model": "gpt-4o",
    "base_url": "https://api.llmhub.one/v1",
    "api_key": "your-key"
}]
```
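However you call the API — SDK, framework, or raw HTTP — responses come back in the OpenAI chat-completion format. A sketch of extracting the assistant's reply from a raw JSON body (the payload below is illustrative, not a recorded response):

```python
import json

# Illustrative response body in the OpenAI chat-completion shape.
raw = """{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello! How can I help?"},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16}
}"""

data = json.loads(raw)
# The assistant's text lives at choices[0].message.content.
reply = data["choices"][0]["message"]["content"]
print(reply)
```

The SDKs above expose the same structure as typed objects (e.g. `response.choices[0].message.content` in the Python and JavaScript clients).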
