Supported Providers
Connect to any LLM provider with a unified interface. All providers use the same API structure.
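To make the "same API structure" idea concrete, here is a minimal sketch of the pattern, assuming a hypothetical UnifiedClient with a chat method. The names below are illustrative only; this page does not show the project's actual client code.

```python
# Hypothetical sketch of the unified-interface idea: the provider is a
# configuration detail and the call shape stays the same across providers.
# UnifiedClient and .chat() are illustrative names, not this project's API.
from dataclasses import dataclass

@dataclass
class UnifiedClient:
    provider: str            # e.g. "openai", "anthropic", "ollama"
    model: str               # provider-specific model id
    api_key: str | None = None

    def chat(self, messages: list[dict]) -> str:
        # A real client would dispatch to the chosen provider's SDK or HTTP API;
        # here we only illustrate that the call signature does not change.
        raise NotImplementedError("wire this up to the provider of your choice")

# Swapping providers only changes the constructor arguments, not the call sites:
openai_client = UnifiedClient(provider="openai", model="gpt-4", api_key="sk-...")
claude_client = UnifiedClient(provider="anthropic", model="claude-3-opus", api_key="sk-ant-...")
```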
OpenAI
Industry-leading GPT models including GPT-4 and GPT-3.5
Anthropic
Claude family of models with strong reasoning capabilities
Google
Gemini and PaLM models with multimodal capabilities
AWS Bedrock
Access to multiple foundation models through AWS
Azure
OpenAI models hosted on Microsoft Azure
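Azure is the provider whose configuration typically looks different from the rest: you target your own resource endpoint and a deployment name rather than a raw model id, and you must pin an API version. The sketch below uses hypothetical field names to show what such a config usually needs; it is not this project's actual configuration schema.

```python
# Hypothetical config sketch for Azure OpenAI. The field names are illustrative;
# the underlying requirements (resource endpoint, deployment name, API version,
# API key) are how Azure OpenAI itself is addressed.
import os

azure_config = {
    "provider": "azure",
    "endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),   # e.g. https://my-resource.openai.azure.com
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
    "api_version": "2024-02-01",                            # an example API version string
    "deployment": "my-gpt-4-deployment",                    # the deployment you created in Azure, not "gpt-4"
}
```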
Mistral
Open-weight models with strong performance
Groq
Ultra-fast inference with LPU technology
HuggingFace
Wide variety of open-source models
Ollama
Run large language models locally
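Because Ollama serves models from your own machine, configuration is mostly a matter of pointing the client at the local server. A minimal sketch, again with hypothetical field names:

```python
# Hypothetical config sketch for Ollama: models run locally, so the client
# points at the local Ollama server (default port 11434) and no API key is
# needed. Field names are illustrative, not this project's actual API.
ollama_config = {
    "provider": "ollama",
    "base_url": "http://localhost:11434",   # Ollama's default local endpoint
    "model": "llama3",                       # any model you've pulled, e.g. with `ollama pull llama3`
}
```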
Cohere
Enterprise-focused language models
Cerebras
High-performance AI compute platform (Beta)
DeepSeek
Advanced reasoning and coding models
xAI
Grok models with real-time information (Beta)
Together
Platform for running open-source models
Fireworks
Fast inference for open-source models
SambaNova
Enterprise AI platform (Beta)
Watsonx
IBM's enterprise AI platform
Nebius
Cloud AI infrastructure provider (Beta)
LMStudio
Desktop app for local LLM inference
Inception
Custom model deployment platform (Beta)
Don't See Your Provider?
We're always adding new providers. Request one, or contribute your own!