Add and manage API keys for OpenAI, Anthropic, Google, and self-hosted models with enterprise-grade encryption
OpenRails supports multiple LLM providers simultaneously. API keys are encrypted at rest and are never displayed in plain text after submission.
| Provider | Models | Key Type |
|---|---|---|
| OpenAI | GPT-4, GPT-4o, GPT-5 | Opaque API Token |
| Anthropic | Claude 3.5 Sonnet, Claude 4 Opus, Claude 4 Sonnet | Opaque API Token |
| Google | Gemini 1.5 Pro, Gemini 2.0, Gemini 2.5 Pro | Opaque API Token |
| Local AI Models | Llama, Mistral, Phi, and other self-hosted models | No key required (local endpoint) |
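Keys from different providers follow recognizably different formats, so a client can sanity-check a pasted key before submitting it. The sketch below is a hypothetical helper, not part of OpenRails; the prefixes shown (`sk-` for OpenAI, `sk-ant-` for Anthropic, `AIza` for Google) are the providers' typical formats at the time of writing and may change.

```python
# Hypothetical pre-submit check: does a pasted key match the provider's
# usual prefix? Prefixes are illustrative and may change over time.
KEY_PREFIXES = {
    "openai": "sk-",
    "anthropic": "sk-ant-",
    "google": "AIza",
}

def looks_like_provider_key(provider: str, key: str) -> bool:
    """Return True if the key starts with the provider's typical prefix."""
    prefix = KEY_PREFIXES.get(provider.lower())
    return prefix is not None and key.startswith(prefix)

print(looks_like_provider_key("anthropic", "sk-ant-api03-example"))  # True
print(looks_like_provider_key("openai", "AIza-not-an-openai-key"))   # False
```

A check like this only catches obvious paste mistakes; the Test Connection step described below remains the authoritative way to confirm a key actually works.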
1. From the sidebar, go to Settings > LLM Keys. This page lists all configured provider keys.
2. Click the Add Key button to open the key configuration form.
3. Choose the LLM provider from the dropdown: OpenAI, Anthropic, or Google.
4. Paste your API key into the secure input field. The key is encrypted immediately upon submission and is never stored in plain text.
5. Select which models this key should authenticate. You can assign a single key to multiple models from the same provider.
6. Click Save to store the encrypted key. The key appears as a masked value (e.g., sk-...7xQ2) in the key list.
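The masked display described above can be sketched as a small helper. This is an illustrative function, not the OpenRails implementation; it assumes the `sk-...7xQ2` style shown in the key list, keeping only the key's prefix and last few characters.

```python
def mask_key(key: str, visible: int = 4) -> str:
    """Collapse a key to a masked display form like 'sk-...7xQ2'.

    Keeps the portion before the first hyphen (if any) and the last
    `visible` characters; everything else is replaced with '...'.
    """
    prefix = key.split("-", 1)[0] + "-" if "-" in key else ""
    return f"{prefix}...{key[-visible:]}"

print(mask_key("sk-abcdefghij7xQ2"))  # sk-...7xQ2
```

Masking only affects display; the full key stays encrypted in storage and is never rendered back to the UI.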
1. Ensure your local AI model server is installed and running.
2. Download the models you want to use via the server's CLI.
3. In Settings > LLM Keys, click Add Key, select Local Models as the provider, and enter the server endpoint URL.
4. Click Test Connection to verify OpenRails can communicate with the local model server. Available models are auto-discovered.
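Auto-discovery typically means querying the local server for its installed models. The sketch below assumes an Ollama-compatible server, whose `GET /api/tags` endpoint returns a JSON payload of the shape `{"models": [{"name": ...}, ...]}`; it parses a canned response rather than making a live request, and a real check would fetch `http://localhost:11434/api/tags` instead.

```python
import json

# Canned response shaped like Ollama's GET /api/tags payload (assumption:
# your local server is Ollama-compatible; other servers differ).
sample_response = json.dumps({
    "models": [{"name": "llama3.1:8b"}, {"name": "mistral:7b"}]
})

def discovered_models(payload: str) -> list[str]:
    """Extract model names from an /api/tags-style JSON payload."""
    return [m["name"] for m in json.loads(payload).get("models", [])]

print(discovered_models(sample_response))  # ['llama3.1:8b', 'mistral:7b']
```

If discovery returns an empty list, the usual causes are an endpoint URL pointing at the wrong port or a server with no models pulled yet.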
API keys are encrypted at rest. Only Global Admins can add, edit, or delete LLM keys.
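The admin-only rule above amounts to a simple permission gate. The sketch below is purely illustrative (the role and action names are hypothetical, not OpenRails identifiers): key mutations are allowed only for the global admin role, while other actions fall through to the application's broader policy.

```python
# Illustrative permission gate for the rule stated above: only Global
# Admins may add, edit, or delete LLM keys. Role/action names are
# hypothetical, not actual OpenRails identifiers.
MUTATING_ACTIONS = {"add_key", "edit_key", "delete_key"}

def can_manage_llm_keys(role: str, action: str) -> bool:
    """Permit key mutations only for the global_admin role."""
    if action in MUTATING_ACTIONS:
        return role == "global_admin"
    # Non-mutating actions (e.g. viewing masked keys) are assumed to be
    # governed by the app's general access policy, not this gate.
    return True

print(can_manage_llm_keys("global_admin", "delete_key"))  # True
print(can_manage_llm_keys("member", "add_key"))           # False
```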