AI Providers

AI providers connect Raikoo to external LLM services, making models available to agents, workflows, and chat sessions. This guide walks through creating and configuring providers, managing their models, and setting up model families.

Understanding AI Providers

AI providers are organization-level connections to external AI services. Each provider holds a type, API credentials, and an optional base URL. Once configured, the provider's models become available for selection throughout the organization.

For a conceptual overview of how providers, models, and model families relate to each other, see Models.

Permissions Required

You need the ai-providers organization permission to manage AI providers. See Roles & Permissions for details.

Accessing AI Providers

  1. Navigate to your organization dashboard
  2. Click AI Providers in the left sidebar under the Configuration section
  3. You will see a list of all configured AI providers for your organization

Adding a New AI Provider

  1. From the AI Providers list, click Create
  2. Complete the fields on the Configuration tab:
  • Provider Type (required) -- The AI service this provider connects to (e.g., OpenAI, Anthropic)
  • Name (required) -- A descriptive label for this provider (e.g., "OpenAI Production")
  • Description (required) -- A brief description of this provider's purpose
  • API Key (optional) -- Authentication key for the provider's API. When editing an existing provider, leave this empty to keep the current key
  • Base URL (optional) -- Custom endpoint URL. Required for Azure OpenAI, Ollama, and OpenAI-Compatible providers
  3. Click Save to create the provider
  4. After saving, the Models tab becomes available

Multiple Providers

You can add multiple providers of the same type. For example, separate OpenAI providers for different accounts, teams, or billing tiers.

Supported Provider Types

  • OpenAI -- Connects to OpenAI's API. Requires an API key
  • Anthropic -- Connects to Anthropic's Claude models. Requires an API key
  • Google Generative AI -- Connects to Google's Gemini models. Requires an API key
  • Azure OpenAI -- Connects to Azure-hosted OpenAI deployments. Requires a Base URL. See Azure AI Foundry Setup for detailed instructions
  • Ollama -- Connects to a locally-running Ollama instance. Set the Base URL to the Ollama host (defaults to http://localhost:11434)
  • OpenAI-Compatible -- Connects to any API that follows the OpenAI API format. Requires a Base URL
  • Amazon Bedrock -- Connects to AWS Bedrock foundation models
  • Groq -- High-speed inference service for open-source models
  • Mistral -- Connects to Mistral AI models
  • Cohere -- Connects to Cohere's language and embedding models
  • Perplexity -- Connects to Perplexity's search-augmented models
  • DeepSeek -- Connects to DeepSeek's reasoning and coding models
  • OpenRouter -- Multi-provider routing service with access to many model providers
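
For the provider types that require a Base URL, model discovery is made against a model-listing endpoint under that URL, so a trailing slash or missing path segment is a common source of failures. The sketch below is illustrative only, assuming an OpenAI-style `/v1/models` listing path (which Ollama and OpenAI-Compatible servers expose; the exact path varies by provider, and Azure OpenAI in particular uses its own URL format):

```python
def models_url(base_url: str) -> str:
    """Join a provider Base URL to an OpenAI-style model-listing path.

    Strips trailing slashes from the Base URL so the result never
    contains a double slash.
    """
    return base_url.rstrip("/") + "/v1/models"

# Ollama's default host, as noted above:
print(models_url("http://localhost:11434"))    # http://localhost:11434/v1/models
print(models_url("https://llm.example.com/"))  # https://llm.example.com/v1/models
```

If a discovery request fails, comparing the URL your provider documents against the one built this way is a quick sanity check.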

Managing Models

After creating a provider, open the Models tab to control which models are available to your organization.

When the Models tab loads, Raikoo automatically queries the provider's API to discover available models and matches them against its built-in library of known model definitions.

Model Management Modes

Use the toggle at the top of the Models tab to choose between two modes:

Allow All

All discovered models from the provider are available by default. You can individually exclude specific models you do not want users to access. When new models appear from the provider's API, they become available automatically.

Select Models

No models are available by default. You must explicitly enable each model you want to expose. New models from the provider API do not appear automatically -- use Refresh to check for additions and enable them as needed.

Which mode should I use?

Use Allow All when you trust the provider's full model list and want new models to appear automatically. Use Select Models when you need precise control over which models users can access.
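
The difference between the two modes can be sketched as a simple filter. The function and mode names below are hypothetical, not Raikoo's internal API; the point is that Allow All subtracts exclusions from whatever discovery returns, while Select Models intersects discovery with an explicit allow list:

```python
def available_models(mode, discovered, excluded=frozenset(), enabled=frozenset()):
    """Return the models users can pick, given the management mode.

    mode:       "allow_all" or "select_models" (illustrative names)
    discovered: model IDs returned by the provider's API
    excluded:   IDs an admin excluded (used in Allow All mode)
    enabled:    IDs an admin explicitly enabled (used in Select Models mode)
    """
    if mode == "allow_all":
        # Everything discovered is in, minus explicit exclusions;
        # newly discovered models therefore appear automatically.
        return [m for m in discovered if m not in excluded]
    # Select Models: nothing is in unless explicitly enabled,
    # so new models stay hidden until an admin turns them on.
    return [m for m in discovered if m in enabled]

discovered = ["gpt-4o", "gpt-4o-mini", "o3"]
print(available_models("allow_all", discovered, excluded={"o3"}))
# ['gpt-4o', 'gpt-4o-mini']
print(available_models("select_models", discovered, enabled={"gpt-4o"}))
# ['gpt-4o']
```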

Model Discovery

Each discovered model is shown in a list with the following information:

  • Model ID -- The identifier returned by the provider's API
  • Name -- A human-readable name, if resolved from a known definition
  • Creator -- The model's creator (e.g., OpenAI, Google) or "Custom" for manually added models
  • Resolution confidence -- How the model was matched to a known definition:
      • Matched (green) -- Model ID exactly matched a known Raikoo definition
      • Inferred (yellow) -- Model ID was matched by pattern inference (e.g., a deployment name containing a known model stem)
      • Unknown (grey) -- No match found; no pre-loaded capability data
  • Capability confidence -- How complete the capability data is:
      • Full info (green) -- Context window, capabilities, and modalities are all defined
      • Partial info (yellow) -- Some capability fields are defined
      • No info (grey) -- Only the model ID is known
  • Context window and max output -- Token limits, if known
  • Modality icons -- Input and output types (text, image, audio, video)
  • Capability chips -- Quick indicators for tools, system prompt, and reasoning support
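
The "Inferred" badge reflects stem-based matching against a library of known model IDs. A minimal sketch of the idea, using a tiny hypothetical library and a normalization step (Raikoo's actual matcher is not published, so treat the names and rules here as assumptions):

```python
KNOWN_STEMS = ["gpt-4o-mini", "gpt-4o", "claude-3-5-sonnet"]  # hypothetical library

def infer_model(model_id: str):
    """Return (matched_definition, confidence) for a discovered model ID.

    An exact hit is 'matched'; otherwise look for a known stem inside the
    normalized ID (as with custom Azure deployment names); else 'unknown'.
    """
    if model_id in KNOWN_STEMS:
        return model_id, "matched"
    normalized = model_id.lower().replace("_", "-")
    # Check longer stems first so "gpt-4o-mini" wins over "gpt-4o".
    for stem in sorted(KNOWN_STEMS, key=len, reverse=True):
        if stem in normalized or stem.replace("-", "") in normalized.replace("-", ""):
            return stem, "inferred"
    return None, "unknown"

print(infer_model("gpt-4o"))                 # ('gpt-4o', 'matched')
print(infer_model("my-company-gpt4o-prod"))  # ('gpt-4o', 'inferred')
print(infer_model("internal-llm-v2"))        # (None, 'unknown')
```

A model that lands in the "unknown" bucket is exactly the case where the preset dropdown (described below under Editing Model Capabilities) is useful.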

Click Refresh at any time to re-query the provider's API for updated model information.

Editing Model Capabilities

Click any model row to open the model editor. Here you can view and customize the model's capability data.

Model Preset

The preset dropdown lets you select a known model from Raikoo's built-in library to apply its full capability profile. This is useful when a provider returns a custom deployment name that Raikoo did not automatically match -- for example, selecting the gpt-4o preset for a deployment named my-company-gpt4o-prod.

Capability Fields

  • Name -- Display name shown in model selectors throughout the UI
  • Description -- Descriptive text about this model's purpose and characteristics
  • Context window -- Maximum tokens this model can process per request (input and output combined)
  • Max output tokens -- Maximum tokens this model can generate per response
  • Input modalities -- Content types the model accepts: text, image, audio, video
  • Output modalities -- Content types the model produces: text, image, audio, video, embeddings
  • System prompt -- Whether this model supports a system or developer prompt
  • Tools -- Whether this model supports function calling and tool use
  • Reasoning effort levels -- Comma-separated levels for extended thinking (e.g., low,medium,high)
  • Max budget tokens -- Maximum token budget for reasoning
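
Because the reasoning effort field is entered as comma-separated text, whitespace and casing are easy to get wrong. A small sketch of the normalization you would want such a field to go through (a hypothetical helper, not Raikoo's parser):

```python
def parse_effort_levels(raw: str) -> list[str]:
    """Split a comma-separated effort string into clean, ordered levels.

    Trims whitespace, lowercases, and drops empty entries, so
    "low, Medium ,high" and "low,medium,high" mean the same thing.
    """
    return [level.strip().lower() for level in raw.split(",") if level.strip()]

print(parse_effort_levels("low, Medium ,high"))  # ['low', 'medium', 'high']
print(parse_effort_levels(""))                   # []
```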

Why accuracy matters

Accurate capability data lets Raikoo make better decisions. For example, image attachment controls only appear for models that support vision input, and reasoning effort controls are only shown for models that support extended thinking. Incorrect data may cause features to appear or be hidden unexpectedly.
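
To see why, consider how capability-driven gating might work. The record and function names below are illustrative (they mirror the editor fields above, not Raikoo's actual data model): a vision control is shown only when "image" is among the input modalities, and a reasoning control only when effort levels are defined.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCapabilities:
    # Field names mirror the capability editor above; purely illustrative.
    input_modalities: set = field(default_factory=set)
    reasoning_effort_levels: list = field(default_factory=list)

def show_image_attachments(caps: ModelCapabilities) -> bool:
    # Image attachment controls only appear for vision-capable models.
    return "image" in caps.input_modalities

def show_reasoning_controls(caps: ModelCapabilities) -> bool:
    # Effort controls only appear when the model declares effort levels.
    return bool(caps.reasoning_effort_levels)

vision = ModelCapabilities(input_modalities={"text", "image"})
text_only = ModelCapabilities(input_modalities={"text"})
print(show_image_attachments(vision), show_image_attachments(text_only))  # True False
```

With gating like this, a model wrongly marked text-only silently loses its image upload button, which is why it pays to correct capability data for inferred or unknown models.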

Adding a Custom Model ID

In Select Models mode, you can add model IDs that are not in the discovered list. This is useful for:

  • Provider-specific deployment names that do not appear in the discovery API
  • Models in preview or early access that are not yet listed
  • Custom fine-tuned models

To add a custom model:

  1. Switch to Select Models mode
  2. Click Add Model in the toolbar
  3. Enter the model ID exactly as the provider expects it
  4. Click Add -- the model editor opens so you can configure its capabilities

Custom model IDs

Custom model IDs are not validated against the provider's API. Ensure the ID matches exactly what the provider expects, or API calls using this model will fail.

Model Families

Model families are logical groupings of models with an ordered fallback sequence. When a workflow or agent targets a family, Raikoo automatically selects the best available model from the group.

Model families are managed from Model Families in the left sidebar, under the same Configuration section as AI Providers.

To create a model family:

  1. Navigate to Model Families and click Create
  2. Enter a name for the family (e.g., "Fast Models" or "High Capability")
  3. Add models in priority order -- the first available model in the list will be used
  4. Save the family

For a conceptual explanation of model families and when to use them, see Models.

Permissions Required

You need the model-families organization permission to manage model families. See Roles & Permissions for details.

Azure AI Foundry

For detailed instructions on connecting to Azure AI Foundry and Azure OpenAI deployments, see the dedicated Azure AI Foundry Setup guide.

Troubleshooting

No Models Discovered

Possible causes:

  • The API key is incorrect or has expired
  • The Base URL is wrong or missing (required for Azure OpenAI, Ollama, and OpenAI-Compatible providers)
  • The provider type does not support model discovery

Solutions:

  • Verify the API key is correct and has not been revoked
  • Check the Base URL matches exactly what the provider expects
  • For Azure OpenAI, see Azure AI Foundry Setup for the correct URL format
  • If discovery is not supported by your provider, add models manually using Add Model in Select Models mode

API Key Errors

Symptoms: Discovery fails or API calls return authentication errors.

Solutions:

  • Re-enter the API key on the Configuration tab. For security, existing keys are not displayed after saving
  • Generate a new API key from the provider's dashboard
  • Verify the key has the required permissions for model listing and chat completions

Models Missing After Refresh

Symptoms: A model you expect to see does not appear after clicking Refresh.

Solutions:

  • Allow a few minutes for new models to propagate through the provider's API
  • Confirm the model is available in your provider account or subscription tier
  • Check if the model was excluded (in Allow All mode) or not selected (in Select Models mode)

Next Steps