Models

The Mastra Memory Gateway is built on OpenRouter and supports every model in its catalog, including models from OpenAI, Anthropic, Google, Meta, Mistral, and hundreds more. Browse the full list of available models at openrouter.ai/models.

Use any OpenRouter model ID in your request and the gateway handles routing, authentication, and memory automatically.

Default routing

By default, all requests route through OpenRouter. Pass any model ID from the OpenRouter models page and it works out of the box.

curl https://gateway-api.mastra.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "google/gemini-2.5-flash", "messages": [{"role": "user", "content": "Hello"}]}'

Your Mastra API key covers access to all OpenRouter models with no additional provider configuration.

Direct providers

When you add a provider binding through the dashboard with your own API key, the gateway routes directly to that provider instead of through OpenRouter. Direct routing is also used in BYOK pass-through mode.

| Provider | Model prefix | Endpoints | Example model ID |
| --- | --- | --- | --- |
| OpenAI | openai/ | Chat Completions, Responses | openai/gpt-5.4 |
| Anthropic | anthropic/ | Messages | anthropic/claude-sonnet-4-6 |
| Google | google/ | Chat Completions | google/gemini-2.5-flash |
| Codex | codex/ | Responses | codex/codex-mini-latest |
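With an Anthropic binding in place, for example, a request to the Messages endpoint routes directly to Anthropic. This is a sketch of such a request; it assumes the gateway forwards the Messages body unchanged (Anthropic's Messages format requires max_tokens):

```shell
curl https://gateway-api.mastra.ai/v1/messages \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "anthropic/claude-sonnet-4-6", "max_tokens": 256, "messages": [{"role": "user", "content": "Hello"}]}'
```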

Prefix stripping

Direct providers receive the model ID without the provider prefix. For example, anthropic/claude-sonnet-4-6 is forwarded to Anthropic as claude-sonnet-4-6.

OpenRouter models keep their prefix intact. For example, google/gemini-2.5-flash is sent as-is.
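The stripping rule can be sketched in a few lines of shell; this is an illustration of the behavior, not the gateway's actual code:

```shell
# Direct providers: the provider prefix is stripped before forwarding.
model="anthropic/claude-sonnet-4-6"
upstream_model="${model#anthropic/}"
echo "$upstream_model"    # claude-sonnet-4-6

# OpenRouter models keep the prefix intact.
openrouter_model="google/gemini-2.5-flash"
echo "$openrouter_model"  # google/gemini-2.5-flash
```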

Model inference

BYOK pass-through is available on the Teams plan and above. In pass-through mode, when a provider key override is present but the model ID has no explicit prefix, the gateway infers the provider from the model name:

| Model pattern | Inferred provider |
| --- | --- |
| claude-* | Anthropic |
| gpt-*, o1, o3, o4-*, chatgpt-*, codex-* | OpenAI |
| gemini-* | Google |
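The inference rules above can be sketched as a shell case statement; this illustrates the matching patterns, not the gateway's real matcher:

```shell
# Infer a provider from an unprefixed model ID (BYOK pass-through only).
infer_provider() {
  case "$1" in
    claude-*)                           echo "anthropic" ;;
    gpt-*|o1|o3|o4-*|chatgpt-*|codex-*) echo "openai" ;;
    gemini-*)                           echo "google" ;;
    *)                                  echo "unknown" ;;  # gateway rejects with 400
  esac
}

infer_provider "claude-sonnet-4-6"   # anthropic
infer_provider "gpt-5.4"             # openai
infer_provider "gemini-2.5-flash"    # google
```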

This inference only applies in BYOK pass-through mode. Without BYOK, unprefixed models route through the default provider (OpenRouter).

If the gateway can't infer a provider in pass-through mode, the request is rejected with a 400 error. Use a prefixed model ID (e.g., openai/gpt-5.4) to resolve this.

Provider resolution order

When the gateway receives a request, it resolves the provider using this fallback chain:

  1. BYOK pass-through: If a provider key override is present (X-Memory-Gateway-Authorization + Authorization), route to the provider inferred from the model ID
  2. Model prefix: If the model ID has a recognized prefix (e.g., anthropic/), look for a binding for that provider
  3. Key binding: If the API key has a provider binding, use it
  4. Project binding: If the project has a provider binding, use it
  5. Default provider: Fall back to OpenRouter
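The fallback chain can be sketched as a shell function. The parameters here are simplifications: the real gateway resolves the BYOK override from headers and looks bindings up in its registry, and step 2 only shows two of the recognized prefixes:

```shell
# Illustrative resolution order; not the gateway's actual implementation.
resolve_provider() {
  model="$1"; byok_provider="$2"; key_binding="$3"; project_binding="$4"

  if [ -n "$byok_provider" ]; then     # 1. BYOK pass-through (provider inferred from model)
    echo "$byok_provider"; return
  fi
  case "$model" in                     # 2. Recognized model prefix
    anthropic/*) echo "anthropic"; return ;;
    openai/*)    echo "openai"; return ;;
  esac
  if [ -n "$key_binding" ]; then       # 3. API-key binding
    echo "$key_binding"; return
  fi
  if [ -n "$project_binding" ]; then   # 4. Project binding
    echo "$project_binding"; return
  fi
  echo "openrouter"                    # 5. Default provider
}
```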

Endpoint compatibility

Not all providers support all endpoint formats. The gateway validates that the resolved provider supports the requested endpoint:

| Endpoint | OpenRouter | OpenAI | Anthropic | Google | Codex |
| --- | --- | --- | --- | --- | --- |
| POST /v1/chat/completions | ✓ | ✓ | | ✓ | |
| POST /v1/messages | | | ✓ | | |
| POST /v1/responses | | ✓ | | | ✓ |

If a provider doesn't support the requested endpoint, the gateway returns a 400 error with details about which endpoints the provider supports.
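The validation step can be sketched as a lookup, with each provider's endpoint list taken from the direct-provider table earlier on this page (OpenRouter shown with the Chat Completions endpoint used in the default-routing example). This is an illustration, not the gateway's actual code:

```shell
# Endpoints each provider accepts (illustrative lists).
supported_endpoints() {
  case "$1" in
    openrouter) echo "/v1/chat/completions" ;;
    openai)     echo "/v1/chat/completions /v1/responses" ;;
    anthropic)  echo "/v1/messages" ;;
    google)     echo "/v1/chat/completions" ;;
    codex)      echo "/v1/responses" ;;
  esac
}

# Succeeds if provider $1 supports endpoint $2; otherwise fails,
# where the gateway would return a 400 listing the supported endpoints.
endpoint_supported() {
  case " $(supported_endpoints "$1") " in
    *" $2 "*) return 0 ;;
    *)        return 1 ;;
  esac
}
```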

Custom providers

For providers not in the built-in registry, add a binding with a custom base URL through the dashboard. The gateway treats unknown providers as OpenAI-compatible and forwards requests with a standard Authorization: Bearer header.
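As a sketch of what that forwarding looks like, assume a custom binding whose base URL is https://llm.example.com/v1 (a made-up value; yours comes from the dashboard binding):

```shell
# Hypothetical custom binding values (illustrative only).
base_url="https://llm.example.com/v1"   # custom base URL from the dashboard
provider_key="PROVIDER_API_KEY"         # the key stored with the binding

# The gateway forwards OpenAI-compatible requests to the custom base URL
# with a standard bearer header:
upstream_url="$base_url/chat/completions"
echo "POST $upstream_url"
echo "Authorization: Bearer $provider_key"
```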