API Documentation
Connect your favorite coding tools to AgentFritz using our OpenAI-compatible API.
Quick Start
1. Create an account (or log in)
2. Generate an API key from Account Settings
3. Point your tool at `https://tryfritz.com/v1` with your `fritz_` API key
API Endpoint
AgentFritz implements the OpenAI Chat Completions API. Any tool or SDK that supports a custom OpenAI base URL will work.
```json
{
  "model": "fritz-standard",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7,
  "max_tokens": 4096,
  "stream": true
}
```
Authenticate by sending your key in the `Authorization` header:

```
Authorization: Bearer fritz_your_key_here
```
Aider
Aider is an AI pair-programming tool for your terminal. It can edit code, run tests, and make git commits.
Option 1: Environment variables
```bash
export OPENAI_API_BASE=https://tryfritz.com/v1
export OPENAI_API_KEY=fritz_your_key_here
aider --model openai/fritz-standard
```
Option 2: Command-line flags
```bash
aider \
  --openai-api-base https://tryfritz.com/v1 \
  --openai-api-key fritz_your_key_here \
  --model openai/fritz-standard
```
Option 3: Config file `~/.aider.conf.yml`

```yaml
openai-api-base: "https://tryfritz.com/v1"
openai-api-key: "fritz_your_key_here"
model: "openai/fritz-standard"
```
Continue (VS Code / JetBrains)
Continue is an open-source AI code assistant that runs inside your IDE.
YAML config `~/.continue/config.yaml`

```yaml
models:
  - name: AgentFritz
    provider: openai
    model: fritz-standard
    apiBase: https://tryfritz.com/v1
    apiKey: fritz_your_key_here
    roles:
      - chat
      - edit
```
JSON config `~/.continue/config.json`

```json
{
  "models": [{
    "name": "AgentFritz",
    "provider": "openai",
    "model": "fritz-standard",
    "apiBase": "https://tryfritz.com/v1",
    "apiKey": "fritz_your_key_here"
  }]
}
```
OpenCode
OpenCode is a terminal-based AI coding agent.
`~/.config/opencode/opencode.json`

```json
{
  "provider": {
    "agentfritz": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "AgentFritz",
      "options": {
        "baseURL": "https://tryfritz.com/v1"
      },
      "models": {
        "fritz-standard": { "name": "Fritz Standard" }
      }
    }
  },
  "model": "agentfritz/fritz-standard"
}
```
`~/.local/share/opencode/auth.json`

```json
{
  "agentfritz": {
    "type": "api",
    "key": "fritz_your_key_here"
  }
}
```
Claude Code
Claude Code natively uses the Anthropic protocol, so it requires a lightweight proxy wrapper to translate to the OpenAI format.
1. Install the proxy
```bash
git clone https://github.com/RichardAtCT/claude-code-openai-wrapper.git
cd claude-code-openai-wrapper
pip install -r requirements.txt
```
2. Configure the proxy in `.env`

```
OPENAI_API_KEY=fritz_your_key_here
OPENAI_BASE_URL=https://tryfritz.com/v1
BIG_MODEL=fritz-standard
SMALL_MODEL=fritz-standard
```
3. Start the proxy, then Claude Code
```bash
# Terminal 1
python start_proxy.py

# Terminal 2
export ANTHROPIC_BASE_URL=http://localhost:8082
export ANTHROPIC_API_KEY=unused
claude
```
Cursor
Cursor supports custom OpenAI-compatible endpoints.
1. Open Cursor Settings → Models
2. Click Add Model
3. Set Override OpenAI Base URL to `https://tryfritz.com/v1`
4. Set the API key to your `fritz_` key
5. Enter `fritz-standard` as the model name
OpenAI Python SDK
```bash
pip install openai
```
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://tryfritz.com/v1",
    api_key="fritz_your_key_here",
)

# Streaming
stream = client.chat.completions.create(
    model="fritz-standard",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")

# Non-streaming
response = client.chat.completions.create(
    model="fritz-standard",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
OpenAI Node.js SDK
```bash
npm install openai
```
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://tryfritz.com/v1',
  apiKey: 'fritz_your_key_here',
});

// Streaming
const stream = await client.chat.completions.create({
  model: 'fritz-standard',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

// Non-streaming
const response = await client.chat.completions.create({
  model: 'fritz-standard',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
```
curl
Non-streaming
```bash
curl https://tryfritz.com/v1/chat/completions \
  -H "Authorization: Bearer fritz_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "fritz-standard",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
Streaming
```bash
curl https://tryfritz.com/v1/chat/completions \
  -H "Authorization: Bearer fritz_your_key_here" \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "model": "fritz-standard",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```
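With `stream: true`, the response arrives as Server-Sent Events: one JSON chunk per `data:` line, terminated by a `[DONE]` sentinel. As an illustration (the sample payload below is hypothetical, but follows the standard OpenAI chunk shape), a minimal parser for one such line:

```python
import json

def parse_sse_line(line: str):
    """Return the delta text from one SSE data line, or None for the [DONE] sentinel."""
    payload = line[len("data: "):].strip() if line.startswith("data: ") else line.strip()
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    # Each streamed chunk carries an incremental "delta", not a full message.
    return chunk["choices"][0]["delta"].get("content", "")

print(parse_sse_line('data: {"choices":[{"delta":{"content":"Hi"}}]}'))  # Hi
```

The SDKs above do this parsing for you; a hand-rolled parser is only needed when consuming the raw HTTP stream.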
Available Models
Query the models endpoint to see what's available:
```bash
curl https://tryfritz.com/v1/models \
  -H "Authorization: Bearer fritz_your_key_here"
```
| Model | Description | Best For |
|---|---|---|
| fritz-standard | Fast, versatile general-purpose model | General coding, chat, analysis |
| fritz-reasoning | Advanced chain-of-thought reasoning model | Complex reasoning, math, architecture |
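Since both models share the same endpoint, you can route between them per request. A hypothetical helper (the task categories are illustrative, not an official taxonomy):

```python
def pick_model(task: str) -> str:
    """Illustrative routing: send heavier reasoning work to fritz-reasoning."""
    reasoning_tasks = {"math", "architecture", "complex-reasoning"}
    return "fritz-reasoning" if task in reasoning_tasks else "fritz-standard"

print(pick_model("math"))  # fritz-reasoning
print(pick_model("chat"))  # fritz-standard
```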
Rate Limits
Rate limits depend on your plan. If you hit a limit, the API returns `429 Too Many Requests`.
| Plan | Price | Speed | Conversations | Messages/conv |
|---|---|---|---|---|
| Starter | $5/mo | Standard | Unlimited | Unlimited |
| Pro | $15/mo | Fast | Unlimited | Unlimited |
| Ultra | $30/mo | Maximum | Unlimited | Unlimited |
Higher-tier plans receive faster AI response speeds and higher throughput.
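A common client-side pattern for handling `429` responses is exponential backoff with jitter. A sketch, where `RateLimitError` is a placeholder for whatever exception your HTTP client raises on a 429 (the OpenAI Python SDK raises `openai.RateLimitError`):

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for your client's 429 exception (e.g. openai.RateLimitError)."""

def with_backoff(call, max_retries=5):
    """Run `call`, retrying rate-limited attempts with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Wait 1s, 2s, 4s, ... capped at 30s, plus up to 1s of random jitter.
            time.sleep(min(2 ** attempt, 30) + random.random())
```

For example, `with_backoff(lambda: client.chat.completions.create(...))` would retry a burst of 429s instead of failing immediately.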