
Backend

14 documents categorized under Backend.


Anthropic
Cloud API provider for Claude models via the Anthropic Messages API, a non-OpenAI-compatible backend.
Provider Backend
DeepSeek
Cloud API provider for DeepSeek chat and reasoning models via api.deepseek.com.
Provider Backend
Fireworks AI
Cloud API provider for fast open-model inference via api.fireworks.ai.
Provider Backend
Google Gemini
Cloud API provider for Gemini models via the OpenAI-compatible endpoint at generativelanguage.googleapis.com.
Provider Backend
Groq
Cloud API provider for fast inference on Groq LPU hardware via api.groq.com.
Provider Backend
Inference Backend
Core inference architecture: the client interface, OpenAI-compatible API translation, streaming and structured response modes, per-request metrics, and transcription support.
Architecture Backend
llama.cpp
High-performance local inference engine with OpenAI-compatible server mode.
Provider Backend
Mistral AI
Cloud API provider for Mistral models via api.mistral.ai.
Provider Backend
Ollama
Local model runner with an OpenAI-compatible API; the default backend for musegpt.
Provider Backend
OpenAI
Cloud API provider for GPT-4o, o1, and other OpenAI models via api.openai.com.
Provider Backend
OpenRouter
Cloud API aggregator providing access to many models from multiple providers through a single OpenAI-compatible endpoint.
Provider Backend
Together AI
Cloud API provider popular for open-weight models via api.together.xyz.
Provider Backend
vLLM
High-throughput LLM serving engine with OpenAI-compatible API.
Provider Backend
whisper.cpp
Local speech recognition engine for audio-to-text transcription in the musegpt audio pipeline.
Provider Backend
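Most of the provider backends above expose the same OpenAI-compatible chat-completions interface, so switching between them is largely a matter of changing the base URL and model name. A minimal sketch of that request shape (the base URLs are the providers' documented endpoints; the model names are illustrative examples, not musegpt configuration):

```python
# Sketch of an OpenAI-compatible chat-completions request. Only the
# base URL and model name differ between backends; the request body
# shape stays the same. Model names below are examples only.
import json
import urllib.request

BACKENDS = {
    "ollama": "http://localhost:11434/v1",      # local default
    "openai": "https://api.openai.com/v1",
    "groq": "https://api.groq.com/openai/v1",
}

def build_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(BACKENDS["ollama"], "llama3.2", "Hello!")
print(req.full_url)  # http://localhost:11434/v1/chat/completions
```

Cloud backends additionally require an `Authorization: Bearer <api-key>` header; non-OpenAI-compatible backends such as Anthropic use a different request shape entirely.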