All Documentation

Browse all 19 documents in the musegpt documentation.

Acknowledgments
Open-source projects and communities that musegpt builds on.
Meta Core
Anthropic
Cloud API provider for Claude models via the Anthropic Messages API, a non-OpenAI-compatible backend.
Provider Backend
Changelog
Version history and release notes for musegpt. Follows the Keep a Changelog format.
Meta Core
Configuration Model
Four-source configuration merge system with defaults, config file, environment variables, and plugin UI override for musegpt.
Architecture Configuration
DeepSeek
Cloud API provider for DeepSeek chat and reasoning models via api.deepseek.com.
Provider Backend
Fireworks AI
Cloud API provider for fast open-model inference via api.fireworks.ai.
Provider Backend
Google Gemini
Cloud API provider for Gemini models via the OpenAI-compatible endpoint at generativelanguage.googleapis.com.
Provider Backend
Groq
Cloud API provider for fast inference on Groq LPU hardware via api.groq.com.
Provider Backend
Inference Backend
Client interface, OpenAI-compatible API translation, streaming and structured response modes, per-request metrics, and transcription support.
Architecture Backend
Inter-Process Communication Model
Ports and adapters architecture for thread-to-thread and plugin-to-backend communication in musegpt.
Architecture Core
llama.cpp
High-performance local inference engine with OpenAI-compatible server mode.
Provider Backend
Mistral AI
Cloud API provider for Mistral models via api.mistral.ai.
Provider Backend
Ollama
Local model runner with an OpenAI-compatible API. The default backend for musegpt.
Provider Backend
OpenAI
Cloud API provider for GPT-4o, o1, and other OpenAI models via api.openai.com.
Provider Backend
OpenRouter
Cloud API aggregator providing access to many models from multiple providers through a single OpenAI-compatible endpoint.
Provider Backend
Threading Model
Three-thread architecture for real-time audio, responsive UI, and background inference in the musegpt VST3 plugin.
Architecture Core
Together AI
Cloud API provider for open-weight models via api.together.xyz.
Provider Backend
vLLM
High-throughput LLM serving engine with OpenAI-compatible API.
Provider Backend
whisper.cpp
Local speech recognition engine for audio-to-text transcription in the musegpt audio pipeline.
Provider Backend