Hub is a next-generation smart proxy for LLM applications. It centralizes control and tracing of all your LLM calls. It's built in Rust, so it's fast and efficient, and it's completely open source and free to use.

Documentation Index
Fetch the complete documentation index at: https://enrolla-nk-hub-guardrails.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Installation
Local
- Clone the repo.
- Copy the config-example.yaml file to config.yaml and set the correct values (see below for more information).
- Run the hub by running cargo run in the root directory.
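The values you set in config.yaml tell the hub which providers, models, and pipelines to expose. As a rough sketch of the shape such a file can take (the provider key, model name, and pipeline layout below are illustrative; treat config-example.yaml in the repo as the authoritative schema):

```yaml
# Illustrative sketch only - field names and structure may differ from
# the real schema in config-example.yaml.
providers:
  - key: openai
    type: openai
    api_key: "<your-openai-api-key>"

models:
  - key: gpt-4o
    type: gpt-4o
    provider: openai

pipelines:
  - name: default
    type: chat
    plugins:
      - model-router:
          models:
            - gpt-4o
```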
With Docker
Traceloop Hub is available as a Docker image named traceloop/hub. Make sure to create a config.yaml file following the configuration instructions.
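A minimal way to run the image is to mount your config.yaml into the container and publish the hub's port. The port number and in-container config path below are assumptions; adjust them to match your deployment:

```shell
# Assumptions: the hub listens on port 3000 and reads its config from the
# mounted path - adjust both to your setup.
docker run --rm \
  -p 3000:3000 \
  -v "$(pwd)/config.yaml:/etc/hub/config.yaml:ro" \
  traceloop/hub
```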
Connecting to Hub
After running the hub and configuring it, you can start using it to invoke available LLM providers. Its API is the standard OpenAI API, so you can use it as a drop-in replacement for your LLM calls. You can invoke different pipelines by passing the x-traceloop-pipeline header. If none is specified, the default pipeline will be used.
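Since the hub speaks the standard OpenAI chat completions API, a call is just an ordinary OpenAI-style request with one extra header. The sketch below builds such a request; the base URL and port, the pipeline name, and the model name are all assumptions to be replaced with your own values:

```python
import json

# Assumed hub address - point this at wherever your hub instance runs.
HUB_BASE_URL = "http://localhost:3000/api/v1"

def build_chat_request(model, messages, pipeline=None):
    """Build the URL, headers, and JSON body for an OpenAI-style
    chat completion call routed through the hub.

    Passing `pipeline` sets the x-traceloop-pipeline header; leaving it
    out lets the hub fall back to its default pipeline.
    """
    headers = {"Content-Type": "application/json"}
    if pipeline is not None:
        headers["x-traceloop-pipeline"] = pipeline
    body = json.dumps({"model": model, "messages": messages})
    return f"{HUB_BASE_URL}/chat/completions", headers, body

url, headers, body = build_chat_request(
    "gpt-4o",  # illustrative model name
    [{"role": "user", "content": "Hello"}],
    pipeline="my-pipeline",  # hypothetical pipeline name
)
print(headers["x-traceloop-pipeline"])  # -> my-pipeline
```

In practice you would more likely use the official OpenAI SDK, pointing its base_url at the hub and passing the header through the client's extra_headers option, since the hub is a drop-in replacement for the OpenAI endpoint.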

