This guide covers the Managed setup — Majordomo runs the gateway, you point your SDK at it. If you need to run Steward inside your own VPC, see Self-hosted Setup.

1. Create an account and API key

Sign up at app.gomajordomo.com. From the dashboard, go to API Keys and create your first key. Your key has the format mdm_sk_.... Store it in your secrets manager — it’s shown once at creation time.

2. Update your SDK configuration

Majordomo acts as a transparent proxy. Change the base URL, add one header. Your existing provider API key is passed through unchanged.
import os
from openai import OpenAI

client = OpenAI(
    # Route all requests through the Majordomo gateway
    base_url="https://gateway.gomajordomo.com/v1",
    # Your existing provider key — forwarded to the provider unchanged
    api_key=os.environ["OPENAI_API_KEY"],
    # Identifies your Majordomo account to the gateway
    default_headers={"X-Majordomo-Key": os.environ["MAJORDOMO_API_KEY"]},
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
Set the environment variable in your deployment environment:
MAJORDOMO_API_KEY=mdm_sk_your_key_here
The gateway returns responses identically to calling the provider directly — streaming, function calling, and all provider-specific parameters are passed through unchanged.

3. Verify in the dashboard

Open the Majordomo dashboard. Your request appears with model, token counts, cost, and latency. No polling, no setup — the log is there as soon as the request completes. From here you can manage API keys, tag requests for cost attribution, run replays against candidate models, and build eval sets from production traffic.

Next steps

Attribute costs by team or feature

Tag requests with custom metadata and break down spend across any dimension.
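At the HTTP level, tagging amounts to attaching metadata alongside the usual gateway headers. The sketch below uses only the standard library; the `X-Majordomo-Tags` header name is an assumption for illustration — see the cost-attribution docs for the actual mechanism.

```python
import json
import os
import urllib.request

# Hypothetical header name for illustration only; the real
# tagging mechanism is documented in the cost-attribution guide.
TAGS_HEADER = "X-Majordomo-Tags"

def tagged_request(prompt: str, tags: dict) -> urllib.request.Request:
    """Build (but don't send) a gateway request carrying cost-attribution tags."""
    body = json.dumps({
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://gateway.gomajordomo.com/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Provider key, passed through unchanged
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "X-Majordomo-Key": os.environ.get("MAJORDOMO_API_KEY", ""),
            TAGS_HEADER: json.dumps(tags),
        },
    )

req = tagged_request("Hello", {"team": "search", "feature": "autocomplete"})
```

Once tagged, spend can be broken down in the dashboard along any of the attached dimensions.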

Test a model switch

Replay production traffic against a candidate model before committing to the change.

Self-hosted Steward

Run Steward in your own VPC. Prompts never leave your infrastructure.

All SDKs and frameworks

Integration examples for every supported SDK, including Pydantic AI and majordomo-llm.