Overview
The Playground is an interactive environment for testing and validating requests against models and routes configured in your tenant. It is commonly used to experiment with prompts, validate routing behavior, test tools and vector stores, and generate example requests for integration. The Playground is intended for development, debugging, and validation, not long‑running production workloads.
Layout
The Playground is divided into two main areas:
- Configurations panel (left) – Controls how the request is executed
- Test Key workspace (right) – Interactive chat and request execution area, with two modes:
  - Chat – Conversational testing
  - Compare – Side‑by‑side comparison of responses (when enabled)
Configurations Panel
The Configurations panel defines how each request is processed.
Virtual Key Source
Controls which virtual key is used for the request.
- Current UI Session – Uses your active session permissions.
- Other key sources may appear depending on tenant configuration.
Endpoint Type
Selects the API endpoint being exercised. Common examples include:
- /v1/chat/completions
- Other supported inference or embedding endpoints
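As a sketch of the request shape, an OpenAI‑compatible /v1/chat/completions body can be built as below. The model name "example-model" is a placeholder, not a value from this page:

```python
import json

# Minimal OpenAI-compatible chat completions request body.
# "example-model" is a placeholder; use a model or alias that your
# virtual key can access.
request_body = {
    "model": "example-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

print(json.dumps(request_body, indent=2))
```

The same body shape is reused by most chat-style endpoints, so it is a useful baseline when switching the Endpoint Type selector.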
Select Model
Chooses the target model or model alias. Available options depend on:
- Team and organization access
- Model access groups
- Virtual key restrictions
Tags
Attach one or more tags to the request. Use tags to:
- Attribute usage to projects or environments
- Filter logs and analytics
- Validate tag‑based governance rules
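The wire format for tags is gateway-specific; the "tags" body field below is a hypothetical illustration (use Get Code to see the exact syntax for your tenant):

```python
# Hypothetical "tags" field; the real attachment mechanism (body field,
# header, or metadata) depends on the gateway configuration.
request_body = {
    "model": "example-model",  # placeholder model name
    "messages": [{"role": "user", "content": "ping"}],
    "tags": ["project-alpha", "env:staging"],
}

# Tags like these can later be used to filter logs and analytics.
print(request_body["tags"])
```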
MCP Tool
Select one or more tools exposed via MCP servers. Typical use cases include:
- Tool‑augmented inference
- Function or action execution
- Integration with external systems
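OpenAI‑compatible chat endpoints commonly carry tool definitions in a "tools" array; how MCP-selected tools surface in the request is gateway-specific, and the get_weather function below is invented purely for illustration:

```python
import json

# Hypothetical tool definition in the common OpenAI-compatible "tools"
# shape. "get_weather" and its parameters are illustrative only.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request_body = {
    "model": "example-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [tool],
}

print(json.dumps(request_body, indent=2))
```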
Vector Store
Attach one or more vector stores to the request. Common scenarios include:
- Retrieval‑augmented generation
- Knowledge‑grounded responses
- Semantic search testing
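For retrieval-augmented requests, one common pattern is to reference attached stores by ID in the request body. The "vector_store_ids" field name and the IDs below are hypothetical; check the Get Code output for your tenant's actual syntax:

```python
# Hypothetical field name and IDs, shown only to illustrate the pattern
# of referencing attached vector stores from a request body.
request_body = {
    "model": "example-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Summarize our onboarding docs."}],
    "vector_store_ids": ["vs_handbook", "vs_faq"],
}

print(request_body["vector_store_ids"])
```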
Guardrails
Apply guardrails to enforce policy checks on the request and/or response. Guardrails can be used to:
- Validate inputs and outputs
- Enforce safety constraints
- Test policy behavior before production
Test Key Workspace
The right‑hand workspace is where requests are executed and responses are displayed.
Empty State
When no messages have been sent, a placeholder prompts you to:
- Start a conversation
- Generate an image
- Test audio or multimodal behavior (when supported)
Message Composer
The input box at the bottom allows you to enter messages.
- Shift + Enter inserts a new line
- Submit sends the request using the current configuration
Actions
At the top of the workspace:
- Clear Chat – Resets the current conversation.
- Get Code – Generates a ready‑to‑use code example that matches the current Playground configuration, including:
  - Selected endpoint
  - Model or alias
  - Tools, guardrails, and vector stores
  - Tags and routing behavior
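A generated snippet typically bundles the endpoint URL, auth header, and request body into one runnable call. The sketch below constructs such a request without sending it; the base URL, key, and Bearer-style auth header are placeholders following common convention, not confirmed values for this gateway:

```python
import json
import urllib.request

BASE_URL = "https://gateway.example.com"  # placeholder, not a real endpoint
API_KEY = "sk-test-123"                   # placeholder virtual key

body = {
    "model": "example-model",  # placeholder model or alias
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Build the request object; urlopen(req) would execute it, but we skip
# that here because the URL above is a placeholder.
req = urllib.request.Request(
    url=f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

print(req.get_method(), req.full_url)
```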
Compare Mode
When enabled, Compare mode allows you to:
- Test the same input against multiple models or routes
- Visually compare responses
- Evaluate output differences and behavior
Typical scenarios include:
- Migrating models
- Testing routers or aliases
- Evaluating prompt changes
Common Use Cases
- Validate a model or router before production rollout
- Test guardrails and policy behavior
- Experiment with prompts and system instructions
- Verify tool and vector store integration
- Generate example code for developers
- Debug routing, access, or permission issues
Best Practices
- Match Playground configuration to production keys for accurate results
- Use tags consistently to trace tests in logs
- Clear chat when switching models or configurations
- Use Get Code only after finalizing settings
- Validate behavior with and without guardrails when required
Related Sections
- Models – Configure and manage available models
- Virtual Keys – Control access and limits
- Router Settings – Define routing behavior
- Guardrails – Manage policy enforcement
- Usage & Logs – Inspect Playground activity

