Playground
The LLMGrid Playground is an interactive web interface that lets you experiment with prompts, compare responses across multiple LLMs, and visualize routing strategies, all without writing code.

Key Features
- Prompt Testing: Enter prompts and view responses from different models side by side.
- Multi-Model Comparison: Compare latency, cost, and output quality across providers (OpenAI, Anthropic, Azure OpenAI, Google Gemini, etc.).
- Routing Simulation: Preview how your configured routing strategy (cost-optimized, latency-optimized, balanced) selects models.
- Streaming Support: See token-by-token generation in real time for streaming-enabled models.
- Usage Metrics: Track token counts, estimated cost, and latency for each request.
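The usage metrics above boil down to simple arithmetic over token counts. A minimal sketch of the kind of cost estimate the Playground displays; the `PRICES_PER_1K` table and `estimate_cost` helper are illustrative assumptions, not LLMGrid's actual pricing or API:

```python
# Hypothetical (input, output) USD prices per 1K tokens -- not real pricing.
PRICES_PER_1K = {
    "gpt-4o": (0.0025, 0.01),
    "claude-sonnet": (0.003, 0.015),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request: tokens scaled by per-1K prices."""
    in_price, out_price = PRICES_PER_1K[model]
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

print(round(estimate_cost("gpt-4o", 1000, 500), 4))  # 0.0075
```

Because the estimate depends only on reported token counts, it matches what you would compute yourself from the per-request metrics panel.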
How to Access
- Navigate to Playground in the LLMGrid Console: https://app.llmgrid.ai/playground
- Select:
  - Tenant and Project
  - Route or Direct Model
- Optional: Adjust temperature, max tokens, and other parameters.
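The optional parameters map onto fields of an OpenAI-style chat request. A minimal sketch of that payload, assuming OpenAI-compatible field names and that a route is addressed through the standard `model` field:

```python
import json

# Sketch of the request body the Playground sends when you adjust
# parameters. Field names follow the OpenAI chat-completions convention.
payload = {
    "model": "chat_default",  # route name or a direct model id
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,       # sampling randomness; lower = more deterministic
    "max_tokens": 256,        # cap on generated tokens
}
print(json.dumps(payload, indent=2))
```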
Example Workflow
- Choose a route: chat_default
- Enter a prompt.
- Click Run to view:
  - Responses from all models in the route
  - Latency and token usage per model
  - The routing decision logic applied
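The per-model comparison in the workflow above amounts to calling each model in the route and recording latency and token usage. A sketch under stated assumptions: `call_model` below is a stand-in stub, where a real client would hit the LLMGrid API instead:

```python
import time

def call_model(model: str, prompt: str) -> dict:
    """Stub standing in for a real API call; returns text and token counts."""
    return {
        "text": f"[{model}] reply",
        "input_tokens": len(prompt.split()),
        "output_tokens": 2,
    }

def compare(models: list[str], prompt: str) -> list[dict]:
    """Call each model in a route, timing each request."""
    results = []
    for model in models:
        start = time.perf_counter()
        reply = call_model(model, prompt)
        latency_ms = (time.perf_counter() - start) * 1000
        results.append({"model": model, "latency_ms": latency_ms, **reply})
    return results

for row in compare(["gpt-4o", "claude-sonnet"], "Summarize this release note"):
    print(row["model"], f"{row['latency_ms']:.2f} ms", row["output_tokens"], "tokens")
```

The Playground renders the same per-model rows (response, latency, tokens) side by side, plus the routing decision that would apply in production.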
Why Use Playground?
- Validate prompt templates before deploying to production.
- Benchmark models for cost and performance.
- Debug routing policies and fallback behavior.
Tip: The Playground uses the same OpenAI-compatible API as your apps, so results mirror real API calls.
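Since the Playground mirrors the OpenAI-compatible API, a Playground session can be reproduced from code. A stdlib-only sketch; the endpoint URL, route name, and placeholder key are assumptions to substitute with your own values:

```python
import json
import urllib.request

# Build (but do not send) the same request the Playground makes.
req = urllib.request.Request(
    "https://api.llmgrid.ai/v1/chat/completions",  # hypothetical endpoint
    data=json.dumps({
        "model": "chat_default",  # route name, as in the Playground
        "messages": [{"role": "user", "content": "Hello from the API"}],
    }).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # uncomment with a real key to send
print(req.full_url)
```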