Drop files or folders here

PDF, DOCX, XLSX, TXT, code files, entire project folders, images
DO Inference Studio Beta

What can I help you with?

Powered by DigitalOcean Serverless Inference: choose from 40+ models, including Claude, GPT, Llama, DeepSeek, and more. Attach files or entire project folders.


Model Parameters

Temperature: Controls randomness. Lower = more focused, higher = more creative.
Value: 0.7 (range 0–2: Precise / Balanced / Creative)
Top P: Nucleus sampling threshold.
Value: 1.0 (range 0–1)
Top K: Limits token selection to the K most likely tokens. 0 = disabled.
Value: 0 (range 0–200: Off / Moderate / Restrictive)
Frequency Penalty: Penalises repeated tokens.
Value: 0.0 (range -2 to 2)
Max Completion Tokens: Maximum tokens in the response. Uses max_completion_tokens (not the deprecated max_tokens).
Value: 2K

Check your model's context window limit.
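Assuming the studio forwards these sliders to an OpenAI-compatible chat-completions endpoint (an assumption about DigitalOcean Serverless Inference, not confirmed here), the settings above would map to a request body along these lines; the model id is a placeholder:

```python
# Sketch of a request body built from the panel's values.
# Parameter names follow the OpenAI-compatible chat-completions schema.
payload = {
    "model": "llama3.3-70b-instruct",  # placeholder model id (assumption)
    "messages": [
        {"role": "user", "content": "What can I help you with?"},
    ],
    "temperature": 0.7,             # 0-2: lower = focused, higher = creative
    "top_p": 1.0,                   # nucleus sampling threshold, 0-1
    "frequency_penalty": 0.0,       # -2 to 2: penalises repeated tokens
    "max_completion_tokens": 2048,  # "2K"; replaces the deprecated max_tokens
}
# Top K is 0 (disabled) in the panel, so no top_k field is sent.
```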

Enable Prompt Caching: Reuses input context from previous requests to reduce latency and cost. Anthropic: per-message cache_control. OpenAI: activates at ≥1024 tokens (best-effort).
Enable Reasoning: Sends reasoning_effort (OpenAI) or thinking.budget_tokens (Anthropic). Increases quality on complex tasks.
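The two toggles above are provider-specific. A minimal sketch of how they could translate into extra request fields, assuming the Anthropic and OpenAI field names described in the toggle text (the system prompt text and effort/budget values are illustrative assumptions):

```python
def provider_extras(provider: str) -> dict:
    """Return extra request fields for caching + reasoning, per provider."""
    if provider == "anthropic":
        return {
            # Prompt caching: cache_control is attached per content block.
            "system": [{
                "type": "text",
                "text": "You are a helpful assistant.",  # placeholder prompt
                "cache_control": {"type": "ephemeral"},
            }],
            # Reasoning: extended thinking with an explicit token budget.
            "thinking": {"type": "enabled", "budget_tokens": 1024},
        }
    # OpenAI: caching is automatic (best-effort) once the prompt reaches
    # ~1024 tokens, so only the reasoning knob is sent explicitly.
    return {"reasoning_effort": "medium"}
```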

API Logs

Requests: 0 (this session)
Total Tokens: 0 (prompt + completion)
Cache Hits: — (tokens read from cache)
Avg Latency: — (per request)
Errors: 0 (failed requests)
No API calls yet. Logs appear after your first message.
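The panel's summary stats are simple aggregates over per-request log records. A sketch with illustrative field names (the record shape is an assumption, not the studio's actual log format):

```python
# Two example per-request records (values are made up for illustration).
logs = [
    {"prompt_tokens": 900, "completion_tokens": 300, "cached_tokens": 0,
     "latency_ms": 820, "error": False},
    {"prompt_tokens": 1500, "completion_tokens": 400, "cached_tokens": 1024,
     "latency_ms": 610, "error": False},
]

requests = len(logs)                                   # Requests
total_tokens = sum(r["prompt_tokens"] + r["completion_tokens"] for r in logs)
cache_hits = sum(r["cached_tokens"] for r in logs)     # tokens read from cache
errors = sum(r["error"] for r in logs)                 # failed requests
ok_latencies = [r["latency_ms"] for r in logs if not r["error"]]
avg_latency = sum(ok_latencies) / len(ok_latencies) if ok_latencies else None
```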

Delete conversation?

This cannot be undone.