## Overview

ScryCLI supports multiple AI providers and models for code analysis. You can select your preferred model through an interactive CLI interface or by editing the configuration file by hand.
## Available Providers

ScryCLI supports the following AI providers through OpenRouter:

| Provider | Models Available | Recommended Model |
|----------|------------------|-------------------|
| OpenAI   | 2 models | gpt-oss-120b |
| ArceeAI  | 2 models | Trinity Large |
| Qwen     | 2 models | Qwen3 Next 80B A3B Instruct |
| Mistral  | 1 model  | Mistral Small 3.1 24B |
| StepFun  | 1 model  | Step 3.5 Flash |
| Zai      | 1 model  | GLM 4.5 Air |

All models listed are free-tier models accessible through OpenRouter, subject to rate limits.
## Selecting a Model

### Using the CLI Command

1. **Open model selection.** While ScryCLI is running, enter the model selection command.

2. **Choose your provider.** Use the arrow keys to navigate and Enter to select:

   ```
   > OpenAI
     StepFun
     Qwen
     ArceeAI
     Mistral
     Zai
   ```

3. **Select a model.** After choosing a provider, select from the available models:

   ```
   > gpt-oss-120b (Recommended)
     gpt-oss-20b
   ```

4. **Configuration saved.** Your selection is automatically saved to `~/.scrycli/config.json`.
## Available Models

### OpenAI Models

```tsx
// From src/ui/SelectModel.tsx:21-24
openai: [
  { label: "gpt-oss-120b (Recommended)", value: "openai/gpt-oss-120b:free" },
  { label: "gpt-oss-20b", value: "openai/gpt-oss-20b:free" },
],
```

Recommended: `openai/gpt-oss-120b:free`, the best balance of capability and availability.

### ArceeAI Models

```tsx
arceeai: [
  { label: "Trinity Large (Recommended)", value: "arcee-ai/trinity-large-preview:free" },
  { label: "Trinity Mini", value: "arcee-ai/trinity-mini:free" },
],
```

Recommended: `arcee-ai/trinity-large-preview:free`, optimized for code tasks.

### Qwen Models

```tsx
qwen: [
  { label: "Qwen3 Coder 480B A35B", value: "qwen/qwen3-coder:free" },
  { label: "Qwen3 Next 80B A3B Instruct (Recommended)", value: "qwen/qwen3-next-80b-a3b-instruct:free" },
],
```

Recommended: `qwen/qwen3-next-80b-a3b-instruct:free`, with excellent instruction following.

### Mistral Models

```tsx
mistral: [
  { label: "Mistral Small 3.1 24B", value: "mistralai/mistral-small-3.1-24b-instruct:free" },
],
```

Single option: `mistralai/mistral-small-3.1-24b-instruct:free`

### StepFun Models

```tsx
stepfun: [
  { label: "Step 3.5 Flash", value: "stepfun/step-3.5-flash:free" },
],
```

Single option: `stepfun/step-3.5-flash:free`, with fast inference.

### Zai Models

```tsx
zai: [
  { label: "GLM 4.5 Air", value: "z-ai/glm-4.5-air:free" },
],
```

Single option: `z-ai/glm-4.5-air:free`
## Configuration Structure

When you select a model, ScryCLI updates your configuration:

```json
{
  "model": {
    "modelProvider": "openai",
    "modelName": "openai/gpt-oss-120b:free",
    "modelKey": "your-api-key"
  }
}
```
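The shape above can be captured as a type for reference. A minimal sketch, assuming the field names shown in the config excerpt (the interface names `ModelConfig` and `ScryConfig` are illustrative, not taken from the ScryCLI source):

```typescript
// Sketch: the config's "model" section as a TypeScript type (names assumed).
interface ModelConfig {
  modelProvider: string; // e.g. "openai"
  modelName: string;     // full model path, e.g. "openai/gpt-oss-120b:free"
  modelKey: string;      // OpenRouter API key
}

interface ScryConfig {
  model: ModelConfig;
}

const example: ScryConfig = {
  model: {
    modelProvider: "openai",
    modelName: "openai/gpt-oss-120b:free",
    modelKey: "your-api-key",
  },
};

console.log(example.model.modelProvider); // "openai"
```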
The configuration is set via:

```tsx
// From src/ui/SelectModel.tsx:68
setConfig('model', {
  modelProvider: selectedProvider,
  modelName: item.value
});
```
### Configuration Fields

| Field | Description | Example |
|-------|-------------|---------|
| `modelProvider` | Provider identifier | `"openai"`, `"qwen"`, `"arceeai"` |
| `modelName` | Full model path | `"openai/gpt-oss-120b:free"` |
| `modelKey` | Your API key | `"sk-or-v1-..."` |

You must set `modelKey` separately: the model selection command only sets `modelProvider` and `modelName`.
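Because the selection flow leaves `modelKey` untouched, adding the key afterwards has to preserve the provider and model that were already saved. A minimal sketch of that merge, assuming the config shape above (the `setModelKey` helper is hypothetical, not part of ScryCLI):

```typescript
// Hypothetical helper: merge an API key into an existing model config
// without clobbering the modelProvider/modelName set by the selector.
type ModelConfig = {
  modelProvider?: string;
  modelName?: string;
  modelKey?: string;
};

const setModelKey = (existing: ModelConfig, key: string): ModelConfig => ({
  ...existing,    // keep modelProvider and modelName as-is
  modelKey: key,  // only the key is added or updated
});

const updated = setModelKey(
  { modelProvider: "openai", modelName: "openai/gpt-oss-120b:free" },
  "sk-or-v1-example"
);

console.log(updated.modelName); // unchanged by the key update
```

The spread-then-override pattern mirrors what a partial config update needs to do: touch one field, leave the rest alone.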
## Manual Configuration

You can edit your model configuration by hand:

1. **Open the config file**

   ```bash
   nano ~/.scrycli/config.json
   ```

2. **Update the model settings**

   ```json
   {
     "model": {
       "modelProvider": "qwen",
       "modelName": "qwen/qwen3-next-80b-a3b-instruct:free",
       "modelKey": "sk-or-v1-your-key-here"
     }
   }
   ```

3. **Save and restart.** Changes take effect when you restart ScryCLI.
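Hand-editing JSON makes it easy to leave a trailing comma or an unquoted key behind. One quick way to sanity-check the file before restarting is to parse it with Node (assuming `node` is on your PATH, which it typically is for a Node-based CLI like ScryCLI):

```shell
# Validate that the edited config parses as JSON (exits non-zero on a syntax error)
node -e 'JSON.parse(require("fs").readFileSync(process.argv[1], "utf8")); console.log("config OK")' ~/.scrycli/config.json
```

If the file is malformed, the command prints a `SyntaxError` pointing at the offending position instead of `config OK`.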
## Model Selection Process

The model selection UI follows a two-step process:

```tsx
// From src/ui/SelectModel.tsx:44-75
const SelectModel = (props: selectModelType) => {
  const [selectedProvider, setSelectedProvider] = useState<string | null>(null);
  const [selectedModel, setSelectedModel] = useState<string | null>(null);

  // Step 1: Select provider
  if (!selectedProvider) {
    return <SelectInput items={providers} onSelect={setSelectedProvider} />;
  }

  // Step 2: Select model from provider's available models
  if (selectedProvider && !selectedModel) {
    return (
      <SelectInput
        items={modelsMap[selectedProvider]}
        onSelect={(item) => {
          setSelectedModel(item.value);
          setConfig('model', {
            modelProvider: selectedProvider,
            modelName: item.value
          });
        }}
      />
    );
  }
};
```
## Viewing Current Model

Your currently selected model is displayed in the ScryCLI interface:

```tsx
// From src/ui/InputBox.tsx:40
<Text color='yellow'>
  {`Model: ${config?.model?.modelName ?? 'Not Selected'}`}
</Text>
```

If no model is selected, you'll see "Model: Not Selected".
## Model Usage

The selected model is used when making AI calls:

```ts
// From src/model/openRouter.ts:19-23
const result = openRouterClient.callModel({
  model: `${getConfig().model.modelName}`,
  instructions: systemPrompt,
  input: `${prompt}\n\nFile Tree: ${fileTreeString}`,
});
```
## Validation Requirements

Before ScryCLI can make model calls, it validates:

```ts
// From src/lib/isModelSelected.ts:6-15
const isModelSelected = (): boolean => {
  const config = getConfig();
  if (!config?.model?.modelName) return false;
  if (!config?.model?.modelKey) return false;
  return config.model.modelName !== '' && config.model.modelKey !== '';
};
```

Both `modelName` and `modelKey` must be:

- Present in the configuration
- Non-empty strings
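The same check can be written as a pure function over a config object, which makes the two requirements easy to exercise in isolation. A sketch (the parameterized signature is an illustration; the real `isModelSelected` reads the config itself via `getConfig()`):

```typescript
// Sketch: the validation logic, parameterized over the config for testability.
type Config = { model?: { modelName?: string; modelKey?: string } };

const isModelSelected = (config: Config): boolean => {
  if (!config?.model?.modelName) return false; // must be present...
  if (!config?.model?.modelKey) return false;
  // ...and non-empty
  return config.model.modelName !== '' && config.model.modelKey !== '';
};

console.log(isModelSelected({ model: { modelName: "openai/gpt-oss-120b:free", modelKey: "sk-or-v1-x" } })); // true
console.log(isModelSelected({ model: { modelName: "openai/gpt-oss-120b:free" } })); // false: key missing
console.log(isModelSelected({})); // false: nothing configured
```

Note that the final non-empty check is technically redundant, since `!''` is already truthy and short-circuits above; it is kept here to mirror the source.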
## Choosing the Right Model

### For Code Analysis

Recommended: `qwen/qwen3-coder:free`

- Specialized for code understanding
- Excellent at parsing syntax
- Good for large codebases

### For General Tasks

Recommended: `openai/gpt-oss-120b:free`

- Balanced performance
- Good instruction following
- Widely available

### For Speed

Recommended: `stepfun/step-3.5-flash:free`

- Fastest inference
- Lower latency
- Good for quick queries
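The recommendations above boil down to a simple task-to-model mapping. A sketch, purely for illustration (the `Task` union and `recommendedModel` helper are hypothetical, not part of ScryCLI):

```typescript
// Hypothetical mapping of task type to the recommended free-tier model,
// following the guidance above.
type Task = "code" | "general" | "speed";

const RECOMMENDED: Record<Task, string> = {
  code: "qwen/qwen3-coder:free",          // specialized code understanding
  general: "openai/gpt-oss-120b:free",    // balanced all-rounder
  speed: "stepfun/step-3.5-flash:free",   // fastest inference
};

const recommendedModel = (task: Task): string => RECOMMENDED[task];

console.log(recommendedModel("speed")); // "stepfun/step-3.5-flash:free"
```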
## Troubleshooting

### Model Not Selected Error

If you see "Model: Not Selected":

1. **Verify the configuration**

   ```bash
   cat ~/.scrycli/config.json
   ```

   Ensure `model.modelName` exists.

2. **Add your API key.** Make sure `model.modelKey` is also set.

### Model Calls Failing

If model calls fail:

1. **Check the model name format.** It must be in the form `provider/model:tier`, for example `openai/gpt-oss-120b:free`.

2. **Verify your API key.**

3. **Check OpenRouter status.** Some models may be temporarily unavailable.
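A quick shape check for the `provider/model:tier` format can catch typos before a call ever fails. A sketch (the regex is an illustrative approximation of the ids shown in this document, not how ScryCLI or OpenRouter actually validate model names):

```typescript
// Sketch: rough shape check for OpenRouter-style model ids ("provider/model:tier").
const looksLikeModelId = (id: string): boolean =>
  /^[a-z0-9][\w.-]*\/[\w.-]+:[a-z]+$/i.test(id);

console.log(looksLikeModelId("openai/gpt-oss-120b:free")); // true
console.log(looksLikeModelId("z-ai/glm-4.5-air:free"));    // true
console.log(looksLikeModelId("gpt-oss-120b"));             // false: missing provider and tier
```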
## Next Steps

- **API Keys**: configure authentication for model access
- **Configuration Setup**: learn about the configuration system