ScryCLI is an AI-powered CLI tool that interprets natural language commands and executes file operations on your codebase. It combines React/Ink for the UI, AI models for language understanding, and a structured tool execution system.
## Architecture Overview

ScryCLI is built on three core pillars:

- **React/Ink Terminal UI** - Interactive command-line interface
- **AI Integration** - Natural language processing via the OpenRouter SDK
- **Tool Execution System** - Automated file operations based on AI responses
## React/Ink User Interface

ScryCLI uses Ink to build its terminal UI with React components, enabling a rich, interactive CLI experience.
### Application Flow

The main app follows a state-based flow:
```tsx
const App = () => {
  const [authed, setAuthed] = useState<boolean>(isAuthenticated());
  const [modelSelected, setModelSelected] = useState<boolean>(isModelSelected());

  return (
    <>
      {!authed && (
        <Box flexDirection="column" width="100%">
          <Header />
          <Welcome />
          <Auth onAuthenticated={() => setAuthed(true)} />
        </Box>
      )}
      {authed && !modelSelected && (
        <Box flexDirection="column" width="100%">
          <Header />
          <Welcome />
          <SelectModel onDone={() => setModelSelected(true)} />
        </Box>
      )}
      {authed && modelSelected && (
        <Box flexDirection="column" width="100%">
          <Welcome />
          <Footer />
          <InputBox />
        </Box>
      )}
    </>
  );
};
```
The app entry point is a simple Node.js script that renders the React app:

```tsx
#!/usr/bin/env node
import { render } from 'ink';
import App from '../ui/App.js';

render(<App />);
```
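For the shebang to matter, the script has to be exposed as an executable. A minimal sketch of the `package.json` wiring — the entry path and command name here are assumptions, not ScryCLI's actual values:

```json
{
  "name": "scrycli",
  "bin": {
    "scrycli": "./dist/bin/cli.js"
  }
}
```

With a `bin` entry like this, `npm install -g` (or `npm link` during development) puts the command on the user's `PATH`.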
### Key UI Components

- **Welcome** - Displays the SCRYCLI banner using `ink-big-text`
- **Auth** - Handles user authentication
- **SelectModel** - Interactive model selection menu
- **InputBox** - Main prompt input and response display
- **AnswerDisplay** - Renders AI responses and tool execution results
## AI Integration

ScryCLI uses the `useChat` hook to communicate with AI models:
```ts
export function useChat() {
  const [answer, setAnswer] = useState("");
  const [finalAnswer, setFinalAnswer] = useState("");
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState("");

  const send = async (prompt: string) => {
    setAnswer("");
    setError("");
    setLoading(true);
    try {
      const config = getConfig();
      const text = await llmCall({
        prompt,
        systemPrompt: systemPrompt as string,
      });
      setAnswer(text);
      setFinalAnswer(text);
    } catch (e: any) {
      setError(e?.message || "Something went wrong.");
    } finally {
      setLoading(false);
    }
  };

  return { answer, finalAnswer, loading, error, send, setFinalAnswer };
}
```
### Context Enhancement

The AI receives enhanced context with the current directory's file tree:
```ts
const fileTreeString = getFileTree(process.cwd()).join('\n');

export async function llmCall({ prompt, systemPrompt }: llmCallParams): Promise<string> {
  const result = openRouterClient.callModel({
    model: `${getConfig().model.modelName}`,
    instructions: `${systemPrompt}`,
    input: `${prompt}\n\nFile Tree: ${fileTreeString}`,
  });
  const text = await result.getText();
  return text;
}
```
The file tree is automatically appended to every user prompt, giving the AI full context of the project structure.
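The docs don't show `getFileTree` itself; a minimal sketch of what it might look like, assuming a synchronous recursive walk that returns paths relative to the project root. The `IGNORED` set is an assumption — skipping heavy directories keeps the prompt small:

```typescript
import fs from "node:fs";
import path from "node:path";

// Directories excluded from the tree (assumed; not ScryCLI's actual list).
const IGNORED = new Set(["node_modules", ".git", "dist"]);

export function getFileTree(root: string, base: string = root): string[] {
  const entries = fs.readdirSync(root, { withFileTypes: true });
  const files: string[] = [];
  for (const entry of entries) {
    if (IGNORED.has(entry.name)) continue;
    const full = path.join(root, entry.name);
    if (entry.isDirectory()) {
      // Recurse, keeping paths relative to the original base directory.
      files.push(...getFileTree(full, base));
    } else {
      files.push(path.relative(base, full));
    }
  }
  return files;
}
```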
## Tool Execution System

The `useToolExecutor` hook parses AI responses and executes file operations:
```ts
export function useToolExecutor(answer: string, loading: boolean) {
  const [result, setResult] = useState("");

  useEffect(() => {
    if (loading) return;

    // Strip any markdown code fences the model may have added anyway.
    const clean = answer.replace(/```json|```/g, "").trim();
    if (!clean.startsWith("{") || !clean.endsWith("}")) return;

    try {
      const instruction = JSON.parse(clean);
      if (!instruction.action) return;

      switch (instruction.action) {
        case 'create_file':
          createFile(instruction.file, instruction.content);
          break;
        case 'read_file':
          console.log(readFile(instruction.file));
          break;
        case 'write_file':
          writeFile(instruction.file, instruction.content);
          break;
        case 'delete_file':
          deleteFile(instruction.file);
          break;
      }
    } catch (e: any) {
      console.error(`Error executing tool: ${e.message}`);
    }
  }, [loading, answer]);

  return result;
}
```
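The helpers the executor calls (`createFile`, `readFile`, `writeFile`, `deleteFile`) are not shown above; a plausible sketch on top of `node:fs`, where the parent-directory handling in `createFile` is an assumption:

```typescript
import fs from "node:fs";
import path from "node:path";

// Hypothetical implementations of the file-operation helpers; the names
// match the hook above, but the bodies are sketches, not ScryCLI's code.
export function createFile(file: string, content: string): void {
  // Create any missing parent directories before writing.
  fs.mkdirSync(path.dirname(file), { recursive: true });
  fs.writeFileSync(file, content, "utf-8");
}

export function readFile(file: string): string {
  return fs.readFileSync(file, "utf-8");
}

export function writeFile(file: string, content: string): void {
  fs.writeFileSync(file, content, "utf-8");
}

export function deleteFile(file: string): void {
  // force: true makes deletion of an already-missing file a no-op.
  fs.rmSync(file, { force: true });
}
```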
### System Prompt

The AI is instructed to return structured JSON responses.

**Example response (write/update a file):**

```json
{
  "action": "create_file",
  "file": "public/game.html",
  "content": "<!DOCTYPE html><html>...</html>"
}
```

The AI must return only valid JSON without markdown code blocks or extra text. Output must start with `{` and end with `}`.
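That contract can be validated before anything touches the filesystem. A hypothetical type guard sketching the check — the names and the optional `content` field are assumptions drawn from the examples above, not ScryCLI's API:

```typescript
// Actions the executor understands, per the switch statement above.
type ToolAction = "create_file" | "read_file" | "write_file" | "delete_file";

interface ToolInstruction {
  action: ToolAction;
  file: string;
  content?: string; // only meaningful for create/write
}

const ACTIONS = new Set(["create_file", "read_file", "write_file", "delete_file"]);

export function parseInstruction(raw: string): ToolInstruction | null {
  const clean = raw.trim();
  // Enforce the "starts with { and ends with }" rule from the system prompt.
  if (!clean.startsWith("{") || !clean.endsWith("}")) return null;
  try {
    const parsed = JSON.parse(clean);
    if (typeof parsed.action !== "string" || !ACTIONS.has(parsed.action)) return null;
    if (typeof parsed.file !== "string") return null;
    return parsed as ToolInstruction;
  } catch {
    return null; // malformed JSON is rejected rather than thrown
  }
}
```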
## Configuration Management

ScryCLI stores configuration in `~/.scrycli/config.json`:
```ts
const configPath = path.join(os.homedir(), ".scrycli", "config.json");

export const getConfig = () => {
  const config = JSON.parse(fs.readFileSync(configPath, "utf-8"));
  return config;
};

export const setConfig = (key: string, value: any) => {
  const config = getConfig();
  config[key] = value;
  fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
};
```

The config stores:

- The selected AI model provider and name
- API keys
- User authentication state
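Both `getConfig` and `setConfig` assume the file already exists, so something has to create it on first run. A sketch of what that guard could look like — the function name, the optional directory parameter, and the empty default config are all assumptions for illustration:

```typescript
import fs from "node:fs";
import path from "node:path";
import os from "node:os";

// Hypothetical first-run guard: make sure the config directory and a
// parseable config file exist before getConfig/setConfig are called.
export function ensureConfig(
  baseDir: string = path.join(os.homedir(), ".scrycli")
): string {
  const configPath = path.join(baseDir, "config.json");
  fs.mkdirSync(baseDir, { recursive: true }); // no-op if it already exists
  if (!fs.existsSync(configPath)) {
    // Seed an empty object so later JSON.parse calls succeed.
    fs.writeFileSync(configPath, JSON.stringify({}, null, 2));
  }
  return configPath;
}
```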
## Command System

ScryCLI supports slash commands for special operations:
```ts
const handleSubmit = (val: string) => {
  if (val.startsWith("/")) {
    setActiveCmd(val as CommandName);
    return;
  }
  send(val);
};
```
Available commands include:

- `/exit` - Exit the application
- `/logout` - Log out of the current session
- `/report` - Report issues or feedback

Press `ESC` to exit any command and return to the main prompt.
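The branching in `handleSubmit` above can be captured as a pure function, which makes the dispatch rule easy to test in isolation. A hypothetical sketch (the names here are not ScryCLI's actual API):

```typescript
// Anything starting with "/" is routed to the command system;
// everything else is sent to the AI as a prompt.
type Dispatch =
  | { kind: "command"; name: string }
  | { kind: "prompt"; text: string };

export function dispatchInput(val: string): Dispatch {
  if (val.startsWith("/")) {
    return { kind: "command", name: val };
  }
  return { kind: "prompt", text: val };
}
```

Keeping the rule pure like this separates routing from the React state updates (`setActiveCmd`, `send`) that act on the result.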