Configuration

pAI MSc stores configuration in ~/.msc/config.yaml. You can manage it via the msc config CLI or by editing the file directly.


Quick Config via CLI

msc config get model
msc config set model claude-opus-4-6
msc config set budget_usd 50
msc config set output_format latex
msc config set counsel_enabled true

Environment Variables

API keys and service credentials are stored in ~/.msc/.env (created by msc setup). You can also set them as environment variables or in a project-level .env.
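As an alternative to the .env file, keys can be exported directly in your shell. A minimal sketch with placeholder values (substitute your real keys; any one provider key is enough to run):

```shell
# Placeholder values for illustration only -- replace with real keys.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-placeholder"

# Confirm the variables are visible to child processes.
printenv ANTHROPIC_API_KEY
```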

API Keys

| Variable | Required | Description |
| --- | --- | --- |
| ANTHROPIC_API_KEY | At least one | Claude models |
| OPENAI_API_KEY | At least one | GPT models |
| GOOGLE_API_KEY | At least one | Gemini models |
| DEEPSEEK_API_KEY | No | DeepSeek models |

Optional

| Variable | Description |
| --- | --- |
| CONSORTIUM_SLURM_ENABLED | Set to 1 to enable SLURM GPU job submission |
| CONSORTIUM_TEXLIVE_BIN | Path to TeX Live binaries |
| LANGCHAIN_TRACING_V2 | Set to true to enable LangSmith tracing |
| LANGCHAIN_API_KEY | LangSmith API key |
| SLACK_WEBHOOK_URL | Slack notifications |
| TELEGRAM_BOT_TOKEN | Telegram notifications (with TELEGRAM_CHAT_ID) |
| TINKER_API_KEY | Tinker GPU API access |

Never commit .env files.
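One way to enforce this is to make sure .env is listed in your project's .gitignore. A small sketch (idempotent, so it is safe to re-run):

```shell
# Keep credentials out of version control: ensure .env is git-ignored.
touch .gitignore
grep -qx ".env" .gitignore || echo ".env" >> .gitignore

# Verify the entry is present.
grep -qx ".env" .gitignore && echo "ok: .env ignored"
```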

LLM Config (.llm_config.yaml)

For advanced control, edit .llm_config.yaml in the project root. Settings there override the defaults in ~/.msc/config.yaml.

main_agents:
  model: claude-sonnet-4-6
  reasoning_effort: high
  budget_tokens: 128000

budget:
  usd_limit: 25
  hard_stop: true
  fail_closed: true

Budget alerts trigger at 85%, 95%, and 100% of the limit.
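For illustration, the dollar amounts at which those alerts would fire under the usd_limit of 25 shown above can be computed directly:

```shell
# Illustration only: alert amounts for usd_limit = 25 at the
# documented thresholds (85%, 95%, 100%).
limit=25
for pct in 85 95 100; do
  awk -v l="$limit" -v p="$pct" \
    'BEGIN { printf "%d%% alert at $%.2f\n", p, l * p / 100 }'
done
```

With a $25 limit, alerts fire at $21.25, $23.75, and $25.00.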

Counsel Mode

counsel:
  enabled: false
  max_debate_rounds: 3
  synthesis_model: claude-sonnet-4-6

Per-Agent Model Tiers

per_agent_models:
  enabled: false
  tiers:
    opus: { model: claude-opus-4-6, reasoning_effort: high }
    sonnet: { model: claude-sonnet-4-6, reasoning_effort: high }
    economy: { model: claude-sonnet-4-6, reasoning_effort: low }

Experiment Tools

run_experiment_tool:
  code_model: claude-sonnet-4-6
  feedback_model: claude-sonnet-4-6
  vlm_model: claude-sonnet-4-6
  report_model: claude-sonnet-4-6

Cluster Config (engaging_config.yaml)

For SLURM/HPC deployments. See HPC Setup for full details.

cluster:
  name: engaging
  orchestrator:
    partition: your_partition
    time: "7-00:00:00"
    cpus: 4
    mem: "32G"
  experiment_gpu:
    partition: your_gpu_partition
    time: "7-00:00:00"
    cpus: 8
    mem: "64G"
    gres: "gpu:a100:1"

CLI Reference

Core Commands

msc setup
msc run
msc doctor
msc status
msc logs
msc runs
msc resume
msc campaign
msc config
msc budget
msc notify
msc openclaw
msc install

Use msc <command> --help for details.

Run Options

| Flag | Default | Description |
| --- | --- | --- |
| --preset | standard | quick, standard, thorough, maximum |
| --task-file | | Path to a task file |
| --output-format | latex | latex or markdown |
| --mode | auto-detect | local, tinker, hpc |

Stage Name Aliases

| Alias | Full Stage Name |
| --- | --- |
| literature | literature_review_agent |
| experiment | experimentation_agent |
| analysis | results_analysis_agent |
| writeup | writeup_agent |
| proofread | proofreading_agent |

Research Presets

| Preset | Cost | Time | Best For |
| --- | --- | --- | --- |
| quick | $2-$5 | ~30 min | Testing, quick summaries, sanity checks |
| standard | $10-$25 | ~2 hrs | Most research questions, drafts |
| thorough | $40-$100 | ~6 hrs | Publication-quality drafts |
| maximum | $80-$200 | 12+ hrs | Rigorous manuscripts, comprehensive surveys |

Supported Models

| Provider | Models | Env Variable |
| --- | --- | --- |
| Anthropic | claude-opus-4-6, claude-sonnet-4-6 | ANTHROPIC_API_KEY |
| OpenAI | gpt-5, gpt-5-mini, gpt-5.4 | OPENAI_API_KEY |
| Google | gemini-3-pro-preview | GOOGLE_API_KEY |
| DeepSeek | deepseek-chat | DEEPSEEK_API_KEY |

Only one provider key is required. Multiple keys unlock counsel mode (multi-model debate).
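A quick shell check of that requirement (placeholder values for illustration; real runs read your keys from ~/.msc/.env):

```shell
# Placeholder keys so the sketch is self-contained.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-oai-placeholder"

# Count how many provider keys are set; two or more enables counsel mode.
count=0
for var in ANTHROPIC_API_KEY OPENAI_API_KEY GOOGLE_API_KEY DEEPSEEK_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    count=$((count + 1))
  fi
done
if [ "$count" -ge 2 ]; then
  echo "counsel mode available ($count keys set)"
fi
```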