sciClaw
PHI Mode Explained

Keep the private work local. Keep the rest flexible.

PHI Mode gives sciClaw a local brain on your own machine. New users can think of it like this: the same sciClaw workflow, the same folders, the same app, but the model runs through Ollama + Qwen 3.5 instead of a cloud provider.

It is built for people who work with sensitive material, such as private notes or protected health information, or who simply want a local channel that does not leave the machine.

No cloud hop: your prompt and model run stay on the local machine.
Same sciClaw workflow: you still work in folders, channels, and the app.
Ollama only: this build supports Ollama as the local backend.
Per-room control: you can keep one channel local and another on cloud.
Choose the right mode

Three ways new users usually work

You do not have to choose “all local” forever. Most people end up with a mixed setup: cloud for broad research, PHI for private channels, and one shared sciClaw workspace tying the work together.
Everyday research

Cloud mode

Best when you want the strongest reasoning and fastest model iteration, and privacy is not the main constraint.

  • Great for literature review and broad synthesis
  • Usually the best choice for hard tool-heavy tasks
  • Not the right default for sensitive clinical work
Everything local

Global PHI mode

Best when the whole machine should stay local. Every chat uses your Ollama-hosted Qwen model.

  • Simple mental model: this machine is local
  • Great for dedicated privacy-sensitive setups
  • You can still switch back to cloud instantly later
One room at a time

Per-room PHI routing

Best for mixed use: keep a private channel local and everything else on cloud or default behavior.

  • Ideal for a “phi” channel in Discord
  • Keeps local mode where it matters
  • Usually the easiest setup for new users
Visual guide

What happens when you send one message?

The workflow is easier to trust when you can picture it. This is the full PHI path in plain English.
1. You write a message

Usually from the app or a routed chat channel tied to a workspace folder.

2. sciClaw checks the room

It decides whether this room uses default, cloud, VM, or local PHI mode.

3. Ollama runs locally

Qwen 3.5 answers on your machine. This is the privacy boundary new users usually care about.

4. Files and tools still work

sciClaw can still read, write, and organize the workspace like normal.

5. You get a reply

The result comes back into the same chat flow, but the model call stayed local.

The first reply can be slower because the model may need to load into memory. After that, the machine usually feels much snappier.
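The routing decision at the heart of this flow can be pictured as a small lookup. This is an illustrative sketch only: the function name, the rule shapes, and the return values are hypothetical, not sciClaw's real internals.

```python
# Hypothetical sketch of the per-message routing decision.
# resolve_backend, ROOM_RULES, and GLOBAL_MODE are illustrative names.

GLOBAL_MODE = "default"  # "default", "cloud", or "phi" (global PHI mode)

ROOM_RULES = {
    # One private room pinned to the local path.
    "phi-channel": {"mode": "phi", "backend": "ollama"},
}

def resolve_backend(room: str) -> str:
    """Return where this room's model call should run."""
    rule = ROOM_RULES.get(room)
    if rule and rule["mode"] == "phi":
        return "local-ollama"   # stays on the machine
    if GLOBAL_MODE == "phi":
        return "local-ollama"   # global PHI mode: everything local
    return "cloud"              # default path leaves the machine

print(resolve_backend("phi-channel"))  # local-ollama
print(resolve_backend("research"))     # cloud
```

The point of the sketch is the ordering: a per-room rule wins first, the global mode second, and only then does the message take the cloud path.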
First-time setup

Your first ten minutes in PHI Mode

The goal is not to memorize commands. The goal is to know what is happening and what “good” looks like.

Open the PHI Mode tab

In the sciClaw app, go to the PHI Mode tab. This is where local mode is managed for normal users.

You should see the local backend, model, preset, and backend health area.

Press Setup

Use p to run setup. sciClaw detects your hardware, chooses a Qwen size, pulls the model, and validates the runtime.

For most people, the suggested model is the right starting point.

Run the local check

Use e to run the local eval. It checks plain text, JSON, and tool calls so you can tell whether the machine is ready.

This is the fastest way to tell the difference between “installed” and “actually usable.”
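That "installed" versus "actually usable" distinction can be pictured as a tiny readiness check over the three things the eval exercises. This sketch is purely illustrative: the function name and result shape are hypothetical, but the three checks mirror the ones named above.

```python
# Hypothetical sketch of how the local eval's three checks could be
# summarized. eval_verdict and the results dict are illustrative,
# not sciClaw's real API.

def eval_verdict(results):
    """Collapse the three local checks into a readiness verdict."""
    required = ("plain_text", "json", "tool_calls")
    passed = [name for name in required if results.get(name)]
    if len(passed) == len(required):
        return "actually usable"
    if passed:
        return "installed, but only partly usable"
    return "not ready"

# A machine that answers plain text but fumbles tool calls is
# installed, not actually usable.
print(eval_verdict({"plain_text": True, "json": True, "tool_calls": False}))
```

A partial pass usually means the model runs but tool-heavy rooms will misbehave, which is exactly what the eval is there to catch before you route real work to it.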

PHI Mode is not a separate product. It is the same sciClaw, the same folders, and the same workflow — with the model moved onto your machine.
Most useful real-world pattern

Route one private room to local mode

This is the pattern that makes PHI Mode practical for new users: keep your general work flexible, and reserve local mode for the room that actually needs it.
When to choose this

Create a dedicated private channel

A simple example is a Discord channel like #phi-channel or a folder-specific room for private drafting.

1. Keep global mode on cloud or default

So your ordinary rooms keep using the stronger cloud path.

2. Open Routing in the app

Select the room mapping you want to isolate.

3. Set that room to PHI

Choose mode phi, backend ollama, then keep the suggested model unless you have a reason to change it.

4. Apply and test

Ask a simple question in the room. If it replies, the boundary is working.
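In config terms, the mapping above might look something like this. This is a hypothetical sketch, not sciClaw's real configuration schema; only the mode, backend, and model values come from the steps above, and the room names and model tag are illustrative.

```json
{
  "routing": {
    "rooms": {
      "phi-channel": {
        "mode": "phi",
        "backend": "ollama",
        "model": "qwen3.5"
      },
      "research": { "mode": "default" }
    }
  }
}
```

Every room without an entry inherits the global mode, so only the one private room is pinned to the local path.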
Why people like it

It gives you a clean mental model

New users usually do better with room-level rules than machine-level theory. One room is private. One room is general. You know which is which.

  • Private notes stay in the local room
  • General research can still use cloud models
  • You do not need to switch your whole machine back and forth all day
  • It works especially well for teams that want one “safe lane” inside a busier setup
Reality check

What good local mode feels like

PHI Mode is strongest when expectations match hardware. Good local mode feels private, steady, and predictable — not necessarily identical to the best cloud model.

Good interactive fit

Usually Apple Silicon or GPU-backed machines with enough RAM/VRAM.

  • Short replies feel quick
  • Tool calls are usable
  • Drafting and cleanup feel natural

Fallback-only fit

Usually CPU-only machines. Still useful, but this is the slower lane, not the same experience as GPU-backed or Apple Silicon local mode.

  • Short drafting, cleanup, and one-off private work can still be fine
  • Tool-heavy or multi-step tasks may drag noticeably
  • Best treated as lighter local work or a fallback option, not your fastest workflow

What does not change

The folder-first workflow, the app, the routing model, and the file outputs all stay the same.

  • Same workspace
  • Same channels
  • Same sciClaw personality and skills
Quick comparison

What changes when you switch on PHI Mode?

New users often assume everything changes. It does not. Only one part of the stack moves: the model runtime.
Part of the workflow | Cloud / default | PHI Mode
Your folders (workspace, files, outputs) | Same sciClaw workspace model | Same sciClaw workspace model
The model runtime (where inference actually happens) | Remote provider | Local Ollama + Qwen 3.5
Privacy boundary (where the prompt goes) | Leaves the machine | Stays on the machine
Best use case (what each mode is best at) | Highest-end reasoning and harder agent work | Private drafting, local routing, sensitive work
First response (what users notice first) | Usually immediate | Can be slower if the model is loading
FAQ

Questions new users usually ask first

These are the questions that matter before commands and configuration files matter.

Do I need to use PHI Mode for everything?

No. Most people should not. Keep cloud or default mode for ordinary work, then route only the private rooms to PHI.

What backend does local mode use?

Ollama. This build treats Ollama as the supported local backend for normal users.

Will it feel slower?

Sometimes, yes. Especially on CPU-only machines or on the first reply after a model loads. On CPU-only hardware, slower turns are normal and do not mean the setup is broken.

What models does it use?

The Qwen 3.5 family, usually 2B, 4B, or 9B depending on your hardware and preset.

Can I switch back to cloud later?

Yes. Switching back is immediate. Your local setup stays saved for the next time you need it.

How do I know my machine is ready?

Run setup, then run the local eval in the PHI tab. That tells you whether the machine is merely installed or actually usable.

Start simple

Set up local mode once. Route the private room. Keep moving.

That is the whole story for most new users. You do not need a new workflow. You need one local lane that you trust.
sciClaw app → PHI Mode → Setup → Eval → Routing → set one room to PHI