sciClaw

Security

sciClaw runs on your computer. Not on ours. Not in anyone else's cloud. Here's what that actually means for your data.

It's a program, not a service

There's no account to create with us. No login. No server to connect to. sciClaw is software that runs on your Mac or Linux machine, same as any other program you've installed. We don't host anything, and we don't know you're running it.

That's a deliberate choice. Research data is sensitive. Institutional review boards exist for a reason. We didn't want to build something that asks you to trust a startup with your participants' information. So we didn't.

Your data stays in one folder

Everything sciClaw produces lives in ~/sciclaw. Conversations, documents, logs, memory. One folder on your machine that you own and control. You can open it in Finder, back it up to an encrypted drive, version it with git, or delete the whole thing if you want.

There's no hidden cache somewhere else on your system. No data stashed in a temp directory. It's all right there.
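Because everything lives in one directory, backing it up takes a few lines. Here's a minimal sketch in Python, assuming the default ~/sciclaw location; the function name and destination path are ours, not part of sciClaw:

```python
import tarfile
import time
from pathlib import Path

def backup_folder(src: Path, dest_dir: Path) -> Path:
    """Archive src into a timestamped .tar.gz under dest_dir."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest_dir / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    return archive

# Usage: back up ~/sciclaw to an external (ideally encrypted) drive.
# backup_folder(Path.home() / "sciclaw", Path("/Volumes/Backup"))
```

The same property is what makes the folder easy to version with git or wipe entirely: one path, no scavenger hunt.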

Nothing is exposed to the internet

sciClaw doesn't open any ports. It doesn't listen for incoming connections. It doesn't run a web server. When you send a message, it reaches out to your AI provider (OpenAI, Anthropic, whoever you chose) and to any tools you explicitly told it to use (like PubMed). That's it.

If you're behind a hospital firewall or a VPN, sciClaw works fine because it only makes outbound HTTPS requests. Nothing inbound. No one can connect to it from outside your machine.
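You don't have to take the no-listeners claim on faith; you can probe for open ports yourself. A quick sketch (the helper name and the ports in the usage comment are ours, not part of sciClaw):

```python
import socket

def is_port_open(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something on this machine accepts TCP connections on port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# With sciClaw running, probing common local service ports should turn up
# nothing that sciClaw opened:
# print([p for p in (3000, 8000, 8080) if is_port_open(p)])
```

On macOS or Linux you can get the same answer system-wide with a tool like `lsof -i -P` or `ss -tln`.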

Messages go through your private bot

When you chat via Telegram or Discord, the messages travel through a bot that you created and you control. It's registered to your account, with your token. Nobody else can talk to it unless you explicitly add them to the allow list.

The gateway (the part that connects to Telegram or Discord) runs on your machine too. It's not a relay service we operate. Your messages don't pass through our infrastructure at any point.
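The allow-list model is simple to reason about: a message is handled only if its sender appears on a list you wrote. A sketch of that check, with made-up identifiers (sciClaw's actual config keys and ID formats may differ):

```python
def is_allowed(sender_id: str, allow_list: list[str]) -> bool:
    """Accept a message only if the sender is explicitly listed.
    An empty allow list means nobody gets through, not everybody."""
    return sender_id in set(allow_list)

# Example: only these two (hypothetical) Telegram user IDs can talk to the bot.
# allow_list = ["1234567", "7654321"]
```

The important design choice is the default: unknown senders are rejected, so forgetting to configure the list fails closed rather than open.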

API keys stay local

Your credentials are stored in a config file on your computer (~/sciclaw/config.json). They're never transmitted to anyone except the provider they belong to. We can't see your keys because we never receive them.

If you use the OAuth device-code flow for OpenAI, the token exchange happens directly between your machine and OpenAI's servers. We're not in that loop.
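Since the keys sit in a plain file, file permissions are the main local safeguard. A sketch that flags a config file readable by anyone other than you, and tightens it; the function names are ours, and the path argument is whatever your install actually uses:

```python
import os
import stat
from pathlib import Path

def key_file_is_private(path: Path) -> bool:
    """True if the file is readable/writable by its owner only (mode 600)."""
    mode = stat.S_IMODE(path.stat().st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

def lock_down(path: Path) -> None:
    """Restrict the file to owner read/write, equivalent to `chmod 600`."""
    os.chmod(path, 0o600)
```

On a single-user laptop this matters less; on a shared lab machine it's the difference between your key and everyone's key.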

No telemetry

sciClaw sends nothing back to us. No usage data. No error reports. No analytics. No phone-home behavior of any kind. We genuinely don't know how many people are using it, what they're doing with it, or where they are. And that's fine by us.

The only analytics are on our website (sciclaw.dev), which uses privacy-respecting, cookieless analytics. The software itself is silent.

We audited the code we built on

sciClaw is a fork of PicoClaw, which itself is based on nanobot. Both are open-source projects built by people doing good work. But we're shipping software that researchers will use with sensitive data, so "trust the upstream" wasn't good enough.

We went through the inherited codebase line by line. We removed telemetry hooks. We replaced external server dependencies with local alternatives. We stripped out update mechanisms that phoned home. Not because we thought anyone had bad intentions, but because we wanted to know exactly what this binary does when it runs on your machine, and we wanted you to be able to verify that too.

Prompt injection is a real concern with any agent that takes user input and passes it to an LLM. We can't eliminate that risk entirely (nobody can right now), but we can control the surface area. Keeping the tool local, validating skills before they run, logging every tool call, and not routing traffic through third-party infrastructure all help reduce the places where something could go wrong.

Skills are validated before install

When you install a skill (the markdown files that teach sciClaw new capabilities), it goes through validation: size limits, binary rejection, frontmatter checks, and SHA-256 provenance logging. Catalog fetches use pinned commit refs so you get exactly what you asked for, not whatever happens to be on main at that moment.

Skills are plain text files you can read. There's no compiled code, no executables, no packages with hidden dependencies. Open any skill file in a text editor and you can see everything it does.
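Those checks are easy to picture in code. Here's a minimal sketch of the validation steps described above: a size limit, binary rejection, a frontmatter check, and a SHA-256 digest for provenance. The limit and function name are illustrative, not sciClaw's actual implementation:

```python
import hashlib
from pathlib import Path

MAX_SKILL_BYTES = 64 * 1024  # illustrative limit, not sciClaw's real one

def validate_skill(path: Path) -> str:
    """Run basic safety checks on a skill file; return its SHA-256 digest."""
    data = path.read_bytes()
    if len(data) > MAX_SKILL_BYTES:
        raise ValueError("skill file exceeds size limit")
    if b"\x00" in data:
        raise ValueError("binary content rejected")
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        raise ValueError("not valid UTF-8 text")
    if not text.startswith("---"):
        raise ValueError("missing YAML frontmatter")
    # Record this digest so you can later verify the file hasn't changed.
    return hashlib.sha256(data).hexdigest()
```

Pinning catalog fetches to a commit ref closes the other half of the loop: the digest tells you what you got, the pin tells you what you asked for.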

What about the AI provider?

This is worth being direct about. When you send a message through sciClaw, that message goes to the AI provider you chose (OpenAI, Anthropic, Google, etc.) using their API. That's how it works. The provider sees whatever you send in that conversation.

What you send is up to you. You control what goes in the prompt. If you're working with sensitive data, you can choose a provider whose data handling policies meet your requirements. Some providers offer zero-retention APIs. Some institutions have BAAs (business associate agreements) in place. sciClaw doesn't make that choice for you because it shouldn't.

The important thing is that your data goes directly from your machine to the provider you picked. We're not in the middle. There's no intermediary, no proxy, no aggregation layer.

Everything above is the default. You can change all of it.

This is the part we need to be honest about. Everything described on this page is how sciClaw ships out of the box: locked down, local, private. But none of it is hardcoded. sciClaw is fully configurable, and that means you can open it up as wide as you want.

The OpenClaw ecosystem includes skills and integrations that deliberately reach beyond your machine. Connect your Gmail account. Sync files with Dropbox or Google Drive. Stand up a web server. Expose an API to the internet. Hook into Slack workspaces, calendars, databases, whatever your workflow needs. These aren't bugs or workarounds. They're designed capabilities.

That flexibility is the whole point. A solo researcher running analyses on deidentified data needs a very different posture than a lab that wants their agent coordinating across five collaborators on three continents. sciClaw lets both of those people use the same tool.

But here's the thing: every integration you enable changes your security surface. Connect Gmail and now Google has access to whatever flows through that channel. Open a port and now your firewall configuration matters. Share your bot token and now someone else can talk to your agent. None of this is inherently wrong, but you should know you're doing it.

We're not going to pretend this tool can't be misconfigured. It absolutely can. If you install a third-party skill without reading it, connect services you don't understand, or share credentials carelessly, you can create real problems for yourself. sciClaw gives you the power to build exactly the setup you need. It also gives you the power to build a security nightmare if you're not paying attention.

This is a grown-up tool. It doesn't babysit you. It doesn't block you from making choices that might be unwise. It trusts that you'll think about what you're connecting, why, and what the consequences are. The defaults are conservative because most people should start there. What you do after that is up to you.

The short version

sciClaw runs entirely on your machine, keeps its data in one folder you control, opens no ports, talks only to the AI provider and tools you chose, and sends us nothing. The defaults are locked down; loosening them is your call, made deliberately, one integration at a time.

If you have questions about security or find something that doesn't match what we've described here, please open an issue or reach out on GitHub Discussions.