Getting Started with SapixDB
No prior database experience required. This manual walks you through everything from installation to writing your first intelligent, self-auditing record.
What you'll need:
- A computer running macOS, Linux, or Windows
- Docker Desktop installed (free at docker.com)
- A terminal / command prompt
- Basic familiarity with JSON (key-value pairs like {"name": "Alice"})
1. What is SapixDB?
Think of SapixDB as a database that never forgets, never lies, and never needs you to restructure it. Every piece of data you store is:
- Signed — cryptographically stamped so you always know who created it
- Linked — chained to the previous record, creating an unbreakable history
- Permanent — nothing is ever deleted or overwritten; old versions stay queryable
- Schema-free — add new fields any time; no ALTER TABLE, no migration scripts
The analogy SapixDB uses is biology. Your data is stored as nucleotides (individual records) strung together into a strand (a chain of records for one entity). Just like DNA, the chain can be read from any point in history and cannot be tampered with.
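As a mental model, an append-only strand can be sketched in a few lines of Python. This is an illustration, not SapixDB's actual storage format: the hashing scheme here is an assumption. The key idea is that each record carries the hash of its predecessor, so altering any historical record breaks every link after it.

```python
import hashlib
import json

def nucleotide(data, prev_hash):
    """Build one record whose hash covers both its data and its predecessor's hash."""
    body = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return {"data": data, "prev_hash": prev_hash,
            "hash": hashlib.sha3_256(body.encode()).hexdigest()}

# Append two versions of Alice to a strand.
strand = []
strand.append(nucleotide({"name": "Alice", "role": "admin"}, prev_hash=None))
strand.append(nucleotide({"name": "Alice", "role": "superadmin"}, strand[-1]["hash"]))

# The second record points at the first record's hash, forming the chain.
assert strand[1]["prev_hash"] == strand[0]["hash"]
```

Because the hash covers the previous hash too, rewriting an old record without being detected would require recomputing every later record in the chain.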
2. Installation
SapixDB runs as a Docker container. You do not need to install anything special — Docker handles all the dependencies.
Create a project folder:

```shell
mkdir my-sapixdb && cd my-sapixdb
```

Create a file named docker-compose.yml and paste this content:

```yaml
services:
  sapixdb:
    image: sapixdb/agent:latest
    container_name: sapixdb
    restart: unless-stopped
    ports:
      - "7475:7475"
    volumes:
      - sapixdb_strand:/data/strand
      - sapixdb_graph:/data/graph
      - sapixdb_blobs:/data/blobs
    environment:
      SAPIX_AGENT_ID: my-first-agent
      SAPIX_STRAND_DIR: /data/strand
      SAPIX_GRAPH_DIR: /data/graph
      SAPIX_BLOB_DIR: /data/blobs
      SAPIX_PORT: 7475

volumes:
  sapixdb_strand:
  sapixdb_graph:
  sapixdb_blobs:
```

Start the container:

```shell
docker compose up -d
```
Verify it is running:

```shell
curl http://localhost:7475/v1/health
```

You should see:

```json
{"status":"ok","agent":"my-first-agent"}
```

3. First Steps — Understanding the URL structure

Every request you make to SapixDB follows this pattern:

```
http://localhost:7475/v1/{your-agent-id}/{collection}/{...action}
```

- 7475 — the port SapixDB listens on
- /v1/ — API version prefix
- my-first-agent — the name you gave in SAPIX_AGENT_ID
- users, orders, etc. — your collection name (like a table name)
There is no schema to define first. You just start writing data and SapixDB figures out the structure from your records.
4. Writing Data
Use a POST request to write a record. Let's store a user:
Write a single record
```shell
curl -X POST http://localhost:7475/v1/my-first-agent/strand/write \
  -H "Content-Type: application/json" \
  -d '{
    "collection": "users",
    "data": {
      "name": "Alice",
      "email": "[email protected]",
      "role": "admin"
    }
  }'
```

Response:

```json
{
  "id": "nuc_abc123",
  "hash": "sha3:e7f2a1...",
  "prev_hash": null,
  "timestamp": "2026-05-12T10:00:00Z",
  "collection": "users"
}
```

SapixDB returns an id (the unique nucleotide ID) and a hash (the cryptographic fingerprint of the record). Save the id — you'll use it to read this record back.
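If you're calling the write endpoint from application code rather than curl, the request is plain JSON over HTTP. Here is a minimal Python sketch that assembles the URL and body following the pattern above; the actual network call is commented out so the snippet stands alone, and it would need a running container plus an HTTP client such as the third-party requests package.

```python
import json

BASE = "http://localhost:7475/v1"

def write_request(agent, collection, data):
    """Build the URL and JSON body for a strand write, mirroring the curl example."""
    url = f"{BASE}/{agent}/strand/write"
    body = json.dumps({"collection": collection, "data": data})
    return url, body

url, body = write_request("my-first-agent", "users",
                          {"name": "Alice", "role": "admin"})
# To actually send it (requires `pip install requests` and a running container):
# import requests
# resp = requests.post(url, data=body, headers={"Content-Type": "application/json"})
print(url)  # http://localhost:7475/v1/my-first-agent/strand/write
```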
Write another record — SapixDB links them automatically
```shell
curl -X POST http://localhost:7475/v1/my-first-agent/strand/write \
  -H "Content-Type: application/json" \
  -d '{
    "collection": "users",
    "data": {
      "name": "Alice",
      "email": "[email protected]",
      "role": "superadmin"
    }
  }'
```

SapixDB does not overwrite the previous record. It appends a new nucleotide that links back to the old one. Alice now has a history: first she was admin, now she's superadmin. Both versions are preserved forever.

What about deletes? There is no destructive delete. To "delete" a record, you write a new version carrying a "deleted": true field — your application checks that field, but the full history is always preserved for audit purposes.

5. Reading Data
Get the latest version of a record
```shell
curl "http://localhost:7475/v1/my-first-agent/strand/query" \
  -H "Content-Type: application/json" \
  -d '{
    "collection": "users",
    "filter": { "name": "Alice" },
    "latest": true
  }'
```

Response:

```json
{
  "results": [
    {
      "id": "nuc_def456",
      "data": { "name": "Alice", "email": "[email protected]", "role": "superadmin" },
      "timestamp": "2026-05-12T10:05:00Z",
      "hash": "sha3:f8a3b2..."
    }
  ]
}
```

Get ALL versions (the full history)
```shell
curl "http://localhost:7475/v1/my-first-agent/strand/query" \
  -H "Content-Type: application/json" \
  -d '{
    "collection": "users",
    "filter": { "name": "Alice" },
    "latest": false
  }'
```

This returns every version ever written for Alice — in chronological order, each linked to the previous by hash. This is the strand.
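Once you have the full history, you can check the chain's integrity client-side: each record's prev_hash should equal the hash of the record before it. How SapixDB computes the hashes themselves is not covered here; this sketch only checks the linkage fields the API returns, using shortened example values.

```python
def verify_strand(results):
    """Return True if each record's prev_hash matches the preceding record's hash."""
    for prev, curr in zip(results, results[1:]):
        if curr["prev_hash"] != prev["hash"]:
            return False
    return True

history = [
    {"id": "nuc_abc123", "hash": "sha3:e7f2a1", "prev_hash": None},
    {"id": "nuc_def456", "hash": "sha3:f8a3b2", "prev_hash": "sha3:e7f2a1"},
]
assert verify_strand(history)

history[1]["prev_hash"] = "sha3:000000"   # simulate a broken link
assert not verify_strand(history)
```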
Read a record by its exact ID
```shell
curl http://localhost:7475/v1/my-first-agent/strand/nuc_abc123
```
6. Time Travel — Query the Past
Because SapixDB never overwrites data, you can ask "what did the database look like at 9 AM yesterday?" This is called time travel and it's built in — no extra configuration needed.
```shell
curl "http://localhost:7475/v1/my-first-agent/strand/query" \
  -H "Content-Type: application/json" \
  -d '{
    "collection": "users",
    "filter": { "name": "Alice" },
    "as_of": "2026-05-12T10:02:00Z"
  }'
```

The response will show Alice's record exactly as it was at 10:02 AM — the admin version, before the update to superadmin.
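Conceptually, as_of just means "the newest version written at or before this instant". With in-memory versions you could express it as below; the sketch relies on ISO-8601 UTC timestamps sorting correctly as strings, which keeps it dependency-free.

```python
def as_of(versions, timestamp):
    """Return the newest version whose timestamp is at or before the given instant.

    Assumes ISO-8601 UTC strings, which compare correctly lexicographically."""
    eligible = [v for v in versions if v["timestamp"] <= timestamp]
    return max(eligible, key=lambda v: v["timestamp"]) if eligible else None

alice = [
    {"timestamp": "2026-05-12T10:00:00Z", "data": {"role": "admin"}},
    {"timestamp": "2026-05-12T10:05:00Z", "data": {"role": "superadmin"}},
]
snapshot = as_of(alice, "2026-05-12T10:02:00Z")
assert snapshot["data"]["role"] == "admin"   # the pre-update version
```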
7. Agents & Data Ownership
In SapixDB, an agent is a named identity that owns a collection of data. When you set SAPIX_AGENT_ID=my-first-agent, you're declaring that agent's identity.
This matters because:
- Every nucleotide is signed with the agent's identity
- You can run multiple agents (e.g., one per microservice or AI process)
- AI agents like LLMs can own their own data strand — writing directly to SapixDB
- You always know which agent wrote which record
Ingest data from an AI agent or external process
SapixDB includes a special /ingest endpoint designed for automated pipelines — AI agents, webhooks, cron jobs, etc.:
```shell
curl -X POST http://localhost:7475/v1/my-first-agent/ingest \
  -H "Content-Type: application/json" \
  -d '{
    "collection": "decisions",
    "data": {
      "agent": "gpt-4o",
      "action": "approved_loan",
      "reason": "Credit score 780, DTI 28%",
      "confidence": 0.94
    }
  }'
```

Every AI decision is now permanently logged, signed, and auditable. You can always prove what the AI decided, when, and why.
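Since ingest records are append-only, a common pattern is to capture enough context at write time to reconstruct the decision later. A small Python helper assembling such a payload (the field names mirror the curl example above; this is an illustrative sketch, not a required schema):

```python
import json

def decision_payload(agent_name, action, reason, confidence):
    """Assemble an ingest body recording what an AI decided, and why."""
    return json.dumps({
        "collection": "decisions",
        "data": {
            "agent": agent_name,
            "action": action,
            "reason": reason,
            "confidence": confidence,
        },
    })

body = decision_payload("gpt-4o", "approved_loan",
                        "Credit score 780, DTI 28%", 0.94)
```

POSTing this body to the /ingest endpoint would record the decision on the agent's strand.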
8. Graph Relationships
SapixDB includes a built-in graph layer. You can create directed edges between any two records, representing relationships like "Alice manages Bob" or "Order #42 belongs to Customer #7".
Create a relationship
```shell
curl -X POST http://localhost:7475/v1/my-first-agent/graph/edge \
  -H "Content-Type: application/json" \
  -d '{
    "src": "nuc_abc123",
    "dst": "nuc_xyz789",
    "edge_type": "manages",
    "weight": 1.0
  }'
```

Traverse relationships

```shell
curl "http://localhost:7475/v1/my-first-agent/graph/traverse/nuc_abc123?depth=2&direction=outbound"
```
This returns all nodes reachable from Alice's record within 2 hops — useful for org charts, dependency trees, recommendation engines, and access control graphs.
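What the traverse endpoint computes can be pictured as a depth-bounded breadth-first search over the edge list. The sketch below runs over in-memory edges, not the server's graph engine, and the node ID nuc_qrs456 is made up for illustration.

```python
from collections import deque

def traverse(edges, start, depth):
    """Collect all nodes reachable from `start` via outbound edges within `depth` hops."""
    outbound = {}
    for e in edges:
        outbound.setdefault(e["src"], []).append(e["dst"])
    seen, queue = set(), deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == depth:
            continue   # don't expand past the depth limit
        for nxt in outbound.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return seen

edges = [
    {"src": "nuc_abc123", "dst": "nuc_xyz789", "edge_type": "manages"},
    {"src": "nuc_xyz789", "dst": "nuc_qrs456", "edge_type": "manages"},
]
assert traverse(edges, "nuc_abc123", depth=2) == {"nuc_xyz789", "nuc_qrs456"}
```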
9. HIPAA & SOX Compliance
SapixDB's architecture is compliance-by-default: every record is cryptographically signed, nothing is ever deleted or overwritten, and the full history stays queryable. Together, these properties give you a tamper-evident audit trail, which is the core of what regulations like HIPAA and SOX require, without any extra configuration.
10. Troubleshooting
"Port 7475 is already in use": Another process is using port 7475. Change the port mapping in docker-compose.yml to "7476:7475" and update your requests to use port 7476.

"curl: command not found": Install curl: on macOS run brew install curl, on Ubuntu run sudo apt install curl. Alternatively, use Postman (a free GUI tool) to send requests instead.

"Connection refused": The container may still be starting. Wait 10 seconds and try again. Check container status with docker compose ps. If it says "unhealthy", check the logs: docker compose logs sapixdb

"My query returns no results": Make sure the collection name in your read query exactly matches the one you used when writing. Collection names are case-sensitive. Also verify that the SAPIX_AGENT_ID in the URL path matches the one set in docker-compose.yml.

"How do I stop SapixDB?": Run docker compose down in your project folder. Your data is safely stored in Docker volumes and will be there when you start again with docker compose up -d.

"How do I back up my data?": Your data lives in the sapixdb_strand, sapixdb_graph, and sapixdb_blobs Docker volumes. Back them up with:

```shell
docker run --rm -v sapixdb_strand:/data -v $(pwd):/backup ubuntu tar czf /backup/strand-backup.tar.gz /data
```
11. Next Steps

Explore the full developer reference for advanced queries, distributed mode, agent graph traversal, and SaQL — our semantic query language.