# Installation

Canonical reference for every way to install and run OGuardAI.
## Which path is right for you?
| Goal | Best path | Section |
|---|---|---|
| Quick eval (30 seconds) | docker run | 1. Quick Start |
| Local development | Docker Compose minimal | 2. Docker Compose |
| Full local with NER | Docker Compose full | 2. Docker Compose |
| Production VM / bare metal | systemd + reverse proxy | 7. Production Linux Server |
| Production containers | Docker Compose HA | 2. Docker Compose |
| Kubernetes | Helm chart | 3. Kubernetes |
| Build from source | Rust toolchain | 4. Local Build |
| Python SDK integration | pip install | 5. Package Managers |
| TypeScript SDK integration | npm install | 5. Package Managers |
| AWS deployment | ECS Fargate | 8. Cloud Deployments |
| GCP deployment | Cloud Run | 8. Cloud Deployments |
| Azure deployment | Container Apps | 8. Cloud Deployments |
## 1. Quick Start
```bash
docker run -p 8080:8080 \
  -e GUARDAI_PORT=8080 \
  -e GUARDAI_SESSION_SECRET=change-me-32-byte-secret-value!! \
  ghcr.io/oronts/oronts-guardai/oguardai-server:latest
```

```bash
curl http://localhost:8080/v1/health
```

One binary. Built-in regex detectors. Sealed sessions. No Python, no Redis.
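The session secret must be at least 32 bytes (see section 10). One way to generate a strong value:

```bash
# Generate a random 32-byte secret, base64-encoded.
openssl rand -base64 32
```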
## 2. Docker Compose
All compose files live in `deploy/docker/`. Run them from the repository root.
### Minimal (server only, built-in detectors)
```bash
docker compose -f deploy/docker/docker-compose.minimal.yml up --build
```

Server on port 3000. Regex detection covers: email, phone, IBAN, SSN, IP, URL, credit card, passport, DOB, address, customer/order ID, health ID.
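A quick smoke test against the minimal stack, reusing the transform request from section 11:

```bash
# Send a transform request with a detectable email address.
curl -s -X POST http://localhost:3000/v1/transform \
  -H "Content-Type: application/json" \
  -d '{"input": "Contact julia@firma.de"}' | jq .
```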
### Full Stack (server + NER detector + Redis)
```bash
GUARDAI_SESSION_SECRET=your-production-secret \
  docker compose -f deploy/docker/docker-compose.yml up --build
```

| Service | Port | Description |
|---|---|---|
| server | 3000 | Rust API server |
| detector | 9090 | Python NER (spaCy/GLiNER) |
| redis | 6379 | Session backend, revocation store |
### High Availability (2 instances + nginx + Redis)
```bash
docker compose -f deploy/docker/docker-compose.ha.yml up --build
```

Two server instances behind nginx round-robin on port 8080. Shared Redis. No sticky sessions needed -- sealed state is client-side.
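Because sealed state travels with the client, you can transform on one request and rehydrate on a later one even if nginx routes them to different instances. A minimal check using the endpoints from section 11:

```bash
# Transform on whichever instance nginx picks...
RESPONSE=$(curl -s -X POST http://localhost:8080/v1/transform \
  -H "Content-Type: application/json" \
  -d '{"input": "Contact julia@firma.de"}')

# ...then rehydrate on a fresh request, possibly served by the other instance.
curl -s -X POST http://localhost:8080/v1/rehydrate \
  -H "Content-Type: application/json" \
  -d "{\"output\": $(echo "$RESPONSE" | jq '.safe_text'), \"session_state\": $(echo "$RESPONSE" | jq '.session_state')}" \
  | jq .
```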
## 3. Kubernetes (Helm)
Chart location: `deploy/helm/oguardai/`.
### Basic Install
```bash
helm install oguardai deploy/helm/oguardai \
  --set session.secret=your-32-byte-production-secret
```

### Key Values
```yaml
server:
  replicaCount: 2
  resources:
    requests: { cpu: 250m, memory: 256Mi }
    limits: { cpu: "1", memory: 512Mi }

session:
  backend: sealed    # sealed | redis
  secret: ""         # or use existingSecret / existingSecretKey
  ttlSeconds: 3600

auth:
  mode: dev          # dev | api_key | jwt

redis:
  enabled: false
  external:
    url: ""          # e.g., redis://my-redis:6379

detector:
  enabled: false
  resources:
    requests: { cpu: 250m, memory: 512Mi }
    limits: { cpu: "1", memory: 1Gi }

autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

ingress:
  enabled: false
  className: ""
  tls: []
```
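For anything beyond a couple of overrides, the usual Helm pattern is to keep these values in a file rather than stacking `--set` flags (standard Helm usage, not chart-specific; the file name is a placeholder):

```bash
# values.prod.yaml contains overrides in the schema shown above.
helm install oguardai deploy/helm/oguardai -f values.prod.yaml

# Render the manifests locally first to review what will be applied.
helm template oguardai deploy/helm/oguardai -f values.prod.yaml
```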
### With Ingress and TLS

```bash
helm install oguardai deploy/helm/oguardai \
  --set ingress.enabled=true \
  --set ingress.className=nginx \
  --set ingress.hosts[0].host=oguardai.example.com \
  --set ingress.hosts[0].paths[0].path=/ \
  --set ingress.hosts[0].paths[0].pathType=Prefix \
  --set ingress.tls[0].secretName=oguardai-tls \
  --set ingress.tls[0].hosts[0]=oguardai.example.com \
  --set ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-prod
```

### With Redis for Distributed Sessions
```bash
helm install oguardai deploy/helm/oguardai \
  --set session.backend=redis \
  --set redis.enabled=true \
  --set redis.external.url=redis://my-redis-cluster:6379
```

### Namespace and RBAC
```bash
kubectl create namespace oguardai
helm install oguardai deploy/helm/oguardai \
  --namespace oguardai \
  --set serviceAccount.create=true \
  --set session.existingSecret=oguardai-session-secret
```

The chart supports ServiceAccount, PodDisruptionBudget (auto-enabled when replicas > 1), and NetworkPolicy.
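The `existingSecret` flow expects a Kubernetes Secret to already exist in the namespace. A minimal sketch (the key name inside the secret is an assumption; check the chart's `existingSecretKey` default):

```bash
# Hypothetical: create the secret referenced by session.existingSecret above.
kubectl create secret generic oguardai-session-secret \
  --namespace oguardai \
  --from-literal=session-secret="$(openssl rand -base64 32)"
```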
## 4. Local Build (from source)
Prerequisites: Rust 1.88+ (required), Python 3.10+ (optional, NER), Node 20+ (optional, TS SDK).
### Server
```bash
cargo build --release -p oguardai-server
GUARDAI_SESSION_SECRET=dev-secret-at-least-32-chars!! ./target/release/oguardai-server
```

The server listens on port 3000. Use `--config oguardai.yaml` for custom configuration (see section 10).
### CLI
```bash
cargo build --release -p oguardai-cli
./target/release/oguardai --help
```

```bash
oguardai transform --input "Contact julia@firma.de"
oguardai detect --input "SSN: 123-45-6789"
oguardai run --config oguardai.yaml
oguardai config validate --config oguardai.yaml
```

### Proxy
```bash
cargo build --release -p oguardai-proxy
./target/release/oguardai-proxy --target https://api.openai.com --port 8081
```
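The proxy is transparent to clients; they only need their base URL repointed. A sketch for OpenAI SDK clients, which honor `OPENAI_BASE_URL` (assumption: the proxy forwards the upstream's `/v1/*` paths unchanged):

```bash
# Route an existing OpenAI SDK client through the local OGuardAI proxy.
export OPENAI_BASE_URL=http://localhost:8081/v1
export OPENAI_API_KEY=sk-your-key
python your_app.py   # traffic now flows client -> proxy -> OpenAI
```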
### Full Workspace

```bash
cargo build --release --workspace && cargo test --workspace
```

## 5. Install from Package Managers
Rust CLI (when published):
cargo install oguardai-cliPython SDK:
pip install oguardai-sdkTypeScript SDK:
npm install @oguardai/sdk # or: pnpm add @oguardai/sdk6. Binary Download (GitHub Releases)
```bash
# Linux x86_64
curl -fsSL https://github.com/oronts/oguardai/releases/latest/download/guardai-linux-x86_64.tar.gz \
  | tar xz -C /usr/local/bin

# macOS x86_64
curl -fsSL https://github.com/oronts/oguardai/releases/latest/download/guardai-macos-x86_64.tar.gz \
  | tar xz -C /usr/local/bin

# macOS ARM64 (Apple Silicon)
curl -fsSL https://github.com/oronts/oguardai/releases/latest/download/guardai-macos-aarch64.tar.gz \
  | tar xz -C /usr/local/bin
```
## 7. Production Linux Server (systemd)

Create `/etc/systemd/system/oguardai-server.service`:
```ini
[Unit]
Description=OGuardAI AI Data Protection Runtime
Documentation=https://github.com/oronts/oguardai
After=network.target
[Service]
Type=simple
User=guardai
Group=guardai
ExecStart=/usr/local/bin/oguardai-server --config /etc/guardai/oguardai.yaml
EnvironmentFile=-/etc/guardai/oguardai.env
Restart=on-failure
RestartSec=5
TimeoutStartSec=30
TimeoutStopSec=30
# Security hardening
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
ReadWritePaths=/var/lib/guardai
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
LockPersonality=yes
RestrictRealtime=yes
RestrictNamespaces=yes
MemoryDenyWriteExecute=yes
SystemCallArchitectures=native
[Install]
WantedBy=multi-user.target
```

Create `/etc/guardai/oguardai.env`:
```bash
GUARDAI_SESSION_SECRET=your-production-secret-at-least-32-bytes
RUST_LOG=guardai_server=info,tower_http=info
```

Then create the service user and directories, and start the service:

```bash
sudo useradd --system --no-create-home --shell /usr/sbin/nologin guardai
sudo mkdir -p /etc/guardai /var/lib/guardai
sudo chown guardai:guardai /var/lib/guardai
sudo systemctl daemon-reload && sudo systemctl enable --now oguardai-server
sudo journalctl -u oguardai-server -f
```
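Once the unit is running, confirm it answers locally before exposing it. The port depends on what your `oguardai.yaml` sets; the nginx example below assumes 8080:

```bash
# Fail loudly if the server is not healthy.
curl -fsS http://127.0.0.1:8080/v1/health
```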
### Reverse Proxy (nginx)

Put OGuardAI behind nginx with TLS termination. Save the following as `/etc/nginx/sites-available/guardai`:

```nginx
server {
    listen 443 ssl;
    server_name guardai.yourcompany.com;

    ssl_certificate     /etc/ssl/guardai.crt;
    ssl_certificate_key /etc/ssl/guardai.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # SSE streaming support
        proxy_buffering off;
        proxy_cache off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

```bash
sudo ln -s /etc/nginx/sites-available/guardai /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```

## 8. Cloud Deployments
### AWS ECS (Fargate)
Image: `ghcr.io/oronts/oronts-guardai/oguardai-server:latest`, container port 3000, health check `/v1/health`. Task config: FARGATE launch type, 1 vCPU, 2 GB memory. Set `GUARDAI_SESSION_SECRET` in the container environment.
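A minimal task-definition sketch matching those settings (a hypothetical example: the family name, the in-container `curl` health check, and the omitted role/log configuration are assumptions to adapt):

```bash
cat > taskdef.json <<'EOF'
{
  "family": "oguardai",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "2048",
  "containerDefinitions": [{
    "name": "oguardai-server",
    "image": "ghcr.io/oronts/oronts-guardai/oguardai-server:latest",
    "portMappings": [{ "containerPort": 3000 }],
    "environment": [
      { "name": "GUARDAI_SESSION_SECRET", "value": "your-production-secret" }
    ],
    "healthCheck": {
      "command": ["CMD-SHELL", "curl -f http://localhost:3000/v1/health || exit 1"],
      "interval": 30,
      "timeout": 5,
      "retries": 3
    }
  }]
}
EOF
# Most accounts also need an executionRoleArn (omitted here for brevity).
aws ecs register-task-definition --cli-input-json file://taskdef.json
```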
### Google Cloud Run

```bash
gcloud run deploy guardai \
  --image ghcr.io/oronts/oronts-guardai/oguardai-server:latest \
  --port 3000 \
  --set-env-vars GUARDAI_SESSION_SECRET=your-secret \
  --min-instances 1 --max-instances 10 \
  --cpu 1 --memory 512Mi --region us-central1
```

### Azure Container Apps
```bash
az containerapp create \
  --name guardai --resource-group guardai-rg \
  --image ghcr.io/oronts/oronts-guardai/oguardai-server:latest \
  --target-port 3000 --ingress external \
  --min-replicas 1 --max-replicas 10 \
  --cpu 1.0 --memory 2Gi \
  --env-vars GUARDAI_SESSION_SECRET=your-secret
```

## 9. Python NER Detector (optional)
Adds NLP-based entity recognition (person names, companies, locations, medical terms) beyond the built-in regex detectors.
Docker (included in the full compose stack):

```bash
docker compose -f deploy/docker/docker-compose.yml up detector
```

Local (spaCy):
```bash
cd apps/detector-py && pip install -e . && pip install -e ../../python/detector-core
python -m spacy download en_core_web_sm
uvicorn guardai_detector_service.main:app --host 0.0.0.0 --port 9090
```

Local (GLiNER, recommended for multilingual):
```bash
pip install "guardai-detector-core[gliner]"
NER_BACKEND=gliner uvicorn guardai_detector_service.main:app --port 9090
```

Point the server at the detector in `oguardai.yaml`:
```yaml
detector:
  advanced_url: http://localhost:9090
```

Or set `GUARDAI_DETECTOR_URL=http://localhost:9090` in the environment.
## 10. Configuration Reference
### oguardai.yaml
```yaml
server:
  host: "0.0.0.0"
  port: 3000

session:
  backend: sealed          # sealed | redis
  ttl_seconds: 3600

policy:
  default: default
  directory: /app/policies

auth:
  mode: dev                # dev | api_key | jwt

transform:
  context_strategy: full
  max_context_tokens: 4096

detector:
  advanced_url: ""         # Python NER URL (optional)
```

### Environment Variables
| Variable | Default | Description |
|---|---|---|
| `GUARDAI_SESSION_SECRET` | (required) | 32+ char secret for AES-256-GCM session encryption |
| `GUARDAI_HOST` | `0.0.0.0` | Server bind address |
| `GUARDAI_PORT` | `3000` | Server listen port |
| `GUARDAI_DETECTOR_URL` | (none) | Python NER service URL |
| `GUARDAI_REDIS_URL` | (none) | Redis URL for session backend |
| `GUARDAI_INSTANCE_ID` | (auto) | Instance identifier for HA deployments |
| `RUST_LOG` | `guardai_server=info` | Log level filter |
| `NER_BACKEND` | `gliner` | Python detector engine: `gliner`, `spacy`, `none` |
| `GLINER_MODEL` | `urchade/gliner_medium-v2.1` | GLiNER HuggingFace model name |
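Pulling these together, a plausible production environment file (all values are placeholders to adapt):

```bash
# Example production environment -- adjust every value for your deployment.
GUARDAI_SESSION_SECRET=your-production-secret-at-least-32-bytes
GUARDAI_PORT=3000
GUARDAI_REDIS_URL=redis://my-redis:6379      # only with the redis session backend
GUARDAI_DETECTOR_URL=http://detector:9090    # only if running the Python NER service
RUST_LOG=guardai_server=info,tower_http=info
```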
## 11. Verification
After any install method:
```bash
# Health check
curl http://localhost:3000/v1/health

# List capabilities
curl http://localhost:3000/v1/capabilities
```

Round-trip test:
```bash
# Transform
RESPONSE=$(curl -s -X POST http://localhost:3000/v1/transform \
  -H "Content-Type: application/json" \
  -d '{"input": "Contact Julia at julia@firma.de or call 555-0123."}')
echo "$RESPONSE" | jq .

# Rehydrate
SESSION_STATE=$(echo "$RESPONSE" | jq -r '.session_state')
curl -s -X POST http://localhost:3000/v1/rehydrate \
  -H "Content-Type: application/json" \
  -d "{\"output\": $(echo "$RESPONSE" | jq '.safe_text'), \"session_state\": \"$SESSION_STATE\"}" \
  | jq .
```

Port note: The port depends on your deployment method:
- Docker quick start (section 1): port 8080 (`GUARDAI_PORT=8080`)
- Docker Compose / local build / cloud deployments: port 3000 (the server default)

The examples above use port 3000. Adjust accordingly for your setup.