OGuardAI
Getting Started

Installation

Canonical reference for every way to install and run OGuardAI

Which path is right for you?

| Goal | Best path | Section |
|---|---|---|
| Quick eval (30 seconds) | docker run | 1. Quick Start |
| Local development | Docker Compose minimal | 2. Docker Compose |
| Full local with NER | Docker Compose full | 2. Docker Compose |
| Production VM / bare metal | systemd + reverse proxy | 7. Production Linux Server |
| Production containers | Docker Compose HA | 2. Docker Compose |
| Kubernetes | Helm chart | 3. Kubernetes |
| Build from source | Rust toolchain | 4. Local Build |
| Python SDK integration | pip install | 5. Package Managers |
| TypeScript SDK integration | npm install | 5. Package Managers |
| AWS deployment | ECS Fargate | 8. Cloud Deployments |
| GCP deployment | Cloud Run | 8. Cloud Deployments |
| Azure deployment | Container Apps | 8. Cloud Deployments |

1. Quick Start

docker run -p 8080:8080 \
  -e GUARDAI_PORT=8080 \
  -e GUARDAI_SESSION_SECRET=change-me-32-byte-secret-value!! \
  ghcr.io/oronts/oronts-guardai/oguardai-server:latest
curl http://localhost:8080/v1/health

One binary. Built-in regex detectors. Sealed sessions. No Python, no Redis.
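The placeholder secret above is fine for a throwaway eval, but any instance you keep running needs a real random value. One way to generate a 32-byte secret, assuming openssl is installed:

```shell
# Generate a random 32-byte secret (44 characters in base64)
# suitable for GUARDAI_SESSION_SECRET
SECRET=$(openssl rand -base64 32)
echo "$SECRET"
```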


2. Docker Compose

All compose files are in deploy/docker/. Run from the repository root.

Minimal (server only, built-in detectors)

docker compose -f deploy/docker/docker-compose.minimal.yml up --build

Server on port 3000. Regex detection covers: email, phone, IBAN, SSN, IP, URL, credit card, passport, DOB, address, customer/order ID, health ID.
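As a rough illustration of what pattern-based detection means, here is a simplified SSN-style match. This is an example pattern only, not OGuardAI's actual detector logic, which is more thorough:

```shell
# Simplified illustrative pattern; the built-in detectors handle
# validation and many more formats than this
echo "SSN: 123-45-6789" | grep -Eo '[0-9]{3}-[0-9]{2}-[0-9]{4}'
# prints: 123-45-6789
```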

Full Stack (server + NER detector + Redis)

GUARDAI_SESSION_SECRET=your-production-secret \
  docker compose -f deploy/docker/docker-compose.yml up --build

| Service | Port | Description |
|---|---|---|
| server | 3000 | Rust API server |
| detector | 9090 | Python NER (spaCy/GLiNER) |
| redis | 6379 | Session backend, revocation store |

High Availability (2 instances + nginx + Redis)

docker compose -f deploy/docker/docker-compose.ha.yml up --build

Two server instances behind nginx round-robin on port 8080, sharing one Redis. No sticky sessions are needed: sealed session state lives on the client.


3. Kubernetes (Helm)

Chart location: deploy/helm/oguardai/.

Basic Install

helm install oguardai deploy/helm/oguardai \
  --set session.secret=your-32-byte-production-secret

Key Values

server:
  replicaCount: 2
  resources:
    requests: { cpu: 250m, memory: 256Mi }
    limits:   { cpu: "1",  memory: 512Mi }
session:
  backend: sealed          # sealed | redis
  secret: ""               # or use existingSecret / existingSecretKey
  ttlSeconds: 3600
auth:
  mode: dev                # dev | api_key | jwt
redis:
  enabled: false
  external:
    url: ""                # e.g., redis://my-redis:6379
detector:
  enabled: false
  resources:
    requests: { cpu: 250m, memory: 512Mi }
    limits:   { cpu: "1",  memory: 1Gi }
autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
ingress:
  enabled: false
  className: ""
  tls: []

With Ingress and TLS

helm install oguardai deploy/helm/oguardai \
  --set ingress.enabled=true \
  --set ingress.className=nginx \
  --set ingress.hosts[0].host=oguardai.example.com \
  --set ingress.hosts[0].paths[0].path=/ \
  --set ingress.hosts[0].paths[0].pathType=Prefix \
  --set ingress.tls[0].secretName=oguardai-tls \
  --set ingress.tls[0].hosts[0]=oguardai.example.com \
  --set ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-prod

With Redis for Distributed Sessions

helm install oguardai deploy/helm/oguardai \
  --set session.backend=redis \
  --set redis.enabled=true \
  --set redis.external.url=redis://my-redis-cluster:6379

Namespace and RBAC

kubectl create namespace oguardai
helm install oguardai deploy/helm/oguardai \
  --namespace oguardai \
  --set serviceAccount.create=true \
  --set session.existingSecret=oguardai-session-secret

The chart supports ServiceAccount, PodDisruptionBudget (auto-enabled when replicas > 1), and NetworkPolicy.
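Rather than stacking --set flags, the same settings can live in a values file. A sketch of a production-leaning values file, using only the keys documented above (replica counts, secret name, and Redis URL are illustrative; adjust for your environment):

```yaml
# values-prod.yaml (illustrative; keys taken from the chart reference above)
server:
  replicaCount: 3
session:
  backend: redis
  existingSecret: oguardai-session-secret
auth:
  mode: api_key
redis:
  enabled: true
  external:
    url: redis://my-redis-cluster:6379
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
ingress:
  enabled: true
  className: nginx
```

Apply it with helm upgrade --install oguardai deploy/helm/oguardai -f values-prod.yaml.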


4. Local Build (from source)

Prerequisites: Rust 1.88+ (required), Python 3.10+ (optional, NER), Node 20+ (optional, TS SDK).

Server

cargo build --release -p oguardai-server
GUARDAI_SESSION_SECRET=dev-secret-at-least-32-chars!! ./target/release/oguardai-server

Listens on port 3000. Use --config oguardai.yaml for custom configuration.
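For example, a minimal oguardai.yaml that only overrides the listen port might look like this (a sketch assuming unspecified keys keep their defaults; the full key reference is in section 10):

```yaml
server:
  host: "0.0.0.0"
  port: 8080
```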

CLI

cargo build --release -p oguardai-cli
./target/release/oguardai --help
# The examples below assume the binary is on your PATH:
oguardai transform --input "Contact julia@firma.de"
oguardai detect --input "SSN: 123-45-6789"
oguardai run --config oguardai.yaml
oguardai config validate --config oguardai.yaml

Proxy

cargo build --release -p oguardai-proxy
./target/release/oguardai-proxy --target https://api.openai.com --port 8081
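The proxy is typically consumed by pointing an existing client at it instead of the upstream API. A sketch, assuming your client reads a standard base-URL environment variable; the exact variable name and path suffix depend on your client library:

```shell
# Route OpenAI-bound traffic through the local OGuardAI proxy
# (variable name varies by client; OPENAI_BASE_URL is common)
export OPENAI_BASE_URL="http://localhost:8081/v1"
```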

Full Workspace

cargo build --release --workspace && cargo test --workspace

5. Install from Package Managers

Rust CLI (when published):

cargo install oguardai-cli

Python SDK:

pip install oguardai-sdk

TypeScript SDK:

npm install @oguardai/sdk    # or: pnpm add @oguardai/sdk

6. Binary Download (GitHub Releases)

# Linux x86_64
curl -fsSL https://github.com/oronts/oguardai/releases/latest/download/guardai-linux-x86_64.tar.gz \
  | tar xz -C /usr/local/bin

# macOS x86_64
curl -fsSL https://github.com/oronts/oguardai/releases/latest/download/guardai-macos-x86_64.tar.gz \
  | tar xz -C /usr/local/bin

# macOS ARM64 (Apple Silicon)
curl -fsSL https://github.com/oronts/oguardai/releases/latest/download/guardai-macos-aarch64.tar.gz \
  | tar xz -C /usr/local/bin

7. Production Linux Server (systemd)

Create /etc/systemd/system/oguardai-server.service:

[Unit]
Description=OGuardAI AI Data Protection Runtime
Documentation=https://github.com/oronts/oguardai
After=network.target

[Service]
Type=simple
User=guardai
Group=guardai

ExecStart=/usr/local/bin/oguardai-server --config /etc/guardai/oguardai.yaml
EnvironmentFile=-/etc/guardai/oguardai.env

Restart=on-failure
RestartSec=5
TimeoutStartSec=30
TimeoutStopSec=30

# Security hardening
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
ReadWritePaths=/var/lib/guardai
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
LockPersonality=yes
RestrictRealtime=yes
RestrictNamespaces=yes
MemoryDenyWriteExecute=yes
SystemCallArchitectures=native

[Install]
WantedBy=multi-user.target

Create /etc/guardai/oguardai.env:

GUARDAI_SESSION_SECRET=your-production-secret-at-least-32-bytes
RUST_LOG=guardai_server=info,tower_http=info

Create the service user and directories, then enable the unit:

sudo useradd --system --no-create-home --shell /usr/sbin/nologin guardai
sudo mkdir -p /etc/guardai /var/lib/guardai
sudo chown guardai:guardai /var/lib/guardai
sudo systemctl daemon-reload && sudo systemctl enable --now oguardai-server
sudo journalctl -u oguardai-server -f

Reverse Proxy (nginx)

Put OGuardAI behind nginx with TLS termination:

server {
    listen 443 ssl;
    server_name guardai.yourcompany.com;

    ssl_certificate     /etc/ssl/guardai.crt;
    ssl_certificate_key /etc/ssl/guardai.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # SSE streaming support
        proxy_buffering off;
        proxy_cache off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
Save the config as /etc/nginx/sites-available/guardai, then enable it:

sudo ln -s /etc/nginx/sites-available/guardai /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

8. Cloud Deployments

AWS ECS (Fargate)

Image: ghcr.io/oronts/oronts-guardai/oguardai-server:latest, port 3000, health check /v1/health. Task config: FARGATE, 1 vCPU, 2 GB memory. Set GUARDAI_SESSION_SECRET in container environment.
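A sketch of the corresponding container-definition fragment for the task definition, using the values above (field names follow the standard ECS task-definition schema; the health check assumes curl is present in the image):

```json
{
  "name": "oguardai-server",
  "image": "ghcr.io/oronts/oronts-guardai/oguardai-server:latest",
  "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }],
  "environment": [
    { "name": "GUARDAI_SESSION_SECRET", "value": "your-production-secret" }
  ],
  "healthCheck": {
    "command": ["CMD-SHELL", "curl -f http://localhost:3000/v1/health || exit 1"],
    "interval": 30,
    "timeout": 5,
    "retries": 3
  }
}
```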

Google Cloud Run

gcloud run deploy guardai \
  --image ghcr.io/oronts/oronts-guardai/oguardai-server:latest \
  --port 3000 \
  --set-env-vars GUARDAI_SESSION_SECRET=your-secret \
  --min-instances 1 --max-instances 10 \
  --cpu 1 --memory 512Mi --region us-central1

Azure Container Apps

az containerapp create \
  --name guardai --resource-group guardai-rg \
  --image ghcr.io/oronts/oronts-guardai/oguardai-server:latest \
  --target-port 3000 --ingress external \
  --min-replicas 1 --max-replicas 10 \
  --cpu 1.0 --memory 2Gi \
  --env-vars GUARDAI_SESSION_SECRET=your-secret

9. Python NER Detector (optional)

Adds NLP entity recognition (person names, companies, locations, medical terms) beyond built-in regex.

Docker (included in full compose):

docker compose -f deploy/docker/docker-compose.yml up detector

Local (spaCy):

cd apps/detector-py && pip install -e . && pip install -e ../../python/detector-core
python -m spacy download en_core_web_sm
uvicorn guardai_detector_service.main:app --host 0.0.0.0 --port 9090

Local (GLiNER, recommended for multilingual):

pip install guardai-detector-core[gliner]
NER_BACKEND=gliner uvicorn guardai_detector_service.main:app --port 9090

Point the server at the detector in oguardai.yaml:

detector:
  advanced_url: http://localhost:9090

Or: GUARDAI_DETECTOR_URL=http://localhost:9090


10. Configuration Reference

oguardai.yaml

server:
  host: "0.0.0.0"
  port: 3000
session:
  backend: sealed            # sealed | redis
  ttl_seconds: 3600
policy:
  default: default
  directory: /app/policies
auth:
  mode: dev                  # dev | api_key | jwt
transform:
  context_strategy: full
  max_context_tokens: 4096
detector:
  advanced_url: ""           # Python NER URL (optional)

Environment Variables

| Variable | Default | Description |
|---|---|---|
| GUARDAI_SESSION_SECRET | (required) | 32+ char secret for AES-256-GCM session encryption |
| GUARDAI_HOST | 0.0.0.0 | Server bind address |
| GUARDAI_PORT | 3000 | Server listen port |
| GUARDAI_DETECTOR_URL | (none) | Python NER service URL |
| GUARDAI_REDIS_URL | (none) | Redis URL for session backend |
| GUARDAI_INSTANCE_ID | (auto) | Instance identifier for HA deployments |
| RUST_LOG | guardai_server=info | Log level filter |
| NER_BACKEND | gliner | Python detector engine: gliner, spacy, none |
| GLINER_MODEL | urchade/gliner_medium-v2.1 | GLiNER HuggingFace model name |

11. Verification

After any install method:

# Health check
curl http://localhost:3000/v1/health

# List capabilities
curl http://localhost:3000/v1/capabilities

Round-trip test:

# Transform
RESPONSE=$(curl -s -X POST http://localhost:3000/v1/transform \
  -H "Content-Type: application/json" \
  -d '{"input": "Contact Julia at julia@firma.de or call 555-0123."}')
echo "$RESPONSE" | jq .

# Rehydrate
SESSION_STATE=$(echo "$RESPONSE" | jq -r '.session_state')
curl -s -X POST http://localhost:3000/v1/rehydrate \
  -H "Content-Type: application/json" \
  -d "{\"output\": $(echo "$RESPONSE" | jq '.safe_text'), \"session_state\": \"$SESSION_STATE\"}" \
  | jq .
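
The inline quoting in the rehydrate call above is easy to get wrong in other shells; jq can build the payload directly from the transform response instead. A sketch using an illustrative, made-up response (real field values will differ):

```shell
# Illustrative transform response; real values come from /v1/transform
RESPONSE='{"safe_text":"Contact [PERSON_1] at [EMAIL_1].","session_state":"sealed.abc123"}'

# Build the rehydrate payload without hand-rolled string interpolation
PAYLOAD=$(echo "$RESPONSE" | jq '{output: .safe_text, session_state: .session_state}')
echo "$PAYLOAD"
```

The payload can then be sent with curl -s -X POST http://localhost:3000/v1/rehydrate -H "Content-Type: application/json" -d "$PAYLOAD".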

Port note: The port depends on your deployment method:

  • Docker quick start (section 1): port 8080 (set via GUARDAI_PORT=8080)
  • Docker Compose, local build, and cloud deployments: port 3000 (the server default)

The examples above use port 3000. Adjust accordingly for your setup.