
Migration Guide

Migrate existing AI applications to OGuardAI with minimal code changes

You do not need to rewrite your application. OGuardAI is designed to slot into existing AI workflows with minimal changes. Most migrations require changing one line of code or adding a thin middleware layer.


Migration Paths

Current Setup              | Migration Path                                       | Effort
OpenAI / Anthropic API     | Change base_url to OGuardAI proxy                    | 1 line
LangChain agent            | Add OGuardAI LangChain adapter                       | 3 lines
Vercel AI SDK              | Add OGuardAI Vercel AI adapter                       | 3 lines
Python backend (FastAPI)   | Add OGuardAI middleware                              | 5 lines
Node.js backend (Express)  | Add OGuardAI middleware                              | 5 lines
Custom HTTP pipeline       | Call transform before LLM, rehydrate after           | 10 lines
RAG pipeline               | Use 4-step API (transform, index, search, rehydrate) | Replace ingestion step
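For the RAG row, the four steps can be sketched as follows. This uses the SDK's transform and rehydrate calls shown later in this guide; the `index` object and its `add`/`get`/`search` methods are placeholders for your own vector store, not OGuardAI API.

```python
# Sketch of the 4-step RAG flow: transform documents before indexing,
# search with a tokenized query, rehydrate results on the way out.

def ingest(guardai, index, documents):
    """Mask PII in each document before it enters the vector store."""
    states = {}
    for doc_id, text in documents.items():
        result = guardai.transform(text)
        index.add(doc_id, result.safe_text)    # only tokenized text is stored
        states[doc_id] = result.session_state  # keep state to restore later
    return states

def answer(guardai, index, question, states):
    """Query with tokenized text, then restore PII in the hits."""
    q = guardai.transform(question)
    hits = index.search(q.safe_text)
    return [
        guardai.rehydrate(index.get(doc_id),
                          session_state=states[doc_id]).restored_text
        for doc_id in hits
    ]
```

The LLM, the vector store, and any logs in between only ever see tokenized text; raw PII reappears at the final rehydrate step.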

Migration 1: OpenAI Proxy (Lowest Effort)

If your application calls the OpenAI or Anthropic API directly, point it at OGuardAI's proxy instead. No other code changes are needed.

Before:

from openai import OpenAI

client = OpenAI(api_key="sk-...")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Help customer Julia Schneider (julia@firma.de)"}]
)
# PII sent directly to OpenAI

After:

from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
    base_url="http://localhost:8081/v1"  # <-- only change
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Help customer Julia Schneider (julia@firma.de)"}]
)
# OpenAI sees: "Help customer `{{person:p_001}}` (`{{email:e_001}}`)"
# Response is automatically restored before reaching your code

Setup:

oguardai proxy --target https://api.openai.com --policy default --port 8081

Streaming, tool calls, and all message roles are handled transparently.
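One reason streaming needs explicit proxy support: a token such as {{person:p_001}} can be split across two response chunks, so naive per-chunk string replacement would miss it. The buffer-aware rehydrator below illustrates the problem; it is a minimal sketch, not OGuardAI's actual implementation.

```python
import re

# Matches complete tokens in the doc's format, e.g. {{person:p_001}}
TOKEN = re.compile(r"\{\{(\w+):(\w+)\}\}")

def _substitute(text, mapping):
    """Replace complete tokens using an id -> original-value map."""
    return TOKEN.sub(lambda m: mapping.get(m.group(2), m.group(0)), text)

def _split_safe(buf):
    """Split buf into (emit, hold): hold is a suffix that might be the
    start of a token whose remainder arrives in a later chunk."""
    cut = buf.rfind("{{")
    if cut != -1 and "}}" not in buf[cut:]:
        return buf[:cut], buf[cut:]
    if buf.endswith("{"):            # a lone "{" may become "{{"
        return buf[:-1], "{"
    return buf, ""

def stream_rehydrate(chunks, mapping):
    """Yield rehydrated text for a chunk stream, never splitting a token."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        emit, buf = _split_safe(buf)
        if emit:
            yield _substitute(emit, mapping)
    if buf:                          # flush whatever remains at stream end
        yield _substitute(buf, mapping)
```

The proxy does this on your behalf, which is why the streaming code in your application needs no changes.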


Migration 2: Python Middleware (FastAPI)

Wrap the LLM call in your FastAPI handlers with OGuardAI's transform and rehydrate: incoming text is tokenized before it reaches the LLM, and the response is restored before it reaches the user. Everything else in the handler stays unchanged.

# Add transform/rehydrate calls around your LLM interaction:
from fastapi import FastAPI
from guardai_sdk import OGuardAIClient

app = FastAPI()
guardai = OGuardAIClient(base_url="http://localhost:3000")

@app.post("/chat")
async def chat(message: str):
    # Transform: mask PII before sending to LLM
    result = guardai.transform(message)
    llm_reply = call_llm(result.safe_text)
    # Rehydrate: restore PII in the response
    restored = guardai.rehydrate(llm_reply, session_state=result.session_state)
    return {"reply": restored.restored_text}
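If several handlers need this pattern, the transform/rehydrate pair can be factored into a small wrapper. The `guarded` decorator below is a hypothetical helper built on the two SDK calls above; it is not part of the SDK.

```python
import functools

def guarded(guardai):
    """Wrap a text -> text LLM call so PII is masked on the way in
    and restored on the way out (hypothetical helper, not SDK API)."""
    def decorator(llm_fn):
        @functools.wraps(llm_fn)
        def wrapper(text):
            result = guardai.transform(text)      # mask PII
            reply = llm_fn(result.safe_text)      # LLM sees tokens only
            restored = guardai.rehydrate(
                reply, session_state=result.session_state
            )
            return restored.restored_text         # caller sees real text
        return wrapper
    return decorator
```

A handler then calls the wrapped function directly and never touches tokenized text itself.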

Migration 3: Node.js Middleware (Express)

The same pattern applies to Express: wrap the LLM call with transform and rehydrate; the rest of the route stays unchanged.

// Add transform/rehydrate calls around your LLM interaction:
import express from "express";
import { OGuardAIClient } from "@oguardai/sdk";

const app = express();
app.use(express.json()); // parse JSON bodies so req.body.message is available
const guardai = new OGuardAIClient({ baseUrl: "http://localhost:3000" });

app.post("/chat", async (req, res) => {
  // Transform: mask PII before sending to LLM
  const result = await guardai.transform(req.body.message);
  const reply = await callLlm(result.safeText);
  // Rehydrate: restore PII in the response
  const restored = await guardai.rehydrate(reply, result.sessionState);
  res.json({ reply: restored.restoredText });
});

What Changes vs. What Stays the Same

Aspect           | Changes                                                | Stays the Same
LLM calls        | Routed through OGuardAI (proxy or middleware)          | Same API, same models, same prompts
Application code | 1-5 lines added                                        | Business logic, routes, database queries
LLM provider     | Receives tokenized text instead of raw PII             | Same provider, same billing, same models
Output quality   | Preserved or improved (semantic tokens carry context)  | Tone, style, language, format
Infrastructure   | One new service (OGuardAI binary or container)         | Everything else
Authentication   | Add OGuardAI API key to config                         | Your existing auth unchanged

Rollback and Gradual Rollout

Shadow Mode (Start Here)

Enable shadow mode to evaluate OGuardAI on real traffic without changing production behavior:

shadow_mode: true

In shadow mode, OGuardAI runs the full pipeline but returns the original unmodified text alongside the protected version. Your application continues to use unprotected text while you compare outputs, verify detection accuracy, and build confidence.

  1. Pick one non-critical AI workflow and route it through OGuardAI in shadow mode.
  2. Run shadow mode for 1-2 weeks. Review detection reports and compare outputs.
  3. Set shadow_mode: false to activate protection. Monitor for regressions.
  4. Expand to additional workflows with per-workflow policies.
  5. Add the NER sidecar if you need person/company/location detection beyond regex.
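For the comparison in step 2, a character-level diff of the original text against the protected version shows exactly what would be masked. A standard-library sketch (how you obtain both texts in shadow mode depends on your integration):

```python
import difflib

def shadow_report(original, protected):
    """List (original_span, masked_span) pairs that protection would change."""
    changes = []
    matcher = difflib.SequenceMatcher(a=original, b=protected, autojunk=False)
    for op, a1, a2, b1, b2 in matcher.get_opcodes():
        if op != "equal":
            changes.append((original[a1:a2], protected[b1:b2]))
    return changes
```

Reviewing these pairs over a week of traffic gives a concrete picture of detection accuracy before you flip shadow mode off.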

Emergency Rollback

  • Proxy mode: Change base_url back to the LLM provider URL. One line.
  • Middleware mode: Remove the middleware line. Redeploy.
  • No data migration needed. OGuardAI stores no persistent state. Removing it leaves your system exactly as it was.
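For proxy mode, the one-line rollback can even be made a pure configuration change by reading the base URL from the environment. A sketch, with variable names that are suggestions rather than anything OGuardAI defines:

```python
import os

def llm_base_url():
    """Proxy URL by default; set OGUARDAI_DISABLED=1 to go direct.
    Both environment variable names are assumptions, not OGuardAI config."""
    if os.environ.get("OGUARDAI_DISABLED") == "1":
        return "https://api.openai.com/v1"   # straight to the provider
    return os.environ.get("OGUARDAI_PROXY_URL", "http://localhost:8081/v1")
```

Pass base_url=llm_base_url() when constructing the OpenAI client; flipping the environment variable and restarting then rolls back without a code deploy.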