Migration Guide
Migrate existing AI applications to OGuardAI with minimal code changes
You do not need to rewrite your application. OGuardAI is designed to slot into existing AI workflows with minimal changes. Most migrations require changing one line of code or adding a thin middleware layer.
Migration Paths
| Current Setup | Migration Path | Effort |
|---|---|---|
| OpenAI / Anthropic API | Change base_url to OGuardAI proxy | 1 line |
| LangChain agent | Add OGuardAI LangChain adapter | 3 lines |
| Vercel AI SDK | Add OGuardAI Vercel AI adapter | 3 lines |
| Python backend (FastAPI) | Add OGuardAI middleware | 5 lines |
| Node.js backend (Express) | Add OGuardAI middleware | 5 lines |
| Custom HTTP pipeline | Call transform before LLM, rehydrate after | 10 lines |
| RAG pipeline | Use 4-step API (transform, index, search, rehydrate) | Replace ingestion step |
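All of these paths rely on the same round trip: transform the text before the LLM sees it, rehydrate the reply afterwards. As a rough illustration of the token mechanics, here is a toy regex-based masker for emails only — `toy_transform` and `toy_rehydrate` are made up for this sketch and are not OGuardAI's actual detection engine, which handles many more entity types:

```python
import re

def toy_transform(text):
    """Mask email addresses with semantic tokens like {{email:e_001}} (toy stand-in)."""
    session_state = {}
    def repl(match):
        token = f"{{{{email:e_{len(session_state) + 1:03d}}}}}"
        session_state[token] = match.group(0)
        return token
    safe_text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", repl, text)
    return safe_text, session_state

def toy_rehydrate(text, session_state):
    """Restore the original value for each token found in the LLM reply."""
    for token, original in session_state.items():
        text = text.replace(token, original)
    return text

safe, state = toy_transform("Help customer Julia Schneider (julia@firma.de)")
# The LLM only ever sees the tokenized text:
# "Help customer Julia Schneider ({{email:e_001}})"
llm_reply = "Sure, I emailed {{email:e_001}} a summary."
print(toy_rehydrate(llm_reply, state))
# prints: Sure, I emailed julia@firma.de a summary.
```

The key property is that `session_state` never leaves your infrastructure, so the LLM provider sees only tokens.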
Migration 1: OpenAI Proxy (Lowest Effort)
If your application calls the OpenAI or Anthropic API directly, point it at OGuardAI's proxy instead. No other code changes are needed.
Before:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Help customer Julia Schneider (julia@firma.de)"}]
)
# PII sent directly to OpenAI
```

After:
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
    base_url="http://localhost:8081/v1"  # <-- only change
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Help customer Julia Schneider (julia@firma.de)"}]
)
# OpenAI sees: "Help customer `{{person:p_001}}` (`{{email:e_001}}`)"
# Response is automatically restored before reaching your code
```

Setup:

```shell
oguardai proxy --target https://api.openai.com --policy default --port 8081
```

Streaming, tool calls, and all message roles are handled transparently.
Migration 2: Python Middleware (FastAPI)
Wrap your LLM call with OGuardAI's transform and rehydrate in the route handler. Incoming text is tokenized before the LLM sees it; the LLM's reply is restored before you return it. The rest of your handler code does not change.
```python
# Add transform/rehydrate calls around your LLM interaction:
from guardai_sdk import OGuardAIClient

guardai = OGuardAIClient(base_url="http://localhost:3000")

@app.post("/chat")
async def chat(message: str):
    # Transform: mask PII before sending to LLM
    result = guardai.transform(message)
    llm_reply = call_llm(result.safe_text)
    # Rehydrate: restore PII in the response
    restored = guardai.rehydrate(llm_reply, session_state=result.session_state)
    return {"reply": restored.restored_text}
```

Migration 3: Node.js Middleware (Express)
Same pattern for Express: wrap the LLM call with transform and rehydrate. The rest of the route stays unchanged.
```javascript
// Add transform/rehydrate calls around your LLM interaction:
import { OGuardAIClient } from "@oguardai/sdk";

const guardai = new OGuardAIClient({ baseUrl: "http://localhost:3000" });

app.post("/chat", async (req, res) => {
  // Transform: mask PII before sending to LLM
  const result = await guardai.transform(req.body.message);
  const reply = await callLlm(result.safeText);
  // Rehydrate: restore PII in the response
  const restored = await guardai.rehydrate(reply, result.sessionState);
  res.json({ reply: restored.restoredText });
});
```

What Changes vs. What Stays the Same
| Aspect | Changes | Stays the Same |
|---|---|---|
| LLM calls | Routed through OGuardAI (proxy or middleware) | Same API, same models, same prompts |
| Application code | 1-5 lines added | Business logic, routes, database queries |
| LLM provider | Receives tokenized text instead of raw PII | Same provider, same billing, same models |
| Output quality | Preserved or improved (semantic tokens carry context) | Tone, style, language, format |
| Infrastructure | One new service (OGuardAI binary or container) | Everything else |
| Authentication | Add OGuardAI API key to config | Your existing auth unchanged |
Rollback and Gradual Rollout
Shadow Mode (Start Here)
Enable shadow mode to evaluate OGuardAI on real traffic without changing production behavior:
```yaml
shadow_mode: true
```

In shadow mode, OGuardAI runs the full pipeline but returns the original unmodified text alongside the protected version. Your application continues to use unprotected text while you compare outputs, verify detection accuracy, and build confidence.
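One way an application might consume shadow-mode results is a small selector that logs what protection would have changed while still serving the original text. This is a sketch under assumptions: the field names `original_text` and `safe_text` are illustrative, not confirmed OGuardAI response fields.

```python
def pick_text(result, shadow_mode):
    """In shadow mode, keep serving the original text but log what protection
    would have changed; once shadow_mode is off, serve the protected text."""
    if shadow_mode:
        if result["safe_text"] != result["original_text"]:
            print("shadow: detection would have masked PII in this request")
        return result["original_text"]
    return result["safe_text"]

# Hypothetical transform result while evaluating in shadow mode:
result = {
    "original_text": "Help customer Julia Schneider (julia@firma.de)",
    "safe_text": "Help customer {{person:p_001}} ({{email:e_001}})",
}
assert pick_text(result, shadow_mode=True) == result["original_text"]
assert pick_text(result, shadow_mode=False) == result["safe_text"]
```

Flipping `shadow_mode` then changes which text your application uses without touching the call sites.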
Recommended Rollout Sequence
- Pick one non-critical AI workflow and route it through OGuardAI in shadow mode.
- Run shadow mode for 1-2 weeks. Review detection reports and compare outputs.
- Set `shadow_mode: false` to activate protection. Monitor for regressions.
- Expand to additional workflows with per-workflow policies.
- Add the NER sidecar if you need person/company/location detection beyond regex.
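Per-workflow policies might be laid out like the following. The exact schema is an assumption for illustration — the `workflows` key and policy names are not confirmed OGuardAI syntax, so check the configuration reference for the real shape:

```yaml
# Hypothetical per-workflow policy layout -- verify against your
# OGuardAI config reference before using.
workflows:
  support_chat:
    policy: default
    shadow_mode: false
  internal_search:
    policy: strict
    shadow_mode: true   # still evaluating this workflow
```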
Emergency Rollback
- Proxy mode: Change `base_url` back to the LLM provider URL. One line.
- Middleware mode: Remove the middleware line. Redeploy.
- No data migration needed. OGuardAI stores no persistent state. Removing it leaves your system exactly as it was.
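One way to make the proxy-mode rollback a pure configuration change rather than a code edit is to read the base URL from the environment. This is a common deployment pattern, not an OGuardAI requirement, and the variable name `LLM_BASE_URL` is arbitrary:

```python
import os

def llm_base_url():
    """Route through the OGuardAI proxy when the override is set;
    otherwise fall back to the provider directly."""
    return os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1")

# Normal operation: point the override at the proxy.
os.environ["LLM_BASE_URL"] = "http://localhost:8081/v1"
assert llm_base_url() == "http://localhost:8081/v1"

# Emergency rollback: delete the override; traffic goes straight to the provider.
del os.environ["LLM_BASE_URL"]
assert llm_base_url() == "https://api.openai.com/v1"
```

With this in place, rollback is an environment change and a restart, with no redeploy of application code.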