diff --git a/.gitignore b/.gitignore
index 0e17121d..7d4f5082 100644
--- a/.gitignore
+++ b/.gitignore
@@ -22,7 +22,16 @@ docs/pgedge-vectorizer/index.md
docs/radar/index.md
docs/snowflake/index.md
docs/spock-v5/index.md
+docs/pgadmin-4/index.md
+docs/pgaudit/index.md
+docs/pgbackrest/index.md
+docs/pgbouncer/index.md
+docs/pgvector/index.md
+docs/postgis/index.md
+docs/postgrest/index.md
+docs/psycopg2/index.md
# Ansible vault password file
ansible/.pgedge-docs-vault-pass
.vscode/settings.json
+.superpowers/
diff --git a/docs/ai-toolkit/index.md b/docs/ai-toolkit/index.md
new file mode 100644
index 00000000..fde8c567
--- /dev/null
+++ b/docs/ai-toolkit/index.md
@@ -0,0 +1,184 @@
+# AI Toolkit
+
+The pgEdge AI Toolkit connects AI agents and LLMs to PostgreSQL through two
+independent capabilities: **secure database access** and
+**retrieval-augmented generation (RAG)**.
+
+The **pgEdge Postgres MCP Server** gives AI agents autonomous, structured
+access to your database through the Model Context Protocol — with built-in
+security, PostgreSQL-specific knowledge, and support for multiple LLM
+providers. It works standalone, requiring no other toolkit components.
+
+For applications that need to answer questions over a document corpus, the
+remaining components form a **complete RAG pipeline**: Docloader ingests
+documents, the Vectorizer chunks and embeds them, and the RAG Server answers
+questions over the resulting knowledge base.
+
+Both capabilities share **[pgVector](../pgvector/)** as a foundation — it
+provides the vector similarity search that powers the MCP Server's semantic
+search tools and the RAG pipeline's hybrid retrieval.
+
+## Components
+
+The pgEdge AI Toolkit consists of the following components.
+
+### pgEdge Components
+
+**[pgEdge Postgres MCP Server](../pgedge-postgres-mcp-server/)** gives AI
+agents secure, structured access to PostgreSQL through the Model Context
+Protocol. It exposes tools for schema inspection, SQL execution, similarity
+search, embedding generation, query plan analysis, and knowledgebase search.
+It supports Claude, OpenAI, and Ollama, with read-only defaults,
+authentication, TLS, and row-level security.
+
+**[pgEdge RAG Server](../pgedge-rag-server/)** provides an HTTP API for
+retrieval-augmented generation. It runs hybrid search combining pgVector
+cosine similarity with BM25 keyword ranking, fuses results using Reciprocal
+Rank Fusion, and sends assembled context to an LLM for completion. It
+supports multiple independent pipelines, streaming responses, and
+conversation history.
+
+**[pgEdge Docloader](../pgedge-docloader/)** is a CLI tool for loading
+documents into PostgreSQL from local files, glob patterns, or Git
+repositories. It accepts HTML, Markdown, reStructuredText, and SGML/DocBook,
+converting all content to Markdown with extracted metadata. It supports
+transactional loading and UPSERT mode for incremental updates.
+
+**[pgEdge Vectorizer](../pgedge-vectorizer/)** is a PostgreSQL extension
+that automatically chunks text and generates vector embeddings via background
+workers. Triggers detect inserts and updates on configured tables, with
+configurable chunking strategies and support for OpenAI, Voyage AI, and
+Ollama embedding providers.
+
+### Community Components
+
+**[pgVector](../pgvector/)** is an open-source PostgreSQL extension for
+vector similarity search. It adds a `vector` column type with IVFFlat and
+HNSW indexing. pgVector is a shared dependency across the toolkit: the MCP
+Server uses it for semantic search, the Vectorizer stores embeddings in
+pgVector columns, and the RAG Server queries them for retrieval.
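+
+As a concrete sketch of this foundation, pgVector adds a column type, index
+methods, and distance operators. The table and column names below are
+illustrative only, and the 3-dimension vectors keep the example short — real
+embedding models typically produce 1536 or more dimensions:
+
+```sql
+-- Enable the extension and create a table with an embedding column
+CREATE EXTENSION IF NOT EXISTS vector;
+
+CREATE TABLE chunks (
+    id        bigserial PRIMARY KEY,
+    content   text NOT NULL,
+    embedding vector(3)
+);
+
+-- HNSW index for approximate nearest-neighbor search by cosine distance
+CREATE INDEX ON chunks USING hnsw (embedding vector_cosine_ops);
+
+-- Retrieve the five most similar chunks (<=> is cosine distance)
+SELECT id, content
+FROM chunks
+ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector
+LIMIT 5;
+```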
+
+## Connecting AI Agents With the MCP Server
+
+Rather than giving an LLM raw database credentials — uncontrolled access to
+every table, no guardrails on query complexity, no visibility into what the
+model is doing — the [MCP Server](../pgedge-postgres-mcp-server/) acts as a
+controlled gateway with read-only defaults, authentication, TLS, and
+PostgreSQL row-level security.
+
+The MCP Server is purpose-built for PostgreSQL and ships with a **built-in
+PostgreSQL knowledgebase**. When an agent needs to understand a PostgreSQL
+feature, diagnose a configuration issue, or write correct syntax for an
+extension, it queries the knowledgebase directly rather than relying on the
+LLM's training data (which may be outdated or imprecise).
+
+The server offers two connection modes:
+
+- **stdio** — For desktop clients (Claude Desktop, Cursor, VS Code Copilot,
+ Windsurf) where the server runs as a local subprocess.
+- **HTTP + SSE** — For multi-user and remote deployments where the server
+ runs as a long-lived service. The built-in **Go CLI client** and **React
+ web chat interface** both connect via this mode.
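+
+For stdio mode, desktop clients launch the server as a local subprocess from
+their MCP configuration file. The shape below follows the standard MCP client
+configuration format; the binary path, arguments, and environment variable
+are placeholders, so consult the MCP Server documentation for the exact
+invocation:
+
+```json
+{
+  "mcpServers": {
+    "pgedge-postgres": {
+      "command": "/path/to/pgedge-postgres-mcp-server",
+      "args": ["--stdio"],
+      "env": {
+        "DATABASE_URL": "postgres://user:password@localhost:5432/mydb"
+      }
+    }
+  }
+}
+```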
+
+```mermaid
+flowchart LR
+ subgraph clients ["Clients"]
+ direction TB
+ CD[Claude Desktop]
+ CU[Cursor]
+ VS[VS Code Copilot]
+ WS[Windsurf]
+ CA[Custom Agents]
+ CLI[Go CLI]
+ WEB[Web Chat]
+ end
+
+ subgraph server ["pgEdge Postgres MCP Server"]
+ direction TB
+ STDIO[stdio]
+ HTTP[HTTP + SSE]
+ AUTH[Auth · TLS · RLS]
+ end
+
+ subgraph data [" "]
+ direction TB
+ DB[(PostgreSQL + pgVector)]
+ KB[(PostgreSQL Knowledgebase)]
+ end
+
+ CD & CU & VS & WS -->|stdio| STDIO
+ CA & CLI & WEB -->|HTTP| HTTP
+ server --> DB
+ server --> KB
+```
+
+### Security Model
+
+AI agents never interact with your database unguarded:
+
+- **Read-only by default** — All queries run inside read-only transactions
+ unless explicitly configured otherwise.
+- **Authentication** — Token-based and user-based authentication control
+ which agents can connect.
+- **TLS** — All HTTP connections can be encrypted in transit.
+- **Row-level security** — PostgreSQL's native RLS policies are respected,
+ so different agents or users see only the data they're authorized to
+ access.
+- **Defined tool surface** — Agents can only perform operations the MCP
+ Server exposes. There is no open-ended SQL access unless the administrator
+ enables it.
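+
+Row-level security is enforced by PostgreSQL itself rather than by the MCP
+layer, so existing policies apply to agent queries unchanged. A minimal
+illustration of the kind of policy the server respects (the table, policy,
+and setting names are hypothetical):
+
+```sql
+-- Each connection sees only the rows tagged for its tenant
+ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
+
+CREATE POLICY tenant_isolation ON orders
+    USING (tenant_id = current_setting('app.tenant_id')::int);
+```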
+
+## Building a RAG Pipeline
+
+```mermaid
+flowchart TB
+ subgraph Ingestion
+ direction LR
+ A[Source Documents] --> B[pgEdge Docloader]
+ end
+
+ subgraph db ["PostgreSQL + pgVector"]
+ direction LR
+ C[(Document Tables)] ~~~ D[(Chunk Tables with Embeddings)]
+ end
+
+ subgraph bottom [" "]
+ direction LR
+ E[pgEdge Vectorizer] ~~~ F[pgEdge RAG Server]
+ end
+
+ B -->|load & convert| C
+ C -->|trigger on insert/update| E
+ E -->|chunk + embed| D
+ D -->|hybrid search| F
+ F -->|context + LLM| H[AI Responses]
+```
+
+### Ingestion: Docloader → PostgreSQL
+
+**[Docloader](../pgedge-docloader/)** reads source content and loads it into
+a PostgreSQL table, converting documents to Markdown with extracted metadata.
+Loading is transactional, and UPSERT mode allows re-running the same load to
+pick up changes without duplicating rows. At this stage, the data is plain
+text — no vectors or chunking are involved yet.
+
+### Processing: Vectorizer + pgVector
+
+**[Vectorizer](../pgedge-vectorizer/)** watches configured tables for changes
+via triggers. When rows are inserted or updated, background workers chunk the
+text and generate embeddings, storing the results in dedicated chunk tables
+using **[pgVector](../pgvector/)** column types. The chunk tables are indexed
+for fast similarity search.
+
+### Serving: RAG Server
+
+Pointing an LLM directly at chunk tables is both a security risk and a
+retrieval-quality problem: it means unguarded data access, no keyword
+matching, duplicate passages wasting the context window, and embedding
+generation, token budgeting, and LLM orchestration all left to the
+application.
+
+**[RAG Server](../pgedge-rag-server/)** solves this by constraining access to
+pre-configured pipelines against specific tables (the LLM never generates
+SQL) and handling hybrid retrieval (vector + BM25), deduplication, and
+context assembly in a single layer.
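+
+The RAG Server performs this fusion internally. The query below is only a
+sketch of the Reciprocal Rank Fusion idea, using pgVector cosine distance for
+the vector leg and PostgreSQL full-text ranking as a stand-in for BM25 (which
+is not built into core PostgreSQL); the table and column names are
+hypothetical, and 60 is the conventional RRF smoothing constant:
+
+```sql
+WITH vec AS (
+    SELECT id, row_number() OVER (ORDER BY embedding <=> $1) AS rank
+    FROM chunks
+    ORDER BY embedding <=> $1
+    LIMIT 20
+),
+kw AS (
+    SELECT id, row_number() OVER (
+        ORDER BY ts_rank(to_tsvector('english', content),
+                         plainto_tsquery('english', $2)) DESC) AS rank
+    FROM chunks
+    WHERE to_tsvector('english', content) @@ plainto_tsquery('english', $2)
+    LIMIT 20
+)
+-- Fuse the two ranked lists: each list contributes 1 / (60 + rank)
+SELECT id,
+       COALESCE(1.0 / (60 + vec.rank), 0) +
+       COALESCE(1.0 / (60 + kw.rank), 0) AS rrf_score
+FROM vec FULL OUTER JOIN kw USING (id)
+ORDER BY rrf_score DESC
+LIMIT 5;
+```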
diff --git a/docs/ansible/index.md b/docs/ansible/index.md
new file mode 100644
index 00000000..b264d88c
--- /dev/null
+++ b/docs/ansible/index.md
@@ -0,0 +1,28 @@
+# Ansible
+
+pgEdge provides an Ansible collection for deploying and managing PostgreSQL
+clusters on VMs and bare-metal servers.
+
+## Overview
+
+The pgEdge Ansible collection includes roles for:
+
+- deploying pgEdge Enterprise Postgres.
+- configuring pgEdge Distributed Postgres clusters.
+- managing Spock replication setup.
+
+## Documentation
+
+For complete documentation, installation instructions, and sample playbooks,
+see the GitHub repository:
+
+**[pgEdge Ansible Collection on GitHub](https://github.com/pgEdge/pgedge-ansible)**
+
+## Related Resources
+
+- [Distributed CLI documentation](../platform/) - Using the Distributed CLI for
+ manual VM deployments.
+- [Control Plane documentation](../control-plane/) - Using the Control Plane
+ for declarative API-based cluster management.
+- [pgEdge Cloud documentation](../cloud/) - Using pgEdge Cloud for managed
+ cloud deployments.
diff --git a/docs/index.md b/docs/index.md
index cb0d73ac..dd63bdbb 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -4,52 +4,525 @@ hide:
- navigation
---
-# Welcome to pgEdge Documentation
+
+
+
+
+
+ Welcome to pgEdge Documentation
+
+ Enterprise-grade PostgreSQL for distributed data and agentic AI.
+
+
+ pgEdge delivers hardened, production-grade PostgreSQL with a complete AI toolkit and a seamless path from a single database through highly available, globally distributed deployments — self-hosted, in your cloud, or fully managed.
+
+ pgEdge is built by industry veterans with decades of PostgreSQL expertise. Founded in 2022 and headquartered in Northern Virginia, pgEdge serves prominent enterprises including Bertelsmann, Qube RT, European Parliament, and multiple U.S. government agencies.
+
- {# Process each category item - can be string or {title, url} object #}
{% for cat_item in category_items %}
- {% if cat_item is string %}
+ {% if cat_item.heading is defined %}
+ {# Non-clickable sub-heading #}
+
+ {{ cat_item.heading }}
+ {% elif cat_item.hr is defined %}
+ {# Horizontal rule divider #}
+
+ {% elif cat_item is string %}
{% set cat_title = cat_item %}
- {% set cat_url = none %}
+ {% set cat_url = "." %}
+