PyPI • Documentation • Examples • Agent Memory Server • RedisVL
adk-redis provides Redis integrations for Google's Agent Development Kit (ADK). It implements ADK's `BaseMemoryService` and `BaseSessionService` interfaces, agent tools, and semantic caching using Redis Agent Memory Server and RedisVL.
| 🔌 ADK Services | 🔧 Agent Tools | ⚡ Semantic Caching |
|---|---|---|
| **Memory Service** Long-term memory via Agent Memory Server | **Memory Tools** LLM-controlled memory operations | **LLM Response Cache** Reduce latency & costs |
| Semantic search & auto-extraction | search, create, update, delete | Similarity-based cache lookup |
| Cross-session knowledge retrieval | Direct Agent Memory Server API | Configurable distance threshold |
| Recency-boosted search | Namespace & user isolation | TTL-based expiration |
| **Session Service** Working memory via Agent Memory Server | **Search Tools** RAG via RedisVL | **Tool Cache** Avoid redundant calls |
| Context window management | Vector, hybrid, text, range search | Cache tool execution results |
| Auto-summarization | Multiple vectorizers supported | Reduce API calls |
| Background memory promotion | Metadata filtering | Configurable thresholds |
```bash
pip install adk-redis
```

Install with optional features based on your use case:

```bash
# Memory & session services (Redis Agent Memory Server integration)
pip install adk-redis[memory]

# Search tools (RedisVL integration)
pip install adk-redis[search]

# All features
pip install adk-redis[all]
```

Verify the installation:

```bash
python -c "from adk_redis import __version__; print(__version__)"
```

For contributors or those who want the latest unreleased changes:
```bash
# Clone the repository
git clone https://github.com/redis-developer/adk-redis.git
cd adk-redis

# Install with uv (recommended for development)
pip install uv
uv sync --all-extras

# Or install directly from GitHub
pip install git+https://github.com/redis-developer/adk-redis.git@main
```

For memory/session services:
- Redis Agent Memory Server (port 8088)
- Redis 8.4+ or Redis Cloud (backend for Agent Memory Server)
For search tools:
- Redis 8.4+ or Redis Cloud with Search capability
Quick start:
Redis is required for all examples in this repository. Choose one of the following options:
Option A: Automated setup (recommended)
```bash
# Run from the repository root
./scripts/start-redis.sh
```

This script will:
- Check if Docker is installed and running
- Check if Redis is already running on port 6379
- Start Redis 8.4 in a Docker container with health checks
- Verify the Redis container is healthy and accepting connections
- Provide helpful commands for managing Redis
Option B: Manual setup
```bash
docker run -d --name redis -p 6379:6379 redis:8.4-alpine
```

Note: Redis 8.4 includes the Redis Query Engine (evolved from RediSearch) with native support for vector search, full-text search, and JSON operations. Docker will automatically download the image (~40MB) on first run.
Verify Redis is running:
```bash
# Check container status
docker ps | grep redis

# Test connection
docker exec redis redis-cli ping
# Should return: PONG

# Or if you have redis-cli installed locally
redis-cli -p 6379 ping
```

Common Redis commands:
```bash
# View logs
docker logs redis
docker logs -f redis  # Follow logs in real-time

# Stop Redis
docker stop redis

# Restart Redis
docker restart redis

# Remove Redis (stops and deletes container)
docker rm -f redis
```

Troubleshooting:
- Port 6379 already in use: Another process is using the port. Find it with `lsof -i :6379` or use a different port: `docker run -d --name redis -p 6380:6379 redis:8.4-alpine`
- Docker not running: Start Docker Desktop or the Docker daemon
- Permission denied: Run with `sudo` or add your user to the docker group
- Container won't start: Check logs with `docker logs redis`
With Redis running, start the Agent Memory Server (this example uses Gemini as the LLM provider):

```bash
docker run -d --name agent-memory-server -p 8088:8088 \
  -e REDIS_URL=redis://host.docker.internal:6379 \
  -e GEMINI_API_KEY=your-gemini-api-key \
  -e GENERATION_MODEL=gemini/gemini-2.0-flash \
  -e EMBEDDING_MODEL=gemini/text-embedding-004 \
  -e FAST_MODEL=gemini/gemini-2.0-flash \
  -e SLOW_MODEL=gemini/gemini-2.0-flash \
  -e EXTRACTION_DEBOUNCE_SECONDS=5 \
  redislabs/agent-memory-server:latest \
  agent-memory api --host 0.0.0.0 --port 8088 --task-backend=asyncio
```

Configuration Options:
- LLM Provider: Agent Memory Server uses LiteLLM and supports 100+ providers (OpenAI, Gemini, Anthropic, AWS Bedrock, Ollama, etc.). Set the appropriate environment variables for your provider (e.g., `GEMINI_API_KEY`, `GENERATION_MODEL=gemini/gemini-2.0-flash`). See the Agent Memory Server LLM Providers docs for details.
- Model Configuration: Set `GENERATION_MODEL`, `FAST_MODEL` (for quick tasks like extraction), and `SLOW_MODEL` (for complex tasks) to your preferred models. All default to OpenAI models if not specified.
- Memory Extraction Debounce: `EXTRACTION_DEBOUNCE_SECONDS` controls how long to wait before extracting memories from a conversation (default: 300 seconds). Lower values (e.g., 5) provide faster memory extraction, while higher values reduce API calls.
- Embedding Models: Agent Memory Server also uses LiteLLM for embeddings. For local/offline embeddings, use Ollama (e.g., `EMBEDDING_MODEL=ollama/nomic-embed-text`, `REDISVL_VECTOR_DIMENSIONS=768`). See the Embedding Providers docs for all options.
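For instance, the `docker run` command above could be adapted for local Ollama embeddings using the variables mentioned in the Embedding Providers docs. This is a sketch: the generation-model lines are placeholders, and your Ollama endpoint may need extra configuration — check the Agent Memory Server docs before relying on it:

```bash
# Sketch: Agent Memory Server with local Ollama embeddings.
# EMBEDDING_MODEL and REDISVL_VECTOR_DIMENSIONS come from the Embedding
# Providers docs; the generation model below is a placeholder --
# substitute the provider/model you actually use.
docker run -d --name agent-memory-server -p 8088:8088 \
  -e REDIS_URL=redis://host.docker.internal:6379 \
  -e EMBEDDING_MODEL=ollama/nomic-embed-text \
  -e REDISVL_VECTOR_DIMENSIONS=768 \
  -e GENERATION_MODEL=gemini/gemini-2.0-flash \
  -e GEMINI_API_KEY=your-gemini-api-key \
  redislabs/agent-memory-server:latest \
  agent-memory api --host 0.0.0.0 --port 8088 --task-backend=asyncio
```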
See detailed setup guides:
- Redis Setup Guide - All Redis deployment options
- Agent Memory Server Setup - Complete configuration
- Integration Guide - End-to-end setup with code examples
Uses both working memory (session-scoped) and long-term memory (persistent):
```python
from google.adk import Agent
from google.adk.runners import Runner

from adk_redis.memory import RedisLongTermMemoryService, RedisLongTermMemoryServiceConfig
from adk_redis.sessions import (
    RedisWorkingMemorySessionService,
    RedisWorkingMemorySessionServiceConfig,
)

# Configure session service (Tier 1: Working Memory)
session_config = RedisWorkingMemorySessionServiceConfig(
    api_base_url="http://localhost:8088",  # Agent Memory Server URL
    default_namespace="my_app",
    model_name="gpt-4o",  # Model for auto-summarization
    context_window_max=8000,  # Trigger summarization at this token count
)
session_service = RedisWorkingMemorySessionService(config=session_config)

# Configure memory service (Tier 2: Long-Term Memory)
memory_config = RedisLongTermMemoryServiceConfig(
    api_base_url="http://localhost:8088",
    default_namespace="my_app",
    extraction_strategy="discrete",  # Extract individual facts
    recency_boost=True,  # Prioritize recent memories in search
)
memory_service = RedisLongTermMemoryService(config=memory_config)

# Create agent
agent = Agent(
    name="memory_agent",
    model="gemini-2.0-flash",
    instruction="You are a helpful assistant with long-term memory.",
)

# Create runner with both services
runner = Runner(
    agent=agent,
    app_name="my_app",
    session_service=session_service,
    memory_service=memory_service,
)
```

How it works:
- Working Memory: Stores session messages, state, and handles auto-summarization
- Background Extraction: Automatically promotes important information to long-term memory
- Long-Term Memory: Provides semantic search across all sessions for relevant context
- Recency Boosting: Prioritizes recent memories while maintaining access to historical knowledge
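The recency-boosting idea can be pictured with a small sketch. This is an illustrative toy, not the Agent Memory Server's actual scoring formula: semantic similarity is blended with an exponential decay on memory age, so recent memories rank higher without hiding older knowledge.

```python
# Illustrative recency-boosted ranking -- a toy, NOT the server's real formula.
import math
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    similarity: float  # cosine similarity to the query, in [0, 1]
    age_days: float    # days since the memory was created

def boosted_score(m: Memory, half_life_days: float = 30.0) -> float:
    """Blend similarity with a recency factor that halves every half_life_days."""
    recency = 0.5 ** (m.age_days / half_life_days)
    return 0.8 * m.similarity + 0.2 * recency

memories = [
    Memory("prefers window seats", similarity=0.82, age_days=200),
    Memory("prefers aisle seats", similarity=0.80, age_days=2),
]
# The recent memory wins despite slightly lower raw similarity.
best = max(memories, key=boosted_score)
print(best.text)  # prefers aisle seats
```

The weights and half-life here are arbitrary; the point is only that ranking is a function of both similarity and age, not similarity alone.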
RAG with semantic search using RedisVL:
```python
from google.adk import Agent
from redisvl.index import SearchIndex
from redisvl.utils.vectorize import HFTextVectorizer

from adk_redis.tools import RedisVectorSearchTool, RedisVectorQueryConfig

# Create a vectorizer (HuggingFace, OpenAI, Cohere, Mistral, Voyage AI, etc.)
vectorizer = HFTextVectorizer(model="sentence-transformers/all-MiniLM-L6-v2")

# Connect to existing search index
index = SearchIndex.from_existing("products", redis_url="redis://localhost:6379")

# Create the search tool with custom name and description
search_tool = RedisVectorSearchTool(
    index=index,
    vectorizer=vectorizer,
    config=RedisVectorQueryConfig(
        vector_field_name="embedding",
        return_fields=["name", "description", "price"],
        num_results=5,
    ),
    # Customize the tool name and description for your domain
    name="search_product_catalog",
    description="Search to find relevant products in the product catalog by description semantic similarity",
)

# Use with an ADK agent
agent = Agent(
    name="search_agent",
    model="gemini-2.0-flash",
    instruction="Help users find products using semantic search.",
    tools=[search_tool],
)
```

Customizing Tool Prompts:
All search tools (`RedisVectorSearchTool`, `RedisHybridSearchTool`, `RedisTextSearchTool`, `RedisRangeSearchTool`) support custom `name` and `description` parameters to make them domain-specific:
```python
# Example: Medical knowledge base
medical_search = RedisVectorSearchTool(
    index=medical_index,
    vectorizer=vectorizer,
    name="search_medical_knowledge",
    description="Search medical literature and clinical guidelines for relevant information",
)

# Example: Customer support FAQ
faq_search = RedisTextSearchTool(
    index=faq_index,
    name="search_support_articles",
    description="Search customer support articles and FAQs by keywords",
)

# Example: Legal document search
legal_search = RedisHybridSearchTool(
    index=legal_index,
    vectorizer=vectorizer,
    name="search_legal_documents",
    description="Search legal documents using both semantic similarity and keyword matching",
)
```

Note: RedisVL supports many vectorizers including OpenAI, HuggingFace, Cohere, Mistral, Voyage AI, and more. See the RedisVL documentation for the full list.
Future Enhancement: We plan to add native support for ADK embeddings classes through a union type or wrapper, allowing seamless integration with ADK's embedding infrastructure alongside RedisVL vectorizers.
Implements ADK's `BaseMemoryService` interface for persistent agent memory:
| Feature | Description |
|---|---|
| Semantic Search | Vector-based similarity search across all sessions |
| Recency Boosting | Prioritize recent memories while maintaining historical access |
| Auto-Extraction | LLM-based extraction of facts, preferences, and episodic memories |
| Cross-Session Retrieval | Access knowledge from any previous conversation |
| Background Processing | Non-blocking memory promotion and indexing |
Implementation: `RedisLongTermMemoryService`
Implements ADK's `BaseSessionService` interface for conversation management:
| Feature | Description |
|---|---|
| Message Storage | Persist conversation messages and session state |
| Auto-Summarization | Automatic summarization when context window limits are exceeded |
| Memory Promotion | Trigger background extraction to long-term memory |
| State Management | Store and retrieve arbitrary session state |
| Token Tracking | Monitor context window usage |
Implementation: `RedisWorkingMemorySessionService`
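The auto-summarization trigger is easy to picture with a toy sketch (hypothetical helper names, not the service's real code): once the estimated token count crosses the configured limit, older messages are collapsed into a summary while the most recent turns are kept verbatim.

```python
# Toy sketch of context-window-triggered summarization.
# Helper names are hypothetical; the real service calls an LLM server-side.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def maybe_summarize(messages: list[str], context_window_max: int,
                    keep_recent: int = 2) -> list[str]:
    total = sum(estimate_tokens(m) for m in messages)
    if total <= context_window_max or len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real implementation would summarize `older` with an LLM;
    # a placeholder stands in here.
    summary = f"[summary of {len(older)} earlier messages]"
    return [summary] + recent

history = ["hello " * 100, "context " * 100, "latest question?", "latest answer."]
compacted = maybe_summarize(history, context_window_max=100)
print(len(compacted))  # 3: one summary plus the two most recent messages
```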
Four specialized search tools for different RAG use cases:
| Tool | Best For | Key Features |
|---|---|---|
| `RedisVectorSearchTool` | Semantic similarity | Vector embeddings, KNN search, metadata filtering |
| `RedisHybridSearchTool` | Combined search | Vector + text search, Redis 8.4+ native support, aggregation fallback |
| `RedisRangeSearchTool` | Threshold-based retrieval | Distance-based filtering, similarity radius |
| `RedisTextSearchTool` | Keyword search | Full-text search, no embeddings required |
All search tools support multiple vectorizers (OpenAI, HuggingFace, Cohere, Mistral, Voyage AI, etc.) and advanced filtering.
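The difference between KNN-style retrieval (`RedisVectorSearchTool`) and threshold-based retrieval (`RedisRangeSearchTool`) is worth spelling out. Here is a toy sketch over precomputed distances — the real tools execute these queries inside Redis via RedisVL:

```python
# Toy illustration of KNN vs. range selection semantics over
# precomputed cosine distances (lower distance = closer match).

docs = {"a": 0.12, "b": 0.35, "c": 0.41, "d": 0.90}  # doc -> distance to query

def knn(distances: dict[str, float], k: int) -> list[str]:
    """Top-k nearest: always returns k results, however far they are."""
    return sorted(distances, key=distances.get)[:k]

def range_query(distances: dict[str, float], radius: float) -> list[str]:
    """Range: only results within the distance threshold, however many."""
    return sorted((d for d, dist in distances.items() if dist <= radius),
                  key=distances.get)

print(knn(docs, k=3))          # ['a', 'b', 'c'] -- includes weaker matches
print(range_query(docs, 0.2))  # ['a'] -- only genuinely close matches
```

KNN guarantees result count but not quality; range guarantees quality but not count — which is why range search suits "only answer if something is actually relevant" workflows.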
Reduce latency and costs with similarity-based caching:
| Feature | Description |
|---|---|
| LLM Response Cache | Cache LLM responses and return similar cached results |
| Tool Result Cache | Cache tool execution results to avoid redundant calls |
| Similarity Threshold | Configurable distance threshold for cache hits |
| TTL Support | Time-based cache expiration |
| Multiple Vectorizers | Support for OpenAI, HuggingFace, local embeddings, etc. |
Implementations: `LLMResponseCache`, `ToolCache`
- Python 3.10, 3.11, 3.12, or 3.13
- Google ADK 1.0.0+
- For memory/session services: Redis Agent Memory Server
- For search tools: Redis 8.4+ or Redis Cloud with Search capability
Complete working examples with ADK web runner integration:
| Example | Description | Features |
|---|---|---|
| simple_redis_memory | Agent with two-tier memory architecture | Working memory, long-term memory, auto-summarization, semantic search |
| semantic_cache | Semantic caching for LLM responses | Vector-based cache, reduced latency, cost optimization, local embeddings |
| redis_search_tools | RAG with search tools | Vector search, hybrid search, range search, text search |
| travel_agent_memory_hybrid | Travel agent with framework-managed memory | Redis session + memory services, automatic memory extraction, web search, calendar export, itinerary planning |
| travel_agent_memory_tools | Travel agent with LLM-controlled memory | Memory tools only (search/create/update/delete), in-memory session, web search, calendar export, itinerary planning |
Both travel agent examples use Redis Agent Memory Server for long-term memory persistence. The difference is in how they integrate with ADK:
| Aspect | travel_agent_memory_hybrid | travel_agent_memory_tools |
|---|---|---|
| How to Run | `python main.py` (custom FastAPI) | `adk web .` (standard ADK CLI) |
| Session Service | `RedisWorkingMemorySessionService` (Redis-backed, auto-summarization) | ADK default (in-memory) |
| Memory Service | `RedisLongTermMemoryService` (ADK's `BaseMemoryService` interface) | Memory tools only (direct Agent Memory Server API calls) |
| Memory Extraction | `after_agent_callback` + framework-managed | `after_agent_callback` |
| Session Sync | Real-time (every message synced to Agent Memory Server) | End-of-turn (batch sync via `after_agent_callback`) |
| Auto-Summarization | Yes, mid-conversation (real-time sync triggers when context exceeded) | Yes, end-of-turn (batch sync triggers when context exceeded) |
| Best For | Full ADK service integration (`BaseSessionService` + `BaseMemoryService`) | Tool-based Agent Memory Server integration (no custom services) |
Each example includes:
- Complete runnable code
- ADK web runner integration
- Configuration examples
- Setup instructions
This project follows the Google Python Style Guide, matching the ADK-Python core project conventions.
```bash
# Clone the repository
git clone https://github.com/redis-developer/adk-redis.git
cd adk-redis

# Install development dependencies
make dev

# Run all checks (format, lint, type-check, test)
make check
```

Individual targets:

```bash
make format      # Format code with pyink and isort
make lint        # Run ruff linter
make type-check  # Run mypy type checker
make test        # Run pytest test suite
make coverage    # Generate coverage report
```

See CONTRIBUTING.md for coding style, type hints, testing, and PR guidelines.
Please help us by contributing PRs, opening GitHub issues for bugs or new feature ideas, improving documentation, or increasing test coverage. See the following steps for contributing:
- Open an issue for bugs or feature requests
- Read CONTRIBUTING.md and submit a pull request
- Improve documentation and examples
Apache 2.0 - See LICENSE for details.
- PyPI Package - Install with `pip install adk-redis`
- GitHub Repository - Source code and issue tracking
- Examples - Complete working examples with ADK web runner
- Contributing Guide - How to contribute to the project
- Redis Setup Guide - All Redis deployment options
- Agent Memory Server Setup - Complete configuration
- Integration Guide - End-to-end setup with code examples
- Google ADK - Agent Development Kit framework
- Redis Agent Memory Server - Memory layer for AI agents
- RedisVL - Redis Vector Library documentation
- Redis - Redis 8.4+ with Search, JSON, and vector capabilities
