Build production-ready LangGraph agents from natural language - instantly.
DynoGraph generates complete, executable LangGraph workflows from simple descriptions. No coding required. Just describe what you want, and get a working Python agent with web search, reasoning, and tool use.
```python
from dynograph.generation.builder import GraphBuilder
from dynograph.runtime import run_graph, get_state_schema

# Build an agent in seconds
builder = GraphBuilder()
graph, _ = builder.build_code("Create a research agent that searches the web")

# See what inputs it needs (no more guessing!)
schema = get_state_schema(graph)
print("Required inputs:", list(schema['annotations'].keys()))
# Output: ['query', 'max_results', ...]

# Run it with the right inputs
result = run_graph(graph, {"query": "AI developments in 2025"})
print(result)
```

- Describe what you want in plain English
- Get production-ready Python code with complete LangGraph implementation
- Run immediately - no manual coding or configuration
- Web Search (Tavily + DuckDuckGo fallback)
- Wikipedia search
- Code Generation (GPT-4 powered)
- Python REPL for code execution
- File Operations (read/write)
- HTTP Requests for API calls
- Calculator, Date/Time, JSON Parser, and more
- Vector similarity search finds relevant examples
- Few-shot learning generates high-quality code
- Portable .dgraph format for sharing
- Auto-metadata generation with LLM
- One-line execution: `run_graph(graph, input_data)`
- State inspection: See exactly what inputs are needed
- Save to Python file: Import and use like any module
- Tool customization: Override defaults with custom implementations
```shell
# Clone the repository
git clone https://github.com/H0rizon-AI/dynoGraph.git
cd dynoGraph

# Install with uv (recommended)
uv sync

# Or install with pip
pip install -e .
```

Create a `.env` file:
```
# LLM Provider (at least one required)
ANTHROPIC_API_KEY=sk-ant-xxxxx
OPENAI_API_KEY=sk-xxxxx

# Optional: Tavily for better web search (1,000 free searches/month)
TAVILY_API_KEY=tvly-xxxxx  # Get free key at https://app.tavily.com
```

```shell
# Set up the graph library with vector search
dynograph init

# Load starter examples
uv run python scripts/load_starter_examples.py
```

```python
from dynograph.generation.builder import GraphBuilder
from dynograph.runtime import run_graph, get_state_schema

# Initialize builder (automatically has 12 tools available!)
builder = GraphBuilder()

# Build a research agent
graph, examples = builder.build_code(
    "Create a research agent that searches the web and provides insights"
)

# Inspect what inputs it needs
schema = get_state_schema(graph)
print("Required inputs:", list(schema['annotations'].keys()))
print("Documentation:", schema['docstring'])

# Run it!
result = run_graph(graph, {"query": "quantum computing applications"})
print(result)
```

Build an agent that searches the web, analyzes results, and generates insights:
```python
from dynograph.generation.builder import GraphBuilder
from dynograph.runtime import run_graph, get_state_schema

# Build the agent
builder = GraphBuilder()
graph, _ = builder.build_code(
    "Create a research agent that searches the web, "
    "analyzes information, and discovers insights"
)

# See what inputs it needs (shows you the schema!)
schema = get_state_schema(graph)
print("Input fields:", list(schema['annotations'].keys()))
# Output: ['query', 'max_iterations', 'search_depth', 'searches', 'thoughts', 'findings', ...]

print("\nDocumentation:")
print(schema['docstring'])
# Output: Full docstring with required/optional/output fields explained

# Now run with the correct inputs
result = run_graph(graph, {"query": "AI sprite generation"})

# View results
print("Insights:", result.get('insights', ''))
print("Findings:", result.get('findings', []))
print("Report:", result.get('final_report', ''))
```

Simple Q&A agent with web search:
```python
builder = GraphBuilder()
graph, _ = builder.build_code(
    "Create an agent that answers questions using web search"
)

result = run_graph(graph, {"question": "What is LangGraph?"})
print(result['answer'])
```

Save generated code as a Python file:
```python
from dynograph.generation.builder import GraphBuilder
from dynograph.runtime import save_graph_code

# Build once
builder = GraphBuilder()
graph, _ = builder.build_code("Create a Q&A agent")

# Save to file
filepath = save_graph_code(graph, "my_qa_agent.py")

# Now import and use it anywhere:
from my_qa_agent import run, State, graph

# Check what inputs are needed
print(State.__annotations__)

# Run it
result = run({"query": "What is Python?"})

# Or use the graph directly
result = graph.invoke({"query": "What is Python?"})
```

Override default tools with your own:
```python
from dynograph.generation.builder import GraphBuilder
from dynograph.runtime import run_graph

class MyCustomSearchTool:
    def invoke(self, query):
        # Your custom search logic
        return f"Custom results for: {query}"

# Build graph
builder = GraphBuilder()
graph, _ = builder.build_code("Create a search agent")

# Run with custom tool
result = run_graph(
    graph,
    {"query": "test"},
    tools={"web_search": MyCustomSearchTool()}
)
```

DynoGraph includes 12 pre-registered tools that agents can use automatically:
- web_search: Tavily search (with API key) or DuckDuckGo (fallback)
- wikipedia: Search Wikipedia for information
- code: Generate code using GPT-4
- python_repl: Execute Python code and return output
- calculator: Perform mathematical calculations
- text_analyzer: Analyze text statistics (length, words, lines)
- file_reader: Read file contents
- file_writer: Write content to files
- http_request: Make HTTP GET/POST requests
- shell: Execute shell commands (use with caution)
- datetime: Get current date and time
- json_parser: Parse JSON strings
See docs/TOOLS.md for complete documentation.
User Query → Vector Search → Retrieve Examples → LLM Code Generation → Complete Python Code
DynoGraph uses few-shot learning:
- Your query is embedded and compared to library graphs
- Most similar examples are retrieved (top-k)
- LLM generates complete Python code using examples as templates
- Code includes all imports, state definitions, nodes, and edges
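The retrieval step above can be sketched in pure Python. This is illustrative only — the toy 3-dimensional vectors stand in for real embedding-model output, and the function names are hypothetical, not DynoGraph's API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_top_k(query_vec, library, k=3):
    """Return the k library graphs most similar to the query embedding."""
    scored = [(cosine_similarity(query_vec, g["embedding"]), g["name"]) for g in library]
    scored.sort(reverse=True)
    return [name for _, name in scored[:k]]

# Toy embeddings standing in for real model output
library = [
    {"name": "web_research_agent", "embedding": [0.9, 0.1, 0.0]},
    {"name": "qa_agent",           "embedding": [0.7, 0.6, 0.1]},
    {"name": "etl_pipeline",       "embedding": [0.0, 0.2, 0.9]},
]

query = [0.8, 0.2, 0.1]  # e.g. "Create a research agent that searches the web"
print(retrieve_top_k(query, library, k=2))
# → ['web_research_agent', 'qa_agent']
```

The retrieved examples are then passed to the LLM as few-shot templates for the final code-generation step.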
```python
# Generated code exports everything you need:
from generated_agent import State, graph, run

# Inspect state schema
print(State.__annotations__)  # {'query': str, 'answer': str, ...}

# Run directly
result = graph.invoke({"query": "test"})

# Or use helper
result = run({"query": "test"})
```

Every generated graph includes:
```python
# State definition with documentation
class State(TypedDict):
    """State schema with required/optional/output fields documented."""
    query: str   # Required input
    answer: str  # Output field
    # ...

# Node functions
def search_node(state: State, tools=None) -> dict:
    """Searches the web and returns results."""
    tool = get_tool('web_search')
    results = tool.invoke(state['query'])
    return {'results': results}

# Graph builder
def create_graph(tools=None):
    """Creates and compiles the graph."""
    workflow = StateGraph(State)
    workflow.add_node("search", search_node)
    # ... add edges ...
    return workflow.compile()

# Exports
graph = create_graph()  # Pre-compiled instance

def run(input_data, tools=None):
    """Simple helper to run the graph."""
    return graph.invoke(input_data)
```

- TAVILY_INTEGRATION.md - Tavily web search setup and usage
- TOOLS.md - Complete tool registry documentation
- HOW_TO_RUN.md - All methods for running generated graphs
- CODE_GENERATION_IMPROVEMENTS.md - Code generation pattern details
- DGRAPH_FORMAT.md - Portable .dgraph format specification
- RUNTIME_GUIDE.md - Runtime and execution guide
- CLAUDE.md - Project overview and development guidelines
- V0_SCOPE.md - v0 scope and architecture
Every generated graph exports its State class for easy inspection:
```python
from dynograph.runtime import get_state_schema

# Method 1: Extract from Graph object
schema = get_state_schema(graph)
print("Fields:", schema['annotations'])
print("Docs:", schema['docstring'])

# Method 2: Import from saved file
from my_agent import State
print(State.__annotations__)
print(State.__doc__)
```

```python
from dynograph.runtime import run_graph, GraphRunner, load_and_run

# Method 1: Simple one-liner
result = run_graph(graph, {"query": "test"})

# Method 2: For multiple runs (more efficient)
runner = GraphRunner(graph)
runner.load()  # Load once
result1 = runner.run({"query": "query1"})
result2 = runner.run({"query": "query2"})

# Method 3: Load from file
result = load_and_run("my_agent.py", {"query": "test"})

# Method 4: Import as module
from my_agent import run
result = run({"query": "test"})
```

```python
from dynograph.tools import get_tool
from dynograph.runtime import run_graph
from langchain.tools import Tool

# Get existing tool
web_search = get_tool('web_search')

# Create custom tool
def my_function(input: str) -> str:
    return f"Processed: {input}"

custom_tool = Tool(
    name="my_tool",
    func=my_function,
    description="My custom tool"
)

# Use in generated graph
result = run_graph(
    graph,
    {"query": "test"},
    tools={"my_tool": custom_tool}
)
```

```
# LLM Configuration
ANTHROPIC_API_KEY=sk-ant-xxxxx    # Claude models
OPENAI_API_KEY=sk-xxxxx           # GPT models
DYNOGRAPH_LLM_PROVIDER=anthropic  # or "openai"
DYNOGRAPH_MODEL=claude-sonnet-4-5-20250929

# Tool API Keys
TAVILY_API_KEY=tvly-xxxxx         # Web search (1,000 free/month)

# Optional Configuration
DYNOGRAPH_TEMPERATURE=0.4         # LLM temperature
DYNOGRAPH_TOP_K=3                 # Number of example graphs to retrieve
```

```python
from dynograph.generation.builder import GraphBuilder

builder = GraphBuilder(
    llm_provider="anthropic",  # or "openai"
    model="claude-sonnet-4-5-20250929",
    tool_registry=None  # Use default registry
)
```

Run the complete demo showing graph building and execution:

```shell
jupyter notebook demo_research_agent.ipynb
```

The notebook demonstrates:
- Building a research agent
- Inspecting the State schema
- Running with proper inputs
- Analyzing results
- Saving for reuse
```shell
# Complete workflow examples
uv run python examples/how_to_run_graphs.py

# Tool registry examples
uv run python examples/tool_usage.py
```

- Web research agents
- Competitive analysis
- Market research
- Academic research assistants
- Q&A chatbots
- Issue classification and routing
- Automated responses
- Escalation workflows
- ETL pipelines
- Data validation workflows
- Transform and analyze data
- Multi-step processing
- Code generation assistants
- API integration agents
- Testing workflows
- Documentation generators
Contributions are welcome! Please see CLAUDE.md for:
- Project architecture
- Development guidelines
- Code generation patterns
- Tool integration
Version: 0.1.0 Alpha
Status: Code generation architecture complete and working
What's Working:
- ✅ Python code generation from natural language
- ✅ 12 pre-registered tools (web search, Wikipedia, code gen, etc.)
- ✅ State schema inspection
- ✅ Simple runtime execution
- ✅ Tavily integration for high-quality web search
- ✅ Graph library with vector search
- ✅ Save to Python files
What's Next (v0.2):
- ⏳ Better example graphs in library
- ⏳ Improved search result analysis
- ⏳ Refine mode (conversational improvement)
- ⏳ Simulate mode (test with sample inputs)
- ⏳ Evaluate mode (quality scoring)
- LangGraph v1.0 Alpha: This project uses the alpha version of LangGraph 1.0
- API Changes: Expect breaking changes as LangGraph evolves
- Quality: Generated code quality depends on example graphs in library
- Testing: Always test generated agents before production use
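One lightweight way to test a generated agent before running it is a pre-flight check against the exported `State.__annotations__` shown earlier. The sketch below is hedged: the `State` class is a stand-in for a generated one (yours will differ), and `check_inputs` is a hypothetical helper, not part of DynoGraph:

```python
from typing import TypedDict

# Stand-in for a generated agent's exported State (yours will differ)
class State(TypedDict):
    query: str
    answer: str

def check_inputs(state_cls, input_data):
    """Flag input keys that the generated State doesn't declare."""
    known = set(state_cls.__annotations__)
    unknown = set(input_data) - known
    return sorted(unknown)

print(check_inputs(State, {"query": "test", "qurey_typo": "oops"}))
# → ['qurey_typo']
```

Catching a misspelled input key this way is cheaper than debugging a silent empty result after a full graph run.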
MIT
Built with: LangGraph v1.0 Alpha, Claude Sonnet 4.5, Tavily Search
🤖 Generated with Claude Code