A Streamlit-based chatbot application designed to work with Databricks serving endpoints, including support for multi-agent systems with custom response formats.
- 🤖 Multi-agent support - Works with supervisor-agent architectures
- 🔄 Flexible response formats - Supports standard ChatCompletions and custom output arrays
- 📊 Rich content display - Automatically renders markdown tables and formatted responses
- 🎨 Customizable branding - Easy logo and title configuration
- ⚡ Real-time feedback - Animated loading indicators during processing
- 🔧 Adaptive input handling - Automatically detects and uses correct API schema
- Access to a Databricks workspace
- A deployed serving endpoint (ChatCompletions-compatible or custom agent endpoint)
- Git installed on your local machine
```bash
git clone <your-repo-url>
cd streamlit-chatbot-app
```

Edit the `app.yaml` file and update the serving endpoint reference:
```yaml
env:
  - name: "SERVING_ENDPOINT"
    value: "agents_pedroz_genai_catalog-default-teste_multi"  # Change this to your serving endpoint name
```

Example:
```yaml
env:
  - name: "SERVING_ENDPOINT"
    value: "my-agent-endpoint"
```

- Replace `figures/brand_logo.png` with your company logo
- Edit `app.py` to update the title and branding text in the sidebar (lines 88-92)
- Navigate to Databricks Apps in your workspace
- Click "Create App"
- Configure the app:
- Name: Choose a name for your app (e.g., "Multi-Agent Chatbot")
- Source code: Select "Local files" or "Git repository"
- Source directory: Point to the directory where you cloned this repo
- Add Serving Endpoint Permission:
- In the app configuration, add a resource reference to your serving endpoint
- Grant `CAN_QUERY` permission to the app
- Click "Create" and wait for deployment
Once deployed, Databricks will provide a URL to access your chatbot application.
The `app.yaml` file configures the Streamlit application:

```yaml
command: [
  "streamlit",
  "run",
  "app.py"
]
env:
  - name: STREAMLIT_BROWSER_GATHER_USAGE_STATS
    value: "false"
  - name: "SERVING_ENDPOINT"
    value: "serving-endpoint"  # Your serving endpoint name here
```

This app supports three response formats:
- Standard ChatCompletions (`choices` with `message`)
- Databricks Agent Framework (`messages` array)
- Custom Multi-Agent Format (`output` array with messages and function calls)
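As an illustration, the format detection could be sketched as a small dispatcher like the one below. This is a hedged sketch, not the app's actual code (which lives in `model_serving_utils.py`); in particular, the `type` field used to separate messages from function calls in the custom format is an assumption.

```python
def extract_messages(response: dict) -> list[dict]:
    """Normalize a serving-endpoint response into a list of chat messages.

    Illustrative sketch covering the three formats listed above; the real
    parsing logic lives in model_serving_utils.py.
    """
    if "choices" in response:
        # Standard ChatCompletions: choices[*].message
        return [choice["message"] for choice in response["choices"]]
    if "messages" in response:
        # Databricks Agent Framework: a flat messages array
        return response["messages"]
    if "output" in response:
        # Custom multi-agent format: keep plain messages, skip function calls.
        # The "type" discriminator here is an assumption for illustration.
        return [item for item in response["output"] if item.get("type") == "message"]
    raise ValueError(f"Unrecognized response format, top-level keys: {list(response)}")
```

For example, `extract_messages({"choices": [{"message": {"role": "assistant", "content": "hi"}}]})` yields a one-element message list regardless of which endpoint produced it, so the rest of the app can render all formats the same way.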
```
streamlit-chatbot-app/
├── app.py                    # Main Streamlit application
├── model_serving_utils.py    # Endpoint query utilities
├── app.yaml                  # Databricks App configuration
├── requirements.txt          # Python dependencies
├── figures/
│   └── brand_logo.png        # Company logo
└── README.md                 # This file
```
Edit `app.py` line 88:

```python
st.markdown("<h2 style='text-align: center;'>Your App Name</h2>", unsafe_allow_html=True)
```

Edit `app.py` line 8:

```python
page_title="YourAppName",
```

Edit `app.py` line 126:

```python
max_tokens=50_000,  # Adjust as needed
```

Solution: Make sure the `SERVING_ENDPOINT` value in `app.yaml` matches your actual serving endpoint name exactly.
Solution: Ensure the app has CAN_QUERY permissions on the serving endpoint in the Databricks Apps configuration.
Solution: This app supports multiple formats. If you encounter issues, check model_serving_utils.py to see if your format needs to be added.
The app automatically:
- Filters out metadata messages (agent transfers, internal routing)
- Combines multiple agent responses (e.g., data + explanation)
- Renders markdown tables and formatted content
- Separates distinct responses with visual dividers
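The combine-and-separate behavior can be pictured with a short sketch (a hypothetical helper for illustration; the app's actual implementation may differ):

```python
def combine_responses(contents: list[str]) -> str:
    """Join multiple agent responses (e.g., a data table plus its
    explanation) into one markdown string with visual dividers."""
    # Drop empty fragments, then separate distinct responses with a
    # horizontal rule, which Streamlit renders as a divider.
    parts = [c.strip() for c in contents if c and c.strip()]
    return "\n\n---\n\n".join(parts)
```

Passing the combined string to a single `st.markdown` call lets tables and dividers render together in one chat bubble.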
The app filters out:
- Messages starting with `<name>...` (agent identifiers)
- Transfer notifications ("Transferring...", "Successfully transferred...")
To modify the filtering logic, edit `model_serving_utils.py` lines 80-82.
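A minimal sketch of what that filter might look like (function names here are hypothetical; the real implementation is in `model_serving_utils.py`):

```python
import re

def is_metadata_message(content: str) -> bool:
    """Return True for internal routing chatter that should be hidden."""
    text = content.strip()
    # Agent identifiers such as "<supervisor> ..." at the start of a message
    if re.match(r"^<[^>]+>", text):
        return True
    # Agent-transfer notifications
    if text.startswith("Transferring") or text.startswith("Successfully transferred"):
        return True
    return False

def filter_messages(contents: list[str]) -> list[str]:
    """Keep only user-facing content."""
    return [c for c in contents if not is_metadata_message(c)]
```

Extending the filter is then a matter of adding another predicate, e.g. for a new kind of routing notification your agents emit.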
For issues or questions, please refer to the Databricks Apps documentation.