# MCP Conformance Tests

A GitHub Action for running Model Context Protocol (MCP) conformance tests against server implementations. This action helps ensure your MCP server implementation adheres to the protocol specification.
## Features

- ✅ Test multiple server implementations (Python, TypeScript, or both)
- 🔄 Support for fork PRs via workflow_run pattern
- 📊 Automatic PR comments with test results
- 🏷️ Badge generation for README
- 📈 Baseline comparison against main/canary branches
- 🎯 Configurable test types (server, client, or both)
## How It Works

This action operates in two modes to properly support fork PRs:
- Test Mode - Runs conformance tests and uploads results as artifacts
- Comment Mode - Downloads artifacts and posts results as PR comments
## Quick Start

Create `.github/workflows/conformance.yml`:
```yaml
name: MCP Conformance Tests

on:
  push:
    branches: [main, canary]
  pull_request:
    branches: [main, canary]

jobs:
  conformance-test:
    name: Run Conformance Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Set up your server environment
      - uses: actions/setup-node@v4
        with:
          node-version: '22'

      - name: Install dependencies
        run: npm install

      - name: Build server
        run: npm run build

      # Run conformance tests
      - name: Run conformance tests
        uses: mcp-use/mcp-conformance-action@v1
        with:
          mode: test
          servers: |
            [
              {
                "name": "my-server",
                "start-command": "npm start",
                "url": "http://localhost:3000/mcp"
              }
            ]
```

Create `.github/workflows/conformance-comment.yml`:
```yaml
name: Post Conformance Comment

on:
  workflow_run:
    workflows: ["MCP Conformance Tests"]
    types: [completed]

permissions:
  pull-requests: write

jobs:
  comment:
    name: Post Conformance Results
    runs-on: ubuntu-latest
    if: github.event.workflow_run.event == 'pull_request'
    steps:
      - name: Download results
        uses: actions/download-artifact@v4
        with:
          name: conformance-results
          path: conformance-results
          github-token: ${{ secrets.GITHUB_TOKEN }}
          run-id: ${{ github.event.workflow_run.id }}

      - name: Post comment
        uses: mcp-use/mcp-conformance-action@v1
        with:
          mode: comment
          github-token: ${{ secrets.GITHUB_TOKEN }}
          comment-mode: update
```

## Inputs

### Test Mode

| Input | Description | Required | Default |
|---|---|---|---|
| `mode` | Action mode: `test` or `comment` | No | `test` |
| `servers` | JSON array of server configurations | Yes (test mode) | - |
| `test-type` | Type of tests: `server`, `client`, or `both` | No | `server` |
| `conformance-version` | Version of `@modelcontextprotocol/conformance` | No | `latest` |
| `show-summary` | Show results in Actions summary | No | `true` |
| `artifact-name` | Name for results artifact | No | `conformance-results` |
| `badge-gist-id` | Gist ID for badge updates | No | - |
| `badge-gist-token` | Token for updating badge gist | No | - |
### Comment Mode

| Input | Description | Required | Default |
|---|---|---|---|
| `mode` | Action mode: `test` or `comment` | No | `test` |
| `github-token` | GitHub token for API access | Yes (comment mode) | - |
| `comment-mode` | Comment behavior: `create`, `update`, or `none` | No | `update` |
| `include-baseline-comparison` | Compare against baseline branches | No | `true` |
| `baseline-branches` | JSON array of branches for comparison | No | `["main", "canary"]` |
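For example, a comment-mode step that compares against a custom set of baseline branches might look like this (the `release` branch name here is illustrative, and the JSON-array inputs are passed as strings):

```yaml
- uses: mcp-use/mcp-conformance-action@v1
  with:
    mode: comment
    github-token: ${{ secrets.GITHUB_TOKEN }}
    comment-mode: update
    include-baseline-comparison: true
    baseline-branches: '["main", "release"]'
```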
## Server Configuration

The `servers` input accepts a JSON array of server objects with the following structure:
```json
{
  "name": "server-name",
  "setup-commands": ["optional", "setup", "commands"],
  "start-command": "command to start server",
  "url": "http://localhost:3000/mcp",
  "working-directory": "optional/working/directory"
}
```

| Field | Description | Required |
|---|---|---|
| `name` | Display name for the server | Yes |
| `start-command` | Command to start the server | Yes |
| `url` | URL where the MCP server is accessible | Yes |
| `setup-commands` | Array of commands to run before starting server | No |
| `working-directory` | Directory to run commands in | No |
### Multiple Servers

```yaml
- name: Run conformance tests
  uses: mcp-use/mcp-conformance-action@v1
  with:
    mode: test
    servers: |
      [
        {
          "name": "python",
          "setup-commands": [
            "pip install -e .",
            "python -m pytest --setup-only"
          ],
          "start-command": "python -m myserver --port 8000",
          "url": "http://localhost:8000/mcp",
          "working-directory": "python-server"
        },
        {
          "name": "typescript",
          "setup-commands": ["npm install", "npm run build"],
          "start-command": "npm start",
          "url": "http://localhost:3000/mcp",
          "working-directory": "ts-server"
        }
      ]
```

## Outputs

| Output | Description |
|---|---|
| `results` | JSON string containing test results for each server |
| `all-passed` | Boolean indicating if all tests passed |
```yaml
- name: Run conformance tests
  id: conformance
  uses: mcp-use/mcp-conformance-action@v1
  with:
    mode: test
    servers: '...'

- name: Check results
  if: steps.conformance.outputs.all-passed == 'false'
  run: echo "Some tests failed!"
```

## Badges

To display conformance badges in your README:
- Create a GitHub Gist to store badge data
- Generate a Personal Access Token with `gist` scope
- Add the gist ID and token as repository secrets
- Configure the action:
```yaml
- uses: mcp-use/mcp-conformance-action@v1
  with:
    mode: test
    servers: '...'
    badge-gist-id: ${{ secrets.CONFORMANCE_GIST_ID }}
    badge-gist-token: ${{ secrets.GIST_SECRET }}
```

- Add badge to your README:
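The badge snippet itself isn't reproduced here; a typical shields.io endpoint badge pointing at the gist would look something like the following, where the user, gist ID, and JSON filename are placeholders rather than values defined by this action:

```markdown
![MCP Conformance](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/<user>/<gist-id>/raw/<badge-file>.json)
```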
The badge will automatically update on pushes to main/canary branches.
## Comment Modes

### update (default)

Updates the same comment on each run. This keeps PR conversations clean:

```yaml
comment-mode: update
```

### create

Creates a new comment for each run. Useful for tracking progress over time:

```yaml
comment-mode: create
```

### none

Disables PR comments completely:

```yaml
comment-mode: none
```

## Fork PR Support

This action properly supports PRs from forks using the workflow_run pattern:
- The test workflow runs with read-only permissions on the fork's code
- Results are uploaded as artifacts
- A separate workflow with write permissions downloads artifacts and posts comments
- Repository maintainers can review and approve workflow runs for first-time contributors
This ensures security while still providing automated test feedback.
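The security model above boils down to a permission split between the two workflows — a condensed sketch showing only the trigger and permission lines (matching the full examples earlier):

```yaml
# conformance.yml -- runs the fork's untrusted code,
# so it relies on the default read-only token for fork PRs
on:
  pull_request:

# conformance-comment.yml -- runs trusted code from the base repo
on:
  workflow_run:
    workflows: ["MCP Conformance Tests"]
    types: [completed]
permissions:
  pull-requests: write   # needed to post the PR comment
```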
## Advanced Usage

Test different servers based on workflow inputs:
```yaml
on:
  workflow_dispatch:
    inputs:
      server:
        type: choice
        options: [all, python, typescript]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build server list
        id: servers
        run: |
          if [ "${{ inputs.server }}" = "all" ]; then
            echo 'servers=[{"name":"python",...},{"name":"typescript",...}]' >> $GITHUB_OUTPUT
          elif [ "${{ inputs.server }}" = "python" ]; then
            echo 'servers=[{"name":"python",...}]' >> $GITHUB_OUTPUT
          else
            echo 'servers=[{"name":"typescript",...}]' >> $GITHUB_OUTPUT
          fi

      - uses: mcp-use/mcp-conformance-action@v1
        with:
          mode: test
          servers: ${{ steps.servers.outputs.servers }}
```

### Client Testing

Test MCP client implementations:
```yaml
- uses: mcp-use/mcp-conformance-action@v1
  with:
    mode: test
    test-type: client
    servers: |
      [
        {
          "name": "my-client",
          "start-command": "npm run start:client",
          "url": "http://localhost:3000"
        }
      ]
```

## Troubleshooting

If your server doesn't start within 5 seconds, you may need to add a longer delay or health check:
```json
{
  "setup-commands": [
    "npm start &",
    "sleep 10",
    "curl --retry 5 --retry-delay 1 http://localhost:3000/health"
  ],
  "start-command": "true",
  "url": "http://localhost:3000/mcp"
}
```

If the comment workflow can't find artifacts:
- Ensure the test workflow completed successfully
- Check that the artifact name matches in both workflows
- Verify the workflow_run trigger is configured correctly
For fork PRs, ensure:
- The comment workflow has `pull-requests: write` permission
- The workflow_run trigger is used (not direct pull_request)
- The test workflow uploads artifacts successfully
## Contributing

Contributions are welcome! Please open an issue or PR on GitHub.
## License

MIT License - see LICENSE for details.