Summary
I'd like to discuss an optional integration path between ChatterBot and ClawMem as an external storage/memory backend.
This is not a request to add generic persistence from scratch. ChatterBot already has persistent storage adapters (SQL, MongoDB, Redis vector storage) and a clean `storage_adapter` extension point.
The request is specifically about whether ChatterBot should support or document a ClawMem-backed storage adapter for teams that want:
- durable conversation history across sessions
- auditable memory/workspace boundaries
- GitHub-compatible issue/comment-backed memory spaces
- optional shared memory across agents/users
Why this seems feasible
From the current codebase/docs:
- `ChatBot(..., storage_adapter=...)` already accepts a dot-notated import path for custom adapters.
- `StorageAdapter` defines a clear interface (`count`, `filter`, `create`, `create_many`, `update`, `remove`, `get_random`, `drop`).
- ChatterBot already supports multiple storage backends and recently expanded vector-based storage support.
So an initial integration path could likely be:
- implement a ClawMem-backed adapter externally
- optionally add docs/example usage in ChatterBot if maintainers think this is a good fit
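To make the external-adapter path concrete, a minimal skeleton might look like the sketch below. This is illustrative only: `FakeClawMemClient` is a hypothetical stand-in for whatever client ClawMem actually exposes, and a real implementation would subclass `chatterbot.storage.StorageAdapter` rather than stand alone.

```python
# Sketch of a ClawMem-backed storage adapter. In a real integration this
# class would subclass chatterbot.storage.StorageAdapter; here it stands
# alone so the shape of the interface is visible. FakeClawMemClient is a
# hypothetical stand-in for an actual ClawMem client.
import random


class FakeClawMemClient:
    """In-memory stand-in for a real ClawMem client (hypothetical API)."""

    def __init__(self):
        self.records = []


class ClawMemStorageAdapter:
    """Implements the StorageAdapter method names listed above."""

    def __init__(self, client=None, **kwargs):
        self.client = client or FakeClawMemClient()

    def count(self):
        return len(self.client.records)

    def filter(self, **kwargs):
        # Return every record whose fields match all given criteria.
        return [
            r for r in self.client.records
            if all(r.get(k) == v for k, v in kwargs.items())
        ]

    def create(self, **kwargs):
        self.client.records.append(dict(kwargs))
        return kwargs

    def create_many(self, statements):
        for statement in statements:
            self.create(**statement)

    def update(self, statement):
        # Replace an existing record with the same text, else create it.
        for i, record in enumerate(self.client.records):
            if record.get("text") == statement.get("text"):
                self.client.records[i] = dict(statement)
                return statement
        return self.create(**statement)

    def remove(self, statement_text):
        self.client.records = [
            r for r in self.client.records if r.get("text") != statement_text
        ]

    def get_random(self):
        return random.choice(self.client.records)

    def drop(self):
        self.client.records.clear()
```

The point of the sketch is only that the interface surface is small; the real work would be deciding how each method maps onto ClawMem's durability and workspace semantics.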
Important caveat
ChatterBot stores conversational knowledge as a statement graph (`text`, `in_response_to`, `conversation`, tags, etc.), while ClawMem is oriented around durable memory artifacts and GitHub-compatible repository primitives.
Because of that, the likely MVP is not "replace ChatterBot's internal model with ClawMem semantics".
Instead, it would be something narrower such as:
- an optional storage adapter that maps ChatterBot statements into a ClawMem-backed store, or
- an adapter/example package maintained outside core, with ChatterBot docs showing how to wire it in
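To make the "map statements into a ClawMem-backed store" option concrete, the translation could start as simply as the sketch below. Every field name on the ClawMem side (`title`, `body`, `thread`, `reply_to`, `labels`) is an assumption about an issue/comment-shaped record, not ClawMem's actual schema.

```python
def statement_to_artifact(statement: dict) -> dict:
    """Map ChatterBot statement fields onto a hypothetical ClawMem
    artifact. The ClawMem-side field names here are assumptions about
    an issue/comment-style record, not a documented schema."""
    return {
        "title": statement["text"][:80],          # short summary line
        "body": statement["text"],                # full statement text
        "thread": statement.get("conversation"),  # conversation -> thread
        "reply_to": statement.get("in_response_to"),
        "labels": list(statement.get("tags", [])),
    }
```

A real adapter would also need the inverse mapping, which is where the model mismatch noted above would show up first.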
What I want to clarify
Would maintainers be open to one of these directions?
1. External adapter + docs example
   - adapter lives outside core
   - ChatterBot only documents the integration pattern
2. First-party optional adapter
   - adapter lives in ChatterBot core as another storage backend
3. Not a good fit for ChatterBot
   - if the statement-graph model and ClawMem memory model are too different, it would be useful to know that early
Minimum viable integration scope
If this is worth pursuing, the smallest useful scope seems to be:
- preserve `conversation`, `text`, and `in_response_to`
- support `create_many`, `filter`, and `update` well enough for training + response selection
- keep the adapter optional and avoid changing existing default storage behavior
- prefer docs/example-first unless maintainers want first-party support
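Keeping the adapter optional leans on ChatterBot's existing dot-notated `storage_adapter` path. Path resolution works roughly like the sketch below, illustrated with a stdlib class since the ClawMem adapter itself is hypothetical.

```python
import importlib


def load_adapter(dotted_path: str):
    """Resolve a dot-notated import path of the form
    'package.module.ClassName' to the class it names -- the same
    pattern ChatterBot's storage_adapter parameter uses."""
    module_path, class_name = dotted_path.rsplit(".", 1)
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


# An external ClawMem adapter would be wired in the same way, e.g.
#   ChatBot("Bot", storage_adapter="clawmem_chatterbot.ClawMemStorageAdapter")
# (the package and class names in that string are hypothetical).
AdapterClass = load_adapter("collections.OrderedDict")  # stdlib example
```

Because resolution happens by import path at runtime, no code in ChatterBot core needs to change for the docs/example-first option.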
If this direction sounds reasonable, I can turn this into a concrete implementation brief.