Add MCP Python client and runnable examples #63
Conversation
What exactly is the point of having an "MCP client"? Normally, ChatGPT or Claude or some other LLM would be the "MCP client". I don't understand the point of this, unless you wanted to target some other LLM system that wasn't capable of working with MCP on its own ... but when I read your example, I don't get the sense it's a shim to some other system... Have you tested the cogserver with any LLMs? I know it works with Claude; I use that heavily. I'm less clear about how compatible it is with other LLMs ...
The intent isn't to replace Claude (or any MCP-capable LLM) as the client. This adds a minimal reference/harness to exercise the MCP handshake and core tools from Python, so developers can smoke-test the server, debug tool args, or use MCP from environments where the LLM doesn't natively speak MCP (e.g., custom orchestrators, offline agents, CI).
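As a sketch of what such a smoke test exercises: the MCP handshake reduces to a few JSON-RPC 2.0 messages. The `protocolVersion` string and `clientInfo` values below are assumptions for illustration, not values taken from the actual `examples/mcp/mcp_client.py`:

```python
import json

def initialize_request(req_id=1):
    # First handshake message: the client announces its protocol version
    # and capabilities. "2024-11-05" is an assumed MCP version string.
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "smoke-test", "version": "0.1"},
        },
    }

def initialized_notification():
    # Sent after the server answers initialize; notifications carry no id.
    return {"jsonrpc": "2.0", "method": "notifications/initialized"}

def tools_list_request(req_id=2):
    # Ask the server which tools it exposes.
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/list"}

if __name__ == "__main__":
    for msg in (initialize_request(), initialized_notification(),
                tools_list_request()):
        print(json.dumps(msg))
```

A harness only needs to POST these bodies in order and check that each reply parses as JSON-RPC; that is the entire "does the server still speak MCP" smoke test.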
Please excuse me; I'm going to push back some more, and I would like to explain, so that my pushback doesn't feel like a personal offense. So here's my logic: I just spent the last three months removing many thousands of lines of code from this project and related projects; I lost count, maybe 5KLOC or 10KLOC of removals. This represents ten years of accumulated cruft, junk, useless widgets, utilities, duplication of existing code, half-finished, abandoned projects and "neat ideas" that were just plain-old badly designed. I'm not done yet; there's more on my cleanup list. So I'm not in the mood to add code that looks like a "neat idea" but doesn't have any obvious usefulness. To put it differently: even examples create a maintenance burden: they have to be checked, tested semi-regularly, updated as needed. Users notice and complain when they don't work. I want to reduce my maintenance burden as much as possible. So:
There's been a lot of instability in the last few months: many of the old archaic users, uses and apps have moldered and died; a new app is coming online, and I'm trying to get everything refreshed and modernized for those folks. So there's some churn. There's still a meta-question: what do you hope to accomplish here? This, to me, is a far more interesting question than sinking down into the guts of the system-as-currently-implemented.
I understand the maintenance burden concern. I’m happy to re-scope so this doesn’t add surface area. Two options:
My goal is to improve reliability for LLM/AtomSpace flows; I'm open to whichever path best reduces your maintenance load. Let me know which direction you prefer.
What are you trying to accomplish? You said you want to "improve reliability for LLM/AtomSpace flows", but you have not reported any reliability issues. Did something crash? Did something not work? What's unreliable? Again, rather than getting lost in the mire, I would like to know what you are trying to do. Making PRs or wiki changes is not a "goal"; it's a mundane by-product, a side-effect of pursuing grander goals.
Dear Prof, could you list the areas where I could work on reasoning and explainability in agentic systems within the OpenCog framework?
Do you have access to Claude Code or ChatGPT or any other system that has direct access to your file system or computer? These normally have some monthly subscription fee; they are not cheap. But maybe you can get access to one via the university computing center, or some other program for students interested in computing. Set it up in a container -- LXC or Docker, so that it does not wreck your machine. Use it for a while.

You will notice that it forgets, does not do what you ask it to, cuts corners, says things to please you, instead of "doing the right thing". You can try to fix this with prompts: "please review this text before starting a new project". This works, but only sort of. You can try to ask it to update: "when I ask you to remember something, please add it to your list". You can ask it to reason: "When I ask you to review the possibilities carefully, I want you to review what you have stored in memory, and deduce that this approach was already tried earlier this morning, so we don't have to do it again". All of this won't work, of course, but you will learn the failure modes.

Then you can say something like "using your MCP interfaces, I want you to represent these logical structures in Atomese, and store them in the AtomSpace, in a format that you can access and retrieve later". And of course it can do that, because it has direct access to the AtomSpace via MCP, and also because old OpenCog documentation is in its training set, so it already knows some OpenCog/Atomese basics. So you can try to get it to do this. It will of course fail to work the way you want it to ... and again, you can think about why it is so difficult to attach an LLM (with an API that uses strings of English words) to a symbolic reasoning system (which has an API of strings written in scheme/python, not English, and those strings correspond to trees, graphs and other abstract structures).
These exercises illuminate some of the present-day difficulties of attaching symbolic AI to LLMs. If you are clever, then you can map a path to cut through that jungle of issues that prevent progress. I will happily discuss the theoretical issues, and possible solutions. BTW, as you do the above, keep a record of the prompts that you type in. Keep that record in git. Claude already keeps some "jsonl" files in the ~/.claude/projects directory; you can copy those to git. You can even ask it to review those files, analyze them, integrate them into the AtomSpace -- play these kinds of games, attempting to endow it with a long-term memory. The point of holding this in git is to not lose track...
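The "store these structures in Atomese via MCP" step above boils down to a single `tools/call` payload. A minimal sketch follows; note that the tool name `atomese` and the argument key `expression` are hypothetical placeholders, not confirmed cogserver tool names:

```python
import json

def store_atomese_call(expr, req_id=1, tool_name="atomese"):
    # Hypothetical tools/call payload: "atomese" and "expression" are
    # placeholders for whatever tool/argument the cogserver actually
    # exposes for evaluating Atomese s-expressions.
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": {"expression": expr}},
    }

# e.g. ask the server to store a simple inheritance link
payload = store_atomese_call('(Inheritance (Concept "cat") (Concept "animal"))')
print(json.dumps(payload, indent=2))
```

The hard part the comment above points at is not this framing, which is trivial, but getting the LLM to reliably emit well-formed Atomese and to remember what it already stored.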
Summary
This PR introduces a spec-compliant MCP HTTP client (examples/mcp/mcp_client.py) that handles the full MCP handshake (initialize, initialized notification) and provides convenient wrappers for tools/list, tools/call, and resource helper operations.
It also adds three runnable Python examples under examples/mcp/python/:
A new examples/mcp/python/README.md provides usage instructions and explains what each script demonstrates.
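The overall shape of such a client can be sketched as below. This is an assumed interface for illustration (the class name, method names, and protocolVersion string are not taken from the actual mcp_client.py); the `post` callable is injectable so the JSON-RPC framing can be exercised without a live server:

```python
import itertools
import json
import urllib.request

class MCPClient:
    """Minimal sketch of an MCP HTTP client (assumed shape, not the
    real examples/mcp/mcp_client.py API)."""

    def __init__(self, url, post=None):
        self.url = url
        self._ids = itertools.count(1)        # JSON-RPC request ids
        self._post = post or self._http_post  # injectable transport

    def _http_post(self, body):
        # Default transport: POST the JSON-RPC body, parse the reply.
        req = urllib.request.Request(
            self.url, data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def _request(self, method, params=None):
        body = {"jsonrpc": "2.0", "id": next(self._ids), "method": method}
        if params is not None:
            body["params"] = params
        return self._post(body)

    def initialize(self):
        # Full handshake: initialize, then the initialized notification.
        result = self._request("initialize", {
            "protocolVersion": "2024-11-05",   # assumed version string
            "capabilities": {},
            "clientInfo": {"name": "example", "version": "0.1"},
        })
        self._post({"jsonrpc": "2.0", "method": "notifications/initialized"})
        return result

    def list_tools(self):
        return self._request("tools/list")

    def call_tool(self, name, arguments):
        return self._request("tools/call",
                             {"name": name, "arguments": arguments})
```

With a fake `post` that records each body and returns a canned result, the handshake order (initialize, then the id-less initialized notification, then tool calls) can be asserted in CI without any cogserver running.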
Testing