
Conversation


GIND123 commented Dec 10, 2025

Summary

This PR introduces a spec-compliant MCP HTTP client (examples/mcp/mcp_client.py) that handles the full MCP handshake (initialize, initialized notification) and provides convenient wrappers for tools/list, tools/call, and resource helper operations.

It also adds three runnable Python examples under examples/mcp/python/:

  • 01_create_atoms.py — performs the MCP handshake, discovers available tools, creates several Atoms, and lists existing Concepts.
  • 02_get_incoming.py — builds a small “likes” graph and demonstrates querying incoming links via getIncoming.
  • 03_set_values.py — shows how to attach values using setValue and retrieve them using getKeys / getValues.

A new examples/mcp/python/README.md provides usage instructions and explains what each script demonstrates.
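For readers who want to see the wire format, the handshake described above can be sketched as plain JSON-RPC 2.0 over HTTP. The method names (initialize, notifications/initialized, tools/list) come from the MCP specification, and the endpoint URL matches the one used in testing below; the helper names and the protocolVersion string are illustrative, not the actual client's API:

```python
import json
import urllib.request

MCP_URL = "http://localhost:18080/mcp"  # endpoint used in the testing notes

def jsonrpc(method, params=None, req_id=None):
    """Build a JSON-RPC 2.0 message; omit "id" for notifications."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    if req_id is not None:
        msg["id"] = req_id
    return msg

def post(msg):
    """POST one JSON-RPC message to the MCP endpoint and return the reply."""
    req = urllib.request.Request(
        MCP_URL,
        data=json.dumps(msg).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
    return json.loads(body) if body else None

if __name__ == "__main__":
    # 1. initialize request: client advertises protocol version + info.
    init = jsonrpc("initialize", {
        "protocolVersion": "2024-11-05",   # version string: check server docs
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    }, req_id=1)
    # 2. initialized notification: no "id", so no response is expected.
    done = jsonrpc("notifications/initialized")
    # 3. list the tools the server exposes.
    tools = jsonrpc("tools/list", req_id=2)
    for msg in (init, done, tools):
        print(json.dumps(msg))
```

Only the payload construction runs by default; send each message with post() against a live server.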

Testing

  • All three example scripts were run against CogServer MCP 0.2.1 (HTTP endpoint at http://localhost:18080/mcp) and completed successfully.


linas commented Dec 13, 2025

What exactly is the point of having an "MCP client"? Normally, ChatGPT or Claude or some other LLM would be the "MCP client". I don't understand the point of this, unless you wanted to target some other LLM system that wasn't capable of working with MCP on its own ... but when I read your example, I don't get the sense that it's a shim for some other system...

Have you tested the cogserver with any LLMs? I know it works with Claude; I use that heavily. I'm less clear about how compatible it is with other LLMs ...


GIND123 commented Dec 13, 2025

The intent isn’t to replace Claude (or any MCP-capable LLM) as the client. This PR adds:

  • A minimal reference harness for exercising the MCP handshake and core tools from Python, so developers can smoke-test the server, debug tool arguments, or use MCP from environments where the LLM doesn’t natively speak MCP (e.g., custom orchestrators, offline agents, CI).
  • Runnable examples that show the exact payloads for makeAtom, getIncoming, and setValue, complementing the existing MCP docs/resources.

I haven’t added a Claude transcript here; I’ve been using the Python harness to verify the endpoints.
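To make the exact-payload claim concrete: the tool names mentioned in this thread (makeAtom, getIncoming) travel inside the standard MCP tools/call envelope. The envelope shape below follows the MCP specification, but the argument shapes are illustrative guesses; the real JSON schema for each tool should be read from the server's tools/list reply:

```python
import json

def tool_call(name, arguments, req_id):
    """Wrap a tool invocation in the standard MCP tools/call envelope."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Argument shapes are hypothetical; consult the schema each tool
# advertises via tools/list before relying on them.
make_atom = tool_call("makeAtom",
                      {"type": "ConceptNode", "name": "Alice"}, req_id=10)
get_incoming = tool_call("getIncoming",
                         {"type": "ConceptNode", "name": "Alice"}, req_id=11)

print(json.dumps(make_atom, indent=2))
```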


linas commented Dec 14, 2025

Please excuse me; I'm going to push back some more, and I would like to explain, so that my pushback doesn't feel like a personal offense. Here's my logic:

I just spent the last three months removing many thousands of lines of code from this project and related projects; I lost count, maybe 5 KLOC or 10 KLOC of removals. This represents ten years of accumulated cruft, junk, useless widgets, utilities, duplication of existing code, half-finished abandoned projects, and "neat ideas" that were just plain old badly designed. I'm not done yet; there's more on my cleanup list. So I'm not in the mood to add code that looks like a "neat idea" but doesn't have any obvious usefulness.

To put it differently: even examples create a maintenance burden: they have to be checked, tested semi-regularly, updated as needed. Users notice and complain when they don't work. I want to reduce my maintenance burden as much as possible.

So:

  • "Smoke test": there are ten unit tests; they pass. One of them checks MCP. If the coverage is poor or incomplete, the unit test should be revised or updated, or maybe a second, more thorough test created.
  • "Example": humans should not be writing to the MCP interface, and the LLMs can figure out how to use it just fine, so the examples don't really teach the LLM anything that it can't already figure out.
  • The MCP protocol is truly ugly and nasty. JSON in general is ugly and nasty. For high-speed, high-efficiency, easy-to-generate, easy-to-parse, lightweight-on-TCP/IP work, I focus entirely on the s-expression ("sexpr") code. Although, as it happens, this is also on my list for a total redesign; it will change, maybe next week. Not sure of the timescale. The "supported API" for the sexpr is at https://github.com/opencog/atomspace-cog, and that API is meant to be "stable".

There's been a lot of instability in the last few months: many of the old archaic users, uses and apps have moldered and died; a new app is coming online, and I'm trying to get everything refreshed and modernized for those folks. So there's some churn.

There's still a meta-question: what do you hope to accomplish here? This, to me, is a far more interesting question, than sinking down into the guts of the system-as-currently-implemented.


GIND123 commented Dec 14, 2025

I understand the maintenance burden concern. I’m happy to re-scope so this doesn’t add surface area. Two options:

  • Convert this into MCP coverage in the existing test suite: add tests for makeAtom, getIncoming, and setValue using the current harness, and drop the example scripts entirely. This keeps everything under tests and minimizes future upkeep.
  • Alternatively, I can move the Python snippets out of the repo (gist/wiki) and close this PR, and instead work on whatever coverage gaps you’d like to see—either MCP or the sexpr API in atomspace-cog.
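A minimal sketch of the first option, using only the standard library so it could slot into an existing suite. The endpoint, the port, and the expectation that tools/list advertises a makeAtom tool are assumptions taken from this thread, not verified server behavior:

```python
import json
import socket
import unittest
import urllib.request

MCP_URL = "http://localhost:18080/mcp"  # endpoint from the PR's test notes

def server_up(host="localhost", port=18080):
    """True when something is listening on the MCP port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

def rpc(method, params=None, req_id=1):
    """Send one JSON-RPC 2.0 request and return the decoded reply."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    req = urllib.request.Request(
        MCP_URL, data=json.dumps(msg).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

@unittest.skipUnless(server_up(), "CogServer MCP endpoint not running")
class McpSmokeTest(unittest.TestCase):
    def test_tools_list_mentions_makeAtom(self):
        reply = rpc("tools/list")
        names = [t["name"] for t in reply["result"]["tools"]]
        self.assertIn("makeAtom", names)  # tool name from this discussion
```

Run with python -m unittest; the test is skipped outright when nothing is listening on the port, so CI without a cogserver stays green.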

My goal is to improve reliability for LLM/AtomSpace flows; I’m open to whichever path best reduces your maintenance load. Let me know which direction you prefer.


linas commented Dec 14, 2025

What are you trying to accomplish? You said you want to "improve reliability for LLM/AtomSpace flows", but you have not reported any reliability issues. Did something crash? Did something not work? What's unreliable?

Again, rather than getting lost in the mire, I would like to know what you are trying to do. Making PRs or wiki changes is not a "goal"; it's a mundane by-product, a side effect of pursuing grander goals.


GIND123 commented Dec 20, 2025

Dear Prof,

Could you list the areas where I could work on reasoning and explainability in agentic systems within the OpenCog framework? I want to explore open-source contribution in AGI, and I feel this is the best platform to do so.


linas commented Dec 26, 2025

Do you have access to Claude Code or ChatGPT or any other system that has direct access to your file system or computer? These normally have some monthly subscription fee; they are not cheap. But maybe you can get access to one via the university computing center, or some other program for students interested in computing. Set it up in a container -- LXC or Docker -- so that it does not wreck your machine.

Use it for a while. You will notice that it forgets, does not do what you ask it to, cuts corners, and says things to please you instead of "doing the right thing". You can try to fix this with prompts: "please review this text before starting a new project". This works, but only sort of. You can try to ask it to update: "when I ask you to remember something, please add it to your list". You can ask it to reason: "When I ask you to review the possibilities carefully, I want you to review what you have stored in memory, and deduce that this approach was already tried earlier this morning, so we don't have to do it again". All of this won't work, of course, but you will learn the failure modes.

Then you can say something like "using your MCP interfaces, I want you to represent these logical structures in Atomese, and store them in the AtomSpace, in a format that you can access and retrieve later". And, of course, it can do that, because it has direct access to the AtomSpace via MCP, and also because old OpenCog documentation is in its training set, so it already knows some OpenCog/Atomese basics. So you can try to get it to do this. It will, of course, fail to work the way you want it to ... and again, you can think about why it is so difficult to attach an LLM (with an API that uses strings of English words) to a symbolic reasoning system (which has an API of strings written in Scheme/Python, not English, where those strings correspond to trees, graphs, and other abstract structures).

These exercises illuminate some of the present-day difficulties of attaching symbolic AI to LLMs. If you are clever, you can map a path through the jungle of issues that prevents progress. I will happily discuss the theoretical issues and possible solutions.
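As a toy version of that exercise: an Atomese structure is just a tree, and the s-expression string an LLM would be asked to emit (or that a script would push through the sexpr or MCP interfaces) can be generated mechanically. The helper below is purely illustrative and not part of any OpenCog package; only the EvaluationLink/PredicateNode/ListLink pattern itself is standard Atomese:

```python
def atomese(atom_type, *children, name=None):
    """Render an Atomese tree in its s-expression string form.

    Leaf nodes carry a quoted name; links carry child atoms.
    """
    if name is not None:
        return f'({atom_type} "{name}")'
    return f"({atom_type} {' '.join(children)})"

# "Alice likes Bob" as the classic EvaluationLink pattern:
likes = atomese(
    "EvaluationLink",
    atomese("PredicateNode", name="likes"),
    atomese(
        "ListLink",
        atomese("ConceptNode", name="Alice"),
        atomese("ConceptNode", name="Bob"),
    ),
)
print(likes)
# -> (EvaluationLink (PredicateNode "likes") (ListLink (ConceptNode "Alice") (ConceptNode "Bob")))
```

The hard part, of course, is not generating such strings but getting the LLM to decide, reliably, which structures to build and when to retrieve them.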

BTW, as you do the above, keep a record of the prompts that you type in. Keep that record in git. Claude already keeps some "jsonl" files in the ~/.claude/projects directory; you can copy those to git. You can even ask it to review those files, analyze them, and integrate them into the AtomSpace -- play these kinds of games, attempting to endow it with a long-term memory. The point of holding this in git is to not lose track...

GIND123 closed this pull request Jan 3, 2026