docs: add preview release notes#391

Open
pierrevalade wants to merge 7 commits into `main` from `pierre/add-preview-release-notes`

Conversation

@pierrevalade (Contributor):

Summary

  • Added preview release notes documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@vercel bot commented Oct 2, 2025:

The latest updates on your projects.

| Project | Status | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| anotherai | Ready | Preview | Oct 6, 2025 8:19pm |
| anotherai-docs | Error | — | Oct 6, 2025 8:19pm |


We've also published a note [here](/security#sql-query-tool-security) about how we have secured the `query_completions` tool against malicious use. We welcome more feedback on our approach via our [Slack](https://join.slack.com/t/anotherai-dev/shared_invite/zt-3av2prezr-Lz10~8o~rSRQE72m_PyIJA).
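As a purely illustrative sketch of the kind of guard that note describes (the rules and names below are hypothetical, not AnotherAI's actual implementation), a read-only check for a SQL query tool might look like:

```python
import re

# Illustrative guard: allow only a single SELECT statement and reject
# anything that could modify data or chain extra statements.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|attach|pragma)\b",
    re.IGNORECASE,
)

def is_safe_query(sql: str) -> bool:
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # more than one statement
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stripped)
```

A real implementation would likely also enforce this at the database layer (e.g. a read-only connection), as defense in depth.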

## Some (current) limitations.
@pierrevalade (Contributor, Author):

@guillaq what do you think? should we add more limitations?

@guillaq (Member):

No, I can't think of any. Maybe the fact that we have focused our support on the OpenAI SDK for now?

- Changed bullet points to en-dashes in preview.mdx
- Updated index.mdx with preview announcement callout

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
description: Public preview
---

Today we're introducing a public preview of **AnotherAI**, an MCP server designed for AI engineering tasks that includes a set of tools that enable your AI assistant (such as Claude Code, Cursor, etc.) to:
Collaborator:
Suggested change
Today we're introducing a public preview of **AnotherAI**, an MCP server designed for AI engineering tasks that includes a set of tools that enable your AI assistant (such as Claude Code, Cursor, etc.) to:
Today we're introducing a public preview of **AnotherAI**, an MCP server designed for AI engineering tasks that includes a set of tools that enable your AI assistant (such as Claude Code, Cursor, ChatGPT, etc.) to:

Include ChatGPT? Or not something we want to highlight anymore

@pierrevalade (Contributor, Author):

I'm not yet 💯 convinced that ChatGPT works well with MCP servers as of now, but I need to test more.
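To make the quoted sentence from the doc concrete: in MCP terms, a server "exposes tools" that an assistant can discover and call by name. A toy, stdlib-only sketch of that idea (not AnotherAI's real API; the tool and model names here are made up):

```python
from typing import Callable

# Toy tool registry: an MCP server exposes named tools, and an
# assistant discovers and invokes them by name.
TOOLS: dict[str, Callable[..., object]] = {}

def tool(fn: Callable[..., object]) -> Callable[..., object]:
    """Register a function so an assistant could discover and call it."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_models() -> list[str]:
    # A real server would return live model metadata.
    return ["model-a", "model-b", "model-c"]

@tool
def query_completions(sql: str) -> str:
    # A real server would run a sanitized, read-only query.
    return f"ran: {sql}"

# An assistant-side call is then just a lookup plus an invocation:
result = TOOLS["list_models"]()
```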


## Compare models' performance, price, and latency.

AnotherAI's MCP server exposes tools that let your AI assistant access over 100 models and compare their performance, price, and latency. In our own tests, we've found that models like Opus 4 are very good at reviewing work from other models, and recent developments in longer context windows (Sonnet 4.5 supports up to 1M tokens; Gemini has models with a 2M-token context window) make it possible to compare more parameters (models and prompts) and agents with longer inputs.
Collaborator:

This paragraph is clear and informative, but it would be even stronger if it clearly framed why the user should care about this, i.e. "What problem is model comparison (specifically AI-driven model comparison) solving for users?" and "How will this make your [the reader's] product better/life easier?"

@pierrevalade (Contributor, Author):

Do you think that people using LLMs are not yet aware of the benefits of comparing different models/prompts? That does not seem to be my experience with customers.

Collaborator:

To answer your specific question: all users I've spoken to are aware of the benefits, but that's because they're already WorkflowAI users (who have seen the benefit).

In most (all?) cases, I think being explicit about benefits and why readers should care is a relevant part of marketing a product, hence the feedback I left. But if you feel that all potential users will already be experienced enough with building LLM agents to understand the benefits of comparing models (and experienced LLM/agent users are the only user base we're interested in) then this can be disregarded.
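The workflow the quoted paragraph describes, running several models and filtering on price and latency while a reviewer model scores quality, can be sketched as follows (every number and model name below is made up for illustration; this is not AnotherAI code or real benchmark data):

```python
from dataclasses import dataclass

@dataclass
class Run:
    model: str
    cost_usd: float   # cost of one evaluation run (hypothetical)
    latency_s: float  # end-to-end latency (hypothetical)
    score: float      # quality score from a reviewer model, 0..1

def rank(runs: list[Run], max_cost: float, max_latency: float) -> list[Run]:
    """Keep runs within the cost/latency budget, best quality first."""
    eligible = [
        r for r in runs
        if r.cost_usd <= max_cost and r.latency_s <= max_latency
    ]
    return sorted(eligible, key=lambda r: r.score, reverse=True)

runs = [
    Run("model-a", cost_usd=0.08, latency_s=2.1, score=0.91),
    Run("model-b", cost_usd=0.01, latency_s=0.9, score=0.84),
    Run("model-c", cost_usd=0.20, latency_s=4.0, score=0.93),
]
best = rank(runs, max_cost=0.10, max_latency=3.0)
```

The point of the comparison is exactly this kind of trade-off: the highest-scoring run may be excluded by budget, and the tooling surfaces that.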

Co-authored-by: Anya <75702826+anyacherniss@users.noreply.github.com>
Update language to be more concise and accurate regarding context windows.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@guillaq (Member) left a comment:

Nice !

The only critique I would have is that, just reading this, I don't understand how AnotherAI can have access to my production data, so it might seem too good to be true.
Maybe we should push on implementing the import completion endpoint?

@pierrevalade (Contributor, Author):

> The only critique i would have is that just reading this I don't understand how AnotherAI can have access to my production data and so it might seem too good to be true?

I've added a FAQ section at the end to address this question.

- Fix grammar and consistency issues
- Add FAQ section about MCP access to completions
- Add note about alternative observability approach
- Expand prompt examples and limitations section

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@pierrevalade pierrevalade marked this pull request as ready for review October 6, 2025 12:44
@pierrevalade (Contributor, Author):

> Maybe we should push on implementing the import completion endpoint ?

I really like the import completion endpoint, but I think we should ship ASAP and the endpoint isn't ready/tested.

@guillaq (Member) left a comment:

Sounds good



@anyacherniss (Collaborator) left a comment:

🚀

@jacekzimonski (Collaborator):

@anyacherniss @pierrevalade
The newest version of `preview.mdx` was added to the web from this repo in PR #444 and is used as the Home Page content.

I think we can close this PR.
