Conversation
> We've also published a note about how we've secured the `query_completions` tool against malicious use [here](/security#sql-query-tool-security). We welcome more feedback on our approach via our [Slack](https://join.slack.com/t/anotherai-dev/shared_invite/zt-3av2prezr-Lz10~8o~rSRQE72m_PyIJA).
> ## Some (current) limitations.

@guillaq what do you think? Should we add more limitations?

No, I can't think of any. Maybe the fact that we've focused our support on the OpenAI SDK for now?
- Changed bullet points to en-dashes in preview.mdx
- Updated index.mdx with preview announcement callout

🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
> description: Public preview
> ---
> Today we're introducing a public preview of **AnotherAI**, an MCP server designed for AI engineering tasks that includes a set of tools that enables your AI assistant (such as Claude Code, Cursor, etc.) to:

Suggested change:

> Today we're introducing a public preview of **AnotherAI**, an MCP server designed for AI engineering tasks that includes a set of tools that enables your AI assistant (such as Claude Code, Cursor, ChatGPT, etc.) to:

Include ChatGPT? Or is that not something we want to highlight anymore?

I'm not yet 💯 convinced that ChatGPT works well with MCP servers as of now, but I need to test more.
> ## Compare models' performance, price, and latency.
> AnotherAI's MCP server exposes tools that let your AI assistant access over 100 models and compare their performance, price, and latency. In our own tests, we've found that models like Opus 4 are very good at reviewing work from other models, and the latest developments in longer context windows (Sonnet 4.5 supports up to 1M tokens; Gemini has models with a 2M-token context window) make it possible to compare more parameters (models and prompts) and agents with longer inputs.
This paragraph is clear and informative, but it would be even stronger if it clearly framed why the user should care about this, i.e. "What problem is model comparison - specifically AI-driven model comparison - solving for users?", "How will this make your [the reader's] product better/life easier?"

Do you think that people using LLMs are not yet aware of the benefits of comparing different models/prompts? That does not seem to be my experience with customers.

To answer your specific question: all users I've spoken to are aware of the benefits, but that's because they're already WorkflowAI users (who have seen the benefit).

In most (all?) cases, I think being explicit about benefits and why readers should care is a relevant part of marketing a product, hence the feedback I left. But if you feel that all potential users will already be experienced enough with building LLM agents to understand the benefits of comparing models (and experienced LLM/agent users are the only user base we're interested in), then this can be disregarded.
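For concreteness, since the preview focuses on OpenAI SDK support, the kind of latency comparison the paragraph describes could be sketched as below. This is only an illustration: the helper works with any OpenAI-SDK-compatible client, and the endpoint URL and model IDs referenced in the comments are placeholders, not documented AnotherAI values.

```python
import time

def time_completion(client, model: str, prompt: str) -> float:
    """Wall-clock latency (seconds) of a single chat completion.

    `client` is any OpenAI-SDK-compatible client, e.g. one built with
    OpenAI(base_url=..., api_key=...) pointed at a compatible proxy
    (base URL shown here is a placeholder assumption).
    """
    start = time.perf_counter()
    client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return time.perf_counter() - start

# Usage sketch with placeholder model IDs:
# results = {m: time_completion(client, m, "Summarize MCP in one sentence.")
#            for m in ("model-a", "model-b")}
```

Running the same prompt against each candidate model and recording latency (alongside per-token pricing) is the manual version of what the MCP tools automate.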
Co-authored-by: Anya <75702826+anyacherniss@users.noreply.github.com>
Update language to be more concise and accurate regarding context windows.
**guillaq** left a comment:

Nice!

The only critique I would have is that, just reading this, I don't understand how AnotherAI can have access to my production data, so it might seem too good to be true?

Maybe we should push on implementing the import completion endpoint?
I've added an FAQ section to address this question at the end.
- Fix grammar and consistency issues
- Add FAQ section about MCP access to completions
- Add note about alternative observability approach
- Expand prompt examples and limitations section
I really like the import completion endpoint, but I think we should ship ASAP and the endpoint isn't ready/tested.
@anyacherniss @pierrevalade This is also used as the Home Page content. I think we can close this PR.