Conversation
> - **Cloud-based or self-hosted** thanks to our [open-source](https://github.com/WorkflowAI/WorkflowAI/blob/main/LICENSE) licensing model
> - **We value your privacy** and we are SOC 2 Type 1 certified. We do not train models on your data, nor do the LLM providers we use.
|
> Learn more about all of WorkflowAI's features in our [docs](https://docs.workflowai.com/).

Not sure we want to send developers to the documentation. The goal of the funnel is to get developers to workflowai.com/developers/python/instructor; all the other links basically kill our tracking.
|
> WorkflowAI is an LLM router, observability, and collaboration platform that provides developers with an extensive toolkit for structured generation.
|
> ## Why use WorkflowAI with Instructor?

I think this section is too long and has too many links to other places. The main link we want developers to follow is workflowai.com/developers/python/instructor.

> description: "Complete guide to using Instructor with WorkflowAI. Learn how to generate structured, type-safe outputs and leverage WorkflowAI's model-switching, observability & reliability features"
> ---
|
> return client.chat.completions.create(
>     model="user-info-extraction-agent/claude-3-7-sonnet-latest", # Agent now runs Claude 3.7 Sonnet
It would be a lot better to show more code examples with different models instead.

The same feedback applies to the PR on our own documentation.
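A sketch of what this feedback is asking for: the same extraction agent pointed at several models. The `agent-name/model-id` routing prefix comes from the diff excerpt above; the alternative model IDs below are illustrative placeholders, not a confirmed model list.

```python
# Hedged sketch: run one agent against several models by swapping the
# model segment of WorkflowAI's "agent-name/model-id" routing string.
AGENT = "user-info-extraction-agent"  # agent name from the diff above

# Illustrative model IDs only -- check WorkflowAI's model list for real ones.
MODEL_IDS = [
    "claude-3-7-sonnet-latest",
    "gpt-4o-mini-latest",
    "gemini-1.5-flash-latest",
]

def routed_model(agent: str, model_id: str) -> str:
    """Build the agent-scoped model string used in the quoted example."""
    return f"{agent}/{model_id}"

for model_id in MODEL_IDS:
    # In the docs, each of these would be passed as the `model` argument to
    # client.chat.completions.create(model=..., response_model=UserInfo, ...)
    print(routed_model(AGENT, model_id))
```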
|
> WorkflowAI allows you to view all the runs that were made for your agent:
|
|  |
There was a problem hiding this comment.
We can add them to the PR. It looks like they have a bunch of images in docs/img
There was a problem hiding this comment.
I really doubt that they will let us commit images to their repo.. (I would definitely not). Can we just upload to our storage and pass a URL?
It looks like integrations doc don't use images
There was a problem hiding this comment.
Can we just upload to our storage and pass a URL?
yup, probably the way to go.
>     messages=[{"role": "user", "content": user_message}],
> )
|
> if __name__ == "__main__":

No need for that part; focus on the relevant code that the section is highlighting. (Adjust this in a PR for our own documentation as well.)
|
> ## Templating with Input Variables
|
> Introducing input variables separates static instructions from dynamic content, making your agents easier to observe, since WorkflowAI logs these input variables separately. Using input variables also allows you to use [benchmarks](https://docs.workflowai.com/features/benchmarks) and [deployments](https://docs.workflowai.com/features/deployments).

Re-write this sentence assuming that people don't know what "benchmarks" and "deployments" are and won't click to find out.
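To make the static/dynamic split concrete, here is a hedged sketch. The `{{variable}}` template syntax and the `extra_body={"input": ...}` request field are assumptions about WorkflowAI's API inferred from the section under review, not a confirmed interface.

```python
# Hedged sketch: the instructions stay static, while the dynamic values
# travel separately so the platform can log and template them server-side.
SYSTEM_TEMPLATE = (
    "Classify the following email from {{sender}} as 'work' or 'personal'."
)

def build_request(input_variables: dict) -> dict:
    """Assemble the request payload; field names here are assumptions."""
    return {
        "messages": [{"role": "system", "content": SYSTEM_TEMPLATE}],
        # Assumption: input variables are passed out-of-band rather than
        # interpolated into the prompt by the caller.
        "extra_body": {"input": input_variables},
    }

request = build_request({"sender": "alice@example.com"})
print(request["extra_body"])
```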
| print(f"Classification: {result.kind}") # 'work' | ||
| ``` | ||
|
> ## Using Deployments for Server-Managed Instructions

Can you send this section to an engineer friend of yours, @yannbu, to check that they understand this part? Thanks.
|
> ## Streaming
|
> We are currently implementing streaming on our OpenAI-compatible chat completion endpoint. We'll update this documentation shortly.

@yannbu could you please confirm that you already have the tests for streaming + Instructor set up in autopilot? Thanks.
|
Include a RECAP section with the main points we want to hit, plus a link/CTA on how to get started. I'll review this section, and then we'll use it in all our documentation everywhere.
> user_info = extract_user_info("John Doe is 32 years old.")
> print("Basic example result:", user_info)  # UserInfo(name='John Doe', age=32)

> ### Supported Instructor Modes
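Since the quoted heading introduces supported Instructor modes, a hedged configuration sketch of what that section could show. Which modes WorkflowAI's endpoint actually supports is not confirmed here; `Mode.JSON` and `Mode.TOOLS` are standard Instructor modes.

```python
# Hedged configuration sketch -- assumes the `openai` and `instructor`
# packages; which modes WorkflowAI supports is an assumption to verify.
import instructor
from openai import OpenAI

client = instructor.from_openai(
    OpenAI(
        base_url="https://run.workflowai.com/v1",
        api_key="<your-workflowai-api-key>",
    ),
    mode=instructor.Mode.JSON,  # or instructor.Mode.TOOLS if tool calls are proxied
)
```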
|
> Then either export your credentials:
>
> ```bash
> export WORKFLOWAI_API_KEY=<your-workflowai-api-key>
> ```

That seems unnecessary as well, since we are talking to devs? Also, I don't think the API URL should be configured via an env var, since it's a constant. We could just say a single sentence like: "Set the base URL of the OpenAI SDK to https://run.workflowai.com/v1 and use a WorkflowAI API key in place of the OpenAI API key."

Maybe we could have something simpler, similar to the OpenAI doc? https://github.com/567-labs/instructor/blob/main/docs/integrations/openai.md
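The single-sentence setup suggested above could be sketched as follows; the helper name is illustrative, and only the API key is environment-dependent, since the base URL is a constant.

```python
# Hedged sketch of the suggestion: the base URL is a constant, so only the
# API key needs to come from the environment or a parameter.
import os

WORKFLOWAI_BASE_URL = "https://run.workflowai.com/v1"

def client_kwargs(api_key=None):
    """Keyword arguments for openai.OpenAI(...); helper name is illustrative."""
    return {
        "base_url": WORKFLOWAI_BASE_URL,
        "api_key": api_key or os.environ["WORKFLOWAI_API_KEY"],
    }

# With the `openai` and `instructor` packages installed this becomes:
#   client = instructor.from_openai(OpenAI(**client_kwargs()))
print(client_kwargs(api_key="demo-key"))
```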
|
@pierrevalade @guillaq closes WOR-4550: Prepare MR for Instructor's docs.

WDYT? The updates compared to our own docs are minimal and are mostly located at the beginning of the doc. My only question is: are our docs too marketing-oriented, compared with other docs that are very "dry" and don't try to "sell" the product?