
Add parse_response to Processor, make it a bit more official#45143

Merged
Rocketknight1 merged 3 commits into main from make_parse_response_official_2
Mar 31, 2026
Conversation

@Rocketknight1
Member

This PR adds parse_response to Processor classes by wrapping the Tokenizer method!

cc @zucchini-nlp
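
As a rough sketch of the delegation pattern described above (the Toy* class names below are simplified stand-ins for illustration, not the actual transformers implementation):

```python
# Hypothetical sketch: a Processor gains parse_response by delegating
# to the tokenizer it wraps (stand-in classes, not the real library code).
class ToyTokenizer:
    # Models that define a response schema can parse structured output.
    response_schema = {"type": "object"}

    def parse_response(self, text):
        # The real method parses model output against response_schema;
        # here we just wrap the raw text in an assistant message.
        return {"role": "assistant", "content": text}


class ToyProcessor:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def parse_response(self, text):
        # Delegate straight to the wrapped tokenizer method.
        return self.tokenizer.parse_response(text)


processor = ToyProcessor(ToyTokenizer())
message = processor.parse_response("hello")
```

This keeps a single implementation on the tokenizer while letting callers that only hold a processor use the same entry point.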

@Rocketknight1 marked this pull request as ready for review March 31, 2026 13:15
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@zucchini-nlp left a comment


Nice, can you add it to any-to-any as well and we can merge

Comment on lines -440 to +443

-        generated_text = list(prompt_text.messages) + [
-            {"role": "assistant", "content": generated_text}
-        ]
+        if getattr(self.tokenizer, "response_schema", False):
+            assistant_message = self.tokenizer.parse_response(generated_text)
+        else:
+            assistant_message = {"role": "assistant", "content": generated_text}
Member


can we do the same in the any-to-any pipe?

Member Author


Done! Just copied things over there

        generated_text = list(prompt_text.messages) + [
            {"role": "assistant", "content": generated_text}
        ]
        if getattr(self.tokenizer, "response_schema", False):
Member


do we always load a tokenizer on these pipes, or should it just be: if processor.tokenizer.response_schema: processor.parse_response?

Member Author

@Rocketknight1 Mar 31, 2026


I think we always load a tokenizer, but I want to be safe just in case response_schema is undefined for whatever reason!
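
The safety net here is the three-argument form of getattr, which returns the given default instead of raising AttributeError when the attribute is undefined. A minimal illustration (the two toy classes are made up for this example):

```python
# getattr with a default never raises AttributeError: it returns the
# fallback (False here) when the attribute does not exist on the object.
class WithSchema:
    response_schema = {"type": "object"}


class WithoutSchema:
    pass


has_schema = bool(getattr(WithSchema(), "response_schema", False))
no_schema = getattr(WithoutSchema(), "response_schema", False)
```

So the `if getattr(self.tokenizer, "response_schema", False):` branch simply falls through to the plain-dict path whenever the attribute is missing or falsy.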

@Rocketknight1 added this pull request to the merge queue Mar 31, 2026
Merged via the queue into main with commit e7e9efa Mar 31, 2026
30 checks passed
@Rocketknight1 deleted the make_parse_response_official_2 branch March 31, 2026 17:07
SangbumChoi pushed a commit to SangbumChoi/transformers that referenced this pull request Apr 4, 2026
…face#45143)

* Add parse_response to Processor, make it a bit more official

* Make the parse_response annotation a string to avoid torch import issues

* Add the same logic to any-to-any
sirzechs66 pushed a commit to sirzechs66/transformers that referenced this pull request Apr 18, 2026
…face#45143)

* Add parse_response to Processor, make it a bit more official

* Make the parse_response annotation a string to avoid torch import issues

* Add the same logic to any-to-any

3 participants