Add support for ChatQwen in init_chat_model() #34183

@changbindu

Description

Checked other resources

  • This is a feature request, not a bug report or usage question.
  • I added a clear and descriptive title that summarizes the feature request.
  • I used the GitHub search to find a similar feature request and didn't find it.
  • I checked the LangChain documentation and API reference to see if this feature already exists.
  • This is not related to the langchain-community package.

Package (Required)

  • langchain
  • langchain-openai
  • langchain-anthropic
  • langchain-classic
  • langchain-core
  • langchain-cli
  • langchain-model-profiles
  • langchain-tests
  • langchain-text-splitters
  • langchain-chroma
  • langchain-deepseek
  • langchain-exa
  • langchain-fireworks
  • langchain-groq
  • langchain-huggingface
  • langchain-mistralai
  • langchain-nomic
  • langchain-ollama
  • langchain-perplexity
  • langchain-prompty
  • langchain-qdrant
  • langchain-xai
  • Other / not sure / general

Feature Description

The ChatQwen class from the langchain_qwq package provides advanced support for Qwen3 and other Qwen-series models, including dedicated parameters for configuring Qwen3's reasoning mode.

Here is the package index page: https://pypi.org/project/langchain-qwq/

Qwen is a widely adopted family of open-source models; however, LangChain's init_chat_model() currently lacks native support for it. Adding Qwen support to init_chat_model() would be a significant convenience.

Here is an untested (and incomplete) patch:

--- a/libs/langchain/langchain_classic/chat_models/base.py
+++ b/libs/langchain/langchain_classic/chat_models/base.py
@@ -495,6 +495,11 @@ def _init_chat_model_helper(
         from langchain_perplexity import ChatPerplexity
 
         return ChatPerplexity(model=model, **kwargs)
+    if model_provider == "qwen":
+        _check_pkg("langchain_qwq")
+        from langchain_qwq import ChatQwen
+
+        return ChatQwen(model=model, **kwargs)
     supported = ", ".join(_SUPPORTED_PROVIDERS)
     msg = (
         f"Unsupported {model_provider=}.\n\nSupported model providers are: {supported}"
@@ -523,6 +528,7 @@ _SUPPORTED_PROVIDERS = {
     "ibm",
     "xai",
     "perplexity",
+    "qwen",
 }
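For illustration, the dispatch that init_chat_model() would perform with this patch can be sketched as a self-contained stub. This is a hedged sketch, not the real implementation: ChatQwen is replaced by a placeholder class so the snippet runs without langchain-qwq installed, only a few providers are listed, and the model name "qwen3-32b" is illustrative.

```python
from typing import Optional

# Subset of providers, mirroring _SUPPORTED_PROVIDERS in the patch above.
_SUPPORTED_PROVIDERS = {"openai", "anthropic", "perplexity", "qwen"}


class ChatQwen:
    """Placeholder standing in for langchain_qwq.ChatQwen."""

    def __init__(self, model: str, **kwargs):
        self.model = model


def init_chat_model(model: str, model_provider: Optional[str] = None, **kwargs):
    # init_chat_model() also accepts "provider:model" strings, so the
    # proposed change would enable calls like init_chat_model("qwen:...").
    if model_provider is None and ":" in model:
        model_provider, model = model.split(":", 1)
    if model_provider == "qwen":
        # The real patch imports ChatQwen from langchain_qwq here.
        return ChatQwen(model=model, **kwargs)
    supported = ", ".join(sorted(_SUPPORTED_PROVIDERS))
    raise ValueError(
        f"Unsupported {model_provider=}.\n\nSupported model providers are: {supported}"
    )


llm = init_chat_model("qwen:qwen3-32b")
print(type(llm).__name__, llm.model)  # ChatQwen qwen3-32b
```

With the actual patch, the "qwen" branch would additionally call _check_pkg("langchain_qwq") so that a missing dependency produces a clear install hint, consistent with the other providers.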

Use Case

This change will add native support for Qwen models in LangChain.

Proposed Solution

No response

Alternatives Considered

No response

Additional Context

No response

    Labels

    feature request, langchain, langchain-classic
