This document is aimed at developers who need to write Python code to add or enhance Dify model support. It provides detailed guidance on creating directory structures, writing model configurations, implementing model calling logic, and debugging and publishing plugins, including details of core method implementation and error handling.
This document is a standard guide for developers who need to write Python code to add or enhance model support for Dify. You’ll need to follow the steps in this guide when the model you’re adding involves new API call logic, special parameter handling, or new features that Dify needs to explicitly support (such as Vision, Tool Calling).
Before reading this document, it’s recommended that you first familiarize yourself with the basics of Dify plugin development and model provider configuration.

This guide will walk you through the complete process of creating a directory structure, writing model configurations (YAML), implementing model calling logic (Python), and debugging and publishing plugins.
A well-organized directory structure is the foundation for developing maintainable plugins. You need to create specific directories and files for your model provider plugin.
1. **Provider Directory**: In the `models/` directory of your plugin project (typically a local clone of `dify-official-plugins`), find or create a folder named after the model provider (e.g., `models/my_new_provider`).
2. **`models` Subdirectory**: In the provider directory, create a `models` subdirectory.
3. **Model Type Directories**: In the `models/models/` directory, create a subdirectory for each model type you need to support. Common types include:
   - `llm`: Text generation models
   - `text_embedding`: Text Embedding models
   - `rerank`: Rerank models
   - `speech2text`: Speech-to-text models
   - `tts`: Text-to-speech models
   - `moderation`: Content moderation models
4. **Implementation File**: In each model type directory (e.g., `models/models/llm/`), you need to create a Python file to implement the calling logic for that type of model (e.g., `llm.py`).
5. **Model Configuration Files**: In the same directory, create a YAML file for each specific model (e.g., `my-model-v1.yaml`).
6. **`_position.yaml` (Optional)**: You can also create a `_position.yaml` file to control the display order of models of that type in the Dify UI.

Example Structure (assuming provider `my_provider` supports LLM and Embedding):
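A sketch of the resulting layout (the file names such as `my-llm-model-v1.yaml` are illustrative, not prescribed):

```
models/my_provider/
└── models/
    ├── llm/
    │   ├── llm.py
    │   ├── my-llm-model-v1.yaml
    │   └── _position.yaml
    └── text_embedding/
        ├── text_embedding.py
        └── my-embedding-model-v1.yaml
```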
For each specific model, you need to create a YAML file to describe its properties, parameters, and features so that Dify can correctly understand and use it.
1. **Create a YAML File**: In the model type directory (e.g., `models/models/llm/`), create a YAML file for the model you want to add. The filename typically matches or is descriptive of the model ID (e.g., `my-llm-model-v1.yaml`).
2. **Write the Configuration**: Key fields include:
   - `model`: (Required) The official API identifier for the model.
   - `label`: (Required) The name displayed in the Dify UI (supports multiple languages).
   - `model_type`: (Required) Must match the directory type (e.g., `llm`).
   - `features`: (Optional) Declare special features supported by the model (e.g., `vision`, `tool-call`, `stream-tool-call`, etc.).
   - `model_properties`: (Required) Define inherent model properties, such as `mode` (`chat` or `completion`) and `context_size`.
   - `parameter_rules`: (Required) Define user-adjustable parameters and their rules (`name`, `type`, `required`, `default`, `min`/`max`, `options`, etc.). You can use `use_template` to reference predefined templates to simplify configuration of common parameters (such as `temperature`, `max_tokens`).
   - `pricing`: (Optional) Define billing information for the model.

Example (`claude-3-5-sonnet-20240620.yaml`):
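A sketch of what such a configuration looks like; field values such as `context_size`, the parameter bounds, and `pricing` are illustrative and should be verified against the provider's published specifications:

```yaml
model: claude-3-5-sonnet-20240620
label:
  en_US: Claude 3.5 Sonnet
model_type: llm
features:
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 4096
    min: 1
    max: 8192
pricing:
  input: '3.00'
  output: '15.00'
  unit: '0.000001'
  currency: USD
```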
This is the core step for implementing model functionality. You need to write code in the corresponding model type’s Python file (e.g., `llm.py`) to handle API calls, parameter conversion, and returning results.
Create/Edit Python File: Create or open the corresponding Python file (e.g., llm.py
) in the model type directory (e.g., models/models/llm/
).
Define Implementation Class:

- Define a class named, for example, `MyProviderLargeLanguageModel`.
- Make it inherit from the corresponding base class in the plugin SDK, e.g., `dify_plugin.provider_kits.llm.LargeLanguageModel`.
Implement Key Methods: (The specific methods to implement depend on the inherited base class; LLM is used as an example below.)

- `_invoke(...)`: The core calling method.

  ```python
  def _invoke(self, model: str, credentials: dict, prompt_messages: List[PromptMessage], model_parameters: dict, tools: Optional[List[PromptMessageTool]] = None, stop: Optional[List[str]] = None, stream: bool = True, user: Optional[str] = None) -> Union[LLMResult, Generator[LLMResultChunk, None, None]]:
  ```

  Its responsibilities:
  - Read `credentials` and `model_parameters`.
  - Convert the Dify `prompt_messages` format to the format required by the provider API.
  - Handle the `tools` parameter to support Function Calling / Tool Use (if the model supports it).
  - Handle the `stream` parameter:
    - When `stream=True`, this method must return a generator (`Generator`) that uses `yield` to return `LLMResultChunk` objects piece by piece. Each chunk contains partial results (text, tool calling blocks, etc.) and optional usage information.
    - When `stream=False`, this method must return a complete `LLMResult` object, containing the final text result, a complete list of tool calls, and total usage information (`LLMUsage`).
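The stream handling contract can be sketched as follows. Note that `LLMResultChunk` and `LLMResult` here are minimal stand-in dataclasses rather than the real `dify_plugin` classes, and `invoke` is a simplified stand-in for `_invoke`:

```python
from dataclasses import dataclass
from typing import Generator, List, Union


@dataclass
class LLMResultChunk:
    """Stand-in for the SDK chunk type: one partial piece of output."""
    delta_text: str


@dataclass
class LLMResult:
    """Stand-in for the SDK result type: the complete answer."""
    text: str


def invoke(pieces: List[str], stream: bool = True) -> Union[LLMResult, Generator[LLMResultChunk, None, None]]:
    """Simplified stand-in for _invoke showing the stream/non-stream contract."""
    if stream:
        # stream=True: return a generator that yields chunks piece by piece.
        def generate() -> Generator[LLMResultChunk, None, None]:
            for piece in pieces:
                yield LLMResultChunk(delta_text=piece)
        return generate()
    # stream=False: return one complete result object.
    return LLMResult(text="".join(pieces))
```

In a real implementation, each yielded chunk would be built from the provider's streaming events rather than a pre-split list.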
- `validate_credentials(self, model: str, credentials: dict) -> None`: (Required) Used to validate the validity of credentials when a user adds or modifies them. This is typically implemented by calling a simple, low-cost API endpoint (such as listing available models or checking the balance). If validation fails, a `CredentialsValidateFailedError` or one of its subclasses should be raised.

- `get_num_tokens(self, model: str, credentials: dict, prompt_messages: List[PromptMessage], tools: Optional[List[PromptMessageTool]] = None) -> int`: (Optional but recommended) Used to estimate the number of tokens for a given input. If it cannot be calculated accurately or the API does not support it, you can return 0.

- `@property _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]`: (Required) Define an error-mapping dictionary. The keys are standard `InvokeError` subclasses from Dify, and the values are lists of exception types that may be thrown by the vendor SDK and need to be mapped to that standard error. This is crucial for Dify to handle errors from different providers uniformly.
Thorough testing and debugging are essential before contributing your plugin to the community. Dify provides remote debugging capabilities, allowing you to modify code locally and test the effects in real-time in a Dify instance.
Get Debugging Information:

- In your Dify instance, obtain the `Debug Key` and `Remote Server Address` (e.g., `http://<your-dify-domain>:5003`).

Configure Local Environment:
In the root directory of your local plugin project, find or create a `.env` file (it can be copied from `.env.example`).

Edit the `.env` file, filling in the debugging information:
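A sketch of the relevant entries, using the variable names from the plugin SDK's `.env.example` (verify the exact names against your copy; the host, port, and key below are placeholders):

```
INSTALL_METHOD=remote
REMOTE_INSTALL_HOST=<your-dify-domain>
REMOTE_INSTALL_PORT=5003
REMOTE_INSTALL_KEY=<your-debug-key>
```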
Start Local Plugin Service:
In the plugin project root directory, make sure your Python environment is activated (if using a virtual environment).
Run the main program (typically `python -m main`).
Observe the terminal output. If the connection is successful, there will typically be a corresponding log prompt.
Test in Dify: in your Dify instance, configure and invoke the model provided by the plugin to verify its behavior in real time.
When you’ve completed development and debugging, and are satisfied with the plugin’s functionality, you can package it and contribute it to the Dify community.
Stop the local debugging service (`Ctrl+C`).
Run the packaging command in the plugin project root directory (with the Dify plugin CLI, typically `dify plugin package ./<provider_name>`). This will generate a `<provider_name>.difypkg` file in the project root directory.
- Commit and push your changes to your fork of the `dify-official-plugins` repository.
- Open a Pull Request to the `langgenius/dify-official-plugins` repository on GitHub. Clearly describe the changes you’ve made, the models or features you’ve added, and any necessary testing instructions in the PR description.

(See also: the `manifest.yaml` specifications.)