This document provides detailed guidance on creating model provider plugins: initializing the plugin project, choosing a model configuration method (predefined or custom models), creating the provider configuration YAML file, and the complete process of writing the provider code.
The first step in creating a Model type plugin is to initialize the plugin project and create the model provider file, followed by integrating specific predefined/custom models. If you only want to add a new model to an existing model provider, please refer to Quick Integration of a New Model.
For detailed instructions on preparing the plugin development scaffolding tool, please refer to Initializing Development Tools. Before you begin, it’s recommended that you understand the basic concepts and structure of Model Plugins.
In the scaffolding command-line tool's directory, create a new Dify plugin project. If you have renamed the binary file to `dify` and copied it to the `/usr/local/bin` path, you can run the following command to create a new plugin project:
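A typical invocation (assuming the renamed `dify` binary is on your PATH; adjust the binary name if you kept the original):

```bash
dify plugin init
```

The tool then prompts for basic project metadata such as the plugin name and author, followed by the template and permission choices described below.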
All templates in the scaffolding tool provide complete code projects. Choose the `LLM` type plugin template.
Configure the following permissions for this LLM plugin:
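The permissions you select are recorded in the generated manifest. As an illustrative sketch (the `resource.permission` layout below follows the plugin manifest schema; verify the exact keys against your generated manifest.yaml), an LLM plugin needs model invocation rights:

```yaml
resource:
  permission:
    model:
      enabled: true
      llm: true
```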
Model providers support the following two model configuration methods:
• `predefined-model` (Predefined Models): Common large-model types that only require unified provider credentials; once configured, the predefined models under the provider can be used. For example, the OpenAI model provider offers a series of predefined models such as `gpt-3.5-turbo-0125` and `gpt-4o-2024-05-13`. For detailed development instructions, please refer to Integrating Predefined Models.
• `customizable-model` (Custom Models): Requires manually adding credential configuration for each model. For example, Xinference supports both LLM and Text Embedding, but each model has a unique `model_uid`; to integrate both, you need to configure a `model_uid` for each model. For detailed development instructions, please refer to Integrating Custom Models.
These two configuration methods can coexist, meaning a provider can support `predefined-model` + `customizable-model`, or just `predefined-model` alone. With unified provider credentials configured, you can use the predefined models and any models fetched from remote sources, and if you add new models you can use custom models on top of that foundation.
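In the provider YAML, the supported methods are declared through the `configurate_methods` field. A minimal illustrative excerpt for a provider supporting both:

```yaml
configurate_methods:
  - predefined-model
  - customizable-model
```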
Adding a new model provider mainly includes the following steps:
1. Create the model provider configuration YAML file: Add a YAML file in the provider directory that describes the provider's basic information and parameter configuration. Write the content according to the ProviderSchema requirements to ensure consistency with the system's specifications.
2. Write the model provider code: Create the provider class, a Python class that meets the system's interface requirements, connects to the provider's API, and implements the core functionality.
The following sections walk through each step in detail.
Manifest is a YAML-format file that declares the model provider's basic information, supported model types, configuration methods, and credential rules. The plugin project template automatically generates the configuration files under the `/providers` path.
Here’s an example of the `anthropic.yaml` configuration file for Anthropic:
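The sketch below is abridged and illustrative rather than a verbatim copy of the upstream file; field names follow the ProviderSchema, and values such as the icon file names and help URL are examples:

```yaml
provider: anthropic
label:
  en_US: Anthropic
description:
  en_US: Anthropic's powerful models, such as Claude 3.
icon_small:
  en_US: icon_s_en.svg
icon_large:
  en_US: icon_l_en.svg
supported_model_types:
  - llm
configurate_methods:
  - predefined-model
provider_credential_schema:
  credential_form_schemas:
    - variable: anthropic_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        en_US: Enter your API Key
    - variable: anthropic_api_url
      label:
        en_US: API URL
      type: text-input
      required: false
      placeholder:
        en_US: Enter your API URL
```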
If the provider you’re integrating offers custom models, as OpenAI does with fine-tuned models, you need to add the `model_credential_schema` field.
Here’s a sample for the OpenAI family of models:
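A hedged sketch of the field (the credential variable names, e.g. `openai_api_key` and `openai_api_base`, are illustrative):

```yaml
model_credential_schema:
  model:
    label:
      en_US: Model Name
    placeholder:
      en_US: Enter your model name
  credential_form_schemas:
    - variable: openai_api_key
      label:
        en_US: API Key
      type: secret-input
      required: true
      placeholder:
        en_US: Enter your API Key
    - variable: openai_api_base
      label:
        en_US: API Base
      type: text-input
      required: false
      placeholder:
        en_US: Enter your API Base
```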
For more complete model provider YAML specifications, please refer to the Model Schema documentation.
Create a Python file with the same name as the provider, e.g., `anthropic.py`, in the `/providers` folder, and implement a class that inherits from the `__base.model_provider.ModelProvider` base class, e.g., `AnthropicProvider`.
Here’s example code for Anthropic:
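A minimal sketch, assuming the `dify_plugin` SDK exposes `ModelProvider`, `ModelType`, and `CredentialsValidateFailedError` under the import paths shown, and that `claude-3-opus-20240229` is one of the provider's predefined models; verify both against your SDK version and model list:

```python
import logging

from dify_plugin import ModelProvider
from dify_plugin.entities.model import ModelType
from dify_plugin.errors.model import CredentialsValidateFailedError

logger = logging.getLogger(__name__)


class AnthropicProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        """
        Validate the unified provider credentials.
        Raise CredentialsValidateFailedError if validation fails.
        """
        try:
            # Reuse the LLM model's credential check: if one known model
            # accepts the credentials, the provider credentials are valid.
            model_instance = self.get_model_instance(ModelType.LLM)
            model_instance.validate_credentials(
                model="claude-3-opus-20240229", credentials=credentials
            )
        except CredentialsValidateFailedError as ex:
            raise ex
        except Exception as ex:
            logger.exception(
                f"{self.get_provider_schema().provider} credentials validate failed"
            )
            raise ex
```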
Providers need to inherit the `__base.model_provider.ModelProvider` base class and implement the `validate_provider_credentials` method to validate unified provider credentials. You can also stub out `validate_provider_credentials` at first and reuse the model credential validation logic once that method is implemented.
For other types of model providers, refer to the following configuration approach. For custom model providers like Xinference, you can skip the full implementation step: simply create an empty class called `XinferenceProvider` and implement an empty `validate_provider_credentials` method in it, as shown in the sketch below.
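A minimal sketch of such a placeholder (reusing the `ModelProvider` base class from the earlier example; substitute your SDK's actual base class if it differs):

```python
from dify_plugin import ModelProvider


class XinferenceProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        # Never called for custom-model providers, but required because the
        # parent class declares it abstract; an empty body satisfies that.
        pass
```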
Detailed Explanation:
• `XinferenceProvider` is a placeholder class used to identify custom model providers.
• While the `validate_provider_credentials` method won’t actually be called, it must exist because its parent class is abstract and requires all child classes to implement it. Providing an empty implementation avoids the instantiation errors that would otherwise occur from leaving an abstract method unimplemented.
After initializing the model provider, the next step is to integrate the specific LLM models offered by the provider. For detailed instructions, please refer to Integrating Predefined Models or Integrating Custom Models.