Pluggability of LLM Chat Model
In this guide, we will learn how to plug in a new chat model client. The plugin must extend (adhere to) `BaseChatModel` from the LangChain framework, so you can refer to LangChain's LLM chat model integration page. If you don't find your LLM chat model client on the above link, please refer to this link to create your custom LLM chat client by extending `BaseChatModel`.
A chat model is a language model that uses chat messages as inputs and returns chat messages as outputs.
You can seamlessly leverage it within existing Sakhi API workflows with minimal code changes.
Interface
The `BaseChatClient` class has one method to implement, which returns an instance of `BaseChatModel` representing the specific chat model client.

`get_client` (required)
Takes the LLM model name as a string, along with any required additional arguments, and returns an instance of the specific chat model.
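For reference, here is a minimal sketch of what `BaseChatClient` could look like. The actual definition lives in `llm/base.py`, so treat the exact imports and signature below as assumptions rather than the real code:

```python
from abc import ABC, abstractmethod
from typing import Any

from langchain.chat_models.base import BaseChatModel


class BaseChatClient(ABC):
    """Abstract base class that every pluggable chat model client extends."""

    @abstractmethod
    def get_client(self, model: str, **kwargs: Any) -> BaseChatModel:
        """Return a concrete LangChain chat model for the given model name."""
        ...
```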
Implementation
Let's say we create a `YourChatClient` class inheriting from `BaseChatClient`, using the chat model client provided by LangChain (in the link mentioned above). You must implement the `get_client` method to instantiate the underlying chat model client with the specified model and additional arguments.
Add the necessary environment variables to the `.env` file. `LLM_TYPE` must be defined; any chat-model-specific environment variables (e.g., `API_KEY`) must also be set.
```
LLM_TYPE=<your_chat_model_name>
```
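For instance, a client that authenticates with an API key might need a `.env` along these lines (the `API_KEY` variable name is illustrative; use whatever name your client's constructor expects):

```
LLM_TYPE=<your_chat_model_name>
API_KEY=<your_api_key>
```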
Now go to the `llm` folder and create a file named `your_chat_model_name.py`:
```python
import os
from typing import Any

from langchain.chat_models import <LangchainChatModelClient>

from llm.base import BaseChatClient


class YourChatClient(BaseChatClient):
    """
    This class provides a chat interface for interacting with <LangchainChatModelClient>.
    """

    def get_client(self, model: str, **kwargs: Any) -> <LangchainChatModelClient>:
        """
        Creates and returns a <LangchainChatModelClient> instance.

        Args:
            model (str): Name of the chat model to use.
            **kwargs: Additional arguments to be passed to the <LangchainChatModelClient> constructor.

        Returns:
            An instance of the <LangchainChatModelClient> class.
        """
        return <LangchainChatModelClient>(model=model, **kwargs)
```
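As a concrete illustration, here is what the class might look like when wrapping LangChain's `ChatOpenAI` client. The class name `OpenAIChatClient` and the `OPENAI_API_KEY` variable are assumptions for this sketch, not part of the existing codebase:

```python
import os
from typing import Any

from langchain.chat_models import ChatOpenAI

from llm.base import BaseChatClient


class OpenAIChatClient(BaseChatClient):
    """Chat interface for interacting with ChatOpenAI."""

    def get_client(self, model: str, **kwargs: Any) -> ChatOpenAI:
        # Read the API key from the environment instead of hard-coding it.
        return ChatOpenAI(
            model=model,
            openai_api_key=os.environ["OPENAI_API_KEY"],
            **kwargs,
        )
```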
Go to the `llm` folder and update `__init__.py` with the module lookup entry for `YourChatClient`.
```python
_module_lookup = {
    ...,
    "YourChatClient": "llm.<your_chat_model_name>"
}
```
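This lookup table is commonly paired with a lazy-import hook. If `llm/__init__.py` follows the usual importlib pattern, its `__getattr__` might look like the sketch below (hypothetical; check the actual file):

```python
import importlib
from typing import Any


def __getattr__(name: str) -> Any:
    # Lazily import the class from the module recorded in _module_lookup.
    if name in _module_lookup:
        module = importlib.import_module(_module_lookup[name])
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```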
Modify `env_manager.py` to import `YourChatClient` and add a mapping for it in the `self.indexes` dictionary.
```python
from llm import (
    ...,
    YourChatClient
)

self.indexes = {
    "llm": {
        "class": {
            ...,
            "<your_chat_model_name>": YourChatClient
        },
        "env_key": "LLM_TYPE"
    }
}
```
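With this mapping in place, the environment manager can resolve and instantiate the client from `LLM_TYPE` at startup. The snippet below is a hypothetical sketch of that resolution step (the actual logic lives in `env_manager.py` and may differ):

```python
import os

from llm import YourChatClient

# Hypothetical sketch of the resolution step; the real logic lives in env_manager.py.
indexes = {
    "llm": {
        "class": {"<your_chat_model_name>": YourChatClient},
        "env_key": "LLM_TYPE",
    }
}

env_key = indexes["llm"]["env_key"]               # "LLM_TYPE"
llm_type = os.environ[env_key]                    # e.g. "<your_chat_model_name>"
llm_class = indexes["llm"]["class"][llm_type]()   # an instance of YourChatClient
```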
This setup ensures that `YourChatClient` can be instantiated based on specific environment variables, effectively integrating it into the environment management system. The `self.indexes` dictionary now includes a mapping where `<your_chat_model_name>` corresponds to the `YourChatClient` class, using `"LLM_TYPE"` as the environment key.
Example usage
```python
from env_manager import llm_class

llm = llm_class.get_client(model=<your_llm_model_name>)
result = llm.invoke("Tell me a game using two sticks")
print(result.content)
# The game you can play using two sticks is called Gilli Danda. This game originated in India and requires two sticks. The smaller stick should be an oval-shaped wooden piece known as Gilli and the longer stick is known as danda. The player needs to use the danda to hit the Gilli at the raised end, which then flips.
```
For more examples, please refer to the LangChain documentation.