Pluggability of LLM Chat Model

In this guide, we will learn how to plug in a new chat model client. The plugin needs to extend/adhere to the BaseChatModel of the Langchain framework. Hence, you can refer to Langchain's LLM chat model integration page.

If you don't find your LLM chat model client on that page, please refer to Langchain's guide on creating a custom chat model by extending BaseChatModel.

A chat model is a language model that takes chat messages as inputs and returns chat messages as outputs.

You can seamlessly leverage it within existing Sakhi API workflows with minimal code changes.
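
A quick illustration of this message-in, message-out contract (a minimal sketch; ChatOpenAI is used purely as an illustrative client):

from langchain.chat_models import ChatOpenAI  # illustrative client
from langchain.schema import HumanMessage

chat = ChatOpenAI(model="gpt-3.5-turbo")
# Input is a list of chat messages; output is a single chat message.
response = chat.invoke([HumanMessage(content="Hello!")])
print(response.content)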

Interface

The BaseChatClient class has the below method to implement, which returns an instance of BaseChatModel representing the specific chat model client.

| Method/Property | Description | Required/Optional |
| --- | --- | --- |
| get_client | Takes the LLM model name as a string plus any required additional arguments, and returns an instance of the specific chat model. | Required |
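
The interface in llm/base.py can be pictured as the following abstract base class (a minimal sketch reconstructed from the description above, not the verbatim Sakhi source):

from abc import ABC, abstractmethod
from typing import Any

from langchain.chat_models.base import BaseChatModel

class BaseChatClient(ABC):
    """Contract that every pluggable chat model client must satisfy."""

    @abstractmethod
    def get_client(self, model: str, **kwargs: Any) -> BaseChatModel:
        """Return a configured Langchain chat model instance."""
        ...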

Implementation

Let's say we create a YourChatClient class inheriting from BaseChatClient, using one of the chat model clients provided by Langchain (from the link mentioned above). You must implement the get_client method to instantiate the chat model client with the specified model and additional arguments.

Add the necessary environment variables to the .env file. LLM_TYPE must be defined; LLM-chat-model-specific environment variables (e.g., API_KEY) must also be set.

env
LLM_TYPE=<your_chat_model_name>
API_KEY=<your_api_key>

Now go to the llm folder and create a file named your_chat_model_name.py.

your_chat_model_name.py
from typing import Any

from langchain.chat_models import <LangchainChatModelClient>
from llm.base import BaseChatClient

class YourChatClient(BaseChatClient):
    """
    This class provides a chat interface for interacting with <LangchainChatModelClient>.
    """
    def get_client(self, model: str, **kwargs: Any) -> <LangchainChatModelClient>:
        """
        This method creates and returns a <LangchainChatModelClient> instance.

        Args:
            model (str): The chat model to use.
            **kwargs: Additional arguments to be passed to the <LangchainChatModelClient> constructor.

        Returns:
            An instance of the <LangchainChatModelClient> class.
        """
        return <LangchainChatModelClient>(model=model, **kwargs)
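
As a concrete illustration, plugging in OpenAI's chat model via Langchain's ChatOpenAI client might look like this (the openai_chat.py filename and OpenAIChatClient name are illustrative assumptions, not part of the Sakhi source):

openai_chat.py
from typing import Any

from langchain.chat_models import ChatOpenAI
from llm.base import BaseChatClient

class OpenAIChatClient(BaseChatClient):
    """
    Chat interface backed by Langchain's ChatOpenAI client.
    """
    def get_client(self, model: str, **kwargs: Any) -> ChatOpenAI:
        # ChatOpenAI picks up OPENAI_API_KEY from the environment by default.
        return ChatOpenAI(model=model, **kwargs)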

Go to the llm folder and update __init__.py with the module lookup entry for YourChatClient.

__init__.py
_module_lookup = {
    ...,
    "YourChatClient": "llm.<your_chat_model_name>"
}
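
A lookup table like this is typically resolved lazily through a module-level __getattr__, roughly as follows (a sketch of the common pattern, assumed rather than taken from the Sakhi source):

__init__.py (illustrative resolution logic)
import importlib
from typing import Any

def __getattr__(name: str) -> Any:
    # On first attribute access, import the module that defines the class.
    if name in _module_lookup:
        module = importlib.import_module(_module_lookup[name])
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")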

Modify env_manager.py to import YourChatClient and add a mapping for YourChatClient in the self.indexes dictionary.

env_manager.py
from llm import (
    ...,
    YourChatClient
)

env_manager.py
self.indexes = {
    "llm": {
        "class": {
            ...,
            "<your_chat_model_name>": YourChatClient
        },
        "env_key": "LLM_TYPE"
    }
}

This setup ensures that YourChatClient can be instantiated based on specific environment variables, effectively integrating it into the environment management system. The self.indexes dictionary now includes a mapping where <your_chat_model_name> corresponds to the YourChatClient class, using "LLM_TYPE" as the environment key.
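
Putting it together, the resolution works roughly like this (a minimal sketch of the mechanism, assuming the llm_class export used in the usage example below; not the verbatim Sakhi source):

import os

from llm import YourChatClient

class EnvManager:
    def __init__(self):
        self.indexes = {
            "llm": {
                "class": {"<your_chat_model_name>": YourChatClient},
                "env_key": "LLM_TYPE",
            }
        }

    def resolve_llm(self) -> YourChatClient:
        # Read the configured type from the LLM_TYPE environment variable...
        llm_type = os.environ[self.indexes["llm"]["env_key"]]
        # ...and instantiate the class registered under that name.
        return self.indexes["llm"]["class"][llm_type]()

# llm_class, as imported in the usage example below:
llm_class = EnvManager().resolve_llm()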

Example usage

from env_manager import llm_class

llm = llm_class.get_client(model="<your_llm_model_name>")
result = llm.invoke("Tell me a game using two sticks")
print(result.content)

# The game you can play using two sticks is called Gilli Danda. This game originated in India
# and requires two sticks. The smaller stick should be an oval-shaped wooden piece known as
# Gilli and the longer stick is known as danda. The player needs to use the danda to hit the
# Gilli at the raised end, which then flips.

<LangchainChatModelClient> should be one of the LLM chat model clients mentioned on Langchain's LLM chat model integration page.

For more examples, please refer to the Langchain documentation.
