# Pluggability of LLM Chat Model

In this guide, we will learn how to plug in a new chat model client. The plugin must extend `BaseChatModel` from the Langchain framework, so start with [Langchain's LLM chat model integration page](https://python.langchain.com/v0.1/docs/integrations/chat/) to check whether a client for your model already exists.

If you don't find your LLM chat model client there, refer to [this guide](https://python.langchain.com/v0.1/docs/modules/model_io/chat/custom_chat_model/) to create a custom LLM chat client by extending `BaseChatModel`.

A chat model is a language model that uses chat [messages](https://python.langchain.com/v0.1/docs/modules/model_io/chat/custom_chat_model/#messages) as inputs and returns chat messages as outputs.
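
For example, invoking a chat model with a list of messages (a minimal sketch; `chat_model` stands in for any instantiated `BaseChatModel`):

```python
from langchain_core.messages import HumanMessage

# `chat_model` is assumed to be any instantiated BaseChatModel.
response = chat_model.invoke([HumanMessage(content="Hello!")])  # returns an AIMessage
print(response.content)
```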

Once plugged in, the new chat model client can be used within existing Sakhi API workflows with minimal code changes.

### Interface <a href="#base-chat-model" id="base-chat-model"></a>

The `BaseChatClient` class declares the method below, which every plugin must implement. It returns an instance of [`BaseChatModel`](https://python.langchain.com/v0.1/docs/modules/model_io/chat/custom_chat_model/#base-chat-model) representing the specific chat model client.

| Method/Property | Description                                                                                                            | Required/Optional |
| --------------- | ---------------------------------------------------------------------------------------------------------------------- | ----------------- |
| get\_client     | Takes the LLM model name as a string plus any additional keyword arguments, and returns an instance of the specific chat model client. | Required          |
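
For reference, the interface looks roughly like the sketch below. The import path `llm.base` matches the one used later in this guide, but the exact body is an assumption rather than the actual Sakhi source:

```python
from abc import ABC, abstractmethod
from typing import Any

from langchain_core.language_models import BaseChatModel

class BaseChatClient(ABC):
    """Abstract base for pluggable chat model clients."""

    @abstractmethod
    def get_client(self, model: str, **kwargs: Any) -> BaseChatModel:
        """Return a Langchain chat model instance for the given model name."""
```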

### Implementation <a href="#implementation" id="implementation"></a>

Let's say we create a `YourChatClient` class inheriting from `BaseChatClient`, built on a chat model client provided by Langchain (from the link mentioned above). You must implement the `get_client` method so that it instantiates the underlying chat model with the specified model name and any additional arguments.

Add the necessary environment variables to the `.env` file. `LLM_TYPE` must be defined; any chat-model-specific environment variables (e.g., `API_KEY`) must be set as well.

{% code title="env" %}

```
LLM_TYPE=<your_chat_model_name>
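# Chat-model-specific variables (placeholder; exact names depend on the provider)
API_KEY=<your_api_key>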
```

{% endcode %}

Now go to the `llm` folder and create a file named `your_chat_model_name.py`.

{% code title="your\_chat\_model\_name.py" %}

```python
import os
from typing import Any

from langchain.chat_models import <LangchainChatModelClient>
from llm.base import BaseChatClient

class YourChatClient(BaseChatClient):
    """
    This class provides a chat interface for interacting with <LangchainChatModelClient>.
    """
    def get_client(self, model: str, **kwargs: Any) -> <LangchainChatModelClient>:
        """
        Create and return a <LangchainChatModelClient> instance.

        Args:
            model (str): Name of the chat model to use.
            **kwargs: Additional arguments passed to the <LangchainChatModelClient> constructor.

        Returns:
            An instance of the <LangchainChatModelClient> class.
        """
        return <LangchainChatModelClient>(model=model, **kwargs)
```

{% endcode %}

{% hint style="info" %}
**`<LangchainChatModelClient>`** should be one of the LLM chat model clients mentioned [here](https://python.langchain.com/v0.1/docs/integrations/chat/).
{% endhint %}
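
As a concrete illustration, here is what such a client could look like built on Langchain's `ChatOpenAI` (a hedged example; the class name `OpenAIChatClient` is hypothetical, not one of Sakhi's built-in clients):

```python
from typing import Any

from langchain.chat_models import ChatOpenAI
from llm.base import BaseChatClient

class OpenAIChatClient(BaseChatClient):
    """
    This class provides a chat interface for interacting with ChatOpenAI.
    """
    def get_client(self, model: str, **kwargs: Any) -> ChatOpenAI:
        # ChatOpenAI reads OPENAI_API_KEY from the environment by default.
        return ChatOpenAI(model=model, **kwargs)
```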

Go to the `llm` folder and update `__init__.py` with a module lookup entry for `YourChatClient`.

{% code title="**init**.py" %}

```python
_module_lookup = {
    ...,
    "YourChatClient": "llm.<your_chat_model_name>"
}
```

{% endcode %}

Modify `env_manager.py` to import `YourChatClient` and add a mapping for it in the `self.indexes` dictionary.

{% code title="env\_manager.py" %}

```python
from llm import (
    ...,
    YourChatClient
)
```

{% endcode %}

<pre class="language-python" data-title="env_manager.py"><code class="lang-python"><strong>self.indexes = {
</strong>    "llm": {
        "class": {
            ...,
            "&#x3C;your_chat_model_name>": YourChatClient
        },
        "env_key": "LLM_TYPE"
    }
}
</code></pre>

This setup ensures that `YourChatClient` can be instantiated based on the environment configuration, effectively integrating it into the environment management system. The `self.indexes` dictionary now includes a mapping where `<your_chat_model_name>` corresponds to the `YourChatClient` class, with `"LLM_TYPE"` as the environment key.
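
In other words, the lookup works roughly like this (a sketch of the resolution logic, assuming the structure above; not the actual `env_manager.py` source):

```python
import os

# Read the configured type, look up the matching class, and instantiate it.
llm_type = os.environ["LLM_TYPE"]                    # e.g. "<your_chat_model_name>"
client_cls = self.indexes["llm"]["class"][llm_type]  # -> YourChatClient
llm_class = client_cls()                             # exposes get_client(...)
```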

### Example usage <a href="#example-usage" id="example-usage"></a>

```python
from env_manager import llm_class
```

{% code overflow="wrap" %}

```python
llm = llm_class.get_client(model="<your_llm_model_name>")
result = llm.invoke("Tell me a game using two sticks")
print(result.content)

# Output: The game you can play using two sticks is called Gilli Danda. This game originated in India and requires two sticks. The smaller stick should be an oval-shaped wooden piece known as Gilli and the longer stick is known as danda. The player needs to use the danda to hit the Gilli at the raised end, which then flips.
```

{% endcode %}
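
Most Langchain chat models also support streaming through the same interface; a minimal sketch (support varies by client):

```python
# Stream the response chunk by chunk instead of waiting for the full reply.
for chunk in llm.stream("Tell me a game using two sticks"):
    print(chunk.content, end="", flush=True)
```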

For more examples, please refer to [Langchain](https://python.langchain.com/v0.1/docs/integrations/chat/google_generative_ai/#streaming-and-batching).
