Code walkthrough: generic chat app
Build a chatbot that responds to a user interactively
Prerequisite
Make sure you've followed the steps in LLM application examples to clone our example repo, create your own OctoAI LLM endpoint, and set up your local environment.
Code walkthrough for creating a simple chat app
Below is an explanation of the code in https://github.com/octoml/octoml-llm-qa/blob/main/chat_main.py
First, we import the necessary libraries for logging, environment variables, the OctoAI-hosted LLM, the LangChain library, and LlamaIndex's LLMPredictor:
```python
import logging
import os
import sys

from dotenv import load_dotenv
from OctoAiCloudLLM import OctoAiCloudLLM
from langchain import LLMChain, PromptTemplate
from llama_index import LLMPredictor
```
Next, we set the current directory and load environment variables from a .env file to get credentials for the OctoAI endpoint:
```python
# Get the current file's directory
current_dir = os.path.dirname(os.path.abspath(__file__))

# Change the current working directory
os.chdir(current_dir)

# Load environment variables
load_dotenv()
```
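For reference, the only variable this script reads from the `.env` file is `ENDPOINT_URL` (used in the `ask()` function below). A minimal `.env` might look like the following; the value shown is a placeholder, so substitute the URL of your own OctoAI endpoint:

```
ENDPOINT_URL=<your-octoai-endpoint-url>
```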
Then we define a function to handle exiting the program:
```python
def handle_exit():
    """Print a goodbye message and exit the program."""
    print("\nGoodbye!\n")
    sys.exit(1)
```
Next, we define the main ask() function, which lets the user interactively ask the model questions:
```python
def ask():
    """Interactively ask questions to the language model."""
    print("Loading...")

    # Load necessary values from environment
    endpoint_url = os.getenv("ENDPOINT_URL")

    # Set up the language model and predictor
    llm = OctoAiCloudLLM(endpoint_url=endpoint_url)
    llm_predictor = LLMPredictor(llm=llm)
```
We load the endpoint URL from the environment, instantiate a client for the OctoAI LLM endpoint, and create an LLMPredictor around it. (Note that the LLMPredictor isn't used by the chain below; the chain talks to the llm object directly.)
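Before building a chain, you can sanity-check the endpoint by calling the LLM wrapper directly. This is a minimal sketch, assuming OctoAiCloudLLM implements LangChain's standard LLM interface (which makes the object callable with a raw prompt string) and that ENDPOINT_URL is set in your .env file:

```python
import os

from dotenv import load_dotenv
from OctoAiCloudLLM import OctoAiCloudLLM

load_dotenv()
llm = OctoAiCloudLLM(endpoint_url=os.getenv("ENDPOINT_URL"))

# LangChain LLM objects can be called directly with a prompt string;
# the return value is the model's completion as a plain string.
print(llm("Say hello in one short sentence."))
```

Continuing inside ask(), we next set up the prompt and the chain: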
```python
    # Define a prompt template
    template = "{question}"
    prompt = PromptTemplate(template=template, input_variables=["question"])

    # Set up the language model chain
    llm_chain = LLMChain(prompt=prompt, llm=llm)
```
We define a prompt template with a {question} placeholder, create a PromptTemplate, and construct an LLMChain to generate responses.
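Because the template here is just "{question}", the user's text is sent to the model verbatim. PromptTemplate also lets you wrap the input in fixed instructions. The sketch below is our own illustrative variation, not code from the example repo, reusing the PromptTemplate, LLMChain, and llm objects already in scope:

```python
    # An illustrative template that adds fixed framing around the user's input.
    template = (
        "You are a concise, helpful assistant.\n"
        "Question: {question}\n"
        "Answer:"
    )
    prompt = PromptTemplate(template=template, input_variables=["question"])

    # The chain substitutes the user's text for {question} before calling the LLM.
    llm_chain = LLMChain(prompt=prompt, llm=llm)
```

Returning to the walkthrough, we print an example interaction and then enter the interactive loop: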
```python
    # Provide an example prompt and response
    example_question = "Who is Leonardo da Vinci?"
    print("Example \n\nPrompt:", example_question, "\n\nResponse:", llm_chain.run(example_question))

    try:
        while True:
            # Collect user's prompt
            user_prompt = input("\nPrompt: ")
            if user_prompt.lower() == "exit":
                handle_exit()

            # Generate and print the response
            response = llm_chain.run(user_prompt)
            response = str(response).lstrip("\n")
            print("Response: " + response)
    except KeyboardInterrupt:
        handle_exit()
```
We provide an example prompt/response, then enter a loop to collect user prompts and generate responses until the user exits.
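Putting it together, a session looks roughly like this; the model's answer will vary from run to run, so this transcript is illustrative only:

```
Example 

Prompt: Who is Leonardo da Vinci?

Response: Leonardo da Vinci was an Italian Renaissance polymath, famous for works such as the Mona Lisa...

Prompt: exit

Goodbye!
```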
Finally, we call ask() when the script is run directly:
```python
if __name__ == "__main__":
    ask()
```