Langchain | Shaping the Future of AI Development through Advanced Frameworks — Part 2

Discover how Langchain is transforming the landscape of AI development, making it more accessible and efficient for developers

Amit Kulkarni
AI Advances



In Part 1 of this blog series, we explored generative models and Langchain features such as prompts, templates, and chains.

In Part 2, we will cover the remaining topics:

  • Features of Langchain
    - Schema & Message
    - Message prompt template
    - Memory
    - Agents
  • Conclusion & FAQs
  • References

Features of Langchain

Schema & Message

Langchain’s Schema comprises three message types: HumanMessage, SystemMessage, and AIMessage. HumanMessage represents user input, SystemMessage carries instructions that set the model’s behavior and context, and AIMessage contains the model’s generated outputs. Together, these types structure the conversation between the user and the AI model, ensuring clarity and coherence in communication, streamlining development, and improving the user experience in AI applications.

from langchain.schema import HumanMessage, SystemMessage, AIMessage
from langchain_google_genai import ChatGoogleGenerativeAI

gemini_llm = ChatGoogleGenerativeAI(
    model="gemini-pro", temperature=0.3, convert_system_message_to_human=True
)
gemini_llm.invoke(
    [
        SystemMessage(content="You are a technical AI assistant"),
        HumanMessage(
            content="Should I learn R programming or Python in today's world?"
        ),
    ]
)
OUTPUT:
AIMessage(content="**Factors to Consider:**\n\n* **Career Goals:**\n
* R is widely used in data science, statistics, and machine learning.\n
* Python is versatile and applicable in various fields, including
data science, web development, and software engineering.\n\n*
................
................
syntax.\n\n* **Community Support:**\n * Both R and Python have large and
active communities, providing extensive documentation, tutorials, and
support forums.\n\n**Recommendation:**\n\nIn today's world, **Python**
is the more versatile and widely adopted programming language for
data science and beyond. It offers a comprehensive set of libraries,
a gentle learning curve, and a strong community.\n\n**However, if you
...............
...............
'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT',
'probability': 'NEGLIGIBLE', 'blocked': False}]})

In the above code, we used Langchain's built-in message types to structure and streamline the interaction between the user and the AI. The output looks fine, but can we make it interactive with prompts, as we did earlier in the section on prompt templates? Let's try that in the next section.

Message Prompt Template

In this case, we will be more detailed in the HumanMessage:

  • We specify our requirement and ask for the output in JSON format.
  • We instruct the AI to carry out sentiment analysis, classifying the text as positive, negative, or neutral.

from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_core.messages import SystemMessage
from langchain_google_genai import ChatGoogleGenerativeAI

chat_template = ChatPromptTemplate.from_messages(
    [
        SystemMessage(
            content=(
                "You are an AI assistant that processes the given "
                "information and classifies its sentiment as Positive, "
                "Negative, or Neutral. Return the output in JSON format."
            )
        ),
        HumanMessagePromptTemplate.from_template("{text}"),
    ]
)
chat_message = chat_template.format_messages(text="RCB has never won IPL")
gemini_llm = ChatGoogleGenerativeAI(
    model="gemini-pro", temperature=0.3, convert_system_message_to_human=True
)
gemini_llm.invoke(chat_message)
#-----------------------------------------------------
OUTPUT:
AIMessage(content='```json\n{\n "sentiment": "Negative"\n}\n```', response_metadata={'finish_reason': 'STOP', 'safety_ratings': [{'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability': 'NEGLIGIBLE', 'blocked': False}]})

We see the AI response, but how can we extract specific information from this output? For this, we will use JsonOutputParser as below.

from langchain_core.output_parsers import JsonOutputParser
# define the parser object
parser = JsonOutputParser()
# create a chain
chain = gemini_llm | parser
sentiment = chain.invoke(chat_message)
print(sentiment)
#--------------------------------------------------------------
OUTPUT:
{'sentiment': 'Negative'}
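Under the hood, the parser essentially strips the markdown code fence the model wraps around its JSON and parses what remains. Here is a minimal pure-Python sketch of that behavior (illustrative only, not Langchain's actual implementation):

```python
import json
import re

def parse_json_output(text: str) -> dict:
    """Extract and parse JSON from a model response that may be
    wrapped in a ```json ... ``` markdown fence."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

# The raw content returned by the model in the previous example:
raw = '```json\n{\n  "sentiment": "Negative"\n}\n```'
print(parse_json_output(raw))  # {'sentiment': 'Negative'}
```

This is why the chain `gemini_llm | parser` returns a plain Python dict rather than an AIMessage.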

Prompt Feedback

AI-generated responses may sometimes contain errors or raise ethical concerns, highlighting the challenges of deploying AI technology responsibly. It’s essential to address these issues to maintain ethical standards and promote responsible AI usage. Gemini AI tackles this by providing prompt feedback, helping users monitor and improve the quality of AI-generated content. This approach fosters transparency and accountability, contributing to a more trustworthy and reliable AI ecosystem.

So the prompt “RCB has never won IPL” was validated against a set of safety criteria, and we can see these details in the prompt feedback.

response = gemini_llm.invoke(chat_message)
safety_ratings = response.response_metadata.get('safety_ratings', [])
safety_ratings
#-----------------------------------------------
OUTPUT:
[{'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
'probability': 'NEGLIGIBLE',
'blocked': False},
{'category': 'HARM_CATEGORY_HATE_SPEECH',
'probability': 'NEGLIGIBLE',
'blocked': False},
{'category': 'HARM_CATEGORY_HARASSMENT',
'probability': 'NEGLIGIBLE',
'blocked': False},
{'category': 'HARM_CATEGORY_DANGEROUS_CONTENT',
'probability': 'NEGLIGIBLE',
'blocked': False}]
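If you want to act on this feedback programmatically, e.g. refuse to display a response when any category is blocked or its probability rises above a threshold, a simple check over the ratings list can be written in plain Python (the `is_safe` helper and its `allowed` threshold set are my own illustration, not a Langchain API):

```python
# Safety ratings as returned in response_metadata (from the output above)
safety_ratings = [
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability": "NEGLIGIBLE", "blocked": False},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "probability": "NEGLIGIBLE", "blocked": False},
    {"category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE", "blocked": False},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "probability": "NEGLIGIBLE", "blocked": False},
]

def is_safe(ratings, allowed=("NEGLIGIBLE", "LOW")):
    """Return True only if no category is blocked and every
    probability falls within the allowed set."""
    return all(
        not r["blocked"] and r["probability"] in allowed
        for r in ratings
    )

print(is_safe(safety_ratings))  # True
```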

Memory

Memory in Langchain is the ability of AI models to retain and recall information from past interactions, enhancing the continuity and coherence of interactions. This feature improves user engagement and satisfaction by creating a more natural conversational experience. Memory also allows AI models to adapt to evolving user needs, making them more effective and efficient in serving users’ requirements.

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_google_genai import ChatGoogleGenerativeAI

gemini_llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.3)
conversation = ConversationChain(
    llm=gemini_llm, verbose=True, memory=ConversationBufferMemory()
)
conversation.predict(input="Hi, my name is Amit.")
conversation.predict(input="What is my name?")  # the model recalls the name from memory

We see that the chat history is maintained in the memory and AI can refer to the history and respond accordingly.
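Conceptually, ConversationBufferMemory simply accumulates the full transcript and replays it as context for every new prompt. A stripped-down illustration of that idea (a sketch of the concept, not Langchain's actual class):

```python
class BufferMemory:
    """Minimal sketch of a conversation buffer: store every turn and
    replay the whole history as context for the next prompt."""

    def __init__(self):
        self.turns = []

    def save_context(self, user_input, ai_output):
        # Append both sides of the exchange to the running transcript
        self.turns.append(f"Human: {user_input}")
        self.turns.append(f"AI: {ai_output}")

    def load_history(self):
        # This string is what gets prepended to the next prompt
        return "\n".join(self.turns)

memory = BufferMemory()
memory.save_context("Hi, my name is Amit.", "Nice to meet you, Amit!")
memory.save_context("What is my name?", "Your name is Amit.")
print(memory.load_history())
```

Because the entire transcript travels with each request, the model can answer follow-up questions that depend on earlier turns, at the cost of a prompt that grows with conversation length.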

Agents

Large Language Models (LLMs) have impressive generative capabilities but lack reliable built-in functions such as logic and calculation. Agents fill this gap: equipped with specialized toolkits, they perform specific tasks. For example, a Python Agent uses PythonREPLTool to execute commands. The LLM decides which tool to call and with what input, so flexible chains of calls are driven by the user's request; an Agent can also search for information on Google, fetch the results, and initiate the next step in the sequence. This approach enables efficient, adaptable AI interactions tailored to user needs.

Let’s create an Agent with a basic example.

  1. We will create a REPL tool that uses a Python shell to execute Python commands.
  2. Build an Agent that uses this tool to process the information.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_experimental.utilities import PythonREPL

# Instantiate the Python shell utility the tool will wrap
python_repl = PythonREPL()

repl_tool = Tool(
    name="python_repl",
    description=(
        "A Python shell. Use this to execute python commands. "
        "Input should be a valid python command. If you want "
        "to see the output of a value, you should print it out "
        "with `print(...)`."
    ),
    func=python_repl.run,
)

agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    llm=gemini_llm,
    verbose=True,
    tools=[repl_tool],
)

agent.run("How much interest will I get for 10000 at 5.25 percent for 3 years?")

#----------------------------------------------------------------
OUTPUT:
[Image: agent execution trace] Source: Author

In this case, the AI used a simple interest formula, called the REPL tool to execute the equation and return the result.
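For reference, the calculation the agent delegates to the REPL tool is plain simple interest, I = P × r × t, which we can verify directly in Python:

```python
def simple_interest(principal: float, rate_percent: float, years: float) -> float:
    """Simple interest: I = P * r * t, with the rate given as a percentage."""
    return principal * (rate_percent / 100) * years

print(simple_interest(10000, 5.25, 3))  # 1575.0
```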

You can find the complete code on GitHub.

Conclusion

Google’s Gemini AI and Langchain are a powerful combination in AI development. Gemini-pro’s latest API updates provide developers with enhanced features for building sophisticated AI applications. Langchain’s seamless integration with Gemini enhances its potential, offering a robust framework for harnessing Gemini AI’s full power. As demand for intelligent AI solutions grows, Langchain stands at the forefront, providing developers with the tools to succeed in this rapidly evolving landscape. The possibilities are limitless with these two AI tools.

I hope you liked the article and found it helpful.

Connect with me

Additional reading

Data Science Using Python and R
Generative AI Blogs
Python For Finance
App Development Using Python
GeoSpatial Analysis Using Python

FAQs

Q1: How is Generative AI different from other branches of artificial intelligence?
A1: Generative AI is a branch of AI that focuses on creating original content based on training data, unlike other branches that primarily focus on classification or prediction. It has evolved significantly over the years, from probabilistic models to large language models.

Q2: What are some practical applications of Generative AI combined with frameworks like Langchain?
A2: Generative AI and frameworks like Langchain offer numerous practical applications across industries, including chatbots for natural conversation, sentiment analysis tools for social media sentiment, and language translation services for language barriers. Generative AI can also be used in content generation and creative writing assistance.

Q3: What are the limitations of Generative AI models, and how can developers mitigate them?
A3: Generative AI models have impressive capabilities but also have limitations, such as producing biased or inaccurate outputs, and struggling to generate relevant responses to queries beyond their scope or timeframe. To address these issues, developers can fine-tune models, implement robust evaluation mechanisms, and provide clear disclaimers about model limitations.
