Langchain | Shaping the Future of AI Development through Advanced Frameworks — Part 1

Learn how Langchain empowers developers to create innovative and impactful AI applications

Amit Kulkarni
AI Advances


We will cover the following topics in this blog

PART I

  • Introduction
  • What is Google’s Gemini API?
  • Setting up the API Key
  • Getting started
  • What is Langchain?
  • Features of Langchain
    - Prompts & templates
    - Power of chains

PART II

  • Features of Langchain
    - Schema
    - Messages
    - Memory
    - Agents
  • Conclusion & FAQs
  • References

Introduction

Generative AI is a significant advancement in artificial intelligence, revolutionizing content creation and understanding. The LLMs that power Generative AI can understand context, generate coherent responses, and perform tasks based on user prompts. However, the true potential of Generative AI is realized when paired with frameworks like Langchain. Langchain acts as a catalyst, enabling developers to harness the full capabilities of LLMs like Google’s Gemini AI and OpenAI’s models. This integration provides streamlined access to advanced language processing capabilities, unlocking new possibilities for intelligent and interactive applications that can understand, respond to, and anticipate user needs. This blog explores the transformative power of AI advancements with Google Gemini and their impact on the AI landscape.

What is Google’s Gemini API?

Google’s Gemini AI is a significant advancement in artificial intelligence, particularly in natural language processing (NLP). It provides developers with advanced NLP models trained on vast text data, enabling tasks like text generation, sentiment analysis, and language translation. When paired with Langchain, an innovative AI framework, developers can unlock the full potential of Gemini AI to build sophisticated conversational AI applications.

Setting up the API Key

Google offers users the ability to create an API key through its AI Studio. This key can be securely stored and easily incorporated into code, similar to other AI tools. To do this, we store the API key in a .env file and then load it into the code. Below is an example demonstrating this process.

import os

import google.generativeai as genai
from dotenv import load_dotenv

load_dotenv()  # reads the .env file into environment variables
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
genai.configure(api_key=GOOGLE_API_KEY)

Available models

Let’s take a look at all the models that Gemini has to offer.

for m in genai.list_models():
    if 'generateContent' in m.supported_generation_methods:
        print(m.name)

# ------------------------------------------------------------
OUTPUT:
models/gemini-1.0-pro
models/gemini-1.0-pro-001
models/gemini-1.0-pro-latest
models/gemini-1.0-pro-vision-latest
models/gemini-pro
models/gemini-pro-vision

Getting started

We’ll use the Gemini API to retrieve a list of South-East Asian countries. Using our API key, we’ll connect to the “gemini-pro” model and pose a question to it. The model will then generate a response containing the requested information. Let’s proceed with the code to fetch the desired data.

def get_gemini_response(prompt):
    model = genai.GenerativeModel("gemini-pro")
    response = model.generate_content(prompt)
    return response.text


question = "List the south east asian countries"
response = get_gemini_response(question)
print("Response:\n", response)


OUTPUT:
Response:
* Brunei
* Cambodia
* East Timor
* Indonesia
* Laos
* Malaysia
* Myanmar (Burma)
* Philippines
* Singapore
* Thailand
* Vietnam

That was straightforward. In the next section, we’ll dive into how we can enhance AI responses by providing specific inputs tailored to different contexts. For example, imagine wanting the AI to adopt the persona of a comic character and generate humorous responses. Alternatively, you might need the AI to act as a technical assistant, providing insightful answers to your queries. We can even take it a step further by allowing the AI to make decisions on how to gather information — whether it’s from documents, search engines, or other sources. By incorporating these additional layers of input, we can guide the AI to deliver more relevant and engaging responses. Join us as we explore how to implement these enhancements using Langchain, making interactions with AI even more dynamic and effective.

What is Langchain?

Langchain is a tool that enables developers to access Gemini AI’s advanced features, allowing them to create text, understand sentiment, and more through a user-friendly interface. It is designed to be flexible, allowing developers to customize it to fit their needs. Langchain uses large language models to create interactive apps that converse with users naturally. It also allows developers to add prompts and receive responses from these models, making it easier to create smart, interactive apps. Langchain also has modules compatible with popular language model providers like Hugging Face, OpenAI, and Gemini.

Features of Langchain

Prompts

Langchain’s prompt system provides structured input for AI models like Gemini, allowing developers to formulate specific questions or commands. Customizing prompts allows users to steer AI responses towards desired outcomes, such as generating relevant information or completing tasks. This simplifies the interaction process, enabling developers to optimize AI interactions for various use cases, enhance user experiences, and drive innovation in AI-powered applications.

Let’s try the same query from the previous section and use Langchain to get the response from the API.

from langchain_google_genai import ChatGoogleGenerativeAI

gemini_llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.3)
result = gemini_llm.invoke("List the south east asian countries")
print(result.content)

OUTPUT:
* Brunei
* Cambodia
* East Timor
* Indonesia
* Laos
* Malaysia
* Myanmar (Burma)
* Philippines
* Singapore
* Thailand
* Vietnam

As expected, the output is the same, but the process is different: this time the request went through Langchain’s ChatGoogleGenerativeAI wrapper.

Prompt templates

Langchain’s prompt templates are extensions of prompts, offering predefined structures for interacting with AI models. They provide a standardized format for inputting queries or commands, streamlining the interaction process. Developers can create consistent prompts tailored to their use cases, ensuring clarity and coherence in AI interactions. These templates enhance usability and optimize performance in various applications.

Let’s understand prompt templates with an example. We wish to get a famous fruit from a given country: the country will be the input and the fruit will be the output.

  1. We import the PromptTemplate and define the input_variables and the template.
  2. We format the query by feeding in the country name; the output is a proper sentence, as seen below.

from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate(
    input_variables=["country"],
    template="Which is the famous fruit from {country}",
)

prompt_template.format(country="India")

#----------------------------------------------------------
OUTPUT:
'Which is the famous fruit from India'

This is good but what if we wish to give multiple inputs or have multiple templates? How can that be achieved? We will explore that in the next section with chains.

If you wish to do a similar implementation using OpenAI, consider reading Unleashing the Power of Generative AI And LangChain In Python.

Power of chains

Langchain’s chains are essential for facilitating interactions between users and AI models. They come in sequential, conditional, and conversational chains, each serving a specific purpose. Sequential chains ensure a linear progression of tasks, conditional chains introduce branching logic, and conversational chains enable dynamic exchanges. Chains structure AI interactions, enhance usability, and enable developers to create sophisticated AI applications with ease.

Simple sequential chains

We will extend the previous example of a famous fruit from a given country. We will create multiple templates and chain them together.

  1. country_template: Captures the input from the user, i.e. the country.
  2. country_chain: An LLMChain built from the first template and the LLM model instance.
  3. famous_template: Takes the output of the first chain (the fruit) as its input and asks for the health benefits of that fruit.
  4. famous_chain: An LLMChain built from the second template and the LLM model instance.
  5. SimpleSequentialChain: Brings country_chain and famous_chain together. We run the chain to get the output; in this case, for the input India, the famous fruit came out to be Mango, along with its health benefits.

from langchain.chains import LLMChain

chain = LLMChain(llm=gemini_llm, prompt=prompt_template)
print(chain.run("India"))


country_template = PromptTemplate(
    input_variables=["country"],
    template="Which is the famous fruit from {country}",
)

country_chain = LLMChain(llm=gemini_llm, prompt=country_template)

famous_template = PromptTemplate(
    input_variables=["Fruit"],
    template="What are the health benefits from {Fruit}",
)

famous_chain = LLMChain(llm=gemini_llm, prompt=famous_template)


from langchain.chains import SimpleSequentialChain

chain = SimpleSequentialChain(chains=[country_chain, famous_chain])
chain.run("India")

#------------------------------------------------------------------------

OUTPUT:
Mango
'**Nutritional Value:**\n\n* Rich in vitamins A, C, and E\n*
Good source of potassium, fiber, and antioxidants\n\n**Health
Benefits:**\n\n**1. Improves Eye Health:**\n* Contains high levels of
vitamin A, essential
for maintaining healthy vision.\n* Protects against macular
....................................
....................................
Improves Sleep Quality:**\n* Contains tryptophan, an amino acid that
promotes relaxation and sleep.\n* May help reduce stress and anxiety.

We have successfully chained the templates, processed a sequence of queries, and got the output. Can we make the code and the output cleaner and easier to read? Let’s try that in the next section using SequentialChain.

Sequential chains

This is very similar to the previous section, but here we explicitly specify the inputs and the outputs of each chain.

from langchain.chains import SequentialChain


country_template = PromptTemplate(
    input_variables=["country"],
    template="Which is the famous fruit from {country}",
)
country_chain = LLMChain(
    llm=gemini_llm, prompt=country_template, output_key="fruit"
)
# country_chain.run("India")

benefit_template = PromptTemplate(
    input_variables=["fruit"],
    template="What are the health benefits from {fruit}",
)
benefit_chain = LLMChain(
    llm=gemini_llm, prompt=benefit_template, output_key="benefits"
)
# benefit_chain.run('Mango')


chain = SequentialChain(
    chains=[country_chain, benefit_chain],
    input_variables=["country"],
    output_variables=["fruit", "benefits"],
)

chain({"country": "India"})

#---------------------------------------------------------
OUTPUT:
{'country': 'India',
'fruit': 'Mango',
'benefits': '**Nutritional Value:**\n\nMangoes are rich in vitamins,
minerals, and antioxidants. A 1-cup serving (225g) of fresh mango
provides:\n\n* Calories: 101\n* Carbohydrates: 24.7g\n*
....................
....................
helps produce serotonin, a neurotransmitter associated with
mood regulation.'}

Well, that was much better: we specified the input as country and the outputs as fruit and benefits. The output also shows the sequence of execution, which is much cleaner than in the previous section.

You can find the complete code on GitHub.

Conclusion

Google’s Gemini AI and Langchain work well together to enhance AI development. Gemini-pro’s API provides advanced features for creating sophisticated AI applications, and Langchain’s integration with Gemini builds on those capabilities to maximize its potential. As demand for intelligent AI solutions increases, Langchain offers developers the necessary tools to thrive in this dynamic landscape.

We covered a few of the features in this part and the remaining will be covered in Part 2.

I hope you liked the article and found it helpful.
