How to build an LLM Agent for Sales? Research prospects & create proposals BEFORE discovery calls with LangChain, GPT-4 & Perplexity's API

Read Time: 9 minutes

👋 Introduction

Imagine you run a company (like Ionio) that helps clients build AI-powered SaaS products, and you have many client meetings scheduled. Most of these meetings are first-time interactions where a client brings a project idea or problem statement and wants to discuss how to solve that problem and turn the idea into an actual project with your help. Every meeting comes with a description containing information about the idea and the client.

As a good CEO (Rohan) or founder, you want to gather information about the client, their organization, and their idea before the meeting, so you have context going in, can make a good impression, and can arrive with a solution ready. You'd also like a small project proposal prepared in case you close the deal 😎.

Doing this manually takes a lot of time: you have many client requests and meetings, your workforce is small, and the task itself is repetitive. That's where LangChain and agents come in – they're here to change the game.

By using AI techniques like natural language processing (NLP), LangChain lets us create clever little helpers called agents. These agents can take care of all sorts of tasks, freeing us up to focus on the important stuff.

In this blog, we will first learn about LangChain and agents in detail with code examples, and then we will create our own custom agent for the scenario above. So let's get started! 🚀

We will create an agent which will do the following tasks:

  1. Get information about a prospect & their idea from the internet when a call is booked
  2. Find a possible solution for the idea, and how to convert it into an actual product, from the internet
  3. Create a professional project proposal from the given idea, client information, and solution, including details like tech stack, timeline, project links, etc.
  4. Save the project proposal as a Notion document or Word document

💡How to get code?

You can get all the code discussed here from LLM agent for meeting automation repository.

Here is a sneak peek of our agent 👀

🔗 What is Langchain?

Nowadays, everyone wants to integrate AI models into their existing applications or build products using AI and LLMs, but these models are limited to the data they were trained on: they are trained on historical data, so they don't have access to the latest information or to data from other systems. Different models are trained on different datasets, and there was no easy way to combine their functionality for specific tasks. To solve this problem, Harrison Chase launched a framework called LangChain in October 2022 as an open source project, and it became very popular in a very short time because of its robustness, performance, and features.

LangChain is an open source framework that lets software developers working with AI combine large language models with other external components to develop LLM-powered applications. The goal of LangChain is to link powerful LLMs, such as OpenAI's GPT-3.5 and GPT-4 or Meta's Llama 2, to an array of external data sources so you can create and reap the benefits of natural language processing (NLP) applications.

Let’s create our first chain with LangChain

In LangChain, a chain is a series of components connected to each other that pass input and output along to process data. Chains are built from one or more LLMs. For example, if you want to generate some data with an LLM and then use that output as the input to another LLM, you can create a chain for that purpose. There are several types of chains in LangChain. Some of them are:

  • Sequential Chain: A series of chains that are executed in a specific order. There are two types: SimpleSequentialChain and SequentialChain, where SimpleSequentialChain is the simpler of the two
  • LLM Chain: The most common chain, which comes in different forms to address specific challenges. Some commonly used types of LLM chains include the Stuffing chain, the Map-Reduce chain, and the Refine chain.
  • Translation Chain: Asks an LLM to translate a text from one language to another
  • Chatbot Chain: Creates a chatbot which can answer questions and generate text.

In this blog, we will only talk about SimpleSequentialChain and SequentialChain.

We use SimpleSequentialChain when each step in the chain has a single input and a single output. For example, given a cuisine, we want to generate a restaurant name, and from that name we want to generate 10 dishes to add to the menu.

As we can see, one task depends on the other, which is exactly where our first chain comes in. So let's code it.

First, install the openai and langchain modules:


pip install openai langchain

Let's first set up our LLM; we will use the GPT-3.5 Turbo model for this tutorial. Get your API key from the OpenAI dashboard and set it as an environment variable, since it's recommended to keep it private.


import os
from langchain.llms import OpenAI

OpenAI_LLM = OpenAI(temperature=0.6, api_key=os.environ["OPENAI_KEY"])

Here, temperature controls the creativity of the output: the higher the temperature, the more creative the answer. High values are not recommended for calculation-related outputs, but they are very useful for content writing.

Now that the LLM is set up, let's try it once:


bot = OpenAI_LLM("Say hello if you are working!")
print(bot)

After running it, you will get an output from the LLM saying hello, which means our LLM is working perfectly!

So let's create a chain. First, we need a prompt template for our LLM, which we can build with PromptTemplate from LangChain:


from langchain.prompts import PromptTemplate
# Create first chain
prompt_1 = PromptTemplate.from_template(
    "Give me a {cuisine} restaurant name. Only return name"
)

Here, cuisine is an input from the user, and it will be injected into our prompt dynamically. Now that we have our prompt and LLM ready, we can create a chain using LLMChain:


from langchain.chains import LLMChain
first_chain = LLMChain(llm=OpenAI_LLM,prompt=prompt_1)
# To run this chain, use:
# first_chain.run("Indian")  # passing the cuisine parameter

Our first chain is ready and will give us a restaurant name. Now let's create one more chain that returns a list of food items for the given restaurant name.


# Create second chain
prompt_2 = PromptTemplate.from_template(
    "Give me 10 dish names for restaurant {restaurant}"
)
second_chain = LLMChain(llm=OpenAI_LLM,prompt=prompt_2)

Now let's combine these two chains using SimpleSequentialChain. The order of chains matters in sequential chains.


from langchain.chains import SimpleSequentialChain
# Combine these 2 chains
final_chain = SimpleSequentialChain(chains=[first_chain,second_chain])
response = final_chain.run("Indian")
print(response)

Once we run this code, we get the names of 10 dishes based on the given cuisine and the generated restaurant name!

But what if we want both restaurant name and dishes name in output? 🤔

This is where SequentialChain comes into the picture: unlike SimpleSequentialChain, it allows multiple inputs and outputs. So let's try it!


# Create first chain
prompt_1 = PromptTemplate.from_template(
    "Give me a {cuisine} restaurant name. Only return name"
)
# here we will specify output_key which will be used in next chain as an input
first_chain = LLMChain(llm=OpenAI_LLM,prompt=prompt_1,output_key="restaurant_name")
# Create second chain
prompt_2 = PromptTemplate.from_template(
    "Give me 10 dish names for restaurant {restaurant_name}"
)
second_chain = LLMChain(llm=OpenAI_LLM,prompt=prompt_2,output_key="dishes")

Now let’s combine both chains


from langchain.chains import SequentialChain
# Combine both chains into a sequential chain
final_sequencial_chain = SequentialChain(
    input_variables=["cuisine"],
    # we want both restaurant_name and dishes in the output
    output_variables=["restaurant_name","dishes"],
    chains=[first_chain,second_chain]
)
# A SequentialChain can take multiple inputs, so we pass them as a dict.
final_sequencial_chain({"cuisine":"Mexican"})

After running the code, we get both the restaurant name and the dish list. This is how you can create chains with LangChain according to your use cases.
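SequentialChain returns a plain dict keyed by the input variables and every output_key, so both values can be read directly. A quick illustration with a stand-in result dict (the values here are made up; the real ones depend on the model):

```python
# The chain call returns a dict shaped like this one
# (values below are invented for illustration):
result = {
    "cuisine": "Mexican",
    "restaurant_name": "Casa de Sabores",
    "dishes": "1. Tacos al Pastor\n2. Enchiladas Verdes\n...",
}

# each output_key we declared is a separate entry
print(result["restaurant_name"])
print(result["dishes"])
```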

🤖 What is an Agent?

Langchain Agents are the digital workhorses within the Langchain ecosystem, bringing the magic of automation to life. These agents are essentially intelligent entities, programmed to understand and respond to human language, allowing them to perform a myriad of tasks autonomously.

The main problem with most current LLMs is that they are trained on historical data, which is limited, so they have no information about recent events. For example, if we want the latest news, details about a framework that just launched, or current stock market information, we can't get it from these LLMs because they are not connected to the internet.

To solve this problem, LangChain introduced agents, which can use a set of tools, combine them with your existing LLM's capabilities, and let your LLM work with up-to-date data. For example, with the Wikipedia tool you can make an agent that extracts information from Wikipedia, and with the LLM you can format it properly or write a detailed article about it. Agents can handle a variety of responsibilities, from organizing your schedule and managing tasks to interacting with databases and extracting information from the internet. Find the list of tools provided by LangChain here.

Let’s create our first agent

Now that we know what an agent is, let's build a basic one that fetches the latest information from the internet.

We will use the serpapi tool for this scenario. First, get your SerpApi key from their website, store it in an environment variable, and let's start building our agent.

We will load the tool using the load_tools method, providing the OpenAI LLM we created before. Then we will use the initialize_agent method to initialize our agent with the given tools, LLM, and agent type.


from langchain.agents import AgentType,initialize_agent,load_tools
tools = load_tools(["serpapi"],llm=OpenAI_LLM)
agent = initialize_agent(
    tools,
    llm=OpenAI_LLM,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

Once everything is in place, we can finally run our agent by giving it a prompt like this:


agent.run("What is current value of 1 dollar in rupees?")

And it will return the current value of the dollar from the internet! 🚀

You can also create custom tools for your specific use case; we will learn more about that when we build our meeting management agent.

🌐 Automating meeting workflow

Let's come back to the meeting automation scenario: you have many meetings scheduled, and you need to figure out who should handle each one depending on the company size and the client's idea.

For example, let’s consider this scenario:

  • If the company has fewer than 10 people or is just a startup, the meeting will be with senior developers, and they will need a project proposal document to learn about the idea and the client.
  • If the company is medium to large, you as the CEO will handle the meeting, and you will also need a project proposal document to learn about the idea and the client.

To address this, we will create an agent that takes a meeting description as input and extracts the client name, the idea, the company name and link, and information about the client: their achievements, goals, and background. Once we have all the information about the client, the agent will use another tool to find a possible solution for the given idea or problem. Once we have a solution, we will add all this information to a nicely formatted Notion document.

Workflow

So basically we will create 3 custom tools for our agent here:

  • Platform info extractor: finds the idea, client name, and platform link in the given meeting description
  • Client details extractor: searches the internet for more information about the client, idea, and platform
  • Solution extractor: takes the idea and searches online for a possible solution

Once we have all the information, we will use the Notion API to add it to a Notion document.

Here is how the full workflow of the agent will look:

Let’s create our agent

First of all, let's set up our LLM. We will use the gpt-4 model for this example.


# constants.py stores all our secret keys
import constants
from langchain.chat_models import ChatOpenAI

OPENAI_API_KEY = constants.OPENAI_API_KEY

# initialize LLM (we use ChatOpenAI because we'll later define a `chat` agent)
llm = ChatOpenAI(
    openai_api_key=OPENAI_API_KEY,
    temperature=0.6,
    model_name='gpt-4'
)

gpt-4 is a conversational model, so it keeps previous messages in local memory and sends the full conversation as context with each new prompt so the model understands the current message. But suppose there are more than 100 messages: the context becomes so long that it no longer fits in the model's context window. That's why we send only the last k conversation turns, which we can do with the ConversationBufferWindowMemory class and its k parameter.


from langchain.chains.conversation.memory import ConversationBufferWindowMemory
# initialize conversational memory
conversational_memory = ConversationBufferWindowMemory(
    memory_key='chat_history',
    k=5,
    return_messages=True
)

Now let's create our first tool, the platform info extractor. To create a custom tool in LangChain, you create a class with the following properties:

  • name: the name of the tool
  • description: a description of the tool, so the agent knows when to use it
  • _run(): this method is triggered when the agent uses the tool, so put your business logic here
  • _arun(): this method is triggered when the agent is called asynchronously
  • The class must inherit from BaseTool, which comes from langchain.tools

We will use the Perplexity API to get information from the internet, via their online model pplx-70b-online. You can get your own API key from the Perplexity dashboard.

Here is the code for the platform info extractor tool:


from langchain.tools import BaseTool
import requests
import json
class GetPlatformInfo(BaseTool):
    name = "Platform info extractor"
    description = "use this tool when you have given a description about meeting and you have to find the proposed idea, client name or platform link from given description"

    def _run(self, description):
        # it will take the meeting description from the prompt and pass it into the Perplexity prompt
        url = "https://api.perplexity.ai/chat/completions"
        payload = {
            "model": "pplx-70b-online",
            "messages": [
                {
                    "role": "system",
                    "content": (
                        "You have given a description of meeting and you will have to find out proposed idea, client name and website link mentioned in the given description."
                        "Use client info extractor tool to get more information."
                        "Return it in this format"
                        "Idea: "
                        "Client Info: "
                        "Platform Link: "
                    ),
                },
                {
                    "role": "user",
                    "content": (
                        "if anything is not provided in description then use internet and google search to find out the actual information"
                        "Here is the meeting description " 
                        f"{str(description)}"
                    )
                },
            ]
        }
        headers = {
            "accept": "application/json",
            "content-type": "application/json",
            "Authorization":f"Bearer {constants.PERPLEXITY_API_KEY}"
        }
        response = requests.post(url, json=payload, headers=headers)
        # return the answer text rather than the raw Response object,
        # so the agent receives a readable observation
        return response.json()["choices"][0]["message"]["content"]
    # We don't care about async behaviour for this scenario
    def _arun(self, website):
        raise NotImplementedError("This tool does not support async")


Now let's create our second tool, the client details extractor:


class GetClientDetails(BaseTool):
    name = "Client details extractor"
    description = "use this tool when you have given a platform name, idea or platform link and you want to find more information about CEO of platform and proposed idea"
    def _run(self, website):
        url = "https://api.perplexity.ai/chat/completions"
        payload = {
            "model": "pplx-70b-online",
            "messages": [
                {
                    "role": "system",
                    "content": (
                        "You are artificial intelligence agent which extracts an information by searching about it online"
                        "and returns information in this format if it exists"
                        "Title of website or platform: "
                        "Proposed idea:"
                        "CEO Information:"
                    ),
                },
                {
                    "role": "user",
                    "content": (
                        "I am giving you platform name,idea or platform link and you have to find latest information about" 
                        "the platform by going to given link and return atleast 200 words description about idea and"
                        "atleast 150 words description about CEO, their goal and achievements"
                        "Please try to make it as detailed as you can and always refer to online information"
                        "here is the platform link or name "
                        f"{str(website)}"
                    )
                },
            ]
        }
        headers = {
            "accept": "application/json",
            "content-type": "application/json",
            "Authorization":f"Bearer {constants.PERPLEXITY_API_KEY}"
        }
        response = requests.post(url, json=payload, headers=headers)
        # return the answer text so the agent can read it as an observation
        return response.json()["choices"][0]["message"]["content"]
    
    def _arun(self, website):
        raise NotImplementedError("This tool does not support async")

Now let's create our last tool, the solution extractor, which takes the idea and gives us a possible solution for the given idea or problem statement:


class GetIdeaSolution(BaseTool):
    name = "Solution extractor"
    description = "use this tool when you have given an idea description and information about client and you have to find solution on how we can achieve the given idea"

    def _run(self, description):
        url = "https://api.perplexity.ai/chat/completions"
        payload = {
            "model": "pplx-70b-online",
            "messages": [
                {
                    "role": "system",
                    "content": (
                        "You have given an idea description and you have to find around 3-4 solutions to achieve the given idea using AI-ML"
                        "or webdev technologies. Search online about tech stack, resources, timeline of project and youtube videos and send it with proper formatting and line breaks in this format"
                        "Solution: "
                        "Tech stack: "
                        "Timeline: "
                    ),
                },
                {
                    "role": "user",
                    "content": (
                        "Here is the idea description " 
                        f"{str(description)}"
                    )
                },
            ]
        }
        headers = {
            "accept": "application/json",
            "content-type": "application/json",
            "Authorization":f"Bearer {constants.PERPLEXITY_API_KEY}"
        }
        response = requests.post(url, json=payload, headers=headers)
        # return the answer text so the agent can read it as an observation
        return response.json()["choices"][0]["message"]["content"]
    
    def _arun(self, website):
        raise NotImplementedError("This tool does not support async")
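All three tools repeat the same request boilerplate, so it can be factored into a shared helper. A sketch (the names build_payload and query_perplexity are my own, not part of LangChain or Perplexity's SDK):

```python
import requests

PPLX_URL = "https://api.perplexity.ai/chat/completions"

def build_payload(system_prompt, user_prompt, model="pplx-70b-online"):
    # chat-completions payload shared by all three tools
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def query_perplexity(system_prompt, user_prompt, api_key):
    # POST the payload and return only the model's answer text
    headers = {
        "accept": "application/json",
        "content-type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    response = requests.post(
        PPLX_URL, json=build_payload(system_prompt, user_prompt), headers=headers
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

With this in place, each tool's _run reduces to a single query_perplexity call with its own system and user prompts.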

Now let's initialize our agent and add all the tools to it.


# Creating Agent
from langchain.agents import initialize_agent
tools = [GetClientDetails(),GetPlatformInfo(),GetIdeaSolution()]
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent="zero-shot-react-description",
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    memory=conversational_memory,
    handle_parsing_errors=True
)

Now it’s time to test our agent 👀 !!


para = agent(
      "First find the idea, client name and platform link using Platform info extractor tool and find 150 words information about proposed idea about product, goals and achievements of CEO. Once you got the information about their idea then find how we can help them to build it in 200 words"
      "fetch realtime data from internet everytime"
      "This information is going to be added in project proposal so please write in detail and sections like CEO_Info,idea and solution must be explained in 600 words each."
      "Return your response in python dict format like below and please use proper line breaks: "
      """
      "platform_name": platform_name
      "CEO_Info": ceo_info
      "CEO_Name": ceo_name
      "idea": idea
      "solution": solution
      "tech_stack": tech stack
      "timeline": timeline
      "Platform_link": platform_link
      """
      "Here is the description:"
      "Hey there, myself rohan and we are hosting this meeting to discuss about my idea of making a twitter automation tool with several features like automated tweets, scheduled tweets, tweet improvement guides etc"
      )
output = json.loads(para["output"])
print(output)

After running the above code, you can watch the thinking and observation process of our agent as it calls each tool one by one and returns the information in the specified format.
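One caveat: json.loads(para["output"]) assumes the agent returned strictly valid JSON, but models often wrap answers in code fences or extra prose. A defensive parser avoids a crash (a sketch; parse_agent_output is a helper name of my own):

```python
import json
import re

def parse_agent_output(text: str) -> dict:
    """Best-effort parse of the agent's reply into a dict.

    Strips markdown code fences if present, then tries json.loads;
    falls back to extracting the first {...} block.
    """
    cleaned = re.sub(
        r"^```(?:json|python)?\s*|```$", "", text.strip(), flags=re.MULTILINE
    ).strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", cleaned, flags=re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise
```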

Now let's use gpt-4 to format these details so they can be added to Notion as a project proposal (this step is optional).


from openai import OpenAI
client = OpenAI(api_key=constants.OPENAI_API_KEY)
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": f"""
                I am giving you some data in python dict format and you will have to add more information by searching about it online in given data so that it can be added in project proposal but don't add any fake information. Once you got the information then return the data in the same format.
                Try to add more information by yourself too to make it more detailed and make it around 1000 words. Don't return anything except this dictionary and also please use proper line breaks in every paragraph of each section of dict.
                Here is the information about fields in the given dict: 
                platform_name: Name of platform
                CEO_Info: Background about CEO and information about CEO
                CEO_Name: Name of CEO
                idea: project idea
                solution: solution on how we can solve the given problem statement and how to achieve the given idea
                tech_stack: tech stack used to build the solution
                timeline: timeline for the project
                Platform_link: link to the platform
                Here is the data: 
                {para["output"]}
            """,
        }
    ],
    model="gpt-4",
    temperature=0.7
)
output = chat_completion.choices[0].message.content
output = json.loads(output)

Adding data to notion

Now we have all the information about the client, their idea, and their organization, so let's format it into a project proposal and add it to Notion as a document. We will use the notion_client Python module.

Before coding, we need to set up a Notion integration with our workspace: go to the Notion integrations dashboard, click Create integration, fill in the required information, and submit. You will get your Notion API key.

Then open your Notion workspace and create an empty page with any title of your choice; our documents will be added under this page, so name it accordingly. Next, connect the integration we just created to this page: click the 3-dot menu on the right side of the page, go to "Connect to", and select your integration. Now we can start coding!

Start by installing the module


pip install notion_client

Initialize the client


# Adding data into notion
from notion_client import Client

notion = Client(auth=constants.NOTION_API_KEY)

Now we need the page_id of the page where we will add these documents. To get the page id of any page, click the 3-dot icon on the right side, click "Copy link", and take the last string from that link.

For example if your link looks like this:


https://www.notion.so/workspace/text-<32_character_long_string>?pvs=2

Take the 32-character string from that link and convert it into the 8-4-4-4-12 format shown below; that is your page id.


14ffc7f5-xxxx-xxx-xxxx-e6f24dfaxxxx
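This reformatting is easy to script. A small sketch (format_page_id is my own helper name; the grouping follows the standard UUID layout, and the id in the comment is invented for illustration):

```python
def format_page_id(raw_id: str) -> str:
    """Convert a 32-character Notion page id into 8-4-4-4-12 UUID format."""
    if len(raw_id) != 32:
        raise ValueError("expected a 32-character page id")
    # slice boundaries for the 8-4-4-4-12 grouping
    parts = (raw_id[:8], raw_id[8:12], raw_id[12:16], raw_id[16:20], raw_id[20:])
    return "-".join(parts)

# e.g. format_page_id("14ffc7f5aaaabbbbccccdddddddddddd")
# -> "14ffc7f5-aaaa-bbbb-cccc-dddddddddddd"
```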

Now let’s write code to add all our data into notion document:


from pprint import pprint
parent = {"type": "page_id","page_id": constants.NOTION_PAGE_ID} # your page id in 8-4-4-4-12 format
properties = {
    "title": {
        "type": "title",
        "title": [{ "type": "text", "text": { "content": f"Meeting with {output['CEO_Name']} about {output['platform_name']}" } }]
    },
}
children = [
      {
        "object": "block",
        "type": "heading_2",
        "heading_2": {
          "rich_text": [{ "type": "text", "text": { "content": "🚀 Platform Name" } }]
        }
      },
      {
        "object": "block",
        "type": "paragraph",
        "paragraph": {
          "rich_text": [{ "type": "text", "text": { "content": output['platform_name'] } }]
        }
      },
      {
        "object": "block",
        "type": "heading_2",
        "heading_2": {
          "rich_text": [{ "type": "text", "text": { "content": "📝 Idea Description" } }]
        }
      },
      {
        "object": "block",
        "type": "paragraph",
        "paragraph": {
          "rich_text": [{ "type": "text", "text": { "content": output['idea'] } }]
        }
      },
      {
        "object": "block",
        "type": "heading_2",
        "heading_2": {
          "rich_text": [{ "type": "text", "text": { "content": "💡 How we can help?" } }]
        }
      },
      {
        "object": "block",
        "type": "paragraph",
        "paragraph": {
          "rich_text": [{ "type": "text", "text": { "content": output['solution'] } }]
        }
      },
      {
        "object": "block",
        "type": "heading_3",
        "heading_3": {
          "rich_text": [{ "type": "text", "text": { "content": "⚒️ What tech stack we can use?" } }]
        }
      },
      {
        "object": "block",
        "type": "paragraph",
        "paragraph": {
          "rich_text": [{ "type": "text", "text": { "content": output['tech_stack'] } }]
        }
      },
      {
        "object": "block",
        "type": "heading_3",
        "heading_3": {
          "rich_text": [{ "type": "text", "text": { "content": "⌛ What is timeline of project? " } }]
        }
      },
      {
        "object": "block",
        "type": "paragraph",
        "paragraph": {
          "rich_text": [{ "type": "text", "text": { "content": output['timeline'] } }]
        }
      },
      {
        "object": "block",
        "type": "heading_3",
        "heading_3": {
          "rich_text": [{ "type": "text", "text": { "content": "🤔 Info about CEO" } }]
        }
      },
      {
        "object": "block",
        "type": "paragraph",
        "paragraph": {
          "rich_text": [{ "type": "text", "text": { "content": output['CEO_Info'] } }]
        }
      },
      {
        "object": "block",
        "type": "heading_2",
        "heading_2": {
          "rich_text": [{ "type": "text", "text": { "content": "🔗 Platform Link " } }]
        }
      },
      {
        "object": "block",
        "type": "paragraph",
        "paragraph": {
          "rich_text": [{ "type": "text", "text": { "content": output['Platform_link'] } }]
        }
      },
    ]
create_page_response = notion.pages.create(
    parent=parent,properties=properties,children=children
)
pprint(create_page_response)

After running the above code, you will see a newly created document with a title something like this:

And inside that document, you will see a well-formatted project proposal which we just created using our agent!

Still, the Notion document doesn't look like a professional project proposal, does it? 💀

Let’s try something else 🤔

Creating proposal as a word document

The main problem with Notion formatting is that everything must be converted into Notion block objects, as you saw in the code above, and you can't hand-write an object for every single paragraph, list, or heading to make the document look good. So instead of using Notion, we will have our agent output markdown and then convert that text into an actual Word document.
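That said, if you do want to stay in Notion, the block objects can at least be generated programmatically rather than hand-written. A minimal sketch (markdown_to_notion_blocks is my own helper, handling only #/##/### headings and plain paragraphs):

```python
def markdown_to_notion_blocks(markdown: str) -> list:
    """Convert simple markdown (headings and paragraphs) into Notion block objects."""
    blocks = []
    for line in markdown.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        if line.startswith("###"):
            kind, text = "heading_3", line.lstrip("#").strip()
        elif line.startswith("##"):
            kind, text = "heading_2", line.lstrip("#").strip()
        elif line.startswith("#"):
            kind, text = "heading_1", line.lstrip("#").strip()
        else:
            kind, text = "paragraph", line
        blocks.append({
            "object": "block",
            "type": kind,
            kind: {"rich_text": [{"type": "text", "text": {"content": text}}]},
        })
    return blocks
```

You could then pass the returned list as the children argument to notion.pages.create, as we did earlier.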

Let's change the part where we made our last OpenAI call for formatting. Instead of a Python dictionary, we will tell the model to respond in markdown.

Make these changes in your previous code:


from openai import OpenAI
client = OpenAI(api_key=constants.OPENAI_API_KEY)
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": f"""
                I am giving you some data in python dict format and you will have to add more information in it and convert it in markdown file with below format
                Don't use anything except headings and paragraphs in markdown
                # Project name
                project name
                # Who is the client?
                information about client
                # What is the idea?
                information about idea
                # How can we help?
                detailed information about solution
                # Tech stack
                information about tech stack
                # Timeline
                information about timeline
                Try to add more information by yourself too to make it more detailed and make it around 800 words. Don't return anything except markdown and use proper line breaks.
                Here is the information about fields in the given dict: 
                platform_name: Name of platform
                CEO_Info: Background about CEO and information about CEO
                CEO_Name: Name of CEO
                idea: project idea
                solution: solution on how we can solve the given problem statement and how to achieve the given idea
                tech_stack: tech stack used to build the solution
                timeline: timeline for the project
                Platform_link: link to the platform
                Here is the data: 
                {para["output"]}
            """,
        }
    ],
    model="gpt-4",
    temperature=0.7
)
output = chat_completion.choices[0].message.content
print(output)

And now we have our project proposal as markdown text, so it's time to convert it into a Word document. We can do that easily with pypandoc, a Python wrapper for pandoc.

Let’s install it


pip install pypandoc

Now we just provide our markdown text and it will convert it into a Word document. So let's try it out!


import pypandoc
output = pypandoc.convert_text(output, 'docx', format='md', outputfile="output.docx")

After running the above code, you will see a newly created file called output.docx, and now we have a decent-looking project proposal like this:

And now you can use this document as a reference before going to the meeting and make a good impression to get that deal 😎🤝.

💡How to get code?

You can get all the code discussed here from LLM agent for meeting automation repository.

⚒️ Challenges

Let’s discuss the challenges which I faced while making this agent 👀

Perplexity issues

The first challenge I faced was with the Perplexity API: it returned an HTTP 400 error when I tried to call Perplexity through the Python module specified in this section. Weird – it simply didn't work with the OpenAI module.

To solve this, I called the Perplexity API directly, and after several tries it worked. Skip the trouble & find Perplexity's official documentation here.

Getting proper response

After writing the code and creating the custom tools, I was able to get a response, but the result wasn't very good and the information it returned wasn't detailed enough. So I increased the temperature to 0.6 to make it more creative and added one more step to format the agent's data using OpenAI.

Making it professional

The last challenge was producing a professional-looking project proposal, because if it doesn't look like a proposal, what's the purpose of it? 🤷‍♂️

The Notion document didn't look very professional because, to format content with the Notion API, you must convert everything into Notion objects (paragraphs, headings, lists, code blocks, etc.). Instead of guessing the structure manually, I even tried to have an agent build the Notion object for a given response, but GPT-4's output limit is only 4096 tokens and the Notion object became very large ☹️.

So I decided to get the response as properly formatted markdown and convert it into a Word document, which looks decent.

The current code and prompts can still be improved, and you can play around with them by grabbing the code from our GitHub repository.

📝 Conclusion

As we've explored in this blog, the potential of Langchain and Langchain Agents is vast. By incorporating these intelligent assistants into your meeting workflows, you can streamline processes, reduce administrative overhead, and ultimately, empower your team to focus on what truly matters – driving innovation and achieving goals.

Here we only covered the meeting workflow, but there are many other built-in tools available in LangChain, and you can also create your own tools for your specific use case, which can save you a lot of manual work and time.

So, whether you're a small team looking to optimize your meeting routines or a large organization seeking to revolutionize your workflow, LangChain agents are well worth considering for automating your tasks with artificial intelligence.

If you are looking to build custom AI agents to automate your workflows then kindly book a call with us and we will be happy to convert your ideas into reality.

Thanks for reading 😄

Behind the Blog 👀
Shivam Danawale
Writer

Shivam is an AI Researcher & Full Stack Engineer at Ionio.

Rohan Sawant
Editor

Rohan is the Founder & CEO of Ionio. I make everyone write all these nice articles... 🥵