ReAct + Gemini: A prompting method for demonstrating reasoning and acting in LLMs


This notebook is a minimal implementation of ReAct: Synergizing Reasoning and Acting in Language Models with the Google gemini-pro model.

This notebook demonstrates the use of gemini-pro to generate reasoning traces and task-specific actions by leveraging a Few-shot ReAct Prompt. In this walkthrough, you will learn how to:

  1. Set up your development environment and API access to use Gemini.
  2. Prompt Gemini with ReAct.
  3. Use the newly prompted model for multi-turn conversations (chat).
  4. Learn how ReAct overcomes hallucination and error propagation by seeking external ground truth via the Wikipedia API.
  5. Have conversations with a deployed ReAct-prompted Gemini bot 🤖


ReAct (Yao et al., 2022) is a prompting method that allows language models to trace the reasoning steps involved in answering a user's query. This improves human interpretability and trustworthiness. ReAct-prompted models generate Thought-Action-Observation triplets for every iteration.

Instead of instructing the model to "Explain step-by-step", using ReAct encourages models to seek factual information by inducing Thought and Action steps, and using external tools to provide Observation steps.

It works by adding structure to the prompt that corresponds to specific actions.

  - Search[entity]: Search for a specific `entity` (in this guide, it will query the Wikipedia API).
  - Lookup[phrase]: Scan the `Search` results for a specific, exact-match phrase (on a Wikipedia page).
  - Finish[answer]: Return the `answer` to the user.
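Concretely, an action string like `Search[High Plains]` can be parsed and routed to a matching tool function. The sketch below is illustrative only — the tool bodies are toy stand-ins, not the Wikipedia-backed implementations built later in this notebook:

import re

# Toy tool implementations; the real versions query the Wikipedia API.
def search(entity: str) -> str:
    return f"summary for {entity}"

def lookup(phrase: str) -> str:
    return f"sentence containing {phrase}"

def finish(answer: str) -> str:
    return answer

TOOLS = {"Search": search, "Lookup": lookup, "Finish": finish}

def dispatch(action: str) -> str:
    """Parse an action like 'Search[High Plains]' and call the matching tool."""
    match = re.fullmatch(r"(\w+)\[(.*)\]", action)
    if match is None:
        raise ValueError(f"Malformed action: {action!r}")
    name, argument =, 2)
    return TOOLS[name](argument)

For example, `dispatch("Search[High Plains]")` routes to the `search` tool with the argument `"High Plains"`.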


Install the Python SDK

The Python SDK for the Gemini API is contained in the google-generativeai package. Install the dependency using pip:

You will also need to install the Wikipedia API.

pip install -q google-generativeai
pip install -q wikipedia

Import packages

Import the necessary packages.

import re
import os

import wikipedia
from wikipedia.exceptions import DisambiguationError, PageError

import google.generativeai as genai

Set up your API key

Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.

Get an API key

In Colab, add the key to the secrets manager under the "🔑" in the left panel. Give it the name GOOGLE_API_KEY.

Once you have the API key, pass it to the SDK. You can do this in two ways:

  • Put the key in the GOOGLE_API_KEY environment variable (the SDK will automatically pick it up from there).
  • Pass the key to genai.configure(api_key=...)
    # read the key from the Colab secrets manager
    from google.colab import userdata
    GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
except ImportError as e:
    # not running in Colab; fall back to the environment variable
    GOOGLE_API_KEY = os.environ['GOOGLE_API_KEY']

genai.configure(api_key=GOOGLE_API_KEY)

The ReAct prompt

Here, you will be working with the original ReAct prompt with a few minor adjustments.

model_instructions = """Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, Observation is understanding relevant information from an Action's output and Action can be of three types:
(1) <search>entity</search>, which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search and you can try to search the information from those topics.
(2) <lookup>keyword</lookup>, which returns the next sentence containing keyword in the current context. This only does exact matches, so keep your searches short.
(3) <finish>answer</finish>, which returns the answer and finishes the task.

Few-shot prompting to enable in-context learning with Gemini

While large language models show good understanding of the instructions they are prompted with, they still may perform poorly on complex tasks in a zero-shot setting. Hence, you will now provide a few examples along with your prompt to steer the model's output according to your needs. This in-context learning improves the model's performance significantly.

examples = """
Here are some examples.

What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?

Thought 1
I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of the area.

Action 1
<search>Colorado orogeny</search>

Observation 1
The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.

Thought 2
It does not mention the eastern sector. So I need to look up eastern sector.

Action 2
<lookup>eastern sector</lookup>

Observation 2
The eastern sector extends into the High Plains and is called the Central Plains orogeny.

Thought 3
The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.

Action 3
<search>High Plains</search>

Observation 3
High Plains refers to one of two distinct land regions

Thought 4
I need to instead search High Plains (United States).

Action 4
<search>High Plains (United States)</search>

Observation 4
The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130m).

Thought 5
High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.

Action 5
<finish>1,800 to 7,000 ft</finish>

Musician and satirist Allie Goertz wrote a song about the "The Simpsons" character Milhouse, who Matt Groening named after who?

Thought 1
The question simplifies to "The Simpsons" character Milhouse is named after who. I only need to search Milhouse and find who it is named after.

Action 1
Observation 1
Milhouse Mussolini Van Houten is a recurring character in the Fox animated television series The Simpsons voiced by Pamela Hayden and created by Matt Groening.

Thought 2
The paragraph does not tell who Milhouse is named after, maybe I can look up "named after".

Action 2
<lookup>named after</lookup>

Observation 2
Milhouse was named after U.S. president Richard Nixon, whose middle name was Milhous.

Thought 3
Milhouse was named after U.S. president Richard Nixon, so the answer is Richard Nixon.

Action 3
<finish>Richard Nixon</finish>

Which documentary is about Finnish rock groups, Adam Clayton Powell or The Saimaa Gesture?

Thought 1
I need to search Adam Clayton Powell and The Saimaa Gesture, and find which documentary is about Finnish rock groups.

Action 1
<search>Adam Clayton Powell</search>

Observation 1
Could not find [Adam Clayton Powell]. Similar: [’Adam Clayton Powell III’, ’Seventh Avenue (Manhattan)’, ’Adam Clayton Powell Jr. State Office Building’, ’Isabel Washington Powell’, ’Adam Powell’, ’Adam Clayton Powell (film)’, ’Giancarlo Esposito’].

Thought 2
To find the documentary, I can search Adam Clayton Powell (film).

Action 2
<search>Adam Clayton Powell (film)</search>

Observation 2
Adam Clayton Powell is a 1989 American documentary film directed by Richard Kilberg. The film is about the rise and fall of influential African-American politician Adam Clayton Powell Jr.[3][4] It was later aired as part of the PBS series The American Experience.

Thought 3
Adam Clayton Powell (film) is a documentary about an African-American politician, not Finnish rock groups. So the documentary about Finnish rock groups must instead be The Saimaa Gesture.

Action 3
<finish>The Saimaa Gesture</finish>

What profession does Nicholas Ray and Elia Kazan have in common?

Thought 1
I need to search Nicholas Ray and Elia Kazan, find their professions, then find the profession they have in common.

Action 1
<search>Nicholas Ray</search>

Observation 1
Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16, 1979) was an American film director, screenwriter, and actor best known for the 1955 film Rebel Without a Cause.

Thought 2
Professions of Nicholas Ray are director, screenwriter, and actor. I need to search Elia Kazan next and find his professions.

Action 2
<search>Elia Kazan</search>

Observation 2
Elia Kazan was an American film and theatre director, producer, screenwriter and actor.

Thought 3
Professions of Elia Kazan are director, producer, screenwriter, and actor. So profession Nicholas Ray and Elia Kazan have in common is director, screenwriter, and actor.

Action 3
<finish>director, screenwriter, actor</finish>

Which magazine was started first Arthur’s Magazine or First for Women?

Thought 1
I need to search Arthur’s Magazine and First for Women, and find which was started first.

Action 1
<search>Arthur’s Magazine</search>

Observation 1
Arthur’s Magazine (1844-1846) was an American literary periodical published in Philadelphia in the 19th century.

Thought 2
Arthur’s Magazine was started in 1844. I need to search First for Women next.

Action 2
<search>First for Women</search>

Observation 2
First for Women is a woman’s magazine published by Bauer Media Group in the USA.[1] The magazine was started in 1989.

Thought 3
First for Women was started in 1989. 1844 (Arthur’s Magazine) < 1989 (First for Women), so Arthur’s Magazine was started first.

Action 3
<finish>Arthur’s Magazine</finish>

Were Pavel Urysohn and Leonid Levin known for the same type of work?

Thought 1
I need to search Pavel Urysohn and Leonid Levin, find their types of work, then find if they are the same.

Action 1
<search>Pavel Urysohn</search>

Observation 1
Pavel Samuilovich Urysohn (February 3, 1898 - August 17, 1924) was a Soviet mathematician who is best known for his contributions in dimension theory.

Thought 2
Pavel Urysohn is a mathematician. I need to search Leonid Levin next and find its type of work.

Action 2
<search>Leonid Levin</search>

Observation 2
Leonid Anatolievich Levin is a Soviet-American mathematician and computer scientist.

Thought 3
Leonid Levin is a mathematician and computer scientist. So Pavel Urysohn and Leonid Levin have the same type of work.

Action 3

Copy the instructions along with the examples into a file called model_instructions.txt.

ReAct_prompt = model_instructions + examples
with open('model_instructions.txt', 'w') as f:

Using ReAct with Gemini


You will now build a class to facilitate multi-turn chat with the ReAct-prompted Gemini model.

class ReAct:
  def __init__(self, model: str, ReAct_prompt: str | os.PathLike):
    """Prepares Gemini to follow a `Few-shot ReAct prompt` by imitating
    the `function calling` technique to generate both reasoning traces and
    task-specific actions in an interleaved manner.

        model: Name of the model.
        ReAct_prompt: ReAct prompt OR a path to the ReAct prompt file.
    self.model = genai.GenerativeModel(model) = self.model.start_chat(history=[])
    self.should_continue_prompting = True
    self._search_history: list[str] = []
    self._search_urls: list[str] = []

      # try to read the file
      with open(ReAct_prompt, 'r') as f:
        self._prompt =
    except FileNotFoundError:
      # assume the parameter is the prompt itself rather than a path to a prompt file
      self._prompt = ReAct_prompt

  def prompt(self):
    return self._prompt

  def add_method(cls, func):
    """Attaches `func` to the class so the ReAct tools can be defined separately."""
    setattr(cls, func.__name__, func)

  def clean(text: str):
    """Helper function for responses: replaces newlines with spaces."""
    text = text.replace("\n", " ")
    return text

Define tools

As instructed by the prompt, the model will generate Thought-Action-Observation traces, where every Action trace could be one of the following tokens:

  1. <search>: Perform a Wikipedia search via the external API.
  2. <lookup>: Look up specific information on a page with the Wikipedia API.
  3. <finish>: Stop the execution of the model and return the answer.

If any of these Action tokens are returned, you want to call the relevant tool and update the prompt with an Observation.
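To see how an action token can be recovered: when the closing tags are used as stop sequences, generation halts before the closing tag is emitted, and the action name and argument can be extracted with a regex. A minimal, self-contained illustration (the response string here is invented for the example):

import re

# With stop_sequences=['</search>', '</lookup>', '</finish>'], generation halts
# before the closing tag, so a model response ends like this:
response_cmd = ("Thought 1\nI need to search Colorado orogeny.\n\n"
                "Action 1\n<search>Colorado orogeny")

# the action name is whatever sits between the last pair of angle brackets
cmd = re.findall(r'<(.*)>', response_cmd)[-1]
# everything after the opening tag is the argument
query = response_cmd.split(f'<{cmd}>')[-1].strip()

print(cmd, query)  # search Colorado orogeny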

Define a method to perform Wikipedia searches

def search(self, query: str):
    """Performs a search on `query` via the Wikipedia API and returns its summary.

        query: Search parameter to query the Wikipedia API with.

        observation: Summary of the Wikipedia search for `query` if found, else
        similar search results.
    observation = None
    query = query.strip()
      # try to get the summary for the requested `query` from Wikipedia
      observation = wikipedia.summary(query, sentences=4, auto_suggest=False)
      wiki_url =, auto_suggest=False).url
      observation = self.clean(observation)

      # if successful, return the first 2-3 sentences from the summary as the model's context
      observation = self.model.generate_content(f'Return the first 2 or 3 \
          sentences from the following text: {observation}')
      observation = observation.text

      # keep track of the model's search history
      print(f"Information Source: {wiki_url}")

    # if the page is ambiguous or does not exist, return similar search phrases for the model's context
    except (DisambiguationError, PageError) as e:
      observation = f'Could not find ["{query}"].'
      # get a list of similar search topics
      search_results =, results=5)
      observation += f' Similar: {search_results}. You should search for one of those instead.'

    return observation
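Note that the method above spends an extra Gemini call just to trim the summary to 2-3 sentences. As an alternative sketch (not what this notebook does), the same trimming could be done deterministically without a model call — assuming a naive split on `'. '`, which will mis-split abbreviations like "U.S.":

def first_sentences(text: str, n: int = 3) -> str:
    """Return roughly the first `n` sentences of `text` without an LLM call.

    Naive: splits on '. ', which is usually good enough for Wikipedia
    summaries but will break on abbreviations.
    sentences = text.split('. ')
    trimmed = '. '.join(sentences[:n])
    # restore the trailing period if we actually cut something off
    return trimmed + ('.' if len(sentences) > n else '')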

Look up

Look for a specific phrase on the Wikipedia page.

def lookup(self, phrase: str, context_length=200):
    """Searches for `phrase` in the latest searched Wikipedia page and returns
    the surrounding context, whose size is controlled by the `context_length`

        phrase: Lookup phrase to search for within the page. Generally
        refers to some specific detail of a topic.

        context_length: Number of characters of context to include on
        either side of `phrase` while looking for the answer.

        result: Context related to the `phrase` within the page.
    # get the last searched Wikipedia page and find `phrase` in it
    page =[-1], auto_suggest=False)
    page = page.content
    page = self.clean(page)
    start_index = page.find(phrase)

    # extract the surrounding text using the context length defined
    result = page[max(0, start_index - context_length):start_index + len(phrase) + context_length]
    print(f"Information Source: {self._search_urls[-1]}")
    return result
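The slicing at the end of `lookup` is a simple character window around the phrase. Isolated as a standalone helper (illustrative only, not part of the notebook's class), it behaves like this:

def window(page: str, phrase: str, context_length: int = 200) -> str:
    """Return `phrase` plus up to `context_length` characters on either side."""
    start = page.find(phrase)
    if start == -1:
        # phrase not on the page
        return ""
    return page[max(0, start - context_length):start + len(phrase) + context_length]

# With a small context_length, the window is clipped at the page boundaries:
print(window("abcdefgXYZhijklmn", "XYZ", context_length=5))  # cdefgXYZhijkl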



Instruct the pipeline to terminate its execution.

def finish(self, _):
  """Finishes the conversation upon encountering the <finish> token by
  setting the `self.should_continue_prompting` flag to `False`."""
  self.should_continue_prompting = False
  print(f"Information Sources: {self._search_urls}")

Integrate tools

Now that you are all set with function definitions, the next step is to instruct the model to interrupt its execution when it emits any of the action tokens. You will use the stop_sequences parameter from the genai.GenerativeModel.GenerationConfig class to instruct the model when to stop. Upon encountering an action token, your system will extract the last action and argument, then call the appropriate tool.

The response from the function will be added to the prompt to continue the process.
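The overall control flow can be sketched without any API access. Here a scripted stand-in replaces Gemini's responses — every name and string below is illustrative, standing in for real model output that was cut off at a stop sequence:

import re

# Scripted turns standing in for streamed Gemini responses that were halted
# at a stop sequence ('</search>', '</finish>', ...).
scripted_turns = iter([
    "Thought 1\nI should search the topic.\n\nAction 1\n<search>High Plains",
    "Thought 2\nFound it.\n\nAction 2\n<finish>1,800 to 7,000 ft",

# toy tools; the real ones query Wikipedia
tools = {
    "search": lambda q: f"(summary of {q})",
    "finish": lambda a: a,

answer = None
for _ in range(8):                      # max_calls cap, as in the paper
    response = next(scripted_turns)
    # extract the action name and its argument from the truncated response
    cmd = re.findall(r"<(.*)>", response)[-1]
    query = response.split(f"<{cmd}>")[-1].strip()
    observation = tools[cmd](query)
    if cmd == "finish":
        answer = observation
    # in the real loop, `observation` would be fed back as the next prompt

print(answer)  # 1,800 to 7,000 ft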

def __call__(self, user_question, max_calls: int=8, **generation_kwargs):
  """Starts a multi-turn conversation with the chat model using function calling.

      max_calls: Max calls made to the model to get the final answer.

      generation_kwargs: Same as genai.GenerativeModel.GenerationConfig
              candidate_count: (int | None) = None,
              stop_sequences: (Iterable[str] | None) = None,
              max_output_tokens: (int | None) = None,
              temperature: (float | None) = None,
              top_p: (float | None) = None,
              top_k: (int | None) = None

      AssertionError: if max_calls is not between 1 and 8

  # hyperparameter fine-tuned according to the paper
  assert 0 < max_calls <= 8, "max_calls must be between 1 and 8"

  # send the full ReAct prompt on the first turn; after that, only the new message
  if len( == 0:
    model_prompt = self.prompt.format(question=user_question)
    model_prompt = user_question

  # stop_sequences for the model to imitate function calling
  callable_entities = ['</search>', '</lookup>', '</finish>']

  generation_kwargs.update({'stop_sequences': callable_entities})

  self.should_continue_prompting = True
  for idx in range(max_calls):

    self.response =[model_prompt],
              generation_config=generation_kwargs, stream=True)

    for chunk in self.response:
      print(chunk.text, end=' ')

    response_cmd =[-1].parts[-1].text

      # regex to extract the <function name written between angle brackets>
      cmd = re.findall(r'<(.*)>', response_cmd)[-1]
      # extract the param
      query = response_cmd.split(f'<{cmd}>')[-1].strip()
      # call the appropriate function
      observation = self.__getattribute__(cmd)(query)

      if not self.should_continue_prompting:

      stream_message = f"\nObservation {idx + 1}\n{observation}"
      # send the function's output as the user's response
      model_prompt = f"<{cmd}>{query}</{cmd}>'s Output: {stream_message}"

    except (IndexError, AttributeError) as e:
      # the model did not emit a well-formed action token; nudge it back on track
      model_prompt = "Please try to generate thought-action-observation traces \
          as instructed by the prompt."

Test ReAct prompted Gemini model

gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='model_instructions.txt')
# Note: try different combinations of generation_config parameters for varied results
gemini_ReAct_chat("What is the total of ages of the main trio from the new Percy Jackson and the Olympians TV series in real life?", temperature=0.2)
Thought 1
I need to search the main trio from the new Percy Jackson  and the Olympians TV series, find their ages in real life, then sum them up.

Action 1
<search>Percy Jackson and the Olymp ians TV series </search>

Observation 1
Could not find ["Percy Jackson and the Olympians TV series"]. Similar: ['Percy Jackson and the Olympians (TV series)', 'Percy Jackson & the Olympians', 'Percy Jackson (film series)', 'Percy Jackson & the Olympians: The Lightning Thief', 'Percy Jackson (disambiguation)', 'Percy Jackson', 'List of characters in mythology novels by Rick Riordan', 'The Lightning Thief', 'The Heroes of Olympus', 'Walker Scobell']. You should search for one of those instead.
Thought 2
I can search Percy Jackson and the Olympians (TV series ) instead.

Action 2
<search>Percy Jackson and the Olympians (TV series) </search>
Information Source:

Observation 2
Percy Jackson and the Olympians is an American fantasy television series created by Rick Riordan and Jonathan E. Steinberg for Disney+, based on the book series of the same name by Riordan. Walker Scobell stars as Percy Jackson, alongside Leah Sava Jeffries and Aryan Simhadri. Development on the series began in May 2020, following a pitch by Riordan to Disney Branded Television.
Thought 3
The main trio from the new Percy Jackson and the Olympians  TV series are Walker Scobell, Leah Sava Jeffries, and Aryan Simhadri. I need to find their ages in real life.

Action 3
 <search>Walker Scobell </search>
Information Source:

Observation 3
Walker Scobell, born on January 5, 2009, is an American actor who has starred in the 2022 action comedy films The Adam Project and Secret Headquarters.

In 2023, Scobell began playing the title character of Percy Jackson in the Disney+ fantasy series Percy Jackson and the Olympians.
Thought 4
Walker Scobell was born on January 5, 2 009, so he is 14 years old. I need to find the ages of Leah Sava Jeffries and Aryan Simhadri.

Action  4
<search>Leah Sava Jeffries </search>
Information Source:

Observation 4
Leah Sava Jeffries, born on September 25, 2009, is an American child actress. She made her acting debut in the American musical drama, Empire, in 2015, and later made her feature film debut in the action-thriller Beast, released in 2022.
Thought 5
Leah Sava Jeffries was born on September 25,  2009, so she is 13 years old. I need to find the age of Aryan Simhadri.

Action 5
<search>Aryan Simhadri </search>
Information Source:

Observation 5
Aryan Simhadri, born on May 6, 2006, is an American actor of Indian descent. He is best known for portraying Grover Underwood in the Disney+ series "Percy Jackson and the Olympians." 

In 2021, Simhadri made his Broadway debut as Walter in the production of "Trevor: The Musical."
Thought 6
Aryan Simhadri was born on May 6,  2006, so he is 17 years old. The sum of ages of the main trio from the new Percy Jackson and the Olympians  TV series is 14 + 13 + 17 = 44.

Action 6
<finish>44 </finish>
Information Sources: ['', '', '', '']

Now, try asking the same question to the gemini-pro model without the ReAct prompt.

gemini_ReAct_chat.model.generate_content("What is the total of ages of the main trio from the new Percy Jackson and the Olympians TV series in real life?").text
'The TV series has not yet been released, so the real-life ages of the main trio are not yet known.'


The ReAct prompted Gemini model is grounded by external information sources and hence is less prone to hallucination. Furthermore, Thought-Action-Observation traces generated by the model enhance human interpretability and trustworthiness by allowing users to witness the model's reasoning process for answering the user's query.

Further reading

Head over to the Streamlit app to interact with a ReAct prompted Gemini bot.