PaLM API: Chat quickstart with Java

This quickstart shows you how to get started with the chat service of the PaLM API using the Java client library.

Obtain an API Key

To get started, you'll need to get an API key.
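The initialization code later in this quickstart reads the key from an environment variable named API_KEY. One way to supply it (a sketch; replace the placeholder with your real key) is:

```shell
# Export the key so the Java client can read it via System.getenv("API_KEY").
# "YOUR_API_KEY" is a placeholder, not a real key.
export API_KEY="YOUR_API_KEY"
```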

Installing the API Client

These instructions will get the PaLM Java SDK installed in your local Maven repository so that you can add it as a dependency to your Gradle project.

  1. Download the google-cloud-ai-generativelanguage-v1-java.tar.gz file.
  2. Extract the files and install them in mavenLocal:

    # Extract the files
    tar -xzvf google-cloud-ai-generativelanguage-v1-java.tar.gz
    cd google-cloud-ai-generativelanguage-v1-java
    # Install to mavenLocal
    ./gradlew publishToMavenLocal

Adding the SDK to your project

  1. Open your Gradle configuration file and make sure mavenLocal() is listed under repositories:

    repositories {
        // ...
        // Add the Maven Local repository
        mavenLocal()
    }
  2. Also in your Gradle configuration file, add the necessary libraries to the dependencies block:

    dependencies {
        // ...
        // Add these dependencies to use Generative AI
        // (this coordinate matches the artifact installed by publishToMavenLocal)
        implementation("com.google.cloud:gapic-google-cloud-ai-generativelanguage-v1-java:0.0.0-SNAPSHOT")
    }

Initialize the Discuss Service Client

Initialize a DiscussServiceClient by passing your API key (read from the API_KEY environment variable) as a header on the TransportChannelProvider used by DiscussServiceSettings:


HashMap<String, String> headers = new HashMap<>();
headers.put("x-goog-api-key", System.getenv("API_KEY"));

InstantiatingGrpcChannelProvider provider = InstantiatingGrpcChannelProvider.newBuilder()
    .setHeaderProvider(FixedHeaderProvider.create(headers))
    .build();

DiscussServiceSettings settings = DiscussServiceSettings.newBuilder()
    .setTransportChannelProvider(provider)
    .setCredentialsProvider(FixedCredentialsProvider.create(null))
    .build();

DiscussServiceClient client = DiscussServiceClient.create(settings);

Create a Message Prompt

You need to provide a MessagePrompt to the API so that it can predict the next message in the discussion.

(optional) Create some examples

Optionally, you can provide some examples of what the model should generate. This includes both user input and the response that the model should emulate.


Message input = Message.newBuilder()
    .setContent("What is the capital of California?")
    .build();

Message response = Message.newBuilder()
    .setContent("If the capital of California is what you seek, Sacramento is where you ought to peek.")
    .build();

Example californiaExample = Example.newBuilder()
    .setInput(input)
    .setOutput(response)
    .build();

Create the prompt

Pass the current message history to the MessagePrompt.Builder, optionally along with the examples from the previous step.


Message geminiMessage = Message.newBuilder()
    .setContent("How tall is the Eiffel Tower?")
    .build();

MessagePrompt messagePrompt = MessagePrompt.newBuilder()
    .addMessages(geminiMessage) // required
    .setContext("Respond to all questions with a rhyming poem.") // optional
    .addExamples(californiaExample) // use addAllExamples() to add a list of examples
    .build();

Generate Messages

Create a GenerateMessageRequest

Create a GenerateMessageRequest by passing a model name and prompt to the GenerateMessageRequest.Builder:

GenerateMessageRequest request = GenerateMessageRequest.newBuilder()
    .setModel("models/chat-bison-001") // Required, which model to use to generate the result
    .setPrompt(messagePrompt) // Required
    .setTemperature(0.5f) // Optional, controls the randomness of the output
    .setCandidateCount(1) // Optional, the number of generated messages to return
    .build();

Send the request

GenerateMessageResponse response = client.generateMessage(request);

Message returnedMessage = response.getCandidatesList().get(0);
System.out.println(returnedMessage.getContent());


Next steps

Now that you've created your first Java app using the PaLM API, check out the resource below to learn more about the API and language models in general.

  • See the Intro to LLMs topic to learn more about prompting techniques.