
Issue #16: How to Work with the OpenAI API Using Python

Learn how to work with the OpenAI API (from the makers of ChatGPT) using Python and interact with the text-generation model.

In today’s tech-driven world, APIs (Application Programming Interfaces) are the cornerstone of software development. They enable applications to communicate with each other.

OpenAI’s API is a powerful tool that lets developers use advanced AI models for text generation, image creation, transcription, etc. In this article, we’ll see how to work with the OpenAI API using Python.

What is OpenAI?

OpenAI is an AI research and deployment company and the team behind the ground-breaking model ChatGPT.

They’ve developed some of the most advanced AI models to date, aiming to ensure that artificial general intelligence benefits all of humanity.

OpenAI’s API allows developers to access these models, integrating AI capabilities into their applications.

What Are the APIs Provided by OpenAI?

OpenAI offers several APIs, each used for different tasks:

  1. GPT (Generative Pre-trained Transformer): For generating human-like text.

  2. DALL·E: For creating images from text prompts.

  3. Whisper: A model that can convert audio into text.

  4. TTS (Text-to-Speech): A set of models that can convert text into natural-sounding spoken audio.

  5. Embeddings: A set of models that can convert text into a numerical form.

These APIs change the way we interact with technology, from automating content creation to developing new ways of generating digital art.

Now let’s see how to work with the text-generation API powered by the GPT models.

Initial Setup

Before we begin working with the API, we have to get the API keys. You can find your keys on the OpenAI website.

First, let’s install the OpenAI library using the pip command (prefix the command with ! if you are running it in a Colab notebook).

pip install openai

Once you have installed the OpenAI library, let’s create a client to work with the OpenAI API.

from openai import OpenAI
client = OpenAI(api_key="API_KEY_HERE")
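If you prefer not to hard-code the key, a common pattern is to read it from an environment variable instead. Here is a minimal sketch, assuming you have set an environment variable named OPENAI_API_KEY (the client will usually also pick this variable up automatically if you omit the api_key argument):

import os
from openai import OpenAI

# Read the key from the environment instead of hard-coding it in the script
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])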

We’re all set. Let’s start with asking questions to ChatGPT using the OpenAI API.

I will be using a Google Colab notebook for this project, and you can find the completed notebook here if you want to follow along.

How to Generate Text Using the OpenAI API?

Generating text with the OpenAI API involves a few simple steps. Here’s a quick guide:

response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "user", "content": "List out the continents in the world"},
  ]
)
print(response.choices[0].message.content)

Here’s what this code does.

client.chat.completions.create() calls the create method on the completions object of a chat interface. It’s requesting the API to generate a completion, or response, based on the provided parameters.

model="gpt-3.5-turbo" specifies which language model to use for generating the response. Here, it's set to use gpt-3.5-turbo, a variant of the GPT-3.5 model. There are also other models you can use, like gpt-4, gpt-4-turbo-preview, etc.

The messages=[] parameter takes a list of message objects representing the conversation context that the model should consider when generating its response.

In this case, there’s only one message in the list:

{"role": "user", "content": "List out the continents in the world"}. This message object indicates that the sender's role is user and that the content is the prompt "List out the continents in the world", to which the model will generate a response.

Running this code will print an output similar to:

1. Africa
2. Antarctica
3. Asia
4. Europe
5. North America
6. Australia (Oceania)
7. South America

There is another role called system which we will see shortly.
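There is also an assistant role, which represents the model's own replies. If you pass earlier assistant messages back in the list, the model treats them as conversation history, which is how you keep a multi-turn chat going. A rough sketch (the follow-up question is just an illustration):

response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "user", "content": "List out the continents in the world"},
    {"role": "assistant", "content": "Africa, Antarctica, Asia, Europe, North America, Australia (Oceania), South America"},
    # The follow-up question relies on the earlier messages for context
    {"role": "user", "content": "Which of these is the largest by area?"},
  ]
)
print(response.choices[0].message.content)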

The response object will contain the generated text among other details. 

response.choices[0].message.content gets the first choice (the API can return more than one choice if you request it) and then accesses the content attribute of the message object within that choice.

This content attribute contains the generated text response to the input prompt.

In summary, this code sends a prompt to the OpenAI GPT-3.5 Turbo model asking it to list out the continents in the world. It then prints the model’s response to the console.
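The response object also carries some useful metadata besides the text. As a rough sketch (the exact fields can vary slightly with the library version), you can check why the model stopped generating and how many tokens the call used, which helps when keeping an eye on costs:

# Why the model stopped generating, e.g. "stop" or "length"
print(response.choices[0].finish_reason)

# Token counts for the prompt, the reply, and the total
print(response.usage.prompt_tokens)
print(response.usage.completion_tokens)
print(response.usage.total_tokens)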

Now, this is similar to how ChatGPT works. But there are other fun things we can do using the text-generation API. For example, you can ask it to pretend to be a chef.

How to Generate Custom Behaviour from GPT

Let's try the same code once again, only this time, we will add a system prompt. A system prompt tells the model to behave in a certain way; in our case, to act as a chef.

response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a chef. You take a food as an input and tell the user the recipe. For any other input, say 'Not a food item' "},
    {"role": "user", "content": "Pizza"},
  ]
)

print(response.choices[0].message.content)

If you try this code, you will get a response similar to the one below.

To make a delicious pizza, you will need the following ingredients:

- Pizza dough
- Tomato sauce
- Mozzarella cheese
- Toppings of your choice (such as pepperoni, mushrooms, bell peppers, etc.)

Instructions:
1. Preheat your oven to 450°F (230°C).
2. Roll out the pizza dough on a baking sheet or pizza pan.
3. Spread a layer of tomato sauce over the dough.
4. Sprinkle a generous amount of shredded mozzarella cheese over the sauce.
5. Add your favorite toppings evenly over the cheese.
6. Bake in the preheated oven for 12-15 minutes, or until the crust is golden brown and the cheese is bubbly.
7. Remove from the oven, slice, and enjoy your delicious homemade pizza!

Enjoy your pizza!

If your user prompt is not a valid food item, the response will be

Not a food item

This should help you understand the fun ways you can use the text-generation model. 
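If you want to reuse a behaviour like this, it is convenient to wrap the call in a small helper function. The sketch below (the function name get_recipe is just an illustration) keeps the system prompt fixed and only swaps the user message:

def get_recipe(food):
    # The system prompt fixes the chef behaviour; only the user message changes
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a chef. You take a food as an input and tell the user the recipe. For any other input, say 'Not a food item'"},
            {"role": "user", "content": food},
        ]
    )
    return response.choices[0].message.content

print(get_recipe("Pizza"))
print(get_recipe("Bicycle"))  # should print "Not a food item"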

Let’s try one more example. 

How to Generate Custom Data from GPT

In this example, I am going to ask the model to always give me the output as a JSON array. These kinds of customizations are important if you are going to feed a model's output directly into another API or an automated system.

Let’s modify the same code once again. 

response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "Always generate the output in an array with each line of the output being an a value in an array."},
    {"role": "user", "content": "List the continents in the world"},
  ]
)

print(response.choices[0].message.content)

And the output will be

[
  "Africa",
  "Antarctica",
  "Asia",
  "Europe",
  "North America",
  "Australia (Oceania)",
  "South America"
]
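Since the reply is formatted as a JSON array, you can parse it straight into a Python list and use it elsewhere in your code. Keep in mind that the model is not guaranteed to return valid JSON every time, so a real pipeline should handle parsing failures. A minimal sketch:

import json

try:
    continents = json.loads(response.choices[0].message.content)
    print(continents[0])    # Africa
    print(len(continents))  # 7
except json.JSONDecodeError:
    print("The model did not return valid JSON")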

Conclusion

By understanding these basics, you’re now equipped to start experimenting with the OpenAI API in your Python projects. Remember, the key to mastering API interactions is practice, so don’t hesitate to try out different prompts and settings to see what amazing things you can create.

Hope you enjoyed this article. If you have any questions, let me know in the comments. See you soon with a new topic.
