
ChatGPT API Usage Guide with Example Code


This post provides a step-by-step guide on using OpenAI’s ChatGPT API to generate conversations in Python. It covers loading an API key from an environment variable, calling the OpenAI API to generate dialogue, and displaying the number of tokens used. Note that the API calling convention changed in version 1.0.0 of the openai Python library; this guide uses the current (v1+) interface.


Step 1: Prerequisites

Before proceeding, ensure that you have obtained your ChatGPT API key and set it up as an environment variable. If you haven’t done so, refer to our previous post on setting up the API key.
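
If you want to confirm that the key is actually visible to Python before moving on, a quick check such as the following can help. This is a minimal sketch; it assumes the variable is named OPENAI_API_KEY, matching the rest of this guide, and uses python-decouple so that a key stored in a .env file is also found.

from decouple import config

# Sanity check (sketch): python-decouple reads from a .env file or the OS
# environment. Fail early with a clear message if the key is missing.
if not config('OPENAI_API_KEY', default=None):
    raise RuntimeError("OPENAI_API_KEY is not set; see the previous post on configuring it.")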


Step 2: Writing Code to Call the OpenAI API

The following Python script demonstrates how to use OpenAI’s API to generate a conversation.

import os
from decouple import config
from openai import OpenAI

# Load API key from environment variables
api_key = config('OPENAI_API_KEY')

# Initialize OpenAI client
client = OpenAI(api_key=api_key)

# Function to generate dialogue using ChatGPT API
def generate_dialogue(prompt, model="gpt-3.5-turbo-0125", max_tokens=150, temperature=0.7, top_p=1.0, frequency_penalty=0.0, presence_penalty=0.0):
    response = client.chat.completions.create(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ],
        model=model,
        max_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p,
        frequency_penalty=frequency_penalty,
        presence_penalty=presence_penalty,
    )
    return response

# Example prompt
prompt = "Create a dialogue between two Americans at a restaurant."
dialogue = generate_dialogue(prompt)

# Print the generated dialogue
for choice in dialogue.choices:
    message_content = choice.message.content.strip()
    messages = message_content.split('\n\n')
    for message in messages:
        print(message)
        print("\n")

# Print the number of tokens used
total_tokens = dialogue.usage.total_tokens
print(f"Total tokens used: {total_tokens}")

Step 3: Code Explanation and Parameter Details

Loading the API Key

The API key is securely loaded from an environment variable using the python-decouple library.

api_key = config('OPENAI_API_KEY')

Initializing the OpenAI Client

The OpenAI client is initialized using the provided API key.

client = OpenAI(api_key=api_key)
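
Passing the key explicitly is optional when the variable is already exported in your shell environment: the v1 client looks for OPENAI_API_KEY by default. The shorter form below is a sketch of that equivalent setup (it will not see a key that lives only in a .env file).

# Equivalent when OPENAI_API_KEY is exported as an environment variable:
client = OpenAI()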

Function to Generate Dialogue

The generate_dialogue function takes a user prompt and several optional parameters to configure the API request.

def generate_dialogue(prompt, model="gpt-3.5-turbo-0125", max_tokens=150, temperature=0.7, top_p=1.0, frequency_penalty=0.0, presence_penalty=0.0):
    response = client.chat.completions.create(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ],
        model=model,
        max_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p,
        frequency_penalty=frequency_penalty,
        presence_penalty=presence_penalty,
    )
    return response
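
Every keyword argument can be overridden per call. For example, a lower temperature and a larger max_tokens budget produce a longer, more deterministic exchange. The call below is illustrative only and is not part of the original script.

# Illustrative call: override the defaults for a longer, more deterministic reply.
detailed = generate_dialogue(
    "Create a dialogue between two Americans at a coffee shop.",
    max_tokens=300,
    temperature=0.2,
)
print(detailed.choices[0].message.content)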

Printing the Generated Conversation

The generated response is split into individual messages and printed as a dialogue.

for choice in dialogue.choices:
    message_content = choice.message.content.strip()
    messages = message_content.split('\n\n')
    for message in messages:
        print(message)
        print("\n")

Displaying Token Usage

The script also prints the number of tokens used in the API request.

total_tokens = dialogue.usage.total_tokens
print(f"Total tokens used: {total_tokens}")

Understanding the API Parameters

model (default "gpt-3.5-turbo-0125"): Specifies the GPT model to use.
max_tokens (default 150): Sets the maximum number of tokens for the response.
temperature (default 0.7): Controls the randomness of responses; higher values produce more creative answers, lower values result in more deterministic responses.
top_p (default 1.0): Determines the probability mass considered for token selection; lower values lead to more predictable responses.
frequency_penalty (default 0.0): Reduces the likelihood of repeated words; higher values decrease repetition.
presence_penalty (default 0.0): Increases the likelihood of introducing new words or topics.
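
To see the effect of these parameters in practice, you can compare two calls that differ only in temperature. The values below are illustrative, and the outputs will vary between runs.

# Illustrative comparison: the same prompt at low vs. high temperature.
focused = generate_dialogue(prompt, temperature=0.0)    # more deterministic
creative = generate_dialogue(prompt, temperature=1.2)   # more varied
print(focused.choices[0].message.content)
print(creative.choices[0].message.content)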

Conclusion

This guide demonstrated how to use OpenAI’s ChatGPT API to generate conversations, covering API key setup, writing API calls, and analyzing token usage. By following these steps, you can seamlessly integrate ChatGPT API into your projects and build applications that leverage AI-driven conversations.
