As artificial intelligence continues to evolve, it is being integrated into various aspects of our daily lives. One of the most influential examples is OpenAI’s ChatGPT, which has revolutionized the way we interact with AI-driven conversations.
This post presents a continuous AI chat companion that utilizes OpenAI’s ChatGPT API to maintain long-term conversations while dynamically translating between English and other languages when necessary. The implementation is designed for seamless use in the U.S. market, focusing on English speakers while allowing multilingual interactions.
Overview of the AI Chat Companion
This AI chat system allows users to engage in continuous, context-aware conversations with an AI assistant. It keeps track of conversation history, ensuring smooth and natural interactions. Additionally, it provides real-time language translation, enabling seamless communication for non-native English speakers.
Enhancements for the U.S. Market
- Supports English as the primary language while offering automatic translation for multilingual users.
- Session-based memory allows the chatbot to maintain context throughout a conversation.
- Optimized token usage to keep costs low while delivering high-quality AI responses.
- Flexible user exit commands such as "Thank you", "Exit", or "Goodbye" to end the session naturally (a minimal loop sketch follows this list).
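Conceptually, the session memory and exit handling above boil down to a growing list of role-tagged messages plus a simple check on the user's input. Here is a minimal, illustrative sketch; ask_model is a hypothetical placeholder for the API call shown in full later in this post:

messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ").strip()
    messages.append({"role": "user", "content": user_input})    # memory grows each turn
    reply = ask_model(messages)                                  # hypothetical helper wrapping the API call
    messages.append({"role": "assistant", "content": reply})    # keep the AI's reply in context
    if user_input.lower() in ("thank you", "exit", "goodbye"):   # natural exit phrases
        break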

Why Use GPT-4 and GPT-3.5 Turbo Together?
This implementation leverages both GPT-4 and GPT-3.5-Turbo to balance cost-efficiency and performance:
| Model | Purpose | Reason |
|---|---|---|
| GPT-4 (gpt-4) | Handles deep, context-aware conversations. | GPT-4 provides better long-term memory and nuanced responses. |
| GPT-3.5-Turbo (gpt-3.5-turbo-1106) | Handles quick language translations. | Faster and cheaper than GPT-4 for basic text transformations. |
You can also compare pricing across ChatGPT models to estimate what this split will cost for your workload.
By using GPT-3.5-Turbo for translations and GPT-4 for conversations, we optimize API usage costs while maintaining high-quality responses.
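One simple way to wire up this split is a small routing helper that picks the model based on the task type. This is a hedged sketch using the same pre-1.0 openai client as the rest of this post; the MODEL_BY_TASK mapping and run_task helper are illustrative names, not part of the script below:

import openai

# Hypothetical routing table: cheap model for translations, stronger model for dialogue
MODEL_BY_TASK = {
    "translate": "gpt-3.5-turbo-1106",
    "chat": "gpt-4",
}

def run_task(task, messages, **kwargs):
    """Send messages to whichever model is mapped to the given task."""
    response = openai.ChatCompletion.create(
        model=MODEL_BY_TASK[task],
        messages=messages,
        **kwargs,
    )
    return response["choices"][0]["message"]["content"]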

Core Features of the AI Chat Companion

Multilingual Support & Real-time Translation (GPT-3.5-Turbo)
- Detects if a user input is in a language other than English and translates it before sending it to the AI model.
- Translates the AI’s English response back to the user’s language before displaying it.
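The full script below handles the detection step with a simple character-range heuristic. If you need something more robust (for example, to tell Spanish from French), the third-party langdetect package is one option. This is an alternative sketch under that assumption, not part of the original script:

from langdetect import detect  # pip install langdetect

def detect_language(text):
    """Return an ISO 639-1 code such as 'en', 'es', or 'ko'."""
    try:
        return detect(text)
    except Exception:
        return "en"  # fall back to English if detection fails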
Memory-based Continuous Conversations (GPT-4)
- Maintains a history of user interactions to provide context-aware responses.
- Ensures a smooth and engaging dialogue experience.
Smart Token Usage Tracking
- Tracks total token consumption to help users understand API cost implications.
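The script below uses a rough characters-divided-by-four estimate. If you want exact counts, OpenAI's tiktoken library tokenizes text with the same encoding the model uses; a small alternative sketch:

import tiktoken  # pip install tiktoken

def exact_token_count(text, model="gpt-4"):
    """Count tokens exactly using the model's own encoding."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

Note that the API response itself also reports actual usage under response["usage"]["total_tokens"], which is the most reliable number for billing purposes.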
Enhanced Code Implementation
This updated implementation ensures smooth and natural interactions for American users while maintaining multilingual capabilities.

Requirements
Ensure you have installed the necessary libraries before running the script:
pip install openai python-decouple
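python-decouple reads OPENAI_API_KEY from a .env file in the project directory (or from an environment variable of the same name). A minimal .env might look like this, with a placeholder key:

# .env (do not commit this file to version control)
OPENAI_API_KEY=sk-your-key-here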
Full Python Implementation
import re

import openai
from decouple import config

# Load the API key from a .env file or environment variable
openai.api_key = config("OPENAI_API_KEY")

# Track total token usage for cost monitoring
total_tokens_used = 0


def estimate_token_count(text):
    """Rough estimate: OpenAI models average ~4 characters per token."""
    return len(text) // 4


def update_total_tokens(estimated_tokens):
    """Update total token usage globally."""
    global total_tokens_used
    total_tokens_used += estimated_tokens


def is_non_english(text):
    """Detect non-English characters (basic multilingual heuristic)."""
    return bool(re.search(r"[^a-zA-Z0-9,.!?;:()'\" ]", text))


def translate_text(text, target_language, model="gpt-3.5-turbo-1106"):
    """Translate input text to the target language using GPT-3.5 Turbo."""
    translation_prompt = f"Translate this to {target_language}: {text}"
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": translation_prompt}],
        max_tokens=100,
    )
    translated_text = response["choices"][0]["message"]["content"].strip()
    update_total_tokens(estimate_token_count(translation_prompt + translated_text))
    return translated_text


def chat_with_gpt(messages, model="gpt-4", stop_sequences=None):
    """Engage in a continuous conversation with session memory (GPT-4)."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        max_tokens=200,
        temperature=0.7,
        stop=stop_sequences,
    )
    response_text = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": response_text})
    update_total_tokens(estimate_token_count(response_text))
    return response_text, messages


# Initialize the chat session with system instructions
messages = [
    {"role": "system", "content": "You are a professional AI assistant helping users in the U.S. with thoughtful and empathetic responses."}
]

# Chatbot loop
print("AI Assistant: Hello! How can I assist you today?")
while True:
    user_input = input("You: ").strip()
    if not user_input:
        continue  # Ignore empty input

    # Detect language and translate to English if necessary
    original_input = user_input
    needs_translation = is_non_english(user_input)
    if needs_translation:
        user_input = translate_text(user_input, "English")

    # Append the (English) user message to the conversation history
    messages.append({"role": "user", "content": user_input})

    # Get the AI response
    ai_response, messages = chat_with_gpt(messages, stop_sequences=["Thank you", "Goodbye", "Exit"])

    # Translate the response back into the user's original language if necessary
    if needs_translation:
        final_response = translate_text(ai_response, f"the same language as this text: {original_input}")
    else:
        final_response = ai_response
    print("AI Assistant:", final_response)

    # Exit conditions
    if user_input.lower() in ["thank you", "goodbye", "exit"]:
        print(f"Total estimated tokens used: {total_tokens_used}")
        break
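Note that the script above uses the legacy openai.ChatCompletion interface, which requires an openai package version earlier than 1.0 (for example, pip install "openai<1"). If you are on openai 1.x, the equivalent call looks roughly like this sketch; the rest of the logic (memory list, translation helpers, exit check) carries over unchanged:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=200,
)
print(response.choices[0].message.content)
print(response.usage.total_tokens)  # actual token usage reported by the API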
Key Improvements for the U.S. Market
- Session-based memory ensures that responses remain contextually relevant.
- Multilingual support enables conversations with non-English users.
- Optimized API usage by using GPT-3.5-Turbo for translations and GPT-4 for conversation flow.
- User-friendly exit phrases like "Thank you", "Goodbye", or "Exit" to naturally close the session.
Potential Applications in the U.S. Market
This chatbot can be utilized in various real-world applications, including:
Customer Support
- Businesses can integrate this chatbot to handle customer queries in multiple languages.
Personal AI Language Tutor
- The chatbot can act as a bilingual tutor that helps users practice English while communicating naturally.
AI Therapy and Well-being Chatbot
- Can be tailored as an empathetic virtual assistant providing mental health support.
Legal or Immigration Assistance
- U.S. immigrants and foreign residents can communicate in their native language, while the chatbot translates and provides legal guidance in English.

Conclusion
This AI chat companion built with OpenAI’s ChatGPT API is a robust and efficient system designed for the U.S. audience while maintaining multilingual capabilities. By leveraging both GPT-4 and GPT-3.5-Turbo, we ensure cost-effective, high-quality responses that enhance engagement, communication, and user experience.
Start integrating AI-powered continuous conversation models for business, education, or personal use today!
Check out OpenAI's official API documentation to learn more about how to use the ChatGPT API.