OpenAI Platform
Overview
OpenAI Platform is a cloud service that provides cutting-edge AI models such as GPT, DALL-E, and Whisper through APIs. Having established itself as a leader in generative AI with the success of ChatGPT, OpenAI offers an environment where developers can easily integrate world-class AI capabilities into their applications. The platform delivers comprehensive AI features, including text generation, image generation, audio processing, Function Calling, and fine-tuning, as cloud services, promoting the democratization of AI application development.
Details
As of 2025, the OpenAI Platform remains one of the most widely adopted offerings in the generative AI field, used by developers ranging from enterprises to startups. It provides multiple cutting-edge AI models, such as GPT-4o, GPT-4 Turbo, DALL-E 3, and the Whisper API, through an integrated platform, giving access to advanced AI capabilities via purpose-optimized API endpoints. Supporting various usage patterns, including the Realtime API, Assistants API, and Batch API, it enables scalable AI integration with flexible, pay-as-you-use pricing.
Key Features
- State-of-the-art AI Models: Industry-leading models including GPT-4o, DALL-E 3, Whisper, and GPT-4 Turbo
- Rich APIs: Diverse APIs for various use cases including Chat Completions, Embeddings, Fine-tuning, Assistants, and Realtime API
- Function Calling: Advanced function calling capabilities enabling integration with external systems
- Real-time Processing: Real-time audio and text processing using WebSocket
- Enterprise Support: Organizational features and security with Team and Enterprise plans
- Pay-as-you-use: Transparent pricing model based on usage
Advantages and Disadvantages
Advantages
- Easy access and rapid integration with world-class AI models
- Support for diverse AI application development through comprehensive API suite
- Reduced learning costs through extensive documentation and community support
- Technical advantages through continuous model improvements and new feature releases
- Advanced AI agent construction with Function Calling and Assistants API
- Scalable usage without initial investment through pay-as-you-use pricing
Disadvantages
- High costs at scale and complex cost management for heavy usage
- Concurrent request limitations due to API rate limits
- Requires an internet connection; no on-premises deployment is available
- Data privacy and vendor lock-in risks
- Limited explainability due to black-box nature of models
- Compatibility issues due to API specification changes or model deprecation
Reference Links
- OpenAI Platform Official Site
- OpenAI API Documentation
- OpenAI Python Library
- OpenAI Cookbook
- OpenAI API Reference
- OpenAI Platform Pricing
Code Examples
Basic Setup
# Install OpenAI Python library
pip install openai
# Set environment variable
export OPENAI_API_KEY="your-api-key-here"
# Or set in .env file
# OPENAI_API_KEY=your-api-key-here
# Basic setup in Python
import os
from openai import OpenAI
# API key configuration (automatically retrieved from environment variable)
client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
)
# Verify API key
print("OpenAI API client initialized successfully")
# Get list of available models
models = client.models.list()
print("Available models:")
for model in models.data[:5]:
    print(f"- {model.id}")
Chat Completions API (Basic Text Generation)
from openai import OpenAI
client = OpenAI()
# Basic chat completion
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant. Please provide concise and clear answers."},
        {"role": "user", "content": "How do I reverse a list in Python?"}
    ],
    max_tokens=1000,
    temperature=0.7
)
print(response.choices[0].message.content)
# Streaming response
print("\n=== Streaming Response ===")
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Explain asynchronous programming in JavaScript."}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
# Conversation with multiple messages
conversation_history = [
    {"role": "system", "content": "You are an AI assistant that supports programming learning."},
    {"role": "user", "content": "What is React?"},
    {"role": "assistant", "content": "React is a JavaScript library developed by Facebook for building user interfaces."},
    {"role": "user", "content": "Tell me 3 advantages of React."}
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=conversation_history,
    max_tokens=800
)
print("\n\n=== Conversation History Response ===")
print(response.choices[0].message.content)
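Embeddings API (Text Vectorization)
The Embeddings API listed under Key Features is not shown above; it converts text into numerical vectors for search and similarity tasks. A minimal sketch, assuming the text-embedding-3-small model:

from openai import OpenAI
client = OpenAI()

# Convert text into a numerical vector (model name is an example)
embedding_response = client.embeddings.create(
    model="text-embedding-3-small",
    input="OpenAI Platform provides cutting-edge AI models through APIs."
)
vector = embedding_response.data[0].embedding
print(f"Embedding dimensions: {len(vector)}")

# Multiple texts can be embedded in a single request
batch_response = client.embeddings.create(
    model="text-embedding-3-small",
    input=["First document", "Second document"]
)
print(f"Number of embeddings returned: {len(batch_response.data)}")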
Function Calling (External Function Integration)
import json
from openai import OpenAI
client = OpenAI()
# Define external functions
def get_weather(location, unit="celsius"):
    """Get weather information for the specified location"""
    # In a real implementation, call an external weather API here
    mock_weather_data = {
        "location": location,
        "temperature": 22 if unit == "celsius" else 72,
        "unit": unit,
        "description": "sunny"
    }
    return json.dumps(mock_weather_data)

def calculate_math(expression):
    """Calculate a mathematical expression"""
    # NOTE: eval() is unsafe on untrusted input; it is used here for
    # demonstration only. Production code should use a restricted parser.
    try:
        result = eval(expression)
        return json.dumps({"expression": expression, "result": result})
    except Exception:
        return json.dumps({"error": "Invalid expression"})
# Function Calling configuration
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather information for the specified location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g., Tokyo, Osaka"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate_math",
            "description": "Execute mathematical calculations",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Mathematical expression to calculate, e.g., 2+2, 10*5"
                    }
                },
                "required": ["expression"]
            }
        }
    }
]
# Request using Function Calling
messages = [
    {"role": "user", "content": "Tell me the weather in Tokyo and calculate 15×8"}
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

# Map function names to their implementations
available_functions = {
    "get_weather": get_weather,
    "calculate_math": calculate_math
}

# Execute any function calls included in the response
message = response.choices[0].message
if message.tool_calls:
    messages.append(message)
    for tool_call in message.tool_calls:
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)
        if function_name in available_functions:
            function_response = available_functions[function_name](**function_args)
            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": function_response
            })

# Get the final response incorporating the tool results
final_response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages
)
print("=== Function Calling Result ===")
print(final_response.choices[0].message.content)
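With tool_choice="auto", the model decides for itself whether to call a function. When an application needs a specific function to run, the API also accepts an explicit tool_choice naming that function; a short sketch reusing the tools defined above:

# Force the model to call get_weather for this request
forced_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's it like outside in Osaka?"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}}
)
print(forced_response.choices[0].message.tool_calls[0].function.arguments)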
Image Generation and Vision Processing
import base64
from openai import OpenAI
client = OpenAI()
# Image generation with DALL-E 3
print("=== Image Generation ===")
image_response = client.images.generate(
    model="dall-e-3",
    prompt="Beautiful Japanese garden with cherry blossoms, watercolor style, spring afternoon",
    size="1024x1024",
    quality="standard",
    n=1
)
generated_image_url = image_response.data[0].url
print(f"Generated image URL: {generated_image_url}")
# GPT-4 Vision for image analysis
print("\n=== Image Analysis (Vision API) ===")
# Base64 encoding for image analysis
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')
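# To analyze a local image, encode it with the helper above and embed it
# as a data URL. The file path below is a placeholder; point it at a real image.
# base64_image = encode_image("sample.jpg")
# local_response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{
#         "role": "user",
#         "content": [
#             {"type": "text", "text": "Describe this image."},
#             {"type": "image_url",
#              "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}}
#         ]
#     }],
#     max_tokens=300
# )
# print(local_response.choices[0].message.content)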
# URL-based image analysis
vision_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Please describe this image in detail. What's in it and what are its characteristics?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    }
                }
            ]
        }
    ],
    max_tokens=500
)
print(vision_response.choices[0].message.content)
# Multi-image comparison analysis
multi_vision_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Compare these images and explain their similarities and differences."},
                {"type": "image_url", "image_url": {"url": "https://example.com/image1.jpg"}},
                {"type": "image_url", "image_url": {"url": "https://example.com/image2.jpg"}}
            ]
        }
    ]
)
print(multi_vision_response.choices[0].message.content)
Audio Processing (Whisper API)
from openai import OpenAI
client = OpenAI()
# Audio transcription
def transcribe_audio(audio_file_path):
    with open(audio_file_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
            language="en"  # Specify the audio language (here: English)
        )
    return transcript.text
# Audio translation (other languages → English)
def translate_audio(audio_file_path):
    with open(audio_file_path, "rb") as audio_file:
        translation = client.audio.translations.create(
            model="whisper-1",
            file=audio_file
        )
    return translation.text
# Text-to-Speech (TTS)
def text_to_speech(text, voice="alloy", output_file="output.mp3"):
    response = client.audio.speech.create(
        model="tts-1",
        voice=voice,  # alloy, echo, fable, onyx, nova, shimmer
        input=text
    )
    with open(output_file, "wb") as f:
        f.write(response.content)
    return output_file
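# The transcription and translation helpers above expect an existing audio
# file; the paths below are placeholders for real recordings.
# print(transcribe_audio("meeting_recording.mp3"))
# print(translate_audio("japanese_speech.mp3"))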
# Usage examples
print("=== Audio Processing Examples ===")
# TTS example
tts_text = "Hello. This is a demonstration of speech synthesis using OpenAI API."
output_file = text_to_speech(tts_text, voice="nova", output_file="demo.mp3")
print(f"Generated audio file: {output_file}")
# High-quality TTS (tts-1-hd)
hd_response = client.audio.speech.create(
    model="tts-1-hd",
    voice="nova",
    input="High-quality Text-to-Speech audio generation demonstration."
)
with open("hd_output.mp3", "wb") as f:
    f.write(hd_response.content)
print("Generated high-quality audio file: hd_output.mp3")
Error Handling and Rate Limiting
import time
import openai
from openai import OpenAI
client = OpenAI()
def safe_api_call_with_retry(func, max_retries=3, backoff_factor=1):
    """API call with retry functionality"""
    for attempt in range(max_retries):
        try:
            return func()
        except openai.RateLimitError:
            if attempt == max_retries - 1:
                raise
            wait_time = backoff_factor * (2 ** attempt)
            print(f"Rate limit error. Waiting {wait_time} seconds for retry... (attempt {attempt + 1}/{max_retries})")
            time.sleep(wait_time)
        except openai.APIConnectionError as e:
            print(f"API connection error: {e}")
            if attempt == max_retries - 1:
                raise
            time.sleep(backoff_factor)
        except openai.APIStatusError as e:
            print(f"API status error: {e.status_code} - {e.message}")
            raise
        except Exception as e:
            print(f"Unexpected error: {e}")
            raise
# Comprehensive error handling example
def robust_chat_completion(messages, model="gpt-4o"):
    """Chat completion with error handling"""
    try:
        def api_call():
            return client.chat.completions.create(
                model=model,
                messages=messages,
                max_tokens=1000,
                temperature=0.7
            )

        response = safe_api_call_with_retry(api_call)
        return {
            "success": True,
            "content": response.choices[0].message.content,
            "usage": response.usage
        }
    except openai.AuthenticationError:
        return {"success": False, "error": "Authentication error: Please check your API key"}
    except openai.PermissionDeniedError:
        return {"success": False, "error": "Permission error: Access denied"}
    except openai.NotFoundError:
        return {"success": False, "error": "Resource not found"}
    except openai.UnprocessableEntityError as e:
        return {"success": False, "error": f"Request processing error: {e.message}"}
    except Exception as e:
        return {"success": False, "error": f"Unexpected error: {str(e)}"}
# Usage example
print("=== Error Handling Example ===")
messages = [
    {"role": "user", "content": "Briefly explain the history of artificial intelligence."}
]

result = robust_chat_completion(messages)
if result["success"]:
    print("Response:", result["content"])
    print("Token usage:", result["usage"])
else:
    print("Error:", result["error"])
# Batch processing with rate limiting
def process_multiple_requests(request_list, delay=1):
    """Sequential processing of multiple requests (rate limiting support)"""
    results = []
    for i, request in enumerate(request_list):
        print(f"Processing: {i+1}/{len(request_list)}")
        result = robust_chat_completion(request["messages"])
        results.append({
            "request_id": request.get("id", i),
            "result": result
        })
        # Wait between calls to avoid hitting rate limits
        if i < len(request_list) - 1:
            time.sleep(delay)
    return results
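A short usage sketch for the helper above; the request payloads are illustrative:

# Process a small batch of illustrative requests sequentially
sample_requests = [
    {"id": "q1", "messages": [{"role": "user", "content": "What is Python?"}]},
    {"id": "q2", "messages": [{"role": "user", "content": "What is JavaScript?"}]}
]
batch_results = process_multiple_requests(sample_requests, delay=1)
for item in batch_results:
    outcome = "OK" if item["result"]["success"] else "FAILED"
    print(f"Request {item['request_id']}: {outcome}")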
Advanced Integration Example
import json
import time
from typing import List, Dict
from openai import OpenAI
client = OpenAI()
# AI Assistant class
class OpenAIAssistant:
    def __init__(self, name="AI Assistant", instructions="helpful assistant", model="gpt-4o"):
        self.name = name
        self.model = model
        self.conversation_history = []
        self.system_instructions = instructions

    def chat(self, message: str, stream: bool = False) -> str:
        # Add to conversation history
        self.conversation_history.append({"role": "user", "content": message})

        # Build messages including system instructions
        messages = [{"role": "system", "content": self.system_instructions}]
        messages.extend(self.conversation_history)

        if stream:
            return self._stream_response(messages)
        else:
            response = client.chat.completions.create(
                model=self.model,
                messages=messages,
                temperature=0.7,
                max_tokens=1000
            )
            assistant_message = response.choices[0].message.content
            self.conversation_history.append({"role": "assistant", "content": assistant_message})
            return assistant_message

    def _stream_response(self, messages):
        stream = client.chat.completions.create(
            model=self.model,
            messages=messages,
            stream=True,
            temperature=0.7,
            max_tokens=1000
        )

        full_response = ""
        print(f"{self.name}: ", end="", flush=True)
        for chunk in stream:
            if chunk.choices[0].delta.content is not None:
                content = chunk.choices[0].delta.content
                print(content, end="", flush=True)
                full_response += content
        print()  # New line

        self.conversation_history.append({"role": "assistant", "content": full_response})
        return full_response

    def get_conversation_summary(self):
        if not self.conversation_history:
            return "No conversation history available."

        conversation_text = "\n".join([
            f"{msg['role']}: {msg['content']}"
            for msg in self.conversation_history
        ])

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": "Please summarize the following conversation concisely."
                },
                {
                    "role": "user",
                    "content": conversation_text
                }
            ],
            max_tokens=200
        )
        return response.choices[0].message.content

    def reset_conversation(self):
        self.conversation_history = []
# Content generation engine
class ContentGenerator:
    def __init__(self):
        self.client = client

    def generate_blog_post(self, topic: str, tone: str = "professional", length: str = "medium"):
        length_instructions = {
            "short": "400-600 words",
            "medium": "800-1200 words",
            "long": "1500-2000 words"
        }

        prompt = f"""
        Write a {length_instructions[length]} blog post about {topic} in a {tone} tone.

        Article structure:
        1. Introduction (problem statement)
        2. Main points (3-4 items)
        3. Practical advice
        4. Conclusion

        Include appropriate headings and keywords for SEO optimization.
        """

        response = self.client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=2000
        )
        return response.choices[0].message.content
# Usage demonstration
def demo_application():
    print("=== OpenAI Integration Demo Application ===\n")

    # 1. AI Assistant
    assistant = OpenAIAssistant(
        name="Programming Assistant",
        instructions="You are a programming expert. You perform code reviews, bug fixes, and optimization suggestions."
    )

    print("1. AI Assistant Interaction:")
    response = assistant.chat("What's the best way to read files in Python?")
    print(f"Response: {response}\n")

    # 2. Content Generation
    generator = ContentGenerator()
    print("2. Blog Post Generation:")
    blog_post = generator.generate_blog_post(
        "Machine Learning Fundamentals",
        tone="educational",
        length="short"
    )
    print(f"Article (excerpt): {blog_post[:200]}...\n")

    print("Demo completed successfully!")
if __name__ == "__main__":
    demo_application()  # Run the demo