Peewee
GitHub Overview
coleifer/peewee
a small, expressive orm -- supports postgresql, mysql, sqlite and cockroachdb
Overview
Peewee is a simple and lightweight ORM (Object-Relational Mapping) library for Python, described by its author as "a small, expressive ORM." It is optimized for small to medium-scale projects, featuring simple syntax and a gentle learning curve. Supporting SQLite, MySQL, PostgreSQL, and CockroachDB, it provides an intuitive, Django-like API while remaining lighter than SQLAlchemy, making it ideal for prototyping and rapid development.
Details
As of 2025, Peewee maintains stable popularity among small projects and startups. It follows the Active Record pattern, mapping Python classes to database tables and instances to rows in an intuitive way. Because its design is inspired by the Django ORM, the API is easy to understand even for beginners. Through the bundled playhouse extensions it also offers schema migrations, connection pooling, and data export helpers, making it capable of supporting full-scale web application development.
Key Features
- Lightweight Design: Minimal dependencies with simple configuration
- Intuitive API: Django ORM-like comprehensible syntax
- Flexible Queries: Expression-based queries enable complex searches (see the sketch after this list)
- Multi-Database Support: SQLite, MySQL, PostgreSQL compatibility
- Built-in Features: Migrations, connection pooling, and data tooling via the bundled playhouse extensions
- Extensibility: Custom field types and plugin support
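A minimal sketch of the expression-based query style described above; the Account model and field names are purely illustrative:
from peewee import *
db = SqliteDatabase(':memory:')
class Account(Model):
    username = CharField()
    is_active = BooleanField(default=True)
    class Meta:
        database = db
db.connect()
db.create_tables([Account])
# Field expressions combine with & (AND), | (OR), and ~ (NOT)
query = Account.select().where(
    (Account.is_active == True) & (Account.username.contains('doe'))
)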
Pros and Cons
Pros
- Gentle learning curve; Python beginners can become productive quickly
- Lightweight with minimal overhead, achieving fast performance
- Simple syntax enables highly readable code
- High development efficiency for small to medium projects
- Built-in SQLite support makes prototyping easy
- Rich documentation and community support
Cons
- May lack features for large-scale, complex applications
- Limited advanced features and flexibility compared to SQLAlchemy
- Not suitable for enterprise-level complex relationship processing
- Limited asynchronous processing support
- Smaller plugin ecosystem compared to SQLAlchemy
- Fewer performance tuning options available
Examples
Installation and Setup
# Install Peewee
pip install peewee
# Database drivers (Peewee ships no pip extras; install the driver you need)
pip install pymysql          # For MySQL
pip install psycopg2-binary  # For PostgreSQL
# SQLite support is included in Python's standard library
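The bundled playhouse.db_url helper can also construct the right database class from a connection string; the URLs below are examples only:
from playhouse.db_url import connect
# Parses the URL and returns the matching Database instance
db = connect('sqlite:///my_app.db')
# db = connect('postgresql://postgres:secret@localhost:5432/my_db')
# db = connect('mysql://root:secret@localhost:3306/my_db')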
Basic Model Definition
from peewee import *
from datetime import datetime
# Database connection
db = SqliteDatabase('my_app.db')
# db = MySQLDatabase('my_db', user='root', password='secret', host='localhost', port=3306)
# db = PostgresqlDatabase('my_db', user='postgres', password='secret', host='localhost', port=5432)
class BaseModel(Model):
class Meta:
database = db
class User(BaseModel):
id = AutoField(primary_key=True)
username = CharField(unique=True, max_length=50)
email = CharField(unique=True, max_length=100)
password_hash = CharField(max_length=255)
is_active = BooleanField(default=True)
created_at = DateTimeField(default=datetime.now)
updated_at = DateTimeField(default=datetime.now)
class Meta:
table_name = 'users'
class Post(BaseModel):
id = AutoField(primary_key=True)
title = CharField(max_length=200)
content = TextField()
author = ForeignKeyField(User, backref='posts')
published = BooleanField(default=False)
created_at = DateTimeField(default=datetime.now)
tags = CharField(null=True)
class Meta:
table_name = 'posts'
class Comment(BaseModel):
id = AutoField(primary_key=True)
post = ForeignKeyField(Post, backref='comments')
author = ForeignKeyField(User, backref='comments')
content = TextField()
created_at = DateTimeField(default=datetime.now)
class Meta:
table_name = 'comments'
# Create database and tables
db.connect()
db.create_tables([User, Post, Comment])
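Connections can also be scoped with the database object's own context manager, and passing safe=True to create_tables makes table creation idempotent:
# Open a connection, run the block, then close it again
with db:
    db.create_tables([User, Post, Comment], safe=True)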
Basic CRUD Operations
# Create - Data creation
user = User.create(
username='john_doe',
email='[email protected]',
password_hash='hashed_password_here'
)
# Alternative method
user = User(username='jane_doe', email='[email protected]', password_hash='hash')
user.save()
# Read - Data reading
# Get all users
users = User.select()
for user in users:
print(f'{user.username}: {user.email}')
# Get single user
user = User.get(User.username == 'john_doe')
# or
user = User.get_by_id(1)
# Conditional search
active_users = User.select().where(User.is_active == True)
recent_posts = Post.select().where(Post.created_at > datetime(2024, 1, 1))
# Update - Data updates
# Single record update
user = User.get_by_id(1)
user.email = '[email protected]'
user.updated_at = datetime.now()
user.save()
# Bulk update
query = User.update(is_active=False).where(User.created_at < datetime(2023, 1, 1))
query.execute()
# Delete - Data deletion
# Single record deletion
user = User.get_by_id(1)
user.delete_instance()
# Bulk deletion
query = Post.delete().where(Post.published == False)
query.execute()
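Two convenience helpers are worth knowing alongside the basic CRUD calls: get_or_none returns None instead of raising DoesNotExist, and get_or_create only inserts when no matching row exists:
# Returns None rather than raising User.DoesNotExist
user = User.get_or_none(User.username == 'missing_user')
# Fetch an existing row or create it; created is True on insert
user, created = User.get_or_create(
    username='john_doe',
    defaults={'email': '[email protected]', 'password_hash': 'hash'}
)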
Advanced Query Operations
# JOIN operations
posts_with_authors = (Post
.select(Post, User)
.join(User)
.where(User.is_active == True))
for post in posts_with_authors:
print(f'{post.title} by {post.author.username}')
# Aggregation queries
from peewee import fn
# Post count per user
user_post_counts = (User
.select(User.username, fn.COUNT(Post.id).alias('post_count'))
.join(Post, JOIN.LEFT_OUTER)
.group_by(User.username))
for user in user_post_counts:
print(f'{user.username}: {user.post_count} posts')
# Subqueries
active_authors = User.select().where(User.is_active == True)
posts_by_active_authors = Post.select().where(Post.author.in_(active_authors))
# Complex conditions
recent_popular_posts = (Post
.select()
.join(Comment)
.group_by(Post.id)
.having(fn.COUNT(Comment.id) > 5)
.where(Post.created_at > datetime(2024, 1, 1)))
# Sorting and pagination
paginated_posts = (Post
.select()
.order_by(Post.created_at.desc())
.paginate(page=1, paginate_by=10))
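When model instances are not needed, rows can be returned as plain dicts, tuples, or namedtuples, which avoids model-construction overhead and suits serialization:
# Rows come back as plain dictionaries
for row in User.select(User.username, User.email).dicts():
    print(row['username'], row['email'])
# .tuples() and .namedtuples() work the same way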
Relationship Operations
# Back-reference (one-to-many): defined by backref='posts' on the foreign key
user = User.get_by_id(1)
user_posts = user.posts  # Query of this user's posts
# Forward reference (many-to-one): the foreign key field itself
post = Post.get_by_id(1)
post_author = post.author  # The related User instance
# Eager-load related data with a JOIN (peewee also offers prefetch())
posts_with_authors = (Post
                      .select(Post, User)
                      .join(User)
                      .where(Post.published == True))
# Many-to-Many relationship implementation
class Tag(BaseModel):
name = CharField(unique=True, max_length=50)
class PostTag(BaseModel):
post = ForeignKeyField(Post)
tag = ForeignKeyField(Tag)
class Meta:
primary_key = CompositeKey('post', 'tag')
# Many-to-Many operations
def add_tag_to_post(post, tag_name):
tag, created = Tag.get_or_create(name=tag_name)
PostTag.get_or_create(post=post, tag=tag)
def get_post_tags(post):
return (Tag
.select()
.join(PostTag)
.where(PostTag.post == post))
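Peewee also provides a ManyToManyField that creates and manages the through table automatically; a short sketch with hypothetical Student and Course models:
class Student(BaseModel):
    name = CharField()
class Course(BaseModel):
    name = CharField()
    students = ManyToManyField(Student, backref='courses')
# The generated through model must be created with the other tables
StudentCourse = Course.students.get_through_model()
db.create_tables([Student, Course, StudentCourse])
# Adding and querying the relationship
course = Course.create(name='Databases')
alice = Student.create(name='Alice')
course.students.add(alice)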
Transaction Management
# Basic transactions
def create_post_with_comment(user, title, content, comment_text):
with db.atomic():
post = Post.create(
title=title,
content=content,
author=user,
published=True
)
comment = Comment.create(
post=post,
author=user,
content=comment_text
)
return post, comment
# Nested transactions
def complex_operation():
with db.atomic() as txn:
try:
user = User.create(username='test_user', email='[email protected]', password_hash='hash')
# Nested transaction
with db.atomic() as nested_txn:
post = Post.create(title='Test', content='Content', author=user)
if some_condition:  # placeholder for application logic
    nested_txn.rollback()  # Roll back only the inner savepoint
except Exception as e:
txn.rollback() # Rollback entire transaction
raise e
# Transaction using decorator
@db.atomic()
def atomic_function():
    # Entire function body executes within a single transaction
    user = User.create(username='atomic_user', email='[email protected]',
                       password_hash='hash')
    Post.create(title='Atomic Post', content='Content', author=user)
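Note that atomic() rolls back automatically when an exception escapes the block, so explicit rollback() calls are only needed for conditional logic; a small sketch using the unique username constraint:
from peewee import IntegrityError
try:
    with db.atomic():
        User.create(username='dup', email='[email protected]', password_hash='x')
        # Violates the unique constraint -> the whole block is rolled back
        User.create(username='dup', email='[email protected]', password_hash='x')
except IntegrityError:
    pass  # Neither row was committed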
Database Migrations
# Migration script
from playhouse.migrate import *
# For SQLite
migrator = SqliteMigrator(db)
# For MySQL
# migrator = MySQLMigrator(db)
# For PostgreSQL
# migrator = PostgresqlMigrator(db)
# Add columns
migrate(
migrator.add_column('users', 'bio', TextField(null=True)),
migrator.add_column('posts', 'view_count', IntegerField(default=0)),
)
# Rename columns
migrate(
migrator.rename_column('users', 'password_hash', 'password'),
)
# Add indexes
migrate(
migrator.add_index('posts', ('author_id', 'created_at'), False),  # FK column is stored as author_id
migrator.add_index('users', ('email',), True), # Unique index
)
# Drop columns
migrate(
migrator.drop_column('posts', 'old_column'),
)
# Add a foreign key column (assumes a Category model exists)
migrate(
    migrator.add_column('posts', 'category_id',
                        ForeignKeyField(Category, field=Category.id, null=True)),
)
Backup and Restore
import json
from playhouse.dataset import DataSet
# Export entire database to JSON
def backup_database():
ds = DataSet(db)
all_data = {}
for table_name in db.get_tables():
table = ds[table_name]
all_data[table_name] = list(table.all())
with open('backup.json', 'w') as f:
json.dump(all_data, f, indent=2, default=str)
# Import from JSON to database
def restore_database():
with open('backup.json', 'r') as f:
all_data = json.load(f)
ds = DataSet(db)
for table_name, records in all_data.items():
table = ds[table_name]
for record in records:
table.insert(**record)
# CSV export
def export_to_csv():
import csv
users = User.select()
with open('users_export.csv', 'w', newline='') as csvfile:
fieldnames = ['id', 'username', 'email', 'is_active', 'created_at']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for user in users:
writer.writerow({
'id': user.id,
'username': user.username,
'email': user.email,
'is_active': user.is_active,
'created_at': user.created_at
})
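DataSet also ships freeze/thaw helpers that perform JSON or CSV export and import directly, replacing the manual loops above; the filenames here are illustrative:
ds = DataSet(db)
# Export the users table to JSON
ds.freeze(ds['users'].all(), format='json', filename='users.json')
# Import it back later
ds['users'].thaw(format='json', filename='users.json')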
Performance Optimization and Debugging
# Enable query logging
import logging
logger = logging.getLogger('peewee')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
# Use connection pooling (PostgreSQL/MySQL)
from playhouse.pool import PooledPostgresqlDatabase
db = PooledPostgresqlDatabase(
'my_db',
max_connections=20,
stale_timeout=300,
user='postgres',
password='secret'
)
# Solving N+1 problem
# Bad: N+1 queries occur
posts = Post.select()
for post in posts:
print(post.author.username) # Individual author query for each post
# Good: select both models so the JOIN populates post.author
posts_with_authors = Post.select(Post, User).join(User)
for post in posts_with_authors:
    print(post.author.username)  # Loaded by the single JOIN query
# Index optimization
class User(BaseModel):
    username = CharField(unique=True)  # unique=True already creates an index
    email = CharField(unique=True)
    is_active = BooleanField(default=True)
    created_at = DateTimeField(index=True)  # Index frequently filtered fields
    class Meta:
        database = db
        indexes = (
            (('username', 'email'), True),  # Composite unique index
            (('created_at', 'is_active'), False),  # Composite non-unique index
        )
# Execute raw SQL
def execute_raw_sql():
cursor = db.execute_sql('SELECT COUNT(*) FROM users WHERE is_active = ?', (True,))
result = cursor.fetchone()
return result[0]
# Batch processing
def bulk_create_users(users_data):
with db.atomic():
User.insert_many(users_data).execute()
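For very large inserts, peewee's chunked() helper splits the rows into batches so each INSERT statement stays under the backend's parameter limits; the batch size of 100 is an arbitrary choice:
from peewee import chunked
def bulk_create_users_chunked(users_data):
    with db.atomic():
        for batch in chunked(users_data, 100):
            User.insert_many(batch).execute()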
# Efficient processing of large datasets: loop over pages explicitly,
# since paginate() returns just a single page of results
def process_large_dataset():
    page = 1
    while True:
        batch = list(User.select().order_by(User.id).paginate(page, 1000))
        if not batch:
            break
        process_users(batch)  # Process 1,000 records at a time
        page += 1