Google Cloud Platform (GCP)

Tags: Cloud Platform, Data Analytics, Machine Learning, Kubernetes, BigQuery, TensorFlow

Overview

Google Cloud Platform (GCP) is Google's suite of cloud infrastructure and platform services. It is particularly strong in machine learning, data analytics, and Kubernetes, offering services such as BigQuery and Vertex AI alongside deep integration with TensorFlow. GCP is widely regarded as a leader in AI and machine learning, with growing adoption among data-driven companies and startups. As the originator of Kubernetes, Google also gives GCP an advantage in containerization and microservices architectures.

Details

GCP launched with App Engine in 2008 and now operates in more than 35 regions, exposing Google's core technologies as cloud services. It is particularly strong in data analytics (BigQuery), machine learning (Vertex AI), and managed Kubernetes (GKE). Notable 2025 additions include new multimodal model support in Vertex AI, enhanced multi-cloud analytics in BigQuery, the third-generation execution environment for Cloud Run, and an expanded Document AI API. GCP continues to strengthen its position as a leading platform for data-driven innovation.

Key Features

  • Machine Learning: Comprehensive ML platform with Vertex AI, AutoML, and TensorFlow
  • Data Analytics: Real-time analytics with BigQuery, Dataflow, and Pub/Sub
  • Containers: Industry-leading Kubernetes experience with GKE (Google Kubernetes Engine)
  • Serverless: Fully managed computing with Cloud Run and Cloud Functions
  • Global Network: Built on Google's worldwide-scale network infrastructure

Latest 2025 Features

  • Vertex AI: Multimodal support and more flexible model deployment
  • BigQuery: Enhanced cross-cloud analytics and AI integration
  • Cloud Run: Third-generation execution environment with improved performance
  • Document AI: Higher precision document analysis capabilities
  • Security Command Center: Enhanced cloud security monitoring

Pros and Cons

Pros

  • Industry-leading technical capabilities in data analytics and machine learning
  • Kubernetes-native container platform
  • High-speed, large-scale data analytics with BigQuery
  • Simple and consistent API design
  • Excellent cost-performance with pay-as-you-go pricing
  • High-quality global network infrastructure from Google
  • Sustainability: Carbon-neutral cloud services

Cons

  • Fewer services compared to AWS
  • Weaker enterprise support compared to AWS and Azure
  • Limited number of data centers in some regions
  • Fewer third-party tool integrations compared to AWS
  • Insufficient Japanese documentation in some cases
  • Integration with legacy systems can be challenging due to the platform's newer technology stack

Code Examples

Basic Setup and Account Configuration

# Install Google Cloud SDK
curl https://sdk.cloud.google.com | bash
exec -l $SHELL

# Initialize gcloud
gcloud init

# Authentication setup
gcloud auth login
gcloud auth application-default login

# Create project
gcloud projects create my-project-id --name="My Project"

# Set project
gcloud config set project my-project-id

# List available regions
gcloud compute regions list

# Set default region
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

# Check current configuration
gcloud config list

# Check billing accounts
gcloud billing accounts list

# Link billing account to project
gcloud billing projects link my-project-id --billing-account=BILLING_ACCOUNT_ID

# Enable APIs
gcloud services enable compute.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable bigquery.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud services enable run.googleapis.com
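
After authenticating with gcloud auth application-default login, the Python client libraries pick up Application Default Credentials automatically. The following minimal sketch verifies that credentials and a default project are available before running the larger examples below (it assumes the google-auth and google-cloud-storage libraries are installed and the Cloud Storage API is enabled).

# Verify Application Default Credentials from Python
import google.auth
from google.cloud import storage

credentials, detected_project = google.auth.default()
print(f"Authenticated against project: {detected_project}")

# Any client constructed without explicit credentials reuses ADC
storage_client = storage.Client(project=detected_project)
for bucket in storage_client.list_buckets():
    print(f"Existing bucket: {bucket.name}")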

Compute Services (VMs, Containers)

# Compute Engine VM creation with Google Cloud Python SDK
from google.cloud import compute_v1
import time

# Initialize clients
instances_client = compute_v1.InstancesClient()
networks_client = compute_v1.NetworksClient()
firewalls_client = compute_v1.FirewallsClient()

project_id = "my-project-id"
zone = "us-central1-a"
region = "us-central1"

# Create firewall rule
firewall_rule = compute_v1.Firewall()
firewall_rule.name = "allow-web-traffic"
firewall_rule.direction = "INGRESS"
firewall_rule.allowed = [
    compute_v1.Allowed(I_p_protocol="tcp", ports=["80", "443", "22"])
]
firewall_rule.source_ranges = ["0.0.0.0/0"]
firewall_rule.target_tags = ["web-server"]

operation = firewalls_client.insert(
    project=project_id,
    firewall_resource=firewall_rule
)

print(f"Creating firewall rule: {firewall_rule.name}")

# Create VM instance
instance = compute_v1.Instance()
instance.name = "web-server-vm"
instance.machine_type = f"zones/{zone}/machineTypes/e2-medium"

# Boot disk configuration
disk = compute_v1.AttachedDisk()
initialize_params = compute_v1.AttachedDiskInitializeParams()
initialize_params.source_image = "projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts"
initialize_params.disk_size_gb = 20
initialize_params.disk_type = f"zones/{zone}/diskTypes/pd-standard"
disk.initialize_params = initialize_params
disk.auto_delete = True
disk.boot = True
instance.disks = [disk]

# Network configuration
network_interface = compute_v1.NetworkInterface()
network_interface.network = "global/networks/default"
access_config = compute_v1.AccessConfig()
access_config.type_ = compute_v1.AccessConfig.Type.ONE_TO_ONE_NAT.name
access_config.name = "External NAT"
network_interface.access_configs = [access_config]
instance.network_interfaces = [network_interface]

# Metadata and startup script
metadata = compute_v1.Metadata()
metadata.items = [
    compute_v1.Items(
        key="startup-script",
        value="""#!/bin/bash
            apt-get update
            apt-get install -y nginx
            systemctl start nginx
            systemctl enable nginx
            echo '<h1>Hello from Google Cloud!</h1>' > /var/www/html/index.html
            
            # Install monitoring agent
            curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
            sudo bash add-monitoring-agent-repo.sh
            sudo apt-get update
            sudo apt-get install stackdriver-agent
        """
    )
]
instance.metadata = metadata

# Tags configuration
instance.tags = compute_v1.Tags(items=["web-server", "http-server"])

# Labels configuration
instance.labels = {
    "environment": "production",
    "application": "webapp",
    "owner": "dev-team"
}

# Execute instance creation
operation = instances_client.insert(
    project=project_id,
    zone=zone,
    instance_resource=instance
)

print(f"Creating instance: {instance.name}")

# Wait for the insert operation to complete (the returned ExtendedOperation
# blocks until the operation finishes and raises on failure)
operation.result(timeout=300)

print(f"Instance {instance.name} created successfully")

# GKE Cluster creation
from google.cloud import container_v1

container_client = container_v1.ClusterManagerClient()
cluster_location = "us-central1"

cluster = container_v1.Cluster()
cluster.name = "webapp-cluster"
cluster.description = "Production web application cluster"
cluster.initial_node_count = 3

# Node configuration
cluster.node_config = container_v1.NodeConfig()
cluster.node_config.machine_type = "e2-medium"
cluster.node_config.disk_size_gb = 50
cluster.node_config.preemptible = False
cluster.node_config.oauth_scopes = [
    "https://www.googleapis.com/auth/cloud-platform"
]
cluster.node_config.metadata = {
    "disable-legacy-endpoints": "true"
}
cluster.node_config.labels = {
    "environment": "production",
    "application": "webapp"
}

# Network policy
cluster.network_policy = container_v1.NetworkPolicy()
cluster.network_policy.enabled = True

# Addons configuration
cluster.addons_config = container_v1.AddonsConfig()
cluster.addons_config.http_load_balancing = container_v1.HttpLoadBalancing(disabled=False)
cluster.addons_config.horizontal_pod_autoscaling = container_v1.HorizontalPodAutoscaling(disabled=False)
cluster.addons_config.network_policy_config = container_v1.NetworkPolicyConfig(disabled=False)

# Workload Identity
cluster.workload_identity_config = container_v1.WorkloadIdentityConfig()
cluster.workload_identity_config.workload_pool = f"{project_id}.svc.id.goog"

# Release channel
cluster.release_channel = container_v1.ReleaseChannel()
cluster.release_channel.channel = container_v1.ReleaseChannel.Channel.REGULAR

# Maintenance policy
cluster.maintenance_policy = container_v1.MaintenancePolicy()
cluster.maintenance_policy.window = container_v1.MaintenanceWindow()
cluster.maintenance_policy.window.daily_maintenance_window = container_v1.DailyMaintenanceWindow()
cluster.maintenance_policy.window.daily_maintenance_window.start_time = "03:00"

operation = container_client.create_cluster(
    parent=f"projects/{project_id}/locations/{cluster_location}",
    cluster=cluster
)

print(f"Creating GKE cluster: {cluster.name}")

Storage and Database Services

# Cloud Storage, Firestore, and BigQuery operations
from google.cloud import storage
from google.cloud import firestore
from google.cloud import bigquery
from google.cloud import spanner
import datetime
import uuid

# Cloud Storage operations
storage_client = storage.Client()

# Create bucket with advanced configuration
bucket_name = f"webapp-storage-{uuid.uuid4().hex[:8]}"
bucket = storage_client.bucket(bucket_name)
bucket.storage_class = "STANDARD"

# Create bucket if it doesn't exist (location is set at creation time)
if not bucket.exists():
    bucket = storage_client.create_bucket(bucket, location="us-central1")
    print(f"Bucket {bucket_name} created")

# Configure lifecycle management
bucket.lifecycle_rules = [
    {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}
    },
    {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 365, "matchesStorageClass": ["NEARLINE"]}
    },
    {
        "action": {"type": "Delete"},
        "condition": {"age": 2555}  # 7 years
    }
]
bucket.patch()

# Enable versioning
bucket.versioning_enabled = True
bucket.patch()

# Upload file with metadata
blob = bucket.blob("uploads/sample.txt")
blob.metadata = {
    "author": "user123",
    "purpose": "demo",
    "uploaded_at": datetime.datetime.now().isoformat(),
    "file_type": "document"
}

with open("local-file.txt", "rb") as file:
    blob.upload_from_file(
        file, 
        content_type="text/plain",
        timeout=60
    )

# Set custom headers (set content_encoding = "gzip" only if the uploaded bytes
# are actually gzip-compressed, so it is omitted here)
blob.cache_control = "public, max-age=3600"
blob.patch()

print(f"File uploaded to {blob.name}")

# Firestore (NoSQL database) operations
firestore_client = firestore.Client()

# Create collection and add document
users_ref = firestore_client.collection('users')
user_doc = {
    'name': 'John Doe',
    'email': '[email protected]',
    'status': 'active',
    'created_at': firestore.SERVER_TIMESTAMP,
    'preferences': {
        'theme': 'dark',
        'language': 'en',
        'notifications': True
    },
    'tags': ['developer', 'premium'],
    'last_login': None
}

doc_ref = users_ref.document('user123')
doc_ref.set(user_doc)

# Update document
doc_ref.update({
    'last_login': firestore.SERVER_TIMESTAMP,
    'login_count': firestore.Increment(1)
})

# Read document
doc = doc_ref.get()
if doc.exists:
    print(f"User data: {doc.to_dict()}")

# Complex queries
active_users = users_ref.where('status', '==', 'active')\
                        .where('preferences.notifications', '==', True)\
                        .order_by('created_at', direction=firestore.Query.DESCENDING)\
                        .limit(10)

for user in active_users.stream():
    print(f"Active user: {user.id} => {user.to_dict()}")

# BigQuery operations
bigquery_client = bigquery.Client()

# Create dataset
dataset_id = f"{project_id}.webapp_analytics"
dataset = bigquery.Dataset(dataset_id)
dataset.location = "US"
dataset.description = "Web application analytics data"

try:
    dataset = bigquery_client.create_dataset(dataset, timeout=30)
    print(f"Created dataset {dataset.dataset_id}")
except Exception as e:
    print(f"Dataset may already exist: {e}")

# Create table with schema
table_id = f"{dataset_id}.user_events"
schema = [
    bigquery.SchemaField("event_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("user_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("event_type", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("timestamp", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("properties", "JSON", mode="NULLABLE"),
    bigquery.SchemaField("session_id", "STRING", mode="NULLABLE"),
]

table = bigquery.Table(table_id, schema=schema)
table.description = "User events tracking table"

try:
    table = bigquery_client.create_table(table)
    print(f"Created table {table.project}.{table.dataset_id}.{table.table_id}")
except Exception as e:
    print(f"Table may already exist: {e}")

# Insert data
rows_to_insert = [
    {
        "event_id": str(uuid.uuid4()),
        "user_id": "user123",
        "event_type": "page_view",
        "timestamp": datetime.datetime.now().isoformat(),
        "properties": {"page": "/dashboard", "referrer": "/login"},
        "session_id": "sess_" + str(uuid.uuid4())[:8]
    },
    {
        "event_id": str(uuid.uuid4()),
        "user_id": "user456",
        "event_type": "button_click",
        "timestamp": datetime.datetime.now().isoformat(),
        "properties": {"button_id": "signup", "location": "header"},
        "session_id": "sess_" + str(uuid.uuid4())[:8]
    }
]

errors = bigquery_client.insert_rows_json(table, rows_to_insert)
if not errors:
    print("Events inserted successfully into BigQuery")
else:
    print(f"Errors inserting into BigQuery: {errors}")

# Query data
query = """
    SELECT 
        event_type,
        COUNT(*) as event_count,
        COUNT(DISTINCT user_id) as unique_users
    FROM `{}`
    WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
    GROUP BY event_type
    ORDER BY event_count DESC
""".format(table_id)

query_job = bigquery_client.query(query)
results = query_job.result()

print("BigQuery results:")
for row in results:
    print(f"Event: {row.event_type}, Count: {row.event_count}, Users: {row.unique_users}")

Networking and Security

# Cloud IAM and security configuration
from google.cloud import secretmanager
from google.cloud import compute_v1

# Secret Manager operations
secrets_client = secretmanager.SecretManagerServiceClient()
project_path = f"projects/{project_id}"

# Create secret
secret_id = "database-password"
secret_path = secrets_client.secret_path(project_id, secret_id)

try:
    secret = secrets_client.create_secret(
        request={
            "parent": project_path,
            "secret_id": secret_id,
            "secret": {
                "replication": {
                    "user_managed": {
                        "replicas": [
                            {"location": "us-central1"},
                            {"location": "us-east1"}
                        ]
                    }
                },
                "labels": {
                    "environment": "production",
                    "component": "database"
                }
            }
        }
    )
    print(f"Created secret: {secret.name}")
except Exception as e:
    print(f"Secret may already exist: {e}")

# Add secret version with metadata
secret_value = "SecurePassword123!"
response = secrets_client.add_secret_version(
    request={
        "parent": secret_path,
        "payload": {"data": secret_value.encode("UTF-8")}
    }
)
print(f"Added secret version: {response.name}")

# Access secret
version_path = secrets_client.secret_version_path(project_id, secret_id, "latest")
response = secrets_client.access_secret_version(request={"name": version_path})
secret_data = response.payload.data.decode("UTF-8")
print(f"Retrieved secret successfully")

# Create custom VPC network
networks_client = compute_v1.NetworksClient()
subnets_client = compute_v1.SubnetworksClient()
firewalls_client = compute_v1.FirewallsClient()

# Custom VPC network
network = compute_v1.Network()
network.name = "webapp-vpc"
network.routing_config = compute_v1.NetworkRoutingConfig()
network.routing_config.routing_mode = "REGIONAL"

operation = networks_client.insert(
    project=project_id,
    network_resource=network
)

print(f"Creating VPC network: {network.name}")

# Create subnets
subnets = [
    {
        "name": "webapp-subnet-web",
        "ip_cidr_range": "10.0.1.0/24",
        "description": "Subnet for web tier"
    },
    {
        "name": "webapp-subnet-app",
        "ip_cidr_range": "10.0.2.0/24",
        "description": "Subnet for application tier"
    },
    {
        "name": "webapp-subnet-db",
        "ip_cidr_range": "10.0.3.0/24",
        "description": "Subnet for database tier"
    }
]

for subnet_config in subnets:
    subnet = compute_v1.Subnetwork()
    subnet.name = subnet_config["name"]
    subnet.ip_cidr_range = subnet_config["ip_cidr_range"]
    subnet.description = subnet_config["description"]
    subnet.network = f"projects/{project_id}/global/networks/{network.name}"
    subnet.region = region
    
    # Enable private Google access
    subnet.private_ip_google_access = True
    
    operation = subnets_client.insert(
        project=project_id,
        region=region,
        subnetwork_resource=subnet
    )
    
    print(f"Creating subnet: {subnet.name}")

# Create comprehensive firewall rules
firewall_rules = [
    {
        "name": "allow-internal-all",
        "direction": "INGRESS",
        "allowed": [
            {"IPProtocol": "tcp"},
            {"IPProtocol": "udp"},
            {"IPProtocol": "icmp"}
        ],
        "source_ranges": ["10.0.0.0/8"],
        "target_tags": ["internal"],
        "description": "Allow all internal communication"
    },
    {
        "name": "allow-ssh-from-iap",
        "direction": "INGRESS",
        "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}],
        "source_ranges": ["35.235.240.0/20"],  # IAP source range
        "target_tags": ["ssh-allowed"],
        "description": "Allow SSH through Identity-Aware Proxy"
    },
    {
        "name": "allow-web-public",
        "direction": "INGRESS",
        "allowed": [{"IPProtocol": "tcp", "ports": ["80", "443"]}],
        "source_ranges": ["0.0.0.0/0"],
        "target_tags": ["web-server"],
        "description": "Allow HTTP/HTTPS from internet"
    },
    {
        "name": "allow-health-check",
        "direction": "INGRESS",
        "allowed": [{"IPProtocol": "tcp", "ports": ["80", "8080"]}],
        "source_ranges": ["130.211.0.0/22", "35.191.0.0/16"],
        "target_tags": ["load-balanced"],
        "description": "Allow health checks from load balancer"
    }
]

for rule_config in firewall_rules:
    firewall_rule = compute_v1.Firewall()
    firewall_rule.name = rule_config["name"]
    firewall_rule.direction = rule_config["direction"]
    firewall_rule.description = rule_config["description"]
    firewall_rule.allowed = [
        compute_v1.Allowed(**allowed) for allowed in rule_config["allowed"]
    ]
    firewall_rule.source_ranges = rule_config["source_ranges"]
    firewall_rule.target_tags = rule_config["target_tags"]
    firewall_rule.network = f"projects/{project_id}/global/networks/{network.name}"
    
    operation = firewalls_client.insert(
        project=project_id,
        firewall_resource=firewall_rule
    )
    print(f"Creating firewall rule: {firewall_rule.name}")

# Cloud IAM policy management
from google.cloud import resourcemanager_v3

resourcemanager_client = resourcemanager_v3.ProjectsClient()

# Get current IAM policy
policy = resourcemanager_client.get_iam_policy(
    request={"resource": f"projects/{project_id}"}
)

# Add a role binding for the application's service account
policy.bindings.add(
    role="roles/storage.objectViewer",
    members=[f"serviceAccount:webapp-service@{project_id}.iam.gserviceaccount.com"],
)

# Update IAM policy
updated_policy = resourcemanager_client.set_iam_policy(
    request={
        "resource": f"projects/{project_id}",
        "policy": policy
    }
)

print("IAM policy updated successfully")

Serverless and Functions

# Cloud Functions and Cloud Run examples
import functions_framework
from google.cloud import firestore
from google.cloud import pubsub_v1
from google.cloud import tasks_v2
import json
import logging
from datetime import datetime, timedelta

@functions_framework.http
def api_gateway(request):
    """HTTP trigger Cloud Function serving as API gateway"""
    
    # CORS headers
    if request.method == 'OPTIONS':
        headers = {
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE',
            'Access-Control-Allow-Headers': 'Content-Type, Authorization',
            'Access-Control-Max-Age': '3600'
        }
        return ('', 204, headers)
    
    headers = {
        'Access-Control-Allow-Origin': '*',
        'Content-Type': 'application/json'
    }
    
    try:
        # Route requests based on path and method
        path = request.path.strip('/')
        method = request.method
        
        # Initialize clients
        firestore_client = firestore.Client()
        
        # User management endpoints
        if path.startswith('users'):
            if method == 'GET':
                # List users
                users_ref = firestore_client.collection('users')
                users = []
                for doc in users_ref.limit(50).stream():
                    user_data = doc.to_dict()
                    user_data['id'] = doc.id
                    users.append(user_data)
                
                return (json.dumps({
                    'users': users,
                    'count': len(users),
                    'timestamp': datetime.now().isoformat()
                }), 200, headers)
            
            elif method == 'POST':
                # Create user
                request_json = request.get_json()
                if not request_json:
                    return (json.dumps({'error': 'No JSON body provided'}), 400, headers)
                
                user_data = {
                    'name': request_json.get('name'),
                    'email': request_json.get('email'),
                    'status': 'active',
                    'created_at': firestore.SERVER_TIMESTAMP,
                    'metadata': {
                        'source': 'api',
                        'user_agent': request.headers.get('User-Agent', ''),
                        'ip_address': request.remote_addr
                    }
                }
                
                doc_ref = firestore_client.collection('users').document()
                doc_ref.set(user_data)
                
                # Publish event to Pub/Sub
                publisher = pubsub_v1.PublisherClient()
                topic_path = publisher.topic_path(project_id, 'user-events')
                
                event_data = {
                    'event_type': 'user_created',
                    'user_id': doc_ref.id,
                    'timestamp': datetime.now().isoformat(),
                    'user_data': user_data
                }
                
                future = publisher.publish(
                    topic_path,
                    json.dumps(event_data, default=str).encode('utf-8'),
                    event_type='user_created',
                    user_id=doc_ref.id
                )
                
                return (json.dumps({
                    'message': 'User created successfully',
                    'user_id': doc_ref.id,
                    'event_id': future.result()
                }), 201, headers)
        
        # Analytics endpoints
        elif path.startswith('analytics'):
            # Query BigQuery for analytics data
            from google.cloud import bigquery
            
            bigquery_client = bigquery.Client()
            
            query = """
                SELECT 
                    event_type,
                    COUNT(*) as count,
                    COUNT(DISTINCT user_id) as unique_users
                FROM `{}.webapp_analytics.user_events`
                WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
                GROUP BY event_type
                ORDER BY count DESC
            """.format(project_id)
            
            query_job = bigquery_client.query(query)
            results = query_job.result()
            
            analytics_data = []
            for row in results:
                analytics_data.append({
                    'event_type': row.event_type,
                    'count': row.count,
                    'unique_users': row.unique_users
                })
            
            return (json.dumps({
                'analytics': analytics_data,
                'period': '24h',
                'timestamp': datetime.now().isoformat()
            }), 200, headers)
        
        else:
            return (json.dumps({'error': 'Endpoint not found'}), 404, headers)
    
    except Exception as e:
        logging.error(f"Error processing request: {str(e)}")
        return (json.dumps({'error': str(e)}), 500, headers)

@functions_framework.cloud_event
def process_user_events(cloud_event):
    """Pub/Sub trigger for processing user events"""
    
    import base64
    
    # Decode Pub/Sub message
    message_data = base64.b64decode(cloud_event.data['message']['data']).decode('utf-8')
    message_json = json.loads(message_data)
    attributes = cloud_event.data['message'].get('attributes', {})
    
    logging.info(f"Processing event: {message_json}")
    
    event_type = message_json.get('event_type')
    user_id = message_json.get('user_id')
    
    if event_type == 'user_created':
        # Send welcome email (simulate)
        logging.info(f"Sending welcome email to user {user_id}")
        
        # Schedule follow-up tasks
        tasks_client = tasks_v2.CloudTasksClient()
        parent = tasks_client.queue_path(project_id, 'us-central1', 'user-onboarding')
        
        # Schedule welcome email task
        task = {
            'http_request': {
                'http_method': tasks_v2.HttpMethod.POST,
                'url': f'https://us-central1-{project_id}.cloudfunctions.net/send_welcome_email',
                'headers': {'Content-Type': 'application/json'},
                'body': json.dumps({
                    'user_id': user_id,
                    'email_type': 'welcome'
                }).encode()
            },
            'schedule_time': {
                'seconds': int((datetime.now() + timedelta(minutes=5)).timestamp())
            }
        }
        
        response = tasks_client.create_task(parent=parent, task=task)
        logging.info(f"Scheduled welcome email task: {response.name}")
        
        # Update user analytics
        from google.cloud import bigquery
        
        bigquery_client = bigquery.Client()
        table_id = f"{project_id}.webapp_analytics.user_events"
        
        rows_to_insert = [
            {
                "event_id": str(uuid.uuid4()),
                "user_id": user_id,
                "event_type": "welcome_email_scheduled",
                "timestamp": datetime.now().isoformat(),
                "properties": {"trigger": "user_created"},
                "session_id": None
            }
        ]
        
        errors = bigquery_client.insert_rows_json(table_id, rows_to_insert)
        if not errors:
            logging.info("Analytics event logged successfully")
        else:
            logging.error(f"Analytics logging errors: {errors}")

# Cloud Run service (FastAPI)
from fastapi import FastAPI, HTTPException, Depends, Request
from fastapi.middleware.cors import CORSMiddleware
from google.cloud import firestore
from typing import List, Optional
import uvicorn

app = FastAPI(
    title="WebApp API",
    version="2.0.0",
    description="Production-ready API for web application"
)

# Add CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Initialize Firestore client
firestore_client = firestore.Client()

@app.middleware("http")
async def log_requests(request: Request, call_next):
    """Log all requests for monitoring"""
    start_time = datetime.now()
    
    response = await call_next(request)
    
    process_time = (datetime.now() - start_time).total_seconds()
    
    # Log to Cloud Logging
    logging.info(f"Request: {request.method} {request.url.path} - "
                f"Status: {response.status_code} - Time: {process_time:.3f}s")
    
    return response

@app.get("/")
async def root():
    return {
        "service": "webapp-api",
        "version": "2.0.0",
        "status": "healthy",
        "timestamp": datetime.now().isoformat()
    }

@app.get("/health")
async def health_check():
    """Comprehensive health check"""
    try:
        # Test Firestore connection
        test_doc = firestore_client.collection('health').document('test')
        test_doc.set({'timestamp': firestore.SERVER_TIMESTAMP})
        
        return {
            "status": "healthy",
            "services": {
                "firestore": "ok",
                "api": "ok"
            },
            "timestamp": datetime.now().isoformat()
        }
    except Exception as e:
        raise HTTPException(status_code=503, detail=f"Service unhealthy: {str(e)}")

@app.get("/users")
async def get_users(limit: int = 50, offset: int = 0):
    """Get users with pagination"""
    try:
        users_ref = firestore_client.collection('users')
        query = users_ref.limit(limit).offset(offset)
        
        users = []
        for doc in query.stream():
            user_data = doc.to_dict()
            user_data['id'] = doc.id
            users.append(user_data)
        
        return {
            "users": users,
            "count": len(users),
            "limit": limit,
            "offset": offset,
            "timestamp": datetime.now().isoformat()
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.post("/users")
async def create_user(user_data: dict):
    """Create new user with validation"""
    try:
        required_fields = ['name', 'email']
        for field in required_fields:
            if field not in user_data:
                raise HTTPException(status_code=400, detail=f"Missing required field: {field}")
        
        # Add metadata
        user_data.update({
            'created_at': firestore.SERVER_TIMESTAMP,
            'status': 'active',
            'last_login': None,
            'login_count': 0
        })
        
        doc_ref = firestore_client.collection('users').document()
        doc_ref.set(user_data)
        
        return {
            "message": "User created successfully",
            "user_id": doc_ref.id,
            "timestamp": datetime.now().isoformat()
        }
    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)
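
The functions above assume that the user-events Pub/Sub topic and the user-onboarding Cloud Tasks queue already exist. A small provisioning sketch (names match the examples above; the region is assumed to be us-central1):

# Provision the Pub/Sub topic and Cloud Tasks queue used by the functions
from google.cloud import pubsub_v1, tasks_v2

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, "user-events")
try:
    publisher.create_topic(request={"name": topic_path})
    print(f"Created topic: {topic_path}")
except Exception as e:
    print(f"Topic may already exist: {e}")

tasks_client = tasks_v2.CloudTasksClient()
location_path = f"projects/{project_id}/locations/us-central1"
queue_path = f"{location_path}/queues/user-onboarding"
try:
    tasks_client.create_queue(
        request={"parent": location_path, "queue": {"name": queue_path}}
    )
    print(f"Created queue: {queue_path}")
except Exception as e:
    print(f"Queue may already exist: {e}")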

Monitoring and DevOps Integration

# Cloud Monitoring and Logging setup
from google.cloud import monitoring_v3
from google.cloud import logging as cloud_logging
from google.cloud import error_reporting
import logging
import time
from datetime import datetime

# Cloud Monitoring client
monitoring_client = monitoring_v3.MetricServiceClient()
alert_client = monitoring_v3.AlertPolicyServiceClient()
project_name = f"projects/{project_id}"

# Create custom metric descriptor
descriptor = monitoring_v3.MetricDescriptor()
descriptor.type = "custom.googleapis.com/webapp/api_requests"
descriptor.metric_kind = monitoring_v3.MetricDescriptor.MetricKind.COUNTER
descriptor.value_type = monitoring_v3.MetricDescriptor.ValueType.INT64
descriptor.description = "Number of API requests"
descriptor.display_name = "API Requests"

labels = [
    monitoring_v3.LabelDescriptor(
        key="endpoint",
        value_type=monitoring_v3.LabelDescriptor.ValueType.STRING,
        description="API endpoint"
    ),
    monitoring_v3.LabelDescriptor(
        key="status_code",
        value_type=monitoring_v3.LabelDescriptor.ValueType.STRING,
        description="HTTP status code"
    ),
    monitoring_v3.LabelDescriptor(
        key="method",
        value_type=monitoring_v3.LabelDescriptor.ValueType.STRING,
        description="HTTP method"
    )
]
descriptor.labels.extend(labels)

try:
    descriptor = monitoring_client.create_metric_descriptor(
        name=project_name,
        metric_descriptor=descriptor
    )
    print(f"Created metric descriptor: {descriptor.type}")
except Exception as e:
    print(f"Metric descriptor may already exist: {e}")

# Send metrics data
def send_api_metrics(endpoint: str, status_code: int, method: str):
    series = monitoring_v3.TimeSeries()
    series.metric.type = "custom.googleapis.com/webapp/api_requests"
    series.metric.labels["endpoint"] = endpoint
    series.metric.labels["status_code"] = str(status_code)
    series.metric.labels["method"] = method
    
    series.resource.type = "cloud_run_revision"
    series.resource.labels["service_name"] = "webapp-api"
    series.resource.labels["revision_name"] = "webapp-api-00001"
    series.resource.labels["location"] = "us-central1"
    
    now = time.time()
    seconds = int(now)
    nanos = int((now - seconds) * 10 ** 9)
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": seconds, "nanos": nanos}}
    )
    
    point = monitoring_v3.Point(
        {"interval": interval, "value": {"int64_value": 1}}
    )
    series.points = [point]
    
    monitoring_client.create_time_series(
        name=project_name,
        time_series=[series]
    )

# Create comprehensive alert policies
alert_policies = [
    {
        "display_name": "High API Error Rate",
        "documentation": "Alert when API error rate exceeds 5%",
        "conditions": [
            {
                "display_name": "Error rate condition",
                "condition_threshold": {
                    "filter": 'resource.type="cloud_run_revision" AND metric.type="run.googleapis.com/request_count"',
                    "comparison": "COMPARISON_GREATER_THAN",
                    "threshold_value": 0.05,
                    "duration": {"seconds": 300},
                    "aggregations": [
                        {
                            "alignment_period": {"seconds": 60},
                            "per_series_aligner": "ALIGN_RATE",
                            "cross_series_reducer": "REDUCE_SUM",
                            "group_by_fields": ["resource.labels.service_name"]
                        }
                    ]
                }
            }
        ]
    },
    {
        "display_name": "High Memory Usage",
        "documentation": "Alert when memory usage exceeds 80%",
        "conditions": [
            {
                "display_name": "Memory usage condition",
                "condition_threshold": {
                    "filter": 'resource.type="cloud_run_revision" AND metric.type="run.googleapis.com/container/memory/utilizations"',
                    "comparison": "COMPARISON_GREATER_THAN",
                    "threshold_value": 0.8,
                    "duration": {"seconds": 300},
                    "aggregations": [
                        {
                            "alignment_period": {"seconds": 60},
                            "per_series_aligner": "ALIGN_MEAN"
                        }
                    ]
                }
            }
        ]
    }
]

for policy_config in alert_policies:
    alert_policy = monitoring_v3.AlertPolicy()
    alert_policy.display_name = policy_config["display_name"]
    alert_policy.documentation.content = policy_config["documentation"]
    alert_policy.enabled = True
    
    for condition_config in policy_config["conditions"]:
        condition = monitoring_v3.AlertPolicy.Condition()
        condition.display_name = condition_config["display_name"]
        # Assign a constructed MetricThreshold directly to the nested field
        condition.condition_threshold = monitoring_v3.AlertPolicy.Condition.MetricThreshold(
            **condition_config["condition_threshold"]
        )
        alert_policy.conditions.append(condition)
    
    try:
        alert_policy = alert_client.create_alert_policy(
            name=project_name,
            alert_policy=alert_policy
        )
        print(f"Created alert policy: {alert_policy.display_name}")
    except Exception as e:
        print(f"Alert policy creation error: {e}")

# Cloud Logging setup
logging_client = cloud_logging.Client()
logging_client.setup_logging()

# Structured logging
logger = logging_client.logger("webapp-api")

def log_api_request(endpoint: str, method: str, status_code: int, duration: float, user_id: str = None):
    """Log API request with structured data"""
    logger.log_struct(
        {
            "message": "API request processed",
            "endpoint": endpoint,
            "method": method,
            "status_code": status_code,
            "duration_ms": duration * 1000,
            "user_id": user_id,
            "timestamp": datetime.now().isoformat(),
            "severity": "INFO" if status_code < 400 else "ERROR"
        },
        severity="INFO" if status_code < 400 else "ERROR"
    )

# Error reporting
error_client = error_reporting.Client()

def report_error(error: Exception, request_context: dict = None):
    """Report error to Cloud Error Reporting"""
    try:
        error_client.report_exception(
            http_context=error_reporting.HTTPContext(
                method=request_context.get('method', 'GET'),
                url=request_context.get('url', ''),
                user_agent=request_context.get('user_agent', ''),
                remote_ip=request_context.get('remote_ip', '')
            ) if request_context else None
        )
    except Exception as e:
        logging.error(f"Failed to report error: {e}")

print("Monitoring and logging setup completed")
# Cloud Build CI/CD pipeline (cloudbuild.yaml)
steps:
  # Run tests
  - name: 'gcr.io/cloud-builders/docker'
    args: 
      - 'run'
      - '--rm'
      - '-v'
      - '/workspace:/workspace'
      - '-w'
      - '/workspace'
      - 'python:3.9-slim'
      - 'sh'
      - '-c'
      - |
        pip install -r requirements.txt
        python -m pytest tests/ -v --cov=src/ --cov-report=xml

  # Build application image
  - name: 'gcr.io/cloud-builders/docker'
    args: 
      - 'build'
      - '-t'
      - 'gcr.io/$PROJECT_ID/webapp:$BUILD_ID'
      - '-t'
      - 'gcr.io/$PROJECT_ID/webapp:latest'
      - '--cache-from'
      - 'gcr.io/$PROJECT_ID/webapp:latest'
      - '.'

  # Push images to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: 
      - 'push'
      - 'gcr.io/$PROJECT_ID/webapp:$BUILD_ID'

  - name: 'gcr.io/cloud-builders/docker'
    args: 
      - 'push'
      - 'gcr.io/$PROJECT_ID/webapp:latest'

  # Security scanning (Artifact Analysis on-demand scanning)
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'artifacts'
      - 'docker'
      - 'images'
      - 'scan'
      - 'gcr.io/$PROJECT_ID/webapp:$BUILD_ID'
      - '--remote'
      - '--format=json'

  # Deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'webapp-service'
      - '--image'
      - 'gcr.io/$PROJECT_ID/webapp:$BUILD_ID'
      - '--region'
      - '${_DEPLOY_REGION}'
      - '--platform'
      - 'managed'
      - '--allow-unauthenticated'
      - '--memory'
      - '1Gi'
      - '--cpu'
      - '2'
      - '--concurrency'
      - '100'
      - '--max-instances'
      - '10'
      - '--min-instances'
      - '1'
      - '--set-env-vars'
      - 'PROJECT_ID=$PROJECT_ID,ENVIRONMENT=production,BUILD_ID=$BUILD_ID'
      - '--set-cloudsql-instances'
      - '$PROJECT_ID:${_DEPLOY_REGION}:webapp-db'
      - '--vpc-connector'
      - 'webapp-connector'
      - '--vpc-egress'
      - 'private-ranges-only'

  # Run integration tests against the deployed Cloud Run service
  # (the service URL is looked up at deploy time rather than hard-coded)
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        SERVICE_URL=$$(gcloud run services describe webapp-service --region='${_DEPLOY_REGION}' --format='value(status.url)')
        curl -f "$${SERVICE_URL}/health"

  # Update traffic allocation
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'services'
      - 'update-traffic'
      - 'webapp-service'
      - '--to-latest'
      - '--region'
      - '${_DEPLOY_REGION}'

  # Record deployment in BigQuery
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        bq query --use_legacy_sql=false \
        "INSERT INTO \`$PROJECT_ID.ops.deployments\` \
        (deployment_id, build_id, service_name, image_tag, deployed_at, status, region, environment) \
        VALUES ('$BUILD_ID', '$BUILD_ID', 'webapp-service', 'gcr.io/$PROJECT_ID/webapp:$BUILD_ID', CURRENT_TIMESTAMP(), 'SUCCESS', '${_DEPLOY_REGION}', 'production')"

  # Send notification to Pub/Sub
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'pubsub'
      - 'topics'
      - 'publish'
      - 'deployment-notifications'
      - '--message'
      - '{"service": "webapp-service", "build_id": "$BUILD_ID", "status": "deployed", "region": "${_DEPLOY_REGION}"}'

# Build configuration
options:
  logging: CLOUD_LOGGING_ONLY
  machineType: 'E2_HIGHCPU_8'
  dynamicSubstitutions: true

timeout: '1800s'

# Artifacts
images:
  - 'gcr.io/$PROJECT_ID/webapp:$BUILD_ID'
  - 'gcr.io/$PROJECT_ID/webapp:latest'

# Substitutions
substitutions:
  _DEPLOY_REGION: 'us-central1'
  _SERVICE_NAME: 'webapp-service'
  _MIN_INSTANCES: '1'
  _MAX_INSTANCES: '10'

# Note: build triggers are not defined inside cloudbuild.yaml itself; they are
# created separately (for example with "gcloud builds triggers create github")
# and point at this configuration file.
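
As a rough sketch, a GitHub push trigger pointing at this cloudbuild.yaml could also be created programmatically with the google-cloud-build client library. The repository owner and name below are placeholders, and the GitHub repository must already be connected to the project.

# Create a Cloud Build trigger for pushes to main (sketch)
from google.cloud.devtools import cloudbuild_v1

build_client = cloudbuild_v1.CloudBuildClient()

trigger = cloudbuild_v1.BuildTrigger(
    name="webapp-main-deploy",
    description="Build and deploy webapp-service on pushes to main",
    github=cloudbuild_v1.GitHubEventsConfig(
        owner="your-username",   # placeholder
        name="webapp-repo",      # placeholder
        push=cloudbuild_v1.PushFilter(branch="^main$"),
    ),
    filename="cloudbuild.yaml",
    substitutions={"_DEPLOY_REGION": "us-central1"},
)

created = build_client.create_build_trigger(project_id="my-project-id", trigger=trigger)
print(f"Created trigger: {created.id}")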

Google Cloud Platform offers innovative services with particular strengths in data analytics and machine learning, and Google's global-scale infrastructure makes it a strong foundation for data-driven application development.