Microsoft Azure

Overview

Microsoft Azure is Microsoft's comprehensive cloud platform. It integrates tightly with Windows and Microsoft 365, and offers strong hybrid cloud and AI/machine learning services. Holding the second-largest share of the enterprise market after AWS, it is seeing rapid adoption among organizations with existing Microsoft environments thanks to that product affinity.

Details

Azure launched in 2010 and now operates in over 60 regions worldwide, supporting enterprise digital transformation through its affinity with Microsoft products. Recent additions (2025) include multi-agent orchestration on the Azure AI Foundry platform, Model Router, access to over 11,000 AI models, WebRTC support for real-time AI capabilities, and vector database functionality in SQL Server 2025. Azure continues to strengthen its position as a leading enterprise cloud platform combining deep AI integration with hybrid cloud capabilities.

Key Features

  • Hybrid Cloud: Integrated experience between on-premises and cloud
  • Microsoft 365 Integration: Tight integration with Microsoft 365 (formerly Office 365)
  • Azure Active Directory (now Microsoft Entra ID): Comprehensive identity and access management
  • AI & Machine Learning: Advanced AI services including Azure OpenAI and Cognitive Services
  • DevOps Tools: Integrated development and operations environment with Azure DevOps

Latest 2025 Features

  • Azure AI Foundry: Unified AI platform with access to 11,000+ models
  • Model Router: Automatically selects optimal AI models to optimize cost and quality
  • Realtime API with WebRTC: Low-latency audio streaming capabilities
  • o3-mini: Latest reasoning model (released January 31, 2025)
  • SQL Server 2025: Vector database capabilities for AI application support
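Model Router selects a model per request to balance cost against quality. The routing idea can be sketched as a simple rule: serve short, simple requests from a cheap model and escalate otherwise. This is a hypothetical sketch with made-up model names, not the actual Model Router API:

```python
# Hypothetical sketch of cost/quality routing, not the real Model Router API.
MODELS = [
    # (name, max input tokens it should handle, relative cost per 1K tokens)
    ("small-fast-model", 2_000, 1),
    ("large-reasoning-model", 128_000, 20),
]

def choose_model(prompt_tokens: int, needs_reasoning: bool = False) -> str:
    """Pick the cheapest configured model that can serve the request."""
    for name, max_tokens, _cost in MODELS:
        if needs_reasoning and name != "large-reasoning-model":
            continue  # reasoning-heavy requests skip the small model
        if prompt_tokens <= max_tokens:
            return name
    raise ValueError("prompt too large for any configured model")
```

The real service also weighs model quality and live pricing; the sketch only captures the cheapest-capable-model idea.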

Pros and Cons

Pros

  • Excellent integration with Microsoft product ecosystem
  • Comprehensive hybrid cloud solutions
  • Strong security and compliance for enterprises
  • Unified identity management through Active Directory integration
  • Optimized environment for .NET developers
  • Rich AI and machine learning services
  • Centralized resource management with Azure Resource Manager

Cons

  • Complex pricing structure
  • Limited benefits outside Microsoft environments
  • Steep learning curve (especially for Azure-specific services)
  • Linux support historically weaker than AWS, though the gap has narrowed
  • Some services are available in fewer regions than their AWS counterparts
  • Inconsistent documentation quality

Code Examples

Basic Setup and Account Configuration

# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Login to Azure CLI
az login

# List subscriptions
az account list --output table

# Set specific subscription
az account set --subscription "Your Subscription Name"

# Show current account information
az account show

# Create resource group
az group create \
  --name myResourceGroup \
  --location eastus \
  --tags Environment=Production Project=WebApp

# List available regions
az account list-locations --output table

# Set default resource group
az configure --defaults group=myResourceGroup location=eastus
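Automation scripts often select a subscription programmatically rather than pasting its name into `az account set`. `az account list --output json` emits a JSON array whose entries carry `name`, `id`, and `isDefault` fields; a sketch that picks the default subscription from that output (the sample data below is illustrative, not real IDs):

```python
import json

# Illustrative sample of `az account list --output json` output
sample = '''[
  {"name": "Dev", "id": "1111-aaaa", "isDefault": false},
  {"name": "Production", "id": "2222-bbbb", "isDefault": true}
]'''

def default_subscription(az_output: str) -> str:
    """Return the id of the subscription marked isDefault."""
    accounts = json.loads(az_output)
    for account in accounts:
        if account.get("isDefault"):
            return account["id"]
    raise LookupError("no default subscription set")

print(default_subscription(sample))  # → 2222-bbbb
```

In a real script the JSON would come from `subprocess.run(["az", "account", "list", "--output", "json"], ...)`.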

Compute Services (VMs, Containers)

# Azure Virtual Machine creation with Azure SDK for Python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.resource import ResourceManagementClient
import base64
import uuid

# Authentication setup
credential = DefaultAzureCredential()
subscription_id = "your-subscription-id"

# Initialize clients
compute_client = ComputeManagementClient(credential, subscription_id)
network_client = NetworkManagementClient(credential, subscription_id)
resource_client = ResourceManagementClient(credential, subscription_id)

# Create resource group
resource_group_params = {
    'location': 'eastus',
    'tags': {
        'Environment': 'Production',
        'Project': 'WebApp'
    }
}
resource_client.resource_groups.create_or_update(
    'myResourceGroup',
    resource_group_params
)

# Create virtual network
vnet_params = {
    'location': 'eastus',
    'address_space': {
        'address_prefixes': ['10.0.0.0/16']
    },
    'subnets': [
        {
            'name': 'default',
            'address_prefix': '10.0.0.0/24'
        }
    ]
}

vnet_result = network_client.virtual_networks.begin_create_or_update(
    'myResourceGroup',
    'myVNet',
    vnet_params
).result()

# Create public IP address
public_ip_params = {
    'location': 'eastus',
    'public_ip_allocation_method': 'Static',
    'sku': {'name': 'Standard'},
    'dns_settings': {
        'domain_name_label': f'myvm-{uuid.uuid4().hex[:8]}'
    }
}

public_ip_result = network_client.public_ip_addresses.begin_create_or_update(
    'myResourceGroup',
    'myPublicIP',
    public_ip_params
).result()

# Create network security group
nsg_params = {
    'location': 'eastus',
    'security_rules': [
        {
            'name': 'SSH',
            'protocol': 'Tcp',
            'source_port_range': '*',
            'destination_port_range': '22',
            'source_address_prefix': '*',
            'destination_address_prefix': '*',
            'access': 'Allow',
            'priority': 1000,
            'direction': 'Inbound'
        },
        {
            'name': 'HTTP',
            'protocol': 'Tcp',
            'source_port_range': '*',
            'destination_port_range': '80',
            'source_address_prefix': '*',
            'destination_address_prefix': '*',
            'access': 'Allow',
            'priority': 1010,
            'direction': 'Inbound'
        }
    ]
}

nsg_result = network_client.network_security_groups.begin_create_or_update(
    'myResourceGroup',
    'myNSG',
    nsg_params
).result()

# Create network interface
nic_params = {
    'location': 'eastus',
    'ip_configurations': [
        {
            'name': 'myIPConfig',
            'subnet': {
                'id': vnet_result.subnets[0].id
            },
            'public_ip_address': {
                'id': public_ip_result.id
            }
        }
    ],
    'network_security_group': {
        'id': nsg_result.id
    }
}

nic_result = network_client.network_interfaces.begin_create_or_update(
    'myResourceGroup',
    'myNIC',
    nic_params
).result()

# Create virtual machine
vm_params = {
    'location': 'eastus',
    'hardware_profile': {
        'vm_size': 'Standard_B2s'
    },
    'storage_profile': {
        'image_reference': {
            'publisher': 'Canonical',
            'offer': '0001-com-ubuntu-server-focal',
            'sku': '20_04-lts-gen2',
            'version': 'latest'
        },
        'os_disk': {
            'name': 'myOSDisk',
            'create_option': 'FromImage',
            'managed_disk': {
                'storage_account_type': 'Premium_LRS'
            }
        }
    },
    'os_profile': {
        'computer_name': 'myVM',
        'admin_username': 'azureuser',
        'disable_password_authentication': True,
        'linux_configuration': {
            'ssh': {
                'public_keys': [
                    {
                        'path': '/home/azureuser/.ssh/authorized_keys',
                        'key_data': 'ssh-rsa AAAAB3NzaC1yc2E... your-public-key'
                    }
                ]
            }
        },
        # Azure expects custom_data as a Base64-encoded string (requires `import base64`)
        'custom_data': base64.b64encode(b"""#!/bin/bash
apt-get update
apt-get install -y nginx
systemctl start nginx
systemctl enable nginx
echo "<h1>Hello from Azure VM!</h1>" > /var/www/html/index.html
""").decode()
    },
    'network_profile': {
        'network_interfaces': [
            {
                'id': nic_result.id
            }
        ]
    }
}

vm_result = compute_client.virtual_machines.begin_create_or_update(
    'myResourceGroup',
    'myVM',
    vm_params
).result()

print(f"VM created: {vm_result.name}")
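NSG rules such as the SSH and HTTP entries above are evaluated in priority order: lower numbers win, and custom rules must use priorities between 100 and 4096, each unique within the group. A small pre-flight validation sketch for rule lists of that shape (hypothetical helper, not part of the Azure SDK):

```python
def validate_nsg_priorities(rules: list[dict]) -> None:
    """Check NSG rule priorities are unique and in Azure's allowed range (100-4096)."""
    seen = set()
    for rule in rules:
        priority = rule["priority"]
        if not 100 <= priority <= 4096:
            raise ValueError(f"{rule['name']}: priority {priority} outside 100-4096")
        if priority in seen:
            raise ValueError(f"{rule['name']}: duplicate priority {priority}")
        seen.add(priority)

# The SSH/HTTP rules from the example above pass validation
validate_nsg_priorities([
    {"name": "SSH", "priority": 1000},
    {"name": "HTTP", "priority": 1010},
])
```

Leaving gaps between priorities (1000, 1010, ...) makes it easy to slot new rules in later without renumbering.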

Storage and Database Services

# Azure Storage and Cosmos DB operations
from azure.storage.blob import BlobServiceClient
from azure.cosmos import CosmosClient, PartitionKey
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.cosmosdb import CosmosDBManagementClient

# Create Storage Account
storage_client = StorageManagementClient(credential, subscription_id)

storage_params = {
    'sku': {'name': 'Standard_LRS'},
    'kind': 'StorageV2',
    'location': 'eastus',
    'access_tier': 'Hot',
    'allow_blob_public_access': True,
    'minimum_tls_version': 'TLS1_2',
    'enable_https_traffic_only': True
}

storage_result = storage_client.storage_accounts.begin_create(
    'myResourceGroup',
    'mystorageaccount123',
    storage_params
).result()

# Get storage account keys
keys = storage_client.storage_accounts.list_keys(
    'myResourceGroup',
    'mystorageaccount123'
)
storage_key = keys.keys[0].value

# Create blob service client
blob_service_client = BlobServiceClient(
    account_url="https://mystorageaccount123.blob.core.windows.net",
    credential=storage_key
)

# Create container
container_name = "webapp-uploads"
blob_service_client.create_container(
    container_name,
    public_access='blob',
    metadata={'purpose': 'file-uploads', 'environment': 'production'}
)

# Upload blob with metadata and tags
blob_client = blob_service_client.get_blob_client(
    container=container_name,
    blob="uploads/sample.txt"
)

with open("local-file.txt", "rb") as data:
    blob_client.upload_blob(
        data,
        metadata={
            'author': 'user123',
            'upload_time': '2025-01-01T00:00:00Z'
        },
        tags={
            'environment': 'production',
            'category': 'document'
        },
        overwrite=True
    )

# Create Cosmos DB account
cosmos_mgmt_client = CosmosDBManagementClient(credential, subscription_id)

cosmos_params = {
    'location': 'eastus',
    'locations': [
        {
            'location_name': 'East US',
            'failover_priority': 0,
            'is_zone_redundant': False
        }
    ],
    'database_account_offer_type': 'Standard',
    'consistency_policy': {
        'default_consistency_level': 'Session'
    },
    'capabilities': [
        # EnableServerless switches the account to serverless capacity;
        # the Core (SQL) API used below needs no extra capability flag
        {'name': 'EnableServerless'}
    ]
}

cosmos_account_result = cosmos_mgmt_client.database_accounts.begin_create_or_update(
    'myResourceGroup',
    'mycosmosaccount',
    cosmos_params
).result()

# Get Cosmos DB connection string
keys = cosmos_mgmt_client.database_accounts.list_keys(
    'myResourceGroup',
    'mycosmosaccount'
)
cosmos_key = keys.primary_master_key

# Create Cosmos client and database
cosmos_client = CosmosClient(
    url="https://mycosmosaccount.documents.azure.com:443/",
    credential=cosmos_key
)

database = cosmos_client.create_database_if_not_exists(id='WebAppDB')
container = database.create_container_if_not_exists(
    id='users',
    partition_key=PartitionKey(path="/userId")
    # offer_throughput is omitted: serverless accounts reject provisioned throughput
)

# Insert document
user_doc = {
    'id': 'user123',
    'userId': 'user123',
    'name': 'John Doe',
    'email': '[email protected]',
    'status': 'active',
    'createdAt': '2025-01-01T00:00:00Z',
    'preferences': {
        'theme': 'dark',
        'language': 'en'
    }
}

container.create_item(body=user_doc)

# Query documents
query = "SELECT * FROM c WHERE c.status = @status"
items = list(container.query_items(
    query=query,
    parameters=[
        {"name": "@status", "value": "active"}
    ],
    enable_cross_partition_query=True
))

print(f"Found {len(items)} active users")
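Storage account names like `mystorageaccount123` must be globally unique, 3-24 characters long, and contain only lowercase letters and digits; deployment scripts commonly validate candidate names before calling the management API. A sketch of that check (hypothetical helper):

```python
import re

def valid_storage_account_name(name: str) -> bool:
    """Storage account names: 3-24 chars, lowercase letters and digits only."""
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

assert valid_storage_account_name("mystorageaccount123")
assert not valid_storage_account_name("My-Storage")   # uppercase and hyphen
assert not valid_storage_account_name("ab")           # too short
```

Global uniqueness still has to be checked against the service (for example via `az storage account check-name`).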

Networking and Security

# Azure Key Vault and network security
from azure.keyvault.secrets import SecretClient
from azure.mgmt.keyvault import KeyVaultManagementClient
from azure.mgmt.network import NetworkManagementClient
import uuid

# Create Key Vault
keyvault_client = KeyVaultManagementClient(credential, subscription_id)

vault_name = f"myvault-{uuid.uuid4().hex[:8]}"
vault_params = {
    'location': 'eastus',
    'properties': {
        'sku': {
            'family': 'A',
            'name': 'standard'
        },
        'tenant_id': 'your-tenant-id',
        'access_policies': [
            {
                'tenant_id': 'your-tenant-id',
                'object_id': 'your-user-object-id',
                'permissions': {
                    'secrets': ['get', 'list', 'set', 'delete'],
                    'keys': ['get', 'list', 'create', 'delete'],
                    'certificates': ['get', 'list', 'create', 'delete']
                }
            }
        ],
        'enabled_for_disk_encryption': True,
        'enabled_for_template_deployment': True,
        'enabled_for_deployment': True,
        'soft_delete_retention_in_days': 90,
        'enable_purge_protection': True
    }
}

vault_result = keyvault_client.vaults.begin_create_or_update(
    'myResourceGroup',
    vault_name,
    vault_params
).result()

# Create Key Vault secret client
secret_client = SecretClient(
    vault_url=f"https://{vault_name}.vault.azure.net/",
    credential=credential
)

# Set secrets with metadata
secret_client.set_secret(
    "database-password",
    "SecurePassword123!",
    content_type="password",
    tags={
        "environment": "production",
        "service": "database"
    }
)

secret_client.set_secret(
    "api-key",
    "your-api-key-here",
    content_type="api-key",
    tags={
        "environment": "production",
        "service": "external-api"
    }
)

# Get secret
retrieved_secret = secret_client.get_secret("database-password")
print(f"Retrieved secret: {retrieved_secret.name}")

# Create Application Gateway for load balancing
app_gateway_params = {
    'location': 'eastus',
    'sku': {
        'name': 'Standard_v2',
        'tier': 'Standard_v2',
        'capacity': 2
    },
    'gateway_ip_configurations': [
        {
            'name': 'appGatewayIpConfig',
            'subnet': {
                'id': f"{vnet_result.id}/subnets/default"
            }
        }
    ],
    'frontend_ip_configurations': [
        {
            'name': 'appGwPublicFrontendIp',
            'public_ip_address': {
                'id': public_ip_result.id
            }
        }
    ],
    'frontend_ports': [
        {
            'name': 'port_80',
            'port': 80
        }
    ],
    'backend_address_pools': [
        {
            'name': 'myBackendPool',
            'backend_addresses': [
                {
                    'ip_address': '10.0.0.4'
                },
                {
                    'ip_address': '10.0.0.5'
                }
            ]
        }
    ],
    'backend_http_settings_collection': [
        {
            'name': 'myHTTPSetting',
            'port': 80,
            'protocol': 'Http',
            'cookie_based_affinity': 'Disabled',
            'request_timeout': 20
        }
    ],
    'http_listeners': [
        {
            'name': 'myListener',
            'frontend_ip_configuration': {
                'id': '/subscriptions/your-subscription/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway/frontendIPConfigurations/appGwPublicFrontendIp'
            },
            'frontend_port': {
                'id': '/subscriptions/your-subscription/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway/frontendPorts/port_80'
            },
            'protocol': 'Http'
        }
    ],
    'request_routing_rules': [
        {
            'name': 'myRoutingRule',
            'rule_type': 'Basic',
            'http_listener': {
                'id': '/subscriptions/your-subscription/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway/httpListeners/myListener'
            },
            'backend_address_pool': {
                'id': '/subscriptions/your-subscription/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway/backendAddressPools/myBackendPool'
            },
            'backend_http_settings': {
                'id': '/subscriptions/your-subscription/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway/backendHttpSettingsCollection/myHTTPSetting'
            }
        }
    ]
}

app_gateway_result = network_client.application_gateways.begin_create_or_update(
    'myResourceGroup',
    'myAppGateway',
    app_gateway_params
).result()
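The Application Gateway definition above repeats long resource ID strings by hand. These always follow the same `/subscriptions/.../resourceGroups/.../providers/...` pattern, so a small builder keeps them consistent and avoids typos (hypothetical helper, not part of the SDK):

```python
def child_resource_id(subscription: str, group: str, gateway: str,
                      child_type: str, child_name: str) -> str:
    """Build an Application Gateway child resource ID from its parts."""
    return (
        f"/subscriptions/{subscription}/resourceGroups/{group}"
        f"/providers/Microsoft.Network/applicationGateways/{gateway}"
        f"/{child_type}/{child_name}"
    )

# Reproduces the listener ID hard-coded in the example above
listener_id = child_resource_id(
    "your-subscription", "myResourceGroup", "myAppGateway",
    "httpListeners", "myListener"
)
```

The same pattern generalizes to any Azure resource type by swapping the provider and type segments.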

Serverless and Functions

# Azure Functions (Python)
import azure.functions as func
import logging
import json
import os
from datetime import datetime, timedelta
from azure.cosmos import CosmosClient
from azure.storage.blob import BlobServiceClient

app = func.FunctionApp()

@app.function_name(name="HttpTrigger")
@app.route(route="api/users", auth_level=func.AuthLevel.FUNCTION)
def http_trigger(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    
    try:
        # Get request data
        req_body = req.get_json()
        action = req_body.get('action', 'get')
        
        # Initialize Cosmos DB client
        cosmos_client = CosmosClient(
            url=os.environ['COSMOS_DB_URL'],
            credential=os.environ['COSMOS_DB_KEY']
        )
        
        database = cosmos_client.get_database_client('WebAppDB')
        container = database.get_container_client('users')
        
        if action == 'create':
            # Create new user
            user_data = {
                'id': req_body['id'],
                'userId': req_body['id'],
                'name': req_body['name'],
                'email': req_body['email'],
                'createdAt': datetime.utcnow().isoformat(),
                'status': 'active'
            }
            
            container.create_item(body=user_data)
            return func.HttpResponse(
                json.dumps({"message": "User created successfully", "user": user_data}),
                status_code=201,
                headers={'Content-Type': 'application/json'}
            )
            
        elif action == 'get':
            # Get user
            user_id = req_body.get('userId')
            if user_id:
                user = container.read_item(item=user_id, partition_key=user_id)
                return func.HttpResponse(
                    json.dumps(user),
                    status_code=200,
                    headers={'Content-Type': 'application/json'}
                )
            else:
                # List all users
                users = list(container.query_items(
                    query="SELECT * FROM c",
                    enable_cross_partition_query=True
                ))
                return func.HttpResponse(
                    json.dumps(users),
                    status_code=200,
                    headers={'Content-Type': 'application/json'}
                )
        
    except Exception as e:
        logging.error(f"Error: {str(e)}")
        return func.HttpResponse(
            json.dumps({"error": str(e)}),
            status_code=500,
            headers={'Content-Type': 'application/json'}
        )

@app.function_name(name="BlobTrigger")
@app.blob_trigger(arg_name="myblob", 
                  path="uploads/{name}",
                  connection="AzureWebJobsStorage")
def blob_trigger(myblob: func.InputStream):
    logging.info(f"Python blob trigger function processed blob. "
                 f"Name: {myblob.name}, "
                 f"Blob size: {myblob.length} bytes")
    
    # Process the uploaded file
    blob_content = myblob.read()
    
    # Example: Save metadata to Cosmos DB
    cosmos_client = CosmosClient(
        url=os.environ['COSMOS_DB_URL'],
        credential=os.environ['COSMOS_DB_KEY']
    )
    
    database = cosmos_client.get_database_client('WebAppDB')
    container = database.get_container_client('files')
    
    file_metadata = {
        'id': myblob.name.split('/')[-1],
        'fileName': myblob.name,
        'size': myblob.length,
        'uploadedAt': datetime.utcnow().isoformat(),
        'processed': True
    }
    
    container.upsert_item(body=file_metadata)

@app.function_name(name="TimerTrigger")
@app.timer_trigger(schedule="0 */5 * * * *", 
                   arg_name="myTimer", 
                   run_on_startup=False,
                   use_monitor=False)
def timer_trigger(myTimer: func.TimerRequest) -> None:
    logging.info('Python timer trigger function executed.')
    
    if myTimer.past_due:
        logging.info('The timer is past due!')
    
    # Example: Cleanup old records
    cosmos_client = CosmosClient(
        url=os.environ['COSMOS_DB_URL'],
        credential=os.environ['COSMOS_DB_KEY']
    )
    
    database = cosmos_client.get_database_client('WebAppDB')
    container = database.get_container_client('logs')
    
    # Delete logs older than 30 days
    old_date = (datetime.utcnow() - timedelta(days=30)).isoformat()
    
    # Parameterize the cutoff instead of interpolating it into the query string
    query = "SELECT * FROM c WHERE c.timestamp < @cutoff"
    old_logs = list(container.query_items(
        query=query,
        parameters=[{"name": "@cutoff", "value": old_date}],
        enable_cross_partition_query=True
    ))
    
    for log in old_logs:
        container.delete_item(item=log['id'], partition_key=log['id'])
    
    logging.info(f'Deleted {len(old_logs)} old log entries')
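The timer schedule `0 */5 * * * *` is a six-field NCRONTAB expression (seconds come first), so it fires at second 0 of every fifth minute. A sketch of how a `*/n` minute field matches a given time (hypothetical helper; the Functions runtime does the real parsing):

```python
from datetime import datetime

def matches_minute_step(field: str, when: datetime) -> bool:
    """Match a cron minute field of the form '*', '*/n', or a literal minute."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return when.minute % int(field[2:]) == 0
    return when.minute == int(field)

assert matches_minute_step("*/5", datetime(2025, 1, 1, 12, 10))
assert not matches_minute_step("*/5", datetime(2025, 1, 1, 12, 11))
```

Full NCRONTAB also supports ranges and lists (e.g. `0-30`, `5,35`); this sketch covers only the step form used in the example.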

Monitoring and DevOps Integration

# Azure Monitor and Application Insights
from azure.mgmt.applicationinsights import ApplicationInsightsManagementClient
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

# Create Application Insights
app_insights_client = ApplicationInsightsManagementClient(credential, subscription_id)

app_insights_params = {
    'location': 'eastus',
    'kind': 'web',
    'application_type': 'web',
    'retention_in_days': 90,
    'sampling_percentage': 100.0,
    'disable_ip_masking': False,
    'disable_local_auth': False
}

app_insights_result = app_insights_client.components.create_or_update(
    'myResourceGroup',
    'myAppInsights',
    app_insights_params
)

# Create Log Analytics Workspace
log_analytics_client = LogAnalyticsManagementClient(credential, subscription_id)

workspace_params = {
    'location': 'eastus',
    'sku': {
        'name': 'PerGB2018'
    },
    'retention_in_days': 30,
    'tags': {
        'Environment': 'Production',
        'Purpose': 'Monitoring'
    }
}

workspace_result = log_analytics_client.workspaces.begin_create_or_update(
    'myResourceGroup',
    'myLogAnalyticsWorkspace',
    workspace_params
).result()

# Create metric alerts
monitor_client = MonitorManagementClient(credential, subscription_id)

# CPU utilization alert
cpu_alert_params = {
    'location': 'global',
    'description': 'Alert when CPU utilization exceeds 80%',
    'severity': 2,
    'enabled': True,
    'scopes': [f'/subscriptions/{subscription_id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM'],
    'evaluation_frequency': 'PT1M',
    'window_size': 'PT5M',
    'criteria': {
        'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria',
        'all_of': [
            {
                'name': 'High CPU Usage',
                'metric_name': 'Percentage CPU',
                'metric_namespace': 'Microsoft.Compute/virtualMachines',
                'operator': 'GreaterThan',
                'threshold': 80,
                'time_aggregation': 'Average',
                'dimensions': [],
                'skip_metric_validation': False
            }
        ]
    },
    'actions': [
        {
            'action_group_id': f'/subscriptions/{subscription_id}/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup'
        }
    ]
}

cpu_alert_result = monitor_client.metric_alerts.create_or_update(
    'myResourceGroup',
    'HighCPUAlert',
    cpu_alert_params
)

# Application performance alert
app_perf_alert_params = {
    'location': 'global',
    'description': 'Alert when response time exceeds 5 seconds',
    'severity': 1,
    'enabled': True,
    'scopes': [app_insights_result.id],
    'evaluation_frequency': 'PT1M',
    'window_size': 'PT5M',
    'criteria': {
        'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria',
        'all_of': [
            {
                'name': 'High Response Time',
                'metric_name': 'requests/duration',
                'metric_namespace': 'Microsoft.Insights/components',
                'operator': 'GreaterThan',
                'threshold': 5000,
                'time_aggregation': 'Average'
            }
        ]
    }
}

app_perf_alert_result = monitor_client.metric_alerts.create_or_update(
    'myResourceGroup',
    'HighResponseTimeAlert',
    app_perf_alert_params
)

print("Monitoring setup completed:")
print(f"Application Insights: {app_insights_result.instrumentation_key}")
print(f"Log Analytics Workspace: {workspace_result.customer_id}")
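Alert parameters such as `evaluation_frequency='PT1M'` and `window_size='PT5M'` are ISO 8601 durations. A sketch converting the simple `PT…H/M/S` forms used here into seconds, handy when reasoning about how much data an alert window covers (hypothetical helper covering only these forms):

```python
import re

def iso_duration_seconds(duration: str) -> int:
    """Convert simple ISO 8601 durations like PT1M, PT5M, PT1H to seconds."""
    match = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", duration)
    if not match or not any(match.groups()):
        raise ValueError(f"unsupported duration: {duration}")
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

assert iso_duration_seconds("PT1M") == 60    # evaluation frequency above
assert iso_duration_seconds("PT5M") == 300   # window size above
```

Full ISO 8601 durations can also carry date components (`P1DT12H`); those are out of scope for this sketch.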

# Azure DevOps Pipeline (azure-pipelines.yml)
trigger:
  branches:
    include:
      - main
      - develop
  paths:
    include:
      - src/*
    exclude:
      - docs/*
      - README.md

variables:
  azureSubscription: 'MyAzureServiceConnection'
  resourceGroupName: 'myResourceGroup'
  webAppName: 'myWebApp'
  containerRegistry: 'myregistry.azurecr.io'
  imageRepository: 'webapp'
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: 'Build and push'
  jobs:
  - job: Build
    displayName: 'Build'
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: 'Build and push image to container registry'
      inputs:
        command: 'buildAndPush'
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: 'dockerRegistryServiceConnection'
        tags: |
          $(tag)
          latest
    
    - task: PublishPipelineArtifact@1
      inputs:
        artifactName: 'manifests'
        path: 'manifests'

- stage: Deploy
  displayName: 'Deploy stage'
  dependsOn: Build
  jobs:
  - deployment: Deploy
    displayName: 'Deploy'
    pool:
      vmImage: $(vmImageName)
    environment: 'production.default'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebAppContainer@1
            displayName: 'Deploy Azure Web App'
            inputs:
              azureSubscription: $(azureSubscription)
              appName: $(webAppName)
              containers: $(containerRegistry)/$(imageRepository):$(tag)
              
          - task: AzureCLI@2
            displayName: 'Health check and monitoring setup'
            inputs:
              azureSubscription: $(azureSubscription)
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                # Health check
                echo "Performing health check..."
                response=$(curl -s -o /dev/null -w "%{http_code}" https://$(webAppName).azurewebsites.net/health)
                if [ $response -eq 200 ]; then
                  echo "✅ Deployment successful - Health check passed"
                else
                  echo "❌ Deployment failed - Health check failed with status $response"
                  exit 1
                fi
                
                # Update Application Insights
                echo "Configuring Application Insights..."
                az webapp config appsettings set \
                  --resource-group $(resourceGroupName) \
                  --name $(webAppName) \
                  --settings APPINSIGHTS_INSTRUMENTATIONKEY=$(appInsightsKey)
                
                echo "🚀 Deployment completed successfully!"

- stage: Tests
  displayName: 'Post-deployment tests'
  dependsOn: Deploy
  jobs:
  - job: IntegrationTests
    displayName: 'Run integration tests'
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: AzureCLI@2
      displayName: 'Run integration tests'
      inputs:
        azureSubscription: $(azureSubscription)
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          # Run integration tests against deployed application
          npm install
          npm run test:integration -- --baseUrl="https://$(webAppName).azurewebsites.net"
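The inline health check in the deploy stage fails the run on a single non-200 response, but freshly deployed apps often need a few seconds to warm up, so pipelines commonly retry the probe. A sketch of that retry logic with an injectable probe function (hypothetical; the pipeline's real probe is the `curl` call above):

```python
import time

def check_health(probe, retries: int = 5, delay: float = 0.0) -> bool:
    """Call probe() until it returns HTTP 200 or retries are exhausted."""
    for _attempt in range(retries):
        if probe() == 200:
            return True
        time.sleep(delay)  # back off before the next attempt
    return False

# Simulated app that becomes healthy on the third probe
responses = iter([503, 503, 200])
assert check_health(lambda: next(responses))
```

In the pipeline itself the same idea is a `for` loop around the `curl` call with a `sleep` between attempts.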

Microsoft Azure provides powerful integration with the Microsoft ecosystem and comprehensive enterprise solutions, strongly supporting development and operations in hybrid cloud environments.