Introduction: Cloud Communication
Cloud providers expose services through APIs. Understanding how AWS, Azure, and GCP handle authentication, authorization, and service communication is essential for cloud-native development.
Series Context: This is Part 18 of 20 in the Complete Protocols Master series. Cloud APIs build on HTTP/REST and use provider-specific authentication mechanisms.
1. OSI Model & Protocol Foundations: Network layers, encapsulation, TCP/IP model
2. Physical & Data Link Layers: Ethernet, Wi-Fi, VLANs, MAC addressing
3. Network Layer & IP: IPv4, IPv6, ICMP, routing protocols
4. Transport Layer: TCP, UDP, QUIC, ports, sockets
5. Session & Presentation Layers: TLS handshake, encryption, serialization
6. Web Protocols: HTTP/1.1, HTTP/2, HTTP/3, WebSockets
7. API Protocols: REST, GraphQL, gRPC, SOAP
8. DNS Deep Dive: DNS hierarchy, records, DNSSEC
9. Email Protocols: SMTP, IMAP, POP3, SPF/DKIM/DMARC
10. File Transfer Protocols: FTP, SFTP, SCP, rsync
11. Real-Time Protocols: WebRTC, SIP, RTP, VoIP
12. Streaming Protocols: HLS, DASH, RTMP, media delivery
13. IoT Protocols: MQTT, CoAP, Zigbee, LoRaWAN
14. VPN & Tunneling: IPsec, OpenVPN, WireGuard
15. Authentication Protocols: OAuth, SAML, OIDC, Kerberos
16. Network Management: SNMP, NetFlow, Syslog
17. Security Protocols: TLS/SSL, certificates, PKI
18. Cloud Provider Protocols: AWS, Azure, GCP APIs ← You Are Here
19. Emerging Protocols: QUIC, HTTP/3, WebTransport
20. Web Security Standards: CORS, CSP, HSTS, SRI
Overview
Cloud Provider API Comparison
| Aspect | AWS | Azure | GCP |
|---|---|---|---|
| Auth Method | SigV4 | Bearer Token / Managed Identity | OAuth 2.0 / Service Account |
| API Style | Query/REST | REST | REST + gRPC |
| CLI | aws | az | gcloud |
| SDK | boto3 | azure-sdk | google-cloud |
| IAM Model | Policies | RBAC + Policies | IAM Roles |
AWS Protocols
AWS uses Signature Version 4 (SigV4) for API authentication. Every request is cryptographically signed with a key derived from your secret access key, so the secret itself never travels over the wire and no tokens or passwords are sent in transit.
SigV4
AWS Signature Version 4
AWS SigV4 Signing Process:

1. CREATE CANONICAL REQUEST
   HTTPMethod + '\n' +
   CanonicalURI + '\n' +
   CanonicalQueryString + '\n' +
   CanonicalHeaders + '\n' +
   SignedHeaders + '\n' +
   HashedPayload

2. CREATE STRING TO SIGN
   Algorithm + '\n' +
   RequestDateTime + '\n' +
   CredentialScope + '\n' +
   HashedCanonicalRequest

3. CALCULATE SIGNATURE
   DateKey    = HMAC-SHA256("AWS4" + SecretKey, Date)
   RegionKey  = HMAC-SHA256(DateKey, Region)
   ServiceKey = HMAC-SHA256(RegionKey, Service)
   SigningKey = HMAC-SHA256(ServiceKey, "aws4_request")
   Signature  = HMAC-SHA256(SigningKey, StringToSign)

4. ADD TO REQUEST
   Authorization: AWS4-HMAC-SHA256
       Credential=AKIAIOSFODNN7EXAMPLE/20260131/us-east-1/s3/aws4_request,
       SignedHeaders=host;x-amz-date,
       Signature=calculated_signature
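The key derivation in step 3 is just four chained HMAC-SHA256 calls and can be reproduced with Python's standard library. A minimal sketch using documentation-style example values (never real credentials):

```python
import hashlib
import hmac

def _sign(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step in the SigV4 key chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: Date -> Region -> Service -> aws4_request."""
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")

def sign_string(signing_key: bytes, string_to_sign: str) -> str:
    """The final signature is the hex-encoded HMAC of the string-to-sign."""
    return hmac.new(signing_key, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Example-only secret key (AWS documentation sample, not a real credential)
key = derive_signing_key(
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "20260131", "us-east-1", "s3",
)
print(key.hex())
```

Because each step mixes in one component of the credential scope, a key derived for one date, region, or service is useless for any other.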
# AWS CLI uses SigV4 automatically
# List S3 buckets (CLI handles signing)
aws s3 ls
# Debug to see signed request
aws s3 ls --debug 2>&1 | grep "Authorization:"
# Presigned URL (time-limited signed URL)
aws s3 presign s3://my-bucket/file.pdf --expires-in 3600
# Assume role (temporary credentials)
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/MyRole \
    --role-session-name my-session
# AWS API calls with boto3
import boto3

def aws_api_example():
    """Demonstrate AWS API patterns"""
    # SDK handles SigV4 automatically
    s3 = boto3.client('s3')

    # List buckets
    response = s3.list_buckets()
    for bucket in response['Buckets']:
        print(f"Bucket: {bucket['Name']}")

    # Generate presigned URL
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-bucket', 'Key': 'file.pdf'},
        ExpiresIn=3600
    )
    print(f"Presigned URL: {url}")

def manual_sigv4_example():
    """Manual SigV4 signing (for understanding)"""
    print("""
    # Manual SigV4 (rarely needed, SDK does this)
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest
    from botocore.credentials import Credentials

    credentials = Credentials(
        access_key='AKIAIOSFODNN7EXAMPLE',
        secret_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
    )

    request = AWSRequest(
        method='GET',
        url='https://s3.us-east-1.amazonaws.com/my-bucket',
        headers={'Host': 's3.us-east-1.amazonaws.com'}
    )

    SigV4Auth(credentials, 's3', 'us-east-1').add_auth(request)
    # Now request.headers contains Authorization
    """)

# IAM role for EC2 (no hardcoded credentials)
print("""
EC2 Instance Profile (Best Practice):
• EC2 instance assumes IAM role
• Credentials auto-rotate
• No keys in code or env vars

# boto3 auto-discovers credentials:
# 1. Environment variables
# 2. Shared credentials file
# 3. IAM role (EC2/Lambda/ECS)
""")
AWS Services
Common AWS Service Protocols
AWS Service Protocols:
S3:
• REST API over HTTPS
• Path-style: s3.amazonaws.com/bucket/key
• Virtual-hosted: bucket.s3.amazonaws.com/key
• Transfer Acceleration: bucket.s3-accelerate.amazonaws.com
DynamoDB:
• HTTPS POST with JSON
• x-amz-target: DynamoDB_20120810.GetItem
SQS:
• REST or Query API
• Long polling: WaitTimeSeconds=20
Lambda:
• Invoke: POST /functions/{name}/invocations
• Async: X-Amz-Invocation-Type: Event
API Gateway:
• REST API or HTTP API
• WebSocket API supported
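S3's two addressing styles differ only in where the bucket name appears. A small illustrative helper (the function name and defaults are ours, not an AWS API) makes the difference concrete:

```python
def s3_object_url(bucket: str, key: str, region: str = "us-east-1",
                  virtual_hosted: bool = True) -> str:
    """Build an S3 object URL in virtual-hosted or path style."""
    if virtual_hosted:
        # Virtual-hosted style: the bucket name becomes part of the hostname
        return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    # Path style (legacy): the bucket is the first path segment
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

print(s3_object_url("my-bucket", "file.pdf"))
# https://my-bucket.s3.us-east-1.amazonaws.com/file.pdf
print(s3_object_url("my-bucket", "file.pdf", virtual_hosted=False))
# https://s3.us-east-1.amazonaws.com/my-bucket/file.pdf
```

Virtual-hosted style is the modern default: because the bucket is in the hostname, requests can be routed (and TLS terminated) per bucket.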
Azure Protocols
Azure uses OAuth 2.0 bearer tokens issued by Microsoft Entra ID (formerly Azure AD). Managed identities eliminate the need for credentials in code entirely: Azure provides tokens automatically.
Auth
Azure Authentication
Azure Authentication Options:
1. SERVICE PRINCIPAL (App Registration)
• Client credentials flow
• App ID + Secret/Certificate
• For automation, CI/CD
2. MANAGED IDENTITY (Recommended)
• System-assigned: tied to resource
• User-assigned: reusable across resources
• No secrets to manage!
3. USER AUTHENTICATION
• Interactive login
• Device code flow
• For development
Token Endpoint:
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded
client_id={app-id}&
scope=https://management.azure.com/.default&
client_secret={secret}&
grant_type=client_credentials
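Since the token request above is plain form-encoded HTTP, its shape can be built with the standard library alone. A sketch with placeholder tenant and app values, stopping short of the actual network call:

```python
from urllib.parse import urlencode

def client_credentials_request(tenant: str, client_id: str, client_secret: str,
                               scope: str = "https://management.azure.com/.default"):
    """Build the URL and form body for an OAuth 2.0 client-credentials token request."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "scope": scope,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
    })
    return url, body

# Placeholder values for illustration only
url, body = client_credentials_request("my-tenant-id", "my-app-id", "my-secret")
print(url)
print(body)
```

Note that the scope is URL-encoded in the body; the `.default` suffix requests all statically granted permissions for that resource.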
# Azure CLI authentication
# Login interactively
az login
# Login with service principal
az login --service-principal \
    -u "app-id" \
    -p "secret" \
    --tenant "tenant-id"
# Get access token (for API calls)
az account get-access-token --resource https://management.azure.com/
# Managed identity (on Azure VM/App Service)
curl -H "Metadata: true" \
    "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
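The IMDS endpoint is plain HTTP on a link-local address; the mandatory `Metadata: true` header is what stops simple redirect-based SSRF tricks from reaching it. A sketch that only constructs the request (hypothetical helper, no network call):

```python
def imds_token_request(resource: str, api_version: str = "2018-02-01"):
    """Build the Azure IMDS managed-identity token request (no network call)."""
    url = ("http://169.254.169.254/metadata/identity/oauth2/token"
           f"?api-version={api_version}&resource={resource}")
    # The Metadata header is mandatory; IMDS rejects requests without it
    headers = {"Metadata": "true"}
    return url, headers

url, headers = imds_token_request("https://management.azure.com/")
print(url)
print(headers)
```

The same endpoint works on VMs, App Service, and Functions; only the `resource` parameter changes depending on which API the token is for.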
# Azure API with Python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

def azure_api_example():
    """Demonstrate Azure API patterns"""
    # DefaultAzureCredential tries multiple methods:
    # 1. Environment variables
    # 2. Managed Identity
    # 3. Azure CLI
    # 4. Visual Studio Code
    credential = DefaultAzureCredential()

    # Use with Blob Storage
    blob_service = BlobServiceClient(
        account_url="https://mystorageaccount.blob.core.windows.net",
        credential=credential
    )

    # List containers
    for container in blob_service.list_containers():
        print(f"Container: {container.name}")

def managed_identity_example():
    """Managed Identity on Azure resources"""
    print("""
    # On Azure VM/App Service/Function
    from azure.identity import ManagedIdentityCredential

    # System-assigned identity
    credential = ManagedIdentityCredential()

    # User-assigned identity
    credential = ManagedIdentityCredential(
        client_id="user-assigned-client-id"
    )

    # Get token for specific resource
    token = credential.get_token("https://vault.azure.net/.default")
    print(f"Token: {token.token[:50]}...")

    # Use with Key Vault
    from azure.keyvault.secrets import SecretClient
    client = SecretClient(
        vault_url="https://my-vault.vault.azure.net",
        credential=credential
    )
    secret = client.get_secret("my-secret")
    """)

azure_api_example()
ARM API
Azure Resource Manager API
Azure ARM REST API:
Base URL: https://management.azure.com
List resource groups:
GET /subscriptions/{subscriptionId}/resourcegroups?api-version=2021-04-01
Authorization: Bearer {token}
Create storage account:
PUT /subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Storage/storageAccounts/{name}?api-version=2021-08-01
Authorization: Bearer {token}
Content-Type: application/json
{
"location": "eastus",
"sku": {"name": "Standard_LRS"},
"kind": "StorageV2"
}
API Versions:
• Each resource type has versioned API
• Always specify api-version parameter
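ARM URLs follow a fixed /subscriptions/{id}/resourceGroups/{rg}/providers/... hierarchy with a mandatory api-version query parameter. A hypothetical helper showing that shape:

```python
def arm_url(subscription: str, api_version: str,
            resource_group: str = None, provider: str = None,
            resource_type: str = None, name: str = None) -> str:
    """Build an Azure Resource Manager URL with the mandatory api-version."""
    parts = [f"https://management.azure.com/subscriptions/{subscription}"]
    if resource_group:
        parts.append(f"resourceGroups/{resource_group}")
    if provider and resource_type and name:
        parts.append(f"providers/{provider}/{resource_type}/{name}")
    # Every ARM call must carry api-version, or the request is rejected
    return "/".join(parts) + f"?api-version={api_version}"

print(arm_url("sub-id", "2021-08-01", "my-rg",
              "Microsoft.Storage", "storageAccounts", "mystorageacct"))
```

The hierarchy also mirrors the RBAC scoping model: a role assignment at any URL prefix (subscription, resource group, resource) covers everything beneath it.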
GCP Protocols
GCP uses OAuth 2.0 with service accounts. Service account keys are JSON files, but Workload Identity (like Azure Managed Identity) is preferred.
Auth
GCP Authentication
GCP Authentication Methods:
1. SERVICE ACCOUNT KEY (JSON file)
• Download from Console
• Set GOOGLE_APPLICATION_CREDENTIALS
• Rotate regularly (security risk)
2. WORKLOAD IDENTITY (Recommended)
• No keys to manage
• For GKE, Cloud Run, Cloud Functions
• Maps Kubernetes SA to GCP SA
3. USER CREDENTIALS
• gcloud auth login
• For development only
4. METADATA SERVER (on GCP)
• Auto-available on GCE, GKE
• curl metadata.google.internal/...
Application Default Credentials (ADC) search order:
1. GOOGLE_APPLICATION_CREDENTIALS env var
2. Credentials file written by gcloud auth application-default login
3. Metadata server (on GCP)
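The ADC search order can be mimicked in a few lines. This is a deliberately simplified sketch (the real google-auth library also handles Windows paths, token refresh, and impersonation):

```python
import os

def resolve_adc():
    """Simplified mirror of the ADC search order (illustrative only)."""
    # 1. Explicit key file pointed to by the environment variable
    explicit = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if explicit:
        return ("env", explicit)
    # 2. Well-known file written by `gcloud auth application-default login`
    #    (Linux/macOS path; Windows uses %APPDATA%\gcloud instead)
    well_known = os.path.expanduser(
        "~/.config/gcloud/application_default_credentials.json")
    if os.path.exists(well_known):
        return ("gcloud", well_known)
    # 3. Fall back to the metadata server (only reachable on GCP)
    return ("metadata", "http://metadata.google.internal")

print(resolve_adc())
```

The practical upshot: the same code runs unchanged on a laptop (gcloud file) and on GCE/GKE (metadata server), with no credential plumbing.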
# GCP gcloud authentication
# Login (user)
gcloud auth login
# Application Default Credentials (for SDKs)
gcloud auth application-default login
# Activate service account
gcloud auth activate-service-account \
    --key-file=service-account.json
# Get access token
gcloud auth print-access-token
# Impersonate service account
gcloud auth print-access-token \
    --impersonate-service-account=sa@project.iam.gserviceaccount.com
# GCP API with Python
from google.cloud import storage
from google.auth import default

def gcp_api_example():
    """Demonstrate GCP API patterns"""
    # ADC auto-discovery
    credentials, project = default()
    print(f"Using project: {project}")

    # Cloud Storage
    client = storage.Client()

    # List buckets
    for bucket in client.list_buckets():
        print(f"Bucket: {bucket.name}")

def service_account_example():
    """Service account usage patterns"""
    print("""
    # With key file (less secure)
    from google.oauth2 import service_account

    credentials = service_account.Credentials.from_service_account_file(
        'service-account.json',
        scopes=['https://www.googleapis.com/auth/cloud-platform']
    )
    client = storage.Client(credentials=credentials)

    # With Workload Identity (on GKE)
    # Kubernetes ServiceAccount mapped to GCP ServiceAccount
    # ADC automatically picks up credentials
    client = storage.Client()  # Just works!
    """)

def gcp_rest_api():
    """Direct REST API call"""
    print("""
    # GCP REST API (rarely needed with SDK)
    import requests
    import google.auth
    import google.auth.transport.requests

    credentials, project = google.auth.default()
    auth_req = google.auth.transport.requests.Request()
    credentials.refresh(auth_req)

    headers = {
        'Authorization': f'Bearer {credentials.token}'
    }

    # List buckets
    response = requests.get(
        f'https://storage.googleapis.com/storage/v1/b?project={project}',
        headers=headers
    )
    """)

gcp_api_example()
Cloud-Native Patterns
Beyond provider-specific APIs, cloud-native applications use common patterns for service communication: service mesh, message queues, and event-driven architectures.
Service Mesh
Service-to-Service Communication
Service Mesh (Istio, Linkerd):
Without Service Mesh:
[Service A] → HTTP → [Service B]
• No encryption
• No retry logic
• No observability
With Service Mesh:
[Service A] → [Sidecar] → mTLS → [Sidecar] → [Service B]
• Automatic mTLS
• Retries, timeouts
• Distributed tracing
• Traffic management
Managed options (AWS App Mesh, Azure's Open Service Mesh add-on for AKS, GCP Anthos Service Mesh):
• Managed service mesh
• Integrated with cloud IAM
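Retries with backoff are exactly the kind of per-call plumbing a sidecar takes out of application code. For contrast, here is a simplified sketch of what each service would otherwise implement itself:

```python
import time

def call_with_retries(fn, retries=3, base_delay=0.05):
    """Retry a flaky call with exponential backoff (what a sidecar does for you)."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulate an upstream that fails twice, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return "ok"

print(call_with_retries(flaky))
# ok
```

A mesh applies this policy (plus timeouts, circuit breaking, and mTLS) uniformly at the sidecar, so no service ships its own slightly different copy.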
Events
Event-Driven Patterns
Cloud Event Services:
AWS:
• SNS: Pub/sub topics
• SQS: Message queues
• EventBridge: Event bus
• Kinesis: Streaming
Azure:
• Service Bus: Enterprise messaging
• Event Grid: Event routing
• Event Hubs: Streaming
GCP:
• Pub/Sub: Messaging and streaming
• Cloud Tasks: Task queues
Pattern: Event-Driven Architecture
[Producer] → [Event Bus] → [Consumer 1]
                         → [Consumer 2]
                         → [Consumer 3]
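The fan-out pattern above is easy to model in-process; SNS, Event Grid, and Pub/Sub implement the same publish/subscribe contract at cloud scale. A toy sketch:

```python
from collections import defaultdict

class EventBus:
    """Toy pub/sub bus: one published event reaches every subscriber on a topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan out: every consumer on the topic receives the event
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("orders", lambda e: received.append(("billing", e)))
bus.subscribe("orders", lambda e: received.append(("shipping", e)))
bus.publish("orders", {"id": 42})
print(received)
# [('billing', {'id': 42}), ('shipping', {'id': 42})]
```

The producer never knows how many consumers exist, which is the point: new consumers attach without any change to the publishing service.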
# Multi-cloud pattern: Abstraction layer
class CloudStorage:
    """Abstract cloud storage operations"""
    def __init__(self, provider: str):
        self.provider = provider
        if provider == 'aws':
            import boto3
            self.client = boto3.client('s3')
        elif provider == 'azure':
            from azure.storage.blob import BlobServiceClient
            from azure.identity import DefaultAzureCredential
            self.client = BlobServiceClient(
                account_url="...",
                credential=DefaultAzureCredential()
            )
        elif provider == 'gcp':
            from google.cloud import storage
            self.client = storage.Client()

    def upload(self, bucket: str, key: str, data: bytes):
        if self.provider == 'aws':
            self.client.put_object(Bucket=bucket, Key=key, Body=data)
        elif self.provider == 'azure':
            blob = self.client.get_blob_client(bucket, key)
            blob.upload_blob(data, overwrite=True)
        elif self.provider == 'gcp':
            bucket_obj = self.client.bucket(bucket)
            blob = bucket_obj.blob(key)
            blob.upload_from_string(data)

# Usage
# import os
# storage = CloudStorage(os.environ.get('CLOUD_PROVIDER', 'aws'))
# storage.upload('my-bucket', 'file.txt', b'Hello')
Summary & Next Steps
Key Takeaways:
- AWS: SigV4 signing, IAM roles for EC2/Lambda
- Azure: OAuth 2.0 tokens, Managed Identity
- GCP: OAuth 2.0, Workload Identity
- Best practice: Never hardcode credentials
- Multi-cloud: Abstraction layers help portability
Quiz
Test Your Knowledge
- AWS SigV4 signs what? (Request with HMAC-SHA256)
- Azure Managed Identity benefit? (No secrets to manage)
- GCP ADC order? (Env var → gcloud → metadata)
- What's a service mesh? (Sidecar proxies for mTLS/observability)