# AWS Lambda and Serverless Function Patterns
Lambda runs your code without you provisioning or managing servers. You upload a function, configure a trigger, and AWS handles scaling, patching, and availability. The execution model is simple: an event arrives, Lambda invokes your handler, your handler returns a response. Everything in between – concurrency, retries, scaling from zero to thousands of instances – is managed for you.
That simplicity hides real complexity. Cold starts, timeout limits, memory-to-CPU coupling, VPC attachment latency, and event source mapping behavior all require deliberate design. This article covers the patterns that matter in practice.
## Handler Design
A Lambda handler receives an event and a context object. The event shape depends on the trigger. The context provides metadata like the remaining execution time, request ID, and log group.
Node.js handler:
```javascript
export const handler = async (event, context) => {
  // event shape depends on the trigger (API Gateway, SQS, S3, etc.)
  const { httpMethod, path, body } = event;

  try {
    const result = await processRequest(JSON.parse(body));
    return {
      statusCode: 200,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(result),
    };
  } catch (err) {
    // note: JSON.stringify(err) on an Error yields "{}"; log the object itself
    console.error("Handler error:", err);
    return {
      statusCode: 500,
      body: JSON.stringify({ error: "Internal server error" }),
    };
  }
};
```

Python handler:
```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info(f"Request ID: {context.aws_request_id}")
    logger.info(f"Remaining time: {context.get_remaining_time_in_millis()}ms")
    try:
        # "or" guards against an explicit body: None in the event
        body = json.loads(event.get("body") or "{}")
        result = process_request(body)
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(result)
        }
    except Exception as e:
        logger.exception("Handler failed")
        return {"statusCode": 500, "body": json.dumps({"error": str(e)})}
```

Key principles:
- Initialize SDK clients, database connections, and configuration outside the handler function. Code at the module level runs once per container init and is reused across invocations.
- Keep handlers thin. Parse the event, call business logic, format the response. Testability comes from separating orchestration from logic.
- Always log the request ID from context for tracing.
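Because the handler only orchestrates, the business logic can be unit-tested without any Lambda machinery. A minimal sketch (the process_request name mirrors the stand-in used above; the validation rule is invented for illustration):

```python
import json

def process_request(body):
    # Pure business logic: no event parsing, no response formatting
    if "user_id" not in body:
        raise ValueError("user_id is required")
    return {"user_id": body["user_id"], "status": "ok"}

def handler(event, context):
    # Thin orchestration layer: parse, delegate, format
    try:
        result = process_request(json.loads(event.get("body") or "{}"))
        return {"statusCode": 200, "body": json.dumps(result)}
    except ValueError as e:
        return {"statusCode": 400, "body": json.dumps({"error": str(e)})}
```

Tests can call process_request directly with plain dicts, and exercise the handler with a hand-built event, no AWS account needed.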
```javascript
// Good: connections initialized outside the handler
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

export const handler = async (event) => {
  // client is reused across warm invocations
  const result = await client.send(/* ... */);
  return { statusCode: 200, body: JSON.stringify(result) };
};
```

## Cold Start Optimization
A cold start occurs when Lambda creates a new execution environment: downloading your code, starting the runtime, running your init code, then invoking the handler. Subsequent invocations on the same container skip init entirely.
What affects cold start duration:
| Factor | Impact | Mitigation |
|---|---|---|
| Runtime | Java/C# are slowest (1-5s), Node.js/Python fastest (100-300ms) | Use Node.js or Python for latency-sensitive paths |
| Package size | Larger deployment packages take longer to download and extract | Minimize dependencies, use tree shaking, avoid bundling unused SDKs |
| VPC attachment | Historically added seconds for per-invocation ENI creation | Use VPC only when required; Hyperplane ENIs have largely eliminated the per-invoke penalty |
| Memory allocation | More memory = proportionally more CPU = faster init | Increase memory even when you do not need the RAM; you may need the CPU |
| Provisioned concurrency | Pre-warms N containers | Use for latency-critical functions |
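Because CPU scales linearly with memory, raising the memory setting is often the cheapest latency fix. A sketch of the knob (the function name is a placeholder):

```bash
# CPU allocation scales with memory; around 1,769 MB equals one full vCPU
aws lambda update-function-configuration \
  --function-name my-api \
  --memory-size 1024
```

Measure before and after: higher memory costs more per millisecond but often reduces billed duration enough to break even.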
Provisioned concurrency keeps a specified number of execution environments warm. The tradeoff is cost – you pay for provisioned concurrency whether invocations arrive or not.
```bash
aws lambda put-provisioned-concurrency-config \
  --function-name my-api \
  --qualifier prod \
  --provisioned-concurrent-executions 10
```

SnapStart (originally Java-only, since extended to other managed runtimes) takes a snapshot of the initialized execution environment after init and restores from it on cold start. This cuts Java cold starts from seconds to hundreds of milliseconds.
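SnapStart is configured per function and applies to published versions, not $LATEST. A sketch of enabling it (the function name is a placeholder):

```bash
# Enable SnapStart; a snapshot is taken each time a version is published
aws lambda update-function-configuration \
  --function-name my-java-api \
  --snap-start ApplyOn=PublishedVersions

# Invoke the published version (or an alias pointing at it) to get
# restored-from-snapshot cold starts
aws lambda publish-version --function-name my-java-api
```
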
## Lambda Layers
Layers let you share code and dependencies across multiple functions without bundling them into each deployment package.
```bash
# Create a layer from a zip of dependencies
mkdir -p layer/nodejs
cd layer/nodejs && npm install pg redis
cd .. && zip -r my-deps-layer.zip nodejs/

aws lambda publish-layer-version \
  --layer-name shared-deps \
  --zip-file fileb://my-deps-layer.zip \
  --compatible-runtimes nodejs20.x
```

Attach the layer to a function:
```bash
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:shared-deps:1
```

Layers are extracted to /opt at runtime. For Node.js, /opt/nodejs/node_modules is automatically on the module path. For Python, use /opt/python.
When layers help: shared utility code, large dependencies (numpy, pandas), custom runtimes. When they hurt: version coupling across functions, layer size limits (250 MB unzipped total across all layers), debugging complexity when the layer version diverges from what you tested locally.
## Event Sources
Lambda integrates with dozens of AWS services. The most common patterns:
API Gateway (synchronous):
```yaml
# SAM template
Events:
  ApiEvent:
    Type: Api
    Properties:
      Path: /users/{id}
      Method: GET
      RestApiId: !Ref MyApi
```

The function receives the full HTTP request and must return a response object. API Gateway waits for the response – the caller is blocked until the function completes or times out (max 29 seconds for API Gateway).
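The proxy-integration response contract (statusCode, headers, and a string body) is easy to get subtly wrong, so many codebases wrap it in a helper. A sketch (json_response is a name introduced here, not an AWS API):

```python
import json

def json_response(status, payload, headers=None):
    # API Gateway proxy integrations require body to be a JSON *string*,
    # not a dict -- forgetting json.dumps is a common cause of 502s
    base_headers = {"Content-Type": "application/json"}
    if headers:
        base_headers.update(headers)
    return {
        "statusCode": status,
        "headers": base_headers,
        "body": json.dumps(payload),
    }
```

A malformed response shape surfaces to the caller as a 502 from API Gateway rather than as a function error, which makes it confusing to debug without a helper like this.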
SQS (asynchronous, batched):
```yaml
Events:
  SqsEvent:
    Type: SQS
    Properties:
      Queue: !GetAtt OrderQueue.Arn
      BatchSize: 10
      MaximumBatchingWindowInSeconds: 5
      FunctionResponseTypes:
        - ReportBatchItemFailures
```

Lambda polls SQS and invokes your function with a batch of messages. The critical setting is ReportBatchItemFailures – without it, any single failure in a batch causes the entire batch to be retried. With it, your handler returns which specific messages failed, and only those are retried.
```javascript
export const handler = async (event) => {
  const failures = [];
  for (const record of event.Records) {
    try {
      await processMessage(JSON.parse(record.body));
    } catch (err) {
      failures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures: failures };
};
```

S3 (event notification):
```yaml
Events:
  S3Upload:
    Type: S3
    Properties:
      Bucket: !Ref UploadBucket
      Events: s3:ObjectCreated:*
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: uploads/
```

EventBridge (event bus):
```yaml
Events:
  OrderCreated:
    Type: EventBridgeRule
    Properties:
      Pattern:
        source: ["com.myapp.orders"]
        detail-type: ["OrderCreated"]
```

EventBridge is the preferred pattern for decoupled event-driven architectures. It supports content-based filtering, multiple targets per rule, archive and replay, and schema discovery.
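As a mental model for content-based filtering: a rule matches when, for every field named in the pattern, the event's value is one of the listed values. A simplified sketch of that semantics (real EventBridge patterns additionally support nested detail fields and prefix, numeric, and anything-but matchers):

```python
def rule_matches(pattern, event):
    # Every field named in the pattern must be present in the event and
    # hold one of the allowed values; fields not in the pattern are ignored.
    return all(event.get(field) in allowed for field, allowed in pattern.items())
```

Fields absent from the pattern never affect matching, which is why producers can enrich events freely without breaking existing rules.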
## Environment Variables and Configuration
```bash
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={DB_HOST=mydb.cluster.us-east-1.rds.amazonaws.com,LOG_LEVEL=info}"
```

For secrets, do not put them in environment variables in plaintext. Use AWS Systems Manager Parameter Store or Secrets Manager, and fetch at init time:
```python
import boto3

ssm = boto3.client("ssm")

# Runs once per cold start, cached across warm invocations
DB_PASSWORD = ssm.get_parameter(
    Name="/myapp/prod/db-password",
    WithDecryption=True
)["Parameter"]["Value"]
```

For frequently changing configuration, use the Lambda Extensions API with a caching layer rather than redeploying the function.
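If you do not want to run an extension, a module-level TTL cache gives warm containers fresh-enough configuration without a fetch on every invocation. A sketch (the fetch callable stands in for a Parameter Store or Secrets Manager call; the class name is ours):

```python
import time

class TtlCache:
    # Caches one fetched value for ttl_seconds. Instantiated at module
    # level, so warm invocations reuse it and cold starts repopulate it.
    def __init__(self, fetch, ttl_seconds=60):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._value = None
        self._expires_at = 0.0

    def get(self):
        now = time.monotonic()
        if now >= self._expires_at:
            self._value = self._fetch()
            self._expires_at = now + self._ttl
        return self._value
```

The tradeoff versus fetch-at-init is that a rotated secret propagates within the TTL instead of requiring a redeploy or container recycle.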
## VPC Connectivity
Lambda functions run in an AWS-managed VPC by default and have internet access but cannot reach resources in your VPC. To access RDS, ElastiCache, or other VPC resources, attach the function to your VPC subnets:
```yaml
VpcConfig:
  SecurityGroupIds:
    - sg-0abc123
  SubnetIds:
    - subnet-private-1a
    - subnet-private-1b
```

Critical considerations:
- Place Lambda in private subnets. It does not need a public IP. If the function needs internet access (calling external APIs), route through a NAT Gateway.
- Lambda uses Hyperplane ENIs (shared across functions in the same security group and subnet combination), so the old cold-start penalty for VPC is largely eliminated, but initial setup of the ENI pool for a new combination still takes a few seconds.
- Security groups must allow outbound traffic to your database or cache on the appropriate port.
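Attachment can also be changed after deployment via the CLI; a sketch (subnet and security-group IDs are placeholders matching the template above):

```bash
aws lambda update-function-configuration \
  --function-name my-function \
  --vpc-config SubnetIds=subnet-private-1a,subnet-private-1b,SecurityGroupIds=sg-0abc123
```
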
## Monitoring with CloudWatch
Every Lambda invocation logs to CloudWatch Logs automatically. Key metrics to monitor:
```bash
# View recent invocation metrics
aws cloudwatch get-metric-statistics \
  --namespace AWS/Lambda \
  --metric-name Duration \
  --dimensions Name=FunctionName,Value=my-function \
  --start-time 2026-02-22T00:00:00Z \
  --end-time 2026-02-22T23:59:59Z \
  --period 300 \
  --statistics Average
# Percentiles use a separate parameter, and the API does not accept
# both in one call: swap in --extended-statistics p99

# Key metrics to alarm on
# - Errors (invocation errors)
# - Throttles (concurrency limit hit)
# - Duration p99 (approaching timeout)
# - ConcurrentExecutions (approaching account/function limit)
# - IteratorAge (for stream-based sources -- how far behind you are)
```

CloudWatch Embedded Metric Format lets you emit custom metrics directly from log output without making API calls:
```javascript
import { createMetricsLogger, Unit } from "aws-embedded-metrics";

export const handler = async (event) => {
  const start = Date.now();
  await processEvent(event); // business logic
  const elapsed = Date.now() - start;

  const metrics = createMetricsLogger();
  metrics.setNamespace("MyApp");
  metrics.putMetric("OrderProcessed", 1, Unit.Count);
  metrics.putMetric("ProcessingTime", elapsed, Unit.Milliseconds);
  metrics.setDimensions({ Service: "OrderProcessor" });
  await metrics.flush();
};
```

Set alarms on the metrics that predict problems before they become outages: Duration approaching the configured timeout, Throttles above zero, and IteratorAge growing for stream consumers.
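A throttle alarm, for example, can be sketched like this (the SNS topic ARN and alarm name are placeholders):

```bash
aws cloudwatch put-metric-alarm \
  --alarm-name my-function-throttles \
  --namespace AWS/Lambda \
  --metric-name Throttles \
  --dimensions Name=FunctionName,Value=my-function \
  --statistic Sum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```

Any throttled invocation means callers are being rejected, so a threshold of zero with a short period is a reasonable starting point.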