Lambda Functions
A Lambda function is a computing resource - it runs your code. The function runs until your code finishes processing (maximum 15 minutes).
Function execution is initiated by an event (such as an incoming HTTP API Gateway request, an incoming message in an SQS queue, or an object created in an S3 bucket).
Lambda Functions are serverless and fully managed. You don't have to worry about provisioning and managing servers, container and OS security, patching, scaling & many other DevOps tasks.
Supported runtimes are Node.js (Javascript and Typescript), Python, Ruby, Java, Go and .NET Core (C#).
When to use
Lambda functions work well for most of the use-cases (HTTP APIs, scheduled jobs, integrations & more). However, they can't be used for long-running jobs and jobs that require a higher degree of control over an execution environment.
Advantages
- Pay-per-use - You only pay for the compute time you consume (rounded to 1ms)
- Massive & fast scaling - Can scale up to 1000s of parallel executions. New containers running your code can be spawned in milliseconds.
- High availability - AWS Lambda runs your function in multiple Availability Zones
- Secure by default - Underlying environment is securely managed by AWS
- Lots of integrations - Function can be invoked by events from a wide variety of services
Disadvantages
- Limited execution time - Can run only up to 15 minutes
- Limited configuration of lambda environment - You can configure only memory (CPU power scales with it). The maximum amount of memory is 10GB (6 virtual CPUs).
- More expensive for certain tasks - Continuously running tasks and tasks with predictable load can be performed for less using batch jobs and container workloads.
- Cold starts - Depending on the size of your function and the runtime used, your functions can take an additional ~0.2 - 5sec to execute. Behind the scenes, AWS runs your functions in containers. Cold start happens once per every new container. New containers are added when the function has not been invoked for more than ~15-45 minutes (0 containers are running your function), or when existing containers can't handle the load.
Basic usage
// Stacktape will automatically package any library for you
import anyLibrary from 'any-library';
import { initializeDatabaseConnection } from './database';

// Everything outside of the handler function will be executed only once (on every cold-start).
// You can execute any code that should be "cached" here (such as initializing a database connection)
const myDatabaseConnection = initializeDatabaseConnection();

// handler will be executed on every function invocation
const handler = async (event, context) => {
  // This log will be published to a CloudWatch log group
  console.log(event, context);
  const posts = myDatabaseConnection.query('SELECT * FROM posts');
  return { result: posts };
};

export default handler;
Example lambda function written in Typescript
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
Stacktape configuration of a basic lambda function
Packaging
- The packaging process of your lambda function is fully managed by Stacktape.
- Stacktape efficiently builds your source code and scans all of its dependencies. Required dependencies are automatically included in the deployment package.
- If a dependency is dependent on a binary executable, the dependency is re-installed inside a Docker container. This ensures that your dependencies will work correctly in the lambda environment.
- Stacktape automatically removes all the unnecessary files from your deployment package. This helps to save deployment time, storage costs and, most importantly, can improve cold-start times.
- Javascript and Typescript lambda functions are bundled into a single file. To ensure debuggability, source maps are automatically included and available.
- Your lambda functions are automatically zipped and uploaded to a Stacktape-managed deployment bucket.
- Functions are zipped and uploaded to the bucket using S3 transfer acceleration. The function is uploaded to the nearest AWS edge location and then routed to the bucket over the Amazon backbone network. This is faster and more secure, but incurs additional transfer costs ($0.04 per GB). To disable this behavior, set (config).deploymentConfig.useS3TransferAcceleration to false.
Path to the entry point of your workload (relative to the stacktape config file)
Type: string
- Stacktape tries to bundle all your source code with its dependencies into a single file.
- If a certain dependency doesn't support static bundling (because it has a binary, uses dynamic require() calls, etc.), Stacktape will install it and copy it to the bundle
Configuration of packaging properties specific to given language
Exported function to use as the handler for your lambda function
Type: string
Files that should be explicitly included in the deployment package (glob pattern)
Type: Array of string
- Example glob pattern: images/*.jpg
Files that should be explicitly excluded from deployment package (glob pattern)
Type: Array of string
Example glob pattern: images/*.jpg
Dependencies to ignore.
Type: Array of string
- These dependencies won't be a part of your deployment package.
Path to tsconfig.json file to use.
Type: string
This is used mostly to resolve path aliases.
Emits decorator metadata to the final bundle.
Type: boolean
- This is used by frameworks like NestJS or ORMs like TypeORM.
- This is not turned on by default, because it can slow down the build process.
Dependencies to exclude from main bundle.
Type: Array of string
- These dependencies will be treated as external and won't be statically built into the main bundle. Instead, they will be installed and copied to the deployment package.
- Using * means all of the workload's dependencies will be treated as external.
Computing resources
- Lambda function environment is fully managed. You can't directly configure the type of virtual machine that runs your workload.
- Amount of memory available to the function can be set using the memory property. This value should be between 128 MB and 10,240 MB in 1-MB increments.
- Amount of CPU power available to the function is also set using the memory property - it's proportionate to the amount of available RAM. A function with 1,797 MB of memory has CPU power equal to 1 virtual CPU. A lambda function can have a maximum of 6 vCPUs (at 10,240 MB of RAM).
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      memory: 1024
Runtime
- Stacktape automatically detects the function's language and uses the latest runtime version associated with that language
- Example: uses nodejs14.x for all files ending with .js and .ts
- You might want to use an older version if some of your dependencies are not compatible with the default runtime version
Timeout
- Sets the maximum amount of time (in seconds) that a function can run before a timeout error is thrown.
- Maximum allowed time is 900 seconds.
- The default is 3 seconds.
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      timeout: 300
Environment variables
Most commonly used types of environment variables:
- static - any value (will be stringified)
- result of a custom directive Learn more about directives
- referenced property of another resource (using $Param directive) Learn more about referencing resources
- secret (using $Secret directive) Learn more about using secrets
environment:
  STATIC_ENV_VAR: 'my-env-var'
  DYNAMICALLY_SET_ENV_VAR: "$MyCustomDirective('input-for-my-directive')"
  DB_HOST: "$Param('myPgSql', 'DbInstance::Endpoint.Address')"
  DB_PASSWORD: "$Secret('dbSecret.password')"
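Inside a Node.js (Javascript/Typescript) function, these values can then be read from process.env at runtime. A minimal sketch, using the variable names configured above:

// sketch: reading the environment variables configured above
const handler = async () => {
  const dbHost = process.env.DB_HOST; // injected via $Param
  const dbPassword = process.env.DB_PASSWORD; // injected via $Secret
  // pass dbHost and dbPassword to your database client here
  console.log(`Connecting to database at ${dbHost}`);
  return { connectedTo: dbHost };
};

export default handler;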
Logging
- Every time your code outputs (prints) something to stdout or stderr, your log will be captured and stored in an AWS CloudWatch log group.
- You can browse your logs in 2 ways:
  - go to your function's log group in the AWS CloudWatch console. You can use the stacktape stack-info command to get a direct link.
  - use the stacktape logs command that will print logs to the console
- Please note that storing log data can become costly over time. To avoid excessive charges, you can configure logRetentionDays.
Configures whether the collection of logs is enabled (default: true)
Type: boolean
- Information about the function invocation and function logs (stdout and stderr) are automatically sent to a pre-created CloudWatch log group.
Amount of days the logs will be retained in the log group
Type: number ENUM
Possible values: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653
Storage
- Each lambda function has access to its own ephemeral, temporary storage.
- It's available at /tmp and has a fixed size of 512 MB.
- This storage is NOT shared between multiple execution environments. If there are 2 or more concurrently running functions, they don't share this storage.
- This storage can be used to cache certain data between function executions.
- To store data persistently, consider using Buckets.
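A minimal sketch of using the ephemeral /tmp storage as a cache between invocations (the fetchDataFromOrigin helper is hypothetical):

import { existsSync, readFileSync, writeFileSync } from 'fs';
// hypothetical helper that fetches the data from its origin (e.g. a bucket or an external API)
import { fetchDataFromOrigin } from './data-source';

const CACHE_FILE = '/tmp/cached-data.json';

const handler = async () => {
  // if a previous invocation in this execution environment already cached the data, reuse it
  if (existsSync(CACHE_FILE)) {
    return JSON.parse(readFileSync(CACHE_FILE, 'utf-8'));
  }
  const data = await fetchDataFromOrigin();
  writeFileSync(CACHE_FILE, JSON.stringify(data));
  return data;
};

export default handler;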
Trigger events
- Functions are invoked ("triggered") in a reaction to an event.
- When you specify an event, Stacktape creates an event integration and adds all the required permissions to invoke the function.
- Each function can have multiple event integrations.
- Payload (data) received by the function is based on the event integration.
HTTP Api event
- The function is triggered in a reaction to an incoming request to the specified HTTP API Gateway.
- HTTP API Gateway selects the route with the most-specific match. To learn more about how paths are evaluated, refer to AWS Docs
resources:
  myHttpApi:
    type: http-api-gateway
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        - type: http-api-gateway
          properties:
            httpApiGatewayName: myHttpApi
            path: /hello
            method: GET
Lambda function connected to an HTTP API Gateway "myHttpApi"
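A handler for the function above might look like this (a sketch, assuming the default 2.0 payload format and the @types/aws-lambda type definitions):

import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

const handler = async (event: APIGatewayProxyEventV2): Promise<APIGatewayProxyResultV2> => {
  // rawPath is "/hello" for requests routed by the integration above
  console.log(`Received ${event.requestContext.http.method} ${event.rawPath}`);
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'Hello!' })
  };
};

export default handler;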
Type of the event integration
Type: string "http-api-gateway"
Name of the HTTP API Gateway
Type: string
HTTP method that the request should match to be routed by this event integration
Type: string ENUM
Possible values: *, DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT
Can be either:
- exact method (e.g. GET or PUT)
- wildcard matching any method (*)
URL path that the request should match to be routed by this event integration
Type: string
Can be either:
- Exact URL Path - e.g. /post
- Path with a positional parameter - e.g. /post/{id}. This matches any id parameter, e.g. /post/6. The parameter will be available to the workload using event.pathParameters.id
- Greedy path variable - e.g. /pets/{anything+}. This catches all child resources of the route. Example: /post/{anything+} catches both /post/something/param1 and /post/something2/param
Configures authorization rules for this event integration
Type: (CognitoAuthorizer or LambdaAuthorizer)
- Only the authorized requests will be forwarded to the workload.
- All other requests will receive { "message": "Unauthorized" }
The format of the payload that the workload will receive with this integration.
Type: string ENUM
Possible values: 1.0, 2.0
- To learn more about the differences between the formats, refer to AWS Docs
Schedule event
The function is triggered on a specified schedule. You can use 2 different schedule types:
- Fixed rate - Runs on a specified schedule starting after the event integration is successfully created in your stack. Learn more about rate expressions
- Cron expression - Leverages Cron time-based scheduler. Learn more about Cron expressions
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        # invoke function every two hours
        - type: schedule
          properties:
            scheduleRate: rate(2 hours)
        # invoke function at 10:00 UTC every day
        - type: schedule
          properties:
            scheduleRate: cron(0 10 * * ? *)
Type of the event integration
Type: string "schedule"
Invocation schedule rate
Type: string
2 different formats are supported:
- rate expression - example: rate(2 hours) or rate(20 seconds)
- cron expression - example: cron(0 10 * * ? *) or cron(0 15 3 * ? *)
No description
Type: UNSPECIFIED
No description
Type: string
No description
Type: EventInputTransformer
Event Bus event
The function is triggered when the specified event bus receives an event matching the specified pattern.
2 types of event buses can be used:
Default event bus
- Default event bus is pre-created by AWS and shared by the whole AWS account.
- Can receive events from multiple AWS services. Full list of supported services.
- To use the default event bus, set the useDefaultBus property.
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        - type: event-bus
          properties:
            useDefaultBus: true
            eventPattern:
              source:
                - 'aws.autoscaling'
              region:
                - 'us-west-2'
Lambda function connected to the default event bus
- Custom event bus
- Your own, custom Event bus.
- This event bus can receive your own, custom events.
- To use a custom event bus, specify either the eventBusArn or the eventBusName property.
resources:
  myEventBus:
    type: event-bus
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        - type: event-bus
          properties:
            eventBusName: myEventBus
            eventPattern:
              source:
                - 'mycustomsource'
Lambda function connected to a custom event bus
Type of the event integration
Type: string "event-bus"
Used to filter the events from the event bus based on a pattern
Type: EventBusIntegrationPattern
- Each event received by the Event Bus gets evaluated against this pattern. If the event matches this pattern, the integration invokes the workload.
- To learn more about the event bus filter pattern syntax, refer to AWS Docs
Arn of the event-bus
Type: string
- Use this, if you want to use an event bus defined outside of the stack resources.
- You need to specify exactly one of eventBusArn, eventBusName or useDefaultBus.
Name of the Event Bus defined within the Stacktape resources
Type: string
- Use this, if you want to use an event bus defined within the stack resources.
- You need to specify exactly one of eventBusArn, eventBusName or useDefaultBus.
Configures the integration to use the default (AWS created) event bus
Type: boolean
- You need to specify exactly one of eventBusArn, eventBusName or useDefaultBus.
No description
Type: UNSPECIFIED
No description
Type: string
No description
Type: EventInputTransformer
SNS event
The function is triggered every time a specified SNS topic receives a new message.
- Amazon SNS is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.
- Messages (notifications) are published to the topics
- To add your custom SNS topic to your stack, add Cloudformation resource to the cloudformationResources section of your config.
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        - type: sns
          properties:
            topicArn: $Param('mySnsTopic', 'Arn')
            onDeliveryFailure:
              sqsQueueArn: $Param('mySqsQueue', 'Arn')
              sqsQueueUrl: $Param('mySqsQueue', 'QueueURL')

cloudformationResources:
  mySnsTopic:
    Type: AWS::SNS::Topic
  mySqsQueue:
    Type: AWS::SQS::Queue
Type of the event integration
Type: string "sns"
Arn of the SNS topic. Messages arriving to this topic will invoke the workload.
Type: string
No description
Type: UNSPECIFIED
SQS Destination for messages that fail to be delivered to the workload
Type: SnsOnDeliveryFailure
- Failure to deliver can happen in rare cases, e.g. when the function is not able to scale fast enough to react to incoming messages.
SQS event
The function is triggered whenever there are messages in the specified SQS Queue.
- Messages are processed in batches
- If the SQS queue contains multiple messages, the function is invoked with multiple messages in its payload
- A single queue should always be "consumed" by a single function. SQS message can only be read once from the queue and while it's being processed, it's invisible to other functions. If multiple different functions are processing messages from the same queue, each will get their share of the messages, but one message won't be delivered to more than one function at a time. If you need to consume the same message by multiple consumers (Fanout pattern), consider using EventBus integration or SNS integration.
- To add your custom SQS queue to your stack, simply add Cloudformation resource to the cloudformationResources section of your config.
Batching behavior can be configured. The function is triggered when any of the following things happen:
- Batch window expires. The batch window can be configured using the maxBatchWindowSeconds property.
- Maximum batch size (number of messages in the batch) is reached. The batch size can be configured using the batchSize property.
- Maximum payload limit is reached. The maximum payload size is 6 MB.
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        - type: sqs
          properties:
            queueArn: $Param('mySqsQueue', 'Arn')

cloudformationResources:
  mySqsQueue:
    Type: AWS::SQS::Queue
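A handler consuming messages from the queue might look like this (a sketch, assuming the @types/aws-lambda type definitions; the message body format depends on what your producers send):

import type { SQSEvent } from 'aws-lambda';

const handler = async (event: SQSEvent) => {
  // the batch contains between 1 and batchSize messages
  for (const record of event.Records) {
    const message = JSON.parse(record.body);
    console.log(`Processing message ${record.messageId}`, message);
    // ... process the message
  }
};

export default handler;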
Type of the event integration
Type: string "sqs"
Arn of sqs queue from which function consumes messages.
Type: string
- Failure to deliver can happen in rare cases, e.g. when the workload is not able to scale fast enough to react to incoming messages.
Configures how many records to collect in a batch, before the function is invoked.
Type: number
- Maximum: 10,000
Configures maximum amount of time (in seconds) to gather records before invoking the workload
Type: number
- By default, the batch window is not configured
- Maximum 300 seconds
Kinesis event
The function is triggered whenever there are messages in the specified Kinesis Stream.
- Messages are processed in batches.
- If the stream contains multiple messages, the function is invoked with multiple messages in its payload.
- To add a custom Kinesis stream to your stack, simply add Cloudformation resource to the cloudformationResources section of your config.
- Similarly to SQS, Kinesis is used to process messages in batches. To learn the differences, refer to AWS Docs
Batching behavior can be configured. The function is triggered when any of the following things happen:
- Batch window expires. The batch window can be configured using the maxBatchWindowSeconds property.
- Maximum batch size (number of messages in the batch) is reached. The batch size can be configured using the batchSize property.
- Maximum payload limit is reached. The maximum payload size is 6 MB.
Consuming messages from a Kinesis stream can be done in 2 ways:
- Consuming directly from the stream - polling each shard in your Kinesis stream for records once per second. Read throughput of the kinesis shard is shared with other stream consumers.
- Consuming using a stream consumer - To minimize latency and maximize read throughput, use "stream consumer" with enhanced fan-out. Enhanced fan-out consumers get a dedicated connection to each shard that doesn't impact other applications reading from the stream. You can either pass reference to the consumer using consumerArn property, or you can let Stacktape auto-create consumer using autoCreateConsumer property.
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: 'path/to/my-lambda.ts'
      events:
        - type: kinesis
          properties:
            autoCreateConsumer: true
            maxBatchWindowSeconds: 30
            batchSize: 200
            streamArn: $Param('myKinesisStream', 'Arn')
            onFailure:
              arn: $Param('myOnFailureSqsQueue', 'Arn')
              type: sqs

cloudformationResources:
  myKinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
  myOnFailureSqsQueue:
    Type: AWS::SQS::Queue
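A handler processing the stream records might look like this (a sketch, assuming the @types/aws-lambda type definitions; Kinesis record data arrives base64-encoded):

import type { KinesisStreamEvent } from 'aws-lambda';

const handler = async (event: KinesisStreamEvent) => {
  for (const record of event.Records) {
    // record data is base64-encoded - decode it before use
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf-8');
    console.log(`Partition key: ${record.kinesis.partitionKey}, payload: ${payload}`);
  }
};

export default handler;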
Type of the event integration
Type: string "kinesis"
Arn of Kinesis stream from which function consumes records.
Type: string
Arn of the consumer which will be used by integration.
Type: string
- This parameter CANNOT be used in combination with autoCreateConsumer
Specifies whether to create separate consumer for this integration
Type: boolean
- Specifies whether Stacktape creates the consumer for this integration
- Using a consumer can help minimize latency and maximize read throughput
- To learn more about stream consumers, refer to AWS Docs
- This parameter CANNOT be used in combination with consumerArn
Configures maximum amount of time (in seconds) to gather the records before invoking the workload
Type: number
- By default, the batch window is not configured
- Maximum: 300 seconds
Configures how many records to collect in a batch, before the function is invoked.
Type: number
- Maximum: 10,000
- Default: 10
Specifies position in the stream from which to start reading.
Type: string ENUM
Possible values: LATEST, TRIM_HORIZON
Available values are:
- LATEST - Read only new records.
- TRIM_HORIZON - Process all available records.
Configures the number of times failed "record batches" are retried
Type: number
- If the workload fails, the entire batch of records is retried (not only the failed ones). This means that even the records that you processed successfully can get retried. You should implement your function with idempotency in mind.
Configures the on-failure destination for failed record batches
Type: DestinationOnFailure
- Can be either an SQS queue or an SNS topic
Allows to process more than one shard of the stream simultaneously
Type: number
If the workload returns an error, split the batch in two before retrying.
Type: boolean
- This can help in cases, when the failure happened because the batch was too large to be successfully processed.
DynamoDb event
The function is triggered whenever there are processable records in the specified DynamoDB streams.
- DynamoDB stream captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours.
- Records from the stream are processed in batches. This means that multiple records are included in a single function invocation.
- DynamoDB stream must be enabled in a DynamoDB table definition. Learn how to enable streams in dynamo-table docs
resources:
  myDynamoDbTable:
    type: dynamo-db-table
    properties:
      primaryKey:
        partitionKey:
          attributeName: id
          attributeType: string
      dynamoStreamType: NEW_AND_OLD_IMAGES
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        - type: dynamo-db
          properties:
            streamArn: $Param('myDynamoDbTable', 'DynamoTable::StreamArn')
            # OPTIONAL
            batchSize: 200
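A handler processing the table's stream records might look like this (a sketch, assuming the @types/aws-lambda type definitions; item attributes arrive in DynamoDB attribute-value format):

import type { DynamoDBStreamEvent } from 'aws-lambda';

const handler = async (event: DynamoDBStreamEvent) => {
  for (const record of event.Records) {
    // eventName is INSERT, MODIFY or REMOVE
    console.log(`${record.eventName} on item with keys`, record.dynamodb?.Keys);
    // with dynamoStreamType NEW_AND_OLD_IMAGES, both images are available
    console.log('New image:', record.dynamodb?.NewImage);
    console.log('Old image:', record.dynamodb?.OldImage);
  }
};

export default handler;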
Type of the event integration
Type: string "dynamo-db"
Arn of the DynamoDb table stream from which the workload consumes records.
Type: string
Configures maximum amount of time (in seconds) to gather records before invoking the workload
Type: number
- By default, the batch window is not configured
Configures how many records to collect in a batch, before the workload is invoked.
Type: number
- Maximum: 1,000
Specifies position in the stream from which to start reading.
Type: string
Available values are:
- LATEST - Read only new records.
- TRIM_HORIZON - Process all available records.
Configures the number of times failed "record batches" are retried
Type: number
- If the workload fails, the entire batch of records is retried (not only the failed ones). This means that even the records that you processed successfully can get retried. You should implement your function with idempotency in mind.
Configures the on-failure destination for failed record batches
Type: DestinationOnFailure
- Can be either an SQS queue or an SNS topic
Allows to process more than one shard of the stream simultaneously
Type: number
If the workload returns an error, split the batch in two before retrying.
Type: boolean
- This can help in cases, when the failure happened because the batch was too large to be successfully processed.
S3 event
The function is triggered when a specified event occurs in your bucket.
Supported events are listed in the s3EventType API Reference. To learn more about the event types, refer to AWS Docs.
resources:
  myBucket:
    type: bucket
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        - type: s3
          properties:
            bucketArn: $Param('myBucket', 'Bucket::Arn')
            s3EventType: 's3:ObjectCreated:*'
            filterRule:
              prefix: order-
              suffix: .jpg
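A handler reacting to the object-created events might look like this (a sketch, assuming the @types/aws-lambda type definitions; object keys arrive URL-encoded):

import type { S3Event } from 'aws-lambda';

const handler = async (event: S3Event) => {
  for (const record of event.Records) {
    const bucketName = record.s3.bucket.name;
    // object keys are URL-encoded (spaces become "+")
    const objectKey = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log(`Object ${objectKey} created in bucket ${bucketName}`);
  }
};

export default handler;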
Type of the event integration
Type: string "s3"
Arn of the S3 bucket, events of which can invoke the workload
Type: string
Specifies which event types invoke the workload
Type: string ENUM
Possible values: s3:ObjectCreated:*, s3:ObjectCreated:CompleteMultipartUpload, s3:ObjectCreated:Copy, s3:ObjectCreated:Post, s3:ObjectCreated:Put, s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated, s3:ObjectRestore:*, s3:ObjectRestore:Completed, s3:ObjectRestore:Post, s3:ReducedRedundancyLostObject, s3:Replication:*, s3:Replication:OperationFailedReplication, s3:Replication:OperationMissedThreshold, s3:Replication:OperationNotTracked, s3:Replication:OperationReplicatedAfterThreshold
Allows to filter the objects that can invoke the workload
Type: S3FilterRule
Prefix of the object which can invoke the function
Type: string
Suffix of the object which can invoke the function
Type: string
Cloudwatch Log event
The function is triggered when a log record arrives in the specified log group.
- Event payload arriving to the function is BASE64 encoded and has the following format:
{ "awslogs": { "data": "BASE64ENCODED_GZIP_COMPRESSED_DATA" } }
- To access the log data, the event payload needs to be decoded and decompressed first (a sketch follows after the example config below).
resources:
  myLogProducingLambda:
    type: function
    properties:
      packageConfig:
        filePath: lambdas/log-producer.ts
  myLogConsumingLambda:
    type: function
    properties:
      packageConfig:
        filePath: lambdas/log-consumer.ts
      events:
        - type: cloudwatch-log
          properties:
            logGroupArn: $Param('myLogProducingLambda', 'LogGroup::Arn')
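A sketch of decoding and decompressing the payload inside the consuming function (assuming the @types/aws-lambda type definitions):

import { gunzipSync } from 'zlib';
import type { CloudWatchLogsEvent } from 'aws-lambda';

const handler = async (event: CloudWatchLogsEvent) => {
  // decode the base64 payload and decompress the gzipped data
  const compressed = Buffer.from(event.awslogs.data, 'base64');
  const decoded = JSON.parse(gunzipSync(compressed).toString('utf-8'));
  // decoded.logEvents contains the individual log records
  for (const logEvent of decoded.logEvents) {
    console.log(`[${logEvent.timestamp}] ${logEvent.message}`);
  }
};

export default handler;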
Type of the event integration
Type: string "cloudwatch-log"
Arn of the watched Log group
Type: string
Allows to filter the logs that invoke the workload based on a pattern
Type: string
- To learn more about the filter pattern, refer to AWS Docs
Application Load Balancer event
The function is triggered when a specified Application Load Balancer receives an HTTP request that matches the integration's conditions.
- You can filter requests based on HTTP Method, Path, Headers, Query parameters, and IP Address.
resources:
  # load balancer which routes traffic to the function
  myLoadBalancer:
    type: application-load-balancer
    properties:
      listeners:
        - port: 80
          protocol: HTTP
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      events:
        - type: application-load-balancer
          properties:
            # referencing load balancer defined above
            priority: 1
            loadBalancerName: myLoadBalancer
            listenerPort: 80
            paths:
              - /invoke-my-lambda
              - /another-path
Type of the event integration
Type: string "application-load-balancer"
Name of the Load balancer
Type: string
Port of the Load balancer listener
Type: number
Priority of the integration
Type: number
- Load balancers evaluate integrations according to priority.
- If multiple event integrations match the same conditions (paths, methods ...), request will be forwarded to the event integration with the highest priority.
List of URL paths that the request should match to be routed by this event integration
Type: Array of string
- The condition is satisfied if any of the paths matches the request URL
- The maximum size is 128 characters
- The comparison is case sensitive
The following patterns are supported:
- basic URL path, e.g. /post
- * - wildcard (matches 0 or more characters)
- ? - wildcard (matches exactly 1 character)
List of HTTP methods that the request should match to be routed by this event integration
Type: Array of string
List of hostnames that the request should match to be routed by this event integration
Type: Array of string
- Hostname is parsed from the host header of the request
The following wildcard patterns are supported:
- * - wildcard (matches 0 or more characters)
- ? - wildcard (matches exactly 1 character)
List of header conditions that the request should match to be routed by this event integration
Type: Array of LbHeaderCondition
- All conditions must be satisfied.
List of query parameters conditions that the request should match to be routed by this event integration
Type: Array of LbQueryParamCondition
- All conditions must be satisfied.
List of IP addresses that the request should match to be routed by this event integration
Type: Array of string
- IP addresses must be in a CIDR format.
- If a client is behind a proxy, this is the IP address of the proxy, not the IP address of the client.
Sync vs. Async invocations
Functions can be invoked in 2 different ways. Different integrations (events) invoke your function in different ways.
Synchronous invocation
- AWS Lambda runtime invokes your function, waits for it to complete, and then returns the result to the caller.
- Synchronous invocation can be performed by these callers:
- HTTP API Gateway event integration
- Application Load balancer event integration
- Amazon Cognito
- Directly calling the invokeSync method (or a similar method, depending on the language used) from the aws-sdk. This method then directly returns the result of your function.
Asynchronous invocation
- AWS Lambda runtime invokes your function but doesn't wait for it to complete. The caller only receives information about whether the invocation was successfully enqueued.
- Asynchronous invocation can be performed by these callers:
- SNS event integration
- SQS event integration
- Event-bus event integration
- Schedule event integration
- S3 event integration
- Cloudwatch Log event integration
- DynamoDB event integration
- Kinesis event integration
- Directly calling the invoke method (or a similar method, depending on the language used) from the aws-sdk. This method doesn't directly return the result of your function, only information about whether the invocation successfully started (a sketch of both invocation modes follows below).
- If the function execution fails, Lambda retries the function 2 more times. Please note that this can sometimes cause issues if the function is not idempotent.
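A sketch of both invocation modes using the AWS SDK for JavaScript (v2); the invoked function name and payload are hypothetical:

import { Lambda } from 'aws-sdk';

const lambda = new Lambda();

const invocationExamples = async () => {
  // synchronous invocation - waits for the function to finish and returns its result
  const syncResult = await lambda
    .invoke({
      FunctionName: 'my-function', // hypothetical function name
      InvocationType: 'RequestResponse',
      Payload: JSON.stringify({ orderId: 123 })
    })
    .promise();
  console.log(JSON.parse(syncResult.Payload as string));

  // asynchronous invocation - returns as soon as the event is successfully enqueued
  await lambda
    .invoke({
      FunctionName: 'my-function',
      InvocationType: 'Event',
      Payload: JSON.stringify({ orderId: 123 })
    })
    .promise();
};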
Lambda Destinations
Lambda Destinations allow you to orchestrate simple, lambda-based, event-driven workflows.
- Works only for asynchronous invocations
- You can hook into onSuccess or onFailure events
- 4 different destinations are supported:
- SQS queue
- SNS topic
- Event bus
- other lambda function
- Destination receives both function's result (or error) and original event.
- To learn more about Lambda destinations, refer to AWS blog post.
- Defined using a destinations property on the function
- For SNS, DynamoDB and Kinesis event integrations, the onFailure destination can be set per event integration.
resources:
  myEventBus:
    type: event-bus
  mySuccessLambda:
    type: function
    properties:
      packageConfig:
        filePath: lambdas/success-handler.ts
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      destinations:
        # if function succeeds, invoke the mySuccessLambda with the result data
        onSuccess: $Param('mySuccessLambda', 'LambdaFunction::Arn')
        # if the function fails, send the result to "myEventBus"
        onFailure: $Param('myEventBus', 'EventBus::Arn')
ARN (Amazon resource name) of the destination (SNS topic, SQS Queue, Event bus or another Lambda function)
Type: string
After each successful invocation, a JSON object containing the result (response) and other information about the execution is sent to the destination.
Format of the response:
{
"version": "1.0",
"timestamp": "2019-11-24T23:08:25.651Z",
"requestContext": {
"requestId": "c2a6f2ae-7dbb-4d22-8782-d0485c9877e2",
"functionArn": "arn:aws:lambda:sa-east-1:123456789123:function:event-destinations:$LATEST",
"condition": "Success",
"approximateInvokeCount": 1
},
"requestPayload": {
"Success": true
},
"responseContext": {
"statusCode": 200,
"executedVersion": "$LATEST"
},
"responsePayload": null
}
Response object is passed in different ways based on the destination:
- SNS topic / SQS queue: Passed as the Message to the destination.
- Lambda function: Passed as the payload to the function. The destination function cannot be the same as the source function. For example, if FunctionA has a Destination configuration attached for Success, FunctionA is not a valid destination ARN. This prevents recursive functions.
- Event bus: Passed as the detail of the event. The source is lambda, and the detail type is either Lambda Function Invocation Result - Success or Lambda Function Invocation Result – Failure. The resource fields contain the function and destination ARNs.
To learn more about event bus integration, refer to Stacktape docs
ARN (Amazon resource name) of the destination (SNS topic, SQS Queue, Event bus or another Lambda function)
Type: string
After each failed invocation (when all retries are exhausted), a JSON object containing the original event (request), the error (response) and other information about the execution is sent to the destination.
Format of the response:
{
"version": "1.0",
"timestamp": "2019-11-24T21:52:47.333Z",
"requestContext": {
"requestId": "8ea123e4-1db7-4aca-ad10-d9ca1234c1fd",
"functionArn": "arn:aws:lambda:sa-east-1:123456678912:function:event-destinations:$LATEST",
"condition": "RetriesExhausted",
"approximateInvokeCount": 3
},
"requestPayload": {
"Success": false
},
"responseContext": {
"statusCode": 200,
"executedVersion": "$LATEST",
"functionError": "Handled"
},
"responsePayload": {
"errorMessage": "Failure from event, Success = false, I am failing!",
"errorType": "Error",
"stackTrace": [ "exports.handler (/var/task/index.js:18:18)" ]
}
}
Response object is passed in different ways based on the destination:
- SNS topic / SQS queue: Passed as the Message to the destination.
- Lambda function: Passed as the payload to the function. The destination function cannot be the same as the source function. For example, if FunctionA has a Destination configuration attached for Success, FunctionA is not a valid destination ARN. This prevents recursive functions.
- Event bus: Passed as the detail of the event. The source is lambda, and the detail type is either Lambda Function Invocation Result - Success or Lambda Function Invocation Result – Failure. The resource fields contain the function and destination ARNs.
To learn more about event bus integration, refer to Stacktape docs
Accessing other resources
For most of the AWS resources, resource-to-resource communication is not allowed by default. This helps to enforce security and resource isolation. Access must be explicitly granted using IAM (Identity and Access Management) permissions.
Access control of Relational Databases is not managed by IAM. These resources are not "cloud-native" and have their own access control mechanism (connection string with username and password). They are accessible by default, and you don't need to grant any extra IAM permissions. If the default, connection-string-based access-control is not sufficient for your use case, you can restrict connection to only resources in the same VPC. In that case, your function must join that VPC to access them.
Stacktape automatically handles IAM permissions for the underlying AWS services that it creates (i.e. granting functions permission to write logs to Cloudwatch, allowing functions to communicate with their event source and many others).
If your workload needs to communicate with other infrastructure components, you need to add permissions manually. You can do this in 2 ways listed below.
Raw AWS IAM role statements appended to your resource's role.
Type: Array of StpIamRoleStatement
Names of the resources that will receive basic permissions.
Type: Array of string
Granted permissions:
Bucket
- list objects in a bucket
- create / get / delete / tag object in a bucket
DynamoDb Table
- get / put / update / delete item in a table
- scan / query a table
- describe table stream
MongoDb Atlas Cluster
- Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDb Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.
Relational database
- Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.
Redis cluster
- Allows connection to a redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about redis cluster accessibility modes, refer to Redis clusters docs.
Event bus
- publish events to the specified Event bus
Function
- invoke the specified function
Batch job
- submit batch-job instance into batch-job queue
- list submitted job instances in a batch-job queue
- describe / terminate a batch-job instance
- list executions of state machine which executes the batch-job according to its strategy
- start / terminate execution of a state machine which executes the batch-job according to its strategy
Using allowAccessTo
- List of resource names that this function will be able to access (basic IAM permissions will be granted automatically). Granted permissions differ based on the resource.
- Works only for resources managed by Stacktape (not arbitrary Cloudformation resources)
- This is useful if you don't want to deal with IAM permissions yourself. Handling permissions using raw IAM role statements can be cumbersome, time-consuming and error-prone.
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      environment:
        - name: PHOTOS_BUCKET
          value: $Param('photosBucket', 'Bucket::Name')
      accessControl:
        allowAccessTo:
          - photosBucket
  photosBucket:
    type: bucket
Granted permissions:
Bucket
- list objects in a bucket
- create / get / delete / tag object in a bucket
DynamoDb Table
- get / put / update / delete item in a table
- scan / query a table
- describe table stream
MongoDb Atlas Cluster
- Allows connection to a cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about MongoDb Atlas clusters accessibility modes, refer to MongoDB Atlas cluster docs.
Relational database
- Allows connection to a relational database with accessibilityMode set to scoping-workloads-in-vpc. To learn more about relational database accessibility modes, refer to Relational databases docs.
Redis cluster
- Allows connection to a redis cluster with accessibilityMode set to scoping-workloads-in-vpc. To learn more about redis cluster accessibility modes, refer to Redis clusters docs.
Event bus
- publish events to the specified Event bus
Function
- invoke the specified function
Batch job
- submit batch-job instance into batch-job queue
- list submitted job instances in a batch-job queue
- describe / terminate a batch-job instance
- list executions of state machine which executes the batch-job according to its strategy
- start / terminate execution of a state machine which executes the batch-job according to its strategy
Using iamRoleStatements
- IAM Role statements are a low-level, granular and AWS-native way of controlling access to your resources.
- IAM Role statements can be used to add permissions to any Cloudformation resource.
- Configured IAM role statement objects will be appended to the function's role.
resources:
  functions:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      environment:
        - name: TOPIC_ARN
          value: $Param('NotificationTopic', 'Arn')
      accessControl:
        iamRoleStatements:
          - Resource:
              - $Param('NotificationTopic', 'Arn')
            Effect: 'Allow'
            Action:
              - 'sns:Publish'

cloudformationResources:
  NotificationTopic:
    Type: 'AWS::SNS::Topic'
Default VPC connection
- Certain AWS services (such as MongoDb Atlas Clusters) must be connected to a VPC (Virtual private cloud) to be able to run. Stacktape automatically creates a default VPC for stacks that include these resources and connects them to the VPC.
- Functions are NOT connected to the default VPC of your stack by default.
- To communicate with resources inside a default VPC that have their accessibility mode set to only allow connection from the same VPC, you need to connect your function to that VPC.
- Connecting a function to a VPC makes it lose connection to the internet (outbound requests will fail). To restore a connection to the internet, you need to use a NAT Gateway. We do not recommend using NAT Gateways and advise you to re-architect your application instead.
- To learn more about VPCs and accessibility modes, refer to VPC docs, accessing relational databases, accessing redis clusters and accessing MongoDb Atlas clusters
resources:
  myLambda:
    type: function
    properties:
      packageConfig:
        filePath: path/to/my-lambda.ts
      joinDefaultVpc: true
Function connected to the default VPC
Pricing
You are charged for:
Total compute (gigabyte seconds):
- Amount of memory * execution time
- The price for 128MB per 1 ms execution: $0.0000000021.
Request charges (invocations):
- $0.20/1 million invocations
The (forever) FREE TIER includes one million free requests per month and 400,000 GB-seconds of compute time.
To learn more about lambda functions pricing, refer to AWS pricing page
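A rough, illustrative estimate using the figures above (a sketch; the function size, duration and invocation count are made up, and the free tier is ignored):

// price for 128 MB per 1 ms of execution (from the figures above)
const pricePer128MbMs = 0.0000000021;
const pricePerInvocation = 0.2 / 1_000_000; // $0.20 per 1 million invocations

// hypothetical function: 512 MB of memory, 200 ms average duration, 3 million invocations per month
const memoryMb = 512;
const durationMs = 200;
const invocations = 3_000_000;

const computeCost = (memoryMb / 128) * durationMs * pricePer128MbMs * invocations; // ≈ $5.04
const requestCost = invocations * pricePerInvocation; // $0.60
console.log(`Estimated monthly cost: $${(computeCost + requestCost).toFixed(2)}`); // ≈ $5.64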
API reference
No description
Type: string "function"
Configures how your source code is turned into a deployment package (deployment artifact)
Type: LambdaPackageConfig
List of event integrations that invoke (trigger) this function
Type: Array of (LoadBalancerIntegration or SnsIntegration or SqsIntegration or KinesisIntegration or DynamoDbIntegration or S3Integration or ScheduleIntegration or CloudwatchLogIntegration or HttpApiIntegration or EventBusIntegration)
Functions are invoked ("triggered") in reaction to an event.
- Connecting your lambda functions to event integrations is automatically handled by Stacktape.
- Stacktape automatically adds all the permissions required to invoke the function.
- Each function can have multiple event integrations.
- Payload (data) received by the function is based on the event integration.
Environment variables injected into the function's environment
Type: Array of EnvironmentVar
- Environment variables are often used to inject information about other parts of the infrastructure (such as database URLs, secrets, etc.).
Runtime used to execute the function
Type: string ENUM
Possible values: dotnetcore2.1, go1.x, java11, java8, nodejs10.x, nodejs12.x, nodejs14.x, nodejs8.10, python2.7, python3.6, python3.7, python3.8, ruby2.5
- Stacktape automatically detects the function's language and uses the latest runtime version associated with that language
- Example: uses nodejs14.x for all files ending with .js and .ts
- You might want to use an older version if some of your dependencies are not compatible with the default runtime version
Amount of memory (in MB) available to the function during execution
Type: number
- Must be between 128 MB and 10,240 MB in 1-MB increments.
- Amount of CPU power available to the function is also set using memory property - it's proportionate to the amount of available memory.
- Function with 1797MB has a CPU power equal to 1 virtual CPU. Lambda function can have a maximum of 6 vCPUs (at 10,240 MB of RAM).
Maximum amount of time (in seconds) the lambda function is allowed to run
Type: number
Maximum allowed time is 900 seconds.
Connects the function to the default VPC
Type: boolean
- Functions are NOT connected to the default VPC of your stack by default.
- To communicate with certain resources inside your VPC, you need to connect your function to the VPC. The most common use-case for this is accessing a relational-database or a mongo-db-atlas-cluster that is configured to only allow connections from the VPC.
- Connecting a function to the VPC makes it lose connection to the internet (outbound requests will fail). To restore a connection to the internet, you would need to use a NAT Gateway. We do not recommend this, and advise you to re-architect your application instead.
- To learn more about VPCs, refer to VPCs Stacktape documentation.
Tags to apply to this function
Type: Array of CloudformationTag
- Tags can help you to identify and categorize resources.
- A maximum number of 50 tags can be specified.
Lambda Destinations allow you to orchestrate simple, lambda-based, event-driven workflows.
Type: LambdaFunctionDestinations
- Works only for asynchronous invocations
- You can hook into onSuccess or onFailure events
- 4 different destinations are supported:
- SQS queue
- SNS topic
- Event bus
- other lambda function
- Destination receives both function's result (or error) and original event.
- To learn more about Lambda destinations, refer to AWS blog post.
- Defined using a destinations property on the function
- For SNS, DynamoDB and Kinesis event integrations, onFailure destination can be set per event integration.
Configures access to other resources of your stack (such as relational-databases, buckets, event-buses, etc.).
Type: AccessControl
Configures logging behavior for this function
Type: LambdaFunctionLogging
- Information about the function invocation and function logs (stdout and stderr) are automatically sent to a pre-created CloudWatch log group.
- By default, logs are retained for 180 days.
- You can browse your logs in 2 ways:
  - go to the log group page in the AWS CloudWatch console. You can use the stacktape stack-info command to get a direct link.
  - use the stacktape logs command to print logs to the console
Overrides one or more properties of the specified child resource.
Type: Object
- Child resources are specified using their descriptive name (e.g. DbInstance or Events.0.HttpApiRoute).
- To see all configurable child resources for a given Stacktape resource, use the stacktape stack-info --detailed command.
- To see the list of properties that can be overridden, refer to AWS Cloudformation docs.
Arn of the SQS queue
Type: string
Url of the SQS queue
Type: string
Arn of the SNS topic or SQS queue into which failed record batches are sent
Type: string
Type of the destination being used
Type: string ENUM
Possible values: sns, sqs
Header name
Type: string
List of header values
Type: Array of string
- The Condition is satisfied if at least one of the request headers matches the values in this list.
Name of the query parameter
Type: string
List of query parameter values
Type: Array of string
- The Condition is satisfied if at least one of the request query parameters matches the values in this list.
Name of the environment variable
Type: string
Value of the environment variable
Type: (string or number or boolean)
Name of the tag
Type: string
- Must be 1-128 characters long.
- Can consist of the following characters: Unicode letters, digits, whitespace, _, ., /, =, +, and -.
Value of the tag
Type: string
- Must be 1-256 characters long.