Easy Lambda Optimization

Brett Uglow
DigIO Australia
Jun 2, 2022 · 6 min read

What memory setting should be used when deploying an AWS Lambda — 128MB? 1024MB? How do we know that the chosen setting is the “best” setting?

AWS Lambda Power Tuning is a simple-yet-powerful way to determine the best settings for an AWS Lambda by changing only 2 Lambda configuration options — the memory setting and the CPU architecture.

Let’s take a look…

Step 1 — Install the Lambda Power Tuning app (stack)

The GitHub docs offer several ways to install the Lambda Power Tuning app. But the easiest way is shown below:

1. Log in to the AWS console using the account that contains the Lambdas we wish to tune.

2. Find the app in the Serverless Application Repository (which is something I never knew existed until recently).

3. Press the Deploy button. The browser will navigate to the app's installation page (see screenshot below). Application settings can be changed, but the default values are fine for now.

App Installation page (top)

4. Scroll to the bottom, check the I Acknowledge checkbox and press Deploy (see below):

App Installation page (bottom)

5. After a few moments, the app will have been deployed into our AWS account as a Lambda Application (see below). Note the name of the application (e.g. serverlessrepo-aws-lambda-power-tuning), as we will need it when running the execution tool later.

The installed application — remember the name (at the top)

That’s it! The app is deployed. Now it’s time to test it out…

Step 2 — Power Tuning a Lambda

The Power Tuning app works by invoking a nominated Lambda one or more times, with one or more payloads, at a range of nominated memory settings, and recording the total invocation time. In AWS, the memory setting also determines CPU performance — the more memory a Lambda has, the better the CPU performance. This memory/CPU setting is called the powerValue.

The tuning process starts with a JSON config file that looks like this:

{
  "lambdaARN": "arn:aws:lambda:us-east-1:12321:function:my-lambda",
  "powerValues": [768, 1024, 1536, 2048, 4096],
  "num": 20,
  "payload": {
    /* A Lambda event JSON-object goes here */
  },
  "parallelInvocation": true,
  "strategy": "balanced"
}

The main features of this config:

  • lambdaARN — the ARN of the Lambda
  • powerValues — the memory-values at which to run the Lambda
  • num — the total number of times to run the Lambda. This number should be a multiple of the number of powerValues tested, so that each powerValue is run the same number of times. In the above example, there are 5 powerValues and num is 20, which means each powerValue will be run 4 times.

The docs provide further details on what the JSON file can contain.

There is a basic shell script which uses this JSON file as input and executes the power tuning. This approach is not ideal in my opinion, as it requires the payload for the Lambda to be included within the config file. During other forms of testing, I already had payload fixture files available, which I would rather include by reference than copy into the config file. So I wrote a small NodeJS command-line tool to make that possible: ptx. Let’s go!

1. Create a JSON config file called config.json with contents like this:

{
  "lambdaARN": "arn:aws:lambda:us-east-1:12321:function:my-lambda",
  "powerValues": [768, 1024, 1536, 2048, 4096],
  "num": 20,
  "payload": {
    "$$include1": "./payload.json"
  },
  "parallelInvocation": true,
  "strategy": "balanced"
}

Important: Change the lambdaARN to the ARN of your own Lambda, and adjust the other settings based on your use-case.

2. Create a file called payload.json and set the contents to be a JSON event object that the Lambda would typically receive. When tuning code, the data used should be representative of the data that the Lambda will process in production. If we use unrealistic data during this process, we risk setting the wrong power value for the Lambda. To support the use of realistic payloads, we can also provide an array of weighted payloads (see the docs and the sketch after the example below).

Example payload.json:

{
  "resource": "{id}",
  "path": "/jb1000",
  "httpMethod": "DELETE",
  "headers": {
    "Accept": "application/json, text/plain, */*",
    "Accept-Encoding": "gzip, deflate, br"
  },
  "pathParameters": {
    "id": "jb1000"
  },
  "requestContext": {
    "resourceId": "abc123",
    "authorizer": {},
    "resourcePath": "{id}",
    "httpMethod": "DELETE",
    "path": "/foo/jb1000"
  }
}
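
As an aside, the Power Tuning docs also describe a weighted-payload format for mixing several representative events in one run. A minimal sketch of what the payload value in config.json could look like (the events and weights below are placeholders, not from a real workload):

"payload": [
  { "payload": { "httpMethod": "GET", "path": "/jb1000" }, "weight": 30 },
  { "payload": { "httpMethod": "DELETE", "path": "/jb1000" }, "weight": 10 }
]

Each payload is invoked in proportion to its weight, so the tuning run can reflect the real traffic mix.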

It’s time to run! ⚡️

3. In a terminal window, enter the following:

# Set AWS Profile
export AWS_PROFILE=my-profile

# This may be required too
export AWS_SDK_LOAD_CONFIG=true

# Set an env-var that points to the name of the Power Tune Stack in our AWS Account
export PTX_STACK_NAME=serverlessrepo-aws-lambda-power-tuning

# The config-file argument is relative to the current directory
npx ptx config.json

4. We should see this output:

✔ Getting a reference to the PowerTune step function
✔ Start PowerTune
✔ Running...
✔ Visualization: https://lambda-power-tuning.show/#gAEAAgADAAQABQ==;uDakQ7ief0N+kTNDWVIVQ4/i+EI=;txgLNpdPEDbxMxg2RR0pNigpMDY=;
✨ PowerTune complete!

The output contains a URL that encodes the results of the tuning. Let’s open that in a browser:

Graph of the above results (Visualization URL)

Step 3 — Analysis and adjustment

At this point, we have a cost-performance graph that is already very useful! We can see that as the memory/power increases, the invocation time steadily decreases — the Lambda gets faster. Interestingly, the invocation cost also increases as the memory/power increases, but not perfectly linearly.

Now we can make a change to our Lambda’s config, based on the price/performance that we want. In the above case, we chose 1024MB because the difference in price between 768MB and 1024MB is negligible, but the invocation time is 22% faster.
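
As an illustration, the chosen setting could be applied with the AWS CLI (the function name below is a placeholder — the same change belongs in whatever template or IaC tool actually defines the Lambda):

# Apply the chosen power value (memory setting) to the Lambda
aws lambda update-function-configuration \
  --function-name my-lambda \
  --memory-size 1024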

Step 4 — Comparing Lambda CPU Architectures

The last step is to compare the performance of the default Lambda CPU architecture — x86_64 — to the newer ARM64 architecture. AWS has a migration guide on how to do this, but in many cases it will be a simple configuration change.

1. Add an Architectures field to the config and specify arm64 as an array value:
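For example, if the Lambda happens to be defined in a SAM/CloudFormation template (an assumption — your deployment config may look different), the resource definition might gain this property:

{
  "MyLambda": {
    "Type": "AWS::Serverless::Function",
    "Properties": {
      "Handler": "index.handler",
      "Runtime": "nodejs16.x",
      "CodeUri": "./src",
      "Architectures": ["arm64"]
    }
  }
}

The Handler, Runtime and CodeUri values are placeholders; the line that matters is Architectures.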

2. Redeploy the Lambda with the new architecture.

3. Run ptx again, using the same config as before (assuming we used the same Lambda name for the ARM-version): npx ptx config.json

4. Output:

✔ Getting a reference to the PowerTune step function
✔ Start PowerTune
✔ Running...
✔ Visualization: https://lambda-power-tuning.show/#gAEAAgADAAQABQ==;lszAQ3RCjkPe/TlD3g0SQ2As/UI=;ShwENpYOAjZMo/41/ykGNlbjEDY=
✨ PowerTune complete!
Graph of the ARM results

At first glance, that looks pretty similar to the first graph we got.

5. Press the Compare button on the web page, and enter the old URL and some labels to help us see the difference:

The Compare modal — make sure you use the right labels for each data set!

6. Output graph (link)

Comparison graph between x86 and ARM on AWS

The comparison shows that — in this particular case — the performance of the x86 version is almost identical to the ARM version (the x86 version is a little better at smaller power values!). At 1024MB, however, the performance difference is negligible, while the x86 version is 26% more expensive!

Conclusion

The AWS Lambda Power Tuning app and ptx are two tools that help us decide which power value to use for our Lambdas. They are easy to install and use (compared to many other tools), and for a little effort they can provide significant cost & performance benefits for any Lambda.
