The blog post Go Serverless, not Flagless: Implementing Feature Flags in Serverless Environments gives a great overview of implementing feature flags in a serverless architecture. However, the proposed solution doesn't account for the complexities of using Redis with Lambda, nor the performance hit that it incurs. A Redis ElastiCache instance must be provisioned inside a VPC, which means the Lambda function must attach itself to that VPC in order to access it.
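As a rough illustration of what that attachment looks like, here is a minimal AWS CDK sketch in TypeScript. The stack, function, and asset names are assumptions made for the example, not part of the original setup:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as lambda from 'aws-cdk-lib/aws-lambda';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'FeatureFlagStack');

// ElastiCache must live inside a VPC, so one has to exist.
const vpc = new ec2.Vpc(stack, 'FlagVpc');

// Attaching the function to the VPC so it can reach Redis is what
// forces ENI creation and the associated cold-start penalty.
new lambda.Function(stack, 'FlagReader', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('dist'),
  vpc, // <- VPC attachment
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
});
```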
From the AWS docs:
AWS Lambda uses the VPC information you provide to set up ENIs that allow your Lambda function to access VPC resources.
Associating a Lambda function with an ENI carries a very high cold start penalty, on the order of tens of seconds.
Each ENI is assigned a private IP address from the IP address range within the Subnets you specify, but is not assigned any public IP addresses. Therefore, if your Lambda function requires Internet access (for example, to access AWS services that don't have VPC endpoints), you can configure a NAT instance inside your VPC or you can use the Amazon VPC NAT gateway.
This all combines to pull us kicking and screaming out of the serverless world. To counter this, I propose adding a DynamoDB-backed Feature Store. While certainly not as fast as Redis, DynamoDB is a serverless service and offers many of the same capabilities, without requiring the Lambda to join a VPC. A sketch of what such a store could look like is shown below.
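The following is a minimal sketch of a DynamoDB-backed flag store using the AWS SDK v3 document client. The table name, key schema, and helper functions are assumptions for illustration only, not the actual implementation in this repository:

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand, PutCommand } from '@aws-sdk/lib-dynamodb';

// Hypothetical table name; in practice this would come from configuration.
const TABLE_NAME = process.env.FEATURE_FLAG_TABLE ?? 'feature-flags';

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Read a single flag definition by key. DynamoDB needs no VPC attachment,
// so the Lambda keeps its normal cold-start profile.
export async function getFlag(key: string): Promise<Record<string, unknown> | undefined> {
  const result = await docClient.send(
    new GetCommand({ TableName: TABLE_NAME, Key: { key } })
  );
  return result.Item;
}

// Write (or overwrite) a flag definition.
export async function putFlag(key: string, flag: Record<string, unknown>): Promise<void> {
  await docClient.send(
    new PutCommand({ TableName: TABLE_NAME, Item: { key, ...flag } })
  );
}
```

A read per invocation will be slower than a Redis hit, but it stays entirely within serverless, pay-per-request services and avoids the ENI cold-start problem described above.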