If your AWS Lambda deployments are hitting the 250 MB package size limit, here is a tip that might help.
First of all, it is good to know that, unless you need a specific version, you do not need to include boto3 in your deployments: it is already provided by the Lambda runtime environment. Given that boto3 itself weighs in at ~1.3 MB and botocore, which it pulls in, at ~65 MB, that is a significant saving.
However, there are situations where boto3 is a dependency of one of your dependencies, and so it comes back into the picture. This could be the case if you are using mypy and boto stubs, and you need to explicitly type an object from boto, like in a function signature.
```python
from mypy_boto3_dynamodb.service_resource import DynamoDBServiceResource

def do_stuff(dynamodb: DynamoDBServiceResource) -> None:
    ...
```
Since boto3-stubs is not included in the Lambda runtime environment, this code crashes at import time if deployed without that library. So you have to include it. But since boto3-stubs depends on boto3, you end up with the whole boto3 package in your deployment anyway.
There is a way around the problem with a small change to the code.
```python
try:
    from mypy_boto3_dynamodb.service_resource import DynamoDBServiceResource
except ModuleNotFoundError:
    DynamoDBServiceResource = "mypy_boto3_dynamodb.service_resource.DynamoDBServiceResource"  # type: ignore

def do_stuff(dynamodb: DynamoDBServiceResource) -> None:
    ...
```
Locally, in your development environment, boto3-stubs is installed and mypy correctly resolves DynamoDBServiceResource. In production, without the stubs, the variable falls back to a string. Since type annotations carry no behavior at runtime, this changes nothing for the deployed code.
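To see why the string fallback is harmless, here is a minimal, self-contained sketch (it mirrors the names from the example above but needs no boto packages to run): the annotation is merely stored as metadata and is never checked when the function is called.

```python
# Simulate the production case: the stubs are absent, so the name
# is bound to a plain string instead of the real class.
DynamoDBServiceResource = "mypy_boto3_dynamodb.service_resource.DynamoDBServiceResource"

def do_stuff(dynamodb: DynamoDBServiceResource) -> None:  # type: ignore
    # The annotation is only metadata; nothing validates it at call time.
    pass

# Calling with any object works: annotations are stored, not enforced.
do_stuff(object())
print(do_stuff.__annotations__["dynamodb"])
```

Note that the annotation expression is still evaluated once, at function definition time, but evaluating a string is perfectly safe.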
Note: the assignment on line 4 must be annotated with `# type: ignore`, because mypy will correctly flag the change of type of the DynamoDBServiceResource variable as a typing error.
With this modification, you can move boto3-stubs back to your development-only dependencies and reclaim ~66 MB of deployment space.
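One way to wire this up (file names are illustrative; adapt them to your dependency manager) is to split requirements so that only the production file is packaged into the deployment artifact:

```
# requirements.txt (packaged into the Lambda deployment)
# boto3 and boto3-stubs deliberately omitted:
# boto3 is provided by the runtime, the stubs are dev-only

# requirements-dev.txt (installed locally only)
-r requirements.txt
boto3-stubs[dynamodb]
mypy
```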