+
+Python-lambda is a toolset for developing and deploying *serverless* Python code in AWS Lambda.
+
+# A call for contributors
+With python-lambda and pytube both continuing to gain momentum, I'm calling for
+contributors to help build out new features, review pull requests, fix bugs,
+and maintain overall code quality. If you're interested, please email me at
+nficano[at]gmail.com.
+
+# Description
+
+AWS Lambda is a service that lets you run Python, Java, or Node.js code in
+response to events such as HTTP requests or files uploaded to S3.
+
+Working with Lambda is relatively easy, but the process of bundling and
+deploying your code is not as simple as it could be.
+
+The *Python-Lambda* library takes the guesswork out of developing your
+Python-Lambda services by providing a toolset to streamline the annoying
+parts.
+
+# Requirements
+
+* Python 2.7 or >= 3.6 (at the time of writing, these are the Python runtimes supported by AWS Lambda).
+* Pip (\~8.1.1)
+* Virtualenv (\~15.0.0)
+* Virtualenvwrapper (\~4.7.1)
+
+
+# Getting Started
+
+First, you must create an IAM Role on your AWS account called
+``lambda_basic_execution`` with the ``LambdaBasicExecution`` policy attached.
+
+On your computer, create a new virtualenv and project folder.
+
+```bash
+$ mkvirtualenv pylambda
+(pylambda) $ mkdir pylambda
+```
+
+Next, install *Python-Lambda* from PyPI using pip.
+
+```bash
+(pylambda) $ pip install python-lambda
+```
+
+From your ``pylambda`` directory, run the following to bootstrap your project.
+
+```bash
+(pylambda) $ lambda init
+```
+
+This will create the following files: ``event.json``, ``__init__.py``,
+``service.py``, and ``config.yaml``.
+
+Let's begin by opening ``config.yaml`` in the text editor of your choice. For
+the purpose of this tutorial, the only required information is
+``aws_access_key_id`` and ``aws_secret_access_key``. You can find these by
+logging into the AWS management console.
+
+Next, let's open ``service.py``; in it you'll find the following function:
+
+```python
+def handler(event, context):
+ # Your code goes here!
+ e = event.get('e')
+ pi = event.get('pi')
+ return e + pi
+```
+
+This is the handler function: the function AWS Lambda will invoke in
+response to an event. You will notice that in the sample code ``e`` and ``pi``
+are values in a ``dict``. AWS Lambda uses the ``event`` parameter to pass
+event data to the handler.
+
+If, for example, your function is responding to an HTTP request, ``event``
+will contain the ``POST`` JSON data, and whatever your function returns will
+be the contents of your HTTP response payload.
+
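Because the handler is an ordinary Python function, you can exercise it locally by passing in a dict that mimics the event payload. A minimal sketch (the handler body mirrors the sample above):

```python
def handler(event, context):
    # Same logic as the sample service.py handler.
    e = event.get('e')
    pi = event.get('pi')
    return e + pi

# Simulate an invocation; Lambda would normally supply both arguments.
result = handler({'pi': 3.14, 'e': 2.718}, None)
print(result)
```
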
+Next let's open the ``event.json`` file:
+
+```json
+{
+ "pi": 3.14,
+ "e": 2.718
+}
+```
+Here you'll find the values of ``e`` and ``pi`` that are being referenced in
+the sample code.
+
+If you now try and run:
+
+```bash
+(pylambda) $ lambda invoke -v
+```
+
+You will get:
+```bash
+# 5.858
+# execution time: 0.00000310s
+# function execution timeout: 15s
+```
+
+As you've probably gathered, the ``lambda invoke`` command grabs the values
+stored in the ``event.json`` file and passes them to your function.
+
+The ``event.json`` file should help you develop your Lambda service locally.
+You can specify an alternate ``event.json`` file by passing the
+``--event-file=.json`` argument to ``lambda invoke``.
+
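Under the hood, ``lambda invoke`` does little more than parse the event file and hand the result to your handler. A rough sketch of that behavior (``invoke_locally`` is an illustrative helper, not part of the library):

```python
import json
import os
import tempfile

def handler(event, context):
    # Same logic as the sample service.py handler.
    return event.get('e') + event.get('pi')

def invoke_locally(event_path):
    # Read the event file and pass its payload to the handler --
    # roughly what `lambda invoke` does.
    with open(event_path) as f:
        event = json.load(f)
    return handler(event, None)

# Write a sample event file, then invoke the handler against it.
event_path = os.path.join(tempfile.mkdtemp(), 'event.json')
with open(event_path, 'w') as f:
    json.dump({'pi': 3.14, 'e': 2.718}, f)
print(invoke_locally(event_path))
```
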
+When you're ready to deploy your code to Lambda simply run:
+
+```bash
+(pylambda) $ lambda deploy
+```
+
+The deploy script will evaluate your virtualenv and identify your project
+dependencies. It will package these up along with your handler function into a
+zip file that it then uploads to AWS Lambda.
+
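The bundling itself is plain zip archiving: dependencies and the handler are staged in one directory and compressed. A simplified illustration using only the standard library (not the library's actual implementation):

```python
import os
import tempfile
import zipfile

def bundle(source_dir, zip_path):
    # Add every file under source_dir to the archive, storing
    # paths relative to the staging root.
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(source_dir):
            for name in files:
                full_path = os.path.join(root, name)
                zf.write(full_path, os.path.relpath(full_path, source_dir))

# Stage a fake project containing just a handler file, then bundle it.
staging = tempfile.mkdtemp()
with open(os.path.join(staging, 'service.py'), 'w') as f:
    f.write("def handler(event, context):\n    return None\n")
zip_path = os.path.join(tempfile.mkdtemp(), 'bundle.zip')
bundle(staging, zip_path)
print(zipfile.ZipFile(zip_path).namelist())
```
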
+You can now log into the
+[AWS Lambda management console](https://console.aws.amazon.com/lambda/) to
+verify the code deployed successfully.
+
+### Wiring to an API endpoint
+
+If you're looking to develop a simple microservice, you can easily wire your
+function up to an HTTP endpoint.
+
+Begin by navigating to your [AWS Lambda management console](https://console.aws.amazon.com/lambda/) and
+clicking on your function. Click the API Endpoints tab and click "Add API endpoint".
+
+Under API endpoint type select "API Gateway".
+
+Next, change Method to ``POST`` and Security to "Open", then click submit
+(NOTE: you should secure this for use in production; open security is used
+here for demo purposes).
+
+Finally, you need to change the function's return value to comply with the
+format the API Gateway endpoint expects; the function should now look like
+this:
+
+```python
+def handler(event, context):
+ # Your code goes here!
+ e = event.get('e')
+ pi = event.get('pi')
+ return {
+ "statusCode": 200,
+ "headers": { "Content-Type": "application/json"},
+ "body": e + pi
+ }
+```
+
+Now try and run:
+
+```bash
+$ curl --header "Content-Type:application/json" \
+ --request POST \
+ --data '{"pi": 3.14, "e": 2.718}' \
+ https://
+# 5.8580000000000005
+```
+
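One caveat worth noting: if you instead wire the function through a Lambda proxy integration, API Gateway expects ``body`` to be a string, so it is safer to serialize the payload explicitly than to return a raw number. A hedged variant of the handler:

```python
import json

def handler(event, context):
    e = event.get('e')
    pi = event.get('pi')
    # Serialize the body explicitly; proxy integrations reject
    # non-string body values.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"result": e + pi}),
    }

response = handler({'pi': 3.14, 'e': 2.718}, None)
print(response['body'])
```
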
+### Environment Variables
+Lambda functions support environment variables. To set environment variables
+for your deployed code to use, configure them in ``config.yaml``. To load the
+value for an environment variable at deployment time (instead of hard-coding
+it in your configuration file), you can reference local environment values
+(see ``env3`` in the example below).
+
+```yaml
+environment_variables:
+ env1: foo
+ env2: baz
+ env3: ${LOCAL_ENVIRONMENT_VARIABLE_NAME}
+```
+
+This will create environment variables in the Lambda instance upon deployment.
+If your functions don't need environment variables, simply leave this section
+out of your config.
+
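Inside the deployed function, these values are read like any other process environment variable. A minimal sketch (the variable name matches the example config above; the ``os.environ`` assignment simulates what the deployed environment would contain):

```python
import os

def handler(event, context):
    # Variables declared under environment_variables in config.yaml
    # are available through os.environ once deployed.
    return {'env1': os.environ.get('env1', 'unset')}

# Simulate the deployed environment locally.
os.environ['env1'] = 'foo'
print(handler({}, None))
```
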
+### Uploading to S3
+You may find that you do not need the toolkit to fully
+deploy your Lambda or that your code bundle is too large to upload via the API.
+You can use the ``upload`` command to send the bundle to an S3 bucket of your
+choosing. Before doing this, you will need to set the following variables in
+``config.yaml``:
+
+```yaml
+role: basic_s3_upload
+bucket_name: 'example-bucket'
+s3_key_prefix: 'path/to/file/'
+```
+Your role must have ``s3:PutObject`` permission on the bucket/key that you
+specify for the upload to work properly. Once you have that set, you can
+execute ``lambda upload`` to initiate the transfer.
+
+### Deploying via S3
+You can also choose to use S3 as your source for Lambda deployments. This can
+be done by issuing ``lambda deploy-s3`` with the same variables/AWS permissions
+you'd set for executing the ``upload`` command.
+
+## Development
+Development of *python-lambda* takes place exclusively on GitHub.
+Contributions in the form of patches, tests, and feature creation and/or
+requests are very welcome and highly encouraged. Please open an issue if this
+tool does not function as you'd expect.
+
+### Environment Setup
+1. [Install pipenv](https://github.com/pypa/pipenv)
+2. [Install direnv](https://direnv.net/)
+3. [Install pre-commit](https://pre-commit.com/#install) (optional but preferred)
+4. ``cd`` into the project and enter "direnv allow" when prompted. This will begin
+   installing all the development dependencies.
+5. If you installed pre-commit, run ``pre-commit install`` inside the project
+   directory to set up the git hooks.
+
+### Releasing to PyPI
+Once you've pushed your changes to master, run **one** of the following:
+
+```sh
+# If you're releasing a major version:
+make deploy-major
+
+# If you're releasing a minor version:
+make deploy-minor
+
+# If you're releasing a patch version:
+make deploy-patch
+```
diff --git a/README.rst b/README.rst
deleted file mode 100644
index 10b0739e..00000000
--- a/README.rst
+++ /dev/null
@@ -1,212 +0,0 @@
-========
-python-λ
-========
-
-.. image:: https://img.shields.io/pypi/v/python-lambda.svg
- :alt: Pypi
- :target: https://pypi.python.org/pypi/python-lambda/
-
-.. image:: https://img.shields.io/pypi/pyversions/python-lambda.svg
- :alt: Python Versions
- :target: https://pypi.python.org/pypi/python-lambda/
-
-Python-lambda is a toolset for developing and deploying *serverless* Python code in AWS Lambda.
-
-A call for contributors
-=======================
-With python-lambda and `pytube `_ both continuing to gain momentum, I'm calling for contributors to help build out new features, review pull requests, fix bugs, and maintain overall code quality. If you're interested, please email me at nficano[at]gmail.com.
-
-Description
-===========
-
-AWS Lambda is a service that allows you to write Python, Java, or Node.js code that gets executed in response to events like http requests or files uploaded to S3.
-
-Working with Lambda is relatively easy, but the process of bundling and deploying your code is not as simple as it could be.
-
-The *Python-Lambda* library takes away the guess work of developing your Python-Lambda services by providing you a toolset to streamline the annoying parts.
-
-Requirements
-============
-
-* Python 2.7 & 3.6 (At the time of writing this, AWS Lambda only supports Python 2.7/3.6).
-* Pip (~8.1.1)
-* Virtualenv (~15.0.0)
-* Virtualenvwrapper (~4.7.1)
-
-Getting Started
-===============
-
-First, you must create an IAM Role on your AWS account called `lambda_basic_execution` with the `LambdaBasicExecution` policy attached.
-
-On your computer, create a new virtualenv and project folder.
-
-.. code:: bash
-
- $ mkvirtualenv pylambda
- (pylambda) $ mkdir pylambda
-
-Next, download *Python-Lambda* using pip via pypi.
-
-.. code:: bash
-
- (pylambda) $ pip install python-lambda
-
-From your ``pylambda`` directory, run the following to bootstrap your project.
-
-.. code:: bash
-
- (pylambda) $ lambda init
-
-This will create the following files: ``event.json``, ``__init__.py``, ``service.py``, and ``config.yaml``.
-
-Let's begin by opening ``config.yaml`` in the text editor of your choice. For the purpose of this tutorial, the only required information is ``aws_access_key_id`` and ``aws_secret_access_key``. You can find these by logging into the AWS management console.
-
-Next let's open ``service.py``, in here you'll find the following function:
-
-.. code:: python
-
- def handler(event, context):
- # Your code goes here!
- e = event.get('e')
- pi = event.get('pi')
- return e + pi
-
-
-This is the handler function; this is the function AWS Lambda will invoke in response to an event. You will notice that in the sample code ``e`` and ``pi`` are values in a ``dict``. AWS Lambda uses the ``event`` parameter to pass in event data to the handler.
-
-So if, for example, your function is responding to an http request, ``event`` will be the ``POST`` JSON data and if your function returns something, the contents will be in your http response payload.
-
-Next let's open the ``event.json`` file:
-
-.. code:: json
-
- {
- "pi": 3.14,
- "e": 2.718
- }
-
-Here you'll find the values of ``e`` and ``pi`` that are being referenced in the sample code.
-
-If you now try and run:
-
-.. code:: bash
-
- (pylambda) $ lambda invoke -v
-
-You will get:
-
-.. code:: bash
-
- # 5.858
-
- # execution time: 0.00000310s
- # function execution timeout: 15s
-
-As you probably put together, the ``lambda invoke`` command grabs the values stored in the ``event.json`` file and passes them to your function.
-
-The ``event.json`` file should help you develop your Lambda service locally. You can specify an alternate ``event.json`` file by passing the ``--event-file=.json`` argument to ``lambda invoke``.
-
-When you're ready to deploy your code to Lambda simply run:
-
-.. code:: bash
-
- (pylambda) $ lambda deploy
-
-The deploy script will evaluate your virtualenv and identify your project dependencies. It will package these up along with your handler function to a zip file that it then uploads to AWS Lambda.
-
-You can now log into the `AWS Lambda management console `_ to verify the code deployed successfully.
-
-Wiring to an API endpoint
-=========================
-
-If you're looking to develop a simple microservice you can easily wire your function up to an http endpoint.
-
-Begin by navigating to your `AWS Lambda management console `_ and clicking on your function. Click the API Endpoints tab and click "Add API endpoint".
-
-Under API endpoint type select "API Gateway".
-
-Next change Method to ``POST`` and Security to "Open" and click submit (NOTE: you should secure this for use in production, open security is used for demo purposes).
-
-At last you need to change the return value of the function to comply with the standard defined for the API Gateway endpoint, the function should now look like this:
-
-.. code:: python
-
- def handler(event, context):
- # Your code goes here!
- e = event.get('e')
- pi = event.get('pi')
- return {
- "statusCode": 200,
- "headers": { "Content-Type": "application/json"},
- "body": e + pi
- }
-
-Now try and run:
-
-.. code:: bash
-
- $ curl --header "Content-Type:application/json" \
- --request POST \
- --data '{"pi": 3.14, "e": 2.718}' \
- https://
- # 5.8580000000000005
-
-Environment Variables
-=====================
-Lambda functions support environment variables. In order to set environment variables for your deployed code to use, you can configure them in ``config.yaml``. To load the
-value for the environment variable at the time of deployment (instead of hard coding them in your configuration file), you can use local environment values (see 'env3' in example code below).
-
-.. code:: yaml
-
- environment_variables:
- env1: foo
- env2: baz
- env3: ${LOCAL_ENVIRONMENT_VARIABLE_NAME}
-
-This would create environment variables in the lambda instance upon deploy. If your functions don't need environment variables, simply leave this section out of your config.
-
-Uploading to S3
-===============
-You may find that you do not need the toolkit to fully deploy your Lambda or that your code bundle is too large to upload via the API. You can use the ``upload`` command to send the bundle to an S3 bucket of your choosing.
-Before doing this, you will need to set the following variables in ``config.yaml``:
-
-.. code:: yaml
-
- role: basic_s3_upload
- bucket_name: 'example-bucket'
- s3_key_prefix: 'path/to/file/'
-
-Your role must have ``s3:PutObject`` permission on the bucket/key that you specify for the upload to work properly. Once you have that set, you can execute ``lambda upload`` to initiate the transfer.
-
-Deploying via S3
-===============
-You can also choose to use S3 as your source for Lambda deployments. This can be done by issuing ``lambda deploy_s3`` with the same variables/AWS permissions you'd set for executing the ``upload`` command.
-
-Development
-===========
-
-Development of "python-lambda" is facilitated exclusively on GitHub. Contributions in the form of patches, tests and feature creation and/or requests are very welcome and highly encouraged. Please open an issue if this tool does not function as you'd expect.
-
-
-How to release updates
-----------------------
-
-If this is the first time you're releasing to pypi, you'll need to run: ``pip install -r tests/dev_requirements.txt``.
-
-Once complete, execute the following commands:
-
-.. code:: bash
-
- git checkout master
-
- # Increment the version number and tag the release.
- bumpversion [major|minor|patch]
-
- # Upload the distribution to PyPi
- python setup.py sdist bdist_wheel upload
-
- # Since master often contains work-in-progress changes, increment the version
- # to a patch release to prevent inaccurate attribution.
- bumpversion --no-tag patch
-
- git push origin master --tags
diff --git a/artwork/python-lambda.svg b/artwork/python-lambda.svg
new file mode 100644
index 00000000..0136f802
--- /dev/null
+++ b/artwork/python-lambda.svg
@@ -0,0 +1,27 @@
+
+
diff --git a/aws_lambda/__init__.py b/aws_lambda/__init__.py
old mode 100755
new mode 100644
index d151ac9a..35145b50
--- a/aws_lambda/__init__.py
+++ b/aws_lambda/__init__.py
@@ -1,18 +1,28 @@
-# -*- coding: utf-8 -*-
# flake8: noqa
-__author__ = 'Nick Ficano'
-__email__ = 'nficano@gmail.com'
-__version__ = '3.0.3'
+__author__ = "Nick Ficano"
+__email__ = "nficano@gmail.com"
+__version__ = "11.8.0"
-from .aws_lambda import deploy, deploy_s3, invoke, init, build, upload, cleanup_old_versions
+from .aws_lambda import (
+ deploy,
+ deploy_s3,
+ invoke,
+ init,
+ build,
+ upload,
+ cleanup_old_versions,
+)
# Set default logging handler to avoid "No handler found" warnings.
import logging
+
try: # Python 2.7+
from logging import NullHandler
except ImportError:
+
class NullHandler(logging.Handler):
def emit(self, record):
pass
+
logging.getLogger(__name__).addHandler(NullHandler())
diff --git a/aws_lambda/aws_lambda.py b/aws_lambda/aws_lambda.py
old mode 100755
new mode 100644
index 44f37cc6..0b5ca884
--- a/aws_lambda/aws_lambda.py
+++ b/aws_lambda/aws_lambda.py
@@ -1,39 +1,58 @@
-# -*- coding: utf-8 -*-
-from __future__ import print_function
-
import hashlib
import json
import logging
import os
+import subprocess
import sys
import time
from collections import defaultdict
-from imp import load_source
+
from shutil import copy
from shutil import copyfile
+from shutil import copystat
from shutil import copytree
from tempfile import mkdtemp
import boto3
import botocore
-import pip
import yaml
+import sys
from .helpers import archive
from .helpers import get_environment_variable_value
+from .helpers import LambdaContext
from .helpers import mkdir
from .helpers import read
from .helpers import timestamp
ARN_PREFIXES = {
- 'us-gov-west-1': 'aws-us-gov',
+ "cn-north-1": "aws-cn",
+ "cn-northwest-1": "aws-cn",
+ "us-gov-west-1": "aws-us-gov",
}
log = logging.getLogger(__name__)
-def cleanup_old_versions(src, keep_last_versions, config_file='config.yaml'):
+def load_source(module_name, module_path):
+ """Loads a python module from the path of the corresponding file."""
+
+ if sys.version_info[0] == 3 and sys.version_info[1] >= 5:
+ import importlib.util
+ spec = importlib.util.spec_from_file_location(module_name, module_path)
+ module = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(module)
+ elif sys.version_info[0] == 3 and sys.version_info[1] < 5:
+ import importlib.machinery
+ loader = importlib.machinery.SourceFileLoader(module_name, module_path)
+ module = loader.load_module()
+ return module
+
+
+def cleanup_old_versions(
+ src, keep_last_versions, config_file="config.yaml", profile_name=None,
+):
"""Deletes old deployed versions of the function in AWS Lambda.
Won't delete $Latest and any aliased version
@@ -48,39 +67,47 @@ def cleanup_old_versions(src, keep_last_versions, config_file='config.yaml'):
print("Won't delete all versions. Please do this manually")
else:
path_to_config_file = os.path.join(src, config_file)
- cfg = read(path_to_config_file, loader=yaml.load)
+ cfg = read_cfg(path_to_config_file, profile_name)
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
client = get_client(
- 'lambda', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'),
+ "lambda",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
)
response = client.list_versions_by_function(
- FunctionName=cfg.get('function_name'),
+ FunctionName=cfg.get("function_name"),
)
- versions = response.get('Versions')
- if len(response.get('Versions')) < keep_last_versions:
- print('Nothing to delete. (Too few versions published)')
+ versions = response.get("Versions")
+ if len(response.get("Versions")) < keep_last_versions:
+ print("Nothing to delete. (Too few versions published)")
else:
- version_numbers = [elem.get('Version') for elem in
- versions[1:-keep_last_versions]]
+ version_numbers = [
+ elem.get("Version") for elem in versions[1:-keep_last_versions]
+ ]
for version_number in version_numbers:
try:
client.delete_function(
- FunctionName=cfg.get('function_name'),
+ FunctionName=cfg.get("function_name"),
Qualifier=version_number,
)
except botocore.exceptions.ClientError as e:
- print('Skipping Version {}: {}'
- .format(version_number, e.message))
+ print(f"Skipping Version {version_number}: {e}")
def deploy(
- src, use_requirements=False, local_package=None,
- config_file='config.yaml',
+ src,
+ requirements=None,
+ local_package=None,
+ config_file="config.yaml",
+ profile_name=None,
+ preserve_vpc=False,
):
"""Deploys a new function to AWS Lambda.
@@ -93,26 +120,35 @@ def deploy(
"""
# Load and parse the config file.
path_to_config_file = os.path.join(src, config_file)
- cfg = read(path_to_config_file, loader=yaml.load)
+ cfg = read_cfg(path_to_config_file, profile_name)
# Copy all the pip dependencies required to run your code into a temporary
# folder then add the handler file in the root of this directory.
# Zip the contents of this folder into a single file and output to the dist
# directory.
path_to_zip_file = build(
- src, config_file=config_file,
- use_requirements=use_requirements,
+ src,
+ config_file=config_file,
+ requirements=requirements,
local_package=local_package,
)
- if function_exists(cfg, cfg.get('function_name')):
- update_function(cfg, path_to_zip_file)
+ existing_config = get_function_config(cfg)
+ if existing_config:
+ update_function(
+ cfg, path_to_zip_file, existing_config, preserve_vpc=preserve_vpc
+ )
else:
create_function(cfg, path_to_zip_file)
def deploy_s3(
- src, use_requirements=False, local_package=None, config_file='config.yaml',
+ src,
+ requirements=None,
+ local_package=None,
+ config_file="config.yaml",
+ profile_name=None,
+ preserve_vpc=False,
):
"""Deploys a new function via AWS S3.
@@ -125,28 +161,41 @@ def deploy_s3(
"""
# Load and parse the config file.
path_to_config_file = os.path.join(src, config_file)
- cfg = read(path_to_config_file, loader=yaml.load)
+ cfg = read_cfg(path_to_config_file, profile_name)
# Copy all the pip dependencies required to run your code into a temporary
# folder then add the handler file in the root of this directory.
# Zip the contents of this folder into a single file and output to the dist
# directory.
path_to_zip_file = build(
- src, config_file=config_file, use_requirements=use_requirements,
+ src,
+ config_file=config_file,
+ requirements=requirements,
local_package=local_package,
)
use_s3 = True
s3_file = upload_s3(cfg, path_to_zip_file, use_s3)
- if function_exists(cfg, cfg.get('function_name')):
- update_function(cfg, path_to_zip_file, use_s3, s3_file)
+ existing_config = get_function_config(cfg)
+ if existing_config:
+ update_function(
+ cfg,
+ path_to_zip_file,
+ existing_config,
+ use_s3=use_s3,
+ s3_file=s3_file,
+ preserve_vpc=preserve_vpc,
+ )
else:
- create_function(cfg, path_to_zip_file, use_s3, s3_file)
+ create_function(cfg, path_to_zip_file, use_s3=use_s3, s3_file=s3_file)
def upload(
- src, use_requirements=False, local_package=None,
- config_file='config.yaml',
+ src,
+ requirements=None,
+ local_package=None,
+ config_file="config.yaml",
+ profile_name=None,
):
"""Uploads a new function to AWS S3.
@@ -159,14 +208,16 @@ def upload(
"""
# Load and parse the config file.
path_to_config_file = os.path.join(src, config_file)
- cfg = read(path_to_config_file, loader=yaml.load)
+ cfg = read_cfg(path_to_config_file, profile_name)
# Copy all the pip dependencies required to run your code into a temporary
# folder then add the handler file in the root of this directory.
# Zip the contents of this folder into a single file and output to the dist
# directory.
path_to_zip_file = build(
- src, config_file=config_file, use_requirements=use_requirements,
+ src,
+ config_file=config_file,
+ requirements=requirements,
local_package=local_package,
)
@@ -174,7 +225,10 @@ def upload(
def invoke(
- src, event_file='event.json', config_file='config.yaml',
+ src,
+ event_file="event.json",
+ config_file="config.yaml",
+ profile_name=None,
verbose=False,
):
"""Simulates a call to your function.
@@ -189,11 +243,15 @@ def invoke(
"""
# Load and parse the config file.
path_to_config_file = os.path.join(src, config_file)
- cfg = read(path_to_config_file, loader=yaml.load)
+ cfg = read_cfg(path_to_config_file, profile_name)
+
+ # Set AWS_PROFILE environment variable based on `--profile` option.
+ if profile_name:
+ os.environ["AWS_PROFILE"] = profile_name
# Load environment variables from the config file into the actual
# environment.
- env_vars = cfg.get('environment_variables')
+ env_vars = cfg.get("environment_variables")
if env_vars:
for key, value in env_vars.items():
os.environ[key] = get_environment_variable_value(value)
@@ -208,22 +266,27 @@ def invoke(
except ValueError:
sys.path.append(src)
- handler = cfg.get('handler')
+ handler = cfg.get("handler")
# Inspect the handler string (.) and translate it
# into a function we can execute.
fn = get_callable_handler_function(src, handler)
- # TODO: look into mocking the ``context`` variable, currently being passed
- # as None.
+ timeout = cfg.get("timeout")
+ if timeout:
+ context = LambdaContext(cfg.get("function_name"), timeout)
+ else:
+ context = LambdaContext(cfg.get("function_name"))
start = time.time()
- results = fn(event, None)
+ results = fn(event, context)
end = time.time()
- print('{0}'.format(results))
+ print("{0}".format(results))
if verbose:
- print('\nexecution time: {:.8f}s\nfunction execution '
- 'timeout: {:2}s'.format(end - start, cfg.get('timeout', 15)))
+ print(
+ "\nexecution time: {:.8f}s\nfunction execution "
+ "timeout: {:2}s".format(end - start, cfg.get("timeout", 15))
+ )
def init(src, minimal=False):
@@ -236,10 +299,10 @@ def init(src, minimal=False):
"""
templates_path = os.path.join(
- os.path.dirname(os.path.abspath(__file__)), 'project_templates',
+ os.path.dirname(os.path.abspath(__file__)), "project_templates",
)
for filename in os.listdir(templates_path):
- if (minimal and filename == 'event.json') or filename.endswith('.pyc'):
+ if (minimal and filename == "event.json") or filename.endswith(".pyc"):
continue
dest_path = os.path.join(templates_path, filename)
@@ -248,7 +311,11 @@ def init(src, minimal=False):
def build(
- src, use_requirements=False, local_package=None, config_file='config.yaml',
+ src,
+ requirements=None,
+ local_package=None,
+ config_file="config.yaml",
+ profile_name=None,
):
"""Builds the file bundle.
@@ -261,67 +328,65 @@ def build(
"""
# Load and parse the config file.
path_to_config_file = os.path.join(src, config_file)
- cfg = read(path_to_config_file, loader=yaml.load)
+ cfg = read_cfg(path_to_config_file, profile_name)
# Get the absolute path to the output directory and create it if it doesn't
# already exist.
- dist_directory = cfg.get('dist_directory', 'dist')
+ dist_directory = cfg.get("dist_directory", "dist")
path_to_dist = os.path.join(src, dist_directory)
mkdir(path_to_dist)
# Combine the name of the Lambda function with the current timestamp to use
# for the output filename.
- function_name = cfg.get('function_name')
- output_filename = '{0}-{1}.zip'.format(timestamp(), function_name)
+ function_name = cfg.get("function_name")
+ output_filename = "{0}-{1}.zip".format(timestamp(), function_name)
- path_to_temp = mkdtemp(prefix='aws-lambda')
+ path_to_temp = mkdtemp(prefix="aws-lambda")
pip_install_to_target(
- path_to_temp,
- use_requirements=use_requirements,
- local_package=local_package,
+ path_to_temp, requirements=requirements, local_package=local_package,
)
# Hack for Zope.
- if 'zope' in os.listdir(path_to_temp):
+ if "zope" in os.listdir(path_to_temp):
print(
- 'Zope packages detected; fixing Zope package paths to '
- 'make them importable.',
+ "Zope packages detected; fixing Zope package paths to "
+ "make them importable.",
)
# Touch.
- with open(os.path.join(path_to_temp, 'zope/__init__.py'), 'wb'):
+ with open(os.path.join(path_to_temp, "zope/__init__.py"), "wb"):
pass
# Gracefully handle whether ".zip" was included in the filename or not.
output_filename = (
- '{0}.zip'.format(output_filename)
- if not output_filename.endswith('.zip')
+ "{0}.zip".format(output_filename)
+ if not output_filename.endswith(".zip")
else output_filename
)
# Allow definition of source code directories we want to build into our
# zipped package.
- build_config = defaultdict(**cfg.get('build', {}))
- build_source_directories = build_config.get('source_directories', '')
+ build_config = defaultdict(**cfg.get("build", {}))
+ build_source_directories = build_config.get("source_directories", "")
build_source_directories = (
build_source_directories
if build_source_directories is not None
- else ''
+ else ""
)
source_directories = [
- d.strip() for d in build_source_directories.split(',')
+ d.strip() for d in build_source_directories.split(",")
]
files = []
for filename in os.listdir(src):
if os.path.isfile(filename):
- if filename == '.DS_Store':
+ if filename == ".DS_Store":
continue
if filename == config_file:
continue
- print('Bundling: %r' % filename)
+ print("Bundling: %r" % filename)
files.append(os.path.join(src, filename))
elif os.path.isdir(filename) and filename in source_directories:
- print('Bundling directory: %r' % filename)
+ print("Bundling directory: %r" % filename)
files.append(os.path.join(src, filename))
# "cd" into `temp_path` directory.
@@ -332,18 +397,22 @@ def build(
# Copy handler file into root of the packages folder.
copyfile(f, os.path.join(path_to_temp, filename))
+ copystat(f, os.path.join(path_to_temp, filename))
elif os.path.isdir(f):
- destination_folder = os.path.join(path_to_temp, f[len(src) + 1:])
+ src_path_length = len(src) + 1
+ destination_folder = os.path.join(
+ path_to_temp, f[src_path_length:]
+ )
copytree(f, destination_folder)
# Zip them together into a single file.
# TODO: Delete temp directory created once the archive has been compiled.
- path_to_zip_file = archive('./', path_to_dist, output_filename)
+ path_to_zip_file = archive("./", path_to_dist, output_filename)
return path_to_zip_file
def get_callable_handler_function(src, handler):
- """Tranlate a string of the form "module.function" into a callable
+ """Translate a string of the form "module.function" into a callable
function.
:param str src:
@@ -355,7 +424,7 @@ def get_callable_handler_function(src, handler):
# "cd" into `src` directory.
os.chdir(src)
- module_name, function_name = handler.split('.')
+ module_name, function_name = handler.split(".")
filename = get_handler_filename(handler)
path_to_module_file = os.path.join(src, filename)
@@ -369,8 +438,8 @@ def get_handler_filename(handler):
:param str handler:
A dot delimited string representing the `.`.
"""
- module_name, _ = handler.split('.')
- return '{0}.py'.format(module_name)
+ module_name, _ = handler.split(".")
+ return "{0}.py".format(module_name)
def _install_packages(path, packages):
@@ -384,46 +453,65 @@ def _install_packages(path, packages):
:param list packages:
A list of packages to be installed via pip.
"""
+
def _filter_blacklist(package):
- blacklist = ['-i', '#', 'Python==', 'python-lambda==']
+ blacklist = ["-i", "#", "Python==", "python-lambda=="]
return all(package.startswith(entry) is False for entry in blacklist)
+
filtered_packages = filter(_filter_blacklist, packages)
for package in filtered_packages:
- if package.startswith('-e '):
- package = package.replace('-e ', '')
-
- print('Installing {package}'.format(package=package))
- pip.main(['install', package, '-t', path, '--ignore-installed'])
+ if package.startswith("-e "):
+ package = package.replace("-e ", "")
+
+ print("Installing {package}".format(package=package))
+ subprocess.check_call(
+ [
+ sys.executable,
+ "-m",
+ "pip",
+ "install",
+ package,
+ "-t",
+ path,
+ "--ignore-installed",
+ ]
+ )
+ print(
+ "Install directory contents are now: {directory}".format(
+ directory=os.listdir(path)
+ )
+ )
-def pip_install_to_target(path, use_requirements=False, local_package=None):
+def pip_install_to_target(path, requirements=None, local_package=None):
"""For a given active virtualenv, gather all installed pip packages then
copy (re-install) them to the path provided.
:param str path:
Path to copy installed pip packages to.
- :param bool use_requirements:
- If set, only the packages in the requirements.txt file are installed.
- The requirements.txt file needs to be in the same directory as the
- project which shall be deployed.
- Defaults to false and installs all pacakges found via pip freeze if
- not set.
+ :param str requirements:
+ If set, only the packages in the supplied requirements file are
+ installed.
+ If not set then installs all packages found via pip freeze.
:param str local_package:
The path to a local package which should be included in the deploy as
well (and/or is not available on PyPi)
"""
packages = []
- if not use_requirements:
- print('Gathering pip packages')
- packages.extend(pip.operations.freeze.freeze())
+ if not requirements:
+ print("Gathering pip packages")
+ pkgStr = subprocess.check_output(
+ [sys.executable, "-m", "pip", "freeze"]
+ )
+ packages.extend(pkgStr.decode("utf-8").splitlines())
else:
- if os.path.exists('requirements.txt'):
- print('Gathering requirement packages')
- data = read('requirements.txt')
+ if os.path.exists(requirements):
+ print("Gathering requirement packages")
+ data = read(requirements)
packages.extend(data.splitlines())
if not packages:
- print('No dependency packages installed!')
+ print("No dependency packages installed!")
if local_package is not None:
if not isinstance(local_package, (list, tuple)):
@@ -435,231 +523,325 @@ def pip_install_to_target(path, use_requirements=False, local_package=None):
def get_role_name(region, account_id, role):
"""Shortcut to insert the `account_id` and `role` into the iam string."""
- prefix = ARN_PREFIXES.get(region, 'aws')
- return 'arn:{0}:iam::{1}:role/{2}'.format(prefix, account_id, role)
+ prefix = ARN_PREFIXES.get(region, "aws")
+ return "arn:{0}:iam::{1}:role/{2}".format(prefix, account_id, role)
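
`get_role_name` builds the role ARN with a partition prefix looked up per region. `ARN_PREFIXES` is defined elsewhere in the module; the two non-default entries below are assumptions based on AWS's partition names:

```python
ARN_PREFIXES = {"cn-north-1": "aws-cn", "us-gov-west-1": "aws-us-gov"}

def get_role_name(region, account_id, role):
    # Regions outside the standard partition need "aws-cn"/"aws-us-gov" ARNs.
    prefix = ARN_PREFIXES.get(region, "aws")
    return "arn:{0}:iam::{1}:role/{2}".format(prefix, account_id, role)

print(get_role_name("us-east-1", "123456789012", "lambda_basic_execution"))
# arn:aws:iam::123456789012:role/lambda_basic_execution
print(get_role_name("cn-north-1", "123456789012", "lambda_basic_execution"))
# arn:aws-cn:iam::123456789012:role/lambda_basic_execution
```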
-def get_account_id(aws_access_key_id, aws_secret_access_key, region=None):
+def get_account_id(
+ profile_name, aws_access_key_id, aws_secret_access_key, region=None,
+):
"""Query STS for a users' account_id"""
client = get_client(
- 'sts', aws_access_key_id, aws_secret_access_key,
- region,
+ "sts", profile_name, aws_access_key_id, aws_secret_access_key, region,
)
- return client.get_caller_identity().get('Account')
+ return client.get_caller_identity().get("Account")
-def get_client(client, aws_access_key_id, aws_secret_access_key, region=None):
+def get_client(
+ client,
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ region=None,
+):
"""Shortcut for getting an initialized instance of the boto3 client."""
- return boto3.client(
- client,
+ boto3.setup_default_session(
+ profile_name=profile_name,
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
region_name=region,
)
+ return boto3.client(client)
-def create_function(cfg, path_to_zip_file, *use_s3, **s3_file):
+def create_function(cfg, path_to_zip_file, use_s3=False, s3_file=None):
"""Register and upload a function to AWS Lambda."""
- print('Creating your new Lambda function')
+ print("Creating your new Lambda function")
byte_stream = read(path_to_zip_file, binary_file=True)
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
account_id = get_account_id(
- aws_access_key_id, aws_secret_access_key, cfg.get('region'),
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region",),
)
role = get_role_name(
- cfg.get('region'), account_id,
- cfg.get('role', 'lambda_basic_execution'),
+ cfg.get("region"),
+ account_id,
+ cfg.get("role", "lambda_basic_execution"),
)
client = get_client(
- 'lambda', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'),
+ "lambda",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
)
# Do we prefer development variable over config?
- buck_name = (
- os.environ.get('S3_BUCKET_NAME') or cfg.get('bucket_name')
- )
- func_name = (
- os.environ.get('LAMBDA_FUNCTION_NAME') or cfg.get('function_name')
+ buck_name = os.environ.get("S3_BUCKET_NAME") or cfg.get("bucket_name")
+ func_name = os.environ.get("LAMBDA_FUNCTION_NAME") or cfg.get(
+ "function_name"
)
- print('Creating lambda function with name: {}'.format(func_name))
+ print("Creating lambda function with name: {}".format(func_name))
if use_s3:
kwargs = {
- 'FunctionName': func_name,
- 'Runtime': cfg.get('runtime', 'python2.7'),
- 'Role': role,
- 'Handler': cfg.get('handler'),
- 'Code': {
- 'S3Bucket': '{}'.format(buck_name),
- 'S3Key': '{}'.format(s3_file),
+ "FunctionName": func_name,
+ "Runtime": cfg.get("runtime", "python2.7"),
+ "Role": role,
+ "Handler": cfg.get("handler"),
+ "Code": {
+ "S3Bucket": "{}".format(buck_name),
+ "S3Key": "{}".format(s3_file),
+ },
+ "Description": cfg.get("description", ""),
+ "Timeout": cfg.get("timeout", 15),
+ "MemorySize": cfg.get("memory_size", 512),
+ "VpcConfig": {
+ "SubnetIds": cfg.get("subnet_ids", []),
+ "SecurityGroupIds": cfg.get("security_group_ids", []),
},
- 'Description': cfg.get('description'),
- 'Timeout': cfg.get('timeout', 15),
- 'MemorySize': cfg.get('memory_size', 512),
- 'Publish': True,
+ "Publish": True,
}
else:
kwargs = {
- 'FunctionName': func_name,
- 'Runtime': cfg.get('runtime', 'python2.7'),
- 'Role': role,
- 'Handler': cfg.get('handler'),
- 'Code': {'ZipFile': byte_stream},
- 'Description': cfg.get('description'),
- 'Timeout': cfg.get('timeout', 15),
- 'MemorySize': cfg.get('memory_size', 512),
- 'Publish': True,
+ "FunctionName": func_name,
+ "Runtime": cfg.get("runtime", "python2.7"),
+ "Role": role,
+ "Handler": cfg.get("handler"),
+ "Code": {"ZipFile": byte_stream},
+ "Description": cfg.get("description", ""),
+ "Timeout": cfg.get("timeout", 15),
+ "MemorySize": cfg.get("memory_size", 512),
+ "VpcConfig": {
+ "SubnetIds": cfg.get("subnet_ids", []),
+ "SecurityGroupIds": cfg.get("security_group_ids", []),
+ },
+ "Publish": True,
}
- if 'environment_variables' in cfg:
+ if "tags" in cfg:
+ kwargs.update(
+ Tags={key: str(value) for key, value in cfg.get("tags").items()}
+ )
+
+ if "environment_variables" in cfg:
kwargs.update(
Environment={
- 'Variables': {
+ "Variables": {
key: get_environment_variable_value(value)
- for key, value
- in cfg.get('environment_variables').items()
+ for key, value in cfg.get("environment_variables").items()
},
},
)
client.create_function(**kwargs)
+ concurrency = get_concurrency(cfg)
+ if concurrency > 0:
+ client.put_function_concurrency(
+ FunctionName=func_name, ReservedConcurrentExecutions=concurrency
+ )
+
-def update_function(cfg, path_to_zip_file, *use_s3, **s3_file):
+def update_function(
+ cfg,
+ path_to_zip_file,
+ existing_cfg,
+ use_s3=False,
+ s3_file=None,
+ preserve_vpc=False,
+):
"""Updates the code of an existing Lambda function"""
- print('Updating your Lambda function')
+ print("Updating your Lambda function")
byte_stream = read(path_to_zip_file, binary_file=True)
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
account_id = get_account_id(
- aws_access_key_id, aws_secret_access_key, cfg.get('region'),
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region",),
)
role = get_role_name(
- cfg.get('region'), account_id,
- cfg.get('role', 'lambda_basic_execution'),
+ cfg.get("region"),
+ account_id,
+ cfg.get("role", "lambda_basic_execution"),
)
client = get_client(
- 'lambda', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'),
+ "lambda",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
)
# Do we prefer development variable over config?
- buck_name = (
- os.environ.get('S3_BUCKET_NAME') or cfg.get('bucket_name')
- )
+ buck_name = os.environ.get("S3_BUCKET_NAME") or cfg.get("bucket_name")
if use_s3:
client.update_function_code(
- FunctionName=cfg.get('function_name'),
- S3Bucket='{}'.format(buck_name),
- S3Key='{}'.format(s3_file),
+ FunctionName=cfg.get("function_name"),
+ S3Bucket="{}".format(buck_name),
+ S3Key="{}".format(s3_file),
Publish=True,
)
else:
client.update_function_code(
- FunctionName=cfg.get('function_name'),
+ FunctionName=cfg.get("function_name"),
ZipFile=byte_stream,
Publish=True,
)
+ # Wait for function to be updated
+ waiter = client.get_waiter("function_updated")
+ waiter.wait(FunctionName=cfg.get("function_name"))
+
kwargs = {
- 'FunctionName': cfg.get('function_name'),
- 'Role': role,
- 'Runtime': cfg.get('runtime'),
- 'Handler': cfg.get('handler'),
- 'Description': cfg.get('description'),
- 'Timeout': cfg.get('timeout', 15),
- 'MemorySize': cfg.get('memory_size', 512),
- 'VpcConfig': {
- 'SubnetIds': cfg.get('subnet_ids', []),
- 'SecurityGroupIds': cfg.get('security_group_ids', []),
- },
+ "FunctionName": cfg.get("function_name"),
+ "Role": role,
+ "Runtime": cfg.get("runtime"),
+ "Handler": cfg.get("handler"),
+ "Description": cfg.get("description", ""),
+ "Timeout": cfg.get("timeout", 15),
+ "MemorySize": cfg.get("memory_size", 512),
}
- if 'environment_variables' in cfg:
+ if preserve_vpc:
+ kwargs["VpcConfig"] = existing_cfg.get("Configuration", {}).get(
+ "VpcConfig"
+ )
+ if kwargs["VpcConfig"] is None:
+ kwargs["VpcConfig"] = {
+ "SubnetIds": cfg.get("subnet_ids", []),
+ "SecurityGroupIds": cfg.get("security_group_ids", []),
+ }
+ else:
+ del kwargs["VpcConfig"]["VpcId"]
+ else:
+ kwargs["VpcConfig"] = {
+ "SubnetIds": cfg.get("subnet_ids", []),
+ "SecurityGroupIds": cfg.get("security_group_ids", []),
+ }
+
+ if "environment_variables" in cfg:
kwargs.update(
Environment={
- 'Variables': {
+ "Variables": {
key: str(get_environment_variable_value(value))
- for key, value
- in cfg.get('environment_variables').items()
+ for key, value in cfg.get("environment_variables").items()
},
},
)
- client.update_function_configuration(**kwargs)
+ ret = client.update_function_configuration(**kwargs)
+
+ concurrency = get_concurrency(cfg)
+ if concurrency > 0:
+ client.put_function_concurrency(
+ FunctionName=cfg.get("function_name"),
+ ReservedConcurrentExecutions=concurrency,
+ )
+ elif "Concurrency" in existing_cfg:
+ client.delete_function_concurrency(
+ FunctionName=cfg.get("function_name")
+ )
+
+ if "tags" in cfg:
+ tags = {key: str(value) for key, value in cfg.get("tags").items()}
+ if tags != existing_cfg.get("Tags"):
+ if existing_cfg.get("Tags"):
+ client.untag_resource(
+ Resource=ret["FunctionArn"],
+ TagKeys=list(existing_cfg["Tags"].keys()),
+ )
+ client.tag_resource(Resource=ret["FunctionArn"], Tags=tags)
def upload_s3(cfg, path_to_zip_file, *use_s3):
"""Upload a function to AWS S3."""
- print('Uploading your new Lambda function')
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
+ print("Uploading your new Lambda function")
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
client = get_client(
- 's3', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'),
+ "s3",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
)
- byte_stream = b''
- with open(path_to_zip_file, mode='rb') as fh:
+ byte_stream = b""
+ with open(path_to_zip_file, mode="rb") as fh:
byte_stream = fh.read()
- s3_key_prefix = cfg.get('s3_key_prefix', '/dist')
- checksum = hashlib.new('md5', byte_stream).hexdigest()
+ s3_key_prefix = cfg.get("s3_key_prefix", "/dist")
+ checksum = hashlib.new("md5", byte_stream).hexdigest()
timestamp = str(time.time())
- filename = '{prefix}{checksum}-{ts}.zip'.format(
+ filename = "{prefix}{checksum}-{ts}.zip".format(
prefix=s3_key_prefix, checksum=checksum, ts=timestamp,
)
# Do we prefer development variable over config?
- buck_name = (
- os.environ.get('S3_BUCKET_NAME') or cfg.get('bucket_name')
- )
- func_name = (
- os.environ.get('LAMBDA_FUNCTION_NAME') or cfg.get('function_name')
+ buck_name = os.environ.get("S3_BUCKET_NAME") or cfg.get("bucket_name")
+ func_name = os.environ.get("LAMBDA_FUNCTION_NAME") or cfg.get(
+ "function_name"
)
kwargs = {
- 'Bucket': '{}'.format(buck_name),
- 'Key': '{}'.format(filename),
- 'Body': byte_stream,
+ "Bucket": "{}".format(buck_name),
+ "Key": "{}".format(filename),
+ "Body": byte_stream,
}
client.put_object(**kwargs)
- print('Finished uploading {} to S3 bucket {}'.format(func_name, buck_name))
+ print("Finished uploading {} to S3 bucket {}".format(func_name, buck_name))
if use_s3:
return filename
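
`upload_s3` derives the object key from the zip's MD5 checksum plus a timestamp, so repeated uploads never collide. The key construction in isolation (the helper name is mine):

```python
import hashlib
import time

def build_s3_key(byte_stream, s3_key_prefix="/dist"):
    # prefix + md5-of-zip + upload time => a unique, content-addressed key.
    checksum = hashlib.new("md5", byte_stream).hexdigest()
    timestamp = str(time.time())
    return "{prefix}{checksum}-{ts}.zip".format(
        prefix=s3_key_prefix, checksum=checksum, ts=timestamp,
    )

key = build_s3_key(b"zip bytes")
print(key)  # e.g. /dist<32 hex chars>-<unix time>.zip
```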
-def function_exists(cfg, function_name):
- """Check whether a function exists or not"""
+def get_function_config(cfg):
+ """Check whether a function exists or not and return its config"""
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
+ function_name = cfg.get("function_name")
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
client = get_client(
- 'lambda', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'),
+ "lambda",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
)
- # Need to loop through until we get all of the lambda functions returned.
- # It appears to be only returning 50 functions at a time.
- functions = []
- functions_resp = client.list_functions()
- functions.extend([
- f['FunctionName'] for f in functions_resp.get('Functions', [])
- ])
- while('NextMarker' in functions_resp):
- functions_resp = client.list_functions(
- Marker=functions_resp.get('NextMarker'),
- )
- functions.extend([
- f['FunctionName'] for f in functions_resp.get('Functions', [])
- ])
- return function_name in functions
+ try:
+ return client.get_function(FunctionName=function_name)
+ except client.exceptions.ResourceNotFoundException as e:
+ if "Function not found" in str(e):
+ return False
+
+
+def get_concurrency(cfg):
+ """Return the Reserved Concurrent Executions if present in the config"""
+ concurrency = int(cfg.get("concurrency", 0))
+ return max(0, concurrency)
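
`get_concurrency` clamps the new `concurrency` setting so that a missing, zero, or negative value disables reserved concurrency instead of sending AWS a bad number:

```python
def get_concurrency(cfg):
    # Anything <= 0 means "don't reserve concurrency for this function".
    return max(0, int(cfg.get("concurrency", 0)))

print(get_concurrency({"concurrency": 500}))  # 500
print(get_concurrency({}))                    # 0
print(get_concurrency({"concurrency": -3}))   # 0
```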
+
+
+def read_cfg(path_to_config_file, profile_name):
+ cfg = read(path_to_config_file, loader=yaml.full_load)
+ if profile_name is not None:
+ cfg["profile"] = profile_name
+ elif "AWS_PROFILE" in os.environ:
+ cfg["profile"] = os.environ["AWS_PROFILE"]
+ return cfg
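
`read_cfg` establishes the profile precedence used throughout the new `--profile` support: an explicit flag beats the `AWS_PROFILE` environment variable, which beats whatever the config file says. The same resolution without the YAML read (the function name is mine):

```python
import os

def resolve_profile(cfg, profile_name=None):
    # Explicit --profile flag wins; AWS_PROFILE is the fallback.
    if profile_name is not None:
        cfg["profile"] = profile_name
    elif "AWS_PROFILE" in os.environ:
        cfg["profile"] = os.environ["AWS_PROFILE"]
    return cfg

os.environ["AWS_PROFILE"] = "staging"
print(resolve_profile({}, "prod"))  # {'profile': 'prod'}
print(resolve_profile({}))          # {'profile': 'staging'}
```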
diff --git a/aws_lambda/helpers.py b/aws_lambda/helpers.py
index ed3ef70f..edfd8e9d 100644
--- a/aws_lambda/helpers.py
+++ b/aws_lambda/helpers.py
@@ -2,6 +2,7 @@
import datetime as dt
import os
import re
+import time
import zipfile
@@ -11,7 +12,7 @@ def mkdir(path):
def read(path, loader=None, binary_file=False):
- open_mode = 'rb' if binary_file else 'r'
+ open_mode = "rb" if binary_file else "r"
with open(path, mode=open_mode) as fh:
if not loader:
return fh.read()
@@ -20,7 +21,7 @@ def read(path, loader=None, binary_file=False):
def archive(src, dest, filename):
output = os.path.join(dest, filename)
- zfh = zipfile.ZipFile(output, 'w', zipfile.ZIP_DEFLATED)
+ zfh = zipfile.ZipFile(output, "w", zipfile.ZIP_DEFLATED)
for root, _, files in os.walk(src):
for file in files:
@@ -29,7 +30,7 @@ def archive(src, dest, filename):
return os.path.join(dest, filename)
-def timestamp(fmt='%Y-%m-%d-%H%M%S'):
+def timestamp(fmt="%Y-%m-%d-%H%M%S"):
now = dt.datetime.utcnow()
return now.strftime(fmt)
@@ -37,7 +38,32 @@ def timestamp(fmt='%Y-%m-%d-%H%M%S'):
def get_environment_variable_value(val):
env_val = val
if val is not None and isinstance(val, str):
- match = re.search(r'^\${(?P<environment_key_name>\w+)*}$', val)
+ match = re.search(r"^\${(?P<environment_key_name>\w+)*}$", val)
if match is not None:
- env_val = os.environ.get(match.group('environment_key_name'))
+ env_val = os.environ.get(match.group("environment_key_name"))
return env_val
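
`get_environment_variable_value` lets `config.yaml` reference local environment variables with a `${NAME}` placeholder. A slightly simplified sketch of the expansion (the stray `*` quantifier from the original pattern is dropped here):

```python
import os
import re

def get_environment_variable_value(val):
    # Values written as ${NAME} are replaced from the local environment.
    if isinstance(val, str):
        match = re.search(r"^\${(?P<environment_key_name>\w+)}$", val)
        if match is not None:
            return os.environ.get(match.group("environment_key_name"))
    return val

os.environ["DB_HOST"] = "db.internal"
print(get_environment_variable_value("${DB_HOST}"))   # db.internal
print(get_environment_variable_value("plain value"))  # plain value
```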
+
+
+class LambdaContext:
+ def current_milli_time(self):
+ return int(round(time.time() * 1000))
+
+ def get_remaining_time_in_millis(self):
+ return max(
+ 0,
+ self.timeout_millis
+ - (self.current_milli_time() - self.start_time_millis),
+ )
+
+ def __init__(self, function_name, timeoutSeconds=3):
+ self.function_name = function_name
+ self.function_version = None
+ self.invoked_function_arn = None
+ self.memory_limit_in_mb = None
+ self.aws_request_id = None
+ self.log_group_name = None
+ self.log_stream_name = None
+ self.identity = None
+ self.client_context = None
+ self.timeout_millis = timeoutSeconds * 1000
+ self.start_time_millis = self.current_milli_time()
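
The new `LambdaContext` helper mimics the context object AWS passes to real handlers, which makes local `lambda invoke` runs look more like production. Trimmed to the timing behaviour (the unused metadata attributes are omitted):

```python
import time

class LambdaContext:
    def __init__(self, function_name, timeoutSeconds=3):
        self.function_name = function_name
        self.timeout_millis = timeoutSeconds * 1000
        self.start_time_millis = self.current_milli_time()

    def current_milli_time(self):
        return int(round(time.time() * 1000))

    def get_remaining_time_in_millis(self):
        # Counts down from the configured timeout; never goes negative.
        return max(
            0,
            self.timeout_millis
            - (self.current_milli_time() - self.start_time_millis),
        )

ctx = LambdaContext("my-func", timeoutSeconds=2)
time.sleep(0.1)
print(ctx.get_remaining_time_in_millis())  # a little under 2000
```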
diff --git a/aws_lambda/project_templates/config.yaml b/aws_lambda/project_templates/config.yaml
index 72bfdab4..bc293717 100644
--- a/aws_lambda/project_templates/config.yaml
+++ b/aws_lambda/project_templates/config.yaml
@@ -19,6 +19,7 @@ aws_secret_access_key:
# dist_directory: dist
# timeout: 15
# memory_size: 512
+# concurrency: 500
#
# Experimental Environment variables
@@ -26,6 +27,13 @@ environment_variables:
env_1: foo
env_2: baz
+# If `tags` is uncommented then tags will be set at creation or update
+# time. During an update all other tags will be removed except the tags
+# listed here.
+#tags:
+# tag_1: foo
+# tag_2: bar
+
# Build options
build:
source_directories: lib # a comma delimited list of directories in your project root that contains source to package.
diff --git a/aws_lambda/project_templates/service.py b/aws_lambda/project_templates/service.py
index e5bcb681..f04dba34 100644
--- a/aws_lambda/project_templates/service.py
+++ b/aws_lambda/project_templates/service.py
@@ -3,6 +3,6 @@
def handler(event, context):
# Your code goes here!
- e = event.get('e')
- pi = event.get('pi')
+ e = event.get("e")
+ pi = event.get("pi")
return e + pi
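
Fed the kind of payload `lambda init` writes into `event.json` (the exact numbers here are assumed), the template handler simply adds the two values:

```python
import json

def handler(event, context):
    e = event.get("e")
    pi = event.get("pi")
    return e + pi

event = json.loads('{"pi": 3.14, "e": 2.718}')
print(handler(event, None))  # about 5.858
```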
diff --git a/requirements.txt b/requirements.txt
deleted file mode 100644
index 3edd8ee4..00000000
--- a/requirements.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-boto3==1.4.4
-botocore==1.5.62
-click==6.6
-docutils==0.12
-futures==3.0.5
-jmespath==0.9.0
-pyaml==15.8.2
-python-dateutil==2.5.3
-PyYAML==3.11
-six==1.10.0
diff --git a/scripts/lambda b/scripts/lambda
index 3f3f7ae8..08c5eef8 100755
--- a/scripts/lambda
+++ b/scripts/lambda
@@ -9,7 +9,7 @@ import aws_lambda
CURRENT_DIR = os.getcwd()
-logging.getLogger('pip').setLevel(logging.CRITICAL)
+logging.getLogger("pip").setLevel(logging.CRITICAL)
@click.group()
@@ -17,16 +17,15 @@ def cli():
pass
-@click.command(help='Create a new function for Lambda.')
+@click.command(help="Create a new function for Lambda.")
@click.option(
- '--minimal',
+ "--minimal",
default=False,
is_flag=True,
- help='Exclude any unnecessary template files',
+ help="Exclude any unnecessary template files",
)
@click.argument(
- 'folder', nargs=-1,
- type=click.Path(file_okay=False, writable=True),
+ "folder", nargs=-1, type=click.Path(file_okay=False, writable=True),
)
def init(folder, minimal):
path = CURRENT_DIR
@@ -37,146 +36,173 @@ def init(folder, minimal):
aws_lambda.init(path, minimal=minimal)
-@click.command(help='Bundles package for deployment.')
+@click.command(help="Bundles package for deployment.")
@click.option(
- '--config-file',
- default='config.yaml',
- help='Alternate config file.',
+ "--config-file", default="config.yaml", help="Alternate config file.",
)
@click.option(
- '--use-requirements',
- default=False,
- is_flag=True,
- help='Install all packages defined in requirements.txt',
+ "--profile", help="AWS profile to use.",
+)
+@click.option(
+ "--requirements",
+ default=None,
+ type=click.Path(),
+ help="Install packages from supplied requirements file.",
)
@click.option(
- '--local-package',
+ "--local-package",
default=None,
type=click.Path(),
- help='Install local package as well.',
+ help="Install local package as well.",
multiple=True,
)
-def build(use_requirements, local_package, config_file):
+def build(requirements, local_package, config_file, profile):
aws_lambda.build(
CURRENT_DIR,
- use_requirements=use_requirements,
+ requirements=requirements,
local_package=local_package,
config_file=config_file,
+ profile_name=profile,
)
-@click.command(help='Run a local test of your function.')
+@click.command(help="Run a local test of your function.")
@click.option(
- '--event-file',
- default='event.json',
- help='Alternate event file.',
+ "--event-file", default="event.json", help="Alternate event file.",
)
@click.option(
- '--config-file',
- default='config.yaml',
- help='Alternate config file.',
+ "--config-file", default="config.yaml", help="Alternate config file.",
)
-@click.option('--verbose', '-v', is_flag=True)
-def invoke(event_file, config_file, verbose):
+@click.option(
+ "--profile", help="AWS profile to use.",
+)
+@click.option("--verbose", "-v", is_flag=True)
+def invoke(event_file, config_file, profile, verbose):
aws_lambda.invoke(
CURRENT_DIR,
event_file=event_file,
config_file=config_file,
+ profile_name=profile,
verbose=verbose,
)
-@click.command(help='Register and deploy your code to lambda.')
+@click.command(help="Register and deploy your code to lambda.")
@click.option(
- '--config-file',
- default='config.yaml',
- help='Alternate config file.',
+ "--config-file", default="config.yaml", help="Alternate config file.",
)
@click.option(
- '--use-requirements',
- default=False,
- is_flag=True,
- help='Install all packages defined in requirements.txt',
+ "--profile", help="AWS profile to use.",
)
@click.option(
- '--local-package',
+ "--requirements",
default=None,
type=click.Path(),
- help='Install local package as well.',
+ help="Install all packages defined in supplied requirements file",
+)
+@click.option(
+ "--local-package",
+ default=None,
+ type=click.Path(),
+ help="Install local package as well.",
multiple=True,
)
-def deploy(use_requirements, local_package, config_file):
+@click.option(
+ "--preserve-vpc",
+ default=False,
+ is_flag=True,
+ help="Preserve VPC configuration on existing functions",
+)
+def deploy(requirements, local_package, config_file, profile, preserve_vpc):
aws_lambda.deploy(
CURRENT_DIR,
- config_file=config_file,
- use_requirements=use_requirements,
+ requirements=requirements,
local_package=local_package,
+ config_file=config_file,
+ profile_name=profile,
+ preserve_vpc=preserve_vpc,
)
-@click.command(help='Upload your lambda to S3.')
+@click.command(help="Upload your lambda to S3.")
@click.option(
- '--use-requirements',
- default=False,
- is_flag=True,
- help='Install all packages defined in requirements.txt',
+ "--config-file", default="config.yaml", help="Alternate config file.",
+)
+@click.option(
+ "--profile", help="AWS profile to use.",
)
@click.option(
- '--local-package',
+ "--requirements",
default=None,
type=click.Path(),
- help='Install local package as well.',
+ help="Install all packages defined in supplied requirements file",
+)
+@click.option(
+ "--local-package",
+ default=None,
+ type=click.Path(),
+ help="Install local package as well.",
multiple=True,
)
-def upload(use_requirements, local_package):
- aws_lambda.upload(CURRENT_DIR, use_requirements, local_package)
+def upload(requirements, local_package, config_file, profile):
+ aws_lambda.upload(
+ CURRENT_DIR,
+ requirements=requirements,
+ local_package=local_package,
+ config_file=config_file,
+ profile_name=profile,
+ )
-@click.command(help='Deploy your lambda via S3.')
+@click.command(help="Deploy your lambda via S3.")
@click.option(
- '--config-file',
- default='config.yaml',
- help='Alternate config file.',
+ "--config-file", default="config.yaml", help="Alternate config file.",
)
@click.option(
- '--use-requirements',
- default=False,
- is_flag=True,
- help='Install all packages defined in requirements.txt',
+ "--profile", help="AWS profile to use.",
+)
+@click.option(
+ "--requirements",
+ default=None,
+ type=click.Path(),
+ help="Install all packages defined in supplied requirements file",
)
@click.option(
- '--local-package',
+ "--local-package",
default=None,
type=click.Path(),
multiple=True,
- help='Install local package as well.',
+ help="Install local package as well.",
)
-def deploy_s3(use_requirements, local_package, config_file):
+def deploy_s3(requirements, local_package, config_file, profile):
aws_lambda.deploy_s3(
- CURRENT_DIR, config_file=config_file,
- use_requirements=use_requirements,
+ CURRENT_DIR,
+ requirements=requirements,
local_package=local_package,
+ config_file=config_file,
+ profile_name=profile,
)
-@click.command(help='Delete old versions of your functions')
+@click.command(help="Delete old versions of your functions")
+@click.option(
+ "--config-file", default="config.yaml", help="Alternate config file.",
+)
@click.option(
- '--config-file',
- default='config.yaml',
- help='Alternate config file.',
+ "--profile", help="AWS profile to use.",
)
@click.option(
- '--keep-last',
+ "--keep-last",
type=int,
- prompt='Please enter the number of recent versions to keep',
+ prompt="Please enter the number of recent versions to keep",
)
-def cleanup(keep_last, config_file):
+def cleanup(keep_last, config_file, profile):
aws_lambda.cleanup_old_versions(
- CURRENT_DIR, keep_last, config_file=config_file,
+ CURRENT_DIR, keep_last, config_file=config_file, profile_name=profile,
)
-if __name__ == '__main__':
+if __name__ == "__main__":
cli.add_command(init)
cli.add_command(invoke)
cli.add_command(deploy)
diff --git a/setup.cfg b/setup.cfg
index 11b136a0..2d16abea 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,17 +1,20 @@
[bumpversion]
commit = True
tag = True
-current_version = 3.0.3
+current_version = 11.8.0
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(\-(?P<release>[a-z]+))?
serialize =
{major}.{minor}.{patch}
+[metadata]
+description-file = README.md
+
[bumpversion:file:setup.py]
[bumpversion:file:aws_lambda/__init__.py]
-[bdist_wheel]
-universal = 1
+[coverage:run]
+source = aws_lambda
[flake8]
exclude = docs
diff --git a/setup.py b/setup.py
old mode 100755
new mode 100644
index e5ef6457..bce3297e
--- a/setup.py
+++ b/setup.py
@@ -1,62 +1,89 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
+"""This module contains setup instructions for python-lambda."""
+import codecs
+import os
import sys
+from shutil import rmtree
-import pip
+from setuptools import Command
from setuptools import find_packages
from setuptools import setup
-with open('README.rst') as readme_file:
- readme = readme_file.read()
+REQUIREMENTS = [
+ "boto3>=1.4.4",
+ "click>=6.6",
+ "PyYAML==5.1",
+]
+PACKAGE_DATA = {
+ "aws_lambda": ["project_templates/*"],
+ "": ["*.json"],
+}
+THIS_DIR = os.path.abspath(os.path.dirname(__file__))
+README = os.path.join(THIS_DIR, "README.md")
-requirements = pip.req.parse_requirements(
- 'requirements.txt', session=pip.download.PipSession(),
-)
+with codecs.open(README, encoding="utf-8") as fh:
+ long_description = "\n" + fh.read()
-# Only install futures package if using a Python version <= 2.7
-if sys.version_info[0] == 2:
- pip_requirements = [str(r.req) for r in requirements]
-else:
- pip_requirements = [str(r.req)
- for r in requirements if 'futures' not in str(r.req)]
-test_requirements = [
- # TODO: put package test requirements here
-]
+class UploadCommand(Command):
+ """Support setup.py publish."""
+
+ description = "Build and publish the package."
+ user_options = []
+
+ @staticmethod
+ def status(s):
+ """Print in bold."""
+ print(f"\033[1m{s}\033[0m")
+
+ def initialize_options(self):
+ """Initialize options."""
+ pass
+
+ def finalize_options(self):
+ """Finialize options."""
+ pass
+
+ def run(self):
+ """Upload release to Pypi."""
+ try:
+ self.status("Removing previous builds ...")
+ rmtree(os.path.join(THIS_DIR, "dist"))
+ except Exception:
+ pass
+ self.status("Building Source distribution ...")
+ os.system(f"{sys.executable} setup.py sdist")
+ self.status("Uploading the package to PyPI via Twine ...")
+ os.system("twine upload dist/*")
+ sys.exit()
+
setup(
- name='python-lambda',
- version='3.0.3',
- description='The bare minimum for a Python app running on Amazon Lambda.',
- long_description=readme,
- author='Nick Ficano',
- author_email='nficano@gmail.com',
- url='https://github.com/nficano/python-lambda',
+ name="python-lambda",
+ version="11.8.0",
+ author="Nick Ficano",
+ author_email="nficano@gmail.com",
packages=find_packages(),
- package_data={
- 'aws_lambda': ['project_templates/*'],
- '': ['*.json'],
- },
- include_package_data=True,
- scripts=['scripts/lambda'],
- install_requires=pip_requirements,
- license='ISCL',
- zip_safe=False,
- keywords='python-lambda',
+ url="https://github.com/nficano/python-lambda",
+ license="ISCL",
+ install_requires=REQUIREMENTS,
+ package_data=PACKAGE_DATA,
+ test_suite="tests",
+ tests_require=[],
classifiers=[
- 'Development Status :: 2 - Pre-Alpha',
- 'Intended Audience :: Developers',
- 'License :: OSI Approved :: ISC License (ISCL)',
- 'Natural Language :: English',
- 'Programming Language :: Python :: 2',
- 'Programming Language :: Python :: 2.6',
- 'Programming Language :: Python :: 2.7',
- 'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.3',
- 'Programming Language :: Python :: 3.4',
- 'Programming Language :: Python :: 3.5',
- 'Programming Language :: Python :: 3.6',
+ "Development Status :: 2 - Pre-Alpha",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: ISC License (ISCL)",
+ "Natural Language :: English",
+ "Programming Language :: Python :: 3.5",
+ "Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
],
- test_suite='tests',
- tests_require=test_requirements,
+ description="The bare minimum for a Python app running on Amazon Lambda.",
+ include_package_data=True,
+ long_description_content_type="text/markdown",
+ long_description=long_description,
+ zip_safe=True,
+ cmdclass={"upload": UploadCommand},
+ scripts=["scripts/lambda"],
)
diff --git a/tests/__init__.py b/tests/__init__.py
old mode 100755
new mode 100644
diff --git a/tests/dev_requirements.txt b/tests/dev_requirements.txt
index af92d6d2..0886536b 100644
--- a/tests/dev_requirements.txt
+++ b/tests/dev_requirements.txt
@@ -1,2 +1,5 @@
bumpversion==0.5.3
-pre-commit==0.15.0
+pre-commit==2.6.0
+pytest>=3.6
+pytest-cov
+flake8
diff --git a/tests/functional/__init__.py b/tests/functional/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/tests/unit/__init__.py b/tests/unit/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/tests/unit/test_LambdaContext.py b/tests/unit/test_LambdaContext.py
new file mode 100644
index 00000000..16c66303
--- /dev/null
+++ b/tests/unit/test_LambdaContext.py
@@ -0,0 +1,15 @@
+import time
+import unittest
+
+from aws_lambda.helpers import LambdaContext
+
+
+class TestLambdaContext(unittest.TestCase):
+ def test_get_remaining_time_in_millis(self):
+ context = LambdaContext("function_name", 2000)
+ time.sleep(0.5)
+ self.assertTrue(context.get_remaining_time_in_millis() < 2000000)
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/tests/unit/test_readHelper.py b/tests/unit/test_readHelper.py
new file mode 100644
index 00000000..33c27529
--- /dev/null
+++ b/tests/unit/test_readHelper.py
@@ -0,0 +1,36 @@
+import os
+import unittest
+
+import yaml
+
+from aws_lambda.helpers import read
+
+
+class TestReadHelper(unittest.TestCase):
+
+ TEST_FILE = "readTmp.txt"
+
+ def setUp(self):
+ with open(TestReadHelper.TEST_FILE, "w") as tmp_file:
+ tmp_file.write("testYaml: testing")
+
+ def tearDown(self):
+ os.remove(TestReadHelper.TEST_FILE)
+
+ def test_read_no_loader_non_binary(self):
+ fileContents = read(TestReadHelper.TEST_FILE)
+ self.assertEqual(fileContents, "testYaml: testing")
+
+ def test_read_yaml_loader_non_binary(self):
+ testYaml = read(TestReadHelper.TEST_FILE, loader=yaml.full_load)
+ self.assertEqual(testYaml["testYaml"], "testing")
+
+ def test_read_no_loader_binary_mode(self):
+ fileContents = read(TestReadHelper.TEST_FILE, binary_file=True)
+ self.assertEqual(fileContents, b"testYaml: testing")
+
+ def test_read_yaml_loader_binary_mode(self):
+ testYaml = read(
+ TestReadHelper.TEST_FILE, loader=yaml.full_load, binary_file=True
+ )
+ self.assertEqual(testYaml["testYaml"], "testing")