+
+Python-lambda is a toolset for developing and deploying *serverless* Python code in AWS Lambda.
+
+# A call for contributors
+With python-lambda and pytube both continuing to gain momentum, I'm calling for
+contributors to help build out new features, review pull requests, fix bugs,
+and maintain overall code quality. If you're interested, please email me at
+nficano[at]gmail.com.
+
+# Description
+
+AWS Lambda is a service that allows you to write Python, Java, or Node.js code
+that gets executed in response to events like HTTP requests or files uploaded
+to S3.
+
+Working with Lambda is relatively easy, but the process of bundling and
+deploying your code is not as simple as it could be.
+
+The *Python-Lambda* library takes the guesswork out of developing your
+Python-Lambda services by providing you with a toolset to streamline the
+annoying parts.
+
+# Requirements
+
+* Python 2.7 or >= 3.6 (at the time of writing, these are the Python runtimes supported by AWS Lambda).
+* Pip (\~8.1.1)
+* Virtualenv (\~15.0.0)
+* Virtualenvwrapper (\~4.7.1)
+
+
+# Getting Started
+
+First, you must create an IAM Role on your AWS account called
+``lambda_basic_execution`` with the ``LambdaBasicExecution`` policy attached.
+
+On your computer, create a new virtualenv and project folder.
+
+```bash
+$ mkvirtualenv pylambda
+(pylambda) $ mkdir pylambda
+```
+
+Next, install *Python-Lambda* from PyPI using pip.
+
+```bash
+(pylambda) $ pip install python-lambda
+```
+
+From your ``pylambda`` directory, run the following to bootstrap your project.
+
+```bash
+(pylambda) $ lambda init
+```
+
+This will create the following files: ``event.json``, ``__init__.py``,
+``service.py``, and ``config.yaml``.
+
+Let's begin by opening ``config.yaml`` in the text editor of your choice. For
+the purpose of this tutorial, the only required information is
+``aws_access_key_id`` and ``aws_secret_access_key``. You can find these by
+logging into the AWS management console.
+
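For reference, a minimal ``config.yaml`` might look like the sketch below. The field names mirror those the toolkit reads (``lambda init`` generates a fuller template); treat the specific values as placeholders:

```yaml
region: us-east-1
function_name: my_lambda_function
handler: service.handler
description: My first lambda function
runtime: python2.7

# Replace these with the credentials from your AWS management console.
aws_access_key_id: YOUR_ACCESS_KEY_ID
aws_secret_access_key: YOUR_SECRET_ACCESS_KEY

timeout: 15
memory_size: 512
```
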
+Next, let's open ``service.py``; in it you'll find the following function:
+
+```python
+def handler(event, context):
+ # Your code goes here!
+ e = event.get('e')
+ pi = event.get('pi')
+ return e + pi
+```
+
+This is the handler function; this is the function AWS Lambda will invoke in
+response to an event. You will notice that in the sample code ``e`` and ``pi``
+are values in a ``dict``. AWS Lambda uses the ``event`` parameter to pass in
+event data to the handler.
+
+So if, for example, your function is responding to an HTTP request, ``event``
+will be the ``POST`` JSON data, and whatever your function returns will end up
+in your HTTP response payload.
+
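Since the handler is just an ordinary Python function, you can sanity-check this behavior locally by calling it with a plain ``dict`` standing in for the event; the sample handler ignores ``context``, so ``None`` is fine here:

```python
def handler(event, context):
    # Same sample handler as in service.py.
    e = event.get('e')
    pi = event.get('pi')
    return e + pi

# Simulate Lambda passing event data in as a dict.
result = handler({'pi': 3.14, 'e': 2.718}, None)
print(result)  # approximately 5.858
```
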
+Next let's open the ``event.json`` file:
+
+```json
+{
+ "pi": 3.14,
+ "e": 2.718
+}
+```
+Here you'll find the values of ``e`` and ``pi`` that are being referenced in
+the sample code.
+
+If you now try and run:
+
+```bash
+(pylambda) $ lambda invoke -v
+```
+
+You will get:
+```bash
+# 5.858
+# execution time: 0.00000310s
+# function execution timeout: 15s
+```
+
+As you've probably gathered, the ``lambda invoke`` command grabs the values
+stored in the ``event.json`` file and passes them to your function.
+
+The ``event.json`` file should help you develop your Lambda service locally.
+You can specify an alternate ``event.json`` file by passing the
+``--event-file=<filename>.json`` argument to ``lambda invoke``.
+
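Conceptually, ``lambda invoke`` does little more than parse the event file and hand the result to your handler. A rough local equivalent (writing a throwaway ``event.json`` purely for illustration):

```python
import json

# Create a sample event file like the one `lambda init` generates.
with open('event.json', 'w') as f:
    json.dump({"pi": 3.14, "e": 2.718}, f)

def handler(event, context):
    # Sample handler from service.py.
    return event.get('e') + event.get('pi')

# Roughly what `lambda invoke` does: load the event file and
# pass the parsed JSON to the handler.
with open('event.json') as f:
    event = json.load(f)

result = handler(event, None)
print(result)  # approximately 5.858
```
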
+When you're ready to deploy your code to Lambda simply run:
+
+```bash
+(pylambda) $ lambda deploy
+```
+
+The deploy script will evaluate your virtualenv and identify your project
+dependencies. It will package these up along with your handler function into a
+zip file, which it then uploads to AWS Lambda.
+
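In spirit, the bundling step just zips your dependencies and handler into one archive. A simplified sketch of that idea (the file written below is a hypothetical stand-in for your project):

```python
import os
import zipfile
from tempfile import mkdtemp

# Stand-in for a build directory holding your handler and dependencies.
build_dir = mkdtemp(prefix='aws-lambda')
with open(os.path.join(build_dir, 'service.py'), 'w') as f:
    f.write("def handler(event, context):\n    return 'ok'\n")

# Zip the directory contents, mirroring the archive `lambda deploy` uploads.
zip_path = os.path.join(build_dir, 'bundle.zip')
with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(build_dir):
        for name in files:
            if name == 'bundle.zip':
                continue
            full_path = os.path.join(root, name)
            zf.write(full_path, os.path.relpath(full_path, build_dir))

with zipfile.ZipFile(zip_path) as zf:
    names = zf.namelist()
```
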
+You can now log into the
+[AWS Lambda management console](https://console.aws.amazon.com/lambda/) to
+verify the code deployed successfully.
+
+### Wiring to an API endpoint
+
+If you're looking to develop a simple microservice, you can easily wire your
+function up to an HTTP endpoint.
+
+Begin by navigating to your [AWS Lambda management console](https://console.aws.amazon.com/lambda/) and
+clicking on your function. Click the API Endpoints tab and click "Add API endpoint".
+
+Under API endpoint type select "API Gateway".
+
+Next change Method to ``POST`` and Security to "Open" and click submit (NOTE:
+you should secure this for use in production, open security is used for demo
+purposes).
+
+Finally, you need to change the function's return value to comply with the
+format the API Gateway endpoint expects; the function should now look like
+this:
+
+```python
+def handler(event, context):
+ # Your code goes here!
+ e = event.get('e')
+ pi = event.get('pi')
+ return {
+ "statusCode": 200,
+ "headers": { "Content-Type": "application/json"},
+ "body": e + pi
+ }
+```
+
+Now try and run:
+
+```bash
+$ curl --header "Content-Type:application/json" \
+ --request POST \
+ --data '{"pi": 3.14, "e": 2.718}' \
+  https://<your-api-endpoint-url>
+# 5.8580000000000005
+```
+
+### Environment Variables
+Lambda functions support environment variables. In order to set environment
+variables for your deployed code to use, you can configure them in
+``config.yaml``. To load the value for the environment variable at the time of
+deployment (instead of hard coding them in your configuration file), you can
+use local environment values (see 'env3' in example code below).
+
+```yaml
+environment_variables:
+ env1: foo
+ env2: baz
+ env3: ${LOCAL_ENVIRONMENT_VARIABLE_NAME}
+```
+
+These environment variables will be set on the Lambda function when it is
+deployed. If your functions don't need environment variables, simply leave
+this section out of your config.
+
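Once deployed, these values appear as ordinary process environment variables, so your handler can read them with ``os.environ``; below, the variable is set manually only to simulate what Lambda does for you:

```python
import os

# Simulate what the config above produces; on AWS, Lambda sets this for you.
os.environ['env1'] = 'foo'

def handler(event, context):
    # Read a configured environment variable, with a fallback default.
    return os.environ.get('env1', 'unset')

result = handler({}, None)
print(result)  # foo
```
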
+### Uploading to S3
+You may find that you do not need the toolkit to fully
+deploy your Lambda or that your code bundle is too large to upload via the API.
+You can use the ``upload`` command to send the bundle to an S3 bucket of your
+choosing. Before doing this, you will need to set the following variables in
+``config.yaml``:
+
+```yaml
+role: basic_s3_upload
+bucket_name: 'example-bucket'
+s3_key_prefix: 'path/to/file/'
+```
+Your role must have ``s3:PutObject`` permission on the bucket/key that you
+specify for the upload to work properly. Once you have that set, you can
+execute ``lambda upload`` to initiate the transfer.
+
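The uploaded object's key is the configured prefix followed by the bundle's file name; a small sketch of that convention (the zip name here is hypothetical), with the actual transfer being a standard boto3 upload:

```python
import posixpath

# Values mirroring the config.yaml sketch above (illustrative).
bucket_name = 'example-bucket'
s3_key_prefix = 'path/to/file/'
zip_filename = '1585678800-my_function.zip'  # hypothetical bundle name

# S3 keys use forward slashes regardless of platform.
s3_key = posixpath.join(s3_key_prefix, zip_filename)
print(s3_key)  # path/to/file/1585678800-my_function.zip

# The transfer itself is a standard S3 upload, e.g. with boto3:
#   import boto3
#   boto3.client('s3').upload_file(path_to_zip, bucket_name, s3_key)
```
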
+### Deploying via S3
+You can also choose to use S3 as your source for Lambda deployments. This can
+be done by issuing ``lambda deploy-s3`` with the same variables/AWS permissions
+you'd set for executing the ``upload`` command.
+
+## Development
+Development of "python-lambda" is facilitated exclusively on GitHub.
+Contributions in the form of patches, tests, new features, and feature
+requests are very welcome and highly encouraged. Please open an issue if this
+tool does not function as you'd expect.
+
+### Environment Setup
+1. [Install pipenv](https://github.com/pypa/pipenv)
+2. [Install direnv](https://direnv.net/)
+3. [Install pre-commit](https://pre-commit.com/#install) (optional but preferred)
+4. ``cd`` into the project and enter "direnv allow" when prompted. This will
+   begin installing all the development dependencies.
+5. If you installed pre-commit, run ``pre-commit install`` inside the project
+   directory to set up the git hooks.
+
+### Releasing to PyPI
+Once you've pushed your changes to master, run **one** of the following:
+
+```sh
+# If you're releasing a major version:
+make deploy-major
+
+# If you're releasing a minor version:
+make deploy-minor
+
+# If you're releasing a patch version:
+make deploy-patch
+```
diff --git a/README.rst b/README.rst
deleted file mode 100644
index 35147df0..00000000
--- a/README.rst
+++ /dev/null
@@ -1,146 +0,0 @@
-========
-python-λ
-========
-
-.. image:: https://img.shields.io/pypi/v/python-lambda.svg
- :alt: Pypi
- :target: https://pypi.python.org/pypi/python-lambda/
-
-.. image:: https://img.shields.io/pypi/pyversions/python-lambda.svg
- :alt: Python Versions
- :target: https://pypi.python.org/pypi/python-lambda/
-
-Python-lambda is a toolset for developing and deploying *serverless* Python code in AWS Lambda.
-
-Description
-===========
-
-AWS Lambda is a service that allows you to write Python, Java, or Node.js code that gets executed in response to events like http requests or files uploaded to S3.
-
-Working with Lambda is relatively easy, but the process of bundling and deploying your code is not as simple as it could be.
-
-The *Python-Lambda* library takes away the guess work of developing your Python-Lambda services by providing you a toolset to streamline the annoying parts.
-
-Requirements
-============
-
-* Python 2.7 (At the time of writing this, AWS Lambda only supports Python 2.7).
-* Pip (~8.1.1)
-* Virtualenv (~15.0.0)
-* Virtualenvwrapper (~4.7.1)
-
-Getting Started
-===============
-
-Begin by creating a new virtualenv and project folder.
-
-.. code:: bash
-
- $ mkvirtualenv pylambda
- (pylambda) $ mkdir pylambda
-
-Next, download *Python-Lambda* using pip via pypi.
-
-.. code:: bash
-
- (pylambda) $ pip install python-lambda
-
-From your ``pylambda`` directory, run the following to bootstrap your project.
-
-.. code:: bash
-
- (pylambda) $ lambda init
-
-This will create the following files: ``event.json``, ``__init__.py``, ``service.py``, and ``config.yaml``.
-
-Let's begin by opening ``config.yaml`` in the text editor of your choice. For the purpose of this tutorial, the only required information is ``aws_access_key_id`` and ``aws_secret_access_key``. You can find these by logging into the AWS management console.
-
-Next let's open ``service.py``, in here you'll find the following function:
-
-.. code:: python
-
- def handler(event, context):
- # Your code goes here!
- e = event.get('e')
- pi = event.get('pi')
- return e + pi
-
-
-This is the handler function; this is the function AWS Lambda will invoke in response to an event. You will notice that in the sample code ``e`` and ``pi`` are values in a ``dict``. AWS Lambda uses the ``event`` parameter to pass in event data to the handler.
-
-So if, for example, your function is responding to an http request, ``event`` will be the ``POST`` JSON data and if your function returns something, the contents will be in your http response payload.
-
-Next let's open the ``event.json`` file:
-
-.. code:: json
-
- {
- "pi": 3.14,
- "e": 2.718
- }
-
-Here you'll find the values of ``e`` and ``pi`` that are being referenced in the sample code.
-
-If you now try and run:
-
-.. code:: bash
-
- (pylambda) $ lambda invoke -v
-
-You will get:
-
-.. code:: bash
-
- # 5.858
-
- # execution time: 0.00000310s
- # function execution timeout: 15s
-
-As you probably put together, the ``lambda invoke`` command grabs the values stored in the ``event.json`` file and passes them to your function.
-
-The ``event.json`` file should help you develop your Lambda service locally. You can specify an alternate ``event.json`` file by passing the ``--event-file=.json`` argument to ``lambda invoke``.
-
-When you're ready to deploy your code to Lambda simply run:
-
-.. code:: bash
-
- (pylambda) $ lambda deploy
-
-The deploy script will evaluate your virtualenv and identify your project dependencies. It will package these up along with your handler function to a zip file that it then uploads to AWS Lambda.
-
-You can now log into the `AWS Lambda management console `_ to verify the code deployed successfully.
-
-Wiring to an API endpoint
-=========================
-
-If you're looking to develop a simple microservice you can easily wire your function up to an http endpoint.
-
-Begin by navigating to your `AWS Lambda management console `_ and clicking on your function. Click the API Endpoints tab and click "Add API endpoint".
-
-Under API endpoint type select "API Gateway".
-
-Next change Method to ``POST`` and Security to "Open" and click submit (NOTE: you should secure this for use in production, open security is used for demo purposes).
-
-At last you need to change the return value of the function to comply with the standard defined for the API Gateway endpoint, the function should now look like this:
-
-.. code:: python
-
- def handler(event, context):
- # Your code goes here!
- e = event.get('e')
- pi = event.get('pi')
- return {
- "statusCode": 200,
- "headers": { "Content-Type": "application/json"},
- "body": e + pi
- }
-
-Now try and run:
-
-.. code:: bash
-
- $ curl --header "Content-Type:application/json" \
- --request POST \
- --data '{"pi": 3.14, "e": 2.718}' \
- https://
- # 5.8580000000000005
diff --git a/artwork/python-lambda.svg b/artwork/python-lambda.svg
new file mode 100644
index 00000000..0136f802
--- /dev/null
+++ b/artwork/python-lambda.svg
@@ -0,0 +1,27 @@
+
+
diff --git a/aws_lambda/__init__.py b/aws_lambda/__init__.py
old mode 100755
new mode 100644
index aad61f76..35145b50
--- a/aws_lambda/__init__.py
+++ b/aws_lambda/__init__.py
@@ -1,18 +1,28 @@
-# -*- coding: utf-8 -*-
# flake8: noqa
-__author__ = 'Nick Ficano'
-__email__ = 'nficano@gmail.com'
-__version__ = '0.4.0'
+__author__ = "Nick Ficano"
+__email__ = "nficano@gmail.com"
+__version__ = "11.8.0"
-from .aws_lambda import deploy, invoke, init, build, cleanup_old_versions
+from .aws_lambda import (
+ deploy,
+ deploy_s3,
+ invoke,
+ init,
+ build,
+ upload,
+ cleanup_old_versions,
+)
# Set default logging handler to avoid "No handler found" warnings.
import logging
+
try: # Python 2.7+
from logging import NullHandler
except ImportError:
+
class NullHandler(logging.Handler):
def emit(self, record):
pass
+
logging.getLogger(__name__).addHandler(NullHandler())
diff --git a/aws_lambda/aws_lambda.py b/aws_lambda/aws_lambda.py
old mode 100755
new mode 100644
index 4d701155..0b5ca884
--- a/aws_lambda/aws_lambda.py
+++ b/aws_lambda/aws_lambda.py
@@ -1,25 +1,58 @@
-# -*- coding: utf-8 -*-
-from __future__ import print_function
+import hashlib
import json
import logging
import os
+import subprocess
+import sys
import time
-from imp import load_source
-from shutil import copy, copyfile
+from collections import defaultdict
+
+from shutil import copy
+from shutil import copyfile
+from shutil import copystat
+from shutil import copytree
from tempfile import mkdtemp
-import botocore
import boto3
-import pip
+import botocore
import yaml
+import sys
+
+from .helpers import archive
+from .helpers import get_environment_variable_value
+from .helpers import LambdaContext
+from .helpers import mkdir
+from .helpers import read
+from .helpers import timestamp
-from .helpers import mkdir, read, archive, timestamp
+ARN_PREFIXES = {
+ "cn-north-1": "aws-cn",
+ "cn-northwest-1": "aws-cn",
+ "us-gov-west-1": "aws-us-gov",
+}
log = logging.getLogger(__name__)
-def cleanup_old_versions(src, keep_last_versions):
+def load_source(module_name, module_path):
+ """Loads a python module from the path of the corresponding file."""
+
+ if sys.version_info[0] == 3 and sys.version_info[1] >= 5:
+ import importlib.util
+ spec = importlib.util.spec_from_file_location(module_name, module_path)
+ module = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(module)
+ elif sys.version_info[0] == 3 and sys.version_info[1] < 5:
+ import importlib.machinery
+ loader = importlib.machinery.SourceFileLoader(module_name, module_path)
+ module = loader.load_module()
+ return module
+
+
+def cleanup_old_versions(
+ src, keep_last_versions, config_file="config.yaml", profile_name=None,
+):
"""Deletes old deployed versions of the function in AWS Lambda.
Won't delete $Latest and any aliased version
@@ -33,36 +66,49 @@ def cleanup_old_versions(src, keep_last_versions):
if keep_last_versions <= 0:
print("Won't delete all versions. Please do this manually")
else:
- path_to_config_file = os.path.join(src, 'config.yaml')
- cfg = read(path_to_config_file, loader=yaml.load)
-
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
-
- client = get_client('lambda', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'))
+ path_to_config_file = os.path.join(src, config_file)
+ cfg = read_cfg(path_to_config_file, profile_name)
+
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
+
+ client = get_client(
+ "lambda",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
+ )
response = client.list_versions_by_function(
- FunctionName=cfg.get("function_name")
+ FunctionName=cfg.get("function_name"),
)
versions = response.get("Versions")
if len(response.get("Versions")) < keep_last_versions:
print("Nothing to delete. (Too few versions published)")
else:
- version_numbers = [elem.get("Version") for elem in
- versions[1:-keep_last_versions]]
+ version_numbers = [
+ elem.get("Version") for elem in versions[1:-keep_last_versions]
+ ]
for version_number in version_numbers:
try:
client.delete_function(
FunctionName=cfg.get("function_name"),
- Qualifier=version_number
+ Qualifier=version_number,
)
except botocore.exceptions.ClientError as e:
- print("Skipping Version {}: {}".format(version_number,
- e.message))
+ print(f"Skipping Version {version_number}: {e}")
-def deploy(src, local_package=None):
+def deploy(
+ src,
+ requirements=None,
+ local_package=None,
+ config_file="config.yaml",
+ profile_name=None,
+ preserve_vpc=False,
+):
"""Deploys a new function to AWS Lambda.
:param str src:
@@ -73,22 +119,118 @@ def deploy(src, local_package=None):
well (and/or is not available on PyPi)
"""
# Load and parse the config file.
- path_to_config_file = os.path.join(src, 'config.yaml')
- cfg = read(path_to_config_file, loader=yaml.load)
+ path_to_config_file = os.path.join(src, config_file)
+ cfg = read_cfg(path_to_config_file, profile_name)
# Copy all the pip dependencies required to run your code into a temporary
# folder then add the handler file in the root of this directory.
# Zip the contents of this folder into a single file and output to the dist
# directory.
- path_to_zip_file = build(src, local_package)
+ path_to_zip_file = build(
+ src,
+ config_file=config_file,
+ requirements=requirements,
+ local_package=local_package,
+ )
- if function_exists(cfg, cfg.get('function_name')):
- update_function(cfg, path_to_zip_file)
+ existing_config = get_function_config(cfg)
+ if existing_config:
+ update_function(
+ cfg, path_to_zip_file, existing_config, preserve_vpc=preserve_vpc
+ )
else:
create_function(cfg, path_to_zip_file)
-def invoke(src, alt_event=None, verbose=False):
+def deploy_s3(
+ src,
+ requirements=None,
+ local_package=None,
+ config_file="config.yaml",
+ profile_name=None,
+ preserve_vpc=False,
+):
+ """Deploys a new function via AWS S3.
+
+ :param str src:
+ The path to your Lambda ready project (folder must contain a valid
+ config.yaml and handler module (e.g.: service.py).
+ :param str local_package:
+ The path to a local package with should be included in the deploy as
+ well (and/or is not available on PyPi)
+ """
+ # Load and parse the config file.
+ path_to_config_file = os.path.join(src, config_file)
+ cfg = read_cfg(path_to_config_file, profile_name)
+
+ # Copy all the pip dependencies required to run your code into a temporary
+ # folder then add the handler file in the root of this directory.
+ # Zip the contents of this folder into a single file and output to the dist
+ # directory.
+ path_to_zip_file = build(
+ src,
+ config_file=config_file,
+ requirements=requirements,
+ local_package=local_package,
+ )
+
+ use_s3 = True
+ s3_file = upload_s3(cfg, path_to_zip_file, use_s3)
+ existing_config = get_function_config(cfg)
+ if existing_config:
+ update_function(
+ cfg,
+ path_to_zip_file,
+ existing_config,
+ use_s3=use_s3,
+ s3_file=s3_file,
+ preserve_vpc=preserve_vpc,
+ )
+ else:
+ create_function(cfg, path_to_zip_file, use_s3=use_s3, s3_file=s3_file)
+
+
+def upload(
+ src,
+ requirements=None,
+ local_package=None,
+ config_file="config.yaml",
+ profile_name=None,
+):
+ """Uploads a new function to AWS S3.
+
+ :param str src:
+ The path to your Lambda ready project (folder must contain a valid
+ config.yaml and handler module (e.g.: service.py).
+ :param str local_package:
+ The path to a local package with should be included in the deploy as
+ well (and/or is not available on PyPi)
+ """
+ # Load and parse the config file.
+ path_to_config_file = os.path.join(src, config_file)
+ cfg = read_cfg(path_to_config_file, profile_name)
+
+ # Copy all the pip dependencies required to run your code into a temporary
+ # folder then add the handler file in the root of this directory.
+ # Zip the contents of this folder into a single file and output to the dist
+ # directory.
+ path_to_zip_file = build(
+ src,
+ config_file=config_file,
+ requirements=requirements,
+ local_package=local_package,
+ )
+
+ upload_s3(cfg, path_to_zip_file)
+
+
+def invoke(
+ src,
+ event_file="event.json",
+ config_file="config.yaml",
+ profile_name=None,
+ verbose=False,
+):
"""Simulates a call to your function.
:param str src:
@@ -100,32 +242,51 @@ def invoke(src, alt_event=None, verbose=False):
Whether to print out verbose details.
"""
# Load and parse the config file.
- path_to_config_file = os.path.join(src, 'config.yaml')
- cfg = read(path_to_config_file, loader=yaml.load)
+ path_to_config_file = os.path.join(src, config_file)
+ cfg = read_cfg(path_to_config_file, profile_name)
+
+ # Set AWS_PROFILE environment variable based on `--profile` option.
+ if profile_name:
+ os.environ["AWS_PROFILE"] = profile_name
+
+ # Load environment variables from the config file into the actual
+ # environment.
+ env_vars = cfg.get("environment_variables")
+ if env_vars:
+ for key, value in env_vars.items():
+ os.environ[key] = get_environment_variable_value(value)
# Load and parse event file.
- if alt_event:
- path_to_event_file = os.path.join(src, alt_event)
- else:
- path_to_event_file = os.path.join(src, 'event.json')
+ path_to_event_file = os.path.join(src, event_file)
event = read(path_to_event_file, loader=json.loads)
- handler = cfg.get('handler')
+ # Tweak to allow module to import local modules
+ try:
+ sys.path.index(src)
+ except ValueError:
+ sys.path.append(src)
+
+ handler = cfg.get("handler")
# Inspect the handler string (.) and translate it
# into a function we can execute.
fn = get_callable_handler_function(src, handler)
- # TODO: look into mocking the ``context`` variable, currently being passed
- # as None.
+ timeout = cfg.get("timeout")
+ if timeout:
+ context = LambdaContext(cfg.get("function_name"), timeout)
+ else:
+ context = LambdaContext(cfg.get("function_name"))
start = time.time()
- results = fn(event, None)
+ results = fn(event, context)
end = time.time()
print("{0}".format(results))
if verbose:
- print("\nexecution time: {:.8f}s\nfunction execution "
- "timeout: {:2}s".format(end - start, cfg.get('timeout', 15)))
+ print(
+ "\nexecution time: {:.8f}s\nfunction execution "
+ "timeout: {:2}s".format(end - start, cfg.get("timeout", 15))
+ )
def init(src, minimal=False):
@@ -137,16 +298,25 @@ def init(src, minimal=False):
Minimal possible template files (excludes event.json).
"""
- templates_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
- "project_templates")
+ templates_path = os.path.join(
+ os.path.dirname(os.path.abspath(__file__)), "project_templates",
+ )
for filename in os.listdir(templates_path):
- if (minimal and filename == 'event.json') or filename.endswith('.pyc'):
+ if (minimal and filename == "event.json") or filename.endswith(".pyc"):
continue
- destination = os.path.join(templates_path, filename)
- copy(destination, src)
+ dest_path = os.path.join(templates_path, filename)
+ if not os.path.isdir(dest_path):
+ copy(dest_path, src)
-def build(src, local_package=None):
+
+def build(
+ src,
+ requirements=None,
+ local_package=None,
+ config_file="config.yaml",
+ profile_name=None,
+):
"""Builds the file bundle.
:param str src:
@@ -157,53 +327,92 @@ def build(src, local_package=None):
well (and/or is not available on PyPi)
"""
# Load and parse the config file.
- path_to_config_file = os.path.join(src, 'config.yaml')
- cfg = read(path_to_config_file, loader=yaml.load)
+ path_to_config_file = os.path.join(src, config_file)
+ cfg = read_cfg(path_to_config_file, profile_name)
# Get the absolute path to the output directory and create it if it doesn't
# already exist.
- dist_directory = cfg.get('dist_directory', 'dist')
+ dist_directory = cfg.get("dist_directory", "dist")
path_to_dist = os.path.join(src, dist_directory)
mkdir(path_to_dist)
# Combine the name of the Lambda function with the current timestamp to use
# for the output filename.
- function_name = cfg.get('function_name')
+ function_name = cfg.get("function_name")
output_filename = "{0}-{1}.zip".format(timestamp(), function_name)
- path_to_temp = mkdtemp(prefix='aws-lambda')
- pip_install_to_target(path_to_temp, local_package)
+ path_to_temp = mkdtemp(prefix="aws-lambda")
+ pip_install_to_target(
+ path_to_temp, requirements=requirements, local_package=local_package,
+ )
+
+ # Hack for Zope.
+ if "zope" in os.listdir(path_to_temp):
+ print(
+ "Zope packages detected; fixing Zope package paths to "
+ "make them importable.",
+ )
+ # Touch.
+ with open(os.path.join(path_to_temp, "zope/__init__.py"), "wb"):
+ pass
# Gracefully handle whether ".zip" was included in the filename or not.
- output_filename = ('{0}.zip'.format(output_filename)
- if not output_filename.endswith('.zip')
- else output_filename)
+ output_filename = (
+ "{0}.zip".format(output_filename)
+ if not output_filename.endswith(".zip")
+ else output_filename
+ )
+
+ # Allow definition of source code directories we want to build into our
+ # zipped package.
+ build_config = defaultdict(**cfg.get("build", {}))
+ build_source_directories = build_config.get("source_directories", "")
+ build_source_directories = (
+ build_source_directories
+ if build_source_directories is not None
+ else ""
+ )
+ source_directories = [
+ d.strip() for d in build_source_directories.split(",")
+ ]
files = []
for filename in os.listdir(src):
if os.path.isfile(filename):
- if filename == '.DS_Store':
+ if filename == ".DS_Store":
continue
- if filename == 'config.yaml':
+ if filename == config_file:
continue
+ print("Bundling: %r" % filename)
+ files.append(os.path.join(src, filename))
+ elif os.path.isdir(filename) and filename in source_directories:
+ print("Bundling directory: %r" % filename)
files.append(os.path.join(src, filename))
# "cd" into `temp_path` directory.
os.chdir(path_to_temp)
for f in files:
- _, filename = os.path.split(f)
-
- # Copy handler file into root of the packages folder.
- copyfile(f, os.path.join(path_to_temp, filename))
+ if os.path.isfile(f):
+ _, filename = os.path.split(f)
+
+ # Copy handler file into root of the packages folder.
+ copyfile(f, os.path.join(path_to_temp, filename))
+ copystat(f, os.path.join(path_to_temp, filename))
+ elif os.path.isdir(f):
+ src_path_length = len(src) + 1
+ destination_folder = os.path.join(
+ path_to_temp, f[src_path_length:]
+ )
+ copytree(f, destination_folder)
# Zip them together into a single file.
# TODO: Delete temp directory created once the archive has been compiled.
- path_to_zip_file = archive('./', path_to_dist, output_filename)
+ path_to_zip_file = archive("./", path_to_dist, output_filename)
return path_to_zip_file
def get_callable_handler_function(src, handler):
- """Tranlate a string of the form "module.function" into a callable
+ """Translate a string of the form "module.function" into a callable
function.
:param str src:
@@ -215,7 +424,7 @@ def get_callable_handler_function(src, handler):
# "cd" into `src` directory.
os.chdir(src)
- module_name, function_name = handler.split('.')
+ module_name, function_name = handler.split(".")
filename = get_handler_filename(handler)
path_to_module_file = os.path.join(src, filename)
@@ -229,123 +438,410 @@ def get_handler_filename(handler):
:param str handler:
A dot delimited string representing the `.`.
"""
- module_name, _ = handler.split('.')
- return '{0}.py'.format(module_name)
+ module_name, _ = handler.split(".")
+ return "{0}.py".format(module_name)
+
+
+def _install_packages(path, packages):
+ """Install all packages listed to the target directory.
+
+ Ignores any package that includes Python itself and python-lambda as well
+ since its only needed for deploying and not running the code
+ :param str path:
+ Path to copy installed pip packages to.
+ :param list packages:
+ A list of packages to be installed via pip.
+ """
-def pip_install_to_target(path, local_package=None):
+ def _filter_blacklist(package):
+ blacklist = ["-i", "#", "Python==", "python-lambda=="]
+ return all(package.startswith(entry) is False for entry in blacklist)
+
+ filtered_packages = filter(_filter_blacklist, packages)
+ for package in filtered_packages:
+ if package.startswith("-e "):
+ package = package.replace("-e ", "")
+
+ print("Installing {package}".format(package=package))
+ subprocess.check_call(
+ [
+ sys.executable,
+ "-m",
+ "pip",
+ "install",
+ package,
+ "-t",
+ path,
+ "--ignore-installed",
+ ]
+ )
+ print(
+ "Install directory contents are now: {directory}".format(
+ directory=os.listdir(path)
+ )
+ )
+
+
+def pip_install_to_target(path, requirements=None, local_package=None):
"""For a given active virtualenv, gather all installed pip packages then
copy (re-install) them to the path provided.
:param str path:
Path to copy installed pip packages to.
+ :param str requirements:
+ If set, only the packages in the supplied requirements file are
+ installed.
+ If not set then installs all packages found via pip freeze.
:param str local_package:
The path to a local package with should be included in the deploy as
well (and/or is not available on PyPi)
"""
- print('Gathering pip packages')
- for r in pip.operations.freeze.freeze():
- if r.startswith('Python=='):
- # For some reason Python is coming up in pip freeze.
- continue
- elif r.startswith('-e '):
- r = r.replace('-e ','')
+ packages = []
+ if not requirements:
+ print("Gathering pip packages")
+ pkgStr = subprocess.check_output(
+ [sys.executable, "-m", "pip", "freeze"]
+ )
+ packages.extend(pkgStr.decode("utf-8").splitlines())
+ else:
+ if os.path.exists(requirements):
+ print("Gathering requirement packages")
+ data = read(requirements)
+ packages.extend(data.splitlines())
- print('Installing {package}'.format(package=r))
- pip.main(['install', r, '-t', path, '--ignore-installed'])
+ if not packages:
+ print("No dependency packages installed!")
if local_package is not None:
- pip.main(['install', local_package, '-t', path])
+ if not isinstance(local_package, (list, tuple)):
+ local_package = [local_package]
+ for l_package in local_package:
+ packages.append(l_package)
+ _install_packages(path, packages)
-def get_role_name(account_id, role):
+def get_role_name(region, account_id, role):
"""Shortcut to insert the `account_id` and `role` into the iam string."""
- return "arn:aws:iam::{0}:role/{1}".format(account_id, role)
+ prefix = ARN_PREFIXES.get(region, "aws")
+ return "arn:{0}:iam::{1}:role/{2}".format(prefix, account_id, role)
-def get_account_id(aws_access_key_id, aws_secret_access_key):
- """Query IAM for a users' account_id"""
- client = get_client('iam', aws_access_key_id, aws_secret_access_key)
- return client.get_user()['User']['Arn'].split(':')[4]
+def get_account_id(
+ profile_name, aws_access_key_id, aws_secret_access_key, region=None,
+):
+ """Query STS for a users' account_id"""
+ client = get_client(
+ "sts", profile_name, aws_access_key_id, aws_secret_access_key, region,
+ )
+ return client.get_caller_identity().get("Account")
-def get_client(client, aws_access_key_id, aws_secret_access_key, region=None):
+def get_client(
+ client,
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ region=None,
+):
"""Shortcut for getting an initialized instance of the boto3 client."""
- return boto3.client(
- client,
+ boto3.setup_default_session(
+ profile_name=profile_name,
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
- region_name=region
+ region_name=region,
)
+ return boto3.client(client)
-def create_function(cfg, path_to_zip_file):
+def create_function(cfg, path_to_zip_file, use_s3=False, s3_file=None):
"""Register and upload a function to AWS Lambda."""
print("Creating your new Lambda function")
- byte_stream = read(path_to_zip_file)
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
-
- account_id = get_account_id(aws_access_key_id, aws_secret_access_key)
- role = get_role_name(account_id, cfg.get('role', 'lambda_basic_execution'))
-
- client = get_client('lambda', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'))
-
- client.create_function(
- FunctionName=cfg.get('function_name'),
- Runtime=cfg.get('runtime', 'python2.7'),
- Role=role,
- Handler=cfg.get('handler'),
- Code={'ZipFile': byte_stream},
- Description=cfg.get('description'),
- Timeout=cfg.get('timeout', 15),
- MemorySize=cfg.get('memory_size', 512),
- Publish=True
+ byte_stream = read(path_to_zip_file, binary_file=True)
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
+
+ account_id = get_account_id(
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region",),
+ )
+ role = get_role_name(
+ cfg.get("region"),
+ account_id,
+ cfg.get("role", "lambda_basic_execution"),
)
+ client = get_client(
+ "lambda",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
+ )
-def update_function(cfg, path_to_zip_file):
+ # Prefer the environment variable over the config value
+ buck_name = os.environ.get("S3_BUCKET_NAME") or cfg.get("bucket_name")
+ func_name = os.environ.get("LAMBDA_FUNCTION_NAME") or cfg.get(
+ "function_name"
+ )
+ print("Creating lambda function with name: {}".format(func_name))
+
+ if use_s3:
+ kwargs = {
+ "FunctionName": func_name,
+ "Runtime": cfg.get("runtime", "python2.7"),
+ "Role": role,
+ "Handler": cfg.get("handler"),
+ "Code": {
+ "S3Bucket": "{}".format(buck_name),
+ "S3Key": "{}".format(s3_file),
+ },
+ "Description": cfg.get("description", ""),
+ "Timeout": cfg.get("timeout", 15),
+ "MemorySize": cfg.get("memory_size", 512),
+ "VpcConfig": {
+ "SubnetIds": cfg.get("subnet_ids", []),
+ "SecurityGroupIds": cfg.get("security_group_ids", []),
+ },
+ "Publish": True,
+ }
+ else:
+ kwargs = {
+ "FunctionName": func_name,
+ "Runtime": cfg.get("runtime", "python2.7"),
+ "Role": role,
+ "Handler": cfg.get("handler"),
+ "Code": {"ZipFile": byte_stream},
+ "Description": cfg.get("description", ""),
+ "Timeout": cfg.get("timeout", 15),
+ "MemorySize": cfg.get("memory_size", 512),
+ "VpcConfig": {
+ "SubnetIds": cfg.get("subnet_ids", []),
+ "SecurityGroupIds": cfg.get("security_group_ids", []),
+ },
+ "Publish": True,
+ }
+
+ if "tags" in cfg:
+ kwargs.update(
+ Tags={key: str(value) for key, value in cfg.get("tags").items()}
+ )
+
+ if "environment_variables" in cfg:
+ kwargs.update(
+ Environment={
+ "Variables": {
+ key: get_environment_variable_value(value)
+ for key, value in cfg.get("environment_variables").items()
+ },
+ },
+ )
+
+ client.create_function(**kwargs)
+
+ concurrency = get_concurrency(cfg)
+ if concurrency > 0:
+ client.put_function_concurrency(
+ FunctionName=func_name, ReservedConcurrentExecutions=concurrency
+ )
+
+
+def update_function(
+ cfg,
+ path_to_zip_file,
+ existing_cfg,
+ use_s3=False,
+ s3_file=None,
+ preserve_vpc=False,
+):
"""Updates the code of an existing Lambda function"""
print("Updating your Lambda function")
- byte_stream = read(path_to_zip_file)
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
+ byte_stream = read(path_to_zip_file, binary_file=True)
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
+
+ account_id = get_account_id(
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region",),
+ )
+ role = get_role_name(
+ cfg.get("region"),
+ account_id,
+ cfg.get("role", "lambda_basic_execution"),
+ )
+
+ client = get_client(
+ "lambda",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
+ )
+
+ # Prefer the environment variable over the config value
+ buck_name = os.environ.get("S3_BUCKET_NAME") or cfg.get("bucket_name")
+
+ if use_s3:
+ client.update_function_code(
+ FunctionName=cfg.get("function_name"),
+ S3Bucket="{}".format(buck_name),
+ S3Key="{}".format(s3_file),
+ Publish=True,
+ )
+ else:
+ client.update_function_code(
+ FunctionName=cfg.get("function_name"),
+ ZipFile=byte_stream,
+ Publish=True,
+ )
+
+ # Wait for function to be updated
+ waiter = client.get_waiter('function_updated')
+ waiter.wait(FunctionName=cfg.get("function_name"))
+
+ kwargs = {
+ "FunctionName": cfg.get("function_name"),
+ "Role": role,
+ "Runtime": cfg.get("runtime"),
+ "Handler": cfg.get("handler"),
+ "Description": cfg.get("description", ""),
+ "Timeout": cfg.get("timeout", 15),
+ "MemorySize": cfg.get("memory_size", 512),
+ }
+
+ if preserve_vpc:
+ kwargs["VpcConfig"] = existing_cfg.get("Configuration", {}).get(
+ "VpcConfig"
+ )
+ if kwargs["VpcConfig"] is None:
+ kwargs["VpcConfig"] = {
+ "SubnetIds": cfg.get("subnet_ids", []),
+ "SecurityGroupIds": cfg.get("security_group_ids", []),
+ }
+ else:
+ del kwargs["VpcConfig"]["VpcId"]
+ else:
+ kwargs["VpcConfig"] = {
+ "SubnetIds": cfg.get("subnet_ids", []),
+ "SecurityGroupIds": cfg.get("security_group_ids", []),
+ }
+
+ if "environment_variables" in cfg:
+ kwargs.update(
+ Environment={
+ "Variables": {
+ key: str(get_environment_variable_value(value))
+ for key, value in cfg.get("environment_variables").items()
+ },
+ },
+ )
- account_id = get_account_id(aws_access_key_id, aws_secret_access_key)
- role = get_role_name(account_id, cfg.get('role', 'lambda_basic_execution'))
+ ret = client.update_function_configuration(**kwargs)
- client = get_client('lambda', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'))
+ concurrency = get_concurrency(cfg)
+ if concurrency > 0:
+ client.put_function_concurrency(
+ FunctionName=cfg.get("function_name"),
+ ReservedConcurrentExecutions=concurrency,
+ )
+ elif "Concurrency" in existing_cfg:
+ client.delete_function_concurrency(
+ FunctionName=cfg.get("function_name")
+ )
- client.update_function_code(
- FunctionName=cfg.get('function_name'),
- ZipFile=byte_stream,
- Publish=True
+ if "tags" in cfg:
+ tags = {key: str(value) for key, value in cfg.get("tags").items()}
+ if tags != existing_cfg.get("Tags"):
+ if existing_cfg.get("Tags"):
+ client.untag_resource(
+ Resource=ret["FunctionArn"],
+ TagKeys=list(existing_cfg["Tags"].keys()),
+ )
+ client.tag_resource(Resource=ret["FunctionArn"], Tags=tags)
+
+
+def upload_s3(cfg, path_to_zip_file, *use_s3):
+ """Upload a function to AWS S3."""
+
+ print("Uploading your new Lambda function")
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
+ client = get_client(
+ "s3",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
+ )
+ byte_stream = b""
+ with open(path_to_zip_file, mode="rb") as fh:
+ byte_stream = fh.read()
+ s3_key_prefix = cfg.get("s3_key_prefix", "/dist")
+ checksum = hashlib.new("md5", byte_stream).hexdigest()
+ timestamp = str(time.time())
+ filename = "{prefix}{checksum}-{ts}.zip".format(
+ prefix=s3_key_prefix, checksum=checksum, ts=timestamp,
)
- client.update_function_configuration(
- FunctionName=cfg.get('function_name'),
- Role=role,
- Handler=cfg.get('handler'),
- Description=cfg.get('description'),
- Timeout=cfg.get('timeout', 15),
- MemorySize=cfg.get('memory_size', 512)
+ # Prefer the environment variable over the config value
+ buck_name = os.environ.get("S3_BUCKET_NAME") or cfg.get("bucket_name")
+ func_name = os.environ.get("LAMBDA_FUNCTION_NAME") or cfg.get(
+ "function_name"
)
+ kwargs = {
+ "Bucket": "{}".format(buck_name),
+ "Key": "{}".format(filename),
+ "Body": byte_stream,
+ }
+
+ client.put_object(**kwargs)
+ print("Finished uploading {} to S3 bucket {}".format(func_name, buck_name))
+ if use_s3:
+ return filename
+
+
+def get_function_config(cfg):
+ """Return a function's configuration if it exists, else False"""
+
+ function_name = cfg.get("function_name")
+ profile_name = cfg.get("profile")
+ aws_access_key_id = cfg.get("aws_access_key_id")
+ aws_secret_access_key = cfg.get("aws_secret_access_key")
+ client = get_client(
+ "lambda",
+ profile_name,
+ aws_access_key_id,
+ aws_secret_access_key,
+ cfg.get("region"),
+ )
+
+ try:
+ return client.get_function(FunctionName=function_name)
+ except client.exceptions.ResourceNotFoundException as e:
+ if "Function not found" in str(e):
+ return False
+
+def get_concurrency(cfg):
+ """Return the Reserved Concurrent Executions if present in the config"""
+ concurrency = int(cfg.get("concurrency", 0))
+ return max(0, concurrency)
-def function_exists(cfg, function_name):
- """Check whether a function exists or not"""
- aws_access_key_id = cfg.get('aws_access_key_id')
- aws_secret_access_key = cfg.get('aws_secret_access_key')
- client = get_client('lambda', aws_access_key_id, aws_secret_access_key,
- cfg.get('region'))
- functions = client.list_functions().get('Functions', [])
- for fn in functions:
- if fn.get('FunctionName') == function_name:
- return True
- return False
+def read_cfg(path_to_config_file, profile_name):
+ cfg = read(path_to_config_file, loader=yaml.full_load)
+ if profile_name is not None:
+ cfg["profile"] = profile_name
+ elif "AWS_PROFILE" in os.environ:
+ cfg["profile"] = os.environ["AWS_PROFILE"]
+ return cfg
diff --git a/aws_lambda/helpers.py b/aws_lambda/helpers.py
index 78099049..edfd8e9d 100644
--- a/aws_lambda/helpers.py
+++ b/aws_lambda/helpers.py
@@ -1,7 +1,9 @@
# -*- coding: utf-8 -*-
+import datetime as dt
import os
+import re
+import time
import zipfile
-import datetime as dt
def mkdir(path):
@@ -9,8 +11,9 @@ def mkdir(path):
os.makedirs(path)
-def read(path, loader=None):
- with open(path) as fh:
+def read(path, loader=None, binary_file=False):
+ open_mode = "rb" if binary_file else "r"
+ with open(path, mode=open_mode) as fh:
if not loader:
return fh.read()
return loader(fh.read())
@@ -18,7 +21,7 @@ def read(path, loader=None):
def archive(src, dest, filename):
output = os.path.join(dest, filename)
- zfh = zipfile.ZipFile(output, 'w', zipfile.ZIP_DEFLATED)
+ zfh = zipfile.ZipFile(output, "w", zipfile.ZIP_DEFLATED)
for root, _, files in os.walk(src):
for file in files:
@@ -27,6 +30,40 @@ def archive(src, dest, filename):
return os.path.join(dest, filename)
-def timestamp(fmt='%Y-%m-%d-%H%M%S'):
+def timestamp(fmt="%Y-%m-%d-%H%M%S"):
now = dt.datetime.utcnow()
return now.strftime(fmt)
+
+
+def get_environment_variable_value(val):
+ env_val = val
+ if val is not None and isinstance(val, str):
+ match = re.search(r"^\${(?P<environment_key_name>\w+)*}$", val)
+ if match is not None:
+ env_val = os.environ.get(match.group("environment_key_name"))
+ return env_val
+
+
+class LambdaContext:
+ def current_milli_time(self):
+ return int(round(time.time() * 1000))
+
+ def get_remaining_time_in_millis(self):
+ return max(
+ 0,
+ self.timeout_millis
+ - (self.current_milli_time() - self.start_time_millis),
+ )
+
+ def __init__(self, function_name, timeoutSeconds=3):
+ self.function_name = function_name
+ self.function_version = None
+ self.invoked_function_arn = None
+ self.memory_limit_in_mb = None
+ self.aws_request_id = None
+ self.log_group_name = None
+ self.log_stream_name = None
+ self.identity = None
+ self.client_context = None
+ self.timeout_millis = timeoutSeconds * 1000
+ self.start_time_millis = self.current_milli_time()
diff --git a/aws_lambda/project_templates/config.yaml b/aws_lambda/project_templates/config.yaml
index 7f39794d..bc293717 100644
--- a/aws_lambda/project_templates/config.yaml
+++ b/aws_lambda/project_templates/config.yaml
@@ -2,8 +2,14 @@ region: us-east-1
function_name: my_lambda_function
handler: service.handler
-# role: lambda_basic_execution
description: My first lambda function
+runtime: python2.7
+# role: lambda_basic_execution
+
+# S3 upload requires appropriate role with s3:PutObject permission
+# (ex. basic_s3_upload), a destination bucket, and the key prefix
+# bucket_name: 'example-bucket'
+# s3_key_prefix: 'path/to/file/'
# if access key and secret are left blank, boto will use the credentials
# defined in the [default] section of ~/.aws/credentials.
@@ -13,3 +19,21 @@ aws_secret_access_key:
# dist_directory: dist
# timeout: 15
# memory_size: 512
+# concurrency: 500
+#
+
+# Experimental environment variables
+environment_variables:
+ env_1: foo
+ env_2: baz
+
+# If `tags` is uncommented then tags will be set at creation or update
+# time. During an update all other tags will be removed except the tags
+# listed here.
+#tags:
+# tag_1: foo
+# tag_2: bar
+
+# Build options
+build:
+ source_directories: lib # a comma-delimited list of directories in your project root that contain source to package.
diff --git a/aws_lambda/project_templates/service.py b/aws_lambda/project_templates/service.py
index e5bcb681..f04dba34 100644
--- a/aws_lambda/project_templates/service.py
+++ b/aws_lambda/project_templates/service.py
@@ -3,6 +3,6 @@
def handler(event, context):
# Your code goes here!
- e = event.get('e')
- pi = event.get('pi')
+ e = event.get("e")
+ pi = event.get("pi")
return e + pi
diff --git a/requirements.txt b/requirements.txt
deleted file mode 100644
index a57c4966..00000000
--- a/requirements.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-boto3==1.3.1
-botocore==1.4.32
-click==6.6
-docutils==0.12
-futures==3.0.5
-jmespath==0.9.0
-pyaml==15.8.2
-python-dateutil==2.5.3
-PyYAML==3.11
-six==1.10.0
\ No newline at end of file
diff --git a/scripts/lambda b/scripts/lambda
index 5ea9d018..08c5eef8 100755
--- a/scripts/lambda
+++ b/scripts/lambda
@@ -1,13 +1,15 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
+import logging
import os
+
import click
+
import aws_lambda
-import logging
CURRENT_DIR = os.getcwd()
-logging.getLogger('pip').setLevel(logging.CRITICAL)
+logging.getLogger("pip").setLevel(logging.CRITICAL)
@click.group()
@@ -16,38 +18,196 @@ def cli():
@click.command(help="Create a new function for Lambda.")
-def init():
- aws_lambda.init(CURRENT_DIR)
+@click.option(
+ "--minimal",
+ default=False,
+ is_flag=True,
+ help="Exclude any unnecessary template files",
+)
+@click.argument(
+ "folder", nargs=-1, type=click.Path(file_okay=False, writable=True),
+)
+def init(folder, minimal):
+ path = CURRENT_DIR
+ if len(folder) > 0:
+ path = os.path.join(CURRENT_DIR, *folder)
+ if not os.path.exists(path):
+ os.makedirs(path)
+ aws_lambda.init(path, minimal=minimal)
@click.command(help="Bundles package for deployment.")
-@click.option('--local-package', default=None, help='Install local package as well.', type=click.Path())
-def build(local_package):
- aws_lambda.build(CURRENT_DIR, local_package)
+@click.option(
+ "--config-file", default="config.yaml", help="Alternate config file.",
+)
+@click.option(
+ "--profile", help="AWS profile to use.",
+)
+@click.option(
+ "--requirements",
+ default=None,
+ type=click.Path(),
+ help="Install packages from supplied requirements file.",
+)
+@click.option(
+ "--local-package",
+ default=None,
+ type=click.Path(),
+ help="Install local package as well.",
+ multiple=True,
+)
+def build(requirements, local_package, config_file, profile):
+ aws_lambda.build(
+ CURRENT_DIR,
+ requirements=requirements,
+ local_package=local_package,
+ config_file=config_file,
+ profile_name=profile,
+ )
@click.command(help="Run a local test of your function.")
-@click.option('--event-file', default=None, help='Alternate event file.')
-@click.option('--verbose', '-v', is_flag=True)
-def invoke(event_file, verbose):
- aws_lambda.invoke(CURRENT_DIR, event_file, verbose)
+@click.option(
+ "--event-file", default="event.json", help="Alternate event file.",
+)
+@click.option(
+ "--config-file", default="config.yaml", help="Alternate config file.",
+)
+@click.option(
+ "--profile", help="AWS profile to use.",
+)
+@click.option("--verbose", "-v", is_flag=True)
+def invoke(event_file, config_file, profile, verbose):
+ aws_lambda.invoke(
+ CURRENT_DIR,
+ event_file=event_file,
+ config_file=config_file,
+ profile_name=profile,
+ verbose=verbose,
+ )
@click.command(help="Register and deploy your code to lambda.")
-@click.option('--local-package', default=None, help='Install local package as well.', type=click.Path())
-def deploy(local_package):
- aws_lambda.deploy(CURRENT_DIR, local_package)
+@click.option(
+ "--config-file", default="config.yaml", help="Alternate config file.",
+)
+@click.option(
+ "--profile", help="AWS profile to use.",
+)
+@click.option(
+ "--requirements",
+ default=None,
+ type=click.Path(),
+ help="Install all packages defined in supplied requirements file",
+)
+@click.option(
+ "--local-package",
+ default=None,
+ type=click.Path(),
+ help="Install local package as well.",
+ multiple=True,
+)
+@click.option(
+ "--preserve-vpc",
+ default=False,
+ is_flag=True,
+ help="Preserve VPC configuration on existing functions",
+)
+def deploy(requirements, local_package, config_file, profile, preserve_vpc):
+ aws_lambda.deploy(
+ CURRENT_DIR,
+ requirements=requirements,
+ local_package=local_package,
+ config_file=config_file,
+ profile_name=profile,
+ preserve_vpc=preserve_vpc,
+ )
+
+
+@click.command(help="Upload your lambda to S3.")
+@click.option(
+ "--config-file", default="config.yaml", help="Alternate config file.",
+)
+@click.option(
+ "--profile", help="AWS profile to use.",
+)
+@click.option(
+ "--requirements",
+ default=None,
+ type=click.Path(),
+ help="Install all packages defined in supplied requirements file",
+)
+@click.option(
+ "--local-package",
+ default=None,
+ type=click.Path(),
+ help="Install local package as well.",
+ multiple=True,
+)
+def upload(requirements, local_package, config_file, profile):
+ aws_lambda.upload(
+ CURRENT_DIR,
+ requirements=requirements,
+ local_package=local_package,
+ config_file=config_file,
+ profile_name=profile,
+ )
+
+
+@click.command(help="Deploy your lambda via S3.")
+@click.option(
+ "--config-file", default="config.yaml", help="Alternate config file.",
+)
+@click.option(
+ "--profile", help="AWS profile to use.",
+)
+@click.option(
+ "--requirements",
+ default=None,
+ type=click.Path(),
+ help="Install all packages defined in supplied requirements file",
+)
+@click.option(
+ "--local-package",
+ default=None,
+ type=click.Path(),
+ multiple=True,
+ help="Install local package as well.",
+)
+def deploy_s3(requirements, local_package, config_file, profile):
+ aws_lambda.deploy_s3(
+ CURRENT_DIR,
+ requirements=requirements,
+ local_package=local_package,
+ config_file=config_file,
+ profile_name=profile,
+ )
@click.command(help="Delete old versions of your functions")
-@click.option("--keep-last", type=int, prompt="Please enter the number of recent versions to keep")
-def cleanup(keep_last):
- aws_lambda.cleanup_old_versions(CURRENT_DIR, keep_last)
+@click.option(
+ "--config-file", default="config.yaml", help="Alternate config file.",
+)
+@click.option(
+ "--profile", help="AWS profile to use.",
+)
+@click.option(
+ "--keep-last",
+ type=int,
+ prompt="Please enter the number of recent versions to keep",
+)
+def cleanup(keep_last, config_file, profile):
+ aws_lambda.cleanup_old_versions(
+ CURRENT_DIR, keep_last, config_file=config_file, profile_name=profile,
+ )
+
-if __name__ == '__main__':
+if __name__ == "__main__":
cli.add_command(init)
cli.add_command(invoke)
cli.add_command(deploy)
+ cli.add_command(upload)
+ cli.add_command(deploy_s3)
cli.add_command(build)
cli.add_command(cleanup)
cli()
diff --git a/setup.cfg b/setup.cfg
index c37b95be..2d16abea 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,14 +1,20 @@
[bumpversion]
-current_version = 0.4.0
commit = True
tag = True
+current_version = 11.8.0
+parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(\-(?P<release>[a-z]+))?
+serialize =
+ {major}.{minor}.{patch}
+
+[metadata]
+description-file = README.md
[bumpversion:file:setup.py]
[bumpversion:file:aws_lambda/__init__.py]
-[wheel]
-universal = 1
+[coverage:run]
+source = aws_lambda
[flake8]
exclude = docs
diff --git a/setup.py b/setup.py
old mode 100755
new mode 100644
index 6d47417b..bce3297e
--- a/setup.py
+++ b/setup.py
@@ -1,56 +1,89 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-import pip
+"""This module contains setup instructions for python-lambda."""
+import codecs
+import os
+import sys
+from shutil import rmtree
-from setuptools import setup, find_packages
+from setuptools import Command
+from setuptools import find_packages
+from setuptools import setup
-with open('README.rst') as readme_file:
- readme = readme_file.read()
+REQUIREMENTS = [
+ "boto3>=1.4.4",
+ "click>=6.6",
+ "PyYAML==5.1",
+]
+PACKAGE_DATA = {
+ "aws_lambda": ["project_templates/*"],
+ "": ["*.json"],
+}
+THIS_DIR = os.path.abspath(os.path.dirname(__file__))
+README = os.path.join(THIS_DIR, "README.md")
-with open('HISTORY.rst') as history_file:
- history = history_file.read()
+with codecs.open(README, encoding="utf-8") as fh:
+ long_description = "\n" + fh.read()
-requirements = pip.req.parse_requirements(
- "requirements.txt", session=pip.download.PipSession()
-)
-pip_requirements = [str(r.req) for r in requirements]
-test_requirements = [
- # TODO: put package test requirements here
-]
+class UploadCommand(Command):
+ """Support setup.py publish."""
+
+ description = "Build and publish the package."
+ user_options = []
+
+ @staticmethod
+ def status(s):
+ """Print in bold."""
+ print(f"\033[1m{s}\033[0m")
+
+ def initialize_options(self):
+ """Initialize options."""
+ pass
+
+ def finalize_options(self):
+ """Finalize options."""
+ pass
+
+ def run(self):
+ """Upload release to PyPI."""
+ try:
+ self.status("Removing previous builds ...")
+ rmtree(os.path.join(THIS_DIR, "dist"))
+ except Exception:
+ pass
+ self.status("Building Source distribution ...")
+ os.system(f"{sys.executable} setup.py sdist")
+ self.status("Uploading the package to PyPI via Twine ...")
+ os.system("twine upload dist/*")
+ sys.exit()
+
setup(
- name='python-lambda',
- version='0.4.0',
- description="The bare minimum for a Python app running on Amazon Lambda.",
- long_description=readme + '\n\n' + history,
+ name="python-lambda",
+ version="11.8.0",
author="Nick Ficano",
- author_email='nficano@gmail.com',
- url='https://github.com/nficano/python-lambda',
+ author_email="nficano@gmail.com",
packages=find_packages(),
- package_data={
- 'aws_lambda': ['project_templates/*'],
- '': ['*.json'],
- },
- include_package_data=True,
- scripts=['scripts/lambda'],
- install_requires=pip_requirements,
+ url="https://github.com/nficano/python-lambda",
license="ISCL",
- zip_safe=False,
- keywords='python-lambda',
+ install_requires=REQUIREMENTS,
+ package_data=PACKAGE_DATA,
+ test_suite="tests",
+ tests_require=[],
classifiers=[
- 'Development Status :: 2 - Pre-Alpha',
- 'Intended Audience :: Developers',
- 'License :: OSI Approved :: ISC License (ISCL)',
- 'Natural Language :: English',
- "Programming Language :: Python :: 2",
- 'Programming Language :: Python :: 2.6',
- 'Programming Language :: Python :: 2.7',
- 'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.3',
- 'Programming Language :: Python :: 3.4',
- 'Programming Language :: Python :: 3.5',
+ "Development Status :: 2 - Pre-Alpha",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: ISC License (ISCL)",
+ "Natural Language :: English",
+ "Programming Language :: Python :: 3.5",
+ "Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
],
- test_suite='tests',
- tests_require=test_requirements
+ description="The bare minimum for a Python app running on Amazon Lambda.",
+ include_package_data=True,
+ long_description_content_type="text/markdown",
+ long_description=long_description,
+ zip_safe=True,
+ cmdclass={"upload": UploadCommand},
+ scripts=["scripts/lambda"],
)
diff --git a/tests/__init__.py b/tests/__init__.py
old mode 100755
new mode 100644
index 40a96afc..e69de29b
--- a/tests/__init__.py
+++ b/tests/__init__.py
@@ -1 +0,0 @@
-# -*- coding: utf-8 -*-
diff --git a/tests/dev_requirements.txt b/tests/dev_requirements.txt
new file mode 100644
index 00000000..0886536b
--- /dev/null
+++ b/tests/dev_requirements.txt
@@ -0,0 +1,5 @@
+bumpversion==0.5.3
+pre-commit==2.6.0
+pytest>=3.6
+pytest-cov
+flake8
diff --git a/HISTORY.rst b/tests/functional/__init__.py
similarity index 100%
rename from HISTORY.rst
rename to tests/functional/__init__.py
diff --git a/tests/unit/__init__.py b/tests/unit/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/tests/unit/test_LambdaContext.py b/tests/unit/test_LambdaContext.py
new file mode 100644
index 00000000..16c66303
--- /dev/null
+++ b/tests/unit/test_LambdaContext.py
@@ -0,0 +1,15 @@
+import time
+import unittest
+
+from aws_lambda.helpers import LambdaContext
+
+
+class TestLambdaContext(unittest.TestCase):
+ def test_get_remaining_time_in_millis(self):
+ context = LambdaContext("function_name", 2000)
+ time.sleep(0.5)
+ self.assertTrue(context.get_remaining_time_in_millis() < 2000000)
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/tests/unit/test_readHelper.py b/tests/unit/test_readHelper.py
new file mode 100644
index 00000000..33c27529
--- /dev/null
+++ b/tests/unit/test_readHelper.py
@@ -0,0 +1,36 @@
+import os
+import unittest
+
+import yaml
+
+from aws_lambda.helpers import read
+
+
+class TestReadHelper(unittest.TestCase):
+
+ TEST_FILE = "readTmp.txt"
+
+ def setUp(self):
+ with open(TestReadHelper.TEST_FILE, "w") as tmp_file:
+ tmp_file.write("testYaml: testing")
+
+ def tearDown(self):
+ os.remove(TestReadHelper.TEST_FILE)
+
+ def test_read_no_loader_non_binary(self):
+ fileContents = read(TestReadHelper.TEST_FILE)
+ self.assertEqual(fileContents, "testYaml: testing")
+
+ def test_read_yaml_loader_non_binary(self):
+ testYaml = read(TestReadHelper.TEST_FILE, loader=yaml.full_load)
+ self.assertEqual(testYaml["testYaml"], "testing")
+
+ def test_read_no_loader_binary_mode(self):
+ fileContents = read(TestReadHelper.TEST_FILE, binary_file=True)
+ self.assertEqual(fileContents, b"testYaml: testing")
+
+ def test_read_yaml_loader_binary_mode(self):
+ testYaml = read(
+ TestReadHelper.TEST_FILE, loader=yaml.full_load, binary_file=True
+ )
+ self.assertEqual(testYaml["testYaml"], "testing")
diff --git a/tox.ini b/tox.ini
deleted file mode 100644
index 951b70c4..00000000
--- a/tox.ini
+++ /dev/null
@@ -1,12 +0,0 @@
-[tox]
-envlist = py26, py27, py33, py34, py35
-
-[testenv]
-setenv =
- PYTHONPATH = {toxinidir}:{toxinidir}/python-lambda
-commands = python setup.py test
-
-; If you want to make tox run the tests with the same versions, create a
-; requirements.txt with the pinned versions and uncomment the following lines:
-; deps =
-; -r{toxinidir}/requirements.txt
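The `environment_variables` block added to `config.yaml` above supports `${VAR}` placeholders, which `get_environment_variable_value` in `aws_lambda/helpers.py` resolves against the caller's environment at deploy time. Below is a minimal standalone sketch of that interpolation pattern; the function name `resolve_env_placeholder` is illustrative, not part of the library's API.

```python
import os
import re


def resolve_env_placeholder(val):
    """Resolve a ``${VAR_NAME}`` placeholder against os.environ.

    Values that are not strings, or that are not exactly a single
    ``${...}`` placeholder, pass through unchanged. An unset variable
    resolves to None, mirroring os.environ.get.
    """
    if isinstance(val, str):
        match = re.search(r"^\${(?P<environment_key_name>\w+)}$", val)
        if match is not None:
            return os.environ.get(match.group("environment_key_name"))
    return val


os.environ["DEMO_KEY"] = "demo-value"
print(resolve_env_placeholder("${DEMO_KEY}"))   # resolved from the environment
print(resolve_env_placeholder("plain string"))  # passed through unchanged
```

Because the pattern is anchored with `^` and `$`, only values that consist entirely of one placeholder are substituted; embedded placeholders such as `prefix-${VAR}` are left as literal strings.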