
AWS Notes – Important Pointers

What is cloud computing?


Cloud computing is the on-demand delivery of IT resources over the Internet with
pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data
centers and servers, you can access technology services, such as computing power,
storage, and databases, on an as-needed basis from a cloud provider like Amazon
Web Services (AWS).

Who is using cloud computing?


Organizations of every type, size, and industry are using the cloud for a wide variety of use
cases, such as data backup, disaster recovery, email, virtual desktops, software development
and testing, big data analytics, and customer-facing web applications. For example, healthcare
companies are using the cloud to develop more personalized treatments for patients. Financial
services companies are using the cloud to power real-time fraud detection and prevention. And
video game makers are using the cloud to deliver online games to millions of players around the
world.

Benefits of cloud computing

Agility
The cloud gives you easy access to a broad range of technologies so that you can innovate
faster and build nearly anything that you can imagine. You can quickly spin up resources as you
need them–from infrastructure services, such as compute, storage, and databases, to Internet of
Things, machine learning, data lakes and analytics, and much more.

You can deploy technology services in a matter of minutes, and get from idea to implementation
several orders of magnitude faster than before. This gives you the freedom to experiment, test
new ideas to differentiate customer experiences, and transform your business.

Elasticity
With cloud computing, you don’t have to over-provision resources up front to handle peak levels
of business activity in the future. Instead, you provision the amount of resources that you actually
need. You can scale these resources up or down instantly to grow and shrink capacity as your
business needs change.

Cost savings
The cloud allows you to trade capital expenses (such as data centers and physical servers) for
variable expenses, and only pay for IT as you consume it. Plus, the variable expenses are much
lower than what you would pay to do it yourself because of the economies of scale.

Deploy globally in minutes


With the cloud, you can expand to new geographic regions and deploy globally in minutes. For
example, AWS has infrastructure all over the world, so you can deploy your application in
multiple physical locations with just a few clicks. Putting applications in closer proximity to end
users reduces latency and improves their experience.

Types of cloud computing

The three main types of cloud computing include Infrastructure as a Service, Platform as a
Service, and Software as a Service. Each type of cloud computing provides different levels of
control, flexibility, and management so that you can select the right set of services for your
needs.

Infrastructure as a Service (IaaS)


IaaS contains the basic building blocks for cloud IT. It typically provides access to networking
features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives you
the highest level of flexibility and management control over your IT resources. It is most similar to
the existing IT resources with which many IT departments and developers are familiar.

Platform as a Service (PaaS)


PaaS removes the need for you to manage underlying infrastructure (usually hardware and
operating systems), and allows you to focus on the deployment and management of your
applications. This helps you be more efficient as you don’t need to worry about resource
procurement, capacity planning, software maintenance, patching, or any of the other
undifferentiated heavy lifting involved in running your application.

Software as a Service (SaaS)


SaaS provides you with a complete product that is run and managed by the service provider. In
most cases, people referring to SaaS are referring to end-user applications (such as web-based
email). With a SaaS offering, you don’t have to think about how the service is maintained or how
the underlying infrastructure is managed. You only need to think about how you will use that
particular software.

Availability Zones

AZs give customers the ability to operate production applications and databases that are more highly
available, fault tolerant, and scalable than would be possible from a single data center. AWS
maintains 69 AZs around the world and continues to add more at a fast pace. Each AZ can comprise
multiple data centers (typically three) and, at full scale, hundreds of thousands of servers. AZs are
fully isolated partitions of the AWS Global Infrastructure with their own power infrastructure, and
they are physically separated from one another by a meaningful distance, many kilometers, although
all are within 100 km (60 miles) of each other.

All AZs are interconnected with high-bandwidth, low-latency networking, over fully redundant,
dedicated metro fiber providing high-throughput, low-latency networking between AZs. The
network performance is sufficient to accomplish synchronous replication between AZs. AWS
Availability Zones are also powerful tools for helping build highly available applications. AZs make
partitioning applications about as easy as it can be. If an application is partitioned across AZs,
companies are better isolated and protected from issues such as lightning strikes, tornadoes,
earthquakes and more.

Choices of Compute

Compute is at the core of nearly every AWS customer’s infrastructure, whether in the form of
instances, containers, or serverless compute. We are delivering choice in how you consume compute
to support existing applications and build new applications in the way that suits your business and
application needs. And within each of these areas, we are rapidly adding completely new
capabilities.

Amazon EC2

Instances are the most mature area of our compute platform, with deep investment and long-running
proven experience. It is also where customers have the greatest need for choice to support their
current and future applications. For instances, we offer choice across a number of dimensions. You
have your choice of operating systems with Linux and Windows, as well as choice of architectures
with support for x86 and Arm workloads. For those workloads, we have instances which are general
purpose as well as optimized for specific needs, such as compute-optimized for HPC workloads or
memory-optimized for big data and analytics. Over the last year, we have introduced new capabilities
to enhance our instances with bare metal, attached SSD and, most recently, enhanced networking.
These instances are packaged for you in many ways – you can choose one of our AMIs, you can
customize your own images, or you can select from additional varieties of AMIs provided by our
community. And those instances are available through flexible purchase models to meet your
business and budget needs.

EC2 Terminology

Walk through the terminology: what an AMI is, launching an instance into a specific network
environment in a specific AZ/Region, that there are multiple Regions, that block storage lives in an
AZ, and that S3 is regional and holds snapshots.

What is a virtual CPU?

CPU Optimize:

In most cases, there is an Amazon EC2 instance type that has a combination of memory and number
of vCPUs to suit your workloads. However, you can specify the following CPU options to optimize
your instance for specific workloads or business needs:

Number of CPU cores: You can customize the number of CPU cores for the instance. You might do
this to potentially optimize the licensing costs of your software with an instance that has sufficient
amounts of RAM for memory-intensive workloads but fewer CPU cores.

Threads per core: You can disable multithreading by specifying a single thread per CPU core. You
might do this for certain workloads, such as high performance computing (HPC) workloads.
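
For illustration, here is a minimal boto3 sketch of launching an instance with custom CPU options; the AMI ID, instance type, and counts below are placeholders rather than values from these notes.

```python
import boto3  # assumes AWS credentials and a default region are configured

ec2 = boto3.client("ec2")

# Hypothetical launch: keep the RAM of a large instance type but cap the number of
# CPU cores (e.g. for per-core licensing) and disable multithreading for HPC-style work.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="r5.4xlarge",         # placeholder memory-optimized type
    MinCount=1,
    MaxCount=1,
    CpuOptions={
        "CoreCount": 4,        # fewer cores than the default for this instance type
        "ThreadsPerCore": 1,   # one thread per core, i.e. multithreading disabled
    },
)
print(response["Instances"][0]["InstanceId"])
```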

Choose your processor and architecture.



Beyond the operating system, we are providing you the choice of processor and architecture to build
the applications you need with the flexibility in choice that you want. We believe that by providing
greater choice, customers can choose the right compute to power their application and workload.

AWS has had a rich, long-term partnership with Intel, and the Cascade Lake processor is key to
powering some of our most powerful instances (c5.metal, c5.24xl, c5.12xl).

NVIDIA helps to power your machine learning and graphics workloads.

In early November, we announced our support for AMD and the AMD EPYC processor, and we are
the only cloud with AMD available today. Lastly, we announced that AWS has released a new
processor, the Graviton processor, based on the Arm architecture.

Now we are the only major cloud provider to support Arm workloads. Customers have told us
processor choice matters to them, and we are already seeing customers testing their apps with these
new instances and processors.

Choice of Accelerators for specialized workloads

Just as with operating systems and processors, we are helping to reduce costs for graphics and
machine learning workloads. With Elastic Graphics and the new Elastic Inference, we are
enabling you to cost-effectively add acceleration to your workload.

Elastic Graphics enables you to add the right amount of graphics acceleration for a fraction of the
cost of using standalone graphics instances. Similarly, with Elastic Inference, you can reduce deep
learning inference costs up to 75%. You can attach a fractional-size GPU to EC2 or SageMaker
instances and scale up and down as needed. You can also use EC2 Auto Scaling to scale inference
acceleration up and down per your needs.

Purchasing options at a glance.

Pricing works through per-second billing, Reserved Instances (RIs), and Spot. The (not so) new Spot
pricing model offers predictable prices based on long-term supply and demand – no more bidding.
The max price is set to the On-Demand price by default and you always pay the market price;
customers can set a lower max price if under budgetary constraints. It is worth whiteboarding how
these offerings can be bought in a hybrid model: some Spot, some On-Demand, and some RIs.
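
As a hedged sketch of the Spot side of that mix, an On-Demand-style launch can be turned into a Spot request with boto3's InstanceMarketOptions; the AMI and the optional MaxPrice below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Request a Spot instance. If MaxPrice is omitted, the maximum defaults to the
# On-Demand price and you simply pay the current Spot market price.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "MaxPrice": "0.05",  # optional budget cap in USD/hour (placeholder)
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```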

Hibernate Amazon EC2 Instances

Hibernation lets you pause and resume your work by retaining memory across stop-start cycles.
Applications relying on memory contents can pick up exactly where they left off instead of building
the memory footprint all over again.

To date, the ability to Stop and Start instances has helped customers lower costs as they pay only for
what they consume, without losing the state of the instance. This functionality is exposed via the
familiar Start and Stop APIs.

Hibernation helps save on compute costs, since during hibernation you pay only for storage, and it
also minimizes warm-up times.

Hibernation is a significant next step in our effort to help customers optimize their scaling strategies.
They can now respond quickly to demand surges without compromising on costs.
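
A minimal boto3 sketch of the hibernate flow, assuming an instance family and AMI that support hibernation (which also requires an encrypted root EBS volume, not shown here); all identifiers are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch time.
launch = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI that supports hibernation
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)
instance_id = launch["Instances"][0]["InstanceId"]

# Later, once running: hibernate instead of a plain stop. RAM contents are saved to
# the root EBS volume, and you pay only for storage while the instance is hibernated.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)

# Resuming uses the normal start API; applications pick up where they left off.
ec2.start_instances(InstanceIds=[instance_id])
```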

EC2 Options

As we take a step back, let’s bring this all together as the choice we offer with our instances. Part of
customer choice is delivering on your specific workload needs. We have done that by innovating on
our general purpose and burstable instances as well as bringing specialized instances to market, like
z1d for design automation. We have invested in new capabilities over the last year, such as faster
processors working with Intel, introducing new instances for accelerated computing, and now
enhanced networking to remove network bottlenecks for high performance workloads. As we just
discussed, you can also add options such as Elastic Graphics or Elastic Inference, and of course Elastic
Block Store to provide greater performance and storage flexibility alongside instance storage.

As all this comes together, we have 175 instance types today, more than the next major cloud
provider and nearly triple the number of instances launched this year relative to last year. We plan
to continue to bring new instances to market, such as more bare metal instances, and will have
instances to support virtually every workload and business need.

Tiered EC2 Security Groups

Security Groups are the customer’s way to control traffic flow and create tiered network
architectures in their environment. The diagrams on the left and right represent the same thing –
they are just different ways of visualizing the rules in the middle, and each representation resonates
differently with different customers. These rules control the flow of traffic from the web tier through
the DB layer by having security groups reference other security groups.

Note that none of the server groups (e.g. Web, App, DB) can talk with each other just because they
are in the same Security Group; a specific, self-referencing rule would need to be created to allow
this traffic. Also note that these rules are dynamically updated as instances are added to or removed
from each server farm.
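
To make the group-referencing idea concrete, here is a hedged boto3 sketch of a two-tier rule set (an app tier allowed into a DB tier on port 3306); the VPC ID and group names are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC

app_sg = ec2.create_security_group(
    GroupName="app-tier", Description="App servers", VpcId=vpc_id)["GroupId"]
db_sg = ec2.create_security_group(
    GroupName="db-tier", Description="Databases", VpcId=vpc_id)["GroupId"]

# Allow MySQL traffic into the DB tier only from members of the app tier group.
# Because the rule references a group (not IP ranges), it follows instances as they
# are added to or removed from the app tier.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": app_sg}],
    }],
)
```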

Instance MetaData

An instance can query the instance metadata service for information about itself. For example, the
instance may need to know its instance ID, or which AZ it is in, so that it can apply AZ-specific
configuration details.
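
A minimal sketch of querying the instance metadata service from inside an instance (the 169.254.169.254 endpoint is only reachable on the instance itself):

```python
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/"  # instance metadata service

def metadata(path: str) -> str:
    """Fetch one metadata attribute; only works when run on an EC2 instance."""
    with urllib.request.urlopen(BASE + path, timeout=2) as resp:
        return resp.read().decode()

instance_id = metadata("instance-id")
az = metadata("placement/availability-zone")
print(f"{instance_id} is running in {az}")

# Note: if the instance enforces IMDSv2, a session token must first be obtained with a
# PUT to http://169.254.169.254/latest/api/token and sent as a header (not shown here).
```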

VPC

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated
section of the AWS Cloud where you can launch AWS resources in a virtual network
that you define. You have complete control over your virtual networking environment,
including selection of your own IP address range, creation of subnets, and
configuration of route tables and network gateways. You can use both IPv4 and IPv6
in your VPC for secure and easy access to resources and applications.

You can easily customize the network configuration of your Amazon VPC. For
example, you can create a public-facing subnet for your web servers that have
access to the internet. You can also place your backend systems, such as

databases or application servers, in a private-facing subnet with no internet access.


You can use multiple layers of security, including security groups and network
access control lists, to help control access to Amazon EC2 instances in each subnet.

Secure

Amazon VPC provides advanced security features, such as security groups and
network access control lists, to enable inbound and outbound filtering at the instance
and subnet level. In addition, you can store data in Amazon S3 and restrict access
so that it’s only accessible from instances inside your VPC. For additional security,
you can create dedicated instances that are physically isolated from other AWS
accounts, at the hardware level.

Simple

Create a VPC quickly and easily using the AWS Management Console. Select from
common network setups and find the best match for your needs. Subnets, IP ranges,
route tables, and security groups are automatically created. You spend less time
setting up and managing, so you can concentrate on building the applications that
run in your VPCs.

Customizable

Control your virtual networking environment, including selection of your own IP
address range, creation of subnets, and configuration of route tables and network
gateways. Customize the network configuration, such as by creating a public-facing
subnet for your webservers that has access to the internet, and placing your
backend systems such as databases or application servers in a private-facing subnet
with no internet access.
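
As a rough boto3 sketch of that customization (a VPC with one public subnet routed to an internet gateway, plus a private subnet with no internet route); the CIDR ranges and AZ are placeholder choices.

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Public subnet for web servers (placeholder AZ).
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# An internet gateway plus a default route makes the subnet "public".
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet)

# A private subnet for backend systems simply gets no route to the internet gateway.
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]
```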

Use cases

Host a simple, public-facing website

Host a basic web application, such as a blog or simple website in a VPC, and gain the additional
layers of privacy and security afforded by Amazon VPC. You can help secure the website by
creating security group rules which allow the webserver to respond to inbound HTTP and SSL
requests from the Internet while simultaneously prohibiting the webserver from initiating
outbound connections to the Internet. You can create a VPC that supports this use case by
selecting "VPC with a Single Public Subnet Only" from the Amazon VPC console wizard.

Host multi-tier web applications

Host multi-tier web applications and strictly enforce access and security restrictions
between your web servers, application servers, and databases. Launch web servers
in a publicly accessible subnet while running your application servers and databases
in private subnets, so that application servers and databases cannot be directly

accessed from the internet. You control access between the servers and subnets
using inbound and outbound packet filtering provided by network access control lists
and security groups. To create a VPC that supports this use case, you can select
"VPC with Public and Private Subnets" in the Amazon VPC console wizard.

Disaster recovery

By using Amazon VPC for disaster recovery, you can have all the benefits of a
disaster recovery site at a fraction of the cost. You can periodically backup critical
data from your datacenter to a small number of Amazon EC2 instances with Amazon
Elastic Block Store (EBS) volumes, or import your virtual machine images to Amazon
EC2. To ensure business continuity, you can quickly launch replacement compute
capacity in AWS. When the disaster is over, you can send your mission critical data
back to your datacenter and terminate the Amazon EC2 instances that you no longer
need.

Extend your corporate network into the cloud

Move corporate applications to the cloud, launch additional web servers, or add
more compute capacity to your network by connecting your VPC to your corporate
network. Because your VPC can be hosted behind your corporate firewall, you can
seamlessly move your IT resources into the cloud without changing how your users
access these applications. You can select "VPC with a Private Subnet Only and
Hardware VPN Access" from the Amazon VPC console wizard to create a VPC that
supports this use case.

Storage
AWS has a variety of storage options.

Each storage option has a unique combination of performance, durability, cost, and interface.

AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer
large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges
with large-scale data transfers, including high network costs, long transfer times, and security
concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the
cost of high-speed Internet.

AWS Snowmobile is new: a secure, exabyte-scale data transfer service used to transfer large
amounts of data into and out of AWS. Each Snowmobile can transfer up to 100PB. When you
order a Snowmobile, it comes to your site and AWS personnel connect a removable, high-speed
network switch from the Snowmobile to your local network. This makes Snowmobile appear as a
network-attached data store. Once it is connected, secure, high-speed data transfer begins. After
your data is transferred to the Snowmobile, it is driven back to AWS, where the data is loaded into
the AWS service you select, including S3, Glacier, Redshift and others. It allows customers with large
amounts of data to migrate to AWS much faster and easier.

AWS EBS Features



Amazon Web Services gives you reliable, durable backup storage without the up-front capital
expenditures and complex capacity-planning burden of on-premises storage. Amazon storage
services remove the need for complex and time-consuming capacity planning, ongoing negotiations
with multiple hardware and software vendors, specialized training, and maintenance of offsite
facilities or transportation of storage media to third-party offsite locations.

EBS has a 99.99% SLA.

Amazon EBS

• Network device: data lifecycle is independent from the EC2 instance lifecycle

• Each volume is like a hard drive on a physical server

• Attach multiple volumes to an EC2 instance, but only one EC2 instance per volume

• POSIX-compliant file systems: a virtual disk ideal for OS boot devices and file systems

• Raw block devices: ideal for databases (e.g. Oracle Automatic Storage Management) and other raw block device uses
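
A minimal boto3 sketch of the volume lifecycle described above (create in an AZ, attach to one instance); the instance ID and AZ are placeholders, and the volume must live in the same AZ as the instance.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder instance in us-east-1a

# EBS volumes are created in a specific Availability Zone.
volume_id = ec2.create_volume(
    AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2"
)["VolumeId"]

# Wait until the volume is available, then attach it to exactly one instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(VolumeId=volume_id, InstanceId=instance_id, Device="/dev/sdf")

# The volume's data lifecycle is independent of the instance: it can be detached
# later and re-attached to another instance in the same AZ.
```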

Amazon S3.

A chief information security officer (CISO) is the senior-level executive within an organization
responsible for establishing and maintaining the enterprise vision, strategy, and program to ensure
information assets and technologies are adequately protected.

On-demand analytics: Redshift Spectrum, QuickSight, Athena

Amazon Glacier

Query in place:

One of the most important capabilities of a data lake that is built on AWS is the ability to do in-place
transformation and querying of data assets without having to provision and manage clusters.

AWS Glue, as described in the previous sections, provides the data discovery and ETL capabilities,
and Amazon Athena and Amazon Redshift Spectrum provide the in-place querying capabilities.

Glacier Select

Amazon Glacier Select allows queries to run directly on data stored in Amazon Glacier without
having to retrieve the entire archive. Amazon Glacier Select changes the value of archive storage by
allowing you to process and find only the bytes you need out of the archive to use for analytics.

Storage tiered to your requirements

Amazon Glacier provides three ways to retrieve your archives to meet varying access time and cost
requirements: Expedited, Standard, and Bulk retrievals. Archives requested using Expedited
retrievals are typically available within 1 – 5 minutes, allowing you to quickly access your data when
occasional urgent requests for a subset of archives are required. With Standard retrievals, archives
typically become accessible within 3 – 5 hours. Or you can use Bulk retrievals to cost-effectively
access significant portions of your data, even petabytes, for just a quarter-of-a-cent per GB.
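
For illustration, a retrieval tier is chosen per job when an archive retrieval is initiated; here is a hedged boto3 sketch in which the vault name and archive ID are placeholders.

```python
import boto3

glacier = boto3.client("glacier")

# Start an Expedited retrieval job for one archive ("-" means the current account).
job = glacier.initiate_job(
    accountId="-",
    vaultName="my-vault",                      # placeholder vault
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",     # placeholder archive ID
        "Tier": "Expedited",                   # or "Standard" / "Bulk"
    },
)
print(job["jobId"])

# Once the job completes (typically 1-5 minutes for Expedited), the archive bytes are
# fetched with glacier.get_job_output(accountId="-", vaultName=..., jobId=...).
```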

Storage Gateway Hybrid Storage Solutions.

The AWS SGW is typically deployed in your existing storage environment as a VM.

You connect your existing applications, storage systems, or devices to the SGW. The SGW provides
standard storage protocol interfaces so apps can connect to it without changes.

The gateway in turn connects to AWS so you can store data securely and durably in Amazon S3,
Glacier.

The gateway optimizes data transfer from on-premises to AWS. It also provides low-latency access
through a local cache so your apps can access frequently used data locally. The service is also
integrated with CloudWatch, CloudTrail, IAM, etc., so you get an extension of AWS management
services locally.

Enable cloud storage on-premises as part of your AWS platform:

• Native access: industry standard protocols for file, block, and tape

• Secure and durable storage in Amazon S3 and Glacier

• Optimized data transfer from on-premises to AWS

• Low-latency access to frequently used data

• Integrated with AWS security and management services

Amazon Snowball and Snowball Edge

The original Snowball has 50 TB and 80 TB capacities. AWS Snowball Edge, like the original Snowball,
is a terabyte-scale data transfer solution, but it transports more data, up to 100 TB per device, and
retains the same embedded cryptography and security as the original Snowball. However, Snowball
Edge hosts a file server and an S3-compatible endpoint that allow you to use the NFS protocol, S3
SDK or S3 CLI to transfer data directly to the device without specialized client software. Multiple
units may be clustered together, forming a temporary data collection storage tier in your datacenter
so you can work as data is generated without managing copies. As storage needs scale up and down,
devices can be easily added or removed from the local cluster and returned to AWS.

What is Snowball?

Snowball is a new AWS Import/Export offering that provides a terabyte-scale data transfer service
that uses Amazon-provided storage devices for transport. Previously, customers purchased their own
portable storage devices and used these devices to ship their data. With the launch of Snowball,
customers are now able to use highly secure, rugged, Amazon-owned Network Attached Storage
(NAS) devices, called Snowballs, to ship their data. Once received and set up, customers are able to
copy up to 50TB of data from their on-prem file system to the Snowball via the Snowball client
software over a 10Gbps network interface. Prior to transfer to the Snowball, all data is encrypted
with 256-bit encryption by the client. When customers finish transferring data to the device, they
simply ship it back to an AWS facility where the data is ingested at high speed into Amazon S3.

How fast is Snowball?

Compare and contrast the Internet vs. 5x Snowball.

Note that with the 80TB Snowball or 100TB Snowball Edge, fewer devices are needed.

Snowball Edge supports up to 40Gb connections (QSFP+).

Also note you can use Snowball Edge to run edge computing workloads, such as performing local
analysis of data on a Snowball Edge cluster and writing it to the S3-compatible endpoint.

Snowmobile

AWS Snowmobile is a secure, exabyte-scale data transfer service used to transfer large amounts of
data into and out of AWS. Each Snowmobile can transfer up to 100PB. When you order a
Snowmobile it comes to your site and AWS personnel connect a removable, high-speed network
switch from Snowmobile to your local network. This makes Snowmobile appear as a network
attached data store. Once it is connected, secure, high-speed data transfer begins. After your data is
transferred to Snowmobile, it is driven back to AWS where the data is loaded into the AWS service
you select, including S3, Glacier, Redshift and others.

Databases on AWS
Amazon RDS

Customers that are running commercial databases such as Oracle and SQL Server on premises often
choose to first migrate to Amazon RDS, a fully managed relational database service that you can use
to run your choice of database engines including open source engines as well as Oracle, and SQL
Server. Amazon RDS improves database scale and performance and automates time-consuming
administration tasks such as hardware provisioning, database setup, patching, and backups.

Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud that
combines the performance and availability of high-end commercial databases with the simplicity and
cost-effectiveness of open source databases. Aurora has 5x the performance of standard MySQL and
3x the performance of standard PostgreSQL, with the security, availability, and reliability of
commercial-grade databases, at 1/10th the cost.

Scale-out, distributed, multi-tenant architecture

Digging into the first point: while Aurora is compatible with MySQL and PostgreSQL, we’ve made a
number of changes under the covers to deliver superior performance and availability.

We’ve separated the compute and storage layers. The compute layer, which I’ll also call the head
node going forward, includes the query processing, transaction and caching layers of a traditional
database and is marked in blue here.

The storage layer is in green. The storage layer is a purpose built log-structured distributed storage
system. It is multi-tenant and spans hundreds of nodes distributed over 3 different Availability Zones
(AZ); you can think of an AZ as a fault-isolated data center. A given Aurora database is split into 10GB
chunks. Each 10GB chunk is copied 6 times, with 2 copies in each AZ.

This sounds neat, but what does it buy us? For starters, you can lose an entire data center as well as
one more copy and still be able to recover your database. And as I’ll describe through this session,
there are numerous innovations we can make to improve performance, availability, and manageability
due to this architecture.

Finally, you can add up to 15 read replicas that all sit on top of the same storage. As I’ll describe
shortly, this is one of the features that customers most love about Aurora, because replicas run with
only 20-30 milliseconds of lag, and they are very easy to scale.

Aurora MySQL performance

In our Sysbench benchmarks, we have seen a 5x improvement in throughput for Aurora over RDS
MySQL 5.6 and 5.7. Your results will vary depending on your workload. Some customers, such as
Alfresco, a content management systems company, have reported a 10x increase in throughput
with Aurora.

Everything you get from Amazon RDS…

And while this may seem unglamorous, one of the biggest advantages of moving to a managed
database like Aurora is that you get to focus on the stuff that drives your business rather than making
sure the machines are doing their job correctly. When you’re running your database on premises,
you have to do everything by yourself – all the stuff in red. As you move to Amazon Aurora on
the right, most of that, all the boxes in green, is managed for you. And you only have to focus
on optimizing your app.

And more

With Aurora, you get more.

We autoscale storage in 10GB increments, so you don’t have to worry about pre-provisioning
storage. In addition to saving you time and effort, it also avoids downtime for scaling storage, which
can be significant with MySQL.

I already mentioned we do continuous incremental backups to S3 and all you have to do is specify
how far back you want to go.

Creating snapshots is instantaneous and does not affect performance. You can store snapshots
indefinitely.

And finally, our storage layer automatically handles things like hardware failure and you don’t have
to worry about it.

Zero downtime patching

The zero-downtime patching (ZDP) feature attempts, on a best-effort basis, to preserve client
connections through an engine patch. If ZDP executes successfully, application sessions are
preserved and the database engine restarts while patching. The database engine restart can cause a
drop in throughput lasting approximately 5 seconds. ZDP is available in Aurora MySQL 1.13
(compatible with MySQL 5.6) and later. It isn't available in Aurora MySQL version 2 (compatible with
MySQL 5.7).

Aurora MultiMaster

Amazon Aurora Multi-Master is a new feature of the Aurora MySQL-compatible edition that adds
the ability to scale out write performance across multiple Availability Zones, allowing applications to
direct read/write workloads to multiple instances in a database cluster and operate with higher
availability.

Amazon Aurora Multi-Master is now available in Preview for the MySQL-compatible edition of
Amazon Aurora.

Global Database

An Aurora global database consists of one primary AWS Region where your data is mastered, and
one read-only, secondary AWS Region. Aurora replicates data to the secondary AWS Region with
typical latency of under a second. You issue write operations directly to the primary DB instance in
the primary AWS Region. An Aurora global database uses dedicated infrastructure to replicate your
data, leaving database resources available entirely to serve application workloads. Applications with
a worldwide footprint can use reader instances in the secondary AWS Region for low latency reads.
In the unlikely event your database becomes degraded or isolated in an AWS region, you can
promote the secondary AWS Region to take full read-write workloads in under a minute.

The Aurora cluster in the primary AWS Region where your data is mastered performs both read and
write operations. The cluster in the secondary region enables low-latency reads. You can scale up

the secondary cluster independently by adding one or more DB instances (Aurora Replicas) to serve
read-only workloads. For disaster recovery, you can remove and promote the secondary cluster to
allow full read and write operations.

Only the primary cluster performs write operations. Clients that perform write operations connect to
the DB cluster endpoint of the primary cluster.


Fully managed and autoscaling

For even more performance, Amazon DynamoDB Accelerator (DAX) is a fully managed, highly
available, in-memory cache for DynamoDB, that delivers up to a 10x performance improvement—
from milliseconds to microseconds—even at millions of requests per second. DAX does all the heavy
lifting required to add in-memory acceleration to your DynamoDB tables, without requiring
developers to manage cache invalidation, data population, or cluster management. Now you can
focus on building great applications for your customers without worrying about performance at
scale. You do not need to modify application logic, because DAX is compatible with existing
DynamoDB API calls.

Introducing Amazon ElastiCache (we will spend a bit of time here)

That’s where Amazon ElastiCache comes in.

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory
data store and cache in the cloud. The service improves the performance of web applications by
allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying
entirely on slower disk-based databases. Amazon ElastiCache supports two open-source in-memory
engines:

Redis - a fast, open source, in-memory data store and cache. Amazon ElastiCache for Redis is a Redis-
compatible in-memory service that delivers the ease-of-use and power of Redis along with the
availability, reliability and performance suitable for the most demanding applications. Both single-
node and up to 15-shard clusters are available, enabling scalability to up to 6.1 TiB of in-memory
data. ElastiCache for Redis is fully managed, scalable, and secure - making it an ideal candidate to
power high-performance use cases such as Web, Mobile Apps, Gaming, Ad-Tech, and IoT.

Memcached - a widely adopted memory object caching system. Amazon ElastiCache for Memcached
is protocol compliant with Memcached, so popular tools that you use today with existing
Memcached environments will work seamlessly with the service. ElastiCache for Memcached is
suitable for caching use cases where performance and concurrency is important.

Key benefits of Amazon ElastiCache include:

Redis and Memcached Compatible

With Amazon ElastiCache, you get native access to Redis or Memcached in-memory environments.
This enables compatibility with your existing tools and applications.

Extreme Performance

Amazon ElastiCache works as an in-memory data store and cache to support the most demanding
applications requiring sub-millisecond response times. By utilizing an end-to-end optimized stack
running on customer-dedicated nodes, Amazon ElastiCache provides you with secure, blazing fast
performance.

Fully Managed

You no longer need to perform management tasks such as hardware provisioning, software
patching, setup, configuration, monitoring, failure recovery, and backups. ElastiCache continuously
monitors your clusters to keep your workloads up and running so that you can focus on higher value
application development.

Easily Scalable

Amazon ElastiCache can scale-out, scale-in, and scale-up to meet fluctuating application
demands. Write and memory scaling is supported with sharding. Replicas provide read scaling.

ElastiCache Redis

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond
latency to power internet-scale real-time applications. Built on open-source Redis and compatible
with the Redis APIs, ElastiCache for Redis works with your Redis clients and uses the open Redis data
format to store your data. Your self-managed Redis applications can work seamlessly with
ElastiCache for Redis without any code changes. ElastiCache for Redis combines the speed,
simplicity, and versatility of open-source Redis with manageability, security, and scalability from
Amazon to power the most demanding real-time applications in Gaming, Ad-Tech, E-Commerce,
Healthcare, Financial Services, and IoT.

Key capabilities of ElastiCache for Redis include:

- Redis-compatible: the #1 key-value store.



- Fully managed and hardened: Amazon ElastiCache for Redis is a fully managed service. You no
longer need to perform management tasks such as hardware provisioning, software patching, setup,
configuration, monitoring, failure recovery, and backups. ElastiCache continuously monitors your
clusters to keep your Redis up and running so that you can focus on higher value application
development. It provides detailed monitoring metrics associated with your nodes, enabling you to
diagnose and react to issues quickly. ElastiCache adds automatic write throttling, intelligent swap
memory management, and failover enhancements to improve upon the availability and
manageability of open source Redis.

- Secure & Compliant: Amazon ElastiCache for Redis supports Amazon VPC, enabling you to isolate
your cluster to the IP ranges you choose for your nodes and use them to connect to your application.
Also, ElastiCache team continuously monitors for known security vulnerabilities in open-source
Redis, operating system, and firmware, and promptly applies the security-related patches to keep
customers’ Redis environment secure. It is HIPAA eligible and offers encryption in transit, at rest,
and Redis AUTH for secure internode communications to help keep sensitive data such as personally
identifiable information (PII) safe.

- Highly available & Reliable: Amazon ElastiCache for Redis supports Redis cluster mode and
provides high availability via support for automatic failover, detecting a primary node failure and
promoting a replica to be the primary with minimal impact. It allows for read availability for
your application by supporting read replicas (across availability zones), enabling reads to be
served when the primary is busy with an increased workload. ElastiCache for Redis supports
enhanced failover logic to allow for automatic failover in cases when a majority of the primary
nodes for Redis cluster mode are unavailable.

- Easily scalable: With Amazon ElastiCache for Redis, you can start small and easily scale your Redis
data as your application grows - all the way up to a cluster with 6.1 TiB of in-memory data. It
supports online cluster resizing to scale-out and scale-in your Redis clusters without downtime and
adapts to changing demand. To scale read capacity, ElastiCache allows you to add up to five read
replicas across multiple availability zones. To scale write capacity, ElastiCache supports Redis cluster
which enables you to partition your write traffic across multiple primaries.
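
Because ElastiCache for Redis speaks the open Redis protocol, a standard client works unchanged. Here is a minimal cache-aside sketch with the redis-py library; the endpoint below is a placeholder you would copy from the console.

```python
import redis  # standard open-source client: pip install redis

# Placeholder ElastiCache for Redis endpoint; add ssl=True only if the cluster
# has in-transit encryption enabled.
r = redis.Redis(host="my-cluster.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

def get_product(product_id, load_from_db):
    """Cache-aside: serve from Redis when possible, fall back to the database."""
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return cached
    value = load_from_db(product_id)   # slower disk-based database call
    r.set(key, value, ex=300)          # keep it cached for 5 minutes
    return value
```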

ElastiCache Memcached

Memcached - a widely adopted memory object caching system. ElastiCache for Memcached offers a
fully managed Memcached service that is protocol compliant with Memcached, so popular tools that
you use today with existing Memcached environments will work seamlessly with the service.

Key benefits include:

EXTREME PERFORMANCE

Amazon ElastiCache for Memcached works as an in-memory data store and cache to support the
most demanding applications requiring sub-millisecond response times. By utilizing an end-to-end
optimized stack running on customer-dedicated nodes, Amazon ElastiCache provides you with
secure, blazing fast performance.

SECURE AND HARDENED

Amazon ElastiCache for Memcached supports Amazon VPC, enabling you to isolate your cluster to
the IP ranges you choose for your nodes, and use them to connect to your application. ElastiCache

continuously monitors your nodes and applies the necessary patches to keep your environment
secure.

EASILY SCALABLE

Amazon ElastiCache for Memcached includes sharding to scale the in-memory cache with up to 20
nodes and 8.14 TiB per cluster.

Monitoring :

Customer Experience

- Are my customers getting a good experience?

Performance & Cost

- How are my changes impacting overall performance?

Trends

- Do I need to scale?

Troubleshooting & Remediation

- Where did the problem occur?

Learning & Improvement

- Can I detect or prevent this problem in the future?

Traditional Approaches vs. Cloud

Monitoring and logging can be challenging in many on-premises environments due to the manual
configuration of physical and logical resources. Monitoring data, if even available, may span multiple
systems and processes, which further complicates things. In AWS, resources are software-defined
and changes to them are tracked as API calls. The current and past states of your environment can
be monitored and acted on in real time. AWS scale allows for ubiquitous logging, which can be
extended to your application logs and centralized for analysis, audit and mitigation purposes.

Amazon CloudWatch

You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application
performance, and operational health. You can use these insights to react and keep your application
running smoothly. Amazon CloudWatch monitors your AWS cloud resources and your cloud-
powered applications. It tracks metrics so that you can visualize and review them. You can also
set alarms that will fire when a metric goes beyond a limit that you specified. CloudWatch gives you
visibility into resource utilization, application performance, and operational health.

CloudWatch Concepts:



Metrics

A metric is the fundamental concept in CloudWatch. It represents a time-ordered set of data points
that are published to CloudWatch. These data points can be either your custom metrics or metrics
from other services in AWS. You can retrieve statistics about those data points as an ordered set of
time-series data. Metrics exist only in the region in which they are created. Metrics cannot be
deleted, but they automatically expire in 14 days if no new data is published to them.

Namespaces

CloudWatch namespaces are containers for metrics. Metrics in different namespaces are isolated
from each other, so that metrics from different applications are not mistakenly aggregated into the
same statistics.

Dimensions

A dimension is a name/value pair that helps you to uniquely identify a metric. Every metric has
specific characteristics that describe it, and you can think of dimensions as categories for those
characteristics. Dimensions help you design a structure for your statistics plan. Because dimensions
are part of the unique identifier for a metric, whenever you add a unique name/value pair to one of
your metrics, you are creating a new metric.

Time Stamps

With Amazon CloudWatch, each metric data point must be marked with a time stamp. The time
stamp can be up to two weeks in the past and up to two hours into the future. If you do not provide
a time stamp, CloudWatch creates a time stamp for you based on the time the data element was
received.

Units

Units represent your statistic's unit of measure. For example, the unit for the Amazon
EC2 NetworkIn metric is Bytes, because NetworkIn tracks the number of bytes that an instance
receives on all network interfaces.

Statistics

Statistics are metric data aggregations over specified periods of time. CloudWatch provides statistics
based on the metric data points provided by your custom data or provided by other services in AWS
to CloudWatch (e.g. Min, Max, Sum, Avg).

Periods

A period is the length of time associated with a specific Amazon CloudWatch statistic. Each statistic
represents an aggregation of the metrics data collected for a specified period of time. Although
periods are expressed in seconds, the minimum granularity for a period is one minute. Accordingly,
you specify period values as multiples of 60.

Aggregation

Amazon CloudWatch aggregates statistics according to the period length that you specify in calls
to GetMetricStatistics. You can publish as many data points as you want with the same or similar
time stamps.
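
Tying these concepts together, here is a hedged boto3 sketch that publishes a custom metric (namespace, dimension, unit) and then reads it back as 5-minute statistics; the metric and namespace names are illustrative only.

```python
import datetime
import boto3

cw = boto3.client("cloudwatch")

# Publish one data point for a custom metric in a custom namespace.
cw.put_metric_data(
    Namespace="MyApp",                                   # placeholder namespace
    MetricData=[{
        "MetricName": "PageLoadTime",
        "Dimensions": [{"Name": "Environment", "Value": "prod"}],
        "Value": 123.0,
        "Unit": "Milliseconds",
    }],
)

# Read it back as statistics over 5-minute (300-second) periods.
now = datetime.datetime.utcnow()
stats = cw.get_metric_statistics(
    Namespace="MyApp",
    MetricName="PageLoadTime",
    Dimensions=[{"Name": "Environment", "Value": "prod"}],
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average", "Maximum"],
)
print(stats["Datapoints"])
```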

Alarms

Alarms can automatically initiate actions on your behalf, based on parameters you specify. An alarm
watches a single metric over a specified time period, and performs one or more actions based on the
value of the metric relative to a given threshold over a number of time periods.

Regions

Amazon cloud computing resources are housed in highly available data center facilities. To provide
additional scalability and reliability, each data center facility is located in a specific geographical area,
known as a region.

Examples

- CloudWatch metrics are available across the AWS services for hundreds of data points.

- CloudWatch records metrics statistics across AWS resources and services ranging from core
EC2 metrics to higher level metrics.

- Default interval is 5 minutes but can be reduced to 1 minute.

- Retention is 14 days.

Amazon CloudWatch is a set of primitives that begins with core metrics collection and tracking
but also includes Alarms, Events and Logs. Each of these complements the others and extends
functionality to meet various monitoring and logging requirements.

CloudWatch Alarms

You can create a CloudWatch alarm that sends an Amazon Simple Notification Service message
when the alarm changes state. An alarm watches a single metric over a time period you specify, and
performs one or more actions based on the value of the metric relative to a given threshold over a
number of time periods. The action is a notification sent to an Amazon Simple Notification Service
topic or Auto Scaling policy. Alarms invoke actions for sustained state changes only. CloudWatch
alarms will not invoke actions simply because they are in a particular state, the state must have
changed and been maintained for a specified number of periods.

Example:

In this example, the alarm threshold is set to 3 and the evaluation period is 3. That is, the alarm
invokes its action if the oldest period is breaching and the others are breaching or missing within a
time window of 3 periods. In the figure, this happens with the third through fifth time periods, and
the alarm's state is set to ALARM. At period six, the value dips below the threshold, and the state
reverts to OK. Later, during the ninth time period, the threshold is breached again, but the previous
periods are OK. Consequently, the alarm's state remains OK.
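
A minimal boto3 sketch of such an alarm: three consecutive 5-minute periods of high CPU on one instance trigger a notification to an SNS topic (the instance ID and topic ARN are placeholders).

```python
import boto3

cw = boto3.client("cloudwatch")

cw.put_metric_alarm(
    AlarmName="high-cpu-web-1",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                 # 5-minute periods
    EvaluationPeriods=3,        # sustained for 3 periods before the alarm fires
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],       # placeholder
)
```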

CloudWatch Events

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes
in Amazon Web Services (AWS) resources to AWS Lambda functions, Amazon SNS topics, Amazon
SQS queues, streams in Amazon Kinesis Streams, or built-in targets. Using simple rules that you can
quickly set up, you can match events and route them to one or more target functions or streams.
CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events

responds to these operational changes and takes corrective action as necessary, by sending
messages to respond to the environment, activating functions, making changes, and capturing state
information.

Example – A daily scheduled job to create EBS snapshots for Instances tagged “Backup = Yes”.

Example – Detect when EC2 instances are launched and automatically tag them per defined logic.
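
As a hedged sketch of the second example, a rule can match EC2 instances entering the running state and route those events to a Lambda function that applies the tagging logic; the Lambda ARN is a placeholder, and the function must also grant the events service permission to invoke it (not shown).

```python
import json
import boto3

events = boto3.client("events")

# Match EC2 instances entering the "running" state.
events.put_rule(
    Name="tag-new-instances",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["running"]},
    }),
)

# Route matching events to a Lambda function that tags the instance.
events.put_targets(
    Rule="tag-new-instances",
    Targets=[{
        "Id": "tagger",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:tag-instances",  # placeholder
    }],
)
```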

CloudWatch Logs

You can use CloudWatch Logs to monitor and troubleshoot your systems and applications using your
existing system, application, and custom log files. You can send your existing log files to CloudWatch
Logs and monitor these logs in near real-time.

CloudWatch Dashboards

CloudWatch Dashboards can display multiple metrics, and can be accessorized with text and images.
You can build multiple dashboards if you’d like, each one focusing on providing a distinct view of
your environment. You can even pull data from multiple regions into a single dashboard in order to
create a global view.

Different Log categories

CloudWatch Logs can be used to monitor your logs for specific phrases, values, or patterns. For
example, you could set an alarm on the number of errors that occur in your system logs or view
graphs of web request latencies from your application logs. You can view the original log data to see
the source of the problem if needed. Log data can be stored and accessed for as long as you need
using highly durable, low-cost storage so you don’t have to worry about filling up hard drives.

CloudWatch Logs includes an installable agent for Ubuntu, Amazon Linux, and Windows at no
additional charge. You can use the agent to quickly and easily send your logs to CloudWatch. The
CloudWatch Logs Agent can be installed using CloudFormation, Chef, EC2 User Data or through
direct command-line setup.

AWS CloudTrail Best Practices

As a best practice, we recommend that you:

Enable AWS CloudTrail in all regions to get logs of API calls by setting up a trail that applies to ALL
regions.

Enable log file validation using industry standard algorithms: SHA-256 for hashing and SHA-256 with
RSA for digital signing.

By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side
encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is
directly manageable, you can instead use server-side encryption with AWS KMS–managed keys (SSE-
KMS) for your CloudTrail log files.

Setup real-time monitoring of CloudTrail logs by sending them to CloudWatch Logs.

If you are using multiple AWS accounts, centralize CloudTrail logs in a single account.

For added durability, configure Cross-Region Replication (CRR) for S3 buckets containing CloudTrail
logs.
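
A hedged boto3 sketch of the first two recommendations (an all-region trail with log file validation); the bucket name is a placeholder and must already carry the CloudTrail bucket policy.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-trail",
    S3BucketName="my-cloudtrail-logs-bucket",   # placeholder bucket with CloudTrail policy
    IsMultiRegionTrail=True,                    # capture API calls in all regions
    IncludeGlobalServiceEvents=True,            # include IAM, STS and other global services
    EnableLogFileValidation=True,               # SHA-256 digests for integrity checking
)

# A trail does not record events until logging is started explicitly.
cloudtrail.start_logging(Name="org-trail")
```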

AWS Config

AWS Config provides you with a detailed inventory of your AWS resources and their current
configuration in an AWS account

Continuously records configuration changes to these resources (e.g., EC2 instance launch,
ingress/egress rules of security groups, Network ACL rules for VPCs, etc.).

Determine how a resource was configured at any point in time, and get notified via Amazon SNS
when the configuration of a resource changes, or when a rule becomes noncompliant.

AWS Config rules represent your ideal configuration settings.

AWS Config provides customizable, predefined rules to help you get started. You can also create
your own custom rules.

If a resource violates a rule, AWS Config flags the resource and the rule as noncompliant.

The AWS Config console shows the compliance status of your rules and resources.

You can also use the AWS CLI, API, and SDKs to make requests to the AWS Config service for
compliance information.

VPC Flow Logs

Flow log data is stored using Amazon CloudWatch Logs, which can then be integrated with additional
services, such as Elasticsearch/Kibana for visualization

You can enable VPC Flow Logs at different scope levels:

- VPC - which would cover all network interfaces in that VPC

- Subnet – capturing traffic on all network interfaces in that subnet

- Network Interface – capture traffic specific to a single network interface

After you've created a flow log, it can take several minutes to begin collecting data and publishing to
CloudWatch Logs. That is why it is not a tool for capturing real-time log streams for network
interfaces.

Flow logs can capture either all flows, rejected flows or accepted flows.

They can be used both for security monitoring and application troubleshooting.

The amount of data getting captured can be significant in some situations, so choose carefully if you
plan to capture flows for the entire VPC.

VPC Flow Logs do not require any agents on EC2 instances.

Enable per ENI, per Subnet or per VPC

All network traffic data is logged to CloudWatch logs so you get durable storage but also all the
analysis features such as filter queries and metric creation

And then Create Alarms on those metrics

Collected, processed and stored in ~10-minute capture windows into CloudWatch Logs

Here is an example of a real-time network dashboard using the Amazon Elasticsearch Service and
the Kibana visual interface.

It is also based on a CloudWatch Logs subscription filter that tees Flow Log data into a Kinesis
stream; a stream reader then takes the data and puts it into Elasticsearch.

See Jeff’s blog post where he details how to set up this VPC Flow Dashboard in a few clicks (using a
CloudFormation template)
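
For illustration, enabling flow logs at the VPC scope with delivery to CloudWatch Logs might look like this boto3 sketch; the VPC ID and the IAM role that allows delivery to CloudWatch Logs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceType="VPC",                          # could also be "Subnet" or "NetworkInterface"
    ResourceIds=["vpc-0123456789abcdef0"],       # placeholder VPC
    TrafficType="ALL",                           # or "ACCEPT" / "REJECT"
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",  # placeholder
)
```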

Security Essentials & Best Practices

There’s a shared responsibility to accomplish security and compliance objectives in AWS cloud.
There are some elements that AWS takes responsibility for, and others that the customer must
address. The outcome of the collaborative approach is positive results seen by customers around
the world.

IAM

IAM allows you to implement a comprehensive access control on AWS resources.

IAM is giving you the ability to Authenticate, Authorize, Log all access.

-> Authenticate, with regular credentials or with strong authentication for your privileged users
(or everybody), as well as authenticate other AWS accounts or even trust other Identity Providers.

-> Authorize with granularity who can do what. Therefore you can implement Least Privilege and
Segregation of Duties.

-> And finally, Log every allow and deny in CloudTrail, for troubleshooting or audit purposes.

Basically when you think Access control with AWS resources then think IAM… Every time.
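
As a minimal least-privilege sketch with boto3, a managed policy scoped to one S3 bucket is created and attached to a group; the bucket and group names are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# A narrowly scoped policy: read-only access to objects in one bucket.
policy = iam.create_policy(
    PolicyName="ReadReportsBucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/*",   # placeholder bucket
        }],
    }),
)

# Attach it to a group so individual users inherit only these permissions.
iam.attach_group_policy(
    GroupName="report-readers",                                    # placeholder group
    PolicyArn=policy["Policy"]["Arn"],
)
```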

Account Owner (root): can do anything

IAM Policies: can be applied at the user level or at the resource level

• A username for each user
• Groups to manage multiple users
• Centralized access control
• Optional provisions:
• Password for console access
• Policies to control access to AWS APIs
• Two methods to sign API calls:
• X.509 certificate
• Access Key ID + Secret Access Key
• Multi-factor Authentication
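
A minimal boto3 sketch of this model (user, group, and password values are illustrative, and the managed policy ARN is only an example):

    import boto3

    # Centralized access control: a group with a managed policy, and a user in that group.
    iam = boto3.client("iam")

    iam.create_group(GroupName="Developers")
    iam.attach_group_policy(
        GroupName="Developers",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )

    iam.create_user(UserName="alice")
    iam.add_user_to_group(GroupName="Developers", UserName="alice")

    # Optional console password; in practice, never hard-code credentials like this.
    iam.create_login_profile(
        UserName="alice",
        Password="ChangeMe-123!",
        PasswordResetRequired=True,
    )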

Encryption

Beyond the ability to encrypt data at rest and in transit as it flows through and in/out of the AWS environment, AWS provides many services and features that make encryption easier: KMS, CloudHSM, VGW (IPsec VPN), EBS encryption, ELB SSL offloading, RDS Oracle TDE, SQL Server TDE, S3 object encryption, and so on.

If you are not currently encrypting data (either in transit or at rest), this is a good point to review the differences and the need to do so under the shared responsibility model.

AWS Certificate Manager

• Provision trusted SSL/TLS certificates from AWS for use with AWS resources:

Elastic Load Balancing

Amazon CloudFront distributions

• AWS handles the muck (the undifferentiated heavy lifting):

Key pair and CSR generation

Managed renewal and deployment

• Domain validation (DV) through email

• Available through AWS Management console, CLI, or API
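
A minimal sketch (the domain names are placeholders) of requesting an email-validated certificate through the API with boto3:

    import boto3

    # Request a public certificate; ACM sends the domain validation (DV) emails.
    acm = boto3.client("acm")

    response = acm.request_certificate(
        DomainName="www.example.com",
        SubjectAlternativeNames=["example.com"],
        ValidationMethod="EMAIL",   # DNS validation is also supported
    )
    print(response["CertificateArn"])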

AWS KMS

AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and
control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs)
to protect the security of your keys. AWS Key Management Service is integrated with several other
AWS services to help you protect your data you store with these services. AWS Key Management
Service is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet
your regulatory and compliance needs.

Integrated with AWS SDKs and AWS services:

S3, EBS, AWS Import/Export Snowball, RDS, Redshift, CodeCommit, CloudTrail, EMR, Kinesis
Firehose, Elastic Transcoder, SES, WorkSpaces, WorkMail

Centralized control.

Easy and automatic key rotation (KMS keeps track of old keys for decryption)

*New Feature*: Bring your own keys to KMS
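
A minimal boto3 sketch (the key description and plaintext are illustrative) of creating a key, enabling rotation, and encrypting/decrypting a small payload:

    import boto3

    kms = boto3.client("kms")

    # Create a customer-managed key and turn on automatic rotation.
    key_id = kms.create_key(Description="example key for these notes")["KeyMetadata"]["KeyId"]
    kms.enable_key_rotation(KeyId=key_id)

    # Encrypt and decrypt a small payload (direct KMS encryption is limited to ~4 KB).
    ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"secret value")["CiphertextBlob"]
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]   # key resolved from the blob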

Amazon Inspector

Inspector is an automated security assessment service to help improve the security and compliance
of applications deployed on AWS.

AWS WAF

AWS WAF was built in direct response to customer feedback.

WAF was initially available only with Amazon CloudFront (the CDN), but now integrates with Elastic Load Balancing (Application Load Balancer) as well.

WAFs help protect web sites & applications against attacks that cause data breaches and downtime.

General WAF use cases

• Protect from SQL Injection (SQLi) and Cross Site Scripting (XSS)

• Prevent Web Site Scraping, Crawlers, and BOTs

• Mitigate DDoS (HTTP/HTTPS floods)

Gartner reports that the main driver of WAF purchases (25-30%) is PCI compliance.

AWS CloudTrail

• Who made the API call?

• When was the API call made?

• What was the API call?

• Which resources were acted upon in the API call?

• Where was the API call made from and made to?

Stored durably in S3

CloudTrail logs can be consumed in several ways: the console, the CLI, or third-party tools such as Splunk, SumoLogic, AlertLogic, Loggly, and DataDog.
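
A minimal boto3 sketch (the event name is only an example) that answers "who made the API call?" for recent instance launches:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
        MaxResults=10,
    )
    for e in events["Events"]:
        print(e["EventTime"], e.get("Username"), e["EventName"])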

You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application
performance, and operational health. You can use these insights to react and keep your application
running smoothly.


AWS Config

AWS Config is a fully managed service that provides you with an AWS resource inventory,
configuration history, and configuration change notifications to enable security and governance.
With AWS Config you can discover existing AWS resources, export a complete inventory of your AWS
resources with all configuration details, and determine how a resource was configured at any point
in time. These capabilities enable compliance auditing, security analysis, resource change tracking,
and troubleshooting.

Use Cases:

Security analysis: Am I safe?

Audit compliance: Where is the evidence?

Change management: What will this change affect?

Troubleshooting: What has changed?

Discovery: What resources exist?

A Config Rule represents desired configurations for a resource and is evaluated against configuration
changes on the relevant resources, as recorded by AWS Config. The results of evaluating a rule
against the configuration of a resource are available on a dashboard. Using Config Rules, you can
assess your overall compliance and risk status from a configuration perspective, view compliance
trends over time and pinpoint which configuration change caused a resource to drift out of
compliance with a rule.

Note: Config Rules build on the base AWS Config capabilities described above; the configuration
change tracking is still important and is not replaced. It is a good idea to read the Config Rules
FAQs on the public page to get comfortable with the different elements.

Architecting Approaches for AWS



Lift-and-shift

Deploy existing apps in AWS with minimal re-design

• Good strategy if starting out on AWS, or if application can’t be re-architected due to cost or
resource constraints

• Primarily use core services such as EC2, EBS, VPC

Cloud-optimized

• Evolve architecture for existing app to leverage AWS services

• Gain cost and performance benefits from using AWS services such as Auto Scaling Groups,
RDS, SQS, and so on

Cloud-native architecture

• Architect app to be cloud-native from the outset

• Leverage the full AWS portfolio

• Truly gain all the benefits of AWS (security, scalability, cost, durability, low operational
burden, etc)

Design for failure and nothing fails

Build security in every layer

Leverage different storage options

Implement elasticity

Think parallel

Loose coupling sets you free

Don’t fear constraints

Design for Failure: A Single User

This here is the most basic set up you would need to serve up a web application.

Any user would first hit Route53 for DNS resolution.



Behind the DNS service is an EC2 instance running our webapp and database on a single server,

We will need to attach an Elastic IP so Route53 can direct traffic to our webstack at that IP Address
with an A record.

To scale this infrastructure, the only real option we have is to get a bigger EC2 instance.

Design for Failure: Solving No Failover/Redundancy

Next up we need to address the lack of failover and redundancy in our infrastructure.

We’re going to do this by adding in another webapp instance, and enabling the Multi-AZ feature of
RDS, which will give us a standby instance in a different AZ from the Primary.

We’re also going to replace our EIP with an Elastic Load Balancer to share the load between our two
web instances

Now we have an app that is a bit more scalable and has some fault tolerance built in as well.
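
A minimal boto3 sketch of the Multi-AZ step above (the instance identifier is a placeholder):

    import boto3

    rds = boto3.client("rds")

    # Turn an existing single-AZ RDS instance into a Multi-AZ deployment.
    rds.modify_db_instance(
        DBInstanceIdentifier="webapp-db",
        MultiAZ=True,
        ApplyImmediately=True,   # otherwise the change waits for the next maintenance window
    )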

Design for Failure: Best Practices

Best Practices:

• Eliminate single points of failure

• Use multiple Availability Zones

• Use Elastic Load Balancing

• Do real-time monitoring with CloudWatch

• Create a database standby across Availability Zones

Build Security in Every Layer

More Tools for your Security Toolbox:

• Amazon Inspector

• AWS Certificate Manager

• AWS Shield

• AWS Web Application Firewall (WAF)

• Amazon Macie

• Amazon GuardDuty

• AWS Config

Leverage Many Storage Options

One size does NOT fit all

• Amazon Elastic Block Store (EBS) – persistent block storage

• Amazon EC2 Instance Storage – ephemeral block storage

• Amazon RDS – managed relational database

• Amazon CloudFront – content distribution network

• Amazon S3 – object/blob store, good for large objects

• Amazon DynamoDB – non-relational data (key-value)

• Amazon ElastiCache – managed Redis or Memcached

Implement Elasticity

How To Guide:

• Write Auto Scaling policies with your specific application access patterns in mind

• Prepare your application to be flexible: don’t assume the health, availability, or fixed
location of components

• Architect resiliency to reboot and relaunch

• When an instance launches, it should ask “Who am I and what is my role?” (see the sketch after this list)

• Leverage highly scalable, managed services such as S3 and DynamoDB
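
A minimal sketch of the “Who am I and what is my role?” pattern, run on the instance itself (this uses the IMDSv1 endpoint; IMDSv2 additionally requires a session token):

    import urllib.request
    import urllib.error

    # Read identity and role from the EC2 instance metadata service.
    BASE = "http://169.254.169.254/latest/meta-data/"

    def metadata(path):
        try:
            with urllib.request.urlopen(BASE + path, timeout=2) as resp:
                return resp.read().decode()
        except urllib.error.URLError:
            return None   # e.g. no instance profile attached, or not running on EC2

    instance_id = metadata("instance-id")
    role_name = metadata("iam/security-credentials/")   # lists the attached IAM role name
    print("I am", instance_id, "running as role", role_name or "<none>")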

Think Parallel

Scale Horizontally, Not Vertically

• Decouple compute from state/session data

• Use ELBs to distribute load

• Break up big data into pieces for distributed processing

• Amazon EMR (Elastic MapReduce) – managed Hadoop

Faster doesn’t need to mean more expensive

• With EC2 On Demand, the following will cost the same:

• 12 hours of work using 4 vCPUs

• 1 hour of work using 48 vCPUs

• Right Size your infrastructure to your workload to get the best balance between cost and
performance

Parallelize using native managed services



• Get the best performance out of S3 with parallelized reads/writes

• Multi-part uploads (API) and byte-range GETs (HTTP) – see the sketch after this list

• Take on high concurrency with Lambda

• Initial soft limit: 1,000 concurrent executions per region
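
A minimal boto3 sketch of the S3 items above (bucket, key, and file names are placeholders): a multi-part upload for a large object, then a byte-range GET:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Upload in parallel parts once the file exceeds the multipart threshold.
    config = TransferConfig(multipart_threshold=8 * 1024 * 1024, max_concurrency=10)
    s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz", Config=config)

    # Read only the first megabyte of the object.
    part = s3.get_object(Bucket="my-bucket", Key="backups/backup.tar.gz", Range="bytes=0-1048575")
    data = part["Body"].read()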

Loose Coupling Sets You Free: Don’t Reinvent the Wheel

Build in redundancy and scalability

AWS has quite a few services that can solve key functionality areas in your application.

Combining loose coupling, SOA, and prebuilt services can have some huge advantages.

Instead of writing all these mini services yourself, try and leverage already existing services and
applications, especially when you are starting out.

DON’T REINVENT THE WHEEL! For example, at AWS we have services to help you with Email,
Queues, Transcoding, Search, Databases, and Monitoring and Metrics. Lean on other 3rd parties for
more.

Loosely coupling the different tiers of your architecture and using SOA gives you the ability to move quickly.

Loose Coupling Sets You Free

When services are loosely coupled, they can scale and be made fault tolerant independently of each
other.

The more loosely services are coupled, the larger they can scale.

So remember:

Design everything as a black box

Build separate services instead of something that is tightly interacting with something else

Use common interfaces or common APIs between the components

And remember to favor services with built-in redundancy and scalability rather than building your
own.
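
As a minimal illustration of loose coupling through a queue (the queue URL and the process() handler are hypothetical), the producer and consumer tiers only share the queue, not each other's location or health:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"   # placeholder

    def process(body):
        # Hypothetical business logic for one message.
        print("processing", body)

    # Producer tier
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

    # Consumer tier (scales independently of the producer)
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for m in messages.get("Messages", []):
        process(m["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])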

Don’t Fear Constraints

Rethink traditional architectural constraints

Need more RAM?

• Don’t: vertically scale

• Do: distribute load across machines or a shared cache

Need better IOPS for database?

• Don’t: rework schema/indexes or vertically scale

• Do: create read replicas (sketch below), implement sharding, add a caching layer
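
A minimal boto3 sketch of the read-replica option (both instance identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Add a read replica to offload read IOPS from the primary instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="webapp-db-replica-1",
        SourceDBInstanceIdentifier="webapp-db",
    )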



Hardware failed or config got corrupted?

• Don’t: waste production time diagnosing the problem

• Do: “Rip and replace” – stop/terminate old instance and relaunch

Need a Cost Effective Disaster Recovery (DR) strategy?

• Don’t: double your infrastructure costs when you don’t need to

• Do: implement Pilot Light or Warm Standby DR stacks

Learn to Fish

• AWS has documentation available per cloud service at https://aws.amazon.com/documentation/

• Each service typically has a User Guide and CLI Guides available.

• Getting Started Guides are also very common.

• Some guides vary by audience – Admin Guides, Developer/API Guides, Network Admin Guides.

• Available in multiple formats: online/HTML, downloadable/PDF, and Kindle ebooks.

• Beyond these core documentation resources, additional resources exist: FAQs, whitepapers, blog articles and demonstrations, Solutions, reference architectures, and Quick Starts.

• Understanding these resources will give you a complete view of a given service.

• These notes cover what those additional resources are and how to find them.

https://aws.amazon.com/faqs/

https://aws.amazon.com/blogs/aws/

https://aws.amazon.com/whitepapers/?whitepapers-main.sort-by=item.additionalFields.sortDate&whitepapers-main.sort-order=desc

https://aws.amazon.com/architecture/well-architected/

Wish you all the Best.

