Amazon Web Services
In 2006, Amazon Web Services (AWS) began offering IT services to the market in
the form of web services, which is now known as cloud computing. With the
cloud, we no longer need to plan and procure servers and other IT infrastructure
weeks or months in advance. Instead, these services can spin up hundreds or
thousands of servers in minutes and deliver results faster. We pay only for what we
use, with no up-front expenses and no long-term commitments, which makes AWS
cost efficient.
Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in
the cloud that powers a multitude of businesses in 190 countries around the world.
Types of Clouds
There are three types of clouds − Public, Private, and Hybrid cloud.
Public Cloud
In a public cloud, third-party service providers make resources and services
available to their customers via the Internet. The customers' data and the related
security controls reside on infrastructure owned by the service provider.
Private Cloud
A private cloud provides much the same features as a public cloud, but the data
and services are managed by the organization itself, or by a third party solely for the
customer's organization. In this type of cloud, the customer retains greater control
over the infrastructure, so security-related issues are minimized.
Hybrid Cloud
A hybrid cloud is the combination of both private and public cloud. The decision to
run on private or public cloud usually depends on various parameters like sensitivity
of data and applications, industry certifications and required standards, regulations,
etc.
IaaS
IaaS stands for Infrastructure as a Service. It provides users with the capability to
provision processing, storage, and network connectivity on demand. Using this
service model, the customers can develop their own applications on these
resources.
PaaS
PaaS stands for Platform as a Service. Here, the service provider provides various
services like databases, queues, workflow engines, e-mails, etc. to their customers.
The customer can then use these components for building their own applications.
The services, availability of resources, and data backup are handled by the service
provider, which helps the customers focus more on their application's functionality.
SaaS
SaaS stands for Software as a Service. As the name suggests, here the third-party
providers provide end-user applications to their customers with some administrative
capability at the application level, such as the ability to create and manage their
users. Some level of customization is also possible; for example, customers can use
their own corporate logos, colors, etc.
Security issues
Security is a major concern in cloud computing. Although cloud service providers
implement the best security standards and industry certifications, storing data and
important files with external service providers always carries some risk.
The AWS cloud infrastructure is designed to be a highly flexible and secure cloud
network. It provides a scalable and highly reliable platform that enables customers to
deploy applications and data quickly and securely.
Technical issues
As cloud service providers serve a large number of clients each day, the system can
occasionally face serious issues that lead to business processes being temporarily
suspended. Additionally, if the Internet connection is offline, we will not be able to
access any applications, servers, or data from the cloud.
Cloud service providers promise that the cloud will be flexible to use and
integrate; however, switching cloud providers is not easy. Most organizations find it
difficult to host and integrate their current cloud applications on another platform.
Interoperability and support issues may arise; for example, applications developed on a
Linux platform may not work properly on the Microsoft Development Framework (.Net).
Load Balancing
Load balancing simply means distributing the load across web servers using hardware
or software, which improves the efficiency of the server as well as the application.
Following is the diagrammatic representation of the AWS architecture with load balancing.
A hardware load balancer is a very common network appliance used in traditional
web application architectures.
AWS provides the Elastic Load Balancing service. It distributes traffic to EC2
instances across multiple Availability Zones and supports dynamic addition and
removal of Amazon EC2 hosts from the load-balancing rotation.
Elastic Load Balancing can dynamically grow and shrink the load-balancing
capacity to adjust to traffic demands, and it also supports sticky sessions to address
more advanced routing needs.
Amazon CloudFront
It is responsible for content delivery, i.e. it is used to deliver websites, which may
contain dynamic, static, and streaming content, using a global network of edge locations.
Requests for content at the user's end are automatically routed to the nearest edge
location, which improves performance.
Amazon CloudFront is optimized to work with other Amazon Web Services, like
Amazon S3 and Amazon EC2. It also works fine with any non-AWS origin server
and stores the original files in a similar manner.
In Amazon Web Services, there are no contracts or monthly commitments. We pay
only for as much or as little content as we deliver through the service.
Elastic Load Balancer
It is used to spread the traffic across web servers, which improves performance. AWS
provides the Elastic Load Balancing service, in which traffic is distributed to EC2
instances across multiple Availability Zones, and Amazon EC2 hosts can be added to
or removed from the load-balancing rotation dynamically.
Elastic Load Balancing can dynamically grow and shrink the load-balancing
capacity as per the traffic conditions.
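The console steps shown later in this tutorial use the Elastic Load Balancing wizard; purely as an illustration, a load balancer can also be created programmatically with boto3 (shown here with the newer elbv2 API for an Application Load Balancer; the subnet and security group IDs are placeholders):

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Create an internet-facing Application Load Balancer spanning two subnets,
# one per Availability Zone (IDs below are hypothetical).
response = elbv2.create_load_balancer(
    Name="demo-load-balancer",
    Subnets=["subnet-0abc1234", "subnet-0def5678"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)
print(response["LoadBalancers"][0]["DNSName"])   # public DNS name clients connect to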
Security Management
Amazon's Elastic Compute Cloud (EC2) provides a feature called security groups,
which is similar to an inbound network firewall: we specify the protocols, ports, and
source IP ranges that are allowed to reach our EC2 instances.
Each EC2 instance can be assigned one or more security groups, each of which
allows the appropriate traffic to reach the instance. Security groups can be configured
using specific subnets or IP addresses, which limits access to the EC2 instances.
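A minimal boto3 sketch of creating such a group and authorizing inbound traffic (the VPC ID, group name, and CIDR ranges here are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the security group in a hypothetical VPC.
sg = ec2.create_security_group(
    GroupName="web-servers",
    Description="Allow HTTP and SSH to the web tier",
    VpcId="vpc-0abc1234",
)

# Allow HTTP from anywhere and SSH only from an example office range.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
    ],
)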
Elastic Caches
Amazon ElastiCache is a web service that manages the memory cache in the
cloud. In memory management, the cache has a very important role: it helps
reduce the load on the services and improves the performance and scalability of the
database tier by caching frequently used information.
Storage and Backups
AWS cloud provides various options for storing, accessing, and backing up web
application data and assets. The Amazon S3 (Simple Storage Service) provides a
simple web-services interface that can be used to store and retrieve any amount of
data, at any time, from anywhere on the web.
Amazon S3 stores data as objects within resources called buckets. The user can
store as many objects as per requirement within the bucket, and can read, write and
delete objects from the bucket.
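As a rough illustration of these operations with boto3 (the bucket name and object key below are made up; bucket names must be globally unique):

import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket-12345"   # hypothetical, globally unique name

s3.create_bucket(Bucket=bucket)      # outside us-east-1 a CreateBucketConfiguration is also required

# Write, read back, and delete an object in the bucket.
s3.put_object(Bucket=bucket, Key="notes/hello.txt", Body=b"Hello from S3")
body = s3.get_object(Bucket=bucket, Key="notes/hello.txt")["Body"].read()
print(body)
s3.delete_object(Bucket=bucket, Key="notes/hello.txt")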
Amazon EBS is effective for data that needs to be accessed as block storage and
requires persistence beyond the life of the running instance, such as database
partitions and application logs.
Amazon EBS volumes can be up to 1 TB in size, and multiple volumes can be
striped together for larger capacity and increased performance. Provisioned IOPS volumes
are designed to meet the needs of database workloads that are sensitive to storage
performance and consistency.
Amazon EBS currently supports up to 1,000 IOPS per volume. We can stripe
multiple volumes together to deliver thousands of IOPS per instance to an
application.
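A minimal boto3 sketch of creating a Provisioned IOPS volume and attaching it to an instance (the Availability Zone, instance ID, and device name are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 100 GiB Provisioned IOPS volume in one Availability Zone.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="io1",
    Iops=1000,
)

# Wait until the volume is ready, then attach it to a hypothetical instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)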
Auto Scaling
The difference between AWS cloud architecture and the traditional hosting model is
that AWS can dynamically scale the web application fleet on demand to handle
changes in traffic.
In the traditional hosting model, traffic forecasting models are generally used to
provision hosts ahead of projected traffic. In AWS, instances can be provisioned on
the fly according to a set of triggers for scaling the fleet out and back in. Amazon
Auto Scaling can create capacity groups of servers that can grow or shrink on
demand.
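As a sketch of how such a capacity group and its scaling trigger might be defined with boto3 (the launch template name and subnet IDs are hypothetical; a target-tracking policy is shown as one possible trigger):

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# A capacity group that can grow and shrink between 2 and 10 instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-fleet",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0abc1234,subnet-0def5678",
)

# One possible trigger: keep the fleet's average CPU near 50 percent.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-fleet",
    PolicyName="target-cpu-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)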
In AWS, network devices like firewalls, routers, and load-balancers for AWS
applications no longer reside on physical devices and are replaced with software
solutions.
Multiple options are available to ensure quality software solutions. For load
balancing choose Zeus, HAProxy, Nginx, Pound, etc. For establishing a VPN
connection choose OpenVPN, OpenSwan, Vyatta, etc.
No Security Concerns
AWS provides a more secure model, in which every host is locked down. In
Amazon EC2, security groups are designed for each type of host in the architecture,
and a large variety of simple and tiered security models can be created to enable
minimum access among hosts within your architecture, as per requirement.
EC2 instances are available in multiple Availability Zones within each AWS Region,
which provides a model for deploying your application across data centers for both high
availability and reliability.
Step 3 − Select the service of your choice and the console of that service will open.
Click the Edit menu on the navigation bar and a list of services appears. We can
create their shortcuts by simply dragging them from the menu bar to the navigation
bar.
Adding Services Shortcuts
When we drag a service from the menu bar to the navigation bar, the shortcut is
created and added. We can also arrange the shortcuts in any order. In the following
screenshot, we have created shortcuts for the S3, EMR and DynamoDB services.
To delete a shortcut, click the Edit menu and drag the shortcut from the navigation
bar back to the service menu. The shortcut will be removed. In the following screenshot,
we have removed the shortcut for the EMR service.
Selecting a Region
Many of the services are region specific and we need to select a region so that
resources can be managed. Some of the services do not require a region to be
selected like AWS Identity and Access Management (IAM).
To select a region, first we need to select a service. Click the Oregon menu (on the
left side of the console) and then select a region
We can change the password of our AWS account. Following are the steps to
change the password.
Step 1 − Click the account name on the left side of the navigation bar.
Step 2 − Choose Security Credentials and a new page will open having various
options. Select the password option to change the password and follow the
instructions.
Step 3 − After signing in, a page opens again with certain options to change the
password. Follow the instructions.
Click the account name in the navigation bar and select the 'Billing & Cost
Management' option.
Now a new page will open with all the billing-related information.
Using this service, we can pay AWS bills, monitor our usage, and estimate our budget.
S3
Browse buckets and view their properties.
View properties of objects.
Route 53
Browse and view hosted zones.
Browse and view details of record sets.
Auto Scaling
View group details, policies, metrics and alarms.
Manage the number of instances as per the situation.
Elastic Beanstalk
View applications and events.
View environment configuration and swap environment CNAMEs.
Restart app servers.
DynamoDB
View tables and their details like metrics, index, alarms, etc.
CloudFormation
View stack status, tags, parameters, output, events, and resources.
OpsWorks
View configuration details of stack, layers, instances and applications.
View instances and their logs, and reboot them.
CloudWatch
View CloudWatch graphs of resources.
List CloudWatch alarms by status and time.
Action configurations for alarms.
Services Dashboard
Provides information of available services and their status.
All information related to the billing of the user.
Switch the users to see the resources in multiple accounts.
Amazon provides a fully functional free account for one year for users to use and
learn the different components of AWS. You get access to AWS services like EC2,
S3, DynamoDB, etc. for free. However, there are certain limitations based on the
resources consumed.
Step 1 − To create an AWS account, open this link https://aws.amazon.com and
sign-up for new account and enter the required details.
If we already have an account, then we can sign-in using the existing AWS
password.
Step 2 − After providing an email-address, complete this form. Amazon uses this
information for billing, invoicing and identifying the account. After creating the
account, sign-up for the services needed.
Step 3 − To sign-up for the services, enter the payment information. Amazon
executes a minimal-amount transaction against the card on file to check that it is
valid. This charge varies with the region.
Step 4 − Next, is the identity verification. Amazon does a call back to verify the
provided contact number.
Step 5 − Choose a support plan. Subscribe to one of the plans like Basic,
Developer, Business, or Enterprise. The basic plan costs nothing and has limited
resources, which is good to get familiar with AWS.
Step 6 − The final step is confirmation. Click the link to login again and it redirects
to AWS management console.
Now the account is created and can be used to avail AWS services.
AWS Account Identifiers
AWS assigns two unique IDs to each AWS account.
An AWS account ID
A canonical user ID
AWS Account ID
Account Alias
Account alias is the URL for your sign-in page and contains the account ID by
default. We can customize this URL with the company name and even overwrite the
previous one.
Step 1 − Sign in to the AWS management console and open the IAM console using
the following link https://console.aws.amazon.com/iam/
Step 2 − Select the customize link and create an alias of choice.
Step 3 − To delete the alias, click the customize link, then click the Yes, Delete
button. This deletes the alias and it reverts to the Account ID.
Requirements
To use MFA services, the user has to assign a device (hardware or virtual) to IAM
user or AWS root account. Each MFA device assigned to the user must be unique,
i.e. the user cannot enter a code from another user's device to authenticate.
In this method, MFA requires us to configure the IAM user with the phone number of
the user's SMS-compatible mobile device. When the user signs in, AWS sends a
six-digit code by SMS text message to the user's mobile device. The user is
required to enter the same code on a second web page during sign-in to
authenticate the right user. This SMS-based MFA cannot be used with AWS root
account.
In this method, MFA requires us to assign an MFA device (hardware) to the IAM
user or the AWS root account. The device generates a six-digit numeric code based
upon a time synchronized one-time password algorithm. The user has to enter the
same code from the device on a second web page during sign-in to authenticate the
right user.
In this method, MFA requires us to assign an MFA device (virtual) to the IAM user
or the AWS root account. A virtual device is a software application (mobile app)
running on a mobile device that emulates a physical device. The device generates a
six-digit numeric code based upon a time-synchronized one-time password
algorithm. The user has to enter the same code from the device on a second web
page during sign-in to authenticate the right user.
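For illustration only, the virtual MFA flow can also be scripted with boto3's IAM client; the user name and authentication codes below are placeholders, and in practice the codes come from the authenticator app after it imports the returned seed or QR code:

import boto3

iam = boto3.client("iam")

# Create the virtual device; the response carries a Base32 seed and QR code
# that are loaded into an authenticator app on the mobile device.
device = iam.create_virtual_mfa_device(VirtualMFADeviceName="alice-mfa")
serial = device["VirtualMFADevice"]["SerialNumber"]

# Activate the device for the user with two consecutive codes from the app.
iam.enable_mfa_device(
    UserName="alice",               # hypothetical IAM user
    SerialNumber=serial,
    AuthenticationCode1="123456",   # placeholder codes
    AuthenticationCode2="654321",
)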
Step 5 − We can manage the user’s own security credentials like creating
password, managing MFA devices, managing security certificates, creating/deleting
access keys, adding user to groups, etc.
There are many more features that are optional and are available on the web page.
EC2 Components
In AWS EC2, the users must be aware about the EC2 components, their operating
systems support, security measures, pricing structures, etc.
Operating System Support
Security
Users have complete control over the visibility of their AWS account. In AWS EC2,
the security system allows us to create groups and place running instances into them
as per requirement. We can specify the groups with which other groups may
communicate, as well as the IP subnets on the Internet with which the groups may communicate.
Pricing
AWS offers a variety of pricing options, depending on the type of resources, types
of applications and database. It allows the users to configure their resources and
compute the charges accordingly.
Fault tolerance
Amazon EC2 allows the users to access its resources to design fault-tolerant
applications. EC2 also comprises geographic regions and isolated locations known
as availability zones for fault tolerance and stability. It doesn’t share the exact
locations of regional data centers for security reasons.
When the users launch an instance, they must select an AMI that's in the same
region where the instance will run. Instances are distributed across multiple
availability zones to provide continuous services in failures, and Elastic IP (EIPs)
addresses are used to quickly map failed instance addresses to concurrent running
instances in other zones to avoid delay in services.
Migration
This service allows the users to move existing applications into EC2. It costs $80.00
per storage device and $2.49 per hour for data loading. This service suits those
users having large amount of data to move.
Features of EC2
Here is a list of some of the prominent features of EC2 −
Reliable − Amazon EC2 offers a highly reliable environment where replacement of
instances is rapidly possible. Service Level Agreement commitment is 99.9% availability
for each Amazon EC2 region.
Designed for Amazon Web Services − Amazon EC2 works fine with Amazon services
like Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon SQS. It provides a
complete solution for computing, query processing, and storage across a wide range of
applications.
Secure − Amazon EC2 works in Amazon Virtual Private Cloud to provide a secure and
robust network to resources.
Flexible Tools − Amazon EC2 provides the tools for developers and system
administrators to build failure-resilient applications and isolate them from common failure
situations.
Inexpensive − Amazon EC2 wants us to pay only for the resources that we use. It
includes multiple purchase plans such as On-Demand Instances, Reserved Instances,
Spot Instances, etc. which we can choose as per our requirement.
Load Balancer
This includes monitoring and handling the requests coming in through the
Internet/intranet and distributing them to the EC2 instances registered with it.
Control Service
SSL Termination
ELB provides SSL termination, which saves precious CPU cycles otherwise spent
encoding and decoding SSL within the EC2 instances attached to the ELB. An X.509
certificate must be configured on the ELB. The SSL connection from the ELB to the EC2
instance is optional; we can also terminate the connection at the ELB.
Features of ELB
Following are the most prominent features of ELB −
ELB is designed to handle unlimited requests per second with a gradually increasing load
pattern.
We can configure EC2 instances and load balancers to accept traffic.
We can add/remove load balancers as per requirement without affecting the overall flow
of information.
It is not designed to handle a sudden increase in requests, such as for online exams, online
trading, etc.
Customers can enable Elastic Load Balancing within a single Availability Zone or across
multiple zones for even more consistent application performance.
Step 7 − Click the Add button and a new pop-up will appear to select subnets from
the list of available subnets as shown in the following screenshot. Select only one
subnet per availability zone. This window will not appear if we do not select Enable
advanced VPC configuration.
Step 8 − Choose Next; a pop-up window will open. After selecting a VPC as your
network, assign security groups to Load Balancers.
Step 9 − Follow the instructions to assign security groups to load balancers and
click Next.
Step 10 − A new pop-up will open with the health check configuration details set to
default values. We can set our own values; however, these are optional. Click
Next: Add EC2 Instances.
Step 11 − A pop-up window will open with information about instances, such as
registered instances. Add instances to the load balancer by selecting the Add EC2
Instance option and filling in the required information. Click Add Tags.
Step 12 − Adding tags to your load balancer is optional. To add tags, open the Add
Tags page and fill in the details, such as the key and value of the tag. Then choose the
Create Tag option and click the Review and Create button.
A review page opens on which we can verify the setting. We can even change the
settings by choosing the edit link.
Step 13 − Click Create to create your load balancer and then click the Close button.
How It Works?
Each WorkSpace is a persistent Windows Server 2008 R2 instance that looks like
Windows 7, hosted on the AWS cloud. Desktops are streamed to users via PCoIP,
and the data is backed up every 12 hours by default.
User Requirements
An Internet connection with TCP and UDP open ports is required at the user’s end.
They have to download a free Amazon WorkSpaces client application for their
device.
A review page will open to review the information. Make changes if incorrect, then click
the Create Simple AD button.
Select the cloud directory. Enable/disable WorkDocs for all users in this directory, then
click the Yes, Next button.
A new page will open. Fill the details for the new user and select the Create
Users button. Once the user is added to the WorkSpace list, select Next.
Enter the number of bundles needed in the value field of WorkSpaces Bundles page,
then select Next.
A review page will open. Check the details and make changes if required. Select Launch
WorkSpaces.
There will be a message to confirm the account, after which we can use
WorkSpaces.
Step 4 − Test your WorkSpaces using the following steps.
Download and install the Amazon WorkSpaces client application using the following
link − https://clients.amazonworkspaces.com/.
Run the application. For the first time, we need to enter the registration code received in
email and click Register.
Connect to the WorkSpace by entering the user name and password for the user. Select
Sign In.
Now the WorkSpace desktop is displayed. Open the
link http://aws.amazon.com/workspaces/ in the web browser. Navigate and verify that
the page can be viewed.
A message saying “Congratulations! Your Amazon WorkSpaces cloud directory has
been created, and your first WorkSpace is working correctly and has Internet access”
will be received.
This AWS WorkSpaces feature verifies that the network and Internet connections are
working, checks whether WorkSpaces and their associated registration services are
accessible, and checks whether port 4172 is open for UDP and TCP access.
Client Reconnect
This AWS WorkSpaces feature allows users to access their WorkSpace
without entering their credentials every time they disconnect. The application
installed on the client's device saves an access token in a secure store, which is
valid for 12 hours and is used to authenticate the right user. Users click the
Reconnect button in the application to get access to their WorkSpace. Users can
disable this feature at any time.
Auto Resume Session
This AWS WorkSpaces feature allows the client to resume a session that was
disconnected due to any issue in network connectivity, within 20 minutes by
default (this can be extended up to 4 hours). Users can disable this feature at any time in the
group policy section.
Console Search
This feature allows Administrators to search for WorkSpaces by their user name,
bundle type, or directory.
This entry will be visible in the Event Sources tab of the Lambda service page.
Step 8 − Add some entries to the table. When an entry is added and saved,
the Lambda service should trigger the function. This can be verified using the Lambda
logs.
Step 9 − To view logs, select the Lambda service and click the Monitoring tab. Then
click the View Logs in CloudWatch.
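For reference, a minimal Python handler for such a DynamoDB-triggered function might look like the sketch below; it simply logs each stream record, which is what then shows up in the CloudWatch logs (the handler name and return shape are illustrative):

def lambda_handler(event, context):
    # Each stream record describes one INSERT, MODIFY, or REMOVE on the table.
    records = event.get("Records", [])
    for record in records:
        print(record["eventName"], record["dynamodb"].get("Keys"))
    # Anything printed here ends up in the function's CloudWatch log group.
    return {"processed": len(records)}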
Benefits of AWS Lambda
Following are some of the benefits of using Lambda tasks −
Lambda tasks need not be registered like Amazon SWF activity types.
We can use any existing Lambda functions that we have already defined in our workflows.
Lambda functions are called directly by Amazon SWF; there is no need to design a
program to implement and execute them.
Lambda provides us the metrics and logs for tracking function executions.
Throttle Limit
The throttle limit is 100 concurrent Lambda function executions per account and is
applied to the total concurrent executions across all functions within the same region.
The formula to calculate the number of concurrent executions for a function is:
(average duration of the function execution in seconds) X (number of requests or
events processed by AWS Lambda per second).
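A quick worked example of this formula (plain Python, with made-up numbers):

# A function that runs for 3 seconds on average and receives 10 requests
# per second needs roughly 3 x 10 = 30 concurrent executions.
average_duration_seconds = 3
requests_per_second = 10
concurrent_executions = average_duration_seconds * requests_per_second
print(concurrent_executions)   # 30, well under the default limit of 100 per account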
When the throttle limit is reached, Lambda returns a throttling error with error code
429. After 15-30 minutes you can start work again. The throttle limit can be increased
by contacting the AWS support center.
Resource Limits
The following table shows the list of resource limits for a Lambda function.
Service Limits
The following table shows the list of service limits for deploying a Lambda function.
Size of code/dependencies that you can zip into a deployment package (uncompressed zip/jar size) − 250 MB
Total size of all the deployment packages that can be uploaded per region − 1.5 GB
Number of unique event sources of the Scheduled Event source type per account − 50
Number of unique Lambda functions you can connect to each Scheduled Event − 5
Amazon EC2
Amazon Route 53
Amazon WorkSpaces
Auto Scaling
Elastic Load Balancing
AWS Data Pipeline
Elastic Beanstalk
Amazon Elastic Cache
Amazon EMR
Amazon OpsWorks
Amazon RDS
Amazon Redshift
Create VPC
Step 1 − Open the Amazon VPC console by using the following link
− https://console.aws.amazon.com/vpc/
Step 2 − Select creating the VPC option on the right side of the navigation bar.
Make sure that the same region is selected as for other services.
Step 3 − Click the start VPC wizard option, then click VPC with single public subnet
option on the left side.
Step 4 − A configuration page will open. Fill in the details like VPC name, subnet
name and leave the other fields as default. Click the Create VPC button.
Step 5 − A dialog box will open, showing the work in progress. When it is
completed, select the OK button.
The Your VPCs page opens which shows a list of available VPCs. The setting of
VPC can be changed here.
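For reference, roughly the same result as the wizard can be sketched with boto3 (the CIDR blocks and Name tag are just example values; the wizard also creates route tables and other pieces not shown here):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC and tag it with a name.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "demo-vpc"}])

# One public subnet plus an internet gateway attached to the VPC.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
print(vpc_id, subnet_id)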
Step 1 − Open the Amazon VPC console by using the following link
− https://console.aws.amazon.com/vpc/
Step 2 − Select the security groups option in the navigation bar, then choose create
security group option.
Step 3 − A form will open. Enter the details, such as the group name, name tag, etc.
Select the ID of your VPC from the VPC menu, then select the Yes, Create button.
Step 4 − The list of groups opens. Select the group name from the list and set rules.
Then click the Save button.
Step 1 − Open the Amazon VPC console using the following link
− https://console.aws.amazon.com/vpc/
Step 2 − Select the same region as while creating VPC and security group.
Step 3 − Now select the Launch Instance option in the navigation bar.
Step 4 − A page opens. Choose the AMI which is to be used.
Step 5 − A new page opens. Choose an Instance Type and select the hardware
configuration. Then select Next: Configure Instance Details.
Step 6 − Select the recently created VPC from the Network list, and the subnet from
the Subnet list. Leave the other settings as default and click Next till the Tag
Instance page.
Step 7 − On the Tag Instance page, tag the instance with the Name tag. This helps
to identify your instance from the list of multiple instances. Click Next: Configure
Security Group.
Step 8 − On the Configure Security Group page, select the recently created group
from the list. Then, select Review and Launch button.
Step 9 − On the Review Instance Launch page, check your instance details, then
select Launch.
Step 10 − A dialog box appears. Choose the option Select an existing key pair or
create a new key pair, then click the Launch Instances button.
Step 11 − A confirmation page opens, showing all the details related to the
instance.
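A minimal boto3 sketch of launching an instance into the newly created VPC (the AMI ID, key pair name, subnet and security group IDs are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # an AMI in the same region (placeholder)
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                  # an existing key pair
    SubnetId="subnet-0abc1234",             # the subnet created with the VPC
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "vpc-demo-instance"}],
    }],
)
print(response["Instances"][0]["InstanceId"])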
Step 1 − Open the Amazon VPC console using the following link
− https://console.aws.amazon.com/vpc/
Step 2 − Select the Elastic IPs option in the navigation bar.
Step 3 − Select Allocate New Address. Then select Yes, Allocate button.
Step 4 − Select your Elastic IP address from the list, then select Actions, and then
click the Associate Address button.
Step 5 − A dialog box will open. First select the Instance from the Associate with
list. Then select your instance from the Instance list. Finally click the Yes, Associate
button.
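The same allocation and association can be sketched with boto3 (the instance ID below is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP for use in a VPC and associate it with an instance.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",       # placeholder instance ID
)
print(allocation["PublicIp"])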
Delete a VPC
There are several steps to delete a VPC without losing the resources associated with
it. Following are the steps to delete a VPC.
Step 1 − Open the Amazon VPC console using the following link
− https://console.aws.amazon.com/vpc/
Step 2 − Select Instances option in the navigation bar.
Step 3 − Select the Instance from the list, then select the Actions → Instance State
→ Terminate button.
Step 4 − A new dialog box opens. Expand the Release attached Elastic IPs section,
and select the checkbox next to the Elastic IP address. Click the Yes, Terminate
button.
Step 5 − Again open the Amazon VPC console using the following link
− https://console.aws.amazon.com/vpc/
Step 6 − Select the VPC from the navigation bar, then select Actions and finally click
the Delete VPC button.
Step 7 − A confirmation message appears. Click the Yes, Delete button.
Features of VPC
Many connectivity options − There are various connectivity options that exist in
Amazon VPC.
o Connect VPC directly to the Internet via public subnets.
o Connect to the Internet using Network Address Translation via private subnets.
o Connect securely to your corporate datacenter via encrypted IPsec hardware
VPN connection.
o Connect privately to other VPCs, which allows us to share resources across
multiple virtual networks through an AWS account.
o Connect to Amazon S3 without using an internet gateway and have fine-grained control
over S3 buckets, their user requests, groups, etc.
o A combined connection of the VPC and the datacenter is possible by configuring Amazon
VPC route tables to direct all traffic to its destination.
Easy to use − A VPC can be created in a few simple steps by selecting the network setup
as per requirement. Click "Start VPC Wizard", and the subnets, IP ranges, route tables,
and security groups will be created automatically.
Easy to backup data − Periodically backup data from the datacenter into Amazon EC2
instances by using Amazon EBS volumes.
Easy to extend network using Cloud − Move applications, launch additional web
servers and increase storage capacity by connecting it to a VPC.
Step 5 − If the domain is registered with godaddy.com, open the domain's control panel
and update the name servers with the Route 53 DNS endpoints. Delete the remaining
default values. It will take 2-3 minutes to update.
Step 6 − Go back to Route 53 console and select the go to record sets option. This
will show you the list of record sets. By default, there are two record sets of type NS
& SOA.
Step 7 − To create your record set, select the create record set option. Fill the
required details such as: Name, Type, Alias, TTL seconds, Value, Routing policy,
etc. Click the Save record set button.
Step 8 − Create one more record set for some other region so that there are two
record sets with the same domain name pointing to different IP addresses with your
selected routing policy.
Once completed, the user requests will be routed based on the network policy.
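As an illustrative alternative to the console, a record set can also be created through the Route 53 API with boto3 (the hosted zone ID, record name, and IP address here are placeholders):

import boto3

route53 = boto3.client("route53")

# Create an A record in an existing hosted zone.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Comment": "Point www at the web server",
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)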
Features of Route 53
Easy to register your domain − We can purchase all levels of domains like .com, .net,
.org, etc. directly from Route 53.
Highly reliable − Route 53 is built on AWS infrastructure. The distributed nature of its
DNS servers helps ensure a consistent ability to route end users' applications.
Scalable − Route 53 is designed in such a way that it automatically handles large
query volumes without the user's interaction.
Can be used with other AWS Services − Route 53 also works with other AWS
services. It can be used to map domain names to our Amazon EC2 instances, Amazon
S3 buckets, and other AWS resources.
Easy to use − It is easy to sign-up, easy to configure DNS settings, and provides quick
response to DNS queries.
Health Check − Route 53 monitors the health of the application. If an outage is detected,
then it automatically redirects the users to a healthy resource.
Cost-Effective − Pay only for the domain service and the number of queries that the
service answers for each domain.
Secure − By integrating Route 53 with AWS (IAM), there is complete control over every
user within the AWS account, such as deciding which user can access which part of
Route 53.
Step 4 − The Create a Connection dialog box opens. Fill in the required details and
click the Create button.
AWS will send a confirmation email to the authorized user within 72 hours.
Step 5 − Create a Virtual Interface using the following steps.
Open AWS console page again.
Select Connection in the navigation bar, then select Create Virtual Interface. Fill the
required details and click the Continue button.
Select Download Router Configuration, then click the Download button.
Verify the Virtual Interface (optional). To verify the AWS Direct Connect connections use
the following procedures.
To verify virtual interface connection to the AWS cloud − Run traceroute and verify
that the AWS Direct Connect identifier is in the network trace.
To verify virtual interface connection to Amazon VPC − Use any pingable AMI and
launch Amazon EC2 instance into the VPC that is attached to the virtual private
gateway.
When an instance is running, get its private IP address and ping the IP address to get a
response.
The bucket is created successfully in Amazon S3. The console displays the list of
buckets and their properties.
Select the Static Website Hosting option. Click the radio button Enable website hosting
and fill the required details.
Click the Add files option. Select those files which are to be uploaded from the system
and then click the Open button.
Click the start upload button. The files will get uploaded into the bucket.
To open/download an object − In the Amazon S3 console, in the Objects &
Folders list, right-click on the object to be opened/downloaded. Then, select the
required option.
General Purpose (SSD) Volumes
This volume type is suitable for small and medium workloads like root disk EC2
volumes, small and medium database workloads, workloads that frequently access
logs, etc. By default, General Purpose SSD supports 3 IOPS (Input/Output Operations Per
Second) per GB, which means a 1 GB volume will give 3 IOPS and a 10 GB volume will
give 30 IOPS. The storage capacity of one volume ranges from 1 GB to 1 TB. The cost of
one volume is $0.10 per GB for one month.
Provisioned IOPS (SSD) Volumes
This volume type is suitable for the most demanding I/O-intensive and transactional
workloads and large relational, EMR and Hadoop workloads, etc. By default, Provisioned
IOPS SSD supports 30 IOPS per GB, which means a 10 GB volume will give 300 IOPS. The
storage capacity of one volume ranges from 10 GB to 1 TB. The cost of one volume is $0.125
per GB for one month for provisioned storage and $0.10 per provisioned IOPS for
one month.
Magnetic Volumes
This volume type was formerly known as standard volumes. It is suitable for light
workloads like infrequently accessed data, i.e. data backups for recovery, log
storage, etc. The storage capacity of one volume ranges from 10 GB to 1 TB. The cost
of one volume is $0.05 per GB for one month for provisioned storage and $0.05 per
million I/O requests.
Each account will be limited to 20 EBS volumes. For a requirement of more than 20
EBS volumes, contact Amazon’s Support team. We can attach up to 20 volumes on
a single instance and each volume ranges from 1GB to 1TB in size.
In EC2 instances, we store data in local storage, which is available only while the
instance is running. When we shut down the instance, the data gets lost. Thus, when
we need to save anything, it is advised to save it on Amazon EBS, as we can
access and read the EBS volumes at any time once we attach them to an EC2
instance.
Step 2 − Restore an EBS Volume from a snapshot using the following steps.
Repeat steps 1 to 4 above to create a volume.
Type snapshot ID in the Snapshot ID field from which the volume is to be restored and
select it from the list of suggested options.
If there is requirement for more storage, change the storage size in the Size field.
Select the Yes Create button.
Step 3 − Attach EBS Volume to an Instance using the following steps.
Open the Amazon EC2 console.
Select Volumes in the navigation pane. Choose a volume and click the Attach Volume
option.
An Attach Volume dialog box will open. Enter the name/ID of the instance to attach the
volume to in the Instance field, or select it from the list of suggested options.
Click the Attach button.
To detach the volume later, choose it from the Volumes list and select the Detach Volume
option. A confirmation dialog box opens. Click the Yes, Detach button to confirm.
Amazon Web Services - Storage Gateway
AWS Storage Gateway provides integration between the on-premises IT
environment and the AWS storage infrastructure. The user can store data in the
AWS cloud for scalable, secure, and cost-efficient storage.
AWS Storage Gateway offers two types of storage, i.e. volume-based and tape-based.
Volume Gateways
This storage type provides cloud-backed storage volumes that can be mounted as
Internet Small Computer System Interface (iSCSI) devices from on-premises
application servers. It comes in two categories − gateway-cached and gateway-stored volumes.
Gateway-cached Volumes
AWS Storage Gateway stores all the on-premises application data in a storage
volume in Amazon S3. The storage volume ranges from 1 GB to 32 TB in size, and up to 20
volumes with a total storage of 150 TB are supported. We can attach these volumes as iSCSI
devices from on-premises application servers.
Gateway-stored Volumes
When the Virtual Machine (VM) is activated, gateway volumes are created and
mapped to the on-premises direct-attached storage disks. Hence, when the
applications write/read the data from the gateway storage volumes, it reads and
writes the data from the mapped on-premises disk.
A gateway-stored volume allows us to store primary data locally and provides on-
premises applications with low-latency access to entire datasets. We can mount
these volumes as iSCSI devices to the on-premises application servers. A volume ranges
from 1 GB to 16 TB in size, and each gateway supports up to 12 volumes with a maximum
storage of 192 TB.
Features of CloudFront
Fast − CloudFront's broad network of edge locations caches copies of
content close to the end users, which results in lower latency, high data transfer
rates, and low network traffic. All of these make CloudFront fast.
Simple − It is easy to use.
Can be used with other AWS Services − Amazon CloudFront is designed in such
a way that it can be easily integrated with other AWS services, like Amazon S3,
Amazon EC2.
Cost-effective − Using Amazon CloudFront, we pay only for the content that we
deliver through the network, without any hidden charges and no up-front fees.
Elastic − Using Amazon CloudFront, we need not worry about maintenance. The
service automatically responds if any action is needed, in case the demand
increases or decreases.
Reliable − Amazon CloudFront is built on Amazon's highly reliable infrastructure,
i.e. its edge locations will automatically re-route end users to the next nearest
location if required.
Global − Amazon CloudFront uses a global network of edge locations located in
most of the regions.
Default Cache Behavior Settings page opens. Keep the values as default and move to
the next page.
A Distribution settings page opens. Fill the details as per your requirement and click the
Create Distribution button.
The Status column changes from In Progress to Deployed. Enable your distribution by
selecting the Enable option. It will take around 15 minutes for the domain name to be
available in the Distributions list.
Step 5 − On the Specify DB Details page, provide the required details and click the
Continue button.
Step 8 − On the Review page, verify the details and click the Launch DB Instance
button.
Now DB instance shows in the list of DB instances.
Create Table window opens. Fill the details into their respective fields and click the
Continue button.
Finally, a review page opens where we can view details. Click the Create button.
Now the table name is visible in the list, and the DynamoDB table is ready to use.
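As a rough programmatic equivalent of the console steps above, a table can also be created with boto3 (the table name, key, and throughput values below are just illustrative):

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# A table with a single string partition key.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Wait until the table is ACTIVE before writing items to it.
dynamodb.get_waiter("table_exists").wait(TableName="Orders")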
The Cluster Details page opens. Provide the required details and click the Continue
button till the review page.
A confirmation page opens. Click the Close button to finish so that cluster is visible in the
Clusters list.
Select the cluster in the list and review the Cluster Status information. The page will
show Cluster status.
Step 2 − Configure the security group to authorize client connections to the cluster.
Authorizing access to Redshift depends on whether the client is an EC2
instance or not.
Follow these steps to configure the security group on the EC2-VPC platform.
Open Amazon Redshift Console and click Clusters on the navigation pane.
Select the desired Cluster. Its Configuration tab opens.
Click the Edit button. Set the fields as shown below and click the Save button.
o Type − Custom TCP Rule.
o Protocol − TCP.
o Port Range − Type the same port number used while launching the cluster. The
default port for Amazon Redshift is 5439.
o Source − Select Custom IP, then type 0.0.0.0/0.
Use the following steps to connect the Cluster with SQL Workbench/J.
o Open SQL Workbench/J.
o Select the File and click the Connect window.
o Select Create a new connection profile and fill the required details like name, etc.
o Click Manage Drivers and Manage Drivers dialog box opens.
o Click the Create a new entry button and fill the required details.
Click the folder icon and navigate to the driver location. Finally, click the Open button.
Leave the Classname box and Sample URL box blank. Click OK.
Choose the Driver from the list.
In the URL field, paste the JDBC URL copied.
Enter the username and password to their respective fields.
Select the Autocommit box and click Save profile list.
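The steps above use SQL Workbench/J over JDBC; as an alternative sketch, the same connection details can be used from Python with psycopg2, since Redshift speaks the PostgreSQL wire protocol (the endpoint and credentials below are placeholders):

import psycopg2

# Host, database, and credentials come from the cluster's endpoint;
# 5439 is the default Redshift port mentioned above.
conn = psycopg2.connect(
    host="examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="example-password",
)
with conn.cursor() as cur:
    cur.execute("SELECT current_database(), current_user;")
    print(cur.fetchone())
conn.close()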
Select the Kinesis icon and fill the required details. Click the Next button.
Select the desired Stream on the Stream tab.
On the Fields tab, create unique label names, as required and click the Next button.
On the Charts Tab, enable the charts for data. Customize the settings as required and
then click the Finish button to save the setting.
On the Hardware Configuration section, select m3.xlarge in EC2 instance type field and
leave other settings as default. Click the Next button.
On the Security and Access section, for EC2 key pair, select the pair from the list in EC2
key pair field and leave the other settings as default.
On Bootstrap Actions section, leave the fields as set by default and click the Add button.
Bootstrap actions are scripts that are executed during the setup before Hadoop starts on
every cluster node.
On the Steps section, leave the settings as default and proceed.
Click the Create Cluster button and the Cluster Details page opens. This is where we
should run the Hive script as a cluster step and use the Hue web interface to query the
data.
Step 4 − Run the Hive script using the following steps.
Open the Amazon EMR console and select the desired cluster.
Move to the Steps section and expand it. Then click the Add step button.
The Add Step dialog box opens. Fill the required fields, then click the Add button.
o The Parameters section opens only when the template is selected. Leave the S3
input folder and Shell command to run with their default values. Click the folder
icon next to S3 output folder, and select the buckets.
o In Schedule, leave the values as default.
o In Pipeline Configuration, leave the logging as enabled. Click the folder icon
under S3 location for logs and select the buckets.
o In Security/Access, leave IAM roles values as default.
o Click the Activate button.
Step 4 − After S3 location verification is completed, Schema section opens. Fill the
fields as per requirement and proceed to the next step.
Step 5 − In Target section, reselect the variables selected in Schema section and
proceed to the next step.
Step 6 − Leave the values as default in Row ID section and proceed to the Review
section. Verify the details and click the Continue button.
Following are some screenshots of Machine Learning services.
Data Set Created by Machine Learning
Summary Made by Machine Learning
Amazon CloudSearch
Amazon Simple Queue Services (SQS)
Amazon Simple Notification Services (SNS)
Amazon Simple Email Services (SES)
Amazon SWF
In this chapter, we will discuss Amazon SWF.
Amazon Simple Workflow Service (SWF) is a task based API that makes it easy
to coordinate work across distributed application components. It provides a
programming model and infrastructure for coordinating distributed components and
maintaining their execution state in a reliable way. Using Amazon SWF, we can
focus on building the aspects of the application that differentiate it.
A workflow is a set of activities that carry out some objective, including logic that
coordinates the activities to achieve the desired output.
The workflow history consists of a complete and consistent record of each event that
has occurred since the workflow execution started. It is maintained by SWF.
Step 3 − Run a Sample Workflow window opens. Click the Get Started button.
Step 4 − In the Create Domain section, click the Create a new Domain radio button
and then click the Continue button.
Step 5 − In Registration section, read the instructions then click the Continue
button.
Step 6 − In the Deployment section, choose the desired option and click the
Continue button.
Step 7 − In the Run an Execution section, choose the desired option and click the
Run this Execution button.
Finally, SWF will be created and will be available in the list.
Step 3 − Select the desired option and choose the Region from the top right side of
the navigation bar.
Step 4 − Fill in the required details and proceed to the next step to configure an
account. Follow the instructions. Finally, the mailbox will look as shown in the
following screenshot.
Features of Amazon WorkMail
Secure − Amazon WorkMail automatically encrypts all data with encryption
keys using the AWS Key Management Service.
Managed − Amazon WorkMail offers complete control over email, and there is no
need to worry about installing software or maintaining and managing hardware.
Amazon WorkMail handles all these needs automatically.
Accessibility − Amazon WorkMail supports Microsoft Outlook on both Windows
and Mac OS X. Hence, users can use the existing email client without any
additional requirements.
Availability − Users can synchronize emails, contacts and calendars with iOS,
Android, Windows Phone, etc. using the Microsoft Exchange ActiveSync protocol
anywhere.
Cost-efficient − Amazon WorkMail charges $4 per user per month with up to 50 GB of
storage.