CSM Lab Manual Final
AIM:
To create a new Amazon Web Services (AWS) account.
PROCEDURE:
What is AWS ?
Amazon Web Services (AWS) is a subsidiary of Amazon providing on-demand cloud computing
platforms and APIs (Application Programming Interfaces) to individuals, companies, and
governments, on a metered pay-as-you-go basis. AWS offers a wide range of services that can be
categorized into computing power, storage, networking, databases, machine learning, analytics,
security, IoT (Internet of Things), and more. Here's a detailed breakdown of some of the key
components and services within AWS:
1. Compute Services:
- Amazon EC2 (Elastic Compute Cloud): Virtual servers in the cloud for running applications.
- AWS Lambda: Serverless computing service where you can run code without provisioning or
managing servers.
- Amazon ECS (Elastic Container Service): Docker container management service for running,
stopping, and managing Docker containers on a cluster.
- AWS Batch: Batch computing service for running batch computing workloads on the cloud.
2. Storage Services:
- Amazon S3 (Simple Storage Service): Object storage service for storing and retrieving any
amount of data from anywhere on the web.
- Amazon EBS (Elastic Block Store): Block storage service for EC2 instances, offering persistent
block-level storage volumes for use with Amazon EC2 instances.
- Amazon Glacier: Low-cost storage service for data archiving and long-term backup.
3. Database Services:
- Amazon RDS (Relational Database Service): Managed relational database service that supports
various database engines like MySQL, PostgreSQL, SQL Server, and others.
- Amazon DynamoDB: Fully managed NoSQL database service providing fast and predictable
performance with seamless scalability.
- Amazon Redshift: Fully managed data warehousing service for analyzing large datasets using
SQL.
4. Networking Services:
- Amazon VPC (Virtual Private Cloud): Service that lets you provision a logically isolated section
of the AWS cloud where you can launch AWS resources in a virtual network.
- Amazon Route 53: Scalable DNS (Domain Name System) web service designed to route end-
user requests to internet applications.
5. Analytics Services:
- Amazon Athena: Interactive query service that makes it easy to analyse data in Amazon S3 using
standard SQL.
- Amazon EMR (Elastic MapReduce): Managed big data framework for processing and analyzing
large data sets using popular distributed processing frameworks such as Apache Hadoop, Apache
Spark, and Presto.
AWS's global infrastructure spans multiple geographic regions, allowing users to deploy their
applications and services close to their end-users for improved performance and reliability.
Additionally, AWS provides a wide range of tools and resources for developers, administrators, and
businesses to manage, monitor, and optimize their AWS environments effectively.
2) Enter a valid email address and then verify it.
3) Enter your payment information.
4) Make a transfer of about $1 to verify the payment information.
5) Don't worry, it will be refunded after verification.
6) Enjoy one year of Free Tier access!
RESULT:
In this experiment we have successfully created a new Amazon Web Services account.
Ex.No: 2 Create a cost-model for a web application using
Date : various services and do cost-benefit analysis
AIM:
To create a Lambda function and then find the best cost-benefit configuration using the AWS
Lambda Power Tuning tool.
PROCEDURE:
Let's create a cost model for a typical web application using various AWS services. For this
example, let's consider a simple web application consisting of the following components:
1. Compute: We'll use Amazon EC2 instances to host the web application.
2. Storage: We'll store static files (like images, CSS, and JavaScript) in Amazon S3.
3. Database: We'll use Amazon RDS for a managed relational database.
4. Networking: We'll use Amazon Route 53 for DNS and Amazon CloudFront for content delivery
network (CDN) to improve performance.
5. Monitoring: We'll use Amazon CloudWatch for monitoring and logging.
Let's consider a small-scale setup for a startup that expects moderate traffic. Here's a breakdown of the
estimated monthly costs for each service:
1. Amazon EC2 (Compute):
• Instance Type: t3.micro
• Number of Instances: 2 (for redundancy)
• Estimated Cost: $0.0116 per hour * 24 hours * 30 days * 2 instances = $16.70
2. Amazon S3 (Storage):
• Estimated Cost: Assuming 100 GB of storage and 100 GB of data transfer out per month:
• Storage: $0.023 per GB * 100 GB = $2.30
• Data Transfer Out: $0.09 per GB * 100 GB = $9.00
• Total: $2.30 (Storage) + $9.00 (Data Transfer Out) = $11.30
3. Amazon RDS (Database):
• Database Engine: MySQL
• Instance Type: db.t2.micro
• Storage: 20 GB
• Estimated Cost: $0.017 per hour * 24 hours * 30 days = $12.24 (Instance); $0.115 per GB *
20 GB = $2.30 (Storage)
• Total: $12.24 (Instance) + $2.30 (Storage) = $14.54
4. Amazon Route 53 (Networking):
• Estimated Cost: $0.50 per hosted zone * 1 hosted zone = $0.50
5. Amazon CloudFront (CDN):
• Estimated Cost: Varies based on usage (number of requests, data transfer out). Let's
assume $0.085 per GB for data transfer out.
6. Amazon CloudWatch (Monitoring):
• Estimated Cost: Free tier available for basic monitoring. Additional charges may apply for
custom metrics and alarms.
Total Estimated Monthly Cost:
• EC2: $16.70
• S3: $11.30
• RDS: $14.54
• Route 53: $0.50
• CloudFront: Variable (based on usage)
• CloudWatch: Free tier
Total Fixed Cost: $16.70 + $11.30 + $14.54 + $0.50 = $43.04
Variable Cost: CloudFront usage
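As a sanity check, the arithmetic above can be reproduced in a few lines of Python. The rates are the assumed prices from the breakdown above, not live AWS pricing:

```python
# Rough monthly cost check for the cost model above.
HOURS_PER_MONTH = 24 * 30

ec2 = 0.0116 * HOURS_PER_MONTH * 2          # 2 x t3.micro instances
s3 = 0.023 * 100 + 0.09 * 100               # 100 GB storage + 100 GB transfer out
rds = 0.017 * HOURS_PER_MONTH + 0.115 * 20  # db.t2.micro + 20 GB storage
route53 = 0.50                              # 1 hosted zone

total = ec2 + s3 + rds + route53
print(round(ec2, 2), round(s3, 2), round(rds, 2), round(total, 2))
```

CloudFront and CloudWatch are excluded because their costs are variable or free-tier, as noted above.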
Cost-Benefit Analysis:
• Benefits:
• Scalability: AWS services allow for easy scalability, enabling the application to handle
increased traffic as the startup grows.
• Managed Services: AWS provides managed services like RDS, reducing the operational
overhead for database management.
• Global Reach: AWS has data centers worldwide, ensuring low-latency access for users
across the globe.
• Cost-Effective: Pay-as-you-go pricing model helps startups manage costs effectively,
paying only for the resources they consume.
• Considerations:
• Monitoring: While basic monitoring is included in the free tier, additional charges may
apply for advanced monitoring features.
• Data Transfer Costs: Costs associated with data transfer out from services like S3 and
CloudFront can vary based on usage.
• Instance Types: Depending on the application's resource requirements, different instance
types may be more cost-effective.
3) Click on lambda and then click on create function
import json

def lambda_handler(event, context):
    # Parse the JSON request body and dispatch on the "action" field
    body = json.loads(event.get('body') or '{}')
    action = body.get('action')
    if action == "GREET":
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from Lambda!')
        }
    return {
        'statusCode': 400,
        'body': json.dumps('invalid action')
    }
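Before wiring the function up to Power Tuning, the handler can be smoke-tested locally. This sketch repeats the handler logic for self-containment and feeds it a fake API-Gateway-style event (the event shape is an assumption):

```python
import json

def lambda_handler(event, context):
    # Same logic as the function above: parse the JSON body and dispatch on "action".
    body = json.loads(event.get('body') or '{}')
    if body.get('action') == "GREET":
        return {'statusCode': 200, 'body': json.dumps('Hello from Lambda!')}
    return {'statusCode': 400, 'body': json.dumps('invalid action')}

# Local smoke test with fake events, no AWS account needed.
ok = lambda_handler({'body': '{"action":"GREET"}'}, None)
bad = lambda_handler({'body': '{"action":"WAVE"}'}, None)
print(ok['statusCode'], bad['statusCode'])
```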
{
  "lambdaARN": "your-lambda-function-arn",
  "powerValues": [128, 256, 512, 1024, 1536, 2048, 3008],
  "num": 10,
  "payload": {"body": "{\"action\":\"GREET\"}"},
  "parallelInvocation": true,
  "strategy": "cost"
}
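The tuning state machine can also be started programmatically instead of from the console. A hedged boto3 sketch follows; the Lambda ARN and state machine ARN are placeholders you copy from your own account, and the commented-out call requires configured AWS credentials:

```python
import json

# Power Tuning execution input, matching the config above.
# Note the nested "payload" body must itself be a JSON-encoded string.
config = {
    "lambdaARN": "your-lambda-function-arn",
    "powerValues": [128, 256, 512, 1024, 1536, 2048, 3008],
    "num": 10,
    "payload": {"body": json.dumps({"action": "GREET"})},
    "parallelInvocation": True,
    "strategy": "cost",
}
execution_input = json.dumps(config)
print(execution_input)

# With credentials configured, the state machine could be started like this
# (hypothetical ARN; uncomment to actually run):
# import boto3
# sfn = boto3.client("stepfunctions")
# sfn.start_execution(
#     stateMachineArn="arn:aws:states:...:powerTuningStateMachine",
#     input=execution_input)
```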
15) Click on start execution again.
16) Wait for a while and then check out “Execution Input and Output”
17) Copy and paste the URL from visualization key in the output window.
18) Cost Benefit analysis graph is displayed for the user to find out the best memory size to
reduce cost and execution time.
RESULT:
Thus, a basic web application service was added and deployed as a serverless function and its cost
benefit is analyzed through free and open-source tool, “aws power tuning”. Different services like
Lambda, Step Functions, IAM are used to do this experiment.
Ex.No: 3 Create Usage Alerts of Cloud Resources
Date :
AIM:
To create usage alerts for cloud resources using Amazon CloudWatch.
PROCEDURE:
Creating usage alerts for resources on AWS is essential for several reasons:
1. Cost Control: AWS operates on a pay-as-you-go model, where you are billed for the resources you
consume. By setting up usage alerts, you can monitor resource consumption and costs in real-time.
Alerts can help you identify sudden spikes in usage or unexpected increases in costs, allowing you
to take proactive measures to control expenses and avoid billing surprises.
2. Resource Optimization: Usage alerts enable you to monitor the health and performance of your
AWS resources. By defining thresholds for metrics such as CPU utilization, memory usage, or
network traffic, you can identify instances of underutilized or overutilized resources. Optimizing
resource usage helps ensure efficient allocation of resources, improves application performance,
and reduces operational costs.
3. Performance Monitoring: Monitoring resource metrics allows you to track the performance of
your applications and infrastructure components. Usage alerts can notify you of performance
degradation or service disruptions, enabling you to investigate and address issues promptly. Timely
detection of performance problems helps maintain service availability, reliability, and user
satisfaction.
4. Capacity Planning: Usage alerts provide insights into resource utilization patterns over time. By
analyzing historical usage data and trends, you can forecast future resource requirements and plan
capacity accordingly. This proactive approach to capacity planning helps you avoid resource
shortages, scale resources preemptively to meet demand, and optimize infrastructure
investments.
5. Security and Compliance: Monitoring resource usage can help detect anomalous behavior or
security incidents, such as unauthorized access attempts or unusual data transfer patterns. Usage
alerts can trigger notifications for security events, allowing you to respond quickly and mitigate
potential threats. Additionally, compliance requirements often mandate monitoring and reporting
of resource usage, making usage alerts essential for maintaining regulatory compliance.
6. Operational Efficiency: Automated usage alerts streamline operational processes by providing
timely notifications of critical events or performance deviations. Rather than manually monitoring
resource metrics, alerts allow you to proactively manage your AWS environment, prioritize tasks,
and focus efforts on areas that require attention. This proactive approach enhances operational
efficiency, reduces downtime, and improves overall system reliability.
8) Change to these config values.
9) Click on Next, and then under "Actions":
10) Create a new SNS (Simple Notification Service) topic and then enter the email address where you
want to receive the alert. After that, subscribe and confirm by clicking the link in the confirmation
email to finally enable this alert.
11) Accept defaults for all other sections
12) Voila! Successfully created an alarm that reports errors!
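The same alarm can be sketched with boto3 instead of console clicks. The function name and SNS topic ARN below are illustrative placeholders, not values from this experiment, and the commented-out API call requires configured AWS credentials:

```python
# The console steps above, sketched as the request boto3's CloudWatch client
# would send. Assumptions: a Lambda function named "my-function" and the SNS
# topic ARN you created in step 10 (placeholder kept truncated here).
alarm = {
    "AlarmName": "lambda-error-alarm",
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}],
    "Statistic": "Sum",
    "Period": 300,                # evaluate over 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1,               # alert on the first error
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:...:my-alert-topic"],  # placeholder ARN
}
print(alarm["AlarmName"])

# With credentials configured:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```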
RESULT:
From this experiment, we have learnt how to set alarms for a specific resource.
Ex.No: 4 Create Billing Alerts for Your Cloud Organization
Date :
AIM:
To create billing alerts for a cloud organization using AWS Budgets.
PROCEDURE:
Let's delve into the importance of creating billing alerts for a cloud organization on AWS in more detail:
1. Cost Management: AWS services operate on a pay-as-you-go model, where organizations are billed
based on their actual resource usage. Without proper monitoring, it's easy for costs to escalate
rapidly, especially in dynamic cloud environments. By setting up billing alerts, organizations can
closely monitor their AWS spending in real-time. This allows them to promptly identify any
unexpected increases in costs and take necessary actions to mitigate them.
2. Budget Compliance: Many organizations establish budgets or spending limits for their AWS usage to
ensure financial discipline and cost control. Billing alerts play a crucial role in budget compliance by
providing timely notifications when spending approaches or exceeds predefined thresholds. This
proactive approach enables organizations to stay within budgetary constraints and avoid
overspending.
3. Cost Accountability: Billing alerts help promote cost accountability within the organization. By
notifying relevant stakeholders, such as finance managers, IT administrators, or department heads,
about deviations from budgeted spending, billing alerts foster transparency and accountability in
cost management practices. This encourages responsible resource usage and helps prevent cost
overruns.
4. Resource Optimization: Monitoring AWS spending through billing alerts enables organizations to
identify opportunities for resource optimization and cost savings. For example, alerts may reveal
instances of underutilized resources, idle instances, or unnecessary spending on unused services.
Armed with this information, organizations can optimize resource allocation, right-size instances,
and implement cost-saving measures to maximize efficiency and reduce waste.
5. Proactive Cost Control: Billing alerts empower organizations to take proactive measures to control
costs and prevent budget overruns. Instead of reacting to cost overages after they occur,
organizations can address potential issues in real-time. For instance, if an alert indicates a sudden
surge in spending, organizations can investigate the root cause, optimize configurations, or
implement cost-saving measures to bring spending back under control promptly.
6. Forecasting and Planning: Billing alerts provide valuable insights into spending trends and patterns
over time. By analyzing historical spending data and alert notifications, organizations can make more
accurate forecasts and develop strategic plans for future resource allocation and budgeting. This
proactive approach enhances financial forecasting, improves budget planning, and enables
organizations to allocate resources more effectively.
7. Compliance and Governance: Billing alerts help organizations ensure compliance with internal
policies, regulatory requirements, and governance standards related to financial management. By
monitoring spending in real-time and enforcing budgetary controls, organizations can demonstrate
compliance with audit and regulatory requirements. This not only mitigates financial risks but also
enhances trust and confidence among stakeholders
4. On the side panel, click on Budgets.
Existing budget plans are displayed. If you don't have one, don't worry; click on Create Budget.
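For reference, the same budget can be sketched with boto3's Budgets API. The limit, account ID and email below are illustrative placeholders, and the commented-out call requires configured AWS credentials:

```python
# A monthly cost budget with an 80% alert threshold, sketched as the request
# boto3's "budgets" client would send. All concrete values are placeholders.
budget = {
    "BudgetName": "monthly-cost-budget",
    "BudgetLimit": {"Amount": "10.0", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}
notification = {
    "NotificationType": "ACTUAL",
    "ComparisonOperator": "GREATER_THAN",
    "Threshold": 80.0,            # notify at 80% of the limit
    "ThresholdType": "PERCENTAGE",
}
print(budget["BudgetName"])

# With credentials configured:
# import boto3
# boto3.client("budgets").create_budget(
#     AccountId="123456789012",  # placeholder account ID
#     Budget=budget,
#     NotificationsWithSubscribers=[{
#         "Notification": notification,
#         "Subscribers": [{"SubscriptionType": "EMAIL",
#                          "Address": "you@example.com"}],
#     }])
```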
RESULT:
From this experiment, we have learnt how to set billing alerts for Amazon Web Services, which will notify
us if we exceed our usage limits.
Ex.No: 5 Compare Cloud cost for a simple web application across AWS,
Azure and GCP and suggest the best one
Date :
AIM:
To compare Cloud cost for a simple web application across AWS, Azure and GCP and
suggest the best one.
PROCEDURE:
Azure:
Azure is a cloud computing platform and set of services provided by Microsoft. It offers a wide range
of cloud-based solutions for computing, storage, networking, databases, machine learning, artificial
intelligence, analytics, and more. Azure enables businesses to build, deploy, and manage applications
and services through Microsoft's global network of data centers.
Key components and services of Azure include:
1. Compute Services: Azure Virtual Machines (VMs), Azure Kubernetes Service (AKS), Azure
Functions (serverless compute), and more.
2. Storage Services: Azure Blob Storage (for object storage), Azure File Storage (for file shares),
Azure Disk Storage (for block storage), and Azure Data Lake Storage (for big data analytics).
3. Networking Services: Azure Virtual Network (for creating isolated networks), Azure Load
Balancer (for distributing incoming network traffic), Azure VPN Gateway (for secure
connections), and Azure ExpressRoute (for dedicated private connections to Azure).
4. Database Services: Azure SQL Database (relational database as a service), Azure Cosmos DB
(globally distributed NoSQL database), Azure Database for MySQL, PostgreSQL, and more.
5. AI and Machine Learning Services: Azure Machine Learning (for building, training, and
deploying machine learning models), Azure Cognitive Services (pre-built AI capabilities like
computer vision, natural language processing, and speech recognition).
6. Analytics Services: Azure Synapse Analytics (formerly SQL Data Warehouse), Azure
HDInsight (for big data analytics), Azure Databricks (for collaborative Apache Spark-based
analytics).
7. IoT (Internet of Things) Services: Azure IoT Hub (for bi-directional communication with IoT
devices), Azure IoT Central (for IoT application management), Azure Sphere (for securing IoT
devices).
8. Security and Identity Services: Azure Active Directory (for identity and access management),
Azure Security Center (for threat protection and security management), Azure Key Vault (for
securely storing and managing cryptographic keys and secrets).
9. Development and DevOps Tools: Azure DevOps (for planning, tracking, and collaborating on
software development), Azure App Service (for building and hosting web apps), Azure DevTest
Labs (for creating development and testing environments).
Azure provides a comprehensive and flexible cloud platform suitable for startups, enterprises, and
developers looking to innovate, scale, and transform their businesses digitally. With a global presence,
robust security features, and extensive set of services, Azure competes closely with other major cloud
providers like AWS and Google Cloud Platform.
GCP:
Google Cloud Platform (GCP) is a suite of cloud computing services offered by Google. It provides a
wide range of infrastructure and platform services for building, deploying, and managing
applications and services in the cloud. GCP offers computing power, storage options, networking
capabilities, databases, machine learning, data analytics, and more, all hosted in Google's global
network of data centers.
1. Compute Services: Google Compute Engine (virtual machines), Google Kubernetes Engine
(managed Kubernetes service), Google App Engine (platform as a service for building and deploying
applications), Cloud Functions (serverless compute platform).
2. Storage Services: Google Cloud Storage (for object storage), Cloud Filestore (managed file
storage), Cloud SQL (fully managed relational database service), Cloud Bigtable (NoSQL wide-
column database).
3. Networking Services: Virtual Private Cloud (VPC) for creating isolated networks, Cloud Load
Balancing for distributing incoming network traffic, Cloud CDN (Content Delivery Network) for
delivering content to users with low latency.
4. Database Services: Cloud Spanner (horizontally scalable, globally distributed relational database),
Cloud Firestore (NoSQL document database), Cloud Memorystore (fully managed in-memory data
store), and more.
5. Machine Learning and AI Services: Google Cloud AI Platform (for building, training, and
deploying machine learning models), TensorFlow Enterprise (enterprise-grade machine learning
platform), Cloud Vision API, Cloud Speech-to-Text API, Cloud Natural Language API, and others.
6. Analytics Services: BigQuery (fully managed data warehouse for analytics), Dataflow (stream and
batch processing service), Dataproc (managed Spark and Hadoop service), and Looker (business
intelligence and analytics platform).
7. IoT (Internet of Things) Services: Cloud IoT Core (for securely connecting and managing IoT
devices), Cloud IoT Edge (for running IoT applications at the edge), and others.
8. Security and Identity Services: Cloud Identity and Access Management (IAM) for managing
access control and permissions, Cloud Identity-Aware Proxy (IAP) for controlling access to
applications running on Google Cloud.
9. Development and DevOps Tools: Cloud Build (continuous integration and continuous delivery
platform), Cloud Source Repositories (Git version control), Stackdriver (monitoring, logging, and
diagnostics suite), and more.
Google Cloud Platform is known for its performance, scalability, and innovation, leveraging
Google's expertise in data centers, networking, and software engineering. It competes with other
major cloud providers like AWS and Azure, offering unique features and services that cater to a wide
range of industries and use cases.
AWS:
Amazon Web Services (AWS) is a subsidiary of Amazon providing on-demand cloud computing
platforms and APIs (Application Programming Interfaces) to individuals, companies, and
governments, on a metered pay-as-you-go basis. AWS offers a wide range of services that can be
categorized into computing power, storage, networking, databases, machine learning, analytics,
security, IoT (Internet of Things), and more. Here's a detailed breakdown of some of the key
components and services within AWS:
1. Compute Services:
- Amazon EC2 (Elastic Compute Cloud): Virtual servers in the cloud for running applications.
- AWS Lambda: Serverless computing service where you can run code without provisioning or
managing servers.
- Amazon ECS (Elastic Container Service): Docker container management service for running,
stopping, and managing Docker containers on a cluster.
- AWS Batch: Batch computing service for running batch computing workloads on the cloud.
2. Storage Services:
- Amazon S3 (Simple Storage Service): Object storage service for storing and retrieving any amount
of data from anywhere on the web.
- Amazon EBS (Elastic Block Store): Block storage service for EC2 instances, offering persistent
block-level storage volumes for use with Amazon EC2 instances.
- Amazon Glacier: Low-cost storage service for data archiving and long-term backup.
3. Database Services:
- Amazon RDS (Relational Database Service): Managed relational database service that supports
various database engines like MySQL, PostgreSQL, SQL Server, and others.
- Amazon DynamoDB: Fully managed NoSQL database service providing fast and predictable
performance with seamless scalability.
- Amazon Redshift: Fully managed data warehousing service for analyzing large datasets using SQL.
4. Networking Services:
- Amazon VPC (Virtual Private Cloud): Service that lets you provision a logically isolated section
of the AWS cloud where you can launch AWS resources in a virtual network.
- Amazon Route 53: Scalable DNS (Domain Name System) web service designed to route end-user
requests to internet applications.
5. Analytics Services:
- Amazon Athena: Interactive query service that makes it easy to analyse data in Amazon S3 using
standard SQL.
- Amazon EMR (Elastic MapReduce): Managed big data framework for processing and analyzing
large data sets using popular distributed processing frameworks such as Apache Hadoop, Apache
Spark, and Presto.
AWS's global infrastructure spans multiple geographic regions, allowing users to deploy their
applications and services close to their end-users for improved performance and reliability.
Additionally, AWS provides a wide range of tools and resources for developers, administrators, and
businesses to manage, monitor, and optimize their AWS environments effectively.
We will compare `serverless compute` pricing from different cloud service providers (Amazon Web
Services, Azure and Google Cloud Platform), since it is the epitome of cloud flexibility, with no
overhead of server maintenance.
We will take Wikipedia's API load and try to replicate its average requests per month through each
service provider's `serverless compute`. The API requests seem to average around 30000 per second;
over a month (about 2.628e+6 seconds), that is 30000 * 2.628e+6 = 78,840,000,000 requests. A really
big number!
Common metrics for all providers:
• Runtime memory: 128 MB
• Runtime: 50 ms
• Number of requests: 78,840,000,000
• Service written in: Node.js
For the comparison to be fair, we will exclude any free tier offer.
A GB-second is simply the number of seconds your function runs, multiplied by the amount of
memory (in GB) allocated to it.
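Using these metrics and the published on-demand Lambda rates (assumed here as $0.0000166667 per GB-second and $0.20 per million requests), the AWS total reported below can be reproduced by hand:

```python
requests = 30000 * 2628000   # ~30000 req/s for one month of ~2.628e6 seconds

# 50 ms per request at 128 MB, converted to GB-seconds
gb_seconds = requests * 50 * 128 / (1000 * 1024)

compute_cost = gb_seconds * 0.0000166667      # assumed $ per GB-second
request_cost = requests / 1000000 * 0.20      # assumed $ per million requests
total = compute_cost + request_cost
print(requests, int(gb_seconds), int(total))
```

The total lands at about $23,980 per month, matching the AWS Pricing Calculator figure quoted below.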
AWS:
To find out how much money it would take to have Wikipedia as a serverless lambda we can use a free tool
provided by Amazon Web Services called the AWS Pricing Calculator.
A total of $23,980 USD is required to operate such a site through AWS.
Azure:
To find out how much money it would take to have Wikipedia as a serverless function we can use a free tool
provided by Azure called Azure Pricing Calculator.
A total of $23,645 USD is required to operate such a site through Azure.
GCP:
To find out how much money it would take to have Wikipedia as a serverless function we can use a free tool
provided by GCP called the Google Cloud Platform Pricing Calculator.
A total of $40,648 USD is required to operate such a site through Google Cloud Platform.
RESULT:
Through this experiment, we have understood how serverless pricing works and how much money it
would take for a normal performance heavy web application. For such applications, serverless
computation must be avoided and actual servers with load balancers should always be preferred.
If there is a blank check with money to burn, then go with Azure, as it is the cheapest of the three.
Ex.No: 6 Install Google App Engine and create web
Date : applications using python/java
AIM:
To install Google App Engine, and create a hello world app and other simple web applications
using Python/Java.
PROCEDURE:
Step 3: Download the Windows installer – the simplest thing is to download it to your
Desktop or another folder that you remember.
Step 4: Double-click on the GoogleApplicationEngine installer.
Step 5: Click through the installation wizard, and it should install the App Engine.
If you do not have Python 2.5, it will install Python 2.5 as well.
Step 6: Once the install is complete you can discard the downloaded installer.
Now you need to create a simple application. We could use the “+”
option to have the launcher make us an application – but instead we
will do it by hand to get a better sense of what is going on.
Then make a sub-folder within apps called "ae-01-trivial" – the path to this folder would be:
C:\Documents and Settings\csev\Desktop\apps\ae-01-trivial
Using a text editor such as JEdit (www.jedit.org), create a file called app.yaml in the
ae-01-trivial folder with the following contents:
application: ae-01-trivial
version: 1
runtime: python
api_version: 1
handlers:
- url: /.*
  script: index.py
Note: Please do not copy and paste these lines into your text editor
– you might end up with strange characters – simply type them into
your editor.
Then create a file in the ae-01-trivial folder called index.py with three lines in it:
print 'Content-Type: text/plain'
print ''
print 'Hello there Chuck'
Once you have selected your application, press Run. After a few
moments your application will start and the launcher will show a
little green icon next to your application. Then press Browse to open
a browser pointing at your application, which is running at
http://localhost:8080/
Paste http://localhost:8080 into your browser and you should see
your application as follows:
RESULT:
Thus installed the Google App Engine and created a hello world app successfully.
Ex.No: 7 USE GAE LAUNCHER TO LAUNCH THE WEB
Date : APPLICATIONS.
AIM:
To use the Google App Engine Launcher to launch web applications.
PROCEDURE:
In this exercise, we are going to create a GAE-based Python web project (hello world) using
Eclipse.
Python 2.7
Eclipse 3.7 + PyDev plugin
Google App Engine SDK for Python 1.6.4
P.S. Assume Python 2.7 and Eclipse 3.7 are already installed.
Step:1. Install PyDev plugin for Eclipse
Figure 1 – In the Eclipse menu, "Help –> Install New Software.." and put the above URL. Select
the "PyDev for Eclipse" option, follow the steps, and restart Eclipse once completed.
Step 2. Verify PyDev
After Eclipse is restarted, make sure PyDev’s interpreter is pointed to your “python.exe“.
Figure 2 – Eclipse -> Windows –> Preferences, make sure “Interpreter – Python” is
configured properly
Figure 4.1 – Eclipse menu, File -> New -> Other… , PyDev folder, choose “PyDev Google App
Engine Project“.
Figure 4.2 – Type the project name; if the interpreter is not configured yet (in step 2), you can do it
now. And select this option – "Create 'src' folder and add it to PYTHONPATH".
Figure 4.3 – Click “Browse” button and point it to the Google App Engine installed directory (in
step 3).
Figure 4.4 – Name your application id in GAE, type anything, you can change it later
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello, webapp World!')

application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()
File : app.yaml – GAE needs this file to run and deploy your Python project; it's quite self-explanatory:

application: mkyong-python
version: 1
runtime: python
api_version: 1
handlers:
- url: /.*
  script: helloworld.py
Figure 5.1 – In Main tab -> Main module, manually type the directory path of “dev_appserver.py“.
Figure 5.2 – In Arguments tab -> Program arguments, put “${project_loc}/src“.
Figure 5.4 – Done.
Review "app.yaml" again; this web app will be deployed to GAE with application
ID "mkyong-python".
File : app.yaml
handlers:
- url: /.*
script: helloworld.py
Figure 5.2 – In Arguments tab -> Program arguments, put “update ${project_loc}/src“.
Figure 5.3 – During deploying process, you need to type your GAE email and password for
authentication
RESULT:
Thus a hello world web application has been launched using GAE.
Ex.No: 8 Simulate a cloud scenario using CloudSim and run a
Date : scheduling algorithm that is not present in CloudSim
AIM:
To Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not present in CloudSim
PROCEDURE:
What is Cloudsim?
CloudSim is a simulation toolkit that supports the modeling and simulation of the core functionality of a cloud, like
job/task queues, processing of events, creation of cloud entities (datacenters, datacenter brokers, etc.), communication
between different entities, implementation of broker policies, etc. This toolkit allows:
• Support for modeling and simulation of large-scale computing environments, such as federated cloud data centers and
virtualized server hosts, with customizable policies for provisioning host resources to virtual machines and energy-
aware computational resources.
• It is a self-contained platform for modeling cloud service brokers, provisioning, and allocation policies.
• It supports the simulation of network connections among simulated system elements.
• Support for simulation of a federated cloud environment that inter-networks resources from both private and public
domains.
• Availability of a virtualization engine that aids in the creation and management of multiple independent and co-hosted
virtual services on a data center node.
• Flexibility to switch between space-shared and time-shared allocation of processing cores to virtualized services.
CloudSim is written in Java. The knowledge you need to use CloudSim is basic Java programming and some basics
about cloud computing. Knowledge of programming IDEs such as Eclipse or NetBeans is also helpful. It is a library
and, hence, CloudSim does not have to be installed. Normally, you can unpack the downloaded package in any
directory, add it to the Java classpath and it is ready to be used. Please verify whether Java is available on your system.
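CloudSim itself is written in Java, but the core idea of this experiment, plugging in a scheduling policy that CloudSim does not ship with, can be sketched language-independently. Below is an illustrative shortest-job-first (SJF) mapping of cloudlets to VMs in Python; the function and all numbers are hypothetical, not CloudSim API code:

```python
# Illustrative only: a shortest-job-first (SJF) mapping of cloudlets to VMs,
# the kind of custom broker policy the experiment asks you to implement.
# Cloudlet lengths are in MI (million instructions), VM speeds in MIPS.
def sjf_schedule(cloudlet_lengths, vm_mips):
    finish_at = [0.0] * len(vm_mips)   # when each VM becomes free
    assignment = {}
    # SJF: dispatch the shortest cloudlets first...
    for cid in sorted(range(len(cloudlet_lengths)),
                      key=lambda c: cloudlet_lengths[c]):
        # ...to whichever VM would finish them earliest.
        vm = min(range(len(vm_mips)),
                 key=lambda v: finish_at[v] + cloudlet_lengths[cid] / vm_mips[v])
        finish_at[vm] += cloudlet_lengths[cid] / vm_mips[vm]
        assignment[cid] = vm
    return assignment, max(finish_at)  # cloudlet->VM mapping and makespan

# Two cloudlets of length 250000 MI (as in the program below) plus a short one,
# on two VMs of 250 and 500 MIPS (illustrative speeds).
mapping, makespan = sjf_schedule([250000, 250000, 100000], [250, 500])
print(mapping, makespan)
```

In CloudSim proper, the same logic would live in a custom DatacenterBroker that binds cloudlets to VMs before the simulation starts.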
broker.submitVmList(vmlist)
11. Create a cloudlet with length, file size, output size, and utilisation model:
broker.submitCloudletList(cloudletList)
13. Start the simulation:
CloudSim.startSimulation()
Program:
package org.cloudbus.cloudsim.examples;

import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
/**
 * A simple example showing how to create a datacenter with one host and run two cloudlets on it.
 */
public static void main(String[] args) {
Log.printLine("Starting CloudSimExample2...");
try {
//Datacenters are the resource providers in CloudSim. We need at least one of them to run a
//CloudSim simulation
@SuppressWarnings("unused")
//VM description
int vmid = 0;
Vm vm1 = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());
vmid++;
Vm vm2 = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());
//add the VMs to the vmList
vmlist.add(vm1);
vmlist.add(vm2);

//submit vmlist to the broker
broker.submitVmList(vmlist);
//Cloudlet properties
int id = 0;
pesNumber=1;
long length = 250000;
long fileSize = 300;
long outputSize = 300;
cloudlet1.setUserId(brokerId);
id++;
cloudlet2.setUserId(brokerId);
CloudSim.stopSimulation();
printCloudletList(newList);
Log.printLine("CloudSimExample 2 finished!");
catch (Exception e) {
e.printStackTrace();
// our machine
//4. Create Host with its id and list of PEs and add them to the list of machines
int hostId=0;
hostList.add(
new Host(
hostId,
new RamProvisionerSimple(ram),
new BwProvisionerSimple(bw),
storage,
peList,
new VmSchedulerTimeShared(peList)
// 5. Create a DatacenterCharacteristics object that stores the
double costPerMem = 0.05; // the cost of using memory in this resource
double costPerStorage = 0.001; // the cost of using storage in this resource
double costPerBw = 0.0; // the cost of using bw in this resource

LinkedList<Storage> storageList = new LinkedList<Storage>(); //we are not adding SAN devices by now
try {
storageList, 0);
} catch (Exception e) {
e.printStackTrace();
return datacenter;
//We strongly encourage users to develop their own broker policies, to submit vms and cloudlets
//according to the specific rules of the simulated scenario
private static DatacenterBroker createBroker(){
} catch (Exception e) {
e.printStackTrace();
return null;
}
return broker;
/**
 * Prints the Cloudlet objects.
 */
private static void printCloudletList(List<Cloudlet> list) {
int size = list.size();
Cloudlet cloudlet;
"Data center ID" + indent + " VM ID" + indent + "Time" + indent + "Start Time" + indent +
"Finish Time");
cloudlet = list.get(i);
OUTPUT:
Starting CloudSimExample2...
Initialising...
Starting CloudSim version 3.0
Datacenter_0 is starting...
Broker is starting...
Entities started.
0.0: Broker: Cloud Resource List received with 1 resource(s)
0.0: Broker: Trying to Create VM #0 in Datacenter_0
0.0: Broker: Trying to Create VM #1 in Datacenter_0
0.1: Broker: VM #0 has been created in Datacenter #2, Host #0
0.1: Broker: VM #1 has been created in Datacenter #2, Host #0
0.1: Broker: Sending cloudlet 0 to VM #0
0.1: Broker: Sending cloudlet 1 to VM #1
1000.1: Broker: Cloudlet 0 received
1000.1: Broker: Cloudlet 1 received
1000.1: Broker: All Cloudlets executed. Finishing...
1000.1: Broker: Destroying VM #0
1000.1: Broker: Destroying VM #1
Broker is shutting down...
Simulation: No more future events
CloudInformationService: Notify all CloudSim entities for shutting down.
Datacenter_0 is shutting down...
Broker is shutting down...
Simulation completed.
Simulation completed.
RESULT:
Thus, a cloud scenario was simulated using CloudSim and the scheduling of cloudlets on virtual machines was executed and analyzed successfully.