EX. No.: 1 Install VirtualBox/VMware Workstation with different flavours of Linux or Windows OS on top of Windows
Date:
Aim:
Procedure:
Downloading and installing VMware
Step 1: Download the latest version of VMware Workstation from the official VMware site
for Windows or Linux. Here we are downloading VMware Workstation 16 Player.
Step 2: Install VMware by launching the downloaded installer.
Step 3: Browse and change the installation path if needed, check the boxes for the necessary
features, and then click Next.
Step 4: Edit default settings that can improve your user experience and then click Next.
Step 5: Create shortcuts for VMware Workstation 16 Player on the desktop, in the Start menu, or
both if needed, and click Next.
Step 6: Now you are ready to install VMware Workstation 16 Player; click Install.
Step 7: The VMware Workstation 16 Player Setup Wizard is now complete; if you have a
license, click License, otherwise click Finish.
Downloading Ubuntu
Step 8: Now we have to download a guest OS to run on top of the host OS. Here we are
downloading Ubuntu from the official site; Ubuntu 20.04 LTS is used.
Step 9: Open VMware Workstation 16 Player and create a new virtual machine by clicking on
the first icon.
Step 10: Now, to install the guest OS on VMware, select "Installer disc image file", browse to the
path where the Ubuntu ISO was downloaded, and then click Next.
Step 11: Fill in the install information by adding the full name, username and password, and then
click Next.
Step 12: Name the virtual machine, browse to the location where it should be stored, and then
click Next.
Step 13: Now specify the disk capacity (the default maximum disk size is 20 GB), choose whether
to split the virtual disk into multiple files or store it as a single file, and click Next.
Step 14: Now you are ready to create the virtual machine; customize the hardware if needed and
click Finish.
Step 15: Installing Ubuntu on VMware
Step 16: Log in by entering the password that you registered.
Step 17: Thus, we have installed VMware Workstation with different flavours of Linux on
top of Windows.
Result:
EX. No.: 2 Install a C compiler in the virtual machine and execute a simple program
Date:
Aim:
Procedure:
Step 1: Open a terminal in Ubuntu and install the C compiler using the command $ sudo apt
install gcc
Step 2: Wait for the installation to complete, then open an editor with the gedit command and
type a simple C program (a sketch is given after this procedure).
Step 3: Return to the terminal, compile the C program with the command gcc filename.c, and
run it with the command ./a.out
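A minimal C program of the kind Step 2 refers to is shown below (saved here as hello.c; the
filename is only an example):
/* hello.c - a simple program to verify the gcc installation */
#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");
    return 0;
}
It can then be compiled and run exactly as in Step 3:
$ gcc hello.c
$ ./a.out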
Result:
EX. No.: 3 Installation of the Google App Engine
Date:
Aim:
Procedure:
Step 1: To install the Google App Engine, go to Google Cloud and download the Cloud SDK installer
for the appropriate OS.
Step 2: Launch the installer and follow the prompts. Select the necessary options and click
Next.
Step 3: Read the terms and conditions of the Google Cloud Platform completely and
click I Agree to proceed further.
Step 4: Select the install type, either Single User or All Users, and then click Next.
Step 5: Change the destination folder path where you want to install the setup
and click Next to continue.
Step 6: Check the components you want to install and uncheck the components you
don't want. Then click Install to start the installation.
Step 7: After successful installation click next to proceed.
Step 8: Finally, check the boxes you need and then click Finish to close the setup (a quick
verification from the terminal is sketched after this procedure).
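Once the setup closes, the installation can be verified from a terminal with, for example:
$ gcloud --version      # prints the installed Cloud SDK components and their versions
$ gcloud init           # initializes the SDK, logs in to a Google account and sets a default project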
Result:
EX. No.: 4 Use GAE launcher to launch the web application
Date:
Aim:
Procedure:
Step 1: Create a Google Cloud account and create a new Python project. Clone the Python
samples repository using git.
$ git clone \
> https://github.com/GoogleCloudPlatform/python-docs-samples
Step 2: Navigate to hello_world inside python-docs-samples
$ cd python-docs-samples/appengine/standard_python3/hello_world
Step 3: View the default hello world app's configuration and create a virtual environment for it.
$ cat app.yaml
$ virtualenv --python python3 \
> ~/envs/hello_world
Step 4: Activate the virtual environment
$ source \
> ~/envs/hello_world/bin/activate
Step 5: Install the sample's requirements and run the hello world program locally (a sketch of its
main.py is given after this procedure)
$ pip install -r requirements.txt
$ python main.py
Step 6: Create the app in gcloud and deploy it
$ gcloud app create
$ gcloud app deploy app.yaml \
> --project sample2-326109
Step 7: Navigate to port 8080 by clicking Preview on port 8080
Step 8: On port 8080, we can view the deployed app.
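For reference, the hello_world sample deployed above is a minimal Flask application; its main.py
looks roughly like the following (a paraphrased sketch, not a verbatim copy of the sample), and
app.yaml simply declares the Python runtime:
# main.py - minimal Flask app served by App Engine (sketch of the sample)
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # App Engine routes incoming HTTP requests to this handler
    return 'Hello World!'

if __name__ == '__main__':
    # Used only when running locally with `python main.py`;
    # App Engine starts the app with its own web server.
    app.run(host='127.0.0.1', port=8080, debug=True)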
Result:
EX. No.: 5 Simulate a cloud scenario using CloudSim and run a scheduling algorithm
Date:
Aim:
Procedure:
Step 1: Download the CloudSim package from Google Code
Step 2: Open NetBeans IDE
Step 3: Click File -> New Project
Step 4: Create a Java Application
Step 5: Right-click Libraries, click Add JAR, and select the CloudSim JAR
Step 6: Type in the code (a sketch is given after this procedure)
Step 7: Save and run the file
Step 8: Follow the same steps above for the next program and run it
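A sketch of the kind of code Step 6 refers to is given below, written against the CloudSim 3.x API:
one datacenter with a single host, a broker, two VMs, and four cloudlets scheduled by CloudSim's
time-shared cloudlet scheduler. All names and parameter values (MIPS, RAM, cloudlet lengths) are
illustrative assumptions, not prescribed by this exercise.

import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class CloudSimScheduling {
    public static void main(String[] args) throws Exception {
        // Initialize the simulation: 1 cloud user, no event tracing
        CloudSim.init(1, Calendar.getInstance(), false);

        // Datacenter with one host: 1 PE of 1000 MIPS, 2 GB RAM
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));
        List<Host> hostList = new ArrayList<Host>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

        // Broker that submits VMs and cloudlets on behalf of the user
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        int brokerId = broker.getId();

        // Two VMs; the cloudlets on each VM are scheduled time-shared
        List<Vm> vmList = new ArrayList<Vm>();
        for (int i = 0; i < 2; i++) {
            vmList.add(new Vm(i, brokerId, 500, 1, 512, 1000, 10000,
                    "Xen", new CloudletSchedulerTimeShared()));
        }
        broker.submitVmList(vmList);

        // Four cloudlets of 40000 MI each
        List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
        UtilizationModel full = new UtilizationModelFull();
        for (int i = 0; i < 4; i++) {
            Cloudlet c = new Cloudlet(i, 40000, 1, 300, 300, full, full, full);
            c.setUserId(brokerId);
            cloudletList.add(c);
        }
        broker.submitCloudletList(cloudletList);

        // Run the simulation and report when each cloudlet finished
        CloudSim.startSimulation();
        CloudSim.stopSimulation();
        List<Cloudlet> finished = broker.getCloudletReceivedList();
        for (Cloudlet c : finished) {
            Log.printLine("Cloudlet " + c.getCloudletId() + " on VM " + c.getVmId()
                    + ": " + c.getCloudletStatusString()
                    + ", finish time = " + c.getFinishTime());
        }
    }
}

Running the file prints each cloudlet's status and finish time, which is how the effect of the
scheduling policy can be observed and compared between runs.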
Result:
EX. No.: 6 Transfer the files from one virtual machine to another virtual machine
Date:
Aim:
Procedure:
Step 1: Open the VirtualBox workstation.
Step 2: Create two virtual machines and name them.
Step 3: Define the required specifications for the virtual machines.
Step 4: Start the virtual machines.
Step 5: Create a Folder in virtual machine 1 and name it.
Step 6: Right click on the folder and go to properties.
Step 7: Select the sharing option and enable network file sharing.
Step 8: Click Yes on the dialog box that pops up.
Step 9: Open the command prompt in virtual machine 1.
Step 10: Type "ipconfig" to view the IP address of virtual machine 1.
Step 11: In virtual machine 2, open the Run dialog and type the IP address of virtual machine 1
(in the form \\<IP address>).
Step 12: Now we can access the shared folder of virtual machine 1 under the network places of
virtual machine 2.
Step 13: After opening the shared folder, we can copy the required files to virtual machine 2.
Result:
EX. No.: 7 Install Hadoop single node cluster and run a simple application
Date:
Aim:
Procedure:
Part 1: Installation of Hadoop single node cluster: Open the terminal in Ubuntu:
Step 1: Update the packages
$ sudo apt-get update
Step 2: Installing java
$ sudo apt-get install default-jdk
Step 3: Create a dedicated user for Hadoop to perform its operations
*First create a group:
$ sudo addgroup hadoop
*Next create a user:
$ sudo adduser --ingroup hadoop abinash
Step 4: Adding user to sudo list
$ sudo adduser abinash sudo
Step 5: Next install the ssh package (secure shell login)
$ sudo apt-get install openssh-server
Step 6: Now log in as the new (dedicated) user
$ su - abinash
Step 7: Next step is key generation
$ ssh-keygen -t rsa -P ""
Step 8: Add key to a file
$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
Step 9: Check that ssh is installed properly by logging in, and after that exit from it
$ ssh localhost
$ exit
Step 10: Download using this link: https://downloads.apache.org/hadoop/common/
*Click on hadoop-2.9.2/
*Then click on hadoop-2.9.2.tar.gz
*You can find the "hadoop-2.9.2.tar.gz" in the downloads.
Step 11: Now extract the tar file.
*Change the directory
$ cd /home/abinash201002/
$ cd Desktop/
$ ls //you can find the file "hadoop-2.9.2.tar.gz"
*Now extract it: $ sudo tar -xvzf hadoop-2.9.2.tar.gz
Step 12: Move the extracted folder to /usr/local
$ sudo mv hadoop-2.9.2 /usr/local/hadoop
Step 13: Change the ownership of the hadoop folder to the dedicated user
$ sudo chown -R abinash /usr/local/hadoop
Step 14: Edit the .bashrc file
$ sudo nano ~/.bashrc
*Now add the following lines at the end (to set the environment variables)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
*After pasting, press Ctrl+O (to write) and then press Enter. Then press Ctrl+X (to
exit).
*Now source it to apply the changes:
$ source ~/.bashrc
Step 15: Specify the Java path to Hadoop
*Open "hadoop-env.sh" file for this:
$ sudo nano /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Now comment out the line "export JAVA_HOME=${JAVA_HOME}" and add the
following line (the path must match the JDK installed in Step 2 and the JAVA_HOME
exported in .bashrc above):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
Step 16: Now open the file "core-site.xml"
$ sudo nano /usr/local/hadoop/etc/hadoop/core-site.xml
*Now add the following code inside <configuration> </configuration>:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
Step 17: Now open the file "hdfs-site.xml"
$ sudo nano /usr/local/hadoop/etc/hadoop/hdfs-site.xml
*Now add the following code inside <configuration> </configuration>:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
Step 18: Now open the file "yarn-site.xml"
$ sudo nano /usr/local/hadoop/etc/hadoop/yarn-site.xml
*Now add the following code inside <configuration> </configuration>:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
Step 19: Create "mapred-site.xml" from the template "mapred-site.xml.template" using:
$ sudo cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template
/usr/local/hadoop/etc/hadoop/mapred-site.xml
*Now open the file "mapred-site.xml"
$ sudo nano /usr/local/hadoop/etc/hadoop/mapred-site.xml
*Now add the following code inside <configuration> </configuration>:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Step 20: Create the directories for the NameNode and DataNode
$ sudo mkdir -p /usr/local/hadoop_tmp
$ sudo mkdir -p /usr/local/hadoop_tmp/hdfs/namenode
$ sudo mkdir -p /usr/local/hadoop_tmp/hdfs/datanode
*Now assign the ownership to the dedicated user for this directory using:
$ sudo chown -R abinash /usr/local/hadoop_tmp
Step 21: Now we have to format the HDFS namenode
$ cd
$ hdfs namenode -format
Step 22: Next, we have to start DFS and YARN
$ start-dfs.sh
$ start-yarn.sh
*To check whether Hadoop is running correctly, list the Java processes:
$ jps
The NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager daemons should be
listed. Now we have created a single node cluster.
Now we have to run a "Word count" program on HDFS by passing an input.
Step 23:
$ cd /home/abinash201002/Desktop/
*Now create a directory called data:
$ sudo mkdir data
$ cd data/
*Now open a file "sample.txt" and type the input:
$ sudo nano sample.txt
Step 24:
$ cd
$ cd /usr/local/hadoop
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /ypm
Step 25: Copy sample.txt to HDFS
$ bin/hdfs dfs -put /home/abinash201002/Desktop/data /user/input
Step 26: Run the word count program, which is included by default in the examples JAR we are
using.
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar
wordcount /user/input output
Step 27: Now run the following command to see the output
$ bin/hdfs dfs -cat output/*
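*As an illustration (assuming sample.txt contained the single line "hello hadoop hello cloud"),
the output would look like:
cloud 1
hadoop 1
hello 2
i.e. each distinct word followed by the number of times it occurs in the input.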
For the GUI interface (Hadoop must be running), open a browser and go to localhost:50070
After completing the work, stop Hadoop
$ stop-all.sh
*To check whether it has stopped
$ jps
Result:
EX. No.: 8 Creating and Executing First Container Using Docker
Date:
Aim:
Procedure:
Step 1: Open the Docker Hub image library by searching for Docker Hub in a browser.
Step 2: Click Explore in docker hub.
Step 3: Search for the hello-world Docker official image.
Step 4: Now run the hello-world image from Docker in the terminal (the commands are
sketched after this procedure).
Step 5: Run docker ps -a to list the Docker containers, including ones that have already exited.
Step 6: Thus, the first container is created and executed successfully using the Docker
hello-world image.
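The commands for Steps 4 and 5 are roughly as follows (a minimal sketch; hello-world is the
official Docker sample image):
$ docker run hello-world     # pulls the image on first use, then runs a container that prints a greeting
$ docker ps -a               # lists all containers, including the one that has already exited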
Result:
EX. No.: 9 Run a Container from Docker
Date:
Aim:
Procedure:
Step 1: Open the Docker Hub image library by searching for Docker Hub in a browser.
Step 2: Sign up and create a new Docker ID with a username and password.
Step 3: Now, after creating the Docker ID, sign in with it.
Step 4: Click the Repositories tab in Docker Hub.
Step 5: Open a terminal and run docker images.
Step 6: Log in with docker login.
Step 7: Use docker tag to tag the image from Step 5 as buildpackdemo with its version.
Step 8: Use the docker push command to push the latest version of buildpackdemo
from the local Docker images.
Step 9: Now the image is pushed to Docker Hub. Open Docker Hub and check for
buildpackdemo; the pushed contents can now be found there.
Step 10: Now copy the docker pull command.
Step 11: Type the command in terminal.
Step 12: Run the docker rmi buildpackdemo command to remove the local image, then pull it again.
Step 13: Run the pulled image with docker run.
Step 14: Open localhost:8081 in a browser to view the running application.
Step 15: Thus, the container is run from Docker Hub (a sketch of the full command
sequence is given after this procedure).
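A sketch of the full command sequence for Steps 5-13 is given below (buildpackdemo is the image
name used in this exercise; <dockerid> is a placeholder for your own Docker ID and 1.0 an example tag):
$ docker images                                            # list the local images
$ docker login                                             # log in with your Docker ID
$ docker tag buildpackdemo <dockerid>/buildpackdemo:1.0    # tag the local image for your repository
$ docker push <dockerid>/buildpackdemo:1.0                 # push the tagged image to Docker Hub
$ docker rmi <dockerid>/buildpackdemo:1.0                  # remove the local copy
$ docker pull <dockerid>/buildpackdemo:1.0                 # pull it back from Docker Hub
$ docker run <dockerid>/buildpackdemo:1.0                  # run a container from the pulled image (add -p to publish a port if it serves a web app)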
Result: