Grid and Cloud Computing
Aim:
To develop a new Web service for Calculator applications.
Procedure:
When you start the Globus Toolkit container, a number of services start up. The service
for this task will be a simple Math service that can perform basic arithmetic for a client.
It is possible to start with this interface and create the necessary WSDL file using the standard
Web service tool called Java2WSDL. However, the WSDL file for GT4 has to include details of
resource properties that are not given explicitly in the interface above. Hence, we will provide
the WSDL file.
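The service interface itself is not reproduced in this extract. As a rough sketch (method names inferred from the client code later in this exercise, not the tutorial's exact source), it amounts to:

import java.rmi.RemoteException;

// Sketch of the Math service interface implied by the client code below;
// the real GT4 version also involves resource-property machinery.
public interface Math {
    public void add(int a) throws RemoteException;
    public void subtract(int a) throws RemoteException;
    public int getValueRP() throws RemoteException;
}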
WSDL service interface description file -- The WSDL service interface description
file is provided within the GT4services folder at:
GT4Services\schema\examples\MathService_instance\Math.wsdl
This file, and discussion of its contents, can be found in Appendix A. Later on we will need to
modify this file, but first we will use the existing contents that describe the Math service above.
Service code in Java -- For this assignment, both the code for service operations and for the
resource properties are put in the same class for convenience. More complex services and
resources would be defined in separate classes. The Java code for the service and its resource
properties is located within the GT4services folder at:
GT4services\org\globus\examples\services\core\first\impl\MathService.java.
Deployment Descriptor -- The deployment descriptor gives several different important sets of
information about the service once it is deployed. It is located within the GT4services folder at:
GT4services\org\globus\examples\services\core\first\deploy-server.wsdd.
import org.apache.axis.message.addressing.Address;
import org.apache.axis.message.addressing.EndpointReferenceType;
// Stub classes are generated during the build; package names follow the GT4 tutorial layout.
import org.globus.examples.stubs.MathService_instance.MathPortType;
import org.globus.examples.stubs.MathService_instance.GetValueRP;
import org.globus.examples.stubs.MathService_instance.service.MathServiceAddressingLocator;

public class Client {
    public static void main(String[] args) {
        MathServiceAddressingLocator locator = new MathServiceAddressingLocator();
        try {
            String serviceURI = args[0];
            // Create endpoint reference to service
            EndpointReferenceType endpoint = new EndpointReferenceType();
            endpoint.setAddress(new Address(serviceURI));
            MathPortType math;
            // Get PortType
            math = locator.getMathPortTypePort(endpoint);
            // Perform an addition
            math.add(10);
            // Perform another addition
            math.add(5);
            // Access value
            System.out.println("Current value: "
                + math.getValueRP(new GetValueRP()));
            // Perform a subtraction
            math.subtract(5);
            // Access value
            System.out.println("Current value: "
                + math.getValueRP(new GetValueRP()));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
When the client is run from the command line, you pass it one argument: the URL that specifies
where the service resides. The client creates the endpoint reference and incorporates this URL
as the address. The endpoint reference is then used with the getMathPortTypePort method of a
MathServiceAddressingLocator object to obtain a reference to the Math interface (portType).
Then we can apply the methods available in the service as though they were local methods.
Notice that the calls to the service (the add and subtract method calls) must be in a
“try {} catch(){}” block because a “RemoteException” may be thrown. The code for the
“MathServiceAddressingLocator” is created during the build process. (Thus you don’t have
to write it!)
globus-undeploy-gar org_globus_examples_services_core_first
which should produce the following output:
Undeploying gar...
Deleting /.
.
.
Undeploy successful
6 Adding Functionality to the Math Service
In this final task, you are asked to modify the Math service and associated files so the service
supports the multiplication operation. To do this task, you will need to modify:
Service code (MathService.java)
WSDL file (Math.wsdl)
The exact changes that are necessary are not given; you are to work them out yourself. You will
need to fully understand the contents of the service code and WSDL files and then modify them
accordingly. Appendix A gives an explanation of the important parts of these files. Keep all file
names the same and simply redeploy the service afterwards. You will also need to add code to
the client (Client.java) to test the modified service's multiplication, as sketched below.
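Purely as a hedged illustration (not the definitive solution), a multiply operation in
MathService.java could mirror the existing add and subtract operations. The MultiplyResponse
type and the value/lastOp resource-property fields below are assumptions; mirror whatever names
your MathService.java and Math.wsdl actually use:

// Hypothetical multiply operation, modeled on the service's add/subtract.
public MultiplyResponse multiply(int a) throws RemoteException {
    value.setValue(value.getValue() * a); // scale the Value resource property
    lastOp.setValue("MULTIPLICATION");    // record the last operation performed
    return new MultiplyResponse();
}

The client would then exercise it with a call such as math.multiply(2); alongside the existing
add and subtract calls.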
Output:
C:\Globus>java -classpath
build\classes\org\globus\examples\services\core\first\impl\:%CLASSPATH%
org.globus.examples.clients.MathService_instance.Client
http://localhost:8080/wsrf/services/examples/core/first/MathService
which should give the output:
Current value: 16
Current value: 11
Result:
Aim:
To develop a new OGSA-compliant web service.
Procedure:
Writing and deploying a WSRF Web service is easier than you might think; you just have to
follow five simple steps.
To run this program, you must at a minimum have the following prerequisite software
installed:
a. Download the latest Axis2 runtime from the above link and extract it. Now we point
Eclipse WTP to the downloaded Axis2 runtime. Open Window -> Preferences -> Web
Services -> Axis2 Emitter.
Select the Axis2 Runtime tab and point to the correct Axis2 runtime location.
Alternatively at the Axis2 Preference tab, you can set the default setting that will come
up on the Web Services Creation wizards. For the moment we will accept the default
settings.
b. Click OK.
c. Next we need to create a project with the support of Axis2 features. Open File -> New -
> Other... -> Web -> Dynamic Web Project
Click next
d. Select the name Axis2WSTest as the Dynamic Web project name (you can specify any
name you prefer), and select the configured Tomcat runtime as the target runtime.
Click next.
e. Select the Axis2 Web service facet
Click Finish.
g. Import the wtp/Converter.java class into Axis2WSTest/src (be sure to preserve the
package).
h. Select Converter.java, open File -> New -> Other... -> Web Services -> Web Service
Click next.
i. The Web service wizard would be brought up with Web service type set to Bottom up
Java bean Web Service with the service implementation automatically filled in. Move
the service scale to Start service.
j. Click on the Web Service runtime link to select the Axis2 runtime.
Click OK.
k. Ensure that the correct server and service project are selected as displayed below.
Click next.
l. This page is the services.xml selection page. If you have a custom services.xml, you can
include it by clicking the Browse button. For the moment, just leave it at the default.
Click next.
m. This page is the Start Server page. It will be displayed if the server has not been started.
Click on the Start Server button. This will start the server runtime.
Click next.
n. This page is the Web services publication page; accept the defaults.
Click Finish.
o. Now, select the Axis2WSTest dynamic Web project, right-click and select Run -> Run
As -> Run on Server to bring up the Axis2 servlet.
Click Next.
p. Make sure you have the Axis2WSTest dynamic Web project on the right-hand side
under the Configured project.
Click Finish.
q. This will deploy the Axis2 server webapp on the configured servlet container and will
display the Axis2 home page. Note that the servlet container will start up according to
the Server configuration files on your workspace.
r. Click on the Services link to view the available services. The newly created converter
Web service will be shown there.
s. Click on the Converter Service link to display the wsdl URL of the newly created Web
service. Copy the URL.
t. Now we'll generate the client for the newly created service by referring to the ?wsdl
generated by the Axis2 server. Open File -> New -> Other... -> Web Services -> Web
Service Client
u. Paste the URL that was copied earlier into the service definition field.
v. Click on the Client project hyperlink and enter Axis2WSTestClient as the name of the
client project. Click OK.
Back on the Web Services Client wizard, make sure the Web service runtime is set to
Axis2 and the server is set correctly. Click Next.
The next page is the Client Configuration page. Accept the defaults and click Finish.
The client stubs will be generated into your Dynamic Web project Axis2WSTestClient.
Now we are going to write a Java main program to invoke the client stub. Import the
ConverterClient.java file into the wtp package in the src folder of
Axis2WSTestClient.
Then select the ConverterClient file, right-click and select Run As -> Java Application. Here's
what you get on the server console:
Another way to test and invoke the service is to select Generate test case to test the service
check box on the Axis2 Client Web Service Configuration Page when going through the Web
Service Client wizard.
If that option is selected, the Axis2 emitter will generate JUnit test cases matching the WSDL
we provide to the client. These JUnit test cases will be generated into a newly added source
directory called test in the Axis2WSTestClient project.
The next thing we need to do is supply the test cases with valid inputs as the Web service
method arguments. In this case, let's test ConverterConverterSOAP11Port_httpTest.java
by providing values for Celsius and Fahrenheit for the temperature conversion. As an example,
replace the generated TODO statement in each test method to fill in the data with values as:
testfarenheitToCelsius() -> farenheitToCelsius8.setFarenheit(212);
testStartfarenheitToCelsius() ->farenheitToCelsius8.setFarenheit(212);
testcelsiusToFarenheit() -> celsiusToFarenheit10.setCelsius(100);
testStartcelsiusToFarenheit() -> celsiusToFarenheit10.setCelsius(100);
Here the test cases were generated to test both the synchronous and asynchronous clients; a filled-in example is sketched below.
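For instance, a filled-in synchronous test method might look like the following sketch. The stub
and request class names follow Axis2's code-generation conventions and are assumptions; match
them to what the wizard actually generated in your test source directory:

// Hypothetical filled-in version of a generated test method.
public void testfarenheitToCelsius() throws java.lang.Exception {
    ConverterStub stub = new ConverterStub(); // defaults to the deployed endpoint
    ConverterStub.FarenheitToCelsius farenheitToCelsius8 =
        (ConverterStub.FarenheitToCelsius) getTestObject(ConverterStub.FarenheitToCelsius.class);
    farenheitToCelsius8.setFarenheit(212);    // 212 °F should come back as 100 °C
    assertNotNull(stub.farenheitToCelsius(farenheitToCelsius8));
}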
w. After that, select the testcase, right-click, select Run As -> JUnit Test. You will be able
to run the unit test successfully invoking the Web service.
The Web Service wizard orchestrates the end-to-end generation, assembly, deployment,
installation and execution of the Web service and Web service client. Now that your Web service
is running, there are a few interesting things you can do with this WSDL file. Examples:
You can choose Web Services -> Test with Web Services Explorer to test the service.
You can choose Web Services -> Publish WSDL file to publish the service to a public
UDDI registry.
Output:
Gobal@it~$ java -classpath
build\classes\org\globus\examples\services\core\second\impl\:%CLASSPATH%
org.globus.examples.clients.MathService_instance.Client
http://localhost:8080/wsrf/services/examples/core/second/MathService
Fahrenheit 84.32
Centigrade is 29.06
Centigrade 20
Fahrenheit is 68
RESULT:
Aim:
To develop a Grid Service using Apache Axis.
Procedure:
You will need to download and install the following software:
1. Java 2 SDK v1.4.1, http://java.sun.com/j2se/1.4.1/download.html
2. Apache Tomcat v4.1.24,
http://jakarta.apache.org/builds/jakarta-tomcat-4.0/release/v4.1.24/bin/jakarta-tomcat-4.1.24.exe
3. XML Security v1.0.4,
http://www.apache.org/dist/xml/security/java-library/xml-security-bin-1_0_4.zip
4. Axis v1.1, http://ws.apache.org/axis/dist/1_1/axis-1_1.zip
1. Java 2 SDK
• Run the downloaded executable (j2sdk-1_4_1-windows-i586.exe), which will install the
SDK in C:\j2sdk1.4.1. Set the JAVA_HOME environment variable to point to this
directory as follows:
• Click on START->CONTROL PANEL->SYSTEM
• Click on the Advanced tab
• Click on the Environment Variables button
• Click on the New… button in the user variable section and enter the details
• Add the Java binaries to your PATH variable in the same way by setting a user
variable called PATH with the value “%PATH%;C:\j2sdk1.4.1\bin”
2. Apache Tomcat
3. XML Security
• Download and unzip
http://www.apache.org/dist/xml/security/java-library/xml-security-bin-1_0_4.zip
• Copy xml-sec.jar to C:\axis-1_1\lib\
• Set up your CLASSPATH environment variable to include the following:
C:\axis-1_1\lib\xml-sec.jar;
4. Apache Axis
• Unzip the downloaded Axis archive to C:\ (this will create a directory C:\axis-1_1).
• Extract the file xmlsec.jar from the downloaded security archive to
C:\axis-1_1\webapps\axis\WEB-INF\lib.
• Set up your CLASSPATH environment variable to include the following:
o The current working directory
o All the Axis jar files as found in C:\axis-1_1\lib
o C:\jakarta-tomcat-4.1.24\common\lib\servlet.jar
• Your CLASSPATH should therefore look something like:
C:\axis-1_1\lib\axis.jar;
C:\axis-1_1\lib\axis-ant.jar;
C:\axis-1_1\lib\commons-discovery.jar;
C:\axis-1_1\lib\commons-logging.jar;
C:\axis-1_1\lib\jaxrpc.jar;
C:\axis-1_1\lib\log4j-1.2.8.jar;
C:\axis-1_1\lib\saaj.jar;
C:\axis-1_1\lib\wsdl4j.jar;
C:\axis-1_1\lib\xercesImpl.jar;
C:\axis-1_1\lib\xmlParserAPIs.jar;
C:\jakarta-tomcat-4.1.24\common\lib\servlet.jar;
C:\axis-1_1\lib\xml-sec.jar
• Now tell Tomcat about your Axis web application by creating the file
C:\jakarta-tomcat-4.1.24\webapps\axis.xml with the following content:
<Context path="/axis" docBase="C:\axis-1_1\webapps\axis" debug="0"
privileged="true">
<Logger className="org.apache.catalina.logger.FileLogger" prefix="axis_log."
suffix=".txt" timestamp="false"/>
</Context>
Deploy one of the sample Web Services to test the system and to create the
C:\axis-1_1\webapps\axis\WEB-INF\server-config.wsdd file. From C:\axis-1_1 issue the
command (on one line):
java org.apache.axis.client.AdminClient
-lhttp://localhost:8080/axis/services/AdminService samples/stock/deploy.wsdd
Result:
Aim:
To develop an applications using Java or C/C++ Grid APIs.
Sample Code:
import AgentTeamwork.Ateam.*;
import MPJ.*;

public class UserProgAteam extends AteamProg {
    // The original listing omits this field; an Ateam handle is assumed to be
    // supplied by the framework (e.g., through MPJ.Init below).
    private static Ateam ateam;
    private int phase;

    public UserProgAteam( Ateam o ) {}
    public UserProgAteam( ) {}
    public UserProgAteam( String[] args ) {
        phase = 0;
    }

    // phase recovery: resume from the last committed snapshot
    private void userRecovery( ) {
        phase = ateam.getSnapshotId( );
    }

    private void compute( ) {
        for ( phase = 0; phase < 10; phase++ ) {
            try {
                Thread.sleep( 1000 );
            } catch ( InterruptedException e ) {
            }
            ateam.takeSnapshot( phase );
            System.out.println( "UserProgAteam at rank "
                + MPJ.COMM_WORLD.Rank( ) + " : took a snapshot " + phase );
        }
    }

    public static void main( String[] args ) {
        System.out.println( "UserProgAteam: got started" );
        MPJ.Init( args, ateam );
        UserProgAteam program = null;
        // Timer timer = new Timer( );
        if ( ateam.isResumed( ) ) {
            // resumed after a crash: restore the registered program object
            program = ( UserProgAteam ) ateam.retrieveLocalVar( "program" );
            program.userRecovery( );
        } else {
            // fresh start: register the program object for future recovery
            program = new UserProgAteam( args );
            ateam.registerLocalVar( "program", program );
        }
        program.compute( );
        MPJ.Finalize( );
    }
}
Aim:
To develop secured applications using the basic security mechanisms available in Globus.
Procedure:
The Globus Toolkit's Authentication and Authorization components provide the de facto
standard for the "core" security software in Grid systems and applications. These software
development kits (SDKs) provide programming libraries, Java classes, and essential tools for a
PKI, certificate-based authentication system with single sign-on and delegation features, in either
Web Services or non-Web Services frameworks. ("Delegation" means that once someone
accesses a remote system, he can give the remote system permission to use his credentials to
access other systems on his behalf.)
GSI's Web services implementation contains the core libraries and tools needed to secure
applications using GSI mechanisms. The Grid is a term commonly used to describe a distributed
computing infrastructure which will allow "coordinated resource sharing and problem solving in
dynamic, multi-institutional virtual organizations". The protocols and middleware to enable this
Grid infrastructure have been developed by a number of initiatives, most notably the Globus
Project.
Web Services are simply applications that interact with each other using Web standards,
such as the HTTP transport protocol and the XML family of standards. In particular, Web
Services use the SOAP messaging standard for communication between service and requestor.
They should be self-describing, self-contained and modular; present a platform- and
implementation-neutral connection layer; and be based on open standards for description,
discovery and invocation.
The Grid Security Infrastructure (GSI) is based on the Generic Security Services API
(GSS-API) and uses an extension to X.509 certificates to provide a mechanism to authenticate
subjects and authorise resources. It allows users to benefit from the ease of use of a single
sign-on mechanism by using delegated credentials and time-limited proxy certificates. GSI is
used as the security infrastructure for the Globus Toolkit.
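To make the GSS-API relationship concrete, the sketch below shows the generic JGSS pattern that
GSI builds on. It uses only the standard org.ietf.jgss classes (Globus supplies its own GSS
provider for GSI), and the peer name is a placeholder:

import org.ietf.jgss.*;

// Generic GSS-API handshake setup; "host@example.org" is a placeholder peer.
public class GssSketch {
    public static void main(String[] args) throws GSSException {
        GSSManager manager = GSSManager.getInstance();
        GSSName peer = manager.createName("host@example.org", GSSName.NT_HOSTBASED_SERVICE);
        GSSContext context = manager.createContext(peer, null, null, GSSContext.DEFAULT_LIFETIME);
        context.requestMutualAuth(true); // authenticate both parties
        context.requestCredDeleg(true);  // request credential delegation, the basis of GSI single sign-on
        byte[] token = context.initSecContext(new byte[0], 0, 0); // first token to send to the peer
        System.out.println("initial token: " + token.length + " bytes");
        context.dispose();
    }
}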
Recently, a new proposal for an Open Grid Services Architecture (OGSA) was announced which
marries the Grid and Web Services to create a new Grid Services model. One problem, which
has not yet been explicitly addressed, is that of security. A possible solution is to use a suitably
secure transport binding, e.g. TLS, and extend it to incorporate appropriate support for proxy
credentials. It would be useful to test out some of the principles of Grid Services using the
currently available frameworks and tools for developing Web Services. Unfortunately, no
standards currently exist for implementing proxy credential support to provide authenticated
communication between web services. A number of XML/Web Services security standards are
currently in development, e.g. XML Digital Signatures, SAML, XKMS, XACML, but the
remainder of this document describes an approach proposed by ANL to use GSI over an SSL
link.
GAP, a generic job submission environment, enables researchers and scientists to execute
their applications on the Grid from a conventional web browser. Both sequential and parallel
jobs can be submitted to the GARUDA Grid through the Portal. It provides a web interface for
viewing the resources, and for submitting and monitoring jobs.
Accessing GAP
Type http://192.168.60.40/GridPortal1.3/ (to access the Portal through GARUDA Network)
or http://203.200.36.236/GridPortal1.3 (to access the Portal through Internet) in the address bar
of the web browser to invoke the Portal. It is preferable to access the Portal through GARUDA
Network, since it is much faster than the Internet.
In order to access the facilities of the Grid Portal such as Job Submission, Job Status tracking,
Storing (Uploading) of Executables and viewing Output/Error data, the user has to log in to the
Portal using the User's Login Form on the Home page of the Portal.
a) New users are required to click Sign up in the User Login Form, which leads them to the home
page of the Indian Grid Certification Authority (IGCA). Click on Request Certificate and acquire
the required user/host certificate(s); details are provided in the IGCA section.
b) Registered users are required to provide a User Id and Password to log in to the Portal and
access the various facilities.
Job Management
Users can submit their jobs, monitor their status and view output files using the Job Management
interfaces. The types of job submission (Basic and Advanced) and job information are covered
in this section.
Basic Job Submission
This interface can be used to submit sequential as well as parallel jobs. The user should provide
the following information:
1. Optional Job Name - the user can provide a suitable (alias) name for the job.
2. Type of Job - the type of job the user wants to execute.
3. Operating System - the OS required for the job.
4. 'Have you reserved the Resources' - an optional parameter containing the Reservation IDs
that can be used for job submission instead of choosing the Operating System/Processor
parameter.
5. No. of processes required for the job - this parameter is only for parallel
applications that require more than one CPU.
6. Corresponding Executables - uploaded from either the local or a remote machine.
7. Input file, if required - the executable and the input file can either be uploaded from the
local machine or selected from the Remote File List, if available on the
Submit Node.
8. STDIN - required when the user wants to provide any inputs to the application at
runtime.
9. Optional Execution Time - here the Execution Time is the anticipated job completion time.
10. Any command line arguments or environment variables, if required.
11. User-specific Output/Error files - needed if the application generates output/error files
other than the standard output/error files; in the case of multiple files, entries should be
separated by commas or a single space.
All fields marked with * are mandatory and should be filled in before submitting a
job. By clicking the Submit button, the portal submits the job to the GridWay Meta Scheduler,
which then schedules the job for execution and returns the Job Id. The Job Id has to be noted
for future reference to this job. In the event of unsuccessful submission, the corresponding
error message is displayed.
Advanced Job Submission
This interface is provided for the user to submit their sequential and parallel jobs. The
difference from Basic Job Submission is that it uses GT4 Web Services components for
submitting jobs to the Grid instead of GridWay as the scheduler.
Job Info
The user can view the status of a job submitted through the Portal, and the job's output file, by
specifying the Job Id. The option to download the Output/Error files is also provided after
the job execution. To cancel any of the queued jobs, the user has to select the job and click
the Cancel Job button, after which an acknowledgment of the cancellation is
provided.
Resources
The GridWay meta-scheduler provides the following information: Node Name, Head Node,
OS, ARCH, Load Average, Status, Configured Process and Available Process. This
information helps the user select a suitable cluster and reserve it in advance for job
submission.
File browser
For the logged-in user, the File Browser lists files, such as the uploaded executables and
Input/Output/Error files, along with their size and last modified information. It also allows
deletion of files.
Accounting
This module provides accounting information for the jobs that are submitted to GARUDA, such
as the number of jobs submitted, and system parameters such as memory usage, virtual memory,
wall time, and CPU time. Data for the last month is displayed by default.
MyProxy
MyProxy allows users to upload their Globus certificates into the MyProxy server, and the same
can be used for initializing the Grid proxy on the Grid. If the certificate has already been
generated for you, but you do not have access to the above-mentioned files, you can download
it from the GridFS machine (from the $HOME/.globus directory) using winscp/scp.
MyProxy Init
By default, the "MyProxy Init" option is enabled for the user. Upload a proxy by entering valid
inputs - user name, Grid-proxy passphrase, user certificate file (usercert.pem), user key file
(userkey.pem) and proxy lifetime (168 hours is the default value).
MyProxyGet
The Grid proxy will be initialized on the Grid head node by providing the inputs - user name,
MyProxy passphrase and lifetime of the certificate.
VOMS Proxy
The Virtual Organization Management System (VOMS) allows users to belong to Virtual
Organizations (VOs), thereby allowing them to utilize resources earmarked for those VOs.
The user can also request a new VO by using the "Request for VO" link. VOMS proxy
initialization with multiple roles is provided to the user by selecting more than one entry in
the Role combo box.
Users are required to adhere to the following directory structure under the application parent
directory: src/, bin/, lib/, include/
1) Login
This method is for logging in to the Portal.
Inputs:
user name - MyProxy user name
password - MyProxy password
life time - indicates how long the proxy is valid
Output:
Proxy string - proxy issued by the MyProxy server
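Rendered as a Java contract, the Login operation above amounts to something like the sketch
below; the interface and method names are hypothetical, and only the parameter list comes from
the inputs/output listed above:

// Hypothetical Java rendering of the portal's Login operation; names are
// illustrative only and may differ from the portal's real API.
public interface GridPortalLogin {
    // userName: MyProxy user name; password: MyProxy password;
    // lifeTime: how long the proxy remains valid.
    // Returns the proxy string issued by the MyProxy server.
    String login(String userName, String password, int lifeTime) throws Exception;
}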
Result:
Ex.No: 6
Date:
Develop a Grid portal, where the user can submit a job and get the result. Implement it with
and without the GRAM concept.
Aim:
To develop a Grid portal, where the user can submit a job and get the result, and to implement
it with and without the GRAM concept.
Procedure:
1. Opening the workflow editor
The editor is a Java Web Start application; download and installation take only a click.
Download proxies
Result:
Aim:
To create and run virtual machines of different configurations, and to check how many virtual
machines can be utilized at a particular time.
Procedure:
Step 1: Check that your CPU supports hardware virtualization.
$ egrep -c '(vmx|svm)' /proc/cpuinfo
Step 2: To see if your processor is 64-bit or not.
$ egrep -c ' lm ' /proc/cpuinfo
Step 3: Now see if your running kernel is 64-bit or not.
$ uname -a
Step 4: To install the KVM, execute the following command.
$ sudo apt-get install qemu-kvm
$ sudo apt-get install libvirt-bin
$ sudo apt-get install ubuntu-vm-builder
$ sudo apt-get install bridge-utils
Step 5: Verify whether the KVM installation has been successful.
$ virsh -c qemu:///system list
Step 6: Installing a GUI for KVM.
$ sudo apt-get install virt-manager
Step 7: Creating a KVM guest machine.
$ virt-manager
Step 9: Then start creating a new virtual machine by clicking the New button. Enter the name
of your virtual machine. Select your installation media type and click Forward.
Step 10: Then you will have to set the amount of RAM and the number of CPUs that will be
available to that virtual machine.
Step 11: Finally, you will get a confirmation screen that shows the details of your virtual
machine. Then click the Finish button.
Step 12: Repeat the same procedure to create multiple virtual machines.
Output:
Result:
Procedure:
1. Log in to the openstack dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Compute tab and click Images category.
4. Click Create Image. The Create An Image dialog box appears.
5. Enter the following values:
6. Click Create Image. The image is queued to be uploaded. It might take some time before the status
changes from Queued to Active.
7. On the Project tab, open the Compute tab and click Volumes category.
8. Click Create Volume. In the dialog box that opens, enter or select the following values.
Volume Type: Leave this field blank.
Size (GB): The size of the volume in gibibytes (GiB).
Availability Zone: Select the Availability Zone from the list. By default, this value is set to
the availability zone given by the cloud provider. For some cases, it could be nova.
9. Click Create Volume.
10. Then select Manage Attachments and select the instance (virtual machine). It might take
some time to attach the volume to the instance.
11. On the Project tab, open the Compute tab, click Instances and select “Launch Instance”.
12. Inside the running instance, verify that the attached volume is visible:
df -h
14. Create some files in that volume and save the file.
16. Delete the Virtual Machine (VM). After that, create a new VM, attach the volume to the
newly created VM, and check whether the previously created files are present in that volume.
Output:
Result:
Aim: To perform a Virtual Machine Migration based on certain conditions from one node to
another.
Live migration is the process of moving a running virtual machine or application between
different physical machines without disconnecting the client or application. Memory, storage,
and network connectivity of the virtual machine are transferred from the original guest machine
to the destination.
Procedure
1. To perform migration in OpenStack, go to System -> Instances
2. On the right end of the virtual machine you wish to migrate, click on the down arrow
3. Click on Live Migrate Instance option
4. It lists out the available nodes (physical servers) to which you can migrate your instance
Result:
Aim: To install a C compiler in the virtual machine and execute a sample program.
Algorithm:
Step 1: Check whether the GCC compiler is installed.
$ dpkg -l | grep gcc
Step 2: If the GCC compiler is not installed in the VM, execute the following command:
$ sudo apt-get install gcc (or) $ sudo apt-get install build-essential
Step 3: Open a Vi Editor
Step 4: Get the no. of rows and columns for first and second matrix.
Step 5: Get the values of x and y matrix using for loop.
Step 6: Find the product of first and second and store the result in multiply matrix.
multiply[i][j]=multiply[i][j]+(first[i][k]*second[k][j]);
Step 7: Display the resultant matrix.
Step 8: Stop the program.
Program:
#include <stdio.h>
void main()
{
    int m, n, p, q, c, d, k, sum = 0;
    int first[10][10], second[10][10], multiply[10][10];
    printf("Enter the number of rows and columns of first matrix\n");
    scanf("%d%d", &m, &n);
    printf("Enter the elements of first matrix\n");
    for (c = 0; c < m; c++)
        for (d = 0; d < n; d++)
            scanf("%d", &first[c][d]);
    printf("Enter the number of rows and columns of second matrix\n");
    scanf("%d%d", &p, &q);
    if (n != p)
        printf("Matrices with entered orders can't be multiplied with each other.\n");
    else {
        printf("Enter the elements of second matrix\n");
        for (c = 0; c < p; c++)
            for (d = 0; d < q; d++)
                scanf("%d", &second[c][d]);
        /* multiply[c][d] = sum over k of first[c][k] * second[k][d] */
        for (c = 0; c < m; c++)
            for (d = 0; d < q; d++) {
                sum = 0;
                for (k = 0; k < p; k++)
                    sum = sum + first[c][k] * second[k][d];
                multiply[c][d] = sum;
            }
        printf("Product of the matrices:\n");
        for (c = 0; c < m; c++) {
            for (d = 0; d < q; d++)
                printf("%d\t", multiply[c][d]);
            printf("\n");
        }
    }
}
Result:
Aim:
To find the procedure to install a storage controller and interact with it.
Procedure:
Step 1: Create Volume
1. Creating a volume
cinder create --display_name ers2 1
2. Attaching the volume to a VM
nova volume-attach INSTANCE_ID VOLUME_ID auto
Output:
Creating a volume
stack:~$ cinder create --display_name ers2 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-06-01T06:58:53.000000 |
| description | None |
| encrypted | False |
| id | 4cd8de9a-997e-4b6d-b2b3-cc1e3f96dfef |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | ers2 |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 6e10bfbc0fea4905b7d88ea84c7da54c |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | None |
| user_id | a18632a00c414220a3d3cfd5dfceddf0 |
| volume_type | lvmdriver-1 |
+--------------------------------+--------------------------------------+
Result:
Procedure
1. Install openssh server
sudo apt-get install openssh-server
10. Open the ‘hadoop-env.sh’ file and add the Java home directory (Java folder path)
gedit hadoop-env.sh
export JAVA_HOME=~/jdk1.8.0_45
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser1/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser1/hdfs/datanode</value>
</property>
</configuration>
12. Format the hadoop namenode
hadoop namenode -format
15. If any node is not created, execute the commands below and repeat steps 12 & 13
hadoop-2.6.0/sbin/stop-all.sh
rm -r hdfs/
15. Open the namenode and datanode. In the browser, type the following port numbers:
localhost:50070
localhost:8088
Output:
NameNode:
DataNode:
Result:
PROCEDURE:
Hadoop Distributed File System (HDFS) is a distributed, scalable file system developed as the
back-end storage for data-intensive Hadoop applications. As such, HDFS is designed to handle
very large files with a "write-once-read-many" access model. As HDFS is not a fully
POSIX-compliant file system, it cannot be directly mounted by the operating system, and file
access with HDFS is done via HDFS shell commands.
However, one can leverage FUSE to write a userland application that exposes HDFS via
a traditional file system interface. fuse-dfs is one such FUSE-based application which allows you
to mount HDFS as if it were a traditional Linux file system. If you would like to mount HDFS on
Linux, you can install fuse-dfs, along with FUSE as follows.
Once fuse-dfs is installed, go ahead and mount HDFS using FUSE as follows.
$ sudo hadoop-fuse-dfs dfs://<name_node_hostname>:<namenode_port> <mount_point>
Once HDFS has been mounted at <mount_point>, you can use most of the traditional filesystem
operations (e.g., cp, rm, cat, mv, mkdir, rmdir, more, scp). However, random write operations
such as rsync, and permission-related operations such as chmod and chown, are not supported in
FUSE-mounted HDFS.
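Because the FUSE mount behaves like an ordinary directory, even plain Java file I/O works
against it. A minimal sketch, assuming HDFS has been mounted at the placeholder mount point
/mnt/hdfs:

import java.io.IOException;
import java.nio.file.*;

// Reads a file from a FUSE-mounted HDFS tree with standard Java I/O;
// /mnt/hdfs and the file path are placeholders for your actual layout.
public class FuseHdfsRead {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("/mnt/hdfs/user/hduser1/sample.txt");
        for (String line : Files.readAllLines(file)) {
            System.out.println(line);
        }
    }
}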
RESULT:
Aim:
To write a program that uses the APIs of Hadoop to interact with it.
Procedure:
1. After creating a one-node Hadoop cluster, create the HDFS input directory:
hdfs dfs -mkdir /Exam1
3. Execute hadoop mapreduce JAR file and specify an input & output directory.
hadoop jar WeatherJob.jar WeatherJob /Exam1 /output
4. Open the namenode and datanode. In the browser, type the following port numbers:
localhost:50070
localhost:8088
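Before the MapReduce program itself, the sketch below shows the HDFS FileSystem API being used
directly; the fs.defaultFS URI is an assumption based on the earlier single-node setup and
should be matched to your core-site.xml:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Writes a file into HDFS and checks it through the Java API.
public class HdfsApiDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000"); // assumed namenode URI
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/Exam1/hello.txt"); // input directory from step 1
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("written through the HDFS Java API");
        }
        System.out.println("exists: " + fs.exists(file)
            + ", size: " + fs.getFileStatus(file).getLen() + " bytes");
        fs.close();
    }
}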
Program:
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WeatherJob {
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private Text weatherKey = new Text();
        private IntWritable temperature = new IntWritable();
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String[] tokens = line.split("\t");
            // emit (station, temperature)
            weatherKey.set(tokens[0]);
            temperature.set(Integer.parseInt(tokens[3]));
            context.write(weatherKey, temperature);
            // emit (station + date, temperature)
            weatherKey.set(tokens[0] + tokens[1]);
            temperature.set(Integer.parseInt(tokens[3]));
            context.write(weatherKey, temperature);
        }
    }
    // Reducer body reconstructed (the original listing omits it); it reports
    // the maximum temperature observed for each key.
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int max = Integer.MIN_VALUE;
            for (IntWritable val : values) max = Math.max(max, val.get());
            context.write(key, new IntWritable(max));
        }
    }
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "WeatherJob");
        job.setJarByClass(WeatherJob.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
DataNode:
Result:
Date:
Aim:
To write a word count program to demonstrate the use of Map and Reduce tasks.
Procedure:
1. After creating a one-node Hadoop cluster, create the HDFS input directory:
hdfs dfs -mkdir /inp1
2. Copy the input file to the created input directory:
hdfs dfs -copyFromLocal hadoop-2.6.0/etc/hadoop/a.txt /inp1
3. If the namenode is running in safe mode, execute the command below:
hadoop dfsadmin -safemode leave
4. Execute hadoop mapreduce JAR file and specify an input & output directory.
hadoop jar hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar
wordcount /inp1 /out1
5. Open the namenode and datanode. In the browser, type the following port numbers:
localhost:50070
localhost:8088
6. Download the file from the output directory and view the wordcount program output.
Program:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.*;
import org.apache.hadoop.mapreduce.lib.output.*;
import org.apache.hadoop.util.*;
// Map/Reduce method bodies and main() reconstructed; the original listing omits them.
public class WordCount extends Configured implements Tool {
    public int run(String[] args) throws Exception {
        Path inputPath = new Path(args[0]);
        Path outputPath = new Path(args[1]);
        Job job = Job.getInstance(getConf());
        FileInputFormat.setInputPaths(job, inputPath);
        FileOutputFormat.setOutputPath(job, outputPath);
        job.setJobName("WordCount");
        job.setJarByClass(WordCount.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        // emit (word, 1) for every token in the input line
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        // sum the counts emitted for each word
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            context.write(key, new IntWritable(sum));
        }
    }
    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new WordCount(), args));
    }
}
NameNode:
DataNode:
Input (a.txt):
aaa aaa aaa aaa
bbb bbb bbb bbb
ccc ccc ccc ccc
Output:
aaa 4
bbb 4
ccc 4
Result: