
IT6713 Grid & Cloud Computing Lab Department of IT 2018-2019

Ex.no: 1 Develop a new Web Service for Calculator


Date:

Aim:
To develop a new Web service for Calculator applications.

Procedure:
When you start the Globus Toolkit container, a number of services start up. The service
for this task will be a simple Math service that can perform basic arithmetic for a client.

The Math service will access a resource with two properties:


1. An integer value that can be operated upon by the service
2. A string value that holds a description of the last operation
The service itself will have three remotely accessible operations that operate upon value:
(a) add, which adds its argument a to the resource property value.
(b) subtract, which subtracts its argument a from the resource property value.
(c) getValueRP, which returns the current value of value.
Usually, the best way to begin any programming task is with an overall description of what
you want the code to do, which in this case is the service interface. The service interface
describes what the service provides in terms of the names of operations, their arguments and
return values. A Java interface for our service is:

public interface Math {


public void add(int a);
public void subtract(int a);
public int getValueRP();
}

It is possible to start with this interface and create the necessary WSDL file using the standard
Web service tool called Java2WSDL. However, the WSDL file for GT 4 has to include details of
resource properties that are not given explicitly in the interface above. Hence, we will provide
the WSDL file.
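For reference only, a WSDL file could be generated from such an interface with Axis's Java2WSDL tool along the lines of the sketch below (the options and the namespace are illustrative assumptions; we will not do this here, precisely because the GT4 WSDL must also declare the resource properties):

java org.apache.axis.wsdl.Java2WSDL -o Math.wsdl -l"http://localhost:8080/wsrf/services/examples/core/first/MathService" -n urn:math Math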

Step 1 Getting the Files


All the required files are provided and come directly from [1]. The MathService source code
files can be found at http://www.gt4book.com
(http://www.gt4book.com/downloads/gt4book-examples.tar.gz)
A Windows zip compressed version can be found at
http://www.cs.uncc.edu/~abw/ITCS4146S07/gt4book-examples.zip. Download and uncompress
the file into a directory called GT4services. Everything is included (the Java source, WSDL and
deployment files, etc.):


WSDL service interface description file -- The WSDL service interface description
file is provided within the GT4services folder at:
GT4Services\schema\examples\MathService_instance\Math.wsdl
This file, and discussion of its contents, can be found in Appendix A. Later on we will need to
modify this file, but first we will use the existing contents that describe the Math service above.
Service code in Java -- For this assignment, both the code for service operations and for the
resource properties are put in the same class for convenience. More complex services and
resources would be defined in separate classes. The Java code for the service and its resource
properties is located within the GT4services folder at:

GT4services\org\globus\examples\services\core\first\impl\MathService.java.
Deployment Descriptor -- The deployment descriptor gives several different important sets of
information about the service once it is deployed. It is located within the GT4services folder at:
GT4services\org\globus\examples\services\core\first\deploy-server.wsdd.

Step 2 – Building the Math Service


It is now necessary to package all the required files into a GAR (Grid Archive) file. The build
tool ant from the Apache Software Foundation is used to achieve this, as shown below:
Generating a GAR file with Ant (from
http://gdp.globus.org/gt4-tutorial/multiplehtml/ch03s04.html)
Ant is similar in concept to the Unix make tool, but it is a Java tool and XML based.
Build scripts are provided by Globus 4 to use the ant build file. The Windows version of the build
script for MathService is the Python file called globus-build-service.py, which is held in the
GT4services directory. The build script takes one argument, the name of the service that you
want to deploy. To keep with the naming convention in [1], this service will be called first.
In the Client Window, run the build script from the GT4services directory with:
globus-build-service.py first
The output should look similar to the following:
Buildfile: build.xml
.
.
.
.
BUILD SUCCESSFUL

Total time: 8 seconds


During the build process, a new directory is created in your GT4Services directory that is named
build. All of your stubs and class files that were generated will be in that directory and its
subdirectories. More importantly, there is a GAR (Grid Archive) file called
org_globus_examples_services_core_first.gar. The GAR file is the package that contains
every file that is needed to successfully deploy your Math Service into the Globus container. The
files contained in the GAR file are the Java class files, WSDL, compiled stubs, and the
deployment descriptor.

Step 3 – Deploying the Math Service


If the container is still running in the Container Window, then stop it using Control-C. To deploy
the Math Service, you will use a tool provided by the Globus Toolkit called globus-deploy-gar.
In the Container Window, issue the command:
globus-deploy-gar org_globus_examples_services_core_first.gar
Successful output of the command is:

The service has now been deployed.


Check that the service is deployed by starting the container from the Container Window:
You should see the service called MathService.

Step 4 – Compiling the Client


A client has already been provided to test the Math Service and is located in the
GT4Services directory at:
GT4Services\org\globus\examples\clients\MathService_instance\Client.java

and contains the following code:


package org.globus.examples.clients.MathService_instance;
import org.apache.axis.message.addressing.Address;
import org.apache.axis.message.addressing.EndpointReferenceType;
import org.globus.examples.stubs.MathService_instance.MathPortType;
import org.globus.examples.stubs.MathService_instance.GetValueRP;
import org.globus.examples.stubs.MathService_instance.service.MathServiceAddressingLocator;
public class Client {
public static void main(String[] args) {
MathServiceAddressingLocator locator =
new MathServiceAddressingLocator();
try {
String serviceURI = args[0];
// Create endpoint reference to service
EndpointReferenceType endpoint = new
EndpointReferenceType();
endpoint.setAddress(new Address(serviceURI));
MathPortType math;
// Get PortType
math = locator.getMathPortTypePort(endpoint);
// Perform an addition
math.add(10);
// Perform another addition
math.add(5);
// Access value
System.out.println("Current value: "
+ math.getValueRP(new GetValueRP()));
// Perform a subtraction
math.subtract(5);
// Access value
System.out.println("Current value: "
+ math.getValueRP(new GetValueRP()));
} catch (Exception e) {
e.printStackTrace();
}
}
}


When the client is run from the command line, you pass it one argument. The argument is the
URL that specifies where the service resides. The client will create the endpoint reference and
incorporate this URL as the address. The endpoint reference is then used with the
getMathPortTypePort method of a MathServiceAddressingLocator object to obtain a
reference to the Math interface (portType). Then, we can apply the methods available in the
service as though they were local methods. Notice that the calls to the service (the add and subtract
method calls) must be in a “try {} catch(){}” block because a “RemoteException” may be
thrown. The code for the “MathServiceAddressingLocator” is created during the build process.
(Thus you don’t have to write it!)

(a) Setting the Classpath


To compile the new client, you will need the JAR files from the Globus toolkit in your
CLASSPATH. Do this by executing the following command in the Client Window:
%GLOBUS_LOCATION%\etc\globus-devel-env.bat
You can verify that this sets your CLASSPATH, by executing the command:
echo %CLASSPATH%
You should see a long list of JAR files.
Running \gt4\etc\globus-devel-env.bat only needs to be done once for each Client Window that
you open. It does not need to be done each time you compile.
(b) Compiling Client
Once your CLASSPATH has been set, then you can compile the Client code by typing in the
following command:
javac -classpath
build\classes\org\globus\examples\services\core\first\impl\;%CLASSPATH%
org\globus\examples\clients\MathService_instance\Client.java

Step 5 – Start the Container for your Service


Restart the Globus container from the Container Window with:
globus-start-container -nosec
if the container is not running.

Step 6 – Run the Client


To start the client from your GT4Services directory, do the following in the Client Window,
which passes the GSH of the service as an argument:
java -classpath
build\classes\org\globus\examples\services\core\first\impl\;%CLASSPATH%
org.globus.examples.clients.MathService_instance.Client
http://localhost:8080/wsrf/services/examples/core/first/MathService
which should give the output:
Current value: 15
Current value: 10
Step 7 – Undeploy the Math Service and Kill a Container
Before we can add functionality to the Math Service (Section 5), we must undeploy the service.
In the Container Window, kill the container with a Control-C. Then to undeploy the service, type
in the following command:


globus-undeploy-gar org_globus_examples_services_core_first
which should result with the following output:
Undeploying gar...
Deleting /.
.
.
Undeploy successful
Adding Functionality to the Math Service
In this final task, you are asked to modify the Math service and associated files so the service
supports the multiplication operation. To do this task, you will need to modify:
Service code (MathService.java)
WSDL file (Math.wsdl)
The exact changes that are necessary are not given. You are to work them out yourself. You will
need to fully understand the contents of the service code and WSDL files and then modify them
accordingly. Appendix A gives an explanation of the important parts of these files. Keep all file
names the same and simply redeploy the service afterwards. You will also need to add code to
the client (Client.java) to test the modified service, including multiplication.

Output:

C:\Globus>java -classpath
build\classes\org\globus\examples\services\core\first\impl\;%CLASSPATH%
org.globus.examples.clients.MathService_instance.Client
http://localhost:8080/wsrf/services/examples/core/first/MathService
which should give the output:
Current value: 16
Current value: 11

Result:


Ex.no:2 Develop new OGSA-compliant Web Service


Date:

Aim:
To develop a new OGSA-compliant web service.

Procedure:

Writing and deploying a WSRF Web Service is easier than you might think. You just have to
follow five simple steps:

1. Define the service's interface. This is done with WSDL


2. Implement the service. This is done with Java.
3. Define the deployment parameters. This is done with WSDD and JNDI
4. Compile everything and generate a GAR file. This is done with Ant
5. Deploy service. This is also done with a GT4 tool

To run this program, you will, as a minimum, need to have installed the following prerequisite
software.
a. Download the latest Axis2 runtime and extract it. Now we point
Eclipse WTP to the downloaded Axis2 runtime. Open Window -> Preferences -> Web
Services -> Axis2 Emitter

Select the Axis2 Runtime tab and point to the correct Axis2 runtime location.
Alternatively at the Axis2 Preference tab, you can set the default setting that will come
up on the Web Services Creation wizards. For the moment we will accept the default
settings.
b. Click OK.
c. Next we need to create a project with the support of Axis2 features. Open File -> New ->
Other... -> Web -> Dynamic Web Project

Click next
d. Select the name Axis2WSTest as the Dynamic Web project name (you can specify any
name you prefer), and select the configured Tomcat runtime as the target runtime.

Click next.
e. Select the Axis2 Web service facet

Click Finish.


f. This will create a dynamic Web project in the workbench

g. Import the wtp/Converter.java class into Axis2WSTest/src (be sure to preserve the
package).

Build the project, if it is not built automatically.
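For orientation, a minimal sketch of what wtp/Converter.java is assumed to contain is given below (the file supplied with the exercise is authoritative; the method names match the generated test cases referred to later):

package wtp;

public class Converter {
    // Convert a Celsius temperature to Fahrenheit
    public float celsiusToFarenheit(float celsius) {
        return (celsius * 9 / 5) + 32;
    }
    // Convert a Fahrenheit temperature to Celsius
    public float farenheitToCelsius(float farenheit) {
        return (farenheit - 32) * 5 / 9;
    }
}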



h. Select Converter.java, open File -> New -> Other... -> Web Services -> Web Service

Click next.

i. The Web service wizard will be brought up with the Web service type set to Bottom up
Java bean Web Service and with the service implementation automatically filled in. Move
the service scale to Start service.


j. Click on the Web Service runtime link to select the Axis2 runtime.

Click OK.

k. Ensure that the correct server and service project are selected as displayed below.

Click next.


l. This page is the services.xml selection page. If you have a custom services.xml, you can
include it by clicking the Browse button. For the moment, just leave it at the default.

Click next.

m. This page is the Start Server page. It will be displayed if the server has not been started.
Click on the Start Server button. This will start the server runtime.

Click next.

n. This page is the Web services publication page; accept the defaults.

Click Finish.

o. Now, select the Axis2WSTest dynamic Web project, right-click and select Run As ->
Run on Server to bring up the Axis2 servlet.

Click Next.


p. Make sure you have the Axis2WSTest dynamic Web project on the right-hand side
under the Configured project.

Click Finish.

q. This will deploy the Axis2 server webapp on the configured servlet container and will
display the Axis2 home page. Note that the servlet container will start up according to
the Server configuration files on your workspace.


r. Click on the Services link to view the available services. The newly created converter
Web service will be shown there.

s. Click on the Converter Service link to display the wsdl URL of the newly created Web
service. Copy the URL.


t. Now we'll generate the client for the newly created service by referring to the ?wsdl
generated by the Axis2 server. Open File -> New -> Other... -> Web Services -> Web
Service Client

u. Paste the URL that was copied earlier into the service definition field.


v. Click on the Client project hyperlink and enter Axis2WSTestClient as the name of the
client project. Click OK.

Back on the Web Services Client wizard, make sure the Web service runtime is set to
Axis2 and the server is set correctly. Click Next.


Next page is the Client Configuration Page. Accept the defaults and click Finish.

The Clients stubs will be generated to your Dynamic Web project Axis2WSTestClient.


Now we are going to write a Java main program to invoke the client stub. Import the
ConverterClient.java file into the workspace, into the wtp package in the src folder of
Axis2WSTestClient.

Then select the ConverterClient file, right-click and select Run As -> Java Application. Here's
what you get on the server console:


Another way to test and invoke the service is to select Generate test case to test the service
check box on the Axis2 Client Web Service Configuration Page when going through the Web
Service Client wizard.

If that option is selected, the Axis2 emitter will generate JUnit testcases matching the WSDL
we provide to the client. These JUnit testcases will be generated into a newly added source
directory of the Axis2WSTestClient project called test.


The next thing we need to do is to fill in the test cases with valid inputs as the Web service
method arguments. In this case, let's test ConverterConverterSOAP11Port_httpTest.java
by providing values for Celsius and Fahrenheit for the temperature conversion. As an example,
replace the generated TODO statement in each test method to fill in the data with values as:
testfarenheitToCelsius() -> farenheitToCelsius8.setFarenheit(212);
testStartfarenheitToCelsius() ->farenheitToCelsius8.setFarenheit(212);
testcelsiusToFarenheit() -> celsiusToFarenheit10.setCelsius(100);
testStartcelsiusToFarenheit() -> celsiusToFarenheit10.setCelsius(100);

Here the testcases were generated to test both the synchronous and asynchronous clients.
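As an illustration only, a filled-in synchronous test method might look like the sketch below (the stub and request bean class names are assumptions about what the Axis2 emitter generates; use the names actually produced in the test source directory):

public void testfarenheitToCelsius() throws java.lang.Exception {
    // Stub class generated by the Axis2 emitter (name assumed)
    wtp.ConverterStub stub = new wtp.ConverterStub();
    // Request bean filled in, replacing the generated TODO line
    wtp.FarenheitToCelsius farenheitToCelsius8 = new wtp.FarenheitToCelsius();
    farenheitToCelsius8.setFarenheit(212);
    // 212 Fahrenheit should come back as 100 Celsius
    assertNotNull(stub.farenheitToCelsius(farenheitToCelsius8));
}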

w. After that, select the testcase, right-click, select Run As -> JUnit Test. You will be able
to run the unit test successfully invoking the Web service.


The Web Service wizard orchestrates the end-to-end generation, assembly, deployment,
installation and execution of the Web service and Web service client. Now that your Web service
is running, there are a few interesting things you can do with this WSDL file. Examples:

• You can choose Web Services -> Test with Web Services Explorer to test the service.
• You can choose Web Services -> Publish WSDL file to publish the service to a public
UDDI registry.

Output:
Gobal@it~$ java -classpath
build\classes\org\globus\examples\services\core\second\impl\;%CLASSPATH%
org.globus.examples.clients.MathService_instance.Client
http://localhost:8080/wsrf/services/examples/core/second/MathService
Fahrenheit 84.32
Centigrade is 29.06
Centigrade 20
Fahrenheit is 68

Result:


Ex.No: 3 Using Apache Axis develop a Grid Service


Date:

Aim:
To develop a Grid Service using Apache Axis.

Procedure:
You will need to download and install the following software:
1. Java 2 SDK v1.4.1, http://java.sun.com/j2se/1.4.1/download.html
2. Apache Tomcat v4.1.24,
http://jakarta.apache.org/builds/jakarta-tomcat-4.0/release/v4.1.24/bin/jakarta-tomcat-4.1.24.exe
3. XML Security v1.0.4,
http://www.apache.org/dist/xml/security/java-library/xml-security-bin-1_0_4.zip
4. Axis v1.1, http://ws.apache.org/axis/dist/1_1/axis-1_1.zip

1. Java 2 SDK
• Run the downloaded executable (j2sdk-1_4_1-windows-i586.exe), which will install the
SDK in C:\j2sdk1.4.1. Set the JAVA_HOME environment variable to point to this
directory as follows:
• Click on START->CONTROL PANEL->SYSTEM
• Click on the Advanced tab
• Click on the Environment Variables button
• Click on the New… button in the user variable section and enter the details
• Add the Java binaries to your PATH variable in the same way by setting a user
variable called PATH with the value “%PATH%;C:\j2sdk1.4.1\bin”

2. Apache Tomcat

• Run the downloaded executable (jakarta-tomcat-4.1.24.exe), and assume the
installation directory is C:\jakarta-tomcat-4.1.24.
• Edit C:\jakarta-tomcat-4.1.24\conf\tomcat-users.xml and create an “admin” and
“manager” role as well as a user with both roles. The contents of the file should be
similar to:
<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
<role rolename="manager"/>
<role rolename="admin"/>
<user username="myuser" password="mypass" roles="admin,manager"/>
</tomcat-users>
• Start Tomcat by running C:\jakarta-tomcat-4.1.24\bin\startup.bat and test it by
browsing http://localhost:8080/
• Stop Tomcat by running C:\jakarta-tomcat-4.1.24\bin\shutdown.bat.

3. XML Security
• Download and unzip


http://www.apache.org/dist/xml/security/java-library/xml-security-bin-1_0_4.zip
• Copy xml-sec.jar to C:\axis-1_1\lib\
• Set up your CLASSPATH environment variable to include the following:
C:\axis-1_1\lib\xml-sec.jar;

4. Apache Axis
• Unzip the downloaded Axis archive to C:\ (this will create a directory C:\axis-1_1).
• Extract the file xmlsec.jar from the downloaded security archive to
C:\axis-1_1\webapps\axis\WEB-INF\lib.
• Set up your CLASSPATH environment variable to include the following:
o The current working directory
o All the AXIS jar files as found in C:\axis-1_1\lib
o C:\jakarta-tomcat-4.1.24\common\lib\servlet.jar
• Your CLASSPATH should therefore look something like:
C:\axis-1_1\lib\axis.jar;
C:\axis-1_1\lib\axis-ant.jar;
C:\axis-1_1\lib\commons-discovery.jar;
C:\axis-1_1\lib\commons-logging.jar;
C:\axis-1_1\lib\jaxrpc.jar;
C:\axis-1_1\lib\log4j-1.2.8.jar;
C:\axis-1_1\lib\saaj.jar;
C:\axis-1_1\lib\wsdl4j.jar;
C:\axis-1_1\lib\xercesImpl.jar;
C:\axis-1_1\lib\xmlParserAPIs.jar;
C:\jakarta-tomcat-4.1.24\common\lib\servlet.jar;
C:\axis-1_1\lib\xml-sec.jar
• Now tell Tomcat about your Axis web application by creating the file
C:\jakarta-tomcat-4.1.24\webapps\axis.xml with the following content:
<Context path="/axis" docBase="C:\axis-1_1\webapps\axis" debug="0"
privileged="true">
<Logger className="org.apache.catalina.logger.FileLogger" prefix="axis_log."
suffix=".txt" timestamp="false"/>
</Context>

5. Deploy a Sample Web service packaged within the Axis installation

Deploy one of the sample Web Services to test the system and to create the
C:\axis-1_1\webapps\axis\WEB-INF\server-config.wsdd file. From C:\axis-1_1 issue the command (on
one line):
java org.apache.axis.client.AdminClient
-lhttp://localhost:8080/axis/services/AdminService samples/stock/deploy.wsdd
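Once the stock sample is deployed, it can also be exercised with a small Axis client such as the sketch below (the service name urn:xmltoday-delayed-quotes and the getQuote operation are assumptions based on the stock sample shipped with Axis 1.1):

import org.apache.axis.client.Call;
import org.apache.axis.client.Service;
import javax.xml.namespace.QName;

public class StockPing {
    public static void main(String[] args) throws Exception {
        Service service = new Service();
        Call call = (Call) service.createCall();
        // Point the call at the deployed sample service
        call.setTargetEndpointAddress(
            new java.net.URL("http://localhost:8080/axis/services/urn:xmltoday-delayed-quotes"));
        call.setOperationName(new QName("urn:xmltoday-delayed-quotes", "getQuote"));
        // The sample returns a (dummy) float quote for the given symbol
        Float quote = (Float) call.invoke(new Object[] { "XXX" });
        System.out.println("Quote for XXX: " + quote);
    }
}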

Result:


Ex.No: 4 Develop applications using Java or C/C++ Grid APIs


Date:

Aim:
To develop applications using Java or C/C++ Grid APIs.

Sample Code:
import AgentTeamwork.Ateam.*;
import MPJ.*;
public class UserProgAteam extends AteamProg {
private int phase;
public UserProgAteam( Ateam o )
{}
public UserProgAteam( )
{}
public UserProgAteam( String[] args ) {
phase = 0;
}
// phase recovery
private void userRecovery( ) {
phase = ateam.getSnapshotId( );
}
private void compute( ) {
for ( phase = 0; phase < 10; phase++ ) {
try {
Thread.currentThread( ).sleep( 1000 );
}
catch(InterruptedException e ) {
}
ateam.takeSnapshot( phase );
System.out.println( "UserProgAteam at rank " + MPJ.COMM_WORLD.Rank( ) + " : took a
snapshot " + phase );
}}
public static void main( String[] args ) {
System.out.println( "UserProgAteam: got started" );
MPJ.Init( args, ateam);
UserProgAteam program = null;
// Timer timer = new Timer( );
if ( ateam.isResumed( ) ) {
program = ( UserProgAteam )
ateam.retrieveLocalVar( "program" );
program.userRecovery( );
}
else {
program = new UserProgAteam( args );
ateam.registerLocalVar( "program", program );
}

program.compute( );
MPJ.Finalize( );
}
public class UserProgAteam extends AteamProg {
// application body
private void compute( ) {
for ( phase = 0; phase < 10; phase++ ) {
try {
Thread.currentThread( ).sleep( 1000 );
}
catch(InterruptedException e ) {}
ateam.takeSnapshot( phase );
System.out.println ( "UserProgAteam at rank " + MPJ.COMM_WORLD.Rank( ) + " : took a snapshot
" + phase );
}}

Socket sample code – within some function body


import AgentTeamwork.Ateam.GridTcp.*;
private final int port = 2000;
private GridSocket socket;
private GridServerSocket server;
private InputStream input;
private OutputStream output;
for ( int i = start; i < start + trans; i++ ) {
try {
output.write( i % 128 );
} catch ( IOException e ) {}
System.out.println ( "Sockets with " + myRank + ": " + " output[" + i + "]=" + i % 128 );
}
for ( int i = start; i < start + trans; i++ ) {
try {
System.out.println ( "Sockets with " + myRank + ": " + " input[" + i + "]=" + input.read( ) ); }
catch ( IOException e ) {
}}

MPI sample code


import AgentTeamwork.Ateam.*;
import MPI.*;
public class UserProgAteam extends AteamProg {
// application body
private void compute( ) {
}
public static void main( String[] args ) throws Exception {
MPJ.Init( args, ateam );
program.compute( );
MPJ.Finalize( );
}
}
Result:


Ex.No: 5 Develop secured applications using basic security mechanisms
available in Globus Toolkit

Date:

Aim:
To develop secured applications using basic security mechanisms available in Globus.

Procedure:
The Globus Toolkit's Authentication and Authorization components provide the de facto
standard for the "core" security software in Grid systems and applications. These software
development kits (SDKs) provide programming libraries, Java classes, and essential tools for a
PKI, certificate-based authentication system with single sign-on and delegation features, in either
Web Services or non-Web Services frameworks. ("Delegation" means that once someone
accesses a remote system, he can give the remote system permission to use his credentials to
access other systems on his behalf.)

WEB SERVICES AUTHENTICATION AND AUTHORIZATION –

A Web services implementation of the Grid Security Infrastructure (GSI), containing the
core libraries and tools needed to secure applications using GSI mechanisms. The Grid is a term
commonly used to describe a distributed computing infrastructure which will allow "coordinated
resource sharing and problem solving in dynamic, multi-institutional virtual organizations". The
protocols and middleware to enable this Grid infrastructure have been developed by a number of
initiatives, most notably the Globus Project.

Web Services are simply applications that interact with each other using Web standards,
such as the HTTP transport protocol and the XML family of standards. In particular, Web
Services use the SOAP messaging standard for communication between service and requestor.
They should be self-describing, self-contained and modular; present a platform and
implementation neutral connection layer; and be based on open standards for description,
discovery and invocation.

The Grid Security Infrastructure (GSI) is based on the Generic Security Services API
(GSS-API) and uses an extension to X509 certificates to provide a mechanism to authenticate
subjects and authorise resources. It allows users to benefit from the ease of use of a single sign-
on mechanism by using delegated credentials, and time-limited proxy certificates. GSI is used as
the security infrastructure for the Globus Toolkit.

Recently, a new proposal for an Open Grid Services Architecture (OGSA) was announced which
marries the Grid and Web Services to create a new Grid Services model. One problem, which
has not yet been explicitly addressed, is that of security. A possible solution is to use a suitably
secure transport binding, e.g. TLS, and extend it to incorporate appropriate support for proxy
credentials. It would be useful to test out some of the principles of Grid Services using the
currently available frameworks and tools for developing Web Services. Unfortunately, no
standards currently exist for implementing proxy credential support to provide authenticated
communication between web services. A number of XML/Web Services security standards are
currently in development, e.g. XML Digital Signatures, SAML, XKMS, XACML, but the
remainder of this document describes an approach proposed by ANL to use GSI over an SSL
link.
GAP, a generic job submission environment, enables researchers and scientists to execute
their applications on the Grid from a conventional web browser. Both sequential and parallel jobs
can be submitted to the GARUDA Grid through the Portal. It provides a web interface for viewing
the resources, and for submitting and monitoring jobs.

Pre-requisites for using GAP


Portal users need to set the following in their ~/.bashrc file.
export GLOBUS_LOCATION=/opt/asvija/GLOBUS-4.0.7/
source /opt/asvija/GLOBUS-4.0.7/etc/globus-user-env.sh
export PATH=/usr/local/jdk1.6.0_10/bin:$GW_LOCATION/bin:/opt/garudaresv/bin:/opt/voms_client/bin:$PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/voms_client/lib:

Accessing GAP
Type http://192.168.60.40/GridPortal1.3/ (to access the Portal through GARUDA Network)
or http://203.200.36.236/GridPortal1.3 (to access the Portal through Internet) in the address bar
of the web browser to invoke the Portal. It is preferable to access the Portal through GARUDA
Network, since it is much faster than the Internet.

In order to access the facilities of the Grid Portal such as Job Submission, Job Status tracking,
storing (uploading) of executables and viewing output/error data, the user has to log into the
Portal using the User's Login Form in the Home page of the Portal.

a) New users are required to click Sign up in the User Login Form, which leads them to the home
page of the Indian Grid Certification Authority (IGCA). Click on Request Certificate and acquire
the required user/host certificate(s); details are provided in the IGCA section.
b) Registered users are required to provide User Id and Password for logging into the Portal and
access various facilities.
Job Management
User can submit their job, monitor the status and view output files using the Job Management
interfaces. Types of job submission (Basic and Advanced) and Job information are covered
under this section.
Basic Job Submission
This interface can be used to submit sequential as well as parallel jobs. The user should provide
the following information:
1. Optional Job Name - User can provide a suitable (alias) name for their job.
2. Type of job the user wants to execute,
3. Operating System - required for the job,
4. 'Have you reserved the Resources' - An optional parameter containing the Reservation IDs
that can be used for job submission instead of choosing the Operating System/Processor
parameter.
5. No. of processes required for the job - This parameter is only for the parallel
applications that require more than one CPU.
6. Corresponding Executables – uploaded from either local or remote machine,
7. Input file, if required - The executable and the input file can either be uploaded from the
local machine or can be selected from the Remote File List, if it is available in the
Submit Node
8. STDIN - Required when the user wants to provide any inputs to the application during the
runtime.
9. Optional Execution Time - Here the Execution Time is the anticipated job completion time.
10. Any command line arguments or environment variables, if required.
11. User-specific output/error files - If the application generates output/error files other
than the standard output/error files; multiple entries should be separated by commas or a
single space.
All those fields marked with * are mandatory fields and should be filled before submitting a
job. By clicking on the submit button, the portal submits the job to the GridWay Meta Scheduler,
which then schedules the job for execution and returns the Job Id. The Job Id has to be noted
for future reference to this job. In the event of unsuccessful submission, the corresponding
error message is displayed.

Advanced Job Submission

St.joseph’s college of Engineering 29


IT6713 Grid & Cloud Computing Lab Department of IT 2018-2019

This interface is provided for the user to submit their Sequential and Parallel Jobs. The
difference from Basic Job Submission is that it uses GT4 Web Services components for
submitting jobs to the Grid instead of GridWay as the scheduler.

The user is provided with two modes in this interface:


1. Default mode - Portal creates the XML file for the user.
2. Second mode, recommended for advanced users - The user can provide their own XML file
as the executable, provided the required files are available in the submit node.

Job Info
The user can view the status of the job submitted through Portal and the output file of the job by
specifying the Job Id. The option for downloading the Output/ Error file is also provided, after
the job execution. To cancel any of the queued jobs, the user has to select the job and click
Cancel Job button, following which the acknowledgment for the job canceled is
provided.

Resources
The GridWay meta-scheduler provides the following information: Node Name, Head Node,
OS, ARCH, Load Average, Status, Configured Process and Available Process. This
information aids the user in selecting a suitable cluster and reserving it in advance for job
submission.

Steps for Reservation of Resources


1. Check the available free resources with valid parameters (Start Time and End Time -
duration for which the resource needs to be reserved). The input fields No. of CPUs and OS
entries are optional.
Example: starttime= 2009-04-02 17:06:53 endtime=2009-04-02 19:07:10
No. of CPUs=2 OS NAME=Linux
2. Choose the Available Process required for the job. Example: Available Procs = 4
3. Select the required resource from the available list of resources.
4. Book the resources for reserving a resource for the requested period of time and process.
5. The reserved resources can be modified/canceled.
6. Once the reservation process is successfully completed, the Reservation Id is displayed and
is made available in the Basic Job Submission page.

File browser
For the logged-in user, the File Browser lists files, such as the uploaded executables and
Input/Output/Error files, along with their size and last modified information. It also allows
deletion of files.
Accounting

This module provides Accounting information of the jobs that are submitted to GARUDA, such
as the number of jobs submitted, and system parameters such as memory usage, virtual memory,
wall time, and CPU time. The last one month's data is displayed by default.

MyProxy
MyProxy allows users to upload their Globus certificates into the MyProxy server, and the same
can be used for initializing the Grid proxy on the Grid. If the certificate has already been
generated for you, but you do not have access to the above-mentioned files, you can download
it from the GridFS machine (from the $HOME/.globus directory) using winscp/scp.

MyProxy Init
By default, the "Myproxy Init" option is enabled for the user. Upload proxy by entering valid
inputs - User name, Grid-proxy Passphrase, User certificate file (usercert.pem), User key file
(userkey.pem) and Proxy life time (168 hours is the default value).


MyProxyGet
Grid proxy will be initialized on the Grid head node by providing the inputs - User name,
Myproxy Passphrase and Life time of the certificate.

VOMS Proxy
The Virtual Organization Management System (VOMS) allows users to belong to Virtual
Organizations (VOs), thereby allowing them to utilize resources earmarked for those VOs.

The user can also request for a new VO by using "Request for VO" link. VOMS proxy
initialization with Multiple roles is provided to the user, by selecting more than one entry on
the Role combo box.

Steps to be followed to access GSRM from gridfs:


Login to gridfs(192.168.60.40)
Upload your IGCA user certificates
Initialize proxy with grid-proxy-init
Set environmental variables, respectively for whichever client to be used.
Run the SRM commands
GSRM Access points
The pvfs2 (172.20.1.81) node should be used just to test all the available SRM client interfaces like
StoRM, DPM, BeStMan.
The gridfs (192.168.60.40) node should be used if the user wishes to use GSRM storage for job
execution. Users can download/upload input/output files into GSRM while submitting jobs
from gridfs.
Following Access mechanisms are available at above mentioned nodes to access GSRM:
1. gridfs(192.168.60.40) : gridfs is the Bangalore GARUDA head node. GSRM services can
be accessed from here using StoRM command line interface.
If the user wants to use the clientSRM (StoRM client) from the gridfs machine:
Create a valid user proxy using grid-proxy-init
Set the env variable for the Globus location path
export GLOBUS_LOCATION=/usr/local/GARUDA/GLOBUS-4.0.7/
export PATH=$PATH:/opt/gsrm-client/srmv2storm/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/gsrm-client/cgsi_soap/lib
Run the clientSRM command
2. pvfs2 (172.20.1.81): pvfs2 is the GSRM testing node with the following client interfaces
installed.
Bestman Java APIs
DPM C APIs
3. GSRM Web Client is accessible from any of the user machines reachable to GSRM server
(xn05.ctsf.cdac.org.in), using URL -- https://xn05.ctsf.cdac.org.in/
GSRM Client Interfaces
StoRM Command Line Client
1. StoRM command line client format:

clientSRM <requestName> <requestOptions>


2. To get help for clientSRM commands:
clientSRM -h
3. Command to ping to GSRM server:
clientSRM ping -e <GSRM end point>
Bestman Command Line Clients
1. Command to ping to GSRM server
srm-ping -serviceurl httpg://xn05.ctsf.cdac.org.in:8446/dpm/ctsf.cdac.org.in/home/garuda
2. Upload file to GSRM server
srm-copy <src url> <target url> <service url>
Pre-requisites for using the SOA compiler
1. Java Runtime Environment (JDK 1.6+)
2. Web browser with Java Web Start support
Compiler GUI

The users are required to adhere to the following directory structure: Application Parent Dir -
src/, bin/, lib/, include/
1) Login
This method is for logging in to the Portal.
Inputs
user name - MyProxy user name
password - MyProxy password
life time - Indicates how long the proxy's life time is
Output
Proxy string - Proxy issued by the MyProxy server
Login status - Indicates the status of the operation
Last Login Time - Gives when this user was last logged in
Current Login Time - Gives the user's logging-in time
2) uploadProxy
This method uploads a proxy that is generated using other tools to the MyProxy server.
Inputs
user name - MyProxy user name
password - MyProxy password
proxyBytes - Existing proxy file given as a byte array
Output
uploadStatus - Indicates the status of the operation
3) storeCredential
This method is used for uploading the credentials, that is, the PKCS12 certificate, directly to the
MyProxy server. It will convert the PKCS12 to a certificate and store it in the server for users to
download the proxy until it expires.
Inputs
user name - MyProxy user name
password - MyProxy password
p12Bytes - PKCS12 file as a byte array
Output
storeStatus - Indicates the status of the operation

Result:


Ex.No: 6 Develop a Grid portal where the user can submit a job and get the result.
Implement it with and without the GRAM concept.
Date:

Aim:
To develop a Grid portal where the user can submit a job and get the result, and to implement it
with and without the GRAM concept.

Procedure:
1. Opening the workflow editor
The editor is a Java Web Start application; download and installation take only a click.

2. Java Web Start application

Download and install

3. Job property window:


4. The information system can query EGEE and Globus information systems

5. List of available grids


6. Computing resources of such a grid

7. Broker resource selection


- Select a Broker Grid for the job
- Specify extra ranks and requirements for the job in Job description language.
- The broker will find the best resource for your job.

8. Defining input/output data for jobs


File type
Input: required by the job
Output: produced by the job
File location:
local: my desktop
remote: grid storage resource
File name:
Unique name of the file
File storage type:
Permanent: final result of WF
Volatile: only used for inter-job data transfer

9. Executing workflows with the P-Grade portal


Download proxies

10. Downloading a proxy

11. Associating the proxy with a grid


12. Browsing Proxies

13. Workflow execution


Workflow portlet

14. Observation by the workflow portlet

15. Downloading the results

Result:


Ex.No: 7 Virtual Machine with Different Configuration


Date :

Aim:
To create and run virtual machines of different configurations. Check how many virtual
machines can be utilized at a particular time.
Procedure:
Step 1: Check that your CPU supports hardware virtualization.
$ egrep -c '(vmx|svm)' /proc/cpuinfo
Step 2: To see if your processor is 64-bit or not.
$ egrep -c ' lm ' /proc/cpuinfo
Step 3: Now see if your running kernel is 64-bit or not.
$ uname -a
Step 4: To install the KVM, execute the following command.
$ sudo apt-get install qemu-kvm
$ sudo apt-get install libvirt-bin
$ sudo apt-get install ubuntu-vm-builder
$ sudo apt-get install bridge-utils
Step 5: Verify the KVM installation has been successful or not.
$ virsh -c qemu:///system list
Step 6: Installing a GUI for KVM.
$ sudo apt-get install virt-manager
Step 7: Creating a KVM guest machine.
$ virt-manager

Step 8: Then start creating a new virtual machine by hitting the New button. Enter the name
of your virtual machine. Select your installation media type and click Forward.


Step 9: Then you will have to set the amount of RAM and the number of CPUs that will be available to that
virtual machine.

Step 10: Finally, you will get a confirmation screen that shows the details of your virtual
machine. Then click the Finish button.

Step 11: Repeat the same procedure to create multiple virtual machines.
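Alternatively, a guest with a given configuration can also be created from the command line with virt-install (a sketch; the VM name, RAM/CPU sizes and ISO path are illustrative values):

$ sudo virt-install --name testvm1 --ram 1024 --vcpus 2 \
      --disk size=8 --cdrom ~/Downloads/ubuntu.iso

Running the command again with a different --name (and adjusted --ram/--vcpus) creates further virtual machines, which makes it easy to check how many can be utilized at a particular time.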

Output:


Result:

Ex.No:8 Attach Virtual Block To The Virtual Machine


Date:
Aim:
To find the procedure to attach a virtual block (volume) to a virtual machine and check whether it
holds the data even after the release of the virtual machine.

Procedure:
1. Log in to the openstack dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Compute tab and click Images category.
4. Click Create Image. The Create An Image dialog box appears.
5. Enter the following values:

6. Click Create Image. The image is queued to be uploaded. It might take some time before the status
changes from Queued to Active.

7. On the Project tab, open the Compute tab and click Volumes category.

8. Click Create Volume. In the dialog box that opens, enter or select the following values.

Volume Name: Specify a name for the volume.


Description: Optionally, provide a brief description for the volume.
Volume Source: Select one of the following options:
No source, empty volume
Image
Volume
Type: Leave this field blank.
Size (GB): The size of the volume in gibibytes (GiB).
Availability Zone: Select the Availability Zone from the list. By default, this value is set to
the availability zone given by the cloud provider. For some cases, it could be nova.
9. Click Create Volume.
10. Then select Manage Attachments and select the instance (virtual machine). It might take
some time to attach the volume to the instance.
11. On the Project tab, open the Compute tab, click Instances and select “Launch Instance”

12. Select console and execute the following command

df -h

sudo mkfs.ext3 /dev/vdb

sudo mount /dev/vdb /mnt/

13. Created volume is attached to the instance.

14. Create some files in that volume and save the file.

15. Execute the ‘umount /mnt/’ command to unmount the volume.

16. Delete the Virtual Machine (VM). After that, create a new VM, attach the volume to the newly
created VM, and check whether the previously created files are present in that volume, as in the sketch below.
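A minimal persistence check might look like the following (assuming the volume appears as /dev/vdb inside the instance, as in step 12):

sudo mount /dev/vdb /mnt/
echo "persistence test" | sudo tee /mnt/test.txt
sudo umount /mnt/
# release the VM, attach the volume to the new VM, then:
sudo mount /dev/vdb /mnt/
cat /mnt/test.txt    # should still print "persistence test"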

Output:


Result:

Ex.no: 9 Show the Virtual Machine Migration Based on Certain
Conditions from One Node to Another

Date:

Aim: To perform a Virtual Machine Migration based on certain conditions from one node to
another.

Migration is a process of moving an existing virtual machine between different physical


machines. It is usually desired, in case of a relocation of the client, or when there is a potential
for failure in the existing servers that hold the virtual machines.

Live migration is the process of moving a running virtual machine or application between
different physical machines without disconnecting the client or application. Memory, storage,
and network connectivity of the virtual machine are transferred from the original guest machine
to the destination.

Procedure
1. To perform Migration in Openstack, go to System -> Instances
2. On the right end of the virtual machine you wish to migrate, click on the down arrow
3. Click on Live Migrate Instance option


4. It lists out the available nodes (physical servers) to which you can migrate your instance

5. Select the node and click Submit.
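The same migration can also be triggered from the command line with the nova client (a sketch; the instance ID and target host are placeholders for values from your deployment):

nova live-migration <instance-id> <target-host>
nova show <instance-id>    # the OS-EXT-SRV-ATTR:host field should now report the target host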

Result:


Ex.No: 10 Install GCC Compiler in Virtual Machine


Date :

Aim: To install a C compiler in the virtual machine and execute a sample program.

Algorithm:
Step 1: Check whether the GCC compiler is installed or not.
$ dpkg -l | grep gcc
Step 2: If the GCC compiler is not installed in the VM, execute the following command
$ sudo apt-get install gcc (or) $ sudo apt-get install build-essential
Step 3: Open a Vi Editor
Step 4: Get the no. of rows and columns for first and second matrix.
Step 5: Get the values of x and y matrix using for loop.
Step 6: Find the product of first and second and store the result in multiply matrix.
multiply[i][j]=multiply[i][j]+(first[i][k]*second[k][j]);
Step 7: Display the resultant matrix.
Step 8: Stop the program.

Program:
#include <stdio.h>
int main()
{
int m, n, p, q, c, d, k, sum = 0;
int first[10][10], second[10][10], multiply[10][10];
printf("Enter the number of rows and columns of first matrix\n");
scanf("%d%d", &m, &n);
printf("Enter the elements of first matrix\n");
for ( c = 0 ; c < m ; c++ )
for ( d = 0 ; d < n ; d++ )
scanf("%d", &first[c][d]);
printf("Enter the number of rows and columns of second matrix\n");
scanf("%d%d", &p, &q);
if ( n != p )
printf("Matrices with entered orders can't be multiplied with each other.\n");
else
{
printf("Enter the elements of second matrix\n");

for ( c = 0 ; c < p ; c++ )


for ( d = 0 ; d < q ; d++ )
scanf("%d", &second[c][d]);
for ( c = 0 ; c < m ; c++ )
{
for ( d = 0 ; d < q ; d++ )
{
St.joseph’s college of Engineering 43
IT6713 Grid & Cloud Computing Lab Department of IT 2018-2019

for ( k = 0 ; k < p ; k++ )


{
sum = sum + first[c][k]*second[k][d];
}
multiply[c][d] = sum;
sum = 0;
}
}
printf("Product of entered matrices:-\n");
for ( c = 0 ; c < m ; c++ )
{
for ( d = 0 ; d < q ; d++ )
printf("%d\t", multiply[c][d]);
printf("\n");
}
}
}
Output:
Compile: it105@it105-HP-ProDesk-400-G1-SFF:~$ gcc matrix.c
Run: it105@it105-HP-ProDesk-400-G1-SFF:~$ ./a.out
Enter the number of rows and columns of first matrix
2 2
Enter the elements of first matrix
2 2 2 2
Enter the number of rows and columns of second matrix
2 2
Enter the elements of second matrix
2 2 2 2
Product of entered matrices:-
8 8
8 8

Result:


Ex.no: 11 Storage Controller and Interact


Date:

Aim:
To find the procedure to install the storage controller and interact with it.

Procedure:
Step 1: Create Volume
1. Creating a volume
cinder create --display_name ers2 1
2. Attaching the volume to a VM
nova volume-attach INSTANCE_ID VOLUME_ID auto

Step 2: Extending a volume size


1. Detach the volume
nova volume-detach INSTANCE_ID VOLUME_ID

2. Extend the volume
cinder extend VOLUME_ID 2
3. Attach the volume
nova volume-attach INSTANCE_ID VOLUME_ID auto

Step 3: Delete the Volume


1. Detach the volume
nova volume-detach INSTANCE_ID VOLUME_ID
2. Delete the volume
cinder delete ers2
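At each step, the state of the volume (creating, available, in-use, deleting) can be confirmed with:

cinder list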

Output:
Creating a volume
stack:~$ cinder create --display_name ers2 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-06-01T06:58:53.000000 |
| description | None |
| encrypted | False |
| id | 4cd8de9a-997e-4b6d-b2b3-cc1e3f96dfef |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | ers2 |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 6e10bfbc0fea4905b7d88ea84c7da54c |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | None |
| user_id | a18632a00c414220a3d3cfd5dfceddf0 |
| volume_type | lvmdriver-1 |
+--------------------------------+--------------------------------------+

Attaching the volume to a VM


stack~$ nova volume-attach b4b34e22-edd2-4c69-8665-27ad7120119e 4cd8de9a-
997e-4b6d-b2b3-cc1e3f96dfef auto
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdc |
| id | 4cd8de9a-997e-4b6d-b2b3-cc1e3f96dfef |
| serverId | b4b34e22-edd2-4c69-8665-27ad7120119e |
| volumeId | 4cd8de9a-997e-4b6d-b2b3-cc1e3f96dfef |
+----------+--------------------------------------+

Result:


Ex.No:12 Hadoop Single Node Cluster


Date:
Aim:
To find procedure to set up the one node Hadoop cluster.

Procedure
1. Install openssh server
sudo apt-get install openssh-server

2. Create the new group


sudo addgroup <groupname>
eg:<groupname> cluster1

3. Create the new user in newly created group


sudo adduser --ingroup <groupname> <username>
eg: <username> hadoop1

4. Login to newly created user.

5. Generate the SSH key


ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost

6. Unzip the JDK and Hadoop files


tar -xzf jdk-8u45-linux-x64.tar.gz
tar -xzf hadoop-2.6.0.tar.gz

7. Open the ‘bashrc’ file


gedit .bashrc

8. Add the following lines at the bottom of the ‘.bashrc’ file.


export JAVA_HOME=~/jdk1.8.0_45
export HADOOP_HOME=~/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH

9. Execute the ‘bashrc’ file


exec bash
source .bashrc
hadoop version

10. Open the ‘hadoop-env.sh’ file and add the Java home directory (Java folder path)
gedit hadoop-env.sh
export JAVA_HOME=~/jdk1.8.0_45


11. Copy and paste the following into the corresponding files under the hadoop folder --> etc --> hadoop


vim core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.job.tracker</name>
<value>localhost:54311</value>
</property>
<property>
<name>mapreduce.tasktracker.map.tasks.maximum</name>
<value>4</value>
</property>
<property>
<name>mapreduce.map.tasks</name>
<value>4</value>
</property>
</configuration>
vim hdfs-site.xml // to edit the username in this file
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>

<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser1/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser1/hdfs/datanode</value>
</property>
</configuration>
12. Format the hadoop namenode
hadoop namenode -format


13. Start the hadoop


hadoop-2.6.0/sbin/start-all.sh

14. Check the nodes that are created or not


jps

15. If any node is not created, execute the commands below and repeat steps 12 & 13
hadoop-2.6.0/sbin/stop-all.sh
rm -r hdfs/

16. Open the namenode and datanode. In the browser, type the following port numbers
localhost:50070
localhost:8088
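As a quick smoke test of the new cluster, a few HDFS shell commands can be run (sample.txt below stands for any local file of your choice):

hdfs dfs -mkdir /test
hdfs dfs -put ~/sample.txt /test
hdfs dfs -ls /test
hdfs dfs -cat /test/sample.txt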
Output:
NameNode:

DataNode:

Result:


Ex.no: 13 Mount the One Node Hadoop Cluster Using Fuse


Date:

Aim: To mount the one node Hadoop cluster using FUSE

Procedure:
Hadoop Distributed File System (HDFS) is a distributed, scalable file system developed as the
back-end storage for data-intensive Hadoop applications. As such, HDFS is designed to handle
very large files with a "write-once-read-many" access model. As HDFS is not a full-fledged
POSIX compliant file system, it cannot be directly mounted by the operating system, and file
access with HDFS is done via HDFS shell commands.

However, one can leverage FUSE to write a userland application that exposes HDFS via
a traditional file system interface. fuse-dfs is one such FUSE-based application which allows you
to mount HDFS as if it were a traditional Linux file system. If you would like to mount HDFS on
Linux, you can install fuse-dfs, along with FUSE as follows.

Now, install fuse-dfs and all necessary dependencies as follows.


To install fuse-dfs on Ubuntu 12.04 and higher:
$ wget http://archive.cloudera.com/one-click-install/maverick/cdh3-repository_1.0_all.deb
$ sudo dpkg -i cdh3-repository_1.0_all.deb
$ sudo apt-get update
$ sudo apt-get install hadoop-0.20-fuse

Once fuse-dfs is installed, go ahead and mount HDFS using FUSE as follows.
$ sudo hadoop-fuse-dfs dfs://<name_node_hostname>:<namenode_port> <mount_point>

Once HDFS has been mounted at <mount_point>, you can use most of the traditional file system operations (e.g., cp, rm, cat, mv, mkdir, rmdir, more, scp). However, random write operations such as rsync, and permission-related operations such as chmod and chown, are not supported in FUSE-mounted HDFS.
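For example, once mounted, the directory behaves like an ordinary part of the file system (the paths here are illustrative):
$ ls ~/hdfs_mount
$ cp /etc/hosts ~/hdfs_mount/
$ cat ~/hdfs_mount/hosts
$ sudo umount ~/hdfs_mount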

RESULT:


Ex.no: 14 APIs of Hadoop


Date:

Aim:
To write a program that uses the APIs of Hadoop to interact with it.

Procedure:
1. After creating the one-node Hadoop cluster, create the HDFS input directory.
hdfs dfs -mkdir /Exam1

2. Copy the input file to the created input directory.

hdfs dfs -copyFromLocal weather_data /Exam1

3. If the NameNode is running in safe mode, execute the command below to leave it.

hadoop dfsadmin -safemode leave

4. Execute the Hadoop MapReduce JAR file, specifying the input and output directories.
hadoop jar WeatherJob.jar WeatherJob /Exam1 /output
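Once the job completes, the results can be read back from the /output directory (assuming the default reducer output file name part-r-00000):
hdfs dfs -cat /output/part-r-00000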

5. Open the NameNode and ResourceManager web UIs. In a browser, type the following addresses.
localhost:50070
localhost:8088

Program:
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WeatherJob {

public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {


private final static IntWritable temperature = new IntWritable(1);
private Text weatherKey = new Text();

public void map(LongWritable key, Text value, Context context) throws IOException,
InterruptedException {
// each input line is tab-separated: COUNTRY, ZIP, DAY, TEMPERATURE
String line = value.toString();
String[] tokens = line.split("\t");

// emit the temperature once keyed by country alone
weatherKey.set(tokens[0]);
temperature.set(Integer.parseInt(tokens[3]));
context.write(weatherKey, temperature);
// and once keyed by country + zip code
weatherKey.set(tokens[0] + tokens[1]);
temperature.set(Integer.parseInt(tokens[3]));
context.write(weatherKey, temperature);
}
}

public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
long sum = 0;
long count = 0;
// start from the extremes so the first value becomes both the minimum and the maximum
long min = Long.MAX_VALUE;
long max = Long.MIN_VALUE;
for (IntWritable val : values) {
int temperature = val.get();
sum += temperature;
count++;
if (min > temperature) {
min = temperature;
}
if (max < temperature) {
max = temperature;
}
}
// emit the average, minimum and maximum temperature for this key
context.write(new Text(key.toString() + " - Avg"), new IntWritable((int) (sum / count)));
context.write(new Text(key.toString() + " - Min"), new IntWritable((int) min));
context.write(new Text(key.toString() + " - Max"), new IntWritable((int) max));
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = new Job(conf, "WeatherJob");
job.setJarByClass(WeatherJob.class);

job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);

job.setInputFormatClass(TextInputFormat.class);

job.setOutputFormatClass(TextOutputFormat.class);

FileInputFormat.addInputPath(job, new Path(args[0]));


FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.waitForCompletion(true);
}}
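The procedure assumes WeatherJob.jar has already been packaged. A minimal sketch for compiling and packaging it, assuming WeatherJob.java is in the current directory and the hadoop command is on the PATH:
javac -classpath $(hadoop classpath) WeatherJob.java
jar cf WeatherJob.jar WeatherJob*.class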
Input (weather_data):
COUNTRY1 ZIP1 DAY1 8
COUNTRY1 ZIP1 DAY2 31
COUNTRY1 ZIP1 DAY3 9
COUNTRY1 ZIP1 DAY4 65
COUNTRY1 ZIP1 DAY5 70
COUNTRY1 ZIP1 DAY6 18
COUNTRY1 ZIP1 DAY7 71

Output:
COUNTRY1 ZIP1 MIN 0
COUNTRY1 ZIP1 AVG 49
COUNTRY1 ZIP1 MAX 99
NameNode:

DataNode:

Result:

Ex.no: 15 Word Count Program


Date:
Aim:
To write a word count program to demonstrate the use of Map and Reduce tasks.

Procedure:
1. After creating the one-node Hadoop cluster, create the HDFS input directory.
hdfs dfs -mkdir /inp1
2. Copy the input file to the created input directory.
hdfs dfs -copyFromLocal hadoop-2.6.0/etc/hadoop/a.txt /inp1
3. If the NameNode is running in safe mode, execute the command below to leave it.
hadoop dfsadmin -safemode leave
4. Execute the Hadoop MapReduce examples JAR file, specifying the input and output directories.
hadoop jar hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar
wordcount /inp1 /out1
5. Open the NameNode and ResourceManager web UIs. In a browser, type the following addresses.
localhost:50070
localhost:8088

6. Download the result file from the output directory and view the word count output.
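Alternatively, the output can be read directly from the shell (assuming the default reducer output file name part-r-00000):
hdfs dfs -cat /out1/part-r-00000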

Program:

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.*;
import org.apache.hadoop.mapreduce.lib.output.*;
import org.apache.hadoop.util.*;

public class WordCount extends Configured implements Tool {


public static void main(String args[]) throws Exception {
int res = ToolRunner.run(new WordCount(), args);
System.exit(res);
}
public int run(String[] args) throws Exception {
Path inputPath = new Path(args[0]);
Path outputPath = new Path(args[1]);
Configuration conf = getConf();
Job job = new Job(conf, this.getClass().toString());

St.joseph’s college of Engineering 54


IT6713 Grid & Cloud Computing Lab Department of IT 2018-2019

FileInputFormat.setInputPaths(job, inputPath);
FileOutputFormat.setOutputPath(job, outputPath);
job.setJobName("WordCount");
job.setJarByClass(WordCount.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);

job.setMapperClass(Map.class);
job.setCombinerClass(Reduce.class);
job.setReducerClass(Reduce.class);

return job.waitForCompletion(true) ? 0 : 1;
}
public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();

public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
// split the line on whitespace and emit (word, 1) for every token
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
word.set(tokenizer.nextToken());
context.write(word, one);
}
}
}

public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
// sum the counts emitted for this word by all mappers and combiners
int sum = 0;
for (IntWritable value : values) {
sum += value.get();
}
context.write(key, new IntWritable(sum));
}
}

}
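The procedure above runs the word count bundled in the Hadoop examples JAR; to run this listing instead, it can be compiled, packaged and submitted the same way (a sketch, assuming WordCount.java is in the current directory and /out2 does not yet exist):
javac -classpath $(hadoop classpath) WordCount.java
jar cf WordCount.jar WordCount*.class
hadoop jar WordCount.jar WordCount /inp1 /out2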
NameNode:

DataNode:

Input (a.txt):
aaa aaa aaa aaa
bbb bbb bbb bbb
ccc ccc ccc ccc

Output:
aaa 4
bbb 4
ccc 4

Result:
