GCC Lab Manual
LAB MANUAL
FACULTY IN CHARGE
Mr.D.KESAVARAJA
Ms.A.ANNIE JESUS SUGANTHI RANI
HOD/CSE
CS6712 GRID AND CLOUD COMPUTING LABORATORY
LIST OF EXPERIMENTS:
GRID COMPUTING LAB
Use Globus Toolkit or equivalent and do the following:
1. Develop a new Web Service for Calculator.
2. Develop new OGSA-compliant Web Service.
3. Using Apache Axis develop a Grid Service.
4. Develop applications using Java or C/C++ Grid APIs
5. Develop secured applications using basic security mechanisms available in Globus
Toolkit.
6. Develop a Grid portal, where user can submit a job and get the result. Implement it with
and without GRAM concept.
TOTAL: 45 PERIODS
OUTCOMES:
At the end of the course, the student should be able to
Use the grid and cloud tool kits.
Design and implement applications on the Grid.
Design and Implement applications on the Cloud.
SOFTWARE:
Globus Toolkit or equivalent
Eucalyptus or Open Nebula or equivalent
HARDWARE
Standalone desktops 30 Nos
TABLE OF CONTENTS
LIST OF EXPERIMENTS
1 FIND PROCEDURE TO SET UP THE ONE NODE HADOOP CLUSTER
2 TO MOUNT THE ONE NODE HADOOP CLUSTER USING FUSE
1 DEVELOP NEW WEB SERVICE FOR CALCULATOR
3 DEVELOPING NEW GRID SERVICE
4 DEVELOP APPLICATIONS USING JAVA - GRID APIS
AIM:
To find procedure to set up the one node Hadoop cluster
INTRODUCTION:
Apache Hadoop is an open-source software framework for distributed storage and distributed
processing of very large data sets on computer clusters built from commodity hardware.
All the modules in Hadoop are designed with a fundamental assumption that hardware failures
are common and should be automatically handled by the framework.
The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File
System (HDFS), and a processing part called MapReduce.
Hadoop splits files into large blocks and distributes them across nodes in a cluster.
STEPS:
Step 1: Install Java on CentOS 7
Step 2: Install Hadoop Framework in CentOS 7
Step 3: Configure Hadoop in CentOS 7
Step 4: Format Hadoop Namenode
Step 5: Start and Test Hadoop Cluster
Step 6: Browse Hadoop Services
PROCEDURE:
6. Create a new user account on your system without root powers (we'll use it for the Hadoop
installation path and working environment).
The new account's home directory will reside in the /opt/hadoop063 directory.
# useradd -d /opt/hadoop063 hadoop063
# passwd hadoop063
7. Extract the Hadoop archive file into the hadoop063 account's home directory.
8. Edit the hadoop063 user's .bash_profile to add the Hadoop and Java environment variables:
$ vi .bash_profile
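Append lines of the following form (a sketch; the exact paths are assumptions and depend on where Java is installed and where the Hadoop archive was extracted, here taken to be /usr/java/default and /opt/hadoop063/hadoop):
export JAVA_HOME=/usr/java/default
export HADOOP_HOME=/opt/hadoop063/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin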
9. To initialize the environment variables and to check their status, issue the below commands:
$ source .bash_profile
$ echo $HADOOP_HOME
$ echo $JAVA_HOME
10. Finally, configure ssh key based authentication for the hadoop account by running the below
command.
Also, leave the passphrase field blank in order to automatically log in via ssh.
$ ssh-keygen -t rsa
(Just give enter)
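To complete passwordless login to the local node, the public key is typically appended to the authorized keys file (a sketch; run as the hadoop063 user):
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
$ ssh localhost    # should now log in without prompting for a password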
11. To setup Hadoop cluster on a single node in a pseudo distributed mode edit its configuration
files.
Edit core-site.xml file. This file contains information about the port number used by Hadoop
instance, file system allocated memory, data store memory limit and the size of Read/Write
buffers.
$ vi etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000/</value>
</property>
</configuration>
12. Edit hdfs-site.xml file. This file contains information about the value of replication
data, namenode path and datanode path for local file systems.
$ vi etc/hadoop/hdfs-site.xml
Here add the following properties between <configuration> ... </configuration> tags.
/opt/volume/ directory is used to store our hadoop file system.
<configuration>
<property>
<name>dfs.data.dir</name>
<value>file:///opt/volume/datanode</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///opt/volume/namenode</value>
</property>
</configuration>
13. We've specified /opt/volume/ as our hadoop file system storage, therefore we need to create
those two directories (datanode and namenode) from the root account and grant all permissions to
the hadoop account by executing the below commands.
$ su
Password: root
# mkdir -p /opt/volume/namenode
# mkdir -p /opt/volume/datanode
# chown -R hadoop063:hadoop063 /opt/volume/
# ls -al /opt/ //Verify permissions
# exit //To exit from root account
14. Create the mapred-site.xml file to specify that we are using the YARN MapReduce framework.
$ vi etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
$ vi etc/hadoop/yarn-site.xml
Add the following lines to yarn-site.xml file between <configuration> ... </configuration>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
16. Set Java home variable for Hadoop environment by editing the below line from hadoop-env.sh
file.
$ vi etc/hadoop/hadoop-env.sh
Change the export JAVA_HOME line as below (You are pointing to your Java system path).
export JAVA_HOME=/usr/java/default/
17. Edit the slaves file, which lists the hosts that run datanodes. For a single-node setup, the default
localhost entry can be left as it is:
$ vi etc/hadoop/slaves
18. Once the Hadoop single-node cluster has been set up, it is time to initialize the HDFS file system by
formatting the /opt/volume/namenode storage directory with the following command:
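A sketch of the format command (assuming the hadoop063 environment variables are loaded so that hdfs is on the PATH):
$ hdfs namenode -format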
19. The Hadoop commands are located in $HADOOP_HOME/sbin directory. In order to start
Hadoop services run the below commands on your konsole:
$ start-dfs.sh
$ start-yarn.sh
$ jps
Alternatively, you can view a list of all open sockets for Apache Hadoop on your system using
the ss command.
$ ss -tul
$ ss -tuln // Numerical output
20. To test the Hadoop file system cluster, create a directory in the HDFS file system and copy a
file from the local file system to HDFS storage (i.e. insert data into HDFS).
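A possible test sequence (a sketch; the directory name and MyFile.txt are illustrative):
$ hdfs dfs -mkdir /mydata
$ hdfs dfs -put MyFile.txt /mydata
$ hdfs dfs -ls /mydata
$ hdfs dfs -cat /mydata/MyFile.txt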
OUTPUT:
RESULT:
Thus the Apache Hadoop Cluster was configured and results were checked and verified.
Ex: No: 2 TO MOUNT THE ONE NODE HADOOP CLUSTER USING FUSE
AIM:
To mount the one node Hadoop cluster using FUSE on CentOS
FUSE:
FUSE (Filesystem in Userspace) permits you to write an ordinary user-space application as a bridge to a
conventional file system interface.
The hadoop-hdfs-fuse package permits you to use your HDFS cluster as if it were a conventional
file system on Linux.
It is assumed that you have a working HDFS cluster and know the hostname and port that
your NameNode exposes.
The Hadoop FUSE installation and configuration, including mounting HDFS via FUSE, is
done by following the below steps.
Step 1. Required Dependencies
Step 2. Download and Install FUSE
Step 3. Install RPM Packages
Step 4. Modify HDFS FUSE
Step 5. Check HADOOP Services
Step 6. Create a Directory to Mount HADOOP
Step 7. Modify HDFS-MOUNT Script
Step 8. Create softlinks of LIBHDFS.SO
Step 9. Check Memory Details
PROCEDURE:
#wget https://hdfs-fuse.googlecode.com/files/hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz
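Extract the downloaded archive, for example (a sketch; the target directory is an assumption consistent with the paths used below):
# tar -xzf hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz -C /opt/hadoop063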
Modify the hdfs-mount script to set the JVM path location and other environment settings:
$ vi hdfs-mount
JAVA_JVM_DIR=/usr/java/default/jre/lib/amd64/server
export JAVA_HOME=/usr/java/default
export HADOOP_HOME=/opt/hadoop063
export FUSE_HOME=/opt/hadoop063
export HDFS_FUSE_HOME=/opt/hadoop063/hdfs-fuse
export HDFS_FUSE_CONF=/opt/hadoop063/hdfs-fuse/conf
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_hadoop-lv_root 50G 1.4G 46G 3% /
tmpfs 504M 0 504M 0% /dev/shm
/dev/sda1 485M 30M 430M 7% /boot
/dev/mapper/vg_hadoop-lv_home 29G 1.2G 27G 5% /home
hdfs-fuse 768M 64M 704M 9% /home/hadoop/hdfsmount
$ ls /opt/hadoop063/hdfsmount/
tmp user
Use the below fusermount command to unmount the Hadoop file system:
$fusermount -u /opt/hadoop063/hdfsmount
RESULT:
Thus, the fuse mount is ready to use as local file system.
AIM:
To write a word count program to demonstrate the use of Map and Reduce tasks.
WORD COUNT:
The WordCount program reads text files and counts how often words occur.
The input is text files and the output is text files, each line of which contains a word and the count
of how often it occurred, separated by a tab.
Each mapper takes a line as input and breaks it into words. It then emits a key/value pair of the
word and 1. Each reducer sums the counts for each word and emits a single key/value with the
word and sum.
As an optimization, the reducer is also used as a combiner on the map outputs. This reduces the
amount of data sent across the network by combining each word into a single record.
STEPS:
Make a directory for mapreduce in your hadoop user and enter into mapreduce directory.
Open an editor (vi or nano), and write the code for WordCountMapper.java,
WordCountReducer.java, WordCount.java
WordCountMapper.java Contains the map function implementation.
WordCountReducer.java Contains the reduce function implementation.
WordCount.java Contains the code coordinating the execution of the map
and reduce functions.
To compile the WordCount program, execute the following commands in the mapreduce
folder
$ javac -cp hadoop-core-1.0.4.jar *.java
$ jar cvf WordCount.jar *.class
The first command compiles the program using the classes developed by Hadoop
(i.e., hadoop-core-1.0.4.jar)
The second command creates a jar file called WordCount.jar
To run the WordCount program in Hadoop
$ start-all.sh
The command starts the Hadoop services.
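The directory creation and file copy referred to below can be done, for example, as follows (a sketch; MyFile.txt is a sample text file in the local working directory):
$ hdfs dfs -mkdir /input
$ hdfs dfs -put MyFile.txt /input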
The former command creates a directory called input in the Hadoop Distributed File System
The second command will copy MyFile.txt into the input folder in HDFS.
Finally execute the following commands:
$ bin/hadoop jar WordCount.jar WordCount /input /output
$ hdfs dfs -get /output/*
The first command runs the WordCount program in Hadoop.
Note that the command specifies the names of:
the class where the main method resides (cf. the WordCount.java file)
the HDFS input and output directories.
PROGRAM:
// WordCountMapper.java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Mapper;
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>
{
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(LongWritable key, Text value, Context context) throws IOException,
InterruptedException
{
//taking one line at a time and tokenizing the same
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
//iterating through all the words available in that line and forming the key value pair
while (tokenizer.hasMoreTokens())
{
word.set(tokenizer.nextToken());
//sending to output collector which in turn passes the same to reducer
context.write(word, one);
}
}
}
// WordCountReducer.java
import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Reducer;
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable>
{
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException,
InterruptedException
{
//summing the counts emitted by the mappers (and combiner) for each word
int sum = 0;
for (IntWritable value : values)
{
sum += value.get();
}
context.write(key, new IntWritable(sum));
}
}
// WordCount.java -- coordinates the execution of the map and reduce functions
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class WordCount extends Configured implements Tool
{
public int run(String[] args) throws Exception
{
Job job = new Job(getConf(), "wordcount");
job.setJarByClass(WordCount.class);
job.setMapperClass(WordCountMapper.class);
//the reducer is also used as a combiner on the map outputs
job.setCombinerClass(WordCountReducer.class);
job.setReducerClass(WordCountReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
//to accept the hdfs input and output dir at run time
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
return job.waitForCompletion(true) ? 0 : 1;
}
public static void main(String[] args) throws Exception
{
System.exit(ToolRunner.run(new Configuration(), new WordCount(), args));
}
}
WORKING LOGIC:
OUTPUT:
Sometimes 2
you 2
win 1
learn 1
RESULT:
Thus, a word count program to demonstrate the use of Map and Reduce tasks was successfully
executed and verified.
AIM:
To write a program to use the API's of Hadoop to interact with it.
DESCRIPTION:
Hadoop's org.apache.hadoop.fs.FileSystem is a generic class for accessing and managing HDFS
files and directories located in a distributed environment.
File contents are stored in datanodes as multiple blocks of equal size.
The namenode keeps the information about those blocks and their metadata.
FileSystem reads and streams data by accessing the blocks in sequence order.
FileSystem first gets the block information from the NameNode, then opens, reads and closes each block one by one.
Once a block is finished, it closes that block and opens the next one.
HDFS replicates blocks to give higher reliability and scalability.
If one of the datanodes is temporarily down (fails), the read moves to another datanode in the cluster.
FileSystem uses FSDataOutputStream and FSDataInputStream to write and read the contents as
streams.
The Configuration class passes the Hadoop configuration information to FileSystem.
It loads core-site.xml and core-default.xml through the class loader and keeps Hadoop configuration
properties such as fs.defaultFS, fs.default.name etc.
FileSystem builds on the Java IO stream classes, mainly DataInputStream and DataOutputStream, for
IO operations.
The delete method on FileSystem removes a file or directory permanently.
STEPS:
Open a Konsole, Login as 'root' user
Make a directory named 'HDFSJava' (Any name)
# mkdir -p /opt/hadoop063/HDFSJava
Copy hadoop-0.20.2-core.jar and commons-logging-1.1.1.jar into the created
directory
# cp /home/student/Desktop/hadoop-0.20.2-core.jar /opt/hadoop063/HDFSJava
# cp /home/student/Desktop/commons-logging-1.1.1.jar /opt/hadoop063/HDFSJava
Change to HDFSJava directory
# cd /opt/hadoop063/HDFSJava
Extract the two jar files
# jar -xvf hadoop-0.20.2-core.jar
# jar -xvf commons-logging-1.1.1.jar
Open an editor (vi or nano), type the java coding.
# vi HDFSWordCounter.java
Type the following commands
# source .bash_profile
# start-all.sh
# jps
Check whether datanode and namenode are running
Compile the java program
# javac -cp hadoop-0.20.2-core.jar HDFSWordCounter.java
Run the java program
# java HDFSWordCounter
PROGRAM:
import java.io.IOException;
//hadoop imports
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
public class HDFSWordCounter
{
public static void main(String[] args)
{
//message and file name correspond to the sample output shown below
String msgIn = "Count the amount of words in this sentence!";
Path file = new Path("hdfsinput.txt");
try
{
//connect to HDFS using the configuration (core-site.xml) available on the classpath
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
//create the file in HDFS and write the message into it
FSDataOutputStream fout = fs.create(file);
fout.writeUTF(msgIn);
//Print to screen
System.out.println(file.getName());
System.out.println(msgIn);
fout.close();
}
catch (IOException ioe)
{
System.err.println("IOException during operation " + ioe.toString());
System.exit(1);
}
}
}
OUTPUT:
# java HDFSWordCounter
hdfsinput.txt
Count the amount of words in this sentence!
RESULT:
Thus the hdfsinput.txt file has been written to HDFS and the output is displayed.
AIM:
To find procedure to run the virtual machine of different configuration. Check how many virtual
machines can be utilized at particular time.
KVM:
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of
something, including virtual computer hardware platforms, operating systems, storage devices, and
computer network resources. Kernel-based Virtual Machine (KVM) is a virtualization infrastructure for
the Linux kernel that turns it into a hypervisor.
1. To run KVM, you need a processor that supports hardware virtualization. So check that your
CPU supports hardware virtualization and that the KVM device file /dev/kvm is present.
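For example (a sketch; a non-zero count from the first command indicates VT-x/AMD-V support, and /dev/kvm appears once the kvm modules are loaded):
$ egrep -c '(vmx|svm)' /proc/cpuinfo
$ ls -l /dev/kvm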
5. Install the necessary packages listed below (an example install command follows the list):
qemu-kvm
libvirt-bin
bridge-utils
virt-manager
qemu-system
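On Ubuntu, these can typically be installed in one step (a sketch; the package names are those listed above):
$ sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager qemu-system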
6. Creating VMs
virt-install --connect qemu:///system -n hardy -r 512 -f hardy1.qcow2 -s 12 \
-c ubuntu-14.04.2-server-amd64.iso --vnc --noautoconsole \
--os-type linux --os-variant ubuntuHardy
OUTPUT:
1. New virtual machine is created using KVM:
RESULT:
Thus the virtual machine of different configuration is created successfully.
AIM :
To find procedure to attach virtual block to the virtual machine and check whether it holds the
data even after the release of the virtual machine.
PROCEDURE:
This experiment is to be performed through the portal. Log in to the OpenStack portal and, under
Instances, create virtual machines.
Under Volumes, create a storage block of the available capacity. Attach/mount the storage block volumes
to virtual machines, then unmount a volume and reattach it.
Volumes are block storage devices that you attach to instances to enable persistent storage. You
can attach a volume to a running instance or detach a volume and attach it to another instance at
any time. You can also create a snapshot from or delete a volume. Only administrative users can
create volume types.
Create a volume
1. Log in to the dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Compute tab and click Volumes category.
4. Click Create Volume.
In the dialog box that opens, enter or select the following values.
Volume Name: Specify a name for the volume.
Description: Optionally, provide a brief description for the volume.
Volume Source: Select one of the following options:
o No source, empty volume: Creates an empty volume. An empty volume does not
contain a file system or a partition table.
o Image: If you choose this option, a new field for Use image as a source displays.
You can select the image from the list.
o Volume: If you choose this option, a new field for Use volume as a
source displays. You can select the volume from the list. Options to use a
snapshot or a volume as the source for a volume are displayed only if there are
existing snapshots or volumes.
Type: Leave this field blank.
Size (GB): The size of the volume in gibibytes (GiB).
Availability Zone: Select the Availability Zone from the list. By default, this value is set
to the availability zone given by the cloud provider (for example, us-west or apac-south).
For some cases, it could be nova.
5. Click Create Volume.
The dashboard shows the volume on the Volumes tab.
You can view the status of a volume in the Volumes tab of the dashboard. The volume is either
Available or In-Use.
Now you can log in to the instance and mount, format, and use the disk.
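For example, inside the instance (a sketch; the device name /dev/vdb and the mount point are assumptions and may differ on your system):
$ sudo mkfs.ext4 /dev/vdb
$ sudo mkdir /mnt/data
$ sudo mount /dev/vdb /mnt/data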
Edit a volume
1. Log in to the dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Compute tab and click Volumes category.
4. Select the volume that you want to edit.
5. In the Actions column, click Edit Volume.
6. In the Edit Volume dialog box, update the name and description of the volume.
7. Click Edit Volume.
Delete a volume
When you delete an instance, the data in its attached volumes is not deleted.
1. Log in to the dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Compute tab and click Volumes category.
4. Select the check boxes for the volumes that you want to delete.
5. Click Delete Volumes and confirm your choice.
A message indicates whether the action was successful.
RESULT:
Thus the new virtual block is successfully added to existing virtual machine.
AIM:
To show the virtual machine migration based on certain conditions from one node to the other.
PROCEDURE:
To demonstrate virtual machine migration, two machines must be configured in one cloud. Take a
snapshot of the running virtual machine, copy the snapshot file to the destination machine, and
restore the snapshot there. On restoring the snapshot, the VM running on the source will be migrated
to the destination machine.
1. To list the VMs you want to migrate, run:
$ nova list
2. After selecting a VM from the list, run this command where VM_ID is set to the ID in the list
returned in the previous step:
$ nova show VM_ID
3. Use the nova migrate command.
$ nova migrate VM_ID
4. To migrate an instance and watch the status, use this example script:
#!/bin/bash
# Provide usage
usage() {
echo "Usage: $0 VM_ID"
exit 1
}
[[ $# -eq 0 ]] && usage
# Migrate the VM to an alternate hypervisor
echo -n "Migrating instance to alternate host"
VM_ID=$1
nova migrate $VM_ID
VM_OUTPUT=`nova show $VM_ID`
VM_STATUS=`echo "$VM_OUTPUT" | grep status | awk '{print $4}'`
while [[ "$VM_STATUS" != "VERIFY_RESIZE" ]]; do
echo -n "."
sleep 2
VM_OUTPUT=`nova show $VM_ID`
VM_STATUS=`echo "$VM_OUTPUT" | grep status | awk '{print $4}'`
done
nova resize-confirm $VM_ID
echo " instance migrated and resized."
echo;
# Show the details for the VM
echo "Updated instance details:"
nova show $VM_ID
# Pause to allow users to examine VM details
read -p "Pausing, press <enter> to exit."
RESULT:
Thus the virtual machine was migrated from one node to another using the nova migrate command.
AIM:
To show the virtual machine migration based on the certain condition from one node to the
other.
1. Open virt-manager.
2. Connect to the target host physical machine by clicking on the File menu, then clicking Add
Connection.
3. Add connection:
Username: Enter the username for the remote host physical machine.
Hostname: Enter the hostname or IP address of the remote host physical machine.
Click the Connect button. An SSH connection is used in this example, so the specified user's password
must be entered in the next step.
Open the list of guests inside the source host physical machine (click the small triangle on the left of the
host name), right-click on the guest that is to be migrated (guest1-rhel6-64 in this example) and click
Migrate.
virt-manager now displays the newly migrated guest virtual machine running in the destination host. The
guest virtual machine that was running in the source host physical machine is now listed in the Shutoff
state.
RESULT:
Thus the virtual machine is migrated from one node to another node successfully.
AIM :
To find procedure to install storage controller and interact with it.
PROCEDURE:
The storage controller is installed as the Swift and Cinder components when installing OpenStack.
Interaction with the storage is done through the portal.
OpenStack Object Storage (swift) is used for redundant, scalable data storage using clusters of
standardized servers to store petabytes of accessible data. It is a long-term storage system for
large amounts of static data which can be retrieved and updated.
OpenStack Object Storage provides a distributed, API-accessible storage platform that can be
integrated directly into an application or used to store any type of file, including VM images,
backups, archives, or media files. In the OpenStack dashboard, you can only manage containers
and objects.
In OpenStack Object Storage, containers provide storage for objects in a manner similar to a
Windows folder or Linux file directory, though they cannot be nested. An object in OpenStack
consists of the file to be stored in the container and any accompanying metadata.
Create a container
1. Select the appropriate project from the drop down menu at the top left.
2. On the Project tab, open the Object Store tab and click Containers category.
3. Click Create Container.
4. In the Create Container dialog box, enter a name for the container, and then click Create
Container.
You have successfully created a container.
Upload an object
1. Log in to the dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Object Store tab and click Containers category.
4. Select the container in which you want to store your object.
5. Click Upload Object.
The Upload Object To Container: <name> dialog box appears. <name> is the name of
the container to which you are uploading the object.
6. Enter a name for the object.
7. Browse to and select the file that you want to upload.
8. Click Upload Object.
You have successfully uploaded an object to the container.
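The same operations can also be performed from the command line with the swift client from python-swiftclient, for example (a sketch; container and file names are illustrative, and your OpenStack credentials must already be sourced):
$ swift post mycontainer
$ swift upload mycontainer myfile.txt
$ swift list mycontainer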
Manage an object
To edit an object
1. Log in to the dashboard.
2. Select the appropriate project from the drop down menu at the top left.
3. On the Project tab, open the Object Store tab and click Containers category.
4. Select the container in which you want to store your object.
5. Click the menu button and choose Edit from the dropdown list.
The Edit Object dialog box is displayed.
6. Browse to and select the file that you want to upload.
7. Click Update Object.
RESULT:
Thus the procedure to install storage controller and interact with it was done successfully.
GLOBUS TOOLKIT
INTRODUCTION
The open source Globus Toolkit is a fundamental enabling technology for the "Grid," letting
people share computing power, databases, and other tools securely online across corporate,
institutional, and geographic boundaries without sacrificing local autonomy.
The toolkit includes software services and libraries for resource monitoring, discovery, and
management, plus security and file management.
In addition to being a central part of science and engineering projects that total nearly a half-
billion dollars internationally, the Globus Toolkit is a substrate on which leading IT companies
are building significant commercial Grid products.
MANDATORY PREREQUISITE:
1. cp /home/stack/downloads/* /usr/local
2. pwd
cd ..
cd ..
6. unzip junit3.8.1.zip
cd junit3.8.1
pwd
export JUNIT_HOME=/usr/local/grid/SOFTWARE/junit3.8.1
cd ..
pwd
7. dpkg -i globus-toolkit-repo_latest_all.deb
8. apt-get update
AIM:
To Develop new web service for calculator using Globus toolkit
PROCEDURE :
When you start the Globus Toolkit container, a number of services start up. The service for this
task will be a simple Math service that can perform basic arithmetic for a client.
The Math service will access a resource with two properties:
The service itself will have three remotely accessible operations that operate upon value:
(a) add, that adds a to the resource property value.
(b) subtract that subtracts a from the resource property value.
(c) getValueRP that returns the current value of value.
Usually, the best way to begin any programming task is with an overall description of what you
want the code to do, which in this case is the service interface. The service interface describes what
the service provides in terms of the names of its operations, their arguments and return values. A Java
interface for our service is:
public interface Math {
public void add(int a);
public void subtract(int a);
public int getValueRP();
}
It is possible to start with this interface and create the necessary WSDL file using the standard Web
service tool called Java2WSDL. However, the WSDL file for GT 4 has to include details of resource
properties that are not given explicitly in the interface above.
Hence, we will provide the WSDL file.
Step 1: Getting the Files
All the required files are provided and come directly from [1]. The MathService source code files can be
found at http://www.gt4book.com (http://www.gt4book.com/downloads/gt4book-examples.tar.gz).
WSDL service interface description file -- The WSDL service interface description file is provided
within the GT4services folder at: GT4Services\schema\examples\MathService_instance\Math.wsdl
This file, and discussion of its contents, can be found in Appendix A. Later on we will need to modify
this file, but first we will use the existing contents that describe the Math service above.
Service code in Java -- For this assignment, both the code for service operations and for the resource
properties are put in the same class for convenience. More complex services and resources would be
defined in separate classes.
The Java code for the service and its resource properties is located within the GT4services folder at:
GT4services\org\globus\examples\services\core\first\impl\MathService.java.
Deployment Descriptor -- The deployment descriptor gives several different important sets of
information about the service once it is deployed. It is located within the GT4services folder at:
GT4services\org\globus\examples\services\core\first\deploy-server.wsdd.
Step 2: Building the Math Service
It is now necessary to package all the required files into a GAR (Grid Archive) file.
The build tool ant from the Apache Software Foundation is used to achieve this, as shown below.
In the Client Window, run the build script from the GT4services directory with:
globus-build-service.py first
The output should look similar to the following:
Buildfile: build.xml
BUILD SUCCESSFUL
Total time: 8 seconds
During the build process, a new directory named build is created in your GT4Services directory.
All of your stubs and class files that were generated will be in that directory and its subdirectories.
More importantly, there is a GAR (Grid Archive) file called
org_globus_examples_services_core_first.gar. The GAR file is the package that contains every file that
is needed to successfully deploy your Math Service into the Globus container.
The files contained in the GAR file are the Java class files, WSDL, compiled stubs, and the deployment
descriptor.
Step 3: Deploying the Math Service
If the container is still running in the Container Window, then stop it using Control-C.
To deploy the Math Service, you will use a tool provided by the Globus Toolkit called globus-deploy-gar.
In the Container Window, issue the command:
globus-deploy-gar org_globus_examples_services_core_first.gar
Successful output of the command is:
Step 5: Start the Container for your Service
If the container is not running, restart the Globus container from the Container Window with:
globus-start-container -nosec
Step 6: Run the Client
To start the client from your GT4Services directory, do the following in the Client Window, which
passes the GSH of the service as an argument:
java -classpath build\classes\org\globus\examples\services\core\first\impl\:%CLASSPATH%
org.globus.examples.clients.MathService_instance.Client
http://localhost:8080/wsrf/services/examples/core/first/MathService
which should give the output:
Current value: 15
Current value: 10
Step 7: Undeploy the Math Service and Kill a Container
Before we can add functionality to the Math Service (Section 5), we must undeploy the service.
In the Container Window, kill the container with a Control-C.
Then to undeploy the service, type in the following command:
globus-undeploy-gar org_globus_examples_services_core_first
which should result in the following output:
Undeploying gar...
Deleting /.
Undeploy successful
RESULT:
Thus to develop new web service for calculator using Globus toolkit was done successfully.
AIM
To develop a new OGSA-Compliant Web service in Grid Service using .NET language.
PROCEDURE
The first issue is related to the implementation of the Grid Service Specification in a MS .NET
language. In the framework of the GRASP project, we have selected the implementation of the Grid Service
Specification provided by the Grid Computing Group of the University of Virginia, named OGSI.NET.
To manage the dynamic nature of information describing the resources, GT3 leverages on Service
Data Providers. In the MS environment, we rely on Performance Counters and Windows Management
Instrumentation (WMI) architecture to implement the Service Data Providers. For each component of a
MS system we have a performance object (e.g. Processor Object) gathering all the performance data of
the related entity. Each performance object provides a set of Performance Counters that retrieves specific
performance data regarding the resource associated to the performance object. For example, the
%ProcessorTime is a Performance Counter of the Processor Object representing the percentage of time
during which the processor is executing a thread. The performance counters are based on services at the
operating system level, and they are integrated in the .NET platform. In fact, .NET Framework provides a
set of APIs that allows the management of the performance counters.
To perform the collection and provisioning of the performance data to an index service, we
leverage on Windows Management Instrumentation (WMI) architecture. WMI is a unifying architecture
that allows the access to data from a variety of underlying technologies. WMI is based on the Common
Information Model (CIM) schema, which is an industry standard specification maintained by the
Distributed Management Task Force (DMTF). WMI provides a three-tiered approach for collecting and
providing management data. This approach consists of a standard mechanism for storing data, a standard
protocol for obtaining and distributing management data, and a WMI provider. A WMI provider is a
Win32 Dynamic-Link
Library (DLL) that supplies instrumentation data for parts of the CIM schema. Figure 3 shows the
architecture of WMI.
When a request for management information comes from a consumer (see Figure 3) to the CIM Object
Manager (CIMOM), the latter evaluates the request, identifies which provider has the information, and
returns the data to the consumer. The consumer only requests the desired information, and never knows
the information source or any details about the way the information data are extracted from the underlying
API. The CIMOM and the CIM repository are implemented as a system service, called WinMgmt, and
are accessed through a set of Component Object Model (COM) interfaces.
Active Directory (AD) provides a structured, hierarchical storage of information about interesting objects, such as users,
computers and services, inside an enterprise network. AD provides rich support for locating and working with these
objects, allowing organizations to efficiently share and manage information about network resources
and users. It acts as the central authority for network security, letting the operating system readily
verify a user's identity and control his/her access to network resources.
Our goal is to implement a Grid Service that, taking the role of a consumer (see Figure 1), queries at
regular intervals the Service Data Providers of a VO (see Figure 2) to obtain resources information,
collect and aggregate these information, and allows to perform searches, among the resources of an
organization, matching a specified criteria (e.g. to search for a machine with a specified number of
CPUs). In our environment this Grid Service is called Global Information Grid Service (GIGS) (see
Figure 2).
To avoid the catalog growing too big and becoming slow and clumsy, AD is partitioned
into units, shown as the triangles in Figure 3 (a). For each unit there is at least one domain controller. The AD
partitioning scheme emulates the Windows 2000 domain hierarchy (see Figure 3 (b)). Consequently, the
unit of partition for AD services is the domain. GIGS has to implement an interface in order to obtain,
using a publish/subscribe method, a set of data from Service Data Providers describing an active directory
object. Such data are then recorded in the AD by using Active Directory Service Interface (ADSI), a
COM based interface to perform common tasks, such as adding new objects.
After having stored those data in AD, the GIGS should be able to query AD for retrieving such data. This
is obtained exploiting the Directory Services.
RESULT
Thus the program for developing an OGSA-compliant web service was successfully executed.
AIM:
To develop new Grid Service
PROCEDURE :
1. Setting up Eclipse, GT4, Tomcat, and the other necessary plug-ins and tools
2. Creating and configuring the Eclipse project in preparation for the source files
3. Adding the source files (and reviewing their major features)
4. Creating the build/deploy Launch Configuration that orchestrates the automatic generation of the
remaining artifacts, assembling the GAR, and deploying the grid service into the Web services
container
5. Using the Launch Configuration to generate and deploy the grid service
6. Running and debugging the grid service in the Tomcat container
7. Executing the test client
8. To test the client, simply right-click the Client.java file and select Run > Run... from the pop-up
menu (See Figure 27).
9. In the Run dialog that is displayed, select the Arguments tab and enter
http://127.0.0.1:8080/wsrf/services/examples/ProvisionDirService in the Program Arguments:
textbox.
10. Run dialog
11. Run the client application by simply right-clicking the Client.java file and selecting Run > Java
Application
OUTPUT
Run Java Application
RESULT:
Thus to develop new Grid Service was done successfully.
AIM :
To develop applications using Java Grid APIs.
PROCEDURE:
OUTPUT
RESULT:
Thus to develop Applications using Java Grid APIs was done successfully.
AIM :
To develop secured applications using the basic security mechanisms available in the Globus Toolkit.
PROCEDURE:
Mandatory prerequisite:
Tomcat v4.0.3
Axis beta 1
Commons Logging v1.0
Java CoG Kit v0.9.12
Xerces v2.0.1
If you are testing under a user account, make sure that the proxy or certificates and keys
are readable by Tomcat. For testing purposes you can use user proxies or certificates
instead of host certificates e.g.:
<Connector className="org.apache.catalina.connector.http.HttpConnector"
port="8443" minProcessors="5" maxProcessors="75"
enableLookups="true" authenticate="true"
acceptCount="10" debug="1" scheme="httpg" secure="true">
<Factory className="org.globus.tomcat.catalina.net.GSIServerSocketFactory"
proxy="/tmp/x509u_up_neilc"
debug="1"/>
</Connector>
If you do test using user proxies, make sure the proxy has not expired!
Add a GSI Valve in the <engine> section:
<Valve className="org.globus.tomcat.catalina.valves.CertificatesValve"
debug="1" />
Copy gsiaxis.jar to the WEB-INF/lib directory of your Axis installation under Tomcat.
You should ensure that the following jars from the axis/lib directory are in your classpath:
o axis.jar
o clutil.jar
o commons-logging.jar
o jaxrpc.jar
o log4j-core.jar
o tt-bytecode.jar
o wsdl4j.jar
You should also have these jars in your classpath:
o gsiaxis.jar
o cog.jar
o xerces.jar (or other XML parser)
The extensions made to Tomcat allow us to receive credentials through a transport-level security
mechanism. Tomcat exposes these credentials, and Axis makes them available as part of the
MessageContext.
Alpha 3 version
Let's assume we already have a web service called MyService with a single method, myMethod. When a
SOAP message request comes in over the GSI httpg transport, the Axis RPC despatcher will look for the
same method, but with an additional parameter: the MessageContext. So we can write a new myMethod
which takes an additional argument, the MessageContext.
This can be illustrated in the following example:
package org.globus.example;
import org.apache.axis.MessageContext;
import org.globus.axis.util.Util;
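// A minimal sketch (not the original tutorial code): the class name and method body are
// illustrative; the Axis RPC dispatcher supplies the MessageContext as the extra argument.
public class MyService {
    public String myMethod(String arg, MessageContext context) {
        System.out.println("MyService: httpg request\n");
        System.out.println("MyService: you sent " + arg);
        return "MyService: you sent " + arg;
    }
}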
Beta 1 version
In the Beta 1 version, you don't even need to write a different method. Instead, the MessageContext is put
on thread-local store and can be retrieved by calling MessageContext.getCurrentContext():
package org.globus.example;
import org.apache.axis.MessageContext;
import org.globus.axis.util.Util;
// Beta 1 version
public String myMethod(String arg) {
MessageContext context = MessageContext.getCurrentContext();
System.out.println("MyService: httpg request\n");
System.out.println("MyService: you sent " + arg);
//the method is declared to return a String, so a value must be returned (the echo here is illustrative)
return "MyService: you sent " + arg;
}
Part of the code provided by ANL in gsiaxis.jar is a utility package which includes the getCredentials()
method. This allows the service to extract the proxy credentials from the MessageContext.
Before the service can be used it must be made available. This is done by deploying the service. This can
be done in a number of ways:
1. Use the Axis AdminClient to deploy the MyService classes.
2. Add the following entry to the server-config.wsdd file in the WEB-INF directory of Axis on Tomcat:
<service name="MyService" provider="java:RPC">
<parameter name="methodName" value="*"/>
<parameter name="className" value="org.globus.example.MyService"/>
</service>
RESULT:
Thus to develop secured applications using basic security in Globus was done successfully.
AIM :
To Develop a Grid portal, where user can submit a job and get the result. Implement it with
and without GRAM concept.
PROCEDURE:
Step 1. Building the GridSphere distribution requires Java 1.5+. You will also need Ant 1.6+, available
at http://jakarta.apache.org/ant.
Step 2. You will also need a Tomcat 5.5.x servlet container available at
http://jakarta.apache.org/tomcat. In addition to providing a hosting environment for GridSphere,
Tomcat provides some of the required XML (JAR) libraries that are needed for compilation.
Step 3. Compiling and Deploying
Step 4. The Ant build script, build.xml, uses the build.properties file to specify any compilation
options. Edit build.properties appropriately for your needs.
Step 5. At this point, simply invoking "ant install" will deploy the GridSphere portlet container to
Tomcat using the default database. Please see the User Guide for more details on configuring the
database.
Step 6. The build.xml supports the following basic tasks:
install -- builds and deploys GridSphere, makes the documentation and installs the database
clean -- removes the build and dist directories including all the compiled classes
update -- updates the existing source code from CVS
compile -- compiles the GridSphere source code
deploy -- deploys the GridSphere framework and all portlets to a Tomcat servlet container located
at $CATALINA_HOME
create-database - creates a new, fresh database with original GridSphere settings, this wipes out
your current database
docs -- builds the Javadoc documentation from the source code
To see all the targets invoke "ant --projecthelp".
Step 7. Startup Tomcat and then go to http://127.0.0.1:8080/gridsphere/gridsphere to see the
portal.
RESULT:
Thus to develop a Grid portal, where user can submit a job and get the result using GRAM
concept was done successfully.