GPFS Exercise

Exercise #1A: Install and configure a GPFS cluster
Objectives:
Use the GPFS web-based administration tool to install a GPFS cluster
Requirements:
Node names and IP address provided by instructor
Node1:________________________
Node2:________________________
Account/Cluster name used for exercise
Name: root
Password: __________________
ClusterName: _______________
8. Enter the IP address and root password for node2 and select "Add host." Close the "Task" dialog when the operation completes.
9. The hosts have now been added. Select "Next" to go to the Install and Verify Packages page. On this page select "Check existing package installation." Close the "Task" dialog when the check is complete.
10. GPFS ships an open-source component called the GPL layer that allows it to support a wide variety of Linux kernels. The GPL layer installation page checks that the GPL layer is built and installed correctly; if it is not, the installer will complete the build and install. Select "Check existing GPL layer installation". Close the "Task" dialog when the check is complete.
11. GPFS verifies the network configuration of all nodes in the cluster. Select "Check current settings" to
verify the network configuration. Close the "Task" dialog when the check is complete.
12. GPFS uses ssh (or another remote command tool) for some cluster operations. The installer will verify that ssh is configured properly for a GPFS cluster. Select "Check current settings" to verify the ssh configuration. Close the "Task" dialog when the check is complete.
13. It is recommended, though not required, that all the servers synchronize the time using a protocol such
as NTP. For this lab we will skip the NTP setup. Choose "Skip Setup" to continue.
14. Next, you set the name of the GPFS cluster. Enter the cluster name and select "Next."
15. The last step is to define the primary and secondary cluster configuration servers. Since this is a two-node cluster, we will leave the defaults. Select "Next" to continue.
16. Select Next to complete the cluster configuration. Close the "Task" dialog when the configuration is
complete.
17. When you select "Finish" you will be directed to the cluster management page.
18. The GPFS cluster is now installed and running.
Exercise #1B: Install and configure a GPFS cluster
Objectives
Verify the system environment
Create a GPFS cluster
Define NSDs
Create a GPFS file system
You will need:
An AIX 5.3 system
o The configuration on Linux is very similar, except that it uses rpm packages instead of AIX binary images and the Linux administration commands differ
At least 4 hdisks
GPFS 3.3 software with the latest PTF
Step 1: Verify Environment
1. Verify nodes properly installed
1. Check that the oslevel is supported
On the system run oslevel
Check the GPFS FAQ: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp
2. Is the installed OS level supported by GPFS? Yes No
3. Is there a specific GPFS patch level required for the installed OS? Yes No
4. If so, what patch level is required? ___________
2. Verify nodes configured properly on the network(s)
1. Write the name of Node1: ____________
2. Write the name of Node2: ____________
3. From node 1 ping node 2
4. From node 2 ping node 1
If the pings fail, resolve the issue before continuing.
3. Verify node-to-node ssh communications (for this lab you will use ssh and scp for communications)
1. On each node create an ssh key. To do this use the command ssh-keygen; if you don't specify a blank passphrase with -N, then press Enter each time you are prompted, to create a key with no passphrase, until you are returned to the shell prompt. The result should look something like this:
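(Representative run only; the file locations, prompts, and key details below are illustrative and will differ on your system.)
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/.ssh/id_rsa): <Enter>
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /.ssh/id_rsa.
Your public key has been saved in /.ssh/id_rsa.pub.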
4. Add the public key from node2 to the authorized_keys file on node1
cat /tmp/id_rsa.pub >> /.ssh/authorized_keys
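Steps 5 and 6 are not shown in this extract; they presumably exchange the key in the other direction and test the connection. A minimal sketch, assuming the node names recorded above:
On node1, copy node1's public key to node2:
scp /.ssh/id_rsa.pub node2:/tmp/id_rsa.pub
On node2, append it to the authorized_keys file:
cat /tmp/id_rsa.pub >> /.ssh/authorized_keys
Then verify password-less ssh works in both directions (accept the host key if prompted):
ssh node2 date
ssh node1 date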
7. Suppress ssh banners by creating a .hushlogin file in the root home directory
touch /.hushlogin
4. Verify the disks are available to the system
For this lab you should have four disks available for use in addition to hdisk0.
1. Use lspv to verify the disks exist (a sample listing follows this step)
2. Ensure you see 4 disks besides hdisk0
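A representative lspv listing (disk names and PVIDs will vary; the point is that four disks besides hdisk0 are present and not yet assigned to a volume group):
# lspv
hdisk0  00c4790a8716d4c2  rootvg  active
hdisk1  none              None
hdisk2  none              None
hdisk3  none              None
hdisk4  none              None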
7. Repeat Steps 1-7 on node2
8. On node1 and node2 confirm GPFS is installed using lslpp
lslpp -L gpfs.*
the output should look similar to this
# lslpp -L gpfs.*
Fileset Level State Type Description (Uninstaller)
----------------------------------------------------------------------------
gpfs.base 3.3.0.3 A F GPFS File Manager
gpfs.docs.data 3.3.0.3 A F GPFS Server Manpages and Documentation
gpfs.gui 3.3.0.3 C F GPFS GUI
gpfs.msg.en_US 3.3.0.1 A F GPFS Server Messages U.S. English
Note: Exact versions of GPFS may vary from this example; the important part is that all of these packages are present.
8. Confirm the GPFS binaries are in your path using the mmlscluster command. At this point the command should be found but report that no cluster exists yet:
# mmlscluster
mmlscluster: 6027-1382 This node does not belong to a GPFS cluster.
mmlscluster: 6027-1639 Command failed. Examine previous error messages to determine cause.
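The cluster-creation step itself falls outside this extract. A minimal sketch using the mmcrcluster command, assuming node01 is the primary configuration server and ssh/scp are the remote command tools (all names here are placeholders; substitute your own):
# mmcrcluster -N node01:manager-quorum -p node01 -r /usr/bin/ssh -R /usr/bin/scp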
2. Run the mmlscluster command again to see that the cluster was created
# mmlscluster
3. Set the license mode for the node using the mmchlicense command. Use a server license for this node.
mmchlicense server --accept -N node01
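The intervening steps that start GPFS on node01 and add node2 to the cluster are not shown here; sketches of the likely commands (node names assumed from the steps around them):
# mmstartup -N node01
# mmaddnode -N node02:manager-quorum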
2. Confirm the node was added to the cluster using the mmlscluster command
# mmlscluster
3. Use the mmchcluster command to set node2 as the secondary configuration server
# mmchcluster -s node2
4. Set the license mode for the node using the mmchlicense command. Use a server license for this node.
mmchlicense server --accept -N node02
5. Start node2 using the mmstartup command
# mmstartup -N node2
6. Use the mmgetstate command to verify that both nodes are in the active state
# mmgetstate -a
2. Verify the file system was created correctly using the mmlsfs command
mmlsfs fs1
Is the file system automatically mounted when GPFS starts? _______________
3. Create a file system based on these NSDs using the mmcrfs command (a sketch follows this list)
* Set the file system blocksize to 64KB
* Mount the file system at /gpfs
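A minimal sketch of such an invocation, assuming the NSD descriptor file created earlier is named diskdesc.txt (your file name will differ):
# mmcrfs /gpfs fs1 -F diskdesc.txt -B 64K -A yes
Here /gpfs is the mount point, -B sets the block size, and -A yes mounts the file system automatically when GPFS starts.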
4. Verify the file system was created correctly using the mmlsfs command
> mmlsfs fs1
Inode Information
-----------------
Number of used inodes: 4038
Number of free inodes: 397370
Number of allocated inodes: 401408
Maximum number of inodes: 401408
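The step that creates these filesets is not shown in this extract; creation would use the mmcrfileset command, for example (fileset names taken from the linking step below):
# mmcrfileset fs1 fileset1
# mmcrfileset fs1 fileset2
# mmcrfileset fs1 fileset3
# mmcrfileset fs1 fileset4
# mmcrfileset fs1 fileset5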
3. Link the filesets into the file system using the mmlinkfileset command
# mmlinkfileset fs1 fileset1 -J /gpfs/fileset1
# mmlinkfileset fs1 fileset2 -J /gpfs/fileset2
# mmlinkfileset fs1 fileset3 -J /gpfs/fileset3
# mmlinkfileset fs1 fileset4 -J /gpfs/fileset4
# mmlinkfileset fs1 fileset5 -J /gpfs/fileset5
Now what is the status of fileset1-fileset5? ___________________
/* Set a default rule that sends all files not meeting the other criteria to the system pool */
RULE 'default' SET POOL 'system'
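For context, the full placement policy file would contain one or more SET POOL rules followed by this default. The first rule below is an illustrative assumption (only the default rule appears in this extract):
/* Hypothetical rule: place .dat files in pool1 */
RULE 'datfiles' SET POOL 'pool1' WHERE UPPER(NAME) LIKE '%.DAT'
/* Everything else goes to the system pool */
RULE 'default' SET POOL 'system'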
3. Record the free space in each pool using the mmdf command (Bigfile1)
> mmdf fs1
disk disk size failure holds holds free KB free KB
name in KB group metadata data in full blocks in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system
nsd1 20971520 -1 yes yes 20588288 ( 98%) 930 ( 0%)
nsd2 20971520 -1 yes yes 20588608 ( 98%) 806 ( 0%)
------------- -------------------- -------------------
(pool total) 41943040 41176896 ( 98%) 1736 ( 0%)
Inode Information
-----------------
Number of used inodes: 4044
Number of free inodes: 78132
Number of allocated inodes: 82176
Maximum number of inodes: 82176
6. Questions
Where did the data go for each file?
Bigfile1 ______________
Bigfile1.dat ______________
Bigfile2 ______________
Why?
7. Create a couple more files (These will be used in the next step)
> dd if=/dev/zero of=/gpfs/fileset3/bigfile3 bs=64k count=10000
> dd if=/dev/zero of=/gpfs/fileset4/bigfile4 bs=64k count=10000
3. Actually perform the migration and deletion using the mmapplypolicy command
> mmapplypolicy fs1 -P managementpolicy.txt
4. Review:
Review the output of the mmapplypolicy command to answer these questions.
How many files were deleted? ____________
How many files were moved? ____________
How many KB total were moved? ___________
File /tmp/expool1.bash
#!/bin/bash
# External pool interface script: mmapplypolicy invokes it with the
# operation name (e.g. MIGRATE) in $1 and a file-list path in $2.
# (Adjust the interpreter path for your system.)
dt=$(date +%h%d%y-%H_%M_%S)
results=/tmp/FileReport_${dt}
echo one $1                # debug: show the operation passed in
if [[ $1 == 'MIGRATE' ]]; then
echo Filelist
# Record the number of matching files, then append the list itself
echo "There are $(wc -l < $2) files that match" >> ${results}
cat $2 >> ${results}
echo ----
echo "- The file list report has been placed in ${results}"
echo ----
fi
File listrule1.txt
RULE EXTERNAL POOL 'externalpoolA' EXEC '/tmp/expool1.bash'
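This file defines only the external pool; a companion rule (an assumption here, not shown in the extract) is needed to select files for it. When mmapplypolicy runs, GPFS executes the EXEC script, passing the operation name as $1 and the path of the generated file list as $2, which is what /tmp/expool1.bash checks for. A sketch:
RULE 'toext' MIGRATE TO POOL 'externalpoolA' WHERE FILE_SIZE > 0
> mmapplypolicy fs1 -P listrule1.txt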
2. If these parameters are not set to 2 you will need to recreate the file system. To recreate the file system:
a. Unmount the file system
b. Delete the file system
c. Create the file system and specify -M 2 and -R 2
> mmcrfs /gpfs fs1 -F pooldesc.txt -B 64k -M 2 -R 2
Where pooldesc.txt is the disk descriptor file from Lab 1
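Steps a and b correspond to commands like the following (file system name fs1 assumed from the earlier labs):
> mmumount fs1 -a
> mmdelfs fs1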
2. Change the failure group to 1 for nsd1 and nsd3 and to 2 for nsd2 and nsd4 using the mmchdisk
command.
> mmchdisk fs1 change -d "nsd1:::dataAndMetadata:1:::"
> mmchdisk fs1 change -d "nsd2:::dataAndMetadata:2:::"
> mmchdisk fs1 change -d "nsd3:::dataOnly:1:::"
> mmchdisk fs1 change -d "nsd4:::dataOnly:2:::"
3. Change the file replication status of bigfile10 so that it is replicated in two failure groups using the mmchattr command.
mmchattr -m 2 -r 2 /gpfs/fileset1/bigfile10
Notice that this command takes a few moments to execute: when you change the replication status of a file, the data is copied before the command completes, unless you use the "-I defer" option.
4. Again use the mmlsattr command to check the replication status of the file bigfile10
> mmlsattr /gpfs/fileset1/bigfile10
Did you see a change in the replication status of the file?
2. Use the mmlsattr command to check the replication status of the file bigfile11
mmlsattr /gpfs/fileset1/bigfile11
3. Using the mmchfs command change the default replication status for fs1.
mmchfs fs1 -m 2 -r 2
4. Use the mmlsattr command to check the replication status of the file bigfile11
mmlsattr /gpfs/fileset1/bigfile11
Has the replication status of bigfile11 changed? _________________
5. The replication status of a file does not change until mmrestripefs is run or a new file is created. To test this, create a new file called bigfile12
dd if=/dev/zero of=/gpfs/fileset1/bigfile12 bs=64k count=1000
6. Use the mmlsattr command to check the replication status of the file bigfile12
mmlsattr /gpfs/fileset1/bigfile12
Is the file replicated?
7. You can replicate the existing files in the file system using the mmrestripefs command
mmrestripefs fs1 -R
Exercise #4: Snapshots
In this lab we will use the snapshot feature to create online copies of files.
Objectives:
Create a file system snapshot
Restore a user deleted file from a snapshot image
Manage multiple snapshot images
Requirements:
1. Complete Exercise 1: Installing the cluster
2. A File System - Use Exercise 2 to create a file system if you do not already have one.
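The first steps of this exercise are not included in this extract; they create a test file and take the two snapshot images referenced below. A minimal sketch, assuming the file and snapshot names used in the later steps:
> dd if=/dev/zero of=/gpfs/fileset1/snapfile1 bs=64k count=100
> mmcrsnapshot fs1 snap1
(modify or add files, then take a second image)
> mmcrsnapshot fs1 snap2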
6. Delete the file /gpfs/fileset1/snapfile1. Now that the file is deleted, let's see what is in the snapshots:
7. Take a look at the snapshot images. To view the images change directories to the .snapshots directory
cd /gpfs/.snapshots
What directories do you see? _____________________
9. To restore the file from the snapshot copy the file back into the original location
cp /gpfs/.snapshots/snap2/fileset1/snapfile1 /gpfs/fileset1/snapfile1
10. When you are done with a snapshot you can delete the snapshot. Delete both of these snapshots
using the mmdelsnapshot command
> mmdelsnapshot fs1 snap1
> mmdelsnapshot fs1 snap2
11. Verify the snapshots were deleted using the mmlssnapshot command
mmlssnapshot fs1
Exercise #5: Dynamically Adding a Disk to an Online File System
Objectives:
Add a disk to a storage pool online
Re-balance existing data in the file system
Requirements:
1. Complete Exercise 1: Installing the cluster
2. A File System (Use Exercise 2 to create a file system if you do not already have one).
3. Device to add
/dev/sd___
2. Create a disk descriptor file /gpfs-course/data/adddisk.txt for the new disk using the format
#DiskName:serverlist::DiskUsage:FailureGroup:DesiredName:StoragePool
/dev/sd_:::dataOnly::nsd5:pool1
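Step 3, which creates the NSD from this descriptor file, is not shown in this extract; it would use the mmcrnsd command:
> mmcrnsd -F /gpfs-course/data/adddisk.txt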
4. Verify the disk has been created using the mmlsnsd command
> mmlsnsd
The disk you just added should show as a (free disk)
5. Add the new NSD to the fs1 file system using the mmadddisk command
> mmadddisk fs1 -F /gpfs-course/data/adddisk.txt
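The re-balancing called for in the objectives is done with the mmrestripefs command; a sketch:
> mmrestripefs fs1 -b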
Inode Information
-----------------
Number of used inodes: 4045
Number of free inodes: 78131
Number of allocated inodes: 82176
Maximum number of inodes: 82176