Monday, March 21, 2011

How to increase ASM disk space on Linux with iSCSI using udev instead of ASMLib.

Today I'd like to explain how to add more disk space to an existing Oracle ASM diskgroup on Linux using udev(7) instead of ASMLib.


There have been a few threads about ASM, ASMLib and udev on the oracle-l mailing list (see OCR / VD external vs. normal redundancy using NFS and High Availability Options, for instance). I prefer using udev, so in this example I will use udev to add two new LUNs to the existing +FRA ASM diskgroup.
This particular ASM setup has three diskgroups: +CRS, +DATA and +FRA. The +CRS diskgroup is used for the Grid Infrastructure (GI) files (i.e. OCR and ASM spfile), the +DATA diskgroup is used for the database (RDBMS) data files, and +FRA is the Fast Recovery Area. That's the one we need to increase.

Technical Specs
  • Oracle RAC 11gR2 version 11.2.0.2.
  • Grid Infrastructure (GI) 11.2.0.2 owned by the grid user.
  • Database (RDBMS) 11.2.0.2 owned by the oracle user.
  • RedHat Enterprise Linux 5.6 x86_64 (i.e. 64 bit).
  • Clustered Oracle ZFS Appliance 7410 (a.k.a. Sun Unified Storage).
  • iSCSI LUNs.
Current ASM Configuration

A good idea before you change anything is to take a look at your current setup. In our 11gR2 setup, the ASM instances belong to the Grid Infrastructure (GI), which is owned by the grid user. So we connect to one of the RAC nodes, switch to the grid user, connect to the local ASM instance and query a few dynamic views.

ssh racnode1.example.com
sudo su - grid
rlwrap sqlplus '/ as sysasm'
set linesize 100 pagesize 200;
col name for a10;

col path for a20;
select name, total_mb, free_mb from v$asm_diskgroup where name in ('DATA', 'FRA') order by name;
NAME         TOTAL_MB    FREE_MB
---------- ---------- ----------
DATA            61424      15688
FRA             61424       1604

We can see from the above query result that the FRA diskgroup needs more space. 

select name, path, total_mb, free_mb from v$asm_disk order by path;
NAME       PATH                 TOTAL_MB  FREE_MB
---------- -------------------- --------- --------
CRS_0000   /dev/iscsi/crs1p1        1020      584
CRS_0001   /dev/iscsi/crs2p1        1020      584
CRS_0002   /dev/iscsi/crs3p1        1020      980
DATA_0000  /dev/iscsi/data1p1      15356     3908
DATA_0001  /dev/iscsi/data2p1      15356     3908
DATA_0002  /dev/iscsi/data3p1      15356     3948
DATA_0003  /dev/iscsi/data4p1      15356     3924
FRA_0000   /dev/iscsi/fra1p1       15356     2116
FRA_0001   /dev/iscsi/fra2p1       15356     2088
FRA_0002   /dev/iscsi/fra3p1       15356     2060
FRA_0003   /dev/iscsi/fra4p1       15356     2096

We see from the PATH column above that the next disks we should add to this diskgroup are /dev/iscsi/fra5p1 and /dev/iscsi/fra6p1. We also see from the TOTAL_MB column that all disks are 15 GB. We estimated that we need 30 GB, so let's add two new 15 GB disks to the FRA diskgroup.

SAN Configuration

We start by creating the new iSCSI targets on the storage subsystem. Of course, these commands are specific to the Oracle ZFS Appliance. You will notice that I name the two new targets fra5 and fra6; that's simply because they will be the 5th and 6th targets for the existing +FRA ASM diskgroup. In other words, we already have four disks configured in the +FRA ASM diskgroup.

storage:> configuration san targets iscsi
storage:configuration san targets iscsi> create
storage:configuration san targets iscsi target (uncommitted)> set alias=oracle.asm.fra5
storage:configuration san targets iscsi target (uncommitted)> set interfaces=aggr1560001
storage:configuration san targets iscsi target (uncommitted)> set auth=chap
storage:configuration san targets iscsi target (uncommitted)> set targetchapuser=oracle.asm
storage:configuration san targets iscsi target (uncommitted)> set targetchapsecret
Enter new targetchapsecret:
Re-enter new targetchapsecret:
storage:configuration san targets iscsi target (uncommitted)> commit
storage:configuration san targets iscsi> create
storage:configuration san targets iscsi target (uncommitted)> set alias=oracle.asm.fra6
storage:configuration san targets iscsi target (uncommitted)> set interfaces=aggr1560001
storage:configuration san targets iscsi target (uncommitted)> set auth=chap
storage:configuration san targets iscsi target (uncommitted)> set targetchapuser=oracle.asm
storage:configuration san targets iscsi target (uncommitted)> set targetchapsecret
Enter new targetchapsecret:
Re-enter new targetchapsecret:
storage:configuration san targets iscsi target (uncommitted)> commit
storage:configuration san targets iscsi> ls

We can see that we have the two new iSCSI targets. Keep the IQNs handy, because we need to add them to an iSCSI target group later.

target-000 oracle.asm.fra6
           |
           +-> IQN
               iqn.1986-03.com.sun:02:baf6387e-66ea-efe8-9598-be2110423674


target-001 oracle.asm.fra5
           |
           +-> IQN
               iqn.1986-03.com.sun:02:6513933f-e8ec-e3b8-e0e8-d437025e805b

Go to the iSCSI target groups context and list the current groups. Note how the oracle.asm.fra group already has four iSCSI target IQNs configured. It's to this group that we will add our two new IQNs.

storage:configuration san targets iscsi> groups
storage:configuration san targets iscsi groups> ls


group-003 oracle.asm.fra
          |
          +-> TARGETS
              iqn.1986-03.com.sun:02:2ee6c0ff-cf5a-e1fe-db28-e62342e0cc57
              iqn.1986-03.com.sun:02:816eaee0-d7bf-4bb6-9c1e-a1f1a7269a3c
              iqn.1986-03.com.sun:02:751374b9-9b4a-e570-cc95-9d1ac88cb1bf
              iqn.1986-03.com.sun:02:bfb89059-163b-e1c0-8220-bd79198b4ccd

Add the two new IQNs to the group. Don't worry if the set targets command looks daunting: just hit <tab> and the appliance will provide you with a list of possible entries. Since we are only adding two targets, they are easy to spot in that list.

storage:configuration san targets iscsi groups> select group-003
storage:configuration san targets iscsi group-003> ls


Properties:
                          name = oracle.asm.fra
                        targets = iqn.1986-03.com.sun:02:2ee6c0ff-cf5a-e1fe-db28-e62342e0cc57,iqn.1986-03.com.sun:02:816eaee0-d7bf-4bb6-9c1e-a1f1a7269a3c,iqn.1986-03.com.sun:02:751374b9-9b4a-e570-cc95-9d1ac88cb1bf,iqn.1986-03.com.sun:02:bfb89059-163b-e1c0-8220-bd79198b4ccd


storage:configuration san targets iscsi group-003> set targets=iqn.1986-03.com.sun:02:2ee6c0ff-cf5a-e1fe-db28-e62342e0cc57,iqn.1986-03.com.sun:02:816eaee0-d7bf-4bb6-9c1e-a1f1a7269a3c,iqn.1986-03.com.sun:02:751374b9-9b4a-e570-cc95-9d1ac88cb1bf,iqn.1986-03.com.sun:02:bfb89059-163b-e1c0-8220-bd79198b4ccd,iqn.1986-03.com.sun:02:6513933f-e8ec-e3b8-e0e8-d437025e805b,iqn.1986-03.com.sun:02:baf6387e-66ea-efe8-9598-be2110423674
storage:configuration san targets iscsi group-003> commit
storage:configuration san targets iscsi group-003> cd ..
storage:configuration san targets iscsi groups> ls

We can now see that the oracle.asm.fra group has the new IQNs.

group-003 oracle.asm.fra
          |
          +-> TARGETS
              iqn.1986-03.com.sun:02:2ee6c0ff-cf5a-e1fe-db28-e62342e0cc57
              iqn.1986-03.com.sun:02:816eaee0-d7bf-4bb6-9c1e-a1f1a7269a3c
              iqn.1986-03.com.sun:02:751374b9-9b4a-e570-cc95-9d1ac88cb1bf
              iqn.1986-03.com.sun:02:bfb89059-163b-e1c0-8220-bd79198b4ccd
              iqn.1986-03.com.sun:02:6513933f-e8ec-e3b8-e0e8-d437025e805b
              iqn.1986-03.com.sun:02:baf6387e-66ea-efe8-9598-be2110423674

We're now ready to create the new LUNs. Since we already have the oracle.asm project, we will add the new LUNs to this project.

storage:configuration san targets iscsi groups> cd /
storage:> shares select oracle.asm
storage:shares oracle.asm> lun fra5
storage:shares oracle.asm/fra5 (uncommitted)> set volblocksize=64k
storage:shares oracle.asm/fra5 (uncommitted)> set volsize=15g
storage:shares oracle.asm/fra5 (uncommitted)> set targetgroup=oracle.asm.fra
storage:shares oracle.asm/fra5 (uncommitted)> set initiatorgroup=oracle.rac
storage:shares oracle.asm/fra5 (uncommitted)> set nodestroy=true
storage:shares oracle.asm/fra5 (uncommitted)> commit
storage:shares oracle.asm> lun fra6
storage:shares oracle.asm/fra6 (uncommitted)> set volblocksize=64k
storage:shares oracle.asm/fra6 (uncommitted)> set volsize=15g
storage:shares oracle.asm/fra6 (uncommitted)> set targetgroup=oracle.asm.fra
storage:shares oracle.asm/fra6 (uncommitted)> set initiatorgroup=oracle.rac
storage:shares oracle.asm/fra6 (uncommitted)> set nodestroy=true
storage:shares oracle.asm/fra6 (uncommitted)> commit

Configure Cluster Nodes iSCSI Subsystem

We now have to make sure our cluster nodes can log in to the new LUNs. Start with the first node, then repeat the same steps on all the other nodes of the grid.

Our first priority is to discover all the iSCSI targets available on the SAN. But first, we must find the SAN's IP address; ask your storage admin for this info. For this example we will be using 192.168.18.32. We'll also use that address to filter out other iSCSI targets.

sudo iscsiadm -m discovery -t sendtargets -p 192.168.18.32 -o nonpersistent | grep 192.168.18.32 | cut -d' ' -f2 | sort -n | tee /tmp/iscsi.targets.all


iqn.1986-03.com.sun:02:056e76e0-170c-ccaf-e356-b322c85b5f71
iqn.1986-03.com.sun:02:2ee6c0ff-cf5a-e1fe-db28-e62342e0cc57
iqn.1986-03.com.sun:02:5134d4d0-d16c-6285-e77a-918d90433fee
iqn.1986-03.com.sun:02:57f62633-a35f-4f06-9b53-93143aa19e2f
iqn.1986-03.com.sun:02:6513933f-e8ec-e3b8-e0e8-d437025e805b
iqn.1986-03.com.sun:02:751374b9-9b4a-e570-cc95-9d1ac88cb1bf
iqn.1986-03.com.sun:02:816eaee0-d7bf-4bb6-9c1e-a1f1a7269a3c
iqn.1986-03.com.sun:02:87dbf9ec-0f5e-6954-c2f4-d78ede1162b9
iqn.1986-03.com.sun:02:8a01a50a-f158-e417-cc0e-dd4248498ab5
iqn.1986-03.com.sun:02:baf6387e-66ea-efe8-9598-be2110423674
iqn.1986-03.com.sun:02:bfb89059-163b-e1c0-8220-bd79198b4ccd
iqn.1986-03.com.sun:02:c6f5a325-d6d2-cc52-a9d6-f9ebd14eb002
iqn.1986-03.com.sun:02:e7b96198-a97a-c016-fbbc-c535dce980d4

This discovers quite a few iSCSI targets. But we don't need all of them, so we must compare them with the ones already configured on the node. To list those, run:

sudo iscsiadm -m session | cut -d' ' -f4 | sort -n | tee /tmp/iscsi.targets.configured


iqn.1986-03.com.sun:02:056e76e0-170c-ccaf-e356-b322c85b5f71
iqn.1986-03.com.sun:02:2ee6c0ff-cf5a-e1fe-db28-e62342e0cc57
iqn.1986-03.com.sun:02:5134d4d0-d16c-6285-e77a-918d90433fee
iqn.1986-03.com.sun:02:57f62633-a35f-4f06-9b53-93143aa19e2f
iqn.1986-03.com.sun:02:751374b9-9b4a-e570-cc95-9d1ac88cb1bf
iqn.1986-03.com.sun:02:816eaee0-d7bf-4bb6-9c1e-a1f1a7269a3c
iqn.1986-03.com.sun:02:87dbf9ec-0f5e-6954-c2f4-d78ede1162b9
iqn.1986-03.com.sun:02:8a01a50a-f158-e417-cc0e-dd4248498ab5
iqn.1986-03.com.sun:02:bfb89059-163b-e1c0-8220-bd79198b4ccd
iqn.1986-03.com.sun:02:c6f5a325-d6d2-cc52-a9d6-f9ebd14eb002
iqn.1986-03.com.sun:02:e7b96198-a97a-c016-fbbc-c535dce980d4

We now have two files (i.e. /tmp/iscsi.targets.all and /tmp/iscsi.targets.configured). We can therefore compare them and find out which IQNs we don't have configured yet.

diff -u /tmp/iscsi.targets.all /tmp/iscsi.targets.configured | grep '^-iqn' | sed -e "s/^-//g" | tee /tmp/iscsi.targets.new


iqn.1986-03.com.sun:02:6513933f-e8ec-e3b8-e0e8-d437025e805b
iqn.1986-03.com.sun:02:baf6387e-66ea-efe8-9598-be2110423674

Alright, we now have the two IQNs we need to add to our cluster nodes. To configure them, we add each new IQN to the local iSCSI target database and then log in to it.

cat /tmp/iscsi.targets.new | while read IQN; do
echo "Working on IQN ${IQN}..."
sudo iscsiadm -m node -o new -T ${IQN} -p 192.168.18.32:3260
sudo iscsiadm -m node -T ${IQN} -p 192.168.18.32:3260 -l
echo
done

Let's double-check that the new IQNs are in place:

cat /tmp/iscsi.targets.new | while read IQN; do
echo "Looking for $IQN..."
sudo iscsiadm -m session | grep $IQN
echo
done

You can also take a look at /var/log/messages to see the kernel messages.

sudo less /var/log/messages

Clear the temporary files as we don't need them anymore.

rm /tmp/iscsi.targets.*

Persistent iSCSI Naming with udev

We now have to create persistent names for those two new iSCSI targets. We do this with udev(7). But first, we need to know the new LUNs' GUIDs, which we find on the SAN.

ssh root@storage.example.com

List the LUNs in the oracle.asm project and note the GUIDs of our new LUNs, fra5 and fra6.

storage:> shares select oracle.asm ls


fra5              15G     600144F0BC0E077400004D68014F0001
fra6              15G     600144F0BC0E077400004D68017C0002

So we know our new LUNs' GUIDs. We'll use those to create our udev(7) rules. But first we need to find each LUN's scsi_id with the aptly-named scsi_id(8).

ls -1 /dev/sd* | egrep -vw 'sda|sda[0-9]' | sed -e "s/\/dev\///g" | while read i; do printf "$i\t"; sudo scsi_id -ugs /block/$i; done


sdas 3600144f0bc0e077400004d68014f0001
sdat 3600144f0bc0e077400004d68017c0002
sday 3600144f0bc0e077400004d68014f0001
sdaz 3600144f0bc0e077400004d68017c0002

As you can see, the scsi_id found using scsi_id(8) looks very close to the GUID from the SAN. Also notice that there are two outputs for each GUID. I'm not quite sure why, though it's most likely because each LUN is visible through more than one path; if anyone knows for sure, please tell me.

LUN Name  SAN GUID                          /dev Name  scsi_id(8) result
--------  --------------------------------  ---------  ---------------------------------
fra5      600144F0BC0E077400004D68014F0001  sdas       3600144f0bc0e077400004d68014f0001
fra6      600144F0BC0E077400004D68017C0002  sdat       3600144f0bc0e077400004d68017c0002
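To make the relationship concrete, here's a small shell sketch using the fra5 values from the table above. It checks that the scsi_id value is simply the SAN GUID, lowercased, with a leading "3" (which appears to be the NAA designator type that scsi_id(8) prepends):

```shell
# fra5 values from the table above.
san_guid="600144F0BC0E077400004D68014F0001"
scsi_id="3600144f0bc0e077400004d68014f0001"

# Lowercase the SAN GUID and prepend the "3" NAA designator type.
expected="3$(echo "$san_guid" | tr '[:upper:]' '[:lower:]')"

# Confirm the two identifiers refer to the same LUN.
[ "$expected" = "$scsi_id" ] && echo "fra5: GUID and scsi_id match"
```

This is only a sanity check; the authoritative mapping is the scsi_id(8) output itself.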

With this info, we can build the udev(7) rules and add them to the existing rules found in /etc/udev/rules.d/20-names.rules. Make sure each udev rule is on a single line, not split across multiple ones. The %n substitution appends the kernel number, so the whole disk becomes /dev/iscsi/fra5p and its first partition /dev/iscsi/fra5p1. If you'd like a copy of the entire 20-names.rules file, just drop me a message and I'll share it with you.

sudo rcsdiff /etc/udev/rules.d/20-names.rules
sudo co -l /etc/udev/rules.d/20-names.rules
sudo vi /etc/udev/rules.d/20-names.rules


<20-names.rules>
# /dev/iscsi/fra5.
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -gus %p", RESULT=="3600144f0bc0e077400004d68014f0001", NAME="iscsi/fra5p%n", OWNER="grid", GROUP="oinstall", MODE="0660", OPTIONS="last_rule"


# /dev/iscsi/fra6.
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -gus %p", RESULT=="3600144f0bc0e077400004d68017c0002", NAME="iscsi/fra6p%n", OWNER="grid", GROUP="oinstall", MODE="0660", OPTIONS="last_rule"

</20-names.rules>


sudo rcsdiff /etc/udev/rules.d/20-names.rules
sudo ci -u /etc/udev/rules.d/20-names.rules

Now check that your udev configuration is good. To do so, we'll use the two /dev names found in the table above.

sudo udevtest /block/sdas | grep '^udev_rules_get_name:'
udev_rules_get_name: rule applied, 'sdas' becomes 'iscsi/fra5p'

sudo udevtest /block/sdat | grep '^udev_rules_get_name:'
udev_rules_get_name: rule applied, 'sdat' becomes 'iscsi/fra6p'

OK, so we now know that our udev(7) rules are good. Let's reboot the cluster node to make sure the iSCSI persistent naming is properly configured after a reboot. It'll save lots of problems later! Plus, this is a grid, right? So rebooting one node is safe.

sudo shutdown -r now

Make sure to perform all the iSCSI steps on all your cluster nodes!

Once the node is back online, we need to create the iSCSI partitions. Our two new LUNs are mapped to /dev/iscsi/fra5p and /dev/iscsi/fra6p by our udev(7) configuration, so we use those device paths with fdisk(1) to create the new partitions /dev/iscsi/fra5p1 and /dev/iscsi/fra6p1.
Do this from a single node! There is NO need to run fdisk(1) on all nodes. Create a single partition that spans the entire iSCSI volume on each of the new iSCSI disks.

sudo fdisk /dev/iscsi/fra5p


Command (m for help): p
Command (m for help): n
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-15360, default 1): <enter>
Last cylinder or +size or +sizeM or +sizeK (1-15360, default 15360): <enter>
Command (m for help): w
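If you prefer to script this step, the interactive answers above (new, primary, partition 1, two defaults, write) can be fed to fdisk(1) non-interactively. A hedged dry-run sketch that only prints the commands; remove the echo once you've verified the device names:

```shell
# Dry run: print the fdisk invocation for each new disk instead of running it.
# The answer sequence n, p, 1, <enter>, <enter>, w matches the session above.
for dev in /dev/iscsi/fra5p /dev/iscsi/fra6p; do
    echo "printf 'n\np\n1\n\n\nw\n' | sudo fdisk $dev"
done
```

Scripting fdisk this way is fragile across fdisk versions, so double-check the prompts on your system first.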

Make sure to create a partition on all the new iSCSI volumes!

Once you're finished, double-check that all disks have a partition on them. Don't worry about the Disk /dev/iscsi/fra3p1 doesn't contain a valid partition table error. What we're looking for here is whether the kernel sees the correct LUN size.

sudo fdisk -l /dev/iscsi/*p

Now that all the disks have a single partition, you need to tell all the other nodes about those new partitions. So make sure to execute the following commands on all cluster nodes!

ssh racnode2.example.com
sudo partprobe /dev/iscsi/*
sudo fdisk -l /dev/iscsi/*p

Add Disks to ASM

We're now ready to add the new iSCSI disks to ASM.
First, check the ASM_DISKSTRING parameter. We need it to be set to /dev/iscsi/*p1. If it isn't, keep in mind that you can change it dynamically without restarting the ASM instance. But just be careful, as it has to be set correctly...

ssh racnode1.example.com
sudo su - grid
rlwrap sqlplus "/ as sysasm"
SQL> show parameter asm_diskstring;


NAME     TYPE     VALUE
------------------------------------ --------------- ----------------
asm_diskstring     string     /dev/iscsi/*p1
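For the record, had ASM_DISKSTRING been set to something else, it could be changed on the fly. A hedged sketch (run as sysasm, and double-check the value before committing to it):

```sql
SQL> alter system set asm_diskstring = '/dev/iscsi/*p1' sid='*';
```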

OK, so our ASM_DISKSTRING is just fine. Let's check whether the new disks have been discovered by ASM:

SQL> select path, header_status, name from v$asm_disk order by path;


PATH       HEADER_STATU NAME
------------------------------ ------------ ------------------------------
/dev/iscsi/crs1p1       MEMBER    CRS_0000
/dev/iscsi/crs2p1       MEMBER    CRS_0001
/dev/iscsi/crs3p1       MEMBER    CRS_0002
/dev/iscsi/data1p1       MEMBER    DATA_0003
/dev/iscsi/data2p1       MEMBER    DATA_0002
/dev/iscsi/data3p1       MEMBER    DATA_0001
/dev/iscsi/data4p1       MEMBER    DATA_0000
/dev/iscsi/fra1p1       MEMBER    FRA_0001
/dev/iscsi/fra2p1       MEMBER    FRA_0000
/dev/iscsi/fra3p1       MEMBER    FRA_0003
/dev/iscsi/fra4p1       MEMBER    FRA_0002
/dev/iscsi/fra5p1       CANDIDATE
/dev/iscsi/fra6p1       CANDIDATE

Sure enough, we now have two CANDIDATE disks. Before we add them to the FRA diskgroup, let's just take a look at the current state of our ASM diskgroups.

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup order by name;


NAME       STATE   TYPE     TOTAL_MB FREE_MB
------------------------------ ----------- --------------- ---------- ----------
CRS       MOUNTED   NORMAL 3060    2148
DATA       MOUNTED   EXTERN 61424   18664
FRA       MOUNTED   EXTERN 61424   40108

Keep the above output handy, as we will compare it to the same query result after we've added the disks to the FRA diskgroup.
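As an aside, the alter diskgroup ... add disk statement also accepts a rebalance power clause to control how aggressively ASM rebalances afterwards. A hedged sketch, with the power value chosen arbitrarily (the valid range depends on the diskgroup's compatible.asm setting):

```sql
SQL> alter diskgroup FRA add disk '/dev/iscsi/fra5p1', '/dev/iscsi/fra6p1' rebalance power 4;
```

I didn't use it here, so the rebalance ran at the default ASM_POWER_LIMIT.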

SQL> alter diskgroup FRA add disk '/dev/iscsi/fra5p1', '/dev/iscsi/fra6p1';
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup where name = 'FRA';


NAME       STATE   TYPE     TOTAL_MB FREE_MB
------------------------------ ----------- --------------- ---------- ----------
FRA       MOUNTED   EXTERN 92136   70804
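That total matches what we expect: the old 61424 MB plus two new 15356 MB disks. A quick check:

```shell
# Old FRA total (61424 MB) plus two new 15356 MB disks.
new_total=$((61424 + 2 * 15356))
echo "$new_total"
```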

As we can see, the FRA diskgroup has grown from a total of 61424 MB to 92136 MB. Here's another way to look at it.

SQL> select d.header_status, d.path, d.name, dg.name
     from v$asm_disk d 
     inner join v$asm_diskgroup dg on d.group_number = dg.group_number 
     order by path, dg.name;


HEADER_STATU PATH    NAME   NAME
------------ ------------------------------ ------------------------------ -----
MEMBER     /dev/iscsi/crs1p1    CRS_0000   CRS
MEMBER     /dev/iscsi/crs2p1    CRS_0001   CRS
MEMBER     /dev/iscsi/crs3p1    CRS_0002   CRS
MEMBER     /dev/iscsi/data1p1    DATA_0003   DATA
MEMBER     /dev/iscsi/data2p1    DATA_0002   DATA
MEMBER     /dev/iscsi/data3p1    DATA_0001   DATA
MEMBER     /dev/iscsi/data4p1    DATA_0000   DATA
MEMBER     /dev/iscsi/fra1p1    FRA_0001   FRA
MEMBER     /dev/iscsi/fra2p1    FRA_0000   FRA
MEMBER     /dev/iscsi/fra3p1    FRA_0003   FRA
MEMBER     /dev/iscsi/fra4p1    FRA_0002   FRA
MEMBER     /dev/iscsi/fra5p1    FRA_0004   FRA
MEMBER     /dev/iscsi/fra6p1    FRA_0005   FRA

You can follow the ASM rebalance operation by querying the V$ASM_OPERATION view, or by checking the space used on the FRA disks. At the end of the rebalance operation, all disks should consume almost the same amount of space, as we can see from the following query.
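For instance, a sketch of such a monitoring query (column names per the 11gR2 V$ASM_OPERATION reference; no rows means the rebalance has completed):

```sql
SQL> select group_number, operation, state, power, sofar, est_work, est_minutes
     from v$asm_operation;
```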

SQL> select path, mode_status, state, total_mb, free_mb from v$asm_disk where path like '/dev/iscsi/fra%p1' order by path;


PATH       MODE_ST STATE  TOTAL_MB    FREE_MB
------------------------------ ------- -------- ---------- ----------
/dev/iscsi/fra1p1       ONLINE  NORMAL     15356 11796
/dev/iscsi/fra2p1       ONLINE  NORMAL     15356 11800
/dev/iscsi/fra3p1       ONLINE  NORMAL     15356 11792
/dev/iscsi/fra4p1       ONLINE  NORMAL     15356 11796
/dev/iscsi/fra5p1       ONLINE  NORMAL     15356 11812
/dev/iscsi/fra6p1       ONLINE  NORMAL     15356 11808

That's it, you now have more disks in your +FRA diskgroup without using ASMLib.

If you have questions or comments on this, please leave a comment!

David
