But you must be warned: this ain't my best blog post.
Storage Array Setup
The first thing is of course to install the units in the computer room racks. Each unit uses 2U, for a total of 6U. Be sure to use two separate electrical circuits protected by a UPS. Each Dell unit has two power supplies, for a total of six. Run three power cords from one circuit and the other three from the second circuit; in other words, each unit draws from both circuits, one circuit per power supply.
Connect each MD1200 unit with the SAS cables as explained in the Dell MD1200 Disk Expansion SAS Quick Cabling Guide.
The MD3600f unit has two RAID controllers, one on top of the other. Each controller has two Fibre Channel ports (well, they have more, but we only use two per controller because we have SAN switches). Connect one fibre port of each controller to your first SAN switch, and the other port of each controller to the second SAN switch.
Each RAID controller also has an Ethernet interface. Connect both of them to an Ethernet switch and make sure they are on a VLAN where a DHCP server can assign them IP addresses. Take note of the MAC addresses listed on the unit.
Important: power on both MD1200 disk units before the MD3600f unit.
Wait for the blue LED on the front of both MD1200 units to come on before you power on the MD3600f. The idea is to make sure both MD1200 units are operational before the MD3600f starts. Why? Because the MD1200s are simple JBODs, while the MD3600f has the two RAID controllers that will manage the MD1200 disks, so you want the disks ready when the controllers boot up.
The order is reversed when you power off: shut down the MD3600f first, then both MD1200 units.
After the MD3600f has been powered on, wait for the blue LED at the front. Once you have the blue go-ahead LED, check your DHCP server's log files and look for the MAC addresses of the MD3600f Ethernet interfaces. Record the IP addresses that the DHCP server assigned to them.
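If your DHCP server is a typical ISC dhcpd logging to syslog, something like this will fish out the leases (the log path and the MAC are placeholders; use the addresses you noted from the unit):
grep -i dhcpack /var/log/messages | grep -i 'aa:bb:cc'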
Configure your DNS servers with static IP addresses for the new storage array. You will need two addresses: one for the top controller and another for the bottom controller. Ideally use addresses that are part of a management VLAN, that is, a VLAN to which only administrators have access. You obviously won't be able to use the static DNS names and IPs until we configure them on the controllers later on.
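For example (the host names and addresses here are made up for illustration), the corresponding DNS A records or /etc/hosts entries could look like this:
10.10.50.21    md3600f-ctrl0.company.com    md3600f-ctrl0
10.10.50.22    md3600f-ctrl1.company.com    md3600f-ctrl1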
Management Station
Select a server that will be known as the Dell MD3600f Management Station. This host does not have to be a consumer of the storage found in the storage array. It is just that: a management station that runs Dell's management software. This machine has to have access to your management VLAN (if you use one).
Place the DVD labeled Dell Resource DVD that came with the storage array into the selected machine's DVD drive. Better yet, download the latest version from Dell's support site using your MD3600f Dell Service Tag. That's what we did, and we got the DELL_MDSS_Consolidated_RDVD_4_1_0_88.iso file. Place this file in an NFS directory for easy access, because we will need this ISO again to configure the storage consumers. For example, if /nfs is under automount control (and assuming you have permission), do this:
mkdir -p /nfs/install/dell/md3600f/
mv ~/Downloads/DELL_MDSS_Consolidated_RDVD_4_1_0_88.iso /nfs/install/dell/md3600f/
Then connect to the management machine with X11 forwarding enabled. Make sure its sshd_config(5) allows X11 forwarding via the « X11Forwarding yes » setting. If it's not there, edit the file, restart sshd, and try again.
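A quick way to check on the management host, assuming a stock RHEL/CentOS 6 layout:
grep -i x11forwarding /etc/ssh/sshd_config
And if you did have to change it, restart the daemon:
sudo service sshd restart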
ssh -YX polaris.company.com
Personally, I prefer console mode, but this software has a bug that prevents installation in console mode (bravo, Dell! :S).
sudo mount -o loop /nfs/install/dell/md3600f/DELL_MDSS_Consolidated_RDVD_4_1_0_88.iso /mnt
From the mount point, launch the installer. You obviously need a supported OS for this to work; in our case, the machines run CentOS and RedHat Linux.
Now, on RedHat Linux 6.x, this installation requires several RPMs to be present. We didn't have them all and had trouble starting the installation because of it, so make sure you have all the required software before you launch the installer.
sudo yum -y install NetworkManager gtk2 libcanberra-gtk2 dejavu-sans-fonts
If you don't install those packages, you will have these errors :
Gtk-Message: Failed to load module "gnomesegvhandler": libgnomesegvhandler.so: cannot open shared object file: No such file or directory
(md_launcher_rhel_x86_64.bin:54899): Pango-WARNING **: failed to choose a font, expect ugly output. engine-type='PangoRenderFc', script='latin'
Not to mention that you won't be able to read any text in the GUI, because it will all appear as white squares!
Unfortunately, this yum(8) command will not install just those four packages but as many as 46, depending on your machine's state. Anyway, once this is done, you can launch the installation process. It will run in console mode if your DISPLAY environment variable is not set, which is exactly what I would have done, but because of the bug we have to run it in full GUI mode.
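Before launching it, you can double-check that the packages actually landed:
rpm -q NetworkManager gtk2 libcanberra-gtk2 dejavu-sans-fonts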
Of course we need to run the installer as root, but root's xauth(1) list is different from ours when we use sudo. Run this to fix that:
xauth list | grep `hostname` | while read auth; do sudo xauth add ${auth}; done
This copies all the X authority entries for your normal user into root's xauth list, so the GUI can use your forwarded X11 display.
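You can confirm root picked up the cookies with:
sudo xauth list | grep `hostname`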
cd /mnt
sudo ./autorun
From the GUI, click Install MD Storage Software > Install Management Station ONLY. If you want this machine to also consume storage from the SAN, then use the Default (Recommended) option, which installs both the Management and Client packages.
This will install the required RPMs and kernel modules. It will also force you to reboot the machine :S
Once this is done, issue the reboot command.
sudo shutdown -r now
When the machine is back up, connect to it again with X11 forwarding enabled.
ssh -YX polaris.company.com
From there, we can either launch the GUI or proceed in CLI mode. To launch the GUI, first re-prime root's xauth list as before:
xauth list | grep `hostname` | while read auth; do sudo xauth add ${auth}; done
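Then start the MD Storage Manager client itself. It typically lands under /opt/dell/mdstoragemanager, but the exact path depends on the location you picked during installation, so adjust if needed:
sudo /opt/dell/mdstoragemanager/client/SMclient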
RedHat / CentOS 6.x Linux Client Setup
ssh oxygen.company.com
sudo mount -o loop /nfs/install/dell/md3600f/DELL_MDSS_Consolidated_RDVD_4_1_0_88.iso /mnt
Beware: if you install the Client software with the GUI, it will modify the /etc/multipath.conf file and add quite a few lines, including these:
user_friendly_names no
polling_interval 5
queue_without_daemon no
So if, like me, you don't like user_friendly_names, revisit the file and make sure it is set back to « no » before you reboot the machine or restart multipathd(8). That's why I prefer to install this part manually instead. The linuxrdac driver is a DKMS package, so install dkms first, then the driver:
sudo rpm -Uv /mnt/linux/dkms/dkms-2.1.1.2-1.noarch.rpm
sudo rpm -Uv /mnt/linux/coexistence/resources/linuxrdac/rhel6/linuxrdac-09.03.0C06.0452.2-1dkms.noarch.rpm
At this point we need to reboot.
sudo shutdown -r now
Once we're back, load the RDAC device handler, review /etc/multipath.conf, enable multipathd(8) at boot, start it, and check that the multipath devices show up:
sudo modprobe scsi_dh_rdac
sudo vim /etc/multipath.conf
sudo chkconfig multipathd on
sudo /etc/init.d/multipathd start
sudo multipath -ll
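For reference, the ora01 to ora04 device names used below come from alias entries we added while editing /etc/multipath.conf above. A minimal sketch of what that part of the file can look like (the WWIDs here are placeholders; take the real ones from the multipath -ll output):
multipaths {
    multipath {
        wwid 36782bcb000abcdef0000000000000001
        alias ora01
    }
    multipath {
        wwid 36782bcb000abcdef0000000000000002
        alias ora02
    }
}
Repeat the stanza for ora03 and ora04 with their own WWIDs.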
Next we need to set up the Linux Logical Volume Manager. Install it if it's not already there.
sudo yum -y install lvm2
Put the new LUNs (the multipath devices ora01 through ora04) under LVM control.
sudo pvcreate -M2 --metadatacopies 2 /dev/mapper/ora01 /dev/mapper/ora02 /dev/mapper/ora03 /dev/mapper/ora04
Check that these new LUNs are now under LVM control.
sudo pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/ora01 lvm2 a-- 512.00g 512.00g
/dev/mapper/ora02 lvm2 a-- 512.00g 512.00g
/dev/mapper/ora03 lvm2 a-- 512.00g 512.00g
/dev/mapper/ora04 lvm2 a-- 512.00g 512.00g
Create a volume group on those new physical volumes. We will name this volume group « bkp » as it will be used to store online backups for Oracle databases.
sudo vgcreate bkp /dev/mapper/ora01 /dev/mapper/ora02 /dev/mapper/ora03 /dev/mapper/ora04
Check that the new volume group exists.
sudo vgs
VG #PV #LV #SN Attr VSize VFree
bkp 4 1 0 wz--n- 2.00t 0
Add a logical volume on this new volume group. Notice that we don't specify any mirroring or RAID level, because RAID is handled by the storage array. And it's purpose-built to do so, which means it ought to be better at it than the client machine, no? The new volume is called « ora » in the command below.
sudo lvcreate -L 2T -n ora bkp
Oops! What's that? It says we can't build a 2 TB volume, but pvs(8) above told us we had 2 TB. WTF?!
Well, don't panic. The volume group is actually a bit smaller than 2 TiB because LVM reserves some space for its metadata, so just use the maximum number of free extents the error message reported: 524284 extents. Let's try again using extents instead of a size.
sudo lvcreate -l 524284 -n ora bkp
There you go! Check the new volume.
sudo lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
ora bkp -wi-a---- 2.00t
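As an aside, lvm2 can also be told to grab whatever space is left in the volume group, which avoids the extent arithmetic entirely; this would have given the same result:
sudo lvcreate -l 100%FREE -n ora bkp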
Alright, we're now ready to create an ext4 file system on the new logical volume.
sudo mkfs -t ext4 /dev/bkp/ora
Mount the new file system on /mnt temporarily to check it.
sudo mount /dev/bkp/ora /mnt
Check the new file system.
df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/bkp-ora 2.0T 199M 1.9T 1% /mnt
Add this new file system to the system's fstab(5) so that it's mounted each time the server reboots.
sudo vim /etc/fstab
/dev/mapper/bkp-ora /export/oracle ext4 defaults 1 2
Make sure the mount point exists.
sudo mkdir -p /export/oracle
Try it now baby!
sudo mount /export/oracle
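And one last check that it landed where we expect:
df -h /export/oracle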
That's it.
HTH,
DA+