Edit the file wp-content/themes/default/sidebar.php. Right after this:
<li>
<?php get_search_form(); ?>
</li>
add this:
<li>
<?php wp_list_bookmarks(); ?>
</li>
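Before making an edit like this, it can help to confirm exactly where the search-form block sits in the file. A minimal grep sketch — the temp-file copy of the snippet is just for illustration; on the server you would grep the real wp-content/themes/default/sidebar.php:

```shell
# Make a throwaway copy of the snippet we are looking for, then use
# grep -n to report the line number of the insertion point.
sidebar=$(mktemp)
cat > "$sidebar" <<'EOF'
<li>
<?php get_search_form(); ?>
</li>
EOF
grep -n 'get_search_form' "$sidebar"
```

The wp_list_bookmarks() block then goes right after the closing </li> that follows the reported line.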
Edit ../wp-content/themes/default/index.php and archive.php.
Look for a line like this:
<small><?php the_time('F jS, Y') ?> <!-- by <?php the_author() ?> --></small>
Edit to this:
<small><?php the_time('F jS, Y') ?> by <?php the_author() ?></small>
In archive.php, the "by <?php the_author() ?>" part needs to be added.
My problem from the other day was with rebuilding a raid: I got ECC errors on a different disk than the one being rebuilt. I did a rescan and the ECC errors went away, but the rebuild seemed to be stuck. I contacted 3ware, the maker of our raid card, and was told to do this:
//cdfs3> info c0

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     1396.95   ON     OFF      OFF
u1    RAID-5    REBUILDING     89     64K     1396.95   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    465.76 GB   976773168     WD-WCANU1137212
p1     OK               u0    465.76 GB   976773168     WD-WCANU1090078
p2     OK               u0    465.76 GB   976773168     WD-WCANU1119743
p3     OK               u0    465.76 GB   976773168     WD-WCANU1089924
p4     OK               u1    465.76 GB   976773168     WD-WCANU1136981
p5     OK               u1    465.76 GB   976773168     WD-WCANU1109927
p6     DEGRADED         u1    465.76 GB   976773168     WD-WCAPW5103756
p7     OK               u1    465.76 GB   976773168     WD-WCANU1125288

//cdfs3> maint remove c0 p6
Exporting port /c0/p6 ... Done.

//cdfs3> info c0

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     1396.95   ON     OFF      OFF
u1    RAID-5    DEGRADED       -      64K     1396.95   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    465.76 GB   976773168     WD-WCANU1137212
p1     OK               u0    465.76 GB   976773168     WD-WCANU1090078
p2     OK               u0    465.76 GB   976773168     WD-WCANU1119743
p3     OK               u0    465.76 GB   976773168     WD-WCANU1089924
p4     OK               u1    465.76 GB   976773168     WD-WCANU1136981
p5     OK               u1    465.76 GB   976773168     WD-WCANU1109927
p6     NOT-PRESENT      -     -           -             -
p7     OK               u1    465.76 GB   976773168     WD-WCANU1125288

//cdfs3> rescan
Rescanning controller /c0 for units and drives ...Done.
Found the following unit(s): [none].
Found the following drive(s): [/c0/p6].
//cdfs3> info c0

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     1396.95   ON     OFF      OFF
u1    RAID-5    DEGRADED       -      64K     1396.95   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    465.76 GB   976773168     WD-WCANU1137212
p1     OK               u0    465.76 GB   976773168     WD-WCANU1090078
p2     OK               u0    465.76 GB   976773168     WD-WCANU1119743
p3     OK               u0    465.76 GB   976773168     WD-WCANU1089924
p4     OK               u1    465.76 GB   976773168     WD-WCANU1136981
p5     OK               u1    465.76 GB   976773168     WD-WCANU1109927
p6     OK               -     465.76 GB   976773168     WD-WCAPW5103756
p7     OK               u1    465.76 GB   976773168     WD-WCANU1125288

//cdfs3> /c0/u1 start rebuild disk=6 ignoreecc
Sending rebuild start request to /c0/u1 on 1 disk(s) [6] ... Done.

//cdfs3> info c0

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     1396.95   ON     OFF      OFF
u1    RAID-5    REBUILDING     0      64K     1396.95   OFF    OFF      ON

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    465.76 GB   976773168     WD-WCANU1137212
p1     OK               u0    465.76 GB   976773168     WD-WCANU1090078
p2     OK               u0    465.76 GB   976773168     WD-WCANU1119743
p3     OK               u0    465.76 GB   976773168     WD-WCANU1089924
p4     OK               u1    465.76 GB   976773168     WD-WCANU1136981
p5     OK               u1    465.76 GB   976773168     WD-WCANU1109927
p6     DEGRADED         u1    465.76 GB   976773168     WD-WCAPW5103756
p7     OK               u1    465.76 GB   976773168     WD-WCANU1125288
This seems to be working. I guess I'll know in a few hours if everything is OK.
If this still doesn’t work, I’m supposed to send 3ware an error log.
./tw_cli info c0 diag > error.txt
The easy raid rebuild from yesterday turned out not to be so easy. Checking today, I got this message:
//cdfs3> info c0

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     1396.95   ON     OFF      OFF
u1    RAID-5    REBUILDING     89     64K     1396.95   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    465.76 GB   976773168     WD-WCANU1137212
p1     OK               u0    465.76 GB   976773168     WD-WCANU1090078
p2     OK               u0    465.76 GB   976773168     WD-WCANU1119743
p3     OK               u0    465.76 GB   976773168     WD-WCANU1089924
p4     OK               u1    465.76 GB   976773168     WD-WCANU1136981
p5     ECC-ERROR        u1    465.76 GB   976773168     WD-WCANU1109927
p6     DEGRADED         u1    465.76 GB   976773168     WD-WCAPW5103756
p7     OK               u1    465.76 GB   976773168     WD-WCANU1125288
This does not look good at all, but the disk seems to be OK. So I tried this:
//cdfs3> /c0 rescan
Rescanning controller /c0 for units and drives ...Done.
Found the following unit(s): [none].
Found the following drive(s): [none].

//cdfs3> info c0

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     1396.95   ON     OFF      OFF
u1    RAID-5    REBUILDING     89     64K     1396.95   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    465.76 GB   976773168     WD-WCANU1137212
p1     OK               u0    465.76 GB   976773168     WD-WCANU1090078
p2     OK               u0    465.76 GB   976773168     WD-WCANU1119743
p3     OK               u0    465.76 GB   976773168     WD-WCANU1089924
p4     OK               u1    465.76 GB   976773168     WD-WCANU1136981
p5     OK               u1    465.76 GB   976773168     WD-WCANU1109927
p6     DEGRADED         u1    465.76 GB   976773168     WD-WCAPW5103756
p7     OK               u1    465.76 GB   976773168     WD-WCANU1125288
That’s good. Now, I know disk p6 is good because it was just replaced. What I’m unsure of is whether the rebuild will continue without problems or whether I need to restart it. I think I’ll leave it for a while to see if things start working again. If it stays stuck at 89% for long, I’ll run the rebuild command again.
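To keep an eye on whether that percentage is actually moving, the %Cmpl column can be pulled out of the info output with a little awk. This is only a sketch run against sample rows copied from the output above, not a 3ware-provided tool:

```shell
# Print the %Cmpl value for a given unit if it is REBUILDING.
# In the tw_cli unit table the fields are: Unit UnitType Status %Cmpl ...
get_rebuild_pct() {
  awk -v unit="$1" '$1 == unit && $3 == "REBUILDING" { print $4 }'
}

# Sample rows from the info c0 output above:
sample_output='u0 RAID-5 OK - 64K 1396.95 ON OFF OFF
u1 RAID-5 REBUILDING 89 64K 1396.95 OFF OFF OFF'

printf '%s\n' "$sample_output" | get_rebuild_pct u1   # prints 89
```

On the live box something like ./tw_cli info c0 piped into this filter, run every few minutes, would show whether 89 ever changes.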
I tried to reissue the command and got the following:
//cdfs3> /c0/u1 start rebuild disk=6
Sending rebuild start request to /c0/u1 on 1 disk(s) [6] ... Failed.
(0x0B:0x0032): Unit is rebuilding

//cdfs3> info c0

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      64K     1396.95   ON     OFF      OFF
u1    RAID-5    REBUILDING     89     64K     1396.95   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    465.76 GB   976773168     WD-WCANU1137212
p1     OK               u0    465.76 GB   976773168     WD-WCANU1090078
p2     OK               u0    465.76 GB   976773168     WD-WCANU1119743
p3     OK               u0    465.76 GB   976773168     WD-WCANU1089924
p4     OK               u1    465.76 GB   976773168     WD-WCANU1136981
p5     OK               u1    465.76 GB   976773168     WD-WCANU1109927
p6     DEGRADED         u1    465.76 GB   976773168     WD-WCAPW5103756
p7     OK               u1    465.76 GB   976773168     WD-WCANU1125288
So I think I’ll just have to leave it and hope that it finishes.
fdisk doesn’t always work properly on disks larger than 2TB, and our new servers have been arriving with 3TB raids installed. Here are the instructions for setting up such a disk with parted.
[root@server ~]# parted /dev/sda
GNU Parted 1.6.19
Copyright (C) 1998 - 2004 Free Software Foundation, Inc.

This program is free software, covered by the GNU General Public License.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.

Using /dev/sda
(parted) print
Disk geometry for /dev/sda: 0.000-2860962.000 megabytes
Disk label type: gpt
Minor    Start       End     Filesystem  Name                  Flags
1          0.017   3000.000  ext3
(parted) mklabel gpt
(parted) mkpart primary 0 -0
(parted) quit
Information: Don't forget to update /etc/fstab, if necessary.

[root@server ~]# mkfs.ext3 /dev/sda1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
366215168 inodes, 732406263 blocks
36620313 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=734003200
22352 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@cps3 ~]# tune2fs -c0 -i0 /dev/sda1
tune2fs 1.35 (28-Feb-2004)
Setting maximal mount count to -1
Setting interval between check 0 seconds
The mkpart primary 0 -0 says to use the entire disk for the partition.
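The reason fdisk tops out around 2TB is that MBR partition tables store sector addresses in 32-bit fields, while the GPT label created above uses 64-bit addressing. The limit works out with simple arithmetic (assuming the usual 512-byte sectors):

```shell
# MBR stores LBA start/length as 32-bit values, so with 512-byte
# sectors the largest addressable size is 2^32 * 512 bytes.
# (65536 * 65536 = 2^32, written this way for plain sh arithmetic.)
max_mbr_bytes=$(( 65536 * 65536 * 512 ))
echo "$max_mbr_bytes"   # 2199023255552 bytes, i.e. 2 TiB
```

Anything larger than that, like our 3TB units, needs a GPT label, which is why parted is used instead of fdisk here.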
Until I find a place to store info on all the raids and disks that I’ve replaced, this blog will have to do. So here is the latest replacement.
[root@pnn tw_cli]# ./tw_cli
//pnn> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANY1850307
p1     OK               u0    233.76 GB   490234752     WD-WCANY1790824
p2     OK               u0    233.76 GB   490234752     WD-WCANY1851579
p3     OK               u0    233.76 GB   490234752     WD-WCANY1789766
p4     NOT-PRESENT      -     -           -             -
p5     OK               u0    233.76 GB   490234752     WD-WCANY1787889
p6     OK               u0    233.76 GB   490234752     WD-WCANY1788952
p7     OK               u0    233.76 GB   490234752     WD-WCANY1788819

//pnn> /c1 remove p4
Exporting port /c1/p4 ... Failed.
(0x0B:0x002E): Port empty

//pnn> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANY1850307
p1     OK               u0    233.76 GB   490234752     WD-WCANY1790824
p2     OK               u0    233.76 GB   490234752     WD-WCANY1851579
p3     OK               u0    233.76 GB   490234752     WD-WCANY1789766
p4     NOT-PRESENT      -     -           -             -
p5     OK               u0    233.76 GB   490234752     WD-WCANY1787889
p6     OK               u0    233.76 GB   490234752     WD-WCANY1788952
p7     OK               u0    233.76 GB   490234752     WD-WCANY1788819

//pnn> /c1 rescan
Rescanning controller /c1 for units and drives ...Done.
Found the following unit(s): [none].
Found the following drive(s): [none].
//pnn> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANY1850307
p1     OK               u0    233.76 GB   490234752     WD-WCANY1790824
p2     OK               u0    233.76 GB   490234752     WD-WCANY1851579
p3     OK               u0    233.76 GB   490234752     WD-WCANY1789766
p4     OK               -     233.81 GB   490350672     WD-WCAT1A628774
p5     OK               u0    233.76 GB   490234752     WD-WCANY1787889
p6     OK               u0    233.76 GB   490234752     WD-WCANY1788952
p7     OK               u0    233.76 GB   490234752     WD-WCANY1788819

//pnn> /c1/u0 start rebuild disk=4
Sending rebuild start request to /c1/u0 on 1 disk(s) [4] ... Done.

//pnn> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    REBUILDING     0      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANY1850307
p1     OK               u0    233.76 GB   490234752     WD-WCANY1790824
p2     OK               u0    233.76 GB   490234752     WD-WCANY1851579
p3     OK               u0    233.76 GB   490234752     WD-WCANY1789766
p4     DEGRADED         u0    233.81 GB   490350672     WD-WCAT1A628774
p5     OK               u0    233.76 GB   490234752     WD-WCANY1787889
p6     OK               u0    233.76 GB   490234752     WD-WCANY1788952
p7     OK               u0    233.76 GB   490234752     WD-WCANY1788819

//pnn> exit
[root@pnn tw_cli]#
Download the plugin from wherever it’s located, put the file in the wp-content/plugins folder, and change the ownership to root:apache and the permissions to 644. Then just reload the plugins page in WordPress and activate the new plugin.
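The permissions step can be sketched like this. The plugin file name and directory here are hypothetical stand-ins (a temp dir instead of the real wp-content/plugins), and the chown to root:apache is left out since it needs root:

```shell
# Demonstrate the chmod 644 step on a throwaway plugin file.
plugindir=$(mktemp -d)                   # stand-in for wp-content/plugins
touch "$plugindir/some-plugin.php"       # hypothetical plugin file
chmod 644 "$plugindir/some-plugin.php"   # owner rw, group/world read-only
stat -c '%a' "$plugindir/some-plugin.php"
```

On the server the same chmod would be applied to the real file, followed by chown root:apache so Apache can read but not modify it.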
Exmh was complaining that libpng was not the same version as the one used to compile the program. Since exmh is pretty old, this is not unusual. The package we needed was libpng10; here is its description:
The libpng10 package contains an old version of libpng, a library of functions for creating and manipulating PNG (Portable Network Graphics) image format files. This package is needed if you want to run binaries that were linked dynamically with libpng 1.0.x.
Unfortunately, this package is not included with RHEL5. (It’s only in RHEL4 and RHEL3.) I did find a copy on the web and installed it so that exmh no longer complains about the images.
The rpm is saved in /system/kickstart/install/additions.
//pnn> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANY1850307
p1     OK               u0    233.76 GB   490234752     WD-WCANY1790824
p2     OK               u0    233.76 GB   490234752     WD-WCANY1851579
p3     OK               u0    233.76 GB   490234752     WD-WCANY1789766
p4     OK               u0    233.76 GB   490234752     WD-WCANY1796905
p5     NOT-PRESENT      -     -           -             -
p6     OK               u0    233.76 GB   490234752     WD-WCANY1788952
p7     OK               u0    233.76 GB   490234752     WD-WCANY1788819

//pnn> /c1 remove p5
Exporting port /c1/p5 ... Failed.
(0x0B:0x002E): Port empty
The message above shows that the removal of p5 failed: since the tw_cli software already considered the port empty, there was nothing to remove. So I put in a different disk (one that had been in another raid) and got the following:
//pnn> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF
u1    RAID-5    INOPERABLE     -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANY1850307
p1     OK               u0    233.76 GB   490234752     WD-WCANY1790824
p2     OK               u0    233.76 GB   490234752     WD-WCANY1851579
p3     OK               u0    233.76 GB   490234752     WD-WCANY1789766
p4     OK               u0    233.76 GB   490234752     WD-WCANY1796905
p5     OK               u1    233.76 GB   490234752     WD-WCANY1787889
p6     OK               u0    233.76 GB   490234752     WD-WCANY1788952
p7     OK               u0    233.76 GB   490234752     WD-WCANY1788819
Crazy. It now thought that p5 was part of another raid (u1) on this controller.
//pnn> info

Ctl   Model        Ports   Drives   Units   NotOpt   RRate   VRate   BBU
------------------------------------------------------------------------
c0    9550SX-8LP   8       8        1       0        4       4       -
c1    9550SX-8LP   8       7        2       2        4       4       -
Yep, it now shows two not-optimal units on c1. I tried rescanning, and even removing the disk and reformatting it, but it didn’t matter. The only solution was to delete that second unit, which is always nerve-wracking for fear that I’ll delete the raid I want to keep. So, after quadruple-checking the command, I ran:
//pnn> maint deleteunit c1 u1
Deleting unit c1/u1 ...Done.

//pnn> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANY1850307
p1     OK               u0    233.76 GB   490234752     WD-WCANY1790824
p2     OK               u0    233.76 GB   490234752     WD-WCANY1851579
p3     OK               u0    233.76 GB   490234752     WD-WCANY1789766
p4     OK               u0    233.76 GB   490234752     WD-WCANY1796905
p5     OK               -     233.76 GB   490234752     WD-WCANY1787889
p6     OK               u0    233.76 GB   490234752     WD-WCANY1788952
p7     OK               u0    233.76 GB   490234752     WD-WCANY1788819

//pnn> /c1/u0 start rebuild disk=5
Sending rebuild start request to /c1/u0 on 1 disk(s) [5] ... Done.
That deleted the extra unit and let me start the rebuild on disk 5 of the degraded unit.
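Before running a destructive command like deleteunit, it is worth mechanically double-checking which unit is the broken one. A hedged sketch that filters the unit table out of info-style output — run here against sample rows copied from above, not a real controller:

```shell
# Print the IDs of units whose Status column reads INOPERABLE,
# i.e. the candidates for deletion. Fields: Unit UnitType Status ...
find_inoperable() {
  awk '$1 ~ /^u[0-9]+$/ && $3 == "INOPERABLE" { print $1 }'
}

sample='u0 RAID-5 DEGRADED - 64K 1629.74 OFF OFF OFF
u1 RAID-5 INOPERABLE - 64K 1629.74 OFF OFF OFF'

printf '%s\n' "$sample" | find_inoperable   # prints u1
```

Piping ./tw_cli info c1 through this before typing the deleteunit command gives one more check that u1, not u0, is the target.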
I got an email alert this morning letting me know about a disk failure in one of our raids.
//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     DEVICE-ERROR     u0    233.76 GB   490234752     WD-WCANK2941755
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               u0    233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415
Here is a log of what I did:
//cdfs1> /c1 remove p3
Exporting port /c1/p3 ... Failed.
Drive not degraded port=3

//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     NOT-PRESENT      -     -           -             -
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               u0    233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415
I then replaced the disk with a working one.
//cdfs1> /c1 rescan
Rescanning controller /c1 for units and drives ...Done.
Found the following unit(s): [/c1/u0].
Found the following drive(s): [none].

//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    INOPERABLE     -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     OK               -     233.76 GB   490234752     WD-WCANY1569322
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               -     233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415
That’s bad. Disk p5 dropped out of the unit as well. I tried rescanning a few times, but that didn’t bring it back. So I tried just rebuilding disk 3 anyway.
//cdfs1> /c1/u0 start rebuild disk=3
Sending rebuild start request to /c1/u0 on 1 disk(s) [3] ... Failed.
(0x0B:0x0033): Unit busy
That didn’t work either. So I tried removing disk 3 and putting it back in.
//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    INOPERABLE     -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     OK               -     233.76 GB   490234752     WD-WCANY1569322
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               -     233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415

//cdfs1> /c0 remove p3    <-----OOPS--This should have been /c1 remove p3
Exporting port /c0/p3 ... Done.

//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    INOPERABLE     -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     OK               -     233.76 GB   490234752     WD-WCANY1569322
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               -     233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415

//cdfs1> /c1 rescan
Rescanning controller /c1 for units and drives ...Done.
Found the following unit(s): [/c1/u0].
Found the following drive(s): [none].
//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    INOPERABLE     -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     OK               -     233.76 GB   490234752     WD-WCANY1569322
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               -     233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415

//cdfs1> /c1 remove p3
Exporting port /c1/p3 ... Done.

//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    INOPERABLE     -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     NOT-PRESENT      -     -           -             -
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               -     233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415

//cdfs1> /c1 rescan
Rescanning controller /c1 for units and drives ...Done.
Found the following unit(s): [/c1/u0].
Found the following drive(s): [/c1/p3].
//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    INOPERABLE     -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     OK               -     233.76 GB   490234752     WD-WCANY1569322
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               -     233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415
Nope, that didn’t work either. So I decided to remove disk 5 (but I never took it out of the case) and rescan.
//cdfs1> /c1 remove p5
Exporting port /c1/p5 ... Done.

//cdfs1> /c1 rescan
Rescanning controller /c1 for units and drives ...Done.
Found the following unit(s): [/c1/u0].
Found the following drive(s): [none].

//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     OK               -     233.76 GB   490234752     WD-WCANY1569322
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               u0    233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415
Ah, success. I don’t know why disk 5 got goofy all of a sudden, but I could now rebuild the new disk.
//cdfs1> /c1/u0 start rebuild disk=3
Sending rebuild start request to /c1/u0 on 1 disk(s) [3] ... Done.

//cdfs1> info c1

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    REBUILDING     0      64K     1629.74   OFF    OFF      OFF

Port   Status           Unit  Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0    233.76 GB   490234752     WD-WCANK2922638
p1     OK               u0    233.76 GB   490234752     WD-WCANK2785939
p2     OK               u0    233.76 GB   490234752     WD-WCANK2785884
p3     DEGRADED         u0    233.76 GB   490234752     WD-WCANY1569322
p4     OK               u0    233.76 GB   490234752     WD-WCANK2922794
p5     OK               u0    233.76 GB   490234752     WD-WCANY3726392
p6     OK               u0    233.76 GB   490234752     WD-WCANK2785937
p7     OK               u0    233.76 GB   490234752     WD-WCANK2941415

//cdfs1>
What was weird was that this computer has two raids, and the errors I got were all from raid c1. After it had been rebuilding a while, I went to check on it and found that disk 3 in raid c0 was now showing up as NOT-PRESENT because, stupidly, I had run /c0 remove p3 above instead of /c1 remove p3. So I rescanned c0 and rebuilt that drive on its unit as well.
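A lesson from the /c0-versus-/c1 slip: checking the port's status on the intended controller right before removing it would have caught the mistake, since a healthy port on the wrong controller reports OK. A small sketch of such a check, again run against sample rows rather than live tw_cli output:

```shell
# Print the Status column for a given port from a tw_cli port table.
# Fields in each row: Port Status Unit Size ...
port_status() {
  awk -v p="$1" '$1 == p { print $2 }'
}

sample='p3 OK - 233.76 GB 490234752 WD-WCANY1569322
p4 OK u0 233.76 GB 490234752 WD-WCANK2922794'

printf '%s\n' "$sample" | port_status p3   # prints OK
```

The rule of thumb this encodes: only proceed with a remove when the port on the controller you are about to touch reports something other than OK (DEGRADED, DEVICE-ERROR, ECC-ERROR), or when you have verified the serial number matches the disk you mean to pull.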