Archive for the ‘GB Network’ Category

The solution to getting the D-Link dge530t gb card working was to use the driver from the Marvell site and NOT the driver from the D-Link site. The D-Link driver would not compile. The Marvell driver compiled fine, as long as the kernel source rpm was installed. And now that it has been compiled on one machine running SLF305, the module can just be copied into the new machine's modules directory. Then, when the D-Link card is installed, kudzu will find it on the next reboot.
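
For reference, the copy step is roughly this (a sketch, assuming the 2.4-style sk98lin.o module built from the Marvell package and the usual drivers/net path; the kernel versions on the build machine and the target machine have to match):

# on the target machine, drop the prebuilt module into place
mkdir -p /lib/modules/$(uname -r)/kernel/drivers/net/sk98lin
cp sk98lin.o /lib/modules/$(uname -r)/kernel/drivers/net/sk98lin/
depmod -a
# load it by hand once; kudzu/modules.conf takes care of it after the reboot
modprobe sk98lin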

I upgraded cdf29 to the 2.4.21-40 kernel and installed one of the D-Link gb network cards. The card is listed with lspci, but it will not load the sk98lin driver. I downloaded the latest source and tried compiling it, but it gave me a bunch of errors. Through google, I found this page:
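
For the record, this is roughly how I was poking at it (the grep pattern is just whatever string lspci prints for the card):

# confirm the card shows up on the pci bus
lspci | grep -i ethernet
# try loading the driver and see what the kernel complains about
modprobe sk98lin
dmesg | tail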

http://ncdf68.fnal.gov/twiki/bin/view/Main/MoversUpgrade#By_Wayne (Hit cancel on the window that comes up and look under “By Wayne”)

And it looks like Fermilab may have fixed this driver, but since I don’t have an account there, I can’t see for myself. I asked one of the students to download the driver file for me and we’ll see if that works.

I also bought a couple more Intel cards because I know those will work.

The networking problems were caused by a route that I didn’t add after putting in the new storage units. Since the storage units have two ethernet adapters, I put one on the campus 10 subnet and one on my own 192.168 subnet, which was going to be set up for gb speeds. The idea was that most of the data transfer would take place on the 192.168 subnet, keeping it off the campus network. Unfortunately, I was unable to get the gb nics working, so I decided to let the cdf users reach the storage units through the 10 subnet while I continued to work on the gb network stuff. The problem is that, by default, traffic from a machine on the 128 subnet that is headed for the 10 subnet goes out to the campus switch and then comes back in on the 10. This basically overwhelmed the switch, causing all our problems. Ron at Network Services told me to add a route to the 10 subnet, so that the trip out to the switch would be eliminated. So, I added the following:

route add -net 10.135.102.0 netmask 255.255.255.0 eth0

So, now our route table looks like this:
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
128.135.102.0   *                255.255.255.0   U     0      0        0 eth0
10.135.102.0    *                255.255.255.0   U     0      0        0 eth0
169.254.0.0     *                255.255.0.0     U     0      0        0 eth0
default         v102router.uchi  0.0.0.0         UG    0      0        0 eth0

To make the route permanent, do the following:

On SLF305, create /etc/sysconfig/network-scripts/route-eth0 (permissions 755) containing:
10.135.102.0/24 dev eth0
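
To pick the route up without waiting for a reboot, either add it by hand again with the route command above, or restart networking, which re-reads route-eth0 (this will briefly drop connections):

service network restart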

Sasha told me to upgrade the kernel on cdf30 and change the network cable, which I did, and now the machine appears to be connecting at gb network speeds. A quick copy of my 1gb test file took 1 min 11 secs, which is a transfer speed of around 15MB/s. Looks good. Next, I need to test the D-Link cards to see if they work too.

The raid systems use the following sata disks:
Western Digital 2500SD with an 8MB cache, spinning at 7200rpm. These disks also have a SATA interface transfer speed of 1.5Gb/s.

Our current linux systems tend to use the WD 2500JB EIDE disks, which have an 8MB cache and spin at 7200rpm, but only have an interface transfer speed of 100MB/s (ATA/100).
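
To see what the disks actually deliver (as opposed to the interface rating), hdparm gives a quick read test; the device names here are just examples and will differ per machine:

# -t times buffered disk reads, -T times cached reads; run on an otherwise idle box
hdparm -tT /dev/hda    # EIDE system disk
hdparm -tT /dev/sda    # raid unit on the 3ware controller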

In the future, machines that connect to the gb network should have disks with the faster transfer speed installed.

Since I was unable to get the gb nics working with linux, I tried getting one to work with Windows. I took out all the pci cards in a Windows machine, put in the gb nic and installed the driver. Transferring a 1gb file from the raid system using sftp (because it was already installed) took about 4 minutes, which is a transfer rate of around 4MB/s. (Some of that may just be sftp’s encryption overhead, so it isn’t a clean test.) It appears that the gb nic doesn’t work properly in Windows either. Though, on the gb switch, the light that indicates gb speeds was lit for the Windows machine. Too bad the speeds didn’t correspond.

When choosing a motherboard, check that the ethernet controller is using either the pci-x or pci-e bus.

Intel: http://www.intel.com/design/network/products/ethernet/linecard_ec.htm

Broadcom: http://www.broadcom.com/products/Enterprise-Small-Office/Gigabit-Ethernet-Controllers

Realtek: http://www.realtek.com.tw/products/products1-1.aspx?lineid=1

I purchased a bunch of gb nics to put in the other machines in the glass room, to let them transfer files to the raid at gb speeds. This turned out to be quite a hassle.

First problem: all of the network cards that I bought, Linksys, 3Com and D-Link, use the same Marvell ethernet controller, which means they use the same driver in linux. The driver is sk98lin or skge, depending on the version of linux. In RH7.3, I got the sk98lin driver compiled and working, but didn’t have any way to check whether it was running at gb speed. Turns out, it wasn’t. In SLF3.0.5, I am unable to get the skge driver working at all, so none of the cards work in this version. I bought a different Intel card that uses the Intel 82541PI ethernet controller and the e1000 driver in SLF305. This card does work, but not at gb speeds.
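
For next time, checking the negotiated link speed once a driver loads is straightforward; eth1 here is just a stand-in for whichever interface the card comes up as:

# ethtool reports the negotiated speed and duplex; look for "Speed: 1000Mb/s"
ethtool eth1
# confirm which driver module is actually loaded
lsmod | grep -e sk98lin -e skge -e e1000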

In looking at the ethernet controllers made by Intel, it appears that any ethernet controller that uses the plain pci bus will NOT work at gb speeds. The limitation appears to be in the pci bus and not the controller. Those controllers that use pci-x or pci-e are able to transfer at higher speeds.
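
Back of the envelope, the numbers bear that out; these are theoretical peaks, and the pci figure is shared by every device on the bus:

echo "plain 32-bit/33MHz pci: $((32 * 33 / 8)) MB/s"   # ~132MB/s for the whole bus
echo "gb ethernet line rate:  $((1000 / 8)) MB/s"      # 125MB/s in each direction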

Since none of the other machines in the glass room have pci-x or pci-e busses on them, it is doubtful that these machines will ever be able to use the gb network at full speed.

Bought a gigabit network switch to connect the new raid machines (cdfs1 & cdfs2) to the older machines in the glass room. The switch is a Linksys SRW2024, a 24-port 10/100/1000 gigabit switch with WebView.

I created a 1GB file on one of the raid machines and copied it to the other raid machine, going through the gb switch. Time to copy was around 12-15 seconds, which is around 70MB/s. That’s fine. Wire speed for the gb network is 1000/8 = 125MB/s. Wire speed for our current 100Mb network is 100/8 = 12.5MB/s, and we’re usually closer to 4-8MB/s.
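
The test itself was nothing fancy; roughly the following, with scp standing in for whatever copy tool gets used and the paths just examples:

# create a 1GB test file on cdfs1, then time the copy to cdfs2 through the gb switch
dd if=/dev/zero of=/tmp/testfile.1gb bs=1M count=1024
time scp /tmp/testfile.1gb cdfs2:/tmp/
# 1024MB in ~14s works out to roughly 70MB/s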

Specs on cdfs1 and cdfs2

AMD Dual Opteron 265 Dual Core SATA Raid System
Chenbro RM414, 4U, Rackmount Chassis (Power Supply 650W Redundant)
Supermicro H8DAE, AMD 8131 PCI-X/AMD 8111 I/O Hub Chipset, supports up to two Dual-Core AMD Opteron 200 Series processors. 8 x 184-pin DIMM sockets support up to 16GB DDR400 DIMMs. Onboard Broadcom 5704 Dual Port Gigabit Ethernet Controller, ATI Rage XL 8MB PCI Graphics, dual ATA 133/100 EIDE channels support up to 4 UDMA IDE devices. Expansion slots: 2 64-bit 133/100MHz PCI-X, 2 64-bit 66MHz PCI-X, 2 32-bit 33MHz PCI slots.
(2) AMD Opteron 265 Dual Core, 1.8 GHz 1MB Cache, 1GHz FSB
(4) 512MB ECC/Registered DDR 400 PC3200 DIMM SDRAM
24x SLIM EIDE CD ROM Drive
(system disk)WD 80GB/8MB Cache/UATA/7200RPM
(2) 3Ware 9550SX-8LP SATAII150 8-Port SATA Raid Controller
(16) WD 2500SD/8MB Cache/7200RPM/SATA Hard Drive