Archive for August, 2008

Open the VirusScan console. Double-click on Access Protection. Highlight the rule “Prevent mass mailing worms from sending mail” and click “Edit”. Add alpine.exe to the “Processes to exclude:” box. Click OK and Apply, and alpine should be able to send mail on either port 25 or 587.

Must reload autofs for changes to auto.master to take effect.

/etc/rc.d/init.d/autofs reload
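
For reference, an auto.master line has the form mount-point map-file [options]. A hypothetical entry:

/direct /etc/auto.direct --timeout=60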

We have a VME crate running Red Hat Enterprise Linux 3. Students are writing programs to access data on the VME crate, and to do this we need a computer with compilation tools and drivers for the crate. (The crate only has a 1 GB flash card in it, which does not have enough room to hold all the compilation programs.) So we are using a computer running CERN SL 3.0.8 (kernel 2.4.21-27.0.2.EL.cernsmp), which is close enough to the kernel on the crate (2.4.21-4.EL).

The driver/libraries were downloaded from here, and the documentation came from here.

The software was untarred to /cpv/code/vme where it was compiled with make and installed with make install (as root).

It installed:

ll /usr/lib/libvme.so*
lrwxrwxrwx    1 root     root           11 Aug 18 12:15 /usr/lib/libvme.so -> libvme.so.3
lrwxrwxrwx    1 root     root           13 Aug 18 12:15 /usr/lib/libvme.so.3 -> libvme.so.3.6
-rw-r--r--    1 root     root        12629 Aug 18 12:15 /usr/lib/libvme.so.3.6
cd /lib/modules/2.4.21-27.0.2.EL.cernsmp/kernel/drivers/vme
ll
total 68
-rw-r--r--    1 root     root        63606 Aug 18 12:14 vme_universe.o
cd /usr/include/vme
ll
total 72
-rw-r--r--    1 root     root        33954 Aug 18 12:15 universe.h
-rw-r--r--    1 root     root        10516 Aug 18 12:15 vme_api.h
-rw-r--r--    1 root     root        12804 Aug 18 12:15 vme.h
-rw-r--r--    1 root     root         5682 Aug 18 12:15 vmivme.h
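
Before anything can talk to the bus, the kernel module has to be loaded. A minimal sequence, assuming the module needs no parameters:

depmod -a
modprobe vme_universe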

Then go to the vme_universe/test directory and run make and make install. This creates the following:

cd /usr/bin
ll vme*
-rwxr-xr-x    1 root     root        10748 Aug 18 13:16 vme_acquire_bus*
-rwxr-xr-x    1 root     root        13198 Aug 18 13:16 vme_catch_interrupt*
-rwxr-xr-x    1 root     root        12982 Aug 18 13:16 vme_dma_read*
-rwxr-xr-x    1 root     root        13366 Aug 18 13:16 vme_dma_write*
-rwxr-xr-x    1 root     root        13231 Aug 18 13:16 vme_endian*
-rwxr-xr-x    1 root     root        11461 Aug 18 13:16 vme_generate_interrupt*
-rwxr-xr-x    1 root     root        13034 Aug 18 13:16 vme_peek*
-rwxr-xr-x    1 root     root        13234 Aug 18 13:16 vme_poke*
-rwxr-xr-x    1 root     root        10748 Aug 18 13:16 vme_release_bus*
-rwxr-xr-x    1 root     root        12205 Aug 18 13:16 vme_rmw*
-rwxr-xr-x    1 root     root        13035 Aug 18 13:16 vme_slave_peek*
-rwxr-xr-x    1 root     root        13327 Aug 18 13:16 vme_slave_poke*
-rwxr-xr-x    1 root     root        10708 Aug 18 13:16 vme_sysreset*
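
Student programs just include the headers from /usr/include/vme and link against the library. A hypothetical compile line (the source file name is made up):

gcc -o vme_test vme_test.c -lvme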

The command that brings up the window where one can change which startup items run automatically is:

msconfig

1. Mount the drive with usrquota as an option (see the example fstab line after this list)
2. touch /local/home/aquota.user
3. chmod 600 /local/home/aquota.user
4. quotacheck -vagum
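
For step 1, the /etc/fstab entry looks something like this (filesystem type assumed to be ext3; adjust to match):

/dev/sda1 /local/home ext3 defaults,usrquota 1 2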

[root@server home]# quotacheck -vagum
quotacheck: WARNING -  Quotafile /local/home/aquota.user was probably truncated. Can't save quota settings...
quotacheck: Scanning /dev/sda1 [/local/home] quotacheck: Old group file not found. Usage will not be substracted.
done
quotacheck: Checked 5 directories and 7 files
quotacheck: Skipping server1:/local/code [/direct/code]
quotacheck: Skipping server2:/local/web [/direct/web]
quotacheck: Skipping server1:/local/system [/direct/system]

5. Set up a default quota user

edquota defquota

Edit the file that comes up with the quota values you want.

Disk quotas for user psecquota (uid 6004):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/sda1                         0    1000000    1500000          0   200000   300000
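
To copy the prototype user’s limits to a real account, use edquota’s -p option (the target username here is hypothetical):

edquota -p defquota someuser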

6. Make sure quotas are running

quotaon /dev/sda1
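
To verify, quotaon can print the current state and repquota summarizes usage:

quotaon -pa
repquota /local/home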

After the recent reinstallations, Torque and Maui need to be reinstalled. Since we’ve changed the setup a bit, I think it’s now OK to take the default installation locations (/usr/local). So the commands to configure and install the software are:

cd /system/software/linux/torque-2.0.0
./configure --with-rcp=scp
make
make install (as root)

This puts all the programs in /usr/local/bin and the needed libraries in /usr/local/lib. The spool directory is in /var/spool/torque.

Maui is done with:

cd /system/software/linux/maui-3.2.6p19
./configure
make
make install (as root)

Maui’s home is /usr/local/maui.

Startup scripts are provided in /system/software/linux/torque-2.3.0/contrib/init.d. We only need pbs_mom and pbs_server because we’ll be using Maui as the scheduler. They need to be edited with the correct values; for pbs_server:
PBS_DAEMON=/usr/local/sbin/pbs_server
PBS_HOME=/var/spool/torque

Copy pbs_mom and pbs_server to /etc/rc.d/init.d, then run /etc/rc.d/init.d/pbs_server start. Once it’s running, create the queues with qmgr (which accepts abbreviated commands: c q = create queue, s q = set queue, s s = set server, c n = create node).

[root@cpserver init.d]# qmgr
Max open servers: 4
Qmgr: p s
#
# Set server attributes.
#
set server acl_hosts = cpserver
set server log_events = 511
set server mail_from = adm
set server scheduler_iteration = 600
set server node_check_rate = 150
set server tcp_timeout = 6
Qmgr: c q cp1
Qmgr: s q cp1 queue_type=Execution
Qmgr: s q cp1 from_route_only=True
Qmgr: s q cp1 resources_max.cput=240:00:00
Qmgr: s q cp1 resources_min.cput=00:00:01
Qmgr: s q cp1 enabled=True
Qmgr: s q cp1 started=True
Qmgr: c q cp
Qmgr: s q cp queue_type=Route
Qmgr: s q cp route_destinations=cp1
Qmgr: s q cp route_held_jobs=True
Qmgr: s q cp route_waiting_jobs=True
Qmgr: s q cp enabled=True
Qmgr: s q cp started=True
Qmgr: s s scheduling=True
Qmgr: s s acl_host_enable=True
Qmgr: s s acl_hosts=*.uchicago.edu
Qmgr: s s default_queue=cp
Qmgr: s s query_other_jobs=True
Qmgr: s s resources_default.nodect=1
Qmgr: s s resources_default.nodes=1
Qmgr: s s resources_max.walltime=96:00:00
Qmgr: s s submit_hosts = cpserver
Qmgr: c n cpserver np=2
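
A quick sanity check at this point:

qstat -q
pbsnodes -a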

Maui’s startup script is provided in /system/software/linux/maui-3.2.6p19/contrib/service-scripts/redhat.maui.d. Edit this file:
MAUI_PREFIX=/usr/local/maui
Also change the user that it runs as. We don’t have a maui user, so I used my own username instead. That turned out to be a big problem, so it has to run as root.

Then copy the edited script to /etc/rc.d/init.d/maui.

Now chkconfig --add pbs_mom, pbs_server, and maui. Restart them all and submit a test job, as spelled out below.
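
A minimal sketch of those steps (the test job is just hostname read from stdin):

chkconfig --add pbs_mom
chkconfig --add pbs_server
chkconfig --add maui
/etc/rc.d/init.d/pbs_server restart
/etc/rc.d/init.d/pbs_mom restart
/etc/rc.d/init.d/maui restart
echo hostname | qsub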

The job was accepted into the queue, but never executed. Oops, I forgot to edit maui.cfg: add ADMIN1 and ADMIN3 and change the RMCFG line:

ADMIN1   root maryh
ADMIN3   ALL

#RMCFG[CPS1] TYPE=PBS@RMNMHOST@
RMCFG[base] TYPE=PBS

Test job now works, so can move on to the compute node.

The compute node doesn’t need Maui, only Torque, so simply run make install on the compute node.

In /var/spool/torque, check that server_name has the proper name. Copy the pbs_mom startup script from the server to this node. Start it up. Back on the server, create a new node in qmgr.

c n cpcompute np=8

Create /var/spool/torque/mom_priv/config:

$usecp cpserver.uchicago.edu
$ideal_load 8.0
$max_load 10.0
$restricted *.uchicago.edu

This node has eight cores, so the ideal_load is eight.
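
After creating or changing this config, restart pbs_mom so it takes effect:

/etc/rc.d/init.d/pbs_mom restart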

Finally go back on the server into qmgr and add the compute host as another submit host:

qmgr
s s submit_hosts += cpcompute

Copy the following to get exmh to work:

/usr/bin:
exmh*
exmh-bg*
exmh-async*
inc*
viamail*
sortm*
show*
sendfiles*
send*
scan*
rmm*
rmf*
repl*
refile*
prompter*
prev*
pick*
packf*
next*
msh*
msgchk*
mimencode*
mhstore*
mhshow*
mhpath*
mhparam*
mhn*
mhlist*
mhbuild*
mark*
forw*
folders*
folder*
flists*
flist*
dist*
comp*
burst*
anno*
ali*
whom*
whatnow*

/usr/lib/nmh:
ap*
conflict*
dp*
fmtdump*
install-mh*
mhl*
mhtest*
post*
rcvdist*
rcvpack*
rcvstore*
rcvtty*
slocal*
spost*

TkPostage needs the tkpostage file and /usr/local/lib/TkPostage.xbm.

Also copy over the whole /usr/local/lib/exmh-2.7.2 directory.
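
One way to pull all of these over from a machine that already has a working install (refhost is hypothetical):

rsync -a refhost:/usr/lib/nmh/ /usr/lib/nmh/
rsync -a refhost:/usr/local/lib/exmh-2.7.2/ /usr/local/lib/exmh-2.7.2/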