Notes about Installation and Configuration of a Torque server (no cream) - EMI-2 - SL6 x86_64
- These notes are provided by site admins on a best-effort basis as a contribution to the IGI communities and MUST NOT be considered a substitute for the official IGI documentation.
- This document is addressed to site administrators responsible for middleware installation and configuration.
- The goal of this page is to provide some hints and examples on how to install and configure an EMI Torque server based on the EMI-2 middleware.
References
- About IGI - Italian Grid infrastructure
- About IGI Release
- EMI-2 Release
- Yaim Guide
- TO BE CHANGED - site-info.def yaim variables
- TO BE CHANGED - site-BDII yaim variables
- Troubleshooting Guide for Operational Errors on EGI Sites
- Grid Administration FAQs page
Service installation
O.S. and Repos
- Start from a fresh installation of Scientific Linux 6.x (x86_64).
# cat /etc/redhat-release
Scientific Linux release 6.2 (Carbon)
- Install the additional repositories: EPEL, Certification Authority, EMI-2.
# yum install yum-priorities yum-protectbase epel-release
# rpm -ivh http://emisoft.web.cern.ch/emisoft/dist/EMI/2/sl6/x86_64/base/emi-release-2.0.0-1.sl6.noarch.rpm
# cd /etc/yum.repos.d/
# wget http://repo-pd.italiangrid.it/mrepo/repos/egi-trustanchors.repo
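- Optionally, check that the new repositories are now visible to yum:
# yum repolist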
- Be sure that SELinux is disabled (or permissive). Details on how to disable SELinux are here:
# getenforce
Disabled
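If getenforce reports Enforcing, the usual way is to set SELINUX=disabled in /etc/selinux/config and reboot, for example:
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# reboot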
yum install
# yum clean all
Loaded plugins: downloadonly, kernel-module, priorities, protect-packages, protectbase, security, verify, versionlock
Cleaning up Everything
# yum install ca-policy-egi-core
# yum install emi-torque-server emi-torque-utils
Service configuration
You have to copy the example configuration files to another path, for example /root, and set them properly (see below):
# cp -vr /opt/glite/yaim/examples/siteinfo .
vo.d
Create the directory siteinfo/vo.d and fill it with a file for each supported VO. You can download them from HERE.
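As an illustration, each file in vo.d is named after the VO and contains the yaim variables of that VO without the VO_<vo-name>_ prefix. A minimal sketch for dteam (the values below are only indicative, take the real ones from the files linked above):
# cat siteinfo/vo.d/dteam
VOMS_SERVERS="'vomss://voms.hellasgrid.gr:8443/voms/dteam?/dteam/'"
VOMSES="'dteam voms.hellasgrid.gr 15004 /C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms.hellasgrid.gr dteam'"
VOMS_CA_DN="'/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006'"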
users and groups
You can download them from HERE.
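For reference, each line of the users file follows the yaim format UID:LOGIN:GID[,GID,...]:GROUP[,GROUP,...]:VO:FLAG:, while the groups file maps FQANs to account types. A sketch with made-up pool accounts:
# grep dteam001 /root/siteinfo/ig-users.conf
60001:dteam001:6000:dteam:dteam::
# grep '"/dteam"' /root/siteinfo/ig-groups.conf
"/dteam"::::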
site-info.def
KISS: Keep it simple, stupid! For your convenience there is an explanation of each yaim variable. For more details look HERE.
# cat siteinfo/site-info.def
BATCH_SERVER=batch.cnaf.infn.it
CE_HOST=cream-01.cnaf.infn.it
CE_SMPSIZE=8
USERS_CONF=/root/siteinfo/ig-users.conf
GROUPS_CONF=/root/siteinfo/ig-groups.conf
VOS="comput-er.it dteam igi.italiangrid.it infngrid ops gridit"
QUEUES="cert prod"
CERT_GROUP_ENABLE="dteam infngrid ops /dteam/ROLE=lcgadmin /dteam/ROLE=production /ops/ROLE=lcgadmin /ops/ROLE=pilot /infngrid/ROLE=SoftwareManager /infngrid/ROLE=pilot"
PROD_GROUP_ENABLE="comput-er.it gridit igi.italiangrid.it /comput-er.it/ROLE=SoftwareManager /gridit/ROLE=SoftwareManager /igi.italiangrid.it/ROLE=SoftwareManager"
WN_LIST="/root/siteinfo/wn-list.conf"
MUNGE_KEY_FILE=/etc/munge/munge.key
CONFIG_MAUI="no"
SITE_NAME=IGI-BOLOGNA
APEL_DB_PASSWORD=not_used
APEL_MYSQL_HOST=not_used
WN list
Set in this file the list of WNs, one host per line, for example:
# less /root/siteinfo/wn-list.conf
wn05.cnaf.infn.it
wn06.cnaf.infn.it
munge configuration
- Generate a key by launching:
# /usr/sbin/create-munge-key
# ls -ltr /etc/munge/
total 4
-r-------- 1 munge munge 1024 Jan 13 14:32 munge.key
- Copy the key /etc/munge/munge.key to every host of your cluster, adjusting its ownership:
# chown munge:munge /etc/munge/munge.key
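For example, with the two WNs used in this page (a plain scp; adapt to your own deployment tools):
# scp -p /etc/munge/munge.key wn05.cnaf.infn.it:/etc/munge/
# scp -p /etc/munge/munge.key wn06.cnaf.infn.it:/etc/munge/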
- Start the munge daemon on each node:
# service munge start
Starting MUNGE: [ OK ]
# chkconfig munge on
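- Optionally verify that credentials can be encoded and decoded, locally and towards a WN:
# munge -n | unmunge
# munge -n | ssh wn05.cnaf.infn.it unmunge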
tomcat and ldap users
It is necessary to create the tomcat and ldap users on the Torque server, otherwise the computing elements will fail to connect to it.
When those users don't exist on the server, on the CE you will see errors like the following:
2012-04-24 15:37:29 lcg-info-dynamic-scheduler: LRMS backend command returned nonzero exit status
2012-04-24 15:37:29 lcg-info-dynamic-scheduler: Exiting without output, GIP will use static values
Can not obtain pbs version from host
[...]
while on the Torque server:
04/24/2012 14:00:46;0080;PBS_Server;Req;req_reject;Reject reply code=15021(Invalid credential), aux=0, type=StatusJob, from tomcat@cream-01.cnaf.infn.it
04/24/2012 14:01:02;0080;PBS_Server;Req;req_reject;Reject reply code=15021(Invalid credential), aux=0, type=StatusJob, from ldap@cream-01.cnaf.infn.it
The solution is to add the tomcat and ldap users/groups to the Torque host and restart pbs_server, since by default they exist only on the CreamCE host.
# echo 'tomcat:x:91:91:Tomcat:/usr/share/tomcat5:/bin/sh' >> /etc/passwd
# echo 'ldap:x:55:55:LDAP User:/var/lib/ldap:/bin/false' >> /etc/passwd
# echo 'tomcat:x:91:' >> /etc/group
# echo 'ldap:x:55:' >> /etc/group
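Then restart the batch server so that it picks up the new users:
# service pbs_server restart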
yaim check
Verify that you have set all the yaim variables by launching:
# chmod -R 600 siteinfo/
# /opt/glite/yaim/bin/yaim -v -s siteinfo/site-info.def -n TORQUE_server -n TORQUE_utils
[...]
INFO: YAIM terminated succesfully.
yaim config
# /opt/glite/yaim/bin/yaim -c -s siteinfo/site-info.def -n TORQUE_server -n TORQUE_utils
[...]
INFO: YAIM terminated succesfully.
Service Checks
TORQUE checks
Print the server configuration and check the state of the WNs:
# qmgr -c 'p s'
# pbsnodes -a
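You can also list the configured queues (cert and prod, as set in QUEUES above):
# qstat -q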
maui settings
- In order to reserve a job slot for test jobs, you need to apply some settings in the maui configuration (/var/spool/maui/maui.cfg). Suppose you have enabled the test VOs (ops, dteam and infngrid) on the "cert" queue and that you have 8 job slots available. Add the following lines to the maui.cfg file; with these settings the "prod" jobs fall into the "normal" QOS, which is capped at 7 running jobs, so one slot always stays free for the "cert" queue:
CLASSWEIGHT 1
QOSWEIGHT 1
QOSCFG[normal] MAXJOB=7
CLASSCFG[prod] QDEF=normal
CLASSCFG[cert] PRIORITY=5000
After the modification restart maui:
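# service maui restart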
- In order to avoid yaim overwriting this file during host reconfiguration, set:
CONFIG_MAUI="no"
in your site-info.def (the first time you launch the yaim script it has to be set to "yes").
Revisions
-- PaoloVeronesi - 2012-05-24
-- PaoloVeronesi - 2012-05-25