Notes about Installation and Configuration of a CREAM Computing Element - EMI-2 - SL6 (external Torque, external Argus, MPI enabled)
- These notes are provided by site admins on a best-effort basis as a contribution to the IGI communities and MUST NOT be considered a substitute for the official IGI documentation.
- This document is addressed to site administrators responsible for middleware installation and configuration.
- The goal of this page is to provide some hints and examples on how to install and configure an EMI-2 CREAM CE service based on EMI middleware, in no-cluster mode, with TORQUE as batch system installed on a different host, using an external ARGUS server for user authorization, and with MPI enabled.
References
- About IGI - Italian Grid infrastructure
- About IGI Release
- IGI Official Installation and Configuration guide
- EMI-2 Release
- CREAM
- CREAM TORQUE module
- Yaim Guide
- site-info.def yaim variables
- CREAM yaim variables
- TORQUE Yaim variables
- Troubleshooting Guide for Operational Errors on EGI Sites
- Grid Administration FAQs page
Service installation
O.S. and Repos
- Start from a fresh installation of Scientific Linux 6.x (x86_64).
# cat /etc/redhat-release
Scientific Linux release 6.2 (Carbon)
- Install the additional repositories: EPEL, Certification Authority, EMI-2
# yum install yum-priorities yum-protectbase epel-release
# rpm -ivh http://emisoft.web.cern.ch/emisoft/dist/EMI/2/sl6/x86_64/base/emi-release-2.0.0-1.sl6.noarch.rpm
# cd /etc/yum.repos.d/
# wget http://repo-pd.italiangrid.it/mrepo/repos/egi-trustanchors.repo
- Be sure that SELINUX is disabled (or permissive); check its status with getenforce:
# getenforce
Disabled
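If the status is Enforcing, a minimal way to disable SELinux (a sketch assuming the standard /etc/selinux/config layout) is:
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# setenforce 0
Note that setenforce 0 only switches the running system to permissive mode; the "disabled" setting takes effect at the next reboot.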
yum install
# yum clean all
# yum install ca-policy-egi-core emi-cream-ce emi-torque-utils glite-mpi
Service configuration
Copy the example configuration files to another path, for example /root, and set them properly (see below):
# cp -vr /opt/glite/yaim/examples/siteinfo .
host certificate
# ll /etc/grid-security/host*
-rw-r--r-- 1 root root 1440 Oct 18 09:31 /etc/grid-security/hostcert.pem
-r-------- 1 root root 887 Oct 18 09:31 /etc/grid-security/hostkey.pem
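As a sanity check you can verify the certificate validity dates and that the private key matches the certificate (a quick sketch based on standard openssl commands; the two modulus digests must be identical):
# openssl x509 -in /etc/grid-security/hostcert.pem -noout -subject -dates
# openssl x509 -in /etc/grid-security/hostcert.pem -noout -modulus | openssl md5
# openssl rsa -in /etc/grid-security/hostkey.pem -noout -modulus | openssl md5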
vo.d directory
Create the directory
siteinfo/vo.d
and fill it with one file per supported VO. You can download the files from HERE; below are examples for some VOs.
Information about the various VOs is available at the
CENTRAL OPERATIONS PORTAL.
# cat /root/siteinfo/vo.d/comput-er.it
SW_DIR=$VO_SW_DIR/computer
DEFAULT_SE=$SE_HOST
STORAGE_DIR=$CLASSIC_STORAGE_DIR/computer
VOMS_SERVERS="'vomss://voms2.cnaf.infn.it:8443/voms/comput-er.it?/comput-er.it'"
VOMSES="'comput-er.it voms2.cnaf.infn.it 15007 /C=IT/O=INFN/OU=Host/L=CNAF/CN=voms2.cnaf.infn.it comput-er.it' 'comput-er.it voms-02.pd.infn.it 15007 /C=IT/O=INFN/OU=Host/L=Padova/CN=voms-02.pd.infn.it comput-er.it'"
VOMS_CA_DN="'/C=IT/O=INFN/CN=INFN CA' '/C=IT/O=INFN/CN=INFN CA'"
# cat /root/siteinfo/vo.d/dteam
SW_DIR=$VO_SW_DIR/dteam
DEFAULT_SE=$SE_HOST
STORAGE_DIR=$CLASSIC_STORAGE_DIR/dteam
VOMS_SERVERS='vomss://voms.hellasgrid.gr:8443/voms/dteam?/dteam/'
VOMSES="'dteam lcg-voms.cern.ch 15004 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch dteam 24' 'dteam voms.cern.ch 15004 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch dteam 24' 'dteam voms.hellasgrid.gr 15004 /C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms.hellasgrid.gr dteam 24' 'dteam voms2.hellasgrid.gr 15004 /C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr dteam 24'"
VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006' '/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006'"
# cat /root/siteinfo/vo.d/gridit
SW_DIR=$VO_SW_DIR/gridit
DEFAULT_SE=$SE_HOST
STORAGE_DIR=$CLASSIC_STORAGE_DIR/gridit
VOMS_SERVERS="'vomss://voms.cnaf.infn.it:8443/voms/gridit?/gridit' 'vomss://voms-01.pd.infn.it:8443/voms/gridit?/gridit'"
VOMSES="'gridit voms.cnaf.infn.it 15008 /C=IT/O=INFN/OU=Host/L=CNAF/CN=voms.cnaf.infn.it gridit' 'gridit voms-01.pd.infn.it 15008 /C=IT/O=INFN/OU=Host/L=Padova/CN=voms-01.pd.infn.it gridit'"
VOMS_CA_DN="'/C=IT/O=INFN/CN=INFN CA' '/C=IT/O=INFN/CN=INFN CA'"
# cat /root/siteinfo/vo.d/igi.italiangrid.it
SW_DIR=$VO_SW_DIR/igi
DEFAULT_SE=$SE_HOST
STORAGE_DIR=$CLASSIC_STORAGE_DIR/igi
VOMS_SERVERS="'vomss://vomsmania.cnaf.infn.it:8443/voms/igi.italiangrid.it?/igi.italiangrid.it'"
VOMSES="'igi.italiangrid.it vomsmania.cnaf.infn.it 15003 /C=IT/O=INFN/OU=Host/L=CNAF/CN=vomsmania.cnaf.infn.it igi.italiangrid.it'"
VOMS_CA_DN="'/C=IT/O=INFN/CN=INFN CA'"
# cat /root/siteinfo/vo.d/infngrid
SW_DIR=$VO_SW_DIR/infngrid
DEFAULT_SE=$SE_HOST
STORAGE_DIR=$CLASSIC_STORAGE_DIR/infngrid
VOMS_SERVERS="'vomss://voms.cnaf.infn.it:8443/voms/infngrid?/infngrid' 'vomss://voms-01.pd.infn.it:8443/voms/infngrid?/infngrid'"
VOMSES="'infngrid voms.cnaf.infn.it 15000 /C=IT/O=INFN/OU=Host/L=CNAF/CN=voms.cnaf.infn.it infngrid' 'infngrid voms-01.pd.infn.it 15000 /C=IT/O=INFN/OU=Host/L=Padova/CN=voms-01.pd.infn.it infngrid'"
VOMS_CA_DN="'/C=IT/O=INFN/CN=INFN CA' '/C=IT/O=INFN/CN=INFN CA'"
# cat /root/siteinfo/vo.d/ops
SW_DIR=$VO_SW_DIR/ops
DEFAULT_SE=$SE_HOST
STORAGE_DIR=$CLASSIC_STORAGE_DIR/ops
VOMS_SERVERS="vomss://voms.cern.ch:8443/voms/ops?/ops/"
VOMSES="'ops lcg-voms.cern.ch 15009 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch ops 24' 'ops voms.cern.ch 15009 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch ops 24'"
VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"
users and groups
You can download the users and groups configuration files from HERE.
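For reference, each line of the users file follows the standard YAIM format UID:LOGIN:GID1[,GID2,...]:GROUP1[,GROUP2,...]:VO:FLAG:, while the groups file maps VOMS FQANs to local accounts. A minimal hypothetical excerpt (UIDs, GIDs and account names are placeholders):
20001:dteam001:20000:dteam:dteam::
and, in the groups file:
"/dteam/ROLE=lcgadmin":::sgm:
"/dteam"::::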
Munge
Copy the key
/etc/munge/munge.key
from the TORQUE server to every host of your cluster, adjust its permissions and start the service:
# chown munge:munge /etc/munge/munge.key
# ls -ltr /etc/munge/
total 4
-r-------- 1 munge munge 1024 Jan 13 14:32 munge.key
# chkconfig munge on
# /etc/init.d/munge restart
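To verify that munge authentication works across the cluster, encode a credential locally and decode it on the TORQUE server (a sketch, assuming ssh access to the batch host):
# munge -n | unmunge
# munge -n | ssh batch.cnaf.infn.it unmunge
Both commands must report STATUS: Success (0).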
site-info.def
KISS: Keep It Simple, Stupid! For your convenience there is an explanation of each yaim variable; for more details look
HERE.
SUGGESTION: use the same
site-info.def for the CREAM CE and the WNs; for this reason this example file contains the yaim variables used by CREAM, TORQUE and the EMI-WN.
# cat site-info.def
CE_HOST=cream-01.cnaf.infn.it
SITE_NAME=IGI-BOLOGNA
BATCH_SERVER=batch.cnaf.infn.it
BATCH_LOG_DIR=/var/torque
#BDII_HOST=egee-bdii.cnaf.infn.it
CE_BATCH_SYS=torque
JOB_MANAGER=pbs
BATCH_VERSION=torque-2.5.7
#CE_DATADIR=
CE_INBOUNDIP=FALSE
CE_OUTBOUNDIP=TRUE
CE_OS="ScientificSL"
CE_OS_RELEASE=6.2
CE_OS_VERSION="Carbon"
CE_RUNTIMEENV="IGI-BOLOGNA"
CE_PHYSCPU=8
CE_LOGCPU=16
CE_MINPHYSMEM=16000
CE_MINVIRTMEM=32000
CE_SMPSIZE=8
CE_CPU_MODEL=Xeon
CE_CPU_SPEED=2493
CE_CPU_VENDOR=intel
CE_CAPABILITY="CPUScalingReferenceSI00=1039 glexec"
CE_OTHERDESCR="Cores=1,Benchmark=4.156-HEP-SPEC06"
CE_SF00=951
CE_SI00=1039
CE_OS_ARCH=x86_64
CREAM_PEPC_RESOURCEID="http://cnaf.infn.it/cremino"
USERS_CONF=/root/siteinfo/ig-users.conf
GROUPS_CONF=/root/siteinfo/ig-groups.conf
VOS="comput-er.it dteam igi.italiangrid.it infngrid ops gridit"
QUEUES="cert prod"
CERT_GROUP_ENABLE="dteam infngrid ops /dteam/ROLE=lcgadmin /dteam/ROLE=production /ops/ROLE=lcgadmin /ops/ROLE=pilot /infngrid/ROLE=SoftwareManager /infngrid/ROLE=pilot"
PROD_GROUP_ENABLE="comput-er.it gridit igi.italiangrid.it /comput-er.it/ROLE=SoftwareManager /gridit/ROLE=SoftwareManager /igi.italiangrid.it/ROLE=SoftwareManager"
VO_SW_DIR=/opt/exp_soft
WN_LIST="/root/siteinfo/wn-list.conf"
MUNGE_KEY_FILE=/etc/munge/munge.key
CONFIG_MAUI="no"
MYSQL_PASSWORD=*********************************
APEL_DB_PASSWORD=not_used
APEL_MYSQL_HOST=not_used
SE_LIST="darkstorm.cnaf.infn.it"
SE_MOUNT_INFO_LIST="none"
WN list
List the WNs in this file, one per line, for example:
# less /root/siteinfo/wn-list.conf
wn05.cnaf.infn.it
wn06.cnaf.infn.it
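Since TORQUE runs on an external host, it is worth checking from the CE that the batch server is reachable and that it lists the same WNs (standard TORQUE client commands; adjust the server name to your setup):
# qstat -B batch.cnaf.infn.it
# pbsnodes -a -s batch.cnaf.infn.it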
services/glite-mpi_ce
# cp /opt/glite/yaim/examples/siteinfo/services/glite-mpi_ce /root/siteinfo/services/
# cat services/glite-mpi_ce
# Setup configuration variables that are common to both the CE and WN
if [ -r ${config_dir}/services/glite-mpi ]; then
source ${config_dir}/services/glite-mpi
fi
# The MPI CE config function can create a submit filter for
# Torque to ensure that CPU allocation is performed correctly.
# Change this variable to "yes" to have YAIM create this filter.
# Warning: if you have an existing torque.cfg it will be modified.
MPI_SUBMIT_FILTER=${MPI_SUBMIT_FILTER:-"yes"}
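The common CE/WN settings sourced above live in ${config_dir}/services/glite-mpi. A minimal hypothetical example (the enabled flavour, version and shared-home values are site-specific assumptions):
# cat services/glite-mpi
MPI_OPENMPI_ENABLE="yes"
MPI_OPENMPI_VERSION="1.4.3"
MPI_SHARED_HOME="no"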
services/glite-creamce
# cat /root/siteinfo/services/glite-creamce
#
# YAIM creamCE specific variables
#
#
# CE-monitor host (by default CE-monitor is installed on the same machine as
# cream-CE)
CEMON_HOST=$CE_HOST
#
# CREAM database user
CREAM_DB_USER=********************
CREAM_DB_PASSWORD=****************************
#
# Machine hosting the BLAH blparser.
# In this machine batch system logs must be accessible.
BLPARSER_HOST=$CE_HOST
# Value to be published as GlueCEStateStatus instead of Production
#CREAM_CE_STATE=Special
services/dgas_sensors (not available yet)
TODO
yaim check
Verify that you have set all the yaim variables by launching:
# /opt/glite/yaim/bin/yaim -v -s /root/siteinfo/site-info.def -n creamCE -n TORQUE_utils
yaim config
# /opt/glite/yaim/bin/yaim -c -s /root/siteinfo/site-info.def -n creamCE -n TORQUE_utils
Service Checks
- After the service installation, to check that everything was installed properly, have a look at the Service CREAM Reference Card.
- You can also perform some checks after the installation and configuration of your CREAM CE, as shown below.
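For example, from a User Interface with a valid VOMS proxy you can check that submission is enabled and submit a test job (the queue name and the JDL file are assumptions to adapt to your site):
$ glite-ce-allowed-submission cream-01.cnaf.infn.it:8443
$ glite-ce-job-submit -a -r cream-01.cnaf.infn.it:8443/cream-pbs-cert test.jdl
$ glite-ce-job-status <jobid_returned_by_the_submission>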
Revisions
--
PaoloVeronesi - 2012-05-25