Difference: SystemAdministratorGuideForEMI2 (2 vs. 3)

Revision 32012-01-16 - MassimoSgaravatto


System Administrator Guide for CREAM for EMI-2 release


0.0.1 Deployment models for CREAM databases

The databases used by CREAM can be deployed in the CREAM CE host (which is the default scenario) or on a different machine.

Click here for information on how to deploy the databases on a machine different from the CREAM-CE.

0.1 CREAM CE Installation

This section explains how to install:

  • a CREAM CE in no cluster mode
  • a CREAM CE in cluster mode
  • a glite-CLUSTER node
For all these scenarios, the setting of the repositories is the same.

0.1.1 Repositories

For a successful installation, you will need to configure your package manager to reference a number of repositories (in addition to your OS ones):

  • the EPEL repository
  • the EMI middleware repository
  • the CA repository

and to REMOVE (!!!) or DEACTIVATE (!!!)

  • the DAG repository

0.1.1.1 The EPEL repository

On sl5_x86_64, you can install the EPEL repository, issuing:

rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm

TBC

0.1.1.2 The EMI middleware repository

On sl5_x86_64 you can install the EMI-2 yum repository, issuing:

wget TBD
yum install ./TBD

TBC

0.1.1.3 The Certification Authority repository

The most up-to-date version of the list of trusted Certification Authorities (CA) is needed on your node. The relevant yum repo can be installed issuing:

wget http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo -O /etc/yum.repos.d/EGI-trustanchors.repo

0.1.1.4 Important note on automatic updates

An update of the packages not followed by a reconfiguration can cause problems. Therefore WE STRONGLY RECOMMEND NOT TO USE ANY AUTOMATIC UPDATE PROCEDURE.

By running the script available at http://forge.cnaf.infn.it/frs/download.php/101/disable_yum.sh (implemented by Giuseppe Platania, INFN Catania), yum autoupdate will be disabled.
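
If you prefer to disable the automatic update job by hand instead of using the script, a possible sketch for Scientific Linux 5 is the following (the service name yum-autoupdate is an assumption; it may differ on your distribution):

```shell
# Stop the nightly automatic update job and prevent it from starting at boot
# (the service name may differ on your distribution).
service yum-autoupdate stop
chkconfig yum-autoupdate off
```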

0.1.2 Installation of a CREAM CE node in no cluster mode

On sl5_x86_64 first of all install the yum-protectbase rpm:

  yum install yum-protectbase.noarch 

Then proceed with the installation of the CA certificates.

TBC

0.1.2.1 Installation of the CA certificates

On sl5_x86_64, the CA certificate can be installed issuing:

yum install ca-policy-egi-core 

TBC

0.1.2.2 Installation of the CREAM CE software

On sl5_x86_64 first of all install xml-commons-apis:

yum install xml-commons-apis 

This is due to a dependency problem within the Tomcat distribution.

Then install the CREAM-CE metapackage:

yum install emi-cream-ce

TBC

0.1.2.3 Installation of the batch system specific software

After the installation of the CREAM CE metapackage it is necessary to install the batch system specific metapackage(s):

On sl5_x86_64:

  • If you are running Torque, and your CREAM CE node is the torque master, install the emi-torque-server and emi-torque-utils metapackages:

yum install emi-torque-server
yum install emi-torque-utils

  • If you are running Torque, and your CREAM CE node is NOT the torque master, install the emi-torque-utils metapackage:

yum install emi-torque-utils

  • If you are running LSF, install the emi-lsf-utils metapackage:

yum install emi-lsf-utils

  • If you are running GE, install the emi-ge-utils metapackage:

yum install emi-ge-utils

TBC

0.1.3 Installation of a CREAM CE node in cluster mode

On sl5_x86_64, first of all install the yum-protectbase rpm:

  yum install yum-protectbase.noarch 

Then proceed with the installation of the CA certificates.

0.1.3.1 Installation of the CA certificates

On sl5_x86_64, the CA certificates can be installed issuing:

yum install ca-policy-egi-core 

0.1.3.2 Installation of the CREAM CE software

On sl5_x86_64, first of all install xml-commons-apis:

yum install xml-commons-apis 

This is due to a dependency problem within the Tomcat distribution.

Then install the CREAM-CE metapackage:

yum install emi-cream-ce

0.1.3.3 Installation of the batch system specific software

After the installation of the CREAM CE metapackage it is necessary to install the batch system specific metapackage(s).

On sl5_x86_64:

  • If you are running Torque, and your CREAM CE node is the torque master, install the emi-torque-server and emi-torque-utils metapackages:

yum install emi-torque-server
yum install emi-torque-utils

  • If you are running Torque, and your CREAM CE node is NOT the torque master, install the emi-torque-utils metapackage:

yum install emi-torque-utils

  • If you are running LSF, install the emi-lsf-utils metapackage:

yum install emi-lsf-utils

  • If you are running GE, install the emi-ge-utils metapackage:

yum install emi-ge-utils

TBC

0.1.3.4 Installation of the cluster metapackage

If the CREAM CE node also has to host the glite-CLUSTER, install the relevant metapackage as well.

On sl5_x86_64:

yum install emi-cluster 

TBC

0.1.4 Installation of a glite-cluster node

On sl5_x86_64, first of all install the yum-protectbase rpm:

  yum install yum-protectbase.noarch 

Then proceed with the installation of the CA certificates.

0.1.4.1 Installation of the CA certificates

On sl5_x86_64, the CA certificates can be installed issuing:

yum install ca-policy-egi-core 

0.1.4.2 Installation of the cluster metapackage

Install the glite-CLUSTER metapackage.

On sl5_x86_64:

yum install emi-cluster 

0.1.5 Installation of the BLAH BLparser

If the new BLAH Blparser is used, there isn't anything to install for the BLAH Blparser (i.e. the installation of the CREAM-CE is enough).

This is also the case when the old BLAH Blparser is used AND the BLPARSER_HOST is the CREAM-CE.

Only when the old BLAH Blparser is used AND the BLPARSER_HOST is different from the CREAM-CE is it necessary to install the BLParser software on that BLPARSER_HOST. This is done in the following way:

On sl5_x86_64:

yum install glite-ce-blahp 
yum install glite-yaim-cream-ce

TBC

0.1.6 Installation of the CREAM CLI

The CREAM CLI is part of the EMI-UI. To install it please refer to TBD.

0.2 CREAM CE configuration

0.2.1 Manual and automatic (yaim) configuration

The following sections describe the configuration steps needed for the two following scenarios:

  • Manual configuration
  • Automatic configuration via yaim

For a detailed description on how to configure the middleware with YAIM, please check the YAIM guide.

The necessary YAIM modules needed to configure a certain node type are automatically installed with the middleware.

0.2.2 Configuration of a CREAM CE node in no cluster mode

0.2.2.1 Install host certificate

The CREAM CE node requires the host certificate/key files to be installed. Contact your national Certification Authority (CA) to understand how to obtain a host certificate if you do not have one already.

Once you have obtained a valid certificate:

  • hostcert.pem - containing the machine public key
  • hostkey.pem - containing the machine private key
make sure to place the two files on the target node in the /etc/grid-security directory, then set the proper mode and ownership:

chown root.root /etc/grid-security/hostcert.pem
chown root.root /etc/grid-security/hostkey.pem
chmod 600 /etc/grid-security/hostcert.pem
chmod 400 /etc/grid-security/hostkey.pem
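
As an optional sanity check (a suggestion, not part of the official procedure), you can verify that the installed certificate and key actually match by comparing their moduli:

```shell
# The two digests printed below must be identical,
# otherwise the certificate and the key do not match.
openssl x509 -noout -modulus -in /etc/grid-security/hostcert.pem | openssl md5
openssl rsa  -noout -modulus -in /etc/grid-security/hostkey.pem  | openssl md5
```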

0.2.2.2 Manual configuration

TBD

0.2.2.3 Configuration via yaim

0.2.2.3.1 Configure the siteinfo.def file

Set your siteinfo.def file, which is the input file used by yaim. Documentation about the yaim variables relevant for the CREAM CE is available at TBD.

Be sure that CREAMCE_CLUSTER_MODE is set to no (or not set at all, since no is the default value).

0.2.2.3.2 Run yaim

After having filled the siteinfo.def file, run yaim:

/opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n <LRMSnode> 

Examples:

  • Configuration of a CREAM CE in no cluster mode using Torque as batch system, with the CREAM CE being also Torque server

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n TORQUE_server -n TORQUE_utils

  • Configuration of a CREAM CE in no cluster mode using Torque as batch system, with the CREAM CE NOT being also Torque server

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n TORQUE_utils

  • Configuration of a CREAM CE in no cluster mode using LSF as batch system

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n LSF_utils 

  • Configuration of a CREAM CE in no cluster mode using GE as batch system

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n SGE_utils 

0.2.3 Configuration of a CREAM CE node in cluster mode

0.2.3.1 Install host certificate

The CREAM CE node requires the host certificate/key files to be installed. Contact your national Certification Authority (CA) to understand how to obtain a host certificate if you do not have one already.

Once you have obtained a valid certificate:

  • hostcert.pem - containing the machine public key
  • hostkey.pem - containing the machine private key
make sure to place the two files on the target node in the /etc/grid-security directory, then set the proper mode and ownership:

chown root.root /etc/grid-security/hostcert.pem
chown root.root /etc/grid-security/hostkey.pem
chmod 600 /etc/grid-security/hostcert.pem
chmod 400 /etc/grid-security/hostkey.pem

0.2.3.2 Manual configuration

TBD

0.2.3.3 Configuration via yaim

0.2.3.3.1 Configure the siteinfo.def file

Set your siteinfo.def file, which is the input file used by yaim.

Variables which are required in cluster mode are described at TBD.

When the CREAM CE is configured in cluster mode it will stop publishing information about clusters and subclusters. That information should be published by the glite-CLUSTER node type instead. A specific set of yaim variables has been defined for configuring the information which is still required by the CREAM CE in cluster mode. The names of these variables follow this syntax:

  • In general, for identifiers based on hostnames, queues or VOViews, the characters '.' and '-' should be transformed into '_'
  • <host-name>: identifier that corresponds to the CE hostname in lower case. Example: ctb-generic-1.cern.ch -> ctb_generic_1_cern_ch
  • <queue-name>: identifier that corresponds to the queue name in upper case. Example: dteam -> DTEAM
  • <voview-name>: identifier that corresponds to the VOView id in upper case. '/' and '=' should also be transformed into '_'. Example: /dteam/Role=admin -> DTEAM_ROLE_ADMIN
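
The transformation rules above can be sketched as simple shell helpers (the function names are hypothetical, for illustration only):

```shell
# Hostname -> lower case, with '.' and '-' turned into '_'
to_host_id()   { echo "$1" | tr 'A-Z' 'a-z' | tr '.-' '__'; }
# Queue name -> upper case, with '.' and '-' turned into '_'
to_queue_id()  { echo "$1" | tr 'a-z' 'A-Z' | tr '.-' '__'; }
# VOView id -> upper case, leading '/' dropped, '/', '=', '.', '-' turned into '_'
to_voview_id() { echo "$1" | sed 's|^/||' | tr 'a-z' 'A-Z' | tr '/=.-' '____'; }

to_host_id   "ctb-generic-1.cern.ch"   # -> ctb_generic_1_cern_ch
to_queue_id  "dteam"                   # -> DTEAM
to_voview_id "/dteam/Role=admin"       # -> DTEAM_ROLE_ADMIN
```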

Be sure that CREAMCE_CLUSTER_MODE is set to yes.

0.2.3.3.2 Run yaim

After having filled the siteinfo.def file, run yaim:

/opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n <LRMSnode> [-n glite-CLUSTER]

-n glite-CLUSTER must be specified only if the glite-CLUSTER is deployed on the same node as the CREAM-CE.

Examples:

  • Configuration of a CREAM CE in cluster mode (with glite-CLUSTER deployed on a different node) using LSF as batch system

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n LSF_utils

  • Configuration of a CREAM CE in cluster mode (with glite-CLUSTER deployed on a different node) using GE as batch system

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n SGE_utils 

  • Configuration of a CREAM CE in cluster mode (with glite-CLUSTER deployed on a different node) using Torque as batch system, with the CREAM CE being also Torque server

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n TORQUE_server -n TORQUE_utils

  • Configuration of a CREAM CE in cluster mode (with glite-CLUSTER deployed on a different node) using Torque as batch system, with the CREAM CE NOT being also Torque server

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n TORQUE_utils

  • Configuration of a CREAM CE in cluster mode (with glite-CLUSTER deployed on the same node of the CREAM-CE) using LSF as batch system

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n LSF_utils -n glite-CLUSTER

  • Configuration of a CREAM CE in cluster mode (with glite-CLUSTER deployed on the same node of the CREAM-CE) using GE as batch system

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n SGE_utils -n glite-CLUSTER

  • Configuration of a CREAM CE in cluster mode (with glite-CLUSTER deployed on the same node of the CREAM-CE) using Torque as batch system, with the CREAM CE being also Torque server

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n TORQUE_server -n TORQUE_utils  -n glite-CLUSTER

  • Configuration of a CREAM CE in cluster mode (with glite-CLUSTER deployed on the same node of the CREAM-CE) using Torque as batch system, with the CREAM CE NOT being also Torque server

     /opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n TORQUE_utils -n glite-CLUSTER

0.2.4 Configuration of a glite-CLUSTER node

0.2.4.1 Install host certificate

The glite-CLUSTER node requires the host certificate/key files to be installed. Contact your national Certification Authority (CA) to understand how to obtain a host certificate if you do not have one already.

Once you have obtained a valid certificate:

  • hostcert.pem - containing the machine public key
  • hostkey.pem - containing the machine private key
make sure to place the two files on the target node in the /etc/grid-security directory, then set the proper mode and ownership:

chown root.root /etc/grid-security/hostcert.pem
chown root.root /etc/grid-security/hostkey.pem
chmod 600 /etc/grid-security/hostcert.pem
chmod 400 /etc/grid-security/hostkey.pem

0.2.4.2 Manual configuration

TBD

0.2.4.3 Configuration via yaim

0.2.4.3.1 Configure the siteinfo.def file

Set your siteinfo.def file, which is the input file used by yaim. Documentation about yaim variables relevant for glite-CLUSTER is available at TBD.

0.2.4.3.2 Run yaim

After having filled the siteinfo.def file, run yaim:

/opt/glite/yaim/bin/yaim -c -s <site-info.def> -n glite-CLUSTER

0.2.5 Configuration of the BLAH Blparser

If the new BLAH Blparser is used, there isn't anything to configure for the BLAH Blparser (i.e. the configuration of the CREAM-CE is enough).

If the old BLparser is used, it is necessary to configure it on the BLPARSER_HOST (which, as said above, can be the CREAM-CE node or a different host). This is done via yaim in the following way:

/opt/glite/yaim/bin/yaim -r -s <site-info.def> -n creamCE -f config_cream_blparser

In case of manual configuration, TBD

Then it is necessary to restart tomcat on the CREAM-CE node:

service tomcat5 restart

0.2.5.1 Configuration of the old BLAH Blparser to serve multiple CREAM CEs

The configuration instructions reported above explain how to configure a CREAM CE and the BLAH blparser (old model) in the scenario where the BLAH blparser has to "serve" a single CREAM CE.

Considering that the blparser (old model) has to run where the batch system log files are available, let's consider a scenario where there are 2 CREAM CEs (ce1.mydomain and ce2.mydomain) that must be configured. Let's suppose that the batch system log files are not available on these 2 CREAM CE machines, but on another machine (blhost.mydomain), where the old blparser has to be installed.

The following summarizes what must be done:

  • In the /services/glite-creamce file for ce1.mydomain, set:

BLPARSER_HOST=blhost.mydomain
BLAH_JOBID_PREFIX=cre01_
BLP_PORT=33333

and configure ce1.mydomain via yaim:

/opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n <LRMSnode> [-n glite-CLUSTER]

  • In the /services/glite-creamce file for ce2.mydomain, set:

BLPARSER_HOST=blhost.mydomain
BLAH_JOBID_PREFIX=cre02_
BLP_PORT=33334

and configure ce2.mydomain via yaim:

/opt/glite/yaim/bin/yaim -c -s <site-info.def> -n creamCE -n <LRMSnode> [-n glite-CLUSTER]

  • In the /services/glite-creamce file for blhost.mydomain, set:

CREAM_PORT=56565

and configure blhost.mydomain via yaim:

/opt/glite/yaim/bin/yaim -r -s <site-info.def> -n creamCE -f config_cream_blparser

  • In blhost.mydomain edit the file /etc/blparser.conf setting (considering the pbs/torque scenario):

GLITE_CE_BLPARSERPBS_NUM=2

# ce01.mydomain
GLITE_CE_BLPARSERPBS_PORT1=33333
GLITE_CE_BLPARSERPBS_CREAMPORT1=56565

# ce02.mydomain
GLITE_CE_BLPARSERPBS_PORT2=33334
GLITE_CE_BLPARSERPBS_CREAMPORT2=56566

  • Restart the blparser on blhost.mydomain:

/etc/init.d/glite-ce-blparser restart

  • Restart tomcat on ce01.mydomain and ce02.mydomain

You can of course replace 33333, 33334, 56565 and 56566 (reported in the above examples) with other port numbers.

0.2.5.2 Configuration of the new BLAH Blparser to use cached batch system commands

The new BLAH blparser can be configured not to interact directly with the batch system, but to go through a program (to be implemented by the site admin) which can implement some caching functionality. This is, for example, the case of CommandProxyTools, implemented at CERN.

To enable this feature, add in /etc/blah.config (the example below is for LSF, with /usr/bin/runcmd.pl as the name of the "caching" program):

lsf_batch_caching_enabled=yes
batch_command_caching_filter=/usr/bin/runcmd.pl

So the blparser, instead of issuing bjobs -u ..., will issue /usr/bin/runcmd.pl bjobs -u ...
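
As a concrete illustration, here is a minimal, purely hypothetical sketch of the caching logic such a filter could implement (written as a shell function for readability; a real filter such as /usr/bin/runcmd.pl would be a standalone executable receiving the batch system command as its arguments):

```shell
# Hypothetical sketch: run the given batch system command, reusing its
# output if it was produced less than $TTL seconds ago.
run_cached() {
    CACHE_DIR=${CACHE_DIR:-/tmp/bls-cache}
    TTL=${TTL:-30}
    mkdir -p "$CACHE_DIR"
    # One cache entry per distinct command line
    key=$(echo "$@" | md5sum | cut -d' ' -f1)
    cache="$CACHE_DIR/$key"
    if [ -f "$cache" ] && [ $(( $(date +%s) - $(stat -c %Y "$cache") )) -lt "$TTL" ]; then
        cat "$cache"          # fresh enough: serve the cached output
    else
        "$@" | tee "$cache"   # run the command and refresh the cache
    fi
}
```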

0.2.6 Configuration of the CREAM databases on a host different from the CREAM-CE (using yaim)

To configure the CREAM databases on a host different from the CREAM-CE:

  • In the siteinfo.def file, set the variable CREAM_DB_HOST to the remote host (where mysql must already be installed)
  • In the siteinfo.def file, set the variable MYSQL_PASSWORD to the mysql password of the remote host
  • On this remote host, grant the appropriate privileges to root@CE_HOST
  • Configure via yaim
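
For the grant step, a possible sketch is the following (the hostname ce.mydomain and the password placeholder are assumptions to be replaced with your actual values):

```shell
# On the remote database host: allow root connections from the CREAM CE.
# Replace ce.mydomain and <MYSQL_PASSWORD> with your actual values.
mysql -u root -p <<'EOF'
GRANT ALL PRIVILEGES ON *.* TO 'root'@'ce.mydomain' IDENTIFIED BY '<MYSQL_PASSWORD>' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EOF
```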

0.2.7 Configuration of the CREAM CLI

The CREAM CLI is part of the EMI-UI. To configure it please refer to https://twiki.cern.ch/twiki/bin/view/EMI/EMIui#Client_Installation_Configuratio.

0.2.8 Configurations possible only manually

yaim allows setting the most important parameters (via yaim variables) related to the CREAM-CE. It is then possible to tune some other attributes by manually editing the relevant configuration files.

Please note that:

  • After having manually modified a configuration file, it is necessary to restart the service
  • Manual changes done in the configuration files are overwritten by subsequent yaim reconfigurations
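
As an illustration of such a manual tuning cycle (the configuration file name below is an assumption; adjust it to the attribute you actually need to change):

```shell
# Manually tune an attribute not covered by yaim, then restart the service.
vi /etc/glite-ce-cream/cream-config.xml
service tomcat5 restart
# Note: a later yaim reconfiguration will overwrite this manual change.
```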


0.3 Batch system integration

0.3.1 Torque

0.3.1.1 Installation

 

-- MassimoSgaravatto - 2011-12-20

 