
System Administrator Guide for CREAM for EMI-2 release

1 Installation and Configuration

1.1 Prerequisites

1.1.1 Operating system

The following operating systems are supported:

  • SL5 64 bit
  • TBC

It is assumed that the operating system is already properly installed.

1.1.2 Node synchronization

A general requirement for the Grid nodes is that they are synchronized. This requirement may be fulfilled in several ways. One of the most common is to use the NTP protocol with a time server.
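
For example, on an SL5 node you can check that the ntpd service is running and that it is actually tracking a time server (the configured servers are site-specific):

service ntpd status
ntpq -p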

1.1.3 Cron and logrotate

Many components deployed on the CREAM CE rely on the presence of cron (including support for the /etc/cron.* directories) and logrotate. You should make sure these utilities are available on your system.
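
For example, on an RPM-based system such as SL5 a quick check could be the following (package names may differ on other distributions, where e.g. cronie may replace vixie-cron):

rpm -q vixie-cron logrotate
ls /etc/cron.d /etc/cron.daily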

1.1.4 Batch system

If you plan to use Torque as the batch system for your CREAM CE, it will be installed and configured along with the middleware (i.e. you don't have to install and configure it in advance).

If you plan to use LSF as the batch system for your CREAM CE, you have to install and configure it before installing and configuring the CREAM software. Since LSF is commercial software, it cannot be distributed together with the middleware.

If you plan to use GE as the batch system for your CREAM CE, you have to install and configure it before installing and configuring the CREAM software. The CREAM CE integration was tested with GE 6.2u5, but it should work with any forked version of the original GE software.

Support for the batch system software itself is out of the scope of this activity.

More information about batch system integration is available in the relevant section.

1.2 Plan how to deploy the CREAM CE

1.2.1 CREAM CE and gLite-cluster

glite-CLUSTER is a node type that can publish information about clusters and subclusters in a site, referenced by any number of compute elements. In particular, it makes it possible to deal with sites having multiple CREAM CE nodes and/or multiple subclusters (i.e. disjoint sets of worker nodes, each set having sufficiently homogeneous properties).

In Glue1, batch system queues are represented through GlueCE object classes. Each GlueCE refers to a Cluster, which can be composed of one or more SubClusters. However, the gLite WMS requires the publication of exactly one SubCluster per Cluster (and hence per batch queue).
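
For example, the GlueCE objects published by a site can be inspected by querying the site BDII (the hostname below is a placeholder for your site BDII):

ldapsearch -x -h site-bdii.yoursite.domain -p 2170 -b o=grid '(objectClass=GlueCE)' GlueCEUniqueID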

Thus sites with heterogeneous hardware have two possible choices:

  • publish a SubCluster with a representative/minimum hardware description (e.g. the minimum memory on any node)
  • define separate batch queues for each hardware configuration, e.g. low/high memory queues, and attach the corresponding GlueCE objects to separate Cluster/SubCluster pairs. For attributes with discrete values, e.g. SL4 vs SL5, this second option is the only one which makes sense.

However, without the use of glite-CLUSTER, YAIM allows configuring only a single Cluster per CREAM head node.

A second problem, addressed by glite-CLUSTER, arises for larger sites which install multiple CE head nodes submitting to the same batch queues for redundancy or load balancing. Without the use of glite-CLUSTER, YAIM generates a separate Cluster/SubCluster pair for each head node even though they all describe the same hardware. This causes no problems for job submission, but by default it would overcount the installed capacity at the site by a multiple of the number of SubClusters. The workaround, before the introduction of glite-CLUSTER, was to publish zero values for the installed capacity from all but one of the nodes (but this is clearly far from being an ideal solution).

The glite-CLUSTER node addresses this issue. It contains a subset of the functionality incorporated in the CREAM node types: the publication of the Glue1 GlueCluster and its dependent objects, the publication of the Glue1 GlueService object for the GridFTP endpoint, and the directories which store the RunTimeEnvironment tags, together with the YAIM functions which configure them.

So, gLite-CLUSTER should be considered:

  • if in the site there are multiple CE head nodes, and/or
  • if in the site there are multiple disjoint sets of worker nodes, each set having sufficiently homogeneous properties

When configuring glite-CLUSTER, please consider that:

  • There should be one cluster for each set of worker nodes having sufficiently homogeneous properties
  • There should be one subcluster for each cluster
  • Each batch system queue should refer to the WNs of a single subcluster

glite-CLUSTER can be deployed on the same host as a CREAM-CE or on a different one.

The following deployment models are possible for a CREAM-CE:

  • CREAM-CE can be configured without worrying about the glite-CLUSTER node. This can be useful for small sites that don't want to worry about cluster/subcluster configurations because they have a very simple setup. In this case the CREAM-CE will publish a single cluster/subcluster. This is called no cluster mode. As described below, this is done by defining the yaim setting CREAMCE_CLUSTER_MODE=no (or by not defining that variable at all).
  • CREAM-CE can work in cluster mode using the glite-CLUSTER node type. As described below, this is done by defining the yaim setting CREAMCE_CLUSTER_MODE=yes (see the example after this list). The CREAM-CE can be on the same host as the glite-CLUSTER node or on a different one.
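
For example, the relevant fragment of siteinfo.def for cluster mode is simply:

CREAMCE_CLUSTER_MODE=yes

while for no cluster mode the variable can be set to no or left undefined:

CREAMCE_CLUSTER_MODE=no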

More information about glite-CLUSTER can be found at https://twiki.cern.ch/twiki/bin/view/LCG/CLUSTER and in this note.

Information concerning glue2 publication is available here.

1.2.2 Define a DNS alias to refer to a set of CREAM CEs

In order to distribute the load of job submissions, it is possible to deploy multiple CREAM CE head nodes referring to the same set of resources. As explained in the previous section, this should be implemented with:

  • a gLite-CLUSTER node
  • multiple CREAM CEs configured in cluster mode

It is then also possible to define a DNS alias to refer to the set of CREAM head nodes: after the initial contact from outside clients to the CREAM-CE alias name for job submission, all further actions on that job are based on the jobid, which contains the physical hostname of the CREAM-CE to which the job was submitted. This makes it possible to switch the DNS alias in order to distribute load.

The alias shouldn't be published in the information service, but should be simply communicated to the relevant users.

There are various techniques to change an alias entry in the DNS. The choice depends strongly on the way the network is set up and managed. For example, at DESY a self-written service called POISE is used; using metrics (which take into account, in particular, the load and the sandbox size) it decides which physical instance the alias should point to. Another possibility to define aliases is to use commercial network techniques such as F5.
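
As an illustration, for a site managing its own DNS zone such an alias is typically a CNAME record; in BIND zone-file syntax it could look like this (hostnames are hypothetical):

creamce.yoursite.domain.   IN   CNAME   cream-01.yoursite.domain.

Switching the alias to another head node then just means updating the CNAME target and reloading the zone.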

It must be noted that, as observed by DESY sysadmins, the propagation of alias (CNAME) records is not handled uniformly among DNS servers. Therefore changes of an alias can sometimes take hours to be propagated to other sites.

The use of an alias for job submission is a good solution to improve load balancing and availability of the service (the unavailability of a physical CREAM CE is hidden by the use of the alias). It must however be noted that:

  • The list operation (glite-ce-job-list command of the CREAM CLI) issued on an alias returns the identifiers of the jobs submitted to the physical instance currently pointed to by the alias, and not the identifiers of all the jobs submitted to all CREAM CE instances (see the example after this list)
  • The operations to be done on all jobs (e.g. cancel all jobs, return the status of all jobs, etc.), i.e. the ones issued using the options -a -e of the CREAM CLI, when issued on an alias refer just to the CREAM physical instance currently pointed to by the alias (and not to all CREAM CE instances)
  • The use of an alias is not supported for submissions through the gLite-WMS
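
For example (using a hypothetical alias name; 8443 is the standard CREAM port), the following command only lists the jobs present on the physical CE currently behind the alias:

glite-ce-job-list creamce.yoursite.domain:8443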

1.2.3 Choose the authorization model

The CREAM CE can be configured to use one of the following authorization systems:

  • the ARGUS authorization framework

OR

  • the grid Java Authorization Framework (gJAF)

In the former case an ARGUS box (recommended to be at site level: it can of course serve multiple CEs of that site), where the policies for the CREAM CE box are defined, is needed.

To use ARGUS as authorization system, yaim variable USE_ARGUS must be set in the following way:

USE_ARGUS=yes

In this case it is also necessary to set the following yaim variables:

  • ARGUS_PEPD_ENDPOINTS: the endpoint of the ARGUS box (e.g. "https://cream-43.pd.infn.it:8154/authz")
  • CREAM_PEPC_RESOURCEID: the id of the CREAM CE in the ARGUS box (e.g. "http://pd.infn.it/cream-18")
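
Putting it together, the relevant siteinfo.def fragment would look like this (with the example values above):

USE_ARGUS=yes
ARGUS_PEPD_ENDPOINTS="https://cream-43.pd.infn.it:8154/authz"
CREAM_PEPC_RESOURCEID="http://pd.infn.it/cream-18"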

If instead gJAF should be used as authorization system, yaim variable USE_ARGUS must be set in the following way:

USE_ARGUS=no

1.2.4 Choose the BLAH BLparser deployment model

The BLAH Blparser is the component of the CREAM CE responsible for notifying CREAM about job status changes.

For LSF and PBS/Torque it is possible to configure the BLAH blparser in two ways:

  • The new BLAH BLparser, which relies on the status/history batch system commands
  • The old BLAH BLparser, which parses the batch system log files

For GE and Condor, only the configuration with the new BLAH blparser is possible.

1.2.4.1 New BLAH Blparser

The new Blparser runs on the CREAM CE machine and is automatically installed when installing the CREAM CE. The configuration of the new BLAH Blparser is done when configuring the CREAM CE (i.e. it is not necessary to configure the Blparser separately from the CREAM CE).

To use the new BLAH blparser, it is just necessary to set:

BLPARSER_WITH_UPDATER_NOTIFIER=true

in the siteinfo.def (this is the default value) and then configure the CREAM CE.
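
For example, for a CREAM CE using Torque, the configuration is typically run as follows (the exact list of node types depends on your deployment; check the yaim documentation for your setup):

/opt/glite/yaim/bin/yaim -c -s siteinfo.def -n creamCE -n TORQUE_server -n TORQUE_utils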

The new BLParser doesn't parse the log files. However, the bhist (for LSF) and tracejob (for Torque) commands used by the new BLParser require the batch system log files, which therefore must be available (e.g. via NFS) on the CREAM CE node. Actually, for Torque the blparser uses tracejob (which requires the log files) only when qstat can't find the job anymore. This can happen if the job completed more than keep_completed seconds ago and the blparser was not able to detect earlier that the job had completed/been cancelled/etc., e.g. because keep_completed is too short or because the BLAH blparser didn't run for a while for whatever reason. If the log files are not available when the tracejob command is issued (for the reasons specified above), the BLAH blparser will not be able to find the job, which will be considered "lost" (DONE-FAILED wrt CREAM).
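
For example, keep_completed can be inspected and, if needed, raised on the Torque server via qmgr (300 seconds is just an illustrative value; choose one appropriate for your site):

qmgr -c 'print server' | grep keep_completed
qmgr -c 'set server keep_completed = 300'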

The init script of the new Blparser is /etc/init.d/glite-ce-blah-parser. Please note that it is not necessary to explicitly start the new blparser: when CREAM is started, it also starts the new BLAH Blparser if it is not already running.

When the new Blparser is running, you should see the following two processes on the CREAM CE node:

  • /usr/bin/BUpdaterxxx (where xxx depends on the batch system, e.g. BUpdaterPBS for Torque)
  • /usr/bin/BNotifier
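
For example, a quick check that both daemons are alive:

ps -ef | egrep 'BUpdater|BNotifier'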

Please note that the tomcat user on the CREAM CE must be allowed to issue the relevant status/history commands (for Torque: qstat and tracejob; for LSF: bhist and bjobs). Some sites configure the batch system so that users can only see their own jobs, e.g. in Torque:

set server query_other_jobs = False

If this is done at the site, then the tomcat user will need a special privilege in order to be exempt from this setting, e.g. in Torque:

set server operators += tomcat@creamce.yoursite.domain

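Both Torque settings shown above are applied with qmgr on the batch system server, e.g.:

qmgr -c 'set server operators += tomcat@creamce.yoursite.domain'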

1.2.4.2 Old BLAH Blparser

The old BLAH blparser must be installed on a machine where the batch system log files are available (let's call this host BLPARSER_HOST). The BLPARSER_HOST can therefore be the batch system master or a different machine where the log files are available (e.g. because they have been exported via NFS). There are two possible layouts:

  • The BLPARSER_HOST is the CREAM CE host
  • The BLPARSER_HOST is different from the CREAM CE host

If the BLPARSER_HOST is the CREAM CE host, after having installed and configured the CREAM CE, it is necessary to configure the old BLAH Blparser (as explained below) and then to restart tomcat.

If the BLPARSER_HOST is different from the CREAM CE host, after having installed and configured the CREAM CE it is necessary:

  • to install the old BLAH BLparser software on this BLPARSER_HOST as explained below
  • to configure the old BLAH BLparser
  • to restart tomcat on the CREAM-CE

On the CREAM CE, to use the old BLAH blparser, it is necessary to set:

BLPARSER_WITH_UPDATER_NOTIFIER=false

in the siteinfo.def before configuring via yaim.

1.2.5 Deployment models for CREAM databases

-- MassimoSgaravatto - 2011-12-20
