INFNGRID Installation and Configuration Guide for gLite 3.2 SL5 x86_64

This document is a complete guide to the installation and configuration of the INFNGRID profiles aligned with gLite middleware version 3.2 on SL5 (or RHEL5 clones) for the x86_64 architecture (no i386 profiles have been deployed so far).

Currently only a few profiles have been ported to SL5 x86_64 and integrated in the INFNGRID release. At the following link you can find the current status of the gLite and INFNGRID porting:

If you find errors in this document, please open a ticket to the "Release & Documentation" group using the INFNGRID trouble ticketing system.

Released profiles:

IMPORTANT NOTE: Please be aware that the profile ig_GRIDFTP is obsoleted by ig_SE_storm_gridftp; an upgrade is not possible, only fresh installations of ig_SE_storm_gridftp are supported.

Below is an updated list of the currently deployed profiles with the related metapackage and nodetype names:

| *Profiles* | *INSTALLATION Metapackages* | *CONFIGURATION Nodetypes* | *Release date* | *Required in a grid site?* |
| ARGUS | ''ig_ARGUS'' | ''ig_ARGUS_server'' or ''ARGUS_server'' | 26/02/2010 | *NO* |
| BDII Site | ''ig_BDII_site'' | ''ig_BDII_site'' | 04/08/2010 | *YES* |
| BDII Top | ''ig_BDII_top'' | ''ig_BDII_top'' | 05/10/2010 | *NO* |
| CREAM CE | ''ig_CREAM'', ''ig_CREAM_LSF'', ''ig_CREAM_torque'' | ''ig_CREAM'', ''ig_CREAM_LSF'', ''ig_CREAM_torque'' | 09/02/2010 | *YES* |
| FTA_oracle | ''ig_FTA_oracle'' | ''ig_FTA_oracle'' | 21/07/2010 | *NO* |
| FTM | ''ig_FTM'' | ''ig_FTM'' | 21/07/2010 | *NO* |
| FTS_oracle | ''ig_FTS_oracle'' | ''ig_FTS_oracle'' | 21/07/2010 | *NO* |
| GLEXEC_wn | ''ig_GLEXEC_wn'' | ''ig_GLEXEC_wn'' | 18/12/2009 | *NO* |
| HLR | ''ig_HLR'' | ''ig_HLR'' | Not released | *NO* |
| LB | ''ig_LB'' | ''ig_LB'' | 28/04/2010 | *NO* |
| LFC | ''ig_LFC_mysql'', ''ig_LFC_oracle'' | ''ig_LFC_mysql'', ''ig_LFC_oracle'' | 22/10/2009, 08/02/2010 | *NO* |
| SGE_utils | ''ig_SGE_utils'' | ''ig_SGE_utils'' | 03/05/2010 | *NO* |
| SE dCache | ''ig_SE_dcache_info'', ''ig_SE_dcache_nameserver_chimera'', ''ig_SE_dcache_pool'', ''ig_SE_dcache_srm'' | ''ig_SE_dcache_info'', ''ig_SE_dcache_nameserver_chimera'', ''ig_SE_dcache_pool'', ''ig_SE_dcache_srm'' | 10/06/2011 | *NO* |
| SE DPM | ''ig_SE_dpm_mysql'', ''ig_SE_dpm_disk'' | ''ig_SE_dpm_mysql'', ''ig_SE_dpm_disk'' | 22/10/2009 | *NO* |
| SE STORM | ''ig_SE_storm_backend'', ''ig_SE_storm_frontend'', ''ig_SE_storm_checksum'', ''ig_SE_storm_gridftp'' | ''ig_SE_storm_backend'', ''ig_SE_storm_frontend'', ''ig_SE_storm_checksum'', ''ig_SE_storm_gridftp'' | 23/03/2011 | *NO* |
| UI | ''ig_UI'', ''ig_UI_noafs'' | ''ig_UI'', ''ig_UI_noafs'' | 27/07/2009 | *NO*, but recommended |
| VOBOX | ''ig_VOBOX'' | ''ig_VOBOX'' | 22/10/2009 | *NO* |
| VOMS_mysql | ''ig_VOMS_mysql'' | ''ig_VOMS_mysql'' | 03/05/2010 | *NO* |
| VOMS_oracle | ''ig_VOMS_oracle'' | ''ig_VOMS_oracle'' | 18/08/2010 | *NO* |
| WN | ''ig_WN'', ''ig_WN_noafs'', ''ig_WN_LSF'', ''ig_WN_LSF_noafs'', ''ig_WN_torque'', ''ig_WN_torque_noafs'' | ''ig_WN'', ''ig_WN_noafs'', ''ig_WN_LSF'', ''ig_WN_LSF_noafs'', ''ig_WN_torque'', ''ig_WN_torque_noafs'' | 24/07/2009 | *YES* |

Please keep in mind the difference between the following three concepts, because of their different scopes and uses (see the example after the list):

  • profile => the word we use to refer //generically// to a service
  • metapackage => the word we use during the //installation// phase
  • nodetype => the word we use during the //configuration// phase
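
For example, for a Torque worker node the metapackage and nodetype names happen to coincide; here is a minimal sketch of the two phases (the site-info.def path is just an example, adapt it to your <confdir>):

 # installation phase: pass the METAPACKAGE name to yum
 yum install ig_WN_torque

 # configuration phase: pass the NODETYPE name to yaim
 /opt/glite/yaim/bin/yaim -c -s /root/siteinfo/site-info.def -n ig_WN_torque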

Documentation references:

Please also refer to these gLite official guides:

 

Installation

 

OS installation

Install SL5 using an SL5.X repository (CERN mirror) or one of the supported OSes (RHEL5 clones).

 
 hostname -f

It should print the fully qualified domain name (e.g. prod-ce.mydomain.it). Correct your network configuration if it prints only the hostname without the domain. If you are installing WNs on a private network, the command must return the external FQDN for the CE and the SE (e.g. prod-ce.mydomain.it) and the internal FQDN for the WNs (e.g. node001.myintdomain).
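
If hostname -f does not return the FQDN, a common fix is to list the FQDN before the short alias in /etc/hosts. A minimal sketch (the IP address and names are examples only):

 # /etc/hosts - the FQDN must come before the short alias
 <IP-address>   prod-ce.mydomain.it   prod-ce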


Repository Settings

For more details about the repositories, have a look at this link: Repository Specifications

Where <metapackage> is one of those reported in the table above (Metapackages column).
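
For example, to install a site BDII (any other metapackage from the table works the same way):

 yum install ig_BDII_site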
IMPORTANT NOTE: When you are installing ig_CREAM, ig_CREAM_LSF or ig_CREAM_torque, add an exclude line to the .repo files, e.g. in /etc/yum.repos.d/slc5-updates.repo or /etc/yum.repos.d/sl5-security.repo:
exclude=c-ares
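
As a sketch, the exclude line goes inside the repository stanza of the .repo file (the baseurl below is a placeholder, not the real mirror URL):

 # /etc/yum.repos.d/slc5-updates.repo
 [slc5-updates]
 name=SLC5 updates
 baseurl=http://<your-mirror>/slc5X/x86_64/updates/
 enabled=1
 exclude=c-ares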

Special cases

CE and Batch Server configuration (access to log files)

It doesn't matter what kind of deployment you have (batch-system master on a different machine than the CE, whether CREAM, TORQUE or LSF, or on the same one): you have to make sure the CE has access to the batch-system log files. You must set up a mechanism to ''transfer'' the accounting logs to the CE (a sketch of the NFS option follows the list):

  • through NFS (don't forget to set $BATCH_LOG_DIR and $DGAS_ACCT_DIR in the <your-site-info.def> configuration file)
  • through a daily cron job copying them to the directory defined by $BATCH_LOG_DIR and $DGAS_ACCT_DIR in the <your-site-info.def> configuration file
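
A minimal sketch of the NFS option for a Torque batch master (the hostnames are examples; the paths match the Torque defaults used later in this guide):

 # on the batch master: /etc/exports - export the log directory read-only to the CE
 /var/torque   prod-ce.mydomain.it(ro,sync)

 # on the CE: /etc/fstab - mount it at the path set in <your-site-info.def>
 batch-master.mydomain.it:/var/torque   /var/torque   nfs   ro   0 0

with, in <your-site-info.def>:

 BATCH_LOG_DIR="/var/torque"
 DGAS_ACCT_DIR="/var/torque/server_priv/accounting"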

CREAM_torque (multiple CEs and single Batch Server)

This configuration requires particular attention both during the metapackage installation and during the nodetype configuration.

Install the first CE (which will also act as Batch Server) following the usual procedure:

yum install ig_CREAM_torque

Install the other (secondary) CEs, without the batch-server software, as follows:

yum install ig_CREAM glite-TORQUE_utils

Standalone TORQUE_server

For the standalone TORQUE server, use the following .repo files, glite-torque_server.repo and glite-torque_utils.repo, and run:

yum install glite-TORQUE_server glite-TORQUE_utils

For the CEs (CREAM CEs), add the glite-torque_utils.repo file and run:

yum install ig_CREAM glite-TORQUE_utils

Please pay attention also to the configuration for this special case.

Batch system installation (only for WN)

LSF server/client installation must be done *manually*, whereas Torque server/client installation is included in the metapackage.

 

Configuration

Configuration files

The optional folders are created to allow system administrators to organise their configurations in a more structured way.
IMPORTANT NOTE:
If your site intends to support more VOs than the default ones, you should have a look at Whole site: How to enable a VO, especially for the enmr.eu VO; once the configuration is finished you should follow "extra_configuration".

Default files

Variables that have a meaningful default value are distributed in files under the ''/opt/glite/yaim/defaults/'' directory; they don't need to be changed unless you are an advanced user and know what you are doing. The files are:

  • ''ig-site-info.pre'';
  • ''ig-site-info.post'';
  • ''<node-type>.pre'', ''glite-<node-type>.pre'', ''ig-<node-type>.pre'';
  • ''<node-type>.post'', ''glite-<node-type>.post'', ''ig-<node-type>.post''.

If you really need to change one of these variables, you don't have to edit the default files: just set the same variable in site-info.def, which will override the value declared there. See the configuration flow in YAIM in the next section.
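
As a minimal sketch (the variable name and value are purely illustrative), an override is a one-line addition:

 # <confdir>/site-info.def
 # overrides the value shipped in /opt/glite/yaim/defaults/*.pre
 SOME_DEFAULTED_VAR="my-custom-value"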

Configuration flow

This is the order in which the different configuration files are sourced (''<confdir>'' is the configuration folder, i.e. the directory containing ''<your-site-info.def>''); a sketch of the resulting layout follows the list:

  1. defaults ''.pre'' files in ''/opt/glite/yaim/defaults/'';
  2. ''<confdir>/<your-site-info.def>'';
  3. service-specific files in ''<confdir>/services/'';
  4. defaults ''.post'' files in ''/opt/glite/yaim/defaults/'';
  5. node-specific files in ''<confdir>/nodes/'';
  6. VO-specific files in ''<confdir>/vo.d/'';
  7. function files in ''/opt/glite/yaim/node-info.d/'';
  8. VO-specific group settings in ''<confdir>/group.d/*''.
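
As an illustration, a typical ''<confdir>'' layout consistent with the flow above (the subdirectory contents are examples):

 <confdir>/
 |-- site-info.def
 |-- services/      # service-specific files, e.g. glite-bdii_site
 |-- nodes/         # node-specific files
 |-- vo.d/          # one file per VO
 `-- group.d/       # VO-specific group settings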

Configuration variables

General

In this documentation all the INFNGRID variables, plus some important gLite variables, that can be configured in ''<your-site-info.def>'' are listed in alphabetical order (links to the gLite variables are also provided):

  • C = compulsory, if you are going to configure that type of node;
  • O = optional.

For the other gLite variables please consider the official "site-info.def" information at "YAIM 4 guide for sysadmins".

Generic

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''BASE_SW_DIR'' | O | Directory exported for SGM (it will be mounted by the WNs on ''VO_SW_DIR'', see below). Comment it out if you have your own mounting tool. | 3.0.1-0 |
| ''CE_INT_HOST'' | O | If PRIVATE_NETWORK=true, uncomment and write the internal FQDN hostname of the CE. | 3.0.1-0 |
| ''CE_LOGCPU'' | C | Total number of logical CPUs in the system (i.e. number of cores/hyperthreaded CPUs). | 4.0.3-0 |
| ''CE_OS'' | C | OS type. Set using the output of # lsb_release -i &#124; cut -f2 run on your WNs (e.g. ScientificSL). More details here: "How to publish the OS name". | 3.0.1-0 |
| ''CE_OS_ARCH'' | C | OS architecture. Set using the output of # uname -m run on your WNs (e.g. i686). More details here: [[http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_my_machine_architecture][How to publish my machine architecture]]. | 3.0.1-0 |
| ''CE_OS_RELEASE'' | C | OS release. Set using the output of # lsb_release -r &#124; cut -f2 run on your WNs (e.g. 4.5). More details here: "How to publish the OS name". | 3.0.1-0 |
| ''CE_OS_VERSION'' | C | OS version. Set using the output of # lsb_release -c &#124; cut -f2 run on your WNs (e.g. Beryllium). More details here: "How to publish the OS name". | 3.0.1-0 |
| ''CE_PHYSCPU'' | C | Total number of physical CPUs in the system (i.e. number of sockets). ATTENTION: if you have more than one CE and shared WNs, set this variable to "0". More details here: "GIISQuery_Usage". | 4.0.3-0 |
| ''HOST_SW_DIR'' | O | Host exporting the directory for SGM (usually a CE or an SE). | 3.0.1-0 |
| ''INT_NET'' | O | If PRIVATE_NETWORK=true, uncomment and write your internal network. | 3.0.1-0 |
| ''INT_HOST_SW_DIR'' | O | If PRIVATE_NETWORK=true, uncomment and write the internal hostname of the host exporting the directory used to install the application software. | 3.0.1-0 |
| ''MY_INT_DOMAIN'' | O | If PRIVATE_NETWORK=true, uncomment and write the internal domain name. | 3.0.1-0 |
| ''NTP_HOSTS_IP'' | C | Space-separated list of the IP addresses of the NTP servers (preferably a local NTP server and a public one, e.g. pool.ntp.org). | 3.0.1-0 |
| ''PRIVATE_NETWORK'' | O | Set PRIVATE_NETWORK=true to use WNs on a private network. | 3.0.1-0 |
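
A minimal sketch of the private-network variables in ''<your-site-info.def>'' (all values are examples):

 PRIVATE_NETWORK="true"
 MY_INT_DOMAIN="myintdomain"
 INT_NET="192.168.0"
 CE_INT_HOST="prod-ce.myintdomain"
 NTP_HOSTS_IP="192.168.0.1 192.0.2.123"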

Batch server

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''BATCH_LOG_DIR'' | C | Path of the batch-system log files: "/var/torque" (for Torque); "/lsf_install_path/work/cluster_name/logdir" (for LSF). In case of a separate batch master (not on the same machine as the CE), PLEASE make sure these directories are READABLE from the CEs. | 3.0.1-0 |
| ''BATCH_CONF_DIR'' | C | Only for LSF. Set the path where the ''lsf.conf'' file is located. | 3.0.1-0 |
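
For example, on an LSF site both variables could look like this in ''<your-site-info.def>'' (the installation path is illustrative):

 BATCH_LOG_DIR="/usr/share/lsf/work/mycluster/logdir"
 BATCH_CONF_DIR="/usr/share/lsf/conf"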

BDII Site

Specific variables are in the configuration file ''/opt/glite/yaim/examples/siteinfo/services/glite-bdii_site''.

Please copy that file into your ''<confdir>/services'' directory and edit it (have a look at "YAIM configuration files").
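
A sketch of the copy step (''<confdir>'' stands for your actual configuration directory):

 mkdir -p <confdir>/services
 cp /opt/glite/yaim/examples/siteinfo/services/glite-bdii_site <confdir>/services/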

* gLite BDII Site variables

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''BDII__URL'' | C | If you are configuring a 3.1 node, change the port to ''2170'' and ''mds-vo-name'' to ''resource''. For example: BDII_CE_URL="ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid" | 3.0.1-0 |
| ''CLOSE_SE_HOST'' | C | The close SE for the site, chosen among the site's SEs. | 4.0.3-0 |
| ''SITE_BDII_HOST'' | C | Host name of the site BDII. | 3.0.1-0 |
| ''SITE_DESC'' | C | A long-format name for your site. | 4.0.4-0 |
| ''SITE_SECURITY_EMAIL'' | C | Contact email for security. | 4.0.4-0 |
| ''SITE_OTHER_GRID'' | C | Grid to which your site belongs, i.e. WLCG or EGEE. Use: SITE_OTHER_GRID="EGEE" | 4.0.4-0 |
| ''SITE_OTHER_EGEE_ROC'' | C | Agree within your ROC what this field should be. Use: SITE_OTHER_EGI_ROC="Italy" | 4.0.4-0 |
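
Putting the table together, a hedged sketch of the site-BDII settings (all values are site-specific examples; whether each variable goes in ''<your-site-info.def>'' or in ''<confdir>/services/glite-bdii_site'' depends on your YAIM layout):

 SITE_BDII_HOST="prod-bdii.mydomain.it"
 SITE_DESC="INFN My Site"
 SITE_SECURITY_EMAIL="grid-sec@mydomain.it"
 SITE_OTHER_GRID="EGEE"
 SITE_OTHER_EGI_ROC="Italy"
 CLOSE_SE_HOST="prod-se.mydomain.it"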

BDII Top

* gLite BDII Top variables

By default, a BDII_top is configured to contact the GOC_DB. If you want to modify this behaviour, after the configuration you have to edit the /etc/glite/glite-info-update-endpoints.conf file using the following settings:

[configuration]
EGI = FALSE
OSG = FALSE
manual = True
manual_file = /opt/glite/yaim/config/<file_containing_your_sites.conf>
output_file = /opt/glite/etc/gip/top-urls.conf
cache_dir = /var/cache/glite/glite-info-update-endpoints 

The file containing the list of sites "known" to the BDII_top is updated every hour by the cron job /etc/cron.hourly/glite-info-update-endpoints.

DGAS services on CE (CREAM CE)

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''DGAS_ACCT_DIR'' | C | Path of the batch-system log files. For Torque/PBS: DGAS_ACCT_DIR=/var/torque/server_priv/accounting ; for LSF: DGAS_ACCT_DIR=lsf_install_path/work/cluster_name/logdir | 4.0.8-3 |
| ''DGAS_IGNORE_JOBS_LOGGED_BEFORE'' | O | Bound date for backward processing of jobs: jobs logged before this date are not considered. Use the format as in DGAS_IGNORE_JOBS_LOGGED_BEFORE="2007-01-01". Default value: ''2008-01-01''. | 4.0.2-9 |
| ''DGAS_JOBS_TO_PROCESS'' | O | Type of jobs the CE has to process. ATTENTION: set "all" on the main CE of the site (the one with the best hardware), "grid" on the others. Default value: ''all''. | 4.0.2-9 |
| ''DGAS_HLR_RESOURCE'' | C | Hostname of the reference Resource HLR. There is no need to specify the port as in previous yaim versions (the default value "''56568''" will be set by yaim). | 4.0.2-9 |
| ''DGAS_USE_CE_HOSTNAME'' | O | Only for LSF. Main CE of the site. ATTENTION: set this variable only for a site with a single LSF batch master (no need in the Torque case) where there is more than one CE or local submission host (i.e. hosts from which you may submit jobs directly to the batch system). In this case ''DGAS_USE_CE_HOSTNAME'' must be set to the same value on all hosts sharing the LRMS; the value can be arbitrarily chosen among these submitting hostnames (you may choose the best one). Otherwise leave it commented. Example: DGAS_USE_CE_HOSTNAME="my-ce.my-domain" | 4.0.2-9 |
| ''DGAS_MAIN_POLL_INTERVAL'' | O | UR box parse interval: once all jobs have been processed, seconds to wait before looking for new jobs in the UR box. Default value: "5". | 4.0.11-4 |
| ''DGAS_JOB_PER_TIME_INTERVAL'' | O | Number of jobs to process at each processing step (several steps per mainPollInterval, depending on the number of jobs found in chocolateBox). Default value: "40". | 4.0.11-4 |
| ''DGAS_TIME_INTERVAL'' | O | Time in seconds to sleep after each processing step (if there are still jobs to process; otherwise a new mainPollInterval starts). Default value: "3". | 4.0.11-4 |
| ''DGAS_QUEUE_POLL_INTERVAL'' | O | Garbage clean-up interval in seconds. Default value: "2". | 4.0.11-4 |
| ''DGAS_SYSTEM_LOG_LEVEL'' | O | Log verbosity ("systemLogLevel"), from 0 (no logging) to 9 (maximum verbosity). Default value: "7". | 4.0.11-4 |
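
A minimal sketch of the compulsory DGAS variables for a Torque site in ''<your-site-info.def>'' (the HLR hostname is an example):

 DGAS_ACCT_DIR="/var/torque/server_priv/accounting"
 DGAS_HLR_RESOURCE="prod-hlr.mydomain.it"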

GLEXEC_wn

* gLite GLEXEC variables

Functional Description:

* The gLExec system, used in combination with the LCAS site-local authorization system and the LCMAPS local credential mapping service, provides an integrated solution for site access control to grid resources. With the introduction of gLExec, the submission model can be extended beyond the traditional gatekeeper models, where authorization and credential mapping only take place at the site's 'edge'. While retaining consistency in access control, gLExec allows a larger variety of job submission and management scenarios, including per-VO schedulers on the site and the late binding of workload to job slots, where gLExec is invoked by pilot jobs on the worker node. It is also the mapping ingredient of a new generation of resource access services, like CREAM.

More details here.

UI

* gLite UI variables

 