%TOC%

---+ IGI Installation and Configuration Guide for gLite 3.2 SL5 x86_64

This document is a complete guide for the installation and configuration of INFNGRID profiles aligned with gLite middleware version 3.2 on SL5 (or RHEL5 clones) for the x86_64 architecture (no i386 profiles have been deployed up to now). Currently only a few profiles have been ported to *SL5 x86_64* and integrated in the INFNGRID release. At the following links you can find the current status of the gLite and INFNGRID porting:

   * [[https://twiki.cern.ch/twiki/bin/view/EGEE/SL5Planning][gLite Node Tracker]]
   * [[ig-node-tracker][IGI Node Tracker]]

If you find errors in this document please open a ticket to the "Release & Documentation" group using the [[https://ticketing.cnaf.infn.it/][INFNGRID trouble ticketing system]].

*Released profiles:*

*IMPORTANT NOTE:* Please be aware that the profile ig_GRIDFTP is obsoleted by ig_SE_storm_gridftp; an upgrade is not possible, only fresh installations of ig_SE_storm_gridftp.

Below is an updated list of the currently deployed profiles with the related metapackage and nodetype names:

| *Profiles* | *INSTALLATION Metapackages* | *CONFIGURATION Nodetypes* | *Release date* | *Required in a grid site?* |
| *ARGUS* | ''ig_ARGUS'' | ''ig_ARGUS_server'' <BR/> or <BR/> ''ARGUS_server'' | <fc green>26/02/2010</fc> | **NO** |
| *BDII Site* | ''ig_BDII_site'' | ''ig_BDII_site'' | <fc green>04/08/2010</fc> | **YES** |
| *BDII Top* | ''ig_BDII_top'' | ''ig_BDII_top'' | <fc green>05/10/2010</fc> | **NO** |
| *CREAM CE* | ''ig_CREAM'' <BR/> ''ig_CREAM_LSF'' <BR/> ''ig_CREAM_torque'' | ''ig_CREAM'' <BR/> ''ig_CREAM_LSF'' <BR/> ''ig_CREAM_torque'' | <fc green>09/02/2010</fc> | **YES** |
| *FTA_oracle* | ''ig_FTA_oracle'' | ''ig_FTA_oracle'' | <fc green>21/07/2010</fc> | **NO** |
| *FTM* | ''ig_FTM'' | ''ig_FTM'' | <fc green>21/07/2010</fc> | **NO** |
| *FTS_oracle* | ''ig_FTS_oracle'' | ''ig_FTS_oracle'' | <fc green>21/07/2010</fc> | **NO** |
| *GLEXEC_wn* | ''ig_GLEXEC_wn'' | ''ig_GLEXEC_wn'' | <fc green>18/12/2009</fc> | **NO** |
| *HLR* | ''ig_HLR'' | ''ig_HLR'' | <fc red>Not released</fc> | **NO** |
| *LB* | ''ig_LB'' | ''ig_LB'' | <fc green>28/04/2010</fc> | **NO** |
| *LFC* | ''ig_LFC_mysql'' <BR/> ''ig_LFC_oracle'' | ''ig_LFC_mysql'' <BR/> ''ig_LFC_oracle'' | <fc green>22/10/2009 <BR/> 08/02/2010</fc> | **NO** |
| *SGE_utils* | ''ig_SGE_utils'' | ''ig_SGE_utils'' | <fc green>03/05/2010</fc> | **NO** |
| *SE dCache* | ''ig_SE_dcache_info'' <BR/> ''ig_SE_dcache_nameserver_chimera'' <BR/> ''ig_SE_dcache_pool'' <BR/> ''ig_SE_dcache_srm'' | ''ig_SE_dcache_info'' <BR/> ''ig_SE_dcache_nameserver_chimera'' <BR/> ''ig_SE_dcache_pool'' <BR/> ''ig_SE_dcache_srm'' | <fc green>10/06/2011</fc> | **NO** |
| *SE DPM* | ''ig_SE_dpm_mysql'' <BR/> ''ig_SE_dpm_disk'' | ''ig_SE_dpm_mysql'' <BR/> ''ig_SE_dpm_disk'' | <fc green>22/10/2009</fc> | **NO** |
| *SE STORM* | ''ig_SE_storm_backend'' <BR/> ''ig_SE_storm_frontend'' <BR/> ''ig_SE_storm_checksum'' <BR/> ''ig_SE_storm_gridftp'' | ''ig_SE_storm_backend'' <BR/> ''ig_SE_storm_frontend'' <BR/> ''ig_SE_storm_checksum'' <BR/> ''ig_SE_storm_gridftp'' | <fc green>23/03/2011</fc> | **NO** |
| *UI* | ''ig_UI'' <BR/> ''ig_UI_noafs'' | ''ig_UI'' <BR/> ''ig_UI_noafs'' | <fc green>27/07/2009</fc> | **NO** <BR/> but recommended |
| *VOBOX* | ''ig_VOBOX'' | ''ig_VOBOX'' | <fc green>22/10/2009</fc> | **NO** |
| *VOMS_mysql* | ''ig_VOMS_mysql'' | ''ig_VOMS_mysql'' | <fc green>03/05/2010</fc> | **NO** |
| *VOMS_oracle* | ''ig_VOMS_oracle'' | ''ig_VOMS_oracle'' | <fc green>18/08/2010</fc> | **NO** |
| *WN* | ''ig_WN'' <BR/> ''ig_WN_noafs'' <BR/> ''ig_WN_LSF'' <BR/> ''ig_WN_LSF_noafs'' <BR/> ''ig_WN_torque'' <BR/> ''ig_WN_torque_noafs'' | ''ig_WN'' <BR/> ''ig_WN_noafs'' <BR/> ''ig_WN_LSF'' <BR/> ''ig_WN_LSF_noafs'' <BR/> ''ig_WN_torque'' <BR/> ''ig_WN_torque_noafs'' | <fc green>24/07/2009</fc> | **YES** |

Please keep in mind the difference between the following three concepts because of their different scopes and
uses:

   * *profile* => we use this word to __//generically//__ refer to a service
   * *metapackage* => we use this word during the __//installation//__ phase
   * *nodetype* => we use this word during the __//configuration//__ phase

*Documentation references:*

Please also refer to these gLite official guides:

   * [[https://twiki.cern.ch/twiki/bin/view/LCG/GenericInstallGuide320][Generic Installation and Configuration Guide for gLite 3.2]]
   * [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400][YAIM 4 Guide for Sysadmins]]
   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables][YAIM Configuration Variables]]
   * [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400#Known_issues][YAIM 4 Known Issues]]

---++ *Installation*

---+++ OS installation

Install SL5 using the [[http://linuxsoft.cern.ch/scientific/5x/][SL5.X repository (CERN mirror)]] or one of the supported <acronym title="Operating System">OS</acronym> (RHEL5 clones). You may find information on the official repositories at [[http://igrelease.forge.cnaf.infn.it/doku.php?id=doc:tips:repos][Repositories for APT and YUM]]. <br /> If you want to set up a local installation server please refer to the [[http://igrelease.forge.cnaf.infn.it/doku.php?id=doc:tips:mrepo][Mrepo Quick Guide]].

*NOTE*: Please check that <em> =NTP= </em>, <em> =cron= </em> and <em> =logrotate= </em> are installed, otherwise install them!

---++++ Check the FQDN hostname

Ensure that the hostnames of your machines are correctly set. Run the command:

<pre> hostname -f</pre>

It should print the fully qualified domain name (e.g. =prod-ce.mydomain.it=). Correct your network configuration if it prints only the hostname without the domain. If you are installing WNs on a private network, the command must return the external FQDN for the CE and the SE (e.g. =prod-ce.mydomain.it=) and the internal FQDN for the WNs (e.g. =node001.myintdomain=).
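As a quick sanity check, the FQDN requirement above can be scripted; the following is a minimal sketch (the =is_fqdn= helper name is our own, and the check is simply "does the name contain a domain part"):

```shell
# Sketch: warn if this host's name is not fully qualified.
# is_fqdn is a hypothetical helper, not part of gLite/YAIM.
is_fqdn() {
  case "$1" in
    *.*) return 0 ;;   # contains a dot -> has a domain part
    *)   return 1 ;;   # bare hostname, no domain
  esac
}

h=$(hostname -f 2>/dev/null || echo localhost)
if is_fqdn "$h"; then
  echo "OK: $h is fully qualified"
else
  echo "WARNING: '$h' has no domain part - fix your network configuration" >&2
fi
```

Run it on every machine (CE, SE, WNs); on WNs in a private network the printed name should be the internal FQDN.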
---+++ Repository Settings

For more details on the repositories have a look at [[http://wiki.italiangrid.org/twiki/bin/view/IGIRelease/RepositoriesSpecifications][Repository Specifications]].

With a standard installation of SL5 it is possible that you have the EPEL repository enabled. Please disable it:

<pre># mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.disabled </pre>

otherwise you'll get the following error:

<pre>Missing Dependency: libcares.so.0()(64bit) is needed by package glite-security-gss-2.0.0-3.sl5.x86_64 (glite-generic_sl5_x86_64_release) </pre>

because of the presence of a newer version of c-ares (1.4.0-1.el5). The middleware needs 1.3.0-4.sl5!

Remember to enable dag.repo: with a standard installation of SL5 it is possible that you have the DAG repository. Please check if it is enabled, and if not, enable it:

<pre>cat /etc/yum.repos.d/dag.repo
....
enabled=1
....</pre>

---++++ Common repositories

Each profile needs a set of common repositories:

| *Common repositories *x86_64** |
| [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/dag.repo][dag.repo]] |
| [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/ig.repo][ig.repo]] |
| [[http://repo-pd.italiangrid.it/mrepo/repos/egi-trustanchors.repo][egi-trustanchors.repo]] |

In order to have a working IGI service you also have to download the gLite repo files. In the table below you'll find the match from metapackages to profiles.

---++++ Profile-specific repositories

Furthermore each profile needs a set of repositories that contain the profile-related middleware.
Look at the table below to know what specific repositories your profile needs:

| *Metapackages* | *Profile-specific repositories *x86_64** |
| ig_ARGUS | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-argus.repo][glite-argus.repo]] |
| ig_BDII_top | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-bdii_top.repo][glite-bdii_top.repo]] |
| ig_BDII_site | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-bdii_site.repo][glite-bdii_site.repo]] |
| ig_CREAM <br /> ig_CREAM_LSF | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-cream.repo][glite-cream.repo]] |
| ig_CREAM_torque | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-cream_torque.repo][glite-cream_torque.repo]] |
| ig_FTA_oracle | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-fta_oracle.repo][glite-fta_oracle.repo]] |
| ig_FTM | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-ftm.repo][glite-ftm.repo]] |
| ig_FTS_oracle | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-fts_oracle.repo][glite-fts_oracle.repo]] |
| ig_GLEXEC_wn | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-glexec.repo][glite-glexec.repo]] |
| ig_HLR | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-generic.repo][glite-generic.repo]] |
| ig_LB | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-lb.repo][glite-lb.repo]] |
| ig_LFC_mysql | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-lfc_mysql.repo][glite-lfc_mysql.repo]] |
| ig_SE_dpm_mysql | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-se_dpm_mysql.repo][glite-se_dpm_mysql.repo]] |
| ig_SE_dcache_info | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-se_dcache_info.repo][glite-se_dcache_info.repo]] |
| ig_SE_dcache_nameserver_chimera | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-se_dcache_nameserver.repo][glite-se_dcache_nameserver.repo]] |
| ig_SE_dcache_pool | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-se_dcache_pool.repo][glite-se_dcache_pool.repo]] |
| ig_SE_dcache_srm | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-se_dcache_srm.repo][glite-se_dcache_srm.repo]] |
| ig_SE_storm_backend <br /> ig_SE_storm_frontend <br /> ig_SE_storm_checksum | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/ig-storm.repo][ig-storm.repo]] |
| ig_SE_storm_gridftp | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/ig-storm_gridftp.repo][ig-storm_gridftp.repo]] |
| ig_UI <br /> ig_UI_noafs | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-ui.repo][glite-ui.repo]] |
| ig_VOBOX | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-vobox.repo][glite-vobox.repo]] |
| ig_VOMS_mysql | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-voms_mysql.repo][glite-voms_mysql.repo]] |
| ig_VOMS_oracle | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-voms_oracle.repo][glite-voms_oracle.repo]] |
| ig_WN, ig_WN_noafs <br /> ig_WN_LSF, ig_WN_LSF_noafs | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-wn.repo][glite-wn.repo]] |
| ig_WN_torque, ig_WN_torque_noafs | [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-wn_torque.repo][glite-wn_torque.repo]] |

Then update your host:

<pre>yum clean all </pre>

---+++ CA installation

Install the CAs on ALL profiles:

<pre>yum install ca-policy-egi-core </pre>

---+++ Metapackage installation

Please consider that the x86_64 WN profiles have to be installed using the "groupinstall" yum command as follows:

<pre> yum groupinstall <WN_profile> </pre>

where <WN_profile> can be one of: ig_WN, ig_WN_noafs, ig_WN_torque, ig_WN_torque_noafs, ig_WN_LSF, ig_WN_LSF_noafs

or

<pre> yum groupinstall <UI_profile> </pre>

where <UI_profile> can be one of: ig_UI, ig_UI_noafs

If you are installing any other profile use:

<pre> yum install <metapackage> </pre>

where =<metapackage>= is one of those reported in the table above (_Metapackages_ column).

*IMPORTANT NOTE:* When you are installing ig_CREAM, ig_CREAM_LSF or ig_CREAM_torque, add an exclude line to the .repo file, e.g. in /etc/yum.repos.d/slc5-updates.repo or /etc/yum.repos.d/sl5-security.repo:

<pre> exclude=c-ares </pre>

---++++ Special cases

---+++++ CE and Batch Server configuration (access to log files)

No matter what kind of deployment you have, batch-system master on a different machine than the CE (CREAM, TORQUE or LSF) or on the same one, you have to be sure that you provide *access* to the *batch system log files*. You must set up a mechanism to ''transfer'' accounting logs to the CE:

   * through NFS (don't forget to set $BATCH_LOG_DIR and $DGAS_ACCT_DIR in the <your-site-info.def> configuration file)
   * through a daily cron job to the directory defined by $BATCH_LOG_DIR and $DGAS_ACCT_DIR in the <your-site-info.def> configuration file

---+++++ CREAM_torque (multiple CEs and single Batch Server)

This configuration requires particular attention both during the metapackage installation and during the nodetype configuration.
Install the first CE (which will also act as the Batch Server) following the usual procedure:

<pre> yum install ig_CREAM_torque </pre>

Install the other, secondary CEs without the batch server software as follows:

<pre> yum install ig_CREAM glite-TORQUE_utils </pre>

---+++++ Standalone TORQUE_server

You should use the following .repo files, [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-torque_server.repo][glite-torque_server.repo]] and [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-torque_utils.repo][glite-torque_utils.repo]]:

<pre> yum install glite-TORQUE_server glite-TORQUE_utils </pre>

For the CEs (creamCEs), add the [[http://repo-pd.italiangrid.it/mrepo/repos/glite/sl5/x86_64/glite-torque_utils.repo][glite-torque_utils.repo]] file and:

<pre> yum install ig_CREAM glite-TORQUE_utils </pre>

Please pay attention also to the configuration for this special case.

---+++ Batch system installation (only for WN)

The LSF server/client installation must be done **manually**, whereas the Torque server/client installation is included in the metapackage.

---++ *Configuration*

---+++ Configuration files

---++++ IGI YAIM configuration files

YAIM configuration files should be stored in a directory structure. All the involved files *HAVE* to be under the same folder =<confdir>=, in a safe place, which *is not world readable*. This directory should contain:

| *File* | *Scope* | *Details* |
| =<your-site-info.def>= | *whole-site* | List of configuration variables in the format of key-value pairs. <br /> It's a *mandatory* file. <br /> It's a parameter passed to the =ig_yaim= command. <br /> *IMPORTANT*: You should always check whether your =<your-site-info.def>= is up to date by comparing it with the latest =/opt/glite/yaim/examples/siteinfo/ig-site-info.def= template deployed with =ig-yaim= and get the differences you find.
<br /> For example you may use =vimdiff=: <pre>vimdiff /opt/glite/yaim/examples/siteinfo/ig-site-info.def <confdir>/<your-site-info.def></pre> |
| =<your-wn-list.conf>= | *whole-site* | Worker node list in the format of one =hostname.domainname= per row. <br /> It's a *mandatory* file. <br /> It's defined by the =WN_LIST= variable in =<your-site-info.def>=. |
| =<your-users.conf>= | *whole-site* | Pool account user mapping. <br /> It's a *mandatory* file. <br /> It's defined by the =USERS_CONF= variable in =<your-site-info.def>=. <br /> *IMPORTANT*: You may create =<your-users.conf>= starting from the =/opt/glite/yaim/examples/ig-users.conf= template deployed with =ig-yaim=, but you will probably have to fill it in on the basis of your site policy on uids/gids. We suggest proceeding as explained in _"[[http://igrelease.forge.cnaf.infn.it/doku.php?id=doc:use_cases:users][Whole site: How to create local users.conf and configure users]]"_. |
| =<your-groups.conf>= | *whole-site* | VOMS group mapping. <br /> It's a *mandatory* file. <br /> It's defined by the =GROUPS_CONF= variable in =<your-site-info.def>=. <br /> *IMPORTANT*: You may create =<your-groups.conf>= starting from the =/opt/glite/yaim/examples/ig-groups.conf= template deployed with =ig-yaim=. |

---++++ Additional files

Furthermore the configuration folder can contain:

| *Directory* | *Scope* | *Details* |
| =services/= | *service-specific* | It contains a file per nodetype with the name format =ig-node-type=. <br /> The file contains a list of configuration variables specific to that nodetype. <br /> Each yaim module distributes a configuration file in =/opt/glite/yaim/examples/siteinfo/services/[ig or glite]-node-type=. <br /> It's a *mandatory* directory if required by the profile and *you should copy it* under the same directory where <your-site-info.def> is. |
| =nodes/= | *host-specific* | It contains a file per host with the name format =hostname.domainname=.
<br /> The file contains host-specific variables that differ from one host to another in a certain site. <br /> It's an *optional* directory. |
| =vo.d/= | *VO-specific* | It contains a file per VO with the name format =vo_name=, but most VO settings are still placed in the =ig-site-info.def= template. For example, for "<code>lights.infn.it</code>": <pre># cat vo.d/lights.infn.it<br />SW_DIR=$VO_SW_DIR/lights<br />DEFAULT_SE=$SE_HOST<br />VOMS_SERVERS="vomss://voms2.cnaf.infn.it:8443/voms/lights.infn.it?/lights.infn.it"<br />VOMSES="lights.infn.it voms2.cnaf.infn.it 15013 /C=IT/O=INFN/OU=Host/L=CNAF/CN=voms2.cnaf.infn.it lights.infn.it"</pre> <p>It's an *optional* directory for "normal" VOs (like atlas, alice, babar), *mandatory* only for "fqdn-like" VOs. In case you support such VOs *you should copy* the vo.d/<vo.specific.file> structure under the same directory where <your-site-info.def> is.</p> |
| =group.d/= | *VO-specific* | It contains a file per VO with the name format =groups-<vo_name>.conf=. <br /> The file contains VO-specific groups and it replaces the former =<your-groups.conf>= file where all the VO groups were specified together. <br /> It's an *optional* directory. |

The optional folders are created to allow system administrators to organise their configurations in a more structured way.

*IMPORTANT NOTE:* <br /> If your site intends to support more VOs than the default ones, you should have a look at [[http://wiki.italiangrid.it/twiki/bin/view/IGIRelease/EnableVo][Whole site: How to enable a VO]], especially for the *enmr.eu* VO; once the configuration is finished you should follow "[[http://wiki.italiangrid.it/twiki/bin/view/IGIRelease/EnableVo#Extra_configuration][extra_configuration]]".

---++++ Default files

Variables that have a meaningful default value are distributed under the ''/opt/glite/yaim/defaults/'' directory and don't need to be changed unless you are an advanced user and you know what you are doing.
The files are:

   * ''ig-site-info.pre'';
   * ''ig-site-info.post'';
   * ''<node-type>.pre'', ''glite-<node-type>.pre'', ''ig-<node-type>.pre'';
   * ''<node-type>.post'', ''glite-<node-type>.post'', ''ig-<node-type>.post''.

In case you really need to change these variables, you don't need to edit these files: you can just set the same variable in site-info.def, since this will overwrite the values declared in the defaults files. See the configuration flow of YAIM in the next section.

---++++ Configuration flow

This is the order in which the different configuration files are sourced (''<confdir>'' refers to the path of the configuration folder, i.e. the path of ''<your-site-info.def>''):

   1. defaults ''.pre'' files in ''/opt/glite/yaim/defaults/'';
   1. ''<confdir>/<your-site-info.def>'';
   1. service-specific files in ''<confdir>/services/'';
   1. defaults ''.post'' files in ''/opt/glite/yaim/defaults/'';
   1. node-specific files in ''<confdir>/nodes/'';
   1. VO-specific files in ''<confdir>/vo.d/'';
   1. function files in ''/opt/glite/yaim/node-info.d/'';
   1. VO-specific group settings in ''<confdir>/group.d/*''.

---+++ Configuration variables

---++++ General

In this documentation all the *INFNGRID variables*, and some important *gLite variables*, that can be configured in ''<your-site-info.def>'' are listed in alphabetical order (links to gLite variables are also provided):

   * *C* = compulsory, if you are going to configure that type of node;
   * *O* = optional.

For the other *gLite variables* please consider the official "site-info.def" information in the "[[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400#site_info_def][YAIM 4 guide for sysadmins]]".
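The "later files win" behaviour of the sourcing order described above can be verified with a quick shell experiment; this is only a sketch using throwaway files in a temporary directory (file names are hypothetical, standing in for a =.pre= defaults file and your =site-info.def=):

```shell
# Sketch of YAIM's override behaviour: files sourced later win.
# We fake a .pre default and a site-info.def override in a temp
# directory instead of touching /opt/glite/yaim/defaults/.
tmp=$(mktemp -d)
echo 'CE_OS="ScientificSL"'     > "$tmp/node.pre"       # defaults (.pre)
echo 'CE_OS="RedHatEnterprise"' > "$tmp/site-info.def"  # site override

# Source in YAIM order: defaults first, then site-info.def.
. "$tmp/node.pre"
. "$tmp/site-info.def"

echo "effective CE_OS=$CE_OS"   # the site-info.def value wins
rm -rf "$tmp"
```

This is exactly why setting a variable in =<your-site-info.def>= is enough to override a default, without editing the files under ''/opt/glite/yaim/defaults/''.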
---+++++ Generic

   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#site_info_def][gLite site-info.def variables]]
   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#VO_related_variables][gLite VO-related variables]]
   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#site_info_pre][gLite site-info.pre variables]]
   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#site_info_post][gLite site-info.post variables]]

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''BASE_SW_DIR'' | O | Directory exported for SGM (it will be mounted by the WN on ''VO_SW_DIR'', see below). Comment it out if you have your own mounting tool. | 3.0.1-0 |
| ''CE_INT_HOST'' | O | If PRIVATE_NETWORK=true, uncomment and write the internal FQDN hostname of the CE. | 3.0.1-0 |
| ''CE_LOGCPU'' | *C* | Total number of logical CPUs in the system (i.e. number of cores/hyperthreaded CPUs). | 4.0.3-0 |
| ''CE_OS'' | *C* | OS type. Set using the output of the following command *run on your WNs*: <pre># lsb_release -i | cut -f2 ScientificSL</pre> More details here: "[[http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_the_OS_name][How to publish the OS name]]". | 3.0.1-0 |
| ''CE_OS_ARCH'' | *C* | OS architecture. Set using the output of the following command *run on your WNs*: <pre># uname -m i686</pre> More details here: "[[http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_my_machine_architecture][How to publish my machine architecture]]". | 3.0.1-0 |
| ''CE_OS_RELEASE'' | *C* | OS release. Set using the output of the following command *run on your WNs*: <pre># lsb_release -r | cut -f2 4.5</pre> More details here: "[[http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_the_OS_name][How to publish the OS name]]". | 3.0.1-0 |
| ''CE_OS_VERSION'' | *C* | OS version. Set using the output of the following command *run on your WNs*: <pre># lsb_release -c | cut -f2 Beryllium</pre> More details here: "[[http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_the_OS_name][How to publish the OS name]]". | 3.0.1-0 |
| ''CE_PHYSCPU'' | *C* | Total number of physical CPUs in the system (i.e. number of sockets). <BR/> *ATTENTION*: if you have more than one CE and shared WNs, set this variable to "0". <BR/> More details here: <BR/> "[[http://gstat.gridops.org/gstat/filter_help.html#GIISQuery_Usage][GIISQuery_Usage]]". | 4.0.3-0 |
| ''HOST_SW_DIR'' | O | Host exporting the directory for SGM (usually a CE or an SE). | 3.0.1-0 |
| ''INT_NET'' | O | If PRIVATE_NETWORK=true, uncomment and write your internal network. | 3.0.1-0 |
| ''INT_HOST_SW_DIR'' | O | If PRIVATE_NETWORK=true, uncomment and write the internal hostname of the host exporting the directory used to install the application software. | 3.0.1-0 |
| ''MY_INT_DOMAIN'' | O | If PRIVATE_NETWORK=true, uncomment and write the internal domain name. | 3.0.1-0 |
| ''NTP_HOSTS_IP'' | *C* | Space-separated list of the IP addresses of the NTP servers (preferably set a local NTP server and a public one, e.g. pool.ntp.org). | 3.0.1-0 |
| ''PRIVATE_NETWORK'' | O | Set PRIVATE_NETWORK=true to use WNs on a private network. | 3.0.1-0 |

---+++++ Batch server

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''BATCH_LOG_DIR'' | *C* | Path for the batch-system log files: "/var/torque" (for Torque); "/lsf_install_path/work/cluster_name/logdir" (for LSF). In case of a separate batch master (not on the same machine as the CE) PLEASE make sure that these directories are *READABLE* from the CEs. | 3.0.1-0 |
| ''BATCH_CONF_DIR'' | *C* | *Only for LSF*. Set the path where the ''lsf.conf'' file is located. | 3.0.1-0 |

---+++++ Amga

   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#AMGA][gLite AMGA variables]]

---+++++ BDII Site

Specific variables are in the configuration file:

   * ''/opt/glite/yaim/examples/siteinfo/services/glite-bdii_site''

Please copy and edit that file in your ''<confdir>/services'' directory (have a look at "[[#Configuration_files][Yaim configuration files]]").

   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#site_BDII][gLite BDII Site variables]]

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''BDII_<nodetype>_URL'' | *C* | If you are configuring a 3.1 node change the port to ''2170'' and ''mds-vo-name'' to ''resource''. For example: <pre>BDII_CE_URL="ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid"</pre> | 3.0.1-0 |
| ''CLOSE_SE_HOST'' | *C* | Set the close SE for the site. It is chosen among the site SEs. | >= 4.0.3-0 |
| ''SITE_BDII_HOST'' | *C* | Host name of the site BDII. | >= 3.0.1-0 |
| ''SITE_DESC'' | *C* | A long-format name for your site. | >= 4.0.4-0 |
| ''SITE_SECURITY_EMAIL'' | *C* | Contact email for security. | >= 4.0.4-0 |
| ''SITE_OTHER_GRID'' | *C* | Grid to which your site belongs, i.e. WLCG or EGEE. *Use:* <pre>SITE_OTHER_GRID="EGEE" </pre> | 4.0.4-0 |
| ''SITE_OTHER_EGEE_ROC'' | *C* | Agree within your ROC what the field should be. *Use:* <pre> SITE_OTHER_EGI_ROC="Italy" </pre> | 4.0.4-0 |

---+++++ BDII Top

   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#top_BDII][gLite BDII Top variables]]

By default, a BDII_top is configured to contact the GOC_DB.
If you want to modify this behaviour, after the configuration you have to edit the */etc/glite/glite-info-update-endpoints.conf* file using the following settings:

<pre>
[configuration]
EGI = FALSE
OSG = FALSE
manual = True
manual_file = /opt/glite/yaim/config/<file_containing_your_sites.conf>
output_file = /opt/glite/etc/gip/top-urls.conf
cache_dir = /var/cache/glite/glite-info-update-endpoints
</pre>

The file containing the list of sites "known" to the BDII_top is updated every hour by the cron job /etc/cron.hourly/glite-info-update-endpoints.

---+++++ CE LCG

*Configuration file:* Specific variables are in:

   * ''/opt/glite/yaim/examples/siteinfo/services/lcg-ce''
   * ''/opt/glite/yaim/examples/siteinfo/services/glite-mpi''
   * ''/opt/glite/yaim/examples/siteinfo/services/glite-mpi_ce''

Please copy and edit those files in your ''<confdir>/services'' directory (have a look at "[[#Configuration_files][Yaim configuration files]]").

   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#lcg_CE][gLite lcg-CE variables]]

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''CE_RUNTIMEENV'' | *C* | Add the following tags to the runtime environment: <BR/> * If your site belongs to INFN write ''<CITY>'' in capitals (ex. ''PADOVA''), otherwise ''<INSTITUTE>-<CITY>'' in capitals (ex. ''SNS-PISA'') <BR/> * Write the average value of SpecInt2000 and SpecFloat2000 for your WNs; please note that now a '_' is used as separator in place of '=' (see [[http://repo-pd.italiangrid.it/fileadmin/Certification/MetricHowTo.pdf]]): <BR/> <pre> SI00MeanPerCPU_<your_value> SF00MeanPerCPU_<your_value> </pre> | >= 3.0.1-0 |

---+++++ CREAM CE

*Configuration file:* Specific variables are in:

   * ''/opt/glite/yaim/examples/siteinfo/services/glite-creamce''

Please copy and edit that file in your ''<confdir>/services'' directory (have a look at "[[#Configuration_files][Yaim configuration files]]").
   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#cream_CE][gLite CREAM variables]]

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ACCESS_BY_DOMAIN | *C* | By default the cream db is installed on localhost and accessible only from localhost. By setting ACCESS_BY_DOMAIN to true, you allow the cream db to be accessed from all computers in your domain. | >= 4.0.5-4 |
| BATCH_CONF_DIR | *C* | LSF settings: path where lsf.conf is located. | >= 4.0.5-4 |
| BLAH_JOBID_PREFIX | *C* | This parameter sets the BLAH jobId prefix (it MUST be 6 chars long, begin with "cr" and terminate with '_'). It is important in case of more than one CE connecting to the same blparser; in this case it is better that each CREAM_CE has its own prefix. The default value for this variable (as specified in /opt/glite/yaim/defaults/glite-creamce.pre) is "cream_". | >= 4.0.5-4 |
| BLPARSER_HOST | *C* | Refers to the host where the BLAH Log Parser is running (i.e. the variable xxx_BLPserver of blah.config). On this machine the batch system logs must be accessible. | >= 4.0.5-4 |
| BLP_PORT | *C* | Refers to the port where the BLAH Log Parser is running (i.e. the variable xxx_BLPport of blah.config, GLITE_CE_BLPARSERxxx_PORT1 of blparser.conf). | >= 4.0.5-4 |
| CREAM_CE_STATE | *C* | This is the value to be published as GlueCEStateStatus instead of Production. The default value for this variable (as specified in /opt/glite/yaim/defaults/glite-creamce.pre) is "Special". | >= 4.0.5-4 |
| CREAM_DB_USER | *C* | Database user to access the cream database (different from root). Yaim will create this user and grant it the access rights. | >= 4.0.5-4 |
| CEMON_HOST | *O* | Cream CE host name. In a more complex layout, the ce-monitor can be installed on a host different from the cream-ce host. In that case you need to put the right ce-monitor hostname in this variable. | >= 4.0.5-4 |
| CREAM_PORT | *C* | Refers to the parser listening cream port (i.e. the variable GLITE_CE_BLPARSERxxx_CREAMPORT1 of blparser.conf). | >= 4.0.5-4 |
| JOB_MANAGER | *C* | The name of the job manager used by the gatekeeper. For CREAM please define: pbs, lsf, sge or condor. | >= 3.0.1-0 |

---+++++ DGAS services on CE (CE CREAM)

| *Variable name* | *Type* | *Description* | *>= ig-yaim version* |
| ''DGAS_ACCT_DIR'' | *C* | Path for the batch-system log files. <BR/> - for torque/pbs: <pre>DGAS_ACCT_DIR=/var/torque/server_priv/accounting</pre> - for LSF: <pre>DGAS_ACCT_DIR=lsf_install_path/work/cluster_name/logdir</pre> | 4.0.8-3 |
| ''DGAS_IGNORE_JOBS_LOGGED_BEFORE'' | O | Lower bound date for backward job processing. <BR/> The backward processing doesn't consider jobs prior to that date. <BR/> Use the format as in this example: <pre>DGAS_IGNORE_JOBS_LOGGED_BEFORE="2007-01-01"</pre> Default value: ''2008-01-01'' | 4.0.2-9 |
| ''DGAS_JOBS_TO_PROCESS'' | O | Specify the type of jobs which the CE has to process. <BR/> *ATTENTION*: set "all" on "the main CE" of the site (the one with the best hardware), "grid" on the others. <BR/> Default value: ''all'' | 4.0.2-9 |
| ''DGAS_HLR_RESOURCE'' | *C* | Reference Resource HLR hostname. There is no need to specify the port as in previous yaim versions (the default value "''56568''" will be set by yaim). | 4.0.2-9 |
| ''DGAS_USE_CE_HOSTNAME'' | O | *Only for LSF*. Main CE of the site. <BR/> *ATTENTION*: set this variable *only* in the case of a site with a single LSF Batch Master (no need to set this variable in the Torque case) in which there is more than one CE or local submission host (i.e. hosts from which you may submit jobs directly to the batch system). <BR/> In this case, the ''DGAS_USE_CE_HOSTNAME'' parameter must be set to the same value for all hosts sharing the "lrms", and this value can be arbitrarily chosen among these submitting hostnames (you may choose the best one). <BR/> Otherwise leave it commented. <BR/> For example: <pre>DGAS_USE_CE_HOSTNAME="my-ce.my-domain"</pre> | 4.0.2-9 |
| ''DGAS_MAIN_POLL_INTERVAL'' | O | UR Box parse interval, that is, if all jobs have been processed: seconds to wait before looking for new jobs in the UR Box. *Default* value: "5" | 4.0.11-4 |
| ''DGAS_JOB_PER_TIME_INTERVAL'' | O | Number of jobs to process at each processing step (several steps per mainPollInterval, depending on the number of jobs found in chocolateBox). *Default* value: "40" | 4.0.11-4 |
| ''DGAS_TIME_INTERVAL'' | O | Time in seconds to sleep after each processing step (if there are still jobs to process, otherwise start a new mainPollInterval). *Default* value: "3" | 4.0.11-4 |
| ''DGAS_QUEUE_POLL_INTERVAL'' | O | Garbage clean-up interval in seconds. *Default* value: "2" | 4.0.11-4 |
| ''DGAS_SYSTEM_LOG_LEVEL'' | O | The "systemLogLevel" parameter defines the log verbosity from 0 (no logging) to 9 (maximum verbosity). *Default* value: "7" | 4.0.11-4 |

---+++++ FTS

   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#FTS][gLite FTS variables]]

---+++++ GLEXEC_wn

   * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#GLEXEC_wn][gLite GLEXEC variables]]

*Functional description*:

   * The *gLExec system*, used in combination with the LCAS site-local authorization system and the LCMAPS local credential mapping service, provides an integrated solution for site access control to grid resources. With the introduction of gLExec, the submission model can be extended beyond the traditional gatekeeper models, where authorization and credential mapping only take place at the site's 'edge'. While retaining consistency in access control, gLExec allows a larger variety of job submission and management scenarios, including per-VO schedulers on the site and the late binding of workload to job slots in a scenario where gLExec is invoked by pilot jobs on the worker node.
But it is also the mapping ingredient of a new generation of resource access services, like CREAM. More details [[https://twiki.cern.ch/twiki/bin/view/EGEE/GLExec][here]]. ---+++++GRELC *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/ig-grelc'' Please copy and edit that file in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]). For any details please refer to the [[http://grelc.unile.it/home.php][GRELC Documentation]] ---+++++HLR *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/ig-hlr'' Please copy and edit that file in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]). | *Variable name* | *Type* | *Description* | *>= ig-yaim version* | | ''DGAS_HLR_ENABLE_URFORWARD'' | O | Enables data forwarding to one or more second-level HLRs. <BR/> Default value: ''yes'' | 4.0.2-9 | | ''DGAS_HLR_DB_NAME'' | O | Sets the name of the database for the HLR service. <BR/> Default value: ''hlr'' | 4.0.2-9 | | ''DGAS_HLR_DB_PASSWORD'' | **C** | Sets the password used to connect to the MySQL database server (as user ''$DGAS_HLR_DB_USER'' to the ''$DGAS_HLR_DB_NAME'' database; not set by default). | 4.0.2-9 | | ''DGAS_HLR_DB_USER'' | O | Sets the user for the MySQL connection to the ''$DGAS_HLR_DB_NAME'' database. <BR/> Default value: ''dgas'' | 4.0.2-9 | | ''DGAS_HLR_TMP_DB_NAME'' | O | Sets the name of the temporary database for the HLR service. <BR/> Default value: ''hlr_tmp'' | 4.0.2-9 | ---+++++HYDRA *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/ig-hydra_mysql'' Please copy and edit that file in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]).
* [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#HYDRA][gLite HYDRA variables]] *Example 1*: Three instances on the same server <pre> HYDRA_INSTANCES="1 2 3" HYDRA_DBNAME_1=hydra_db_1 HYDRA_DBUSER_1=hydra1 HYDRA_DBPASSWORD_1=xxxx HYDRA_CREATE_1=/enmr.eu/Role=NULL/Capability=NULL HYDRA_ADMIN_1=/enmr.eu/Role=admin HYDRA_DBNAME_2=hydra_db_2 HYDRA_DBUSER_2=hydra2 HYDRA_DBPASSWORD_2=xxxx HYDRA_CREATE_2=/enmr.eu/Role=NULL/Capability=NULL HYDRA_ADMIN_2=/enmr.eu/Role=admin HYDRA_DBNAME_3=hydra_db_3 HYDRA_DBUSER_3=hydra3 HYDRA_DBPASSWORD_3=xxxx HYDRA_CREATE_3=/enmr.eu/Role=NULL/Capability=NULL HYDRA_ADMIN_3=/enmr.eu/Role=admin </pre> *Example 2*: Single instance on a server, within a system of three separate HYDRA servers <pre> HYDRA_INSTANCES="1" HYDRA_DBNAME_1=hydra_db_1 HYDRA_DBUSER_1=hydra1 HYDRA_DBPASSWORD_1=xxxx HYDRA_CREATE_1=/enmr.eu/Role=NULL/Capability=NULL HYDRA_ADMIN_1=/enmr.eu/Role=admin HYDRA_PEERS="2 3" HYDRA_CREATE_2=/enmr.eu/Role=NULL/Capability=NULL HYDRA_ID_2=1 HYDRA_HOST_2=hydra.host.fqn HYDRA_CREATE_3=/enmr.eu/Role=NULL/Capability=NULL HYDRA_ID_3=1 HYDRA_HOST_3=hydra.host.fqn2 </pre> The "HYDRA_PEERS" section contains information about the servers installed on remote machines that have to work together with the server you are installing. Please note that HYDRA_ID is the remote hydra server "instance id" specified in its site-info configuration file. If you have only one installation per machine, the HYDRA_IDs are most probably all "1". Ask the remote servers' site admins. After the Hydra installation and configuration, you will be able to get service information via an LDAP query to <hydra_hostname>:2170, so you will need to *register* it in your *site-BDII*. *All* the *Hydra services* must be found in a *top-BDII* to work together.
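As a quick sanity check after configuration, the service publication can be queried directly with the standard OpenLDAP client. This is only a sketch: the hostname is a placeholder, and the resource-BDII base DN shown is the one commonly used by gLite 3.2 information providers, so verify it for your installation:
<pre>
# Query the Hydra resource BDII on port 2170 (hostname is a placeholder)
ldapsearch -x -H ldap://hydra.host.fqn:2170 -b "mds-vo-name=resource,o=grid" \
    '(objectClass=GlueService)' GlueServiceEndpoint GlueServiceType
</pre>
If the entries returned here do not also appear in your site-BDII and in a top-BDII, the registration step described above is still missing.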
---+++++LB * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#LB][gLite LB variables]] ---+++++LFC * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#LFC][gLite LFC variables]] | *Variable name* | *Type* | *Description* | *>= ig-yaim version* | | ''LFC_READONLY'' | O | Set this variable to "yes" if your LFC server is a *replica* of a "central" one and has to be read-only for users. <BR/> Default value: ''no'' | 4.0.4-2 | ---+++++MyProxy * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#PX][gLite MyProxy variables]] ---+++++SE dCache *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/glite-dcache'' Please copy and edit that file in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]). * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#dCache][gLite SE dCache variables]] Please pay attention to the configuration of a dCache node and follow the instructions at: * [[https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400#Notes_on_configuring_the_dCache][Notes_on_configuring_the_dCache]] * [[http://trac.dcache.org/projects/dcache/wiki/dCacheConfigure.sh][dCacheConfigure.sh]] * [[http://trac.dcache.org/projects/dcache/wiki/manuals/AdvancedStieInfoDefForYaim][Advanced Site-Info.Def For Yaim]] ---+++++SE DPM *Configuration file:* Specific variables are in (copy only the one you need): * ''/opt/glite/yaim/examples/siteinfo/services/glite-se_dpm_disk'' * ''/opt/glite/yaim/examples/siteinfo/services/glite-se_dpm_mysql'' * ''/opt/glite/yaim/examples/siteinfo/services/glite-se_dpm_oracle'' Please copy and edit the file you need in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]).
* [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#DPM][gLite SE DPM variables]] ---+++++SE StoRM Backend *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/ig-se_storm_backend'' Please copy and edit that file in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]). | *Variable name* | *Type* | *Description* | *>= ig-yaim version* | | ''STORM_ACLMODE'' | O | ACL enforcing mechanism (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_ACLMODE'' variable. <BR/> Available values: ''[aot jit]'' (use ''aot'' for WLCG experiments) <BR/> Default value: ''aot'' | 4.0.4-0 | | ''STORM_AUTH'' | O | Authorization mechanism (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_AUTH'' variable. <BR/> Available values: ''[permit-all deny-all <filename>]'' <BR/> Default value: ''permit-all'' | 4.0.7-0 | | ''STORM_BACKEND_HOST'' <BR/> (was ''STORM_HOST'') | *C* | In ig-site-info.def template. <BR/> Host name of the StoRM Backend server. | 4.0.9-0 | | ''STORM_CERT_DIR'' <BR/> (was ''STORM_HOSTCERT'' and ''STORM_HOSTKEY'') | O | Host certificate directory for the StoRM Backend service. <BR/> Default value: ''/etc/grid-security/${STORM_USER}'' | 4.0.9-0 | | ''STORM_CKSUM_ALGORITHM'' | O | Algorithm used by checksum agents. <BR/> Available values: ''[Adler32 CRC32 MD2 MD5 SHA-1 SHA-256 SHA-384 SHA-512]'' <BR/> Default value: ''Adler32'' | 4.0.9-0 | | ''STORM_CKSUM_SERVICE_LIST'' | O | Checksum agents list.
<BR/> *ATTENTION*: this variable defines a space-separated list of comma-separated triples ''hostname,status_port,service_port'', e.g.: <pre>STORM_CKSUM_SERVICE_LIST="host1,status_port1,service_port1 host2,status_port2,service_port2"</pre> Default value: ''${STORM_BACKEND_HOST},9995,9996'' | 4.0.9-0 | | ''STORM_CKSUM_SUPPORT'' | O | Enables support for checksum agents. <BR/> Available values: ''[true false]'' <BR/> Default value: ''false'' | 4.0.9-0 | | ''STORM_DEFAULT_ROOT'' | *C* | In ig-site-info.def template. <BR/> Default directory for Storage Areas. | 4.0.2-9 | | ''STORM_DB_HOST'' | O | Host for database connection. <BR/> Default value: ''$STORM_BACKEND_HOST'' | 4.0.2-9 | | ''STORM_DB_PWD'' | *C* | Password for database connection. | 4.0.2-9 | | ''STORM_DB_USER'' | O | User for database connection. <BR/> Default value: ''storm'' | 4.0.2-9 | | ''STORM_FRONTEND_HOST_LIST'' | O | StoRM Frontend service host list: there can be more than one SRM endpoint, on virtual hosts different from STORM_BACKEND_HOST (e.g. a dynamic DNS alias for multiple StoRM Frontends). <BR/> Default value: ''$STORM_BACKEND_HOST'' | 4.0.2-9 | | ''STORM_FRONTEND_PATH'' | O | StoRM Frontend service path. <BR/> Default value: ''/srm/managerv2'' | 4.0.2-9 | | ''STORM_FRONTEND_PORT'' <BR/> (was ''STORM_PORT'') | O | StoRM Frontend service port. <BR/> Default value: ''8444'' | 4.0.2-9 | | ''STORM_FRONTEND_PUBLIC_HOST'' <BR/> (was ''STORM_ENDPOINT'') | O | StoRM Frontend service public host. <BR/> Default value: ''$STORM_BACKEND_HOST'' | 4.0.2-9 | | ''STORM_FSTYPE'' | O | File System Type (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_FSTYPE'' variable. <BR/> Available values: ''[posixfs xfs gpfs]'' <BR/> Default value: ''posixfs'' | 4.0.4-0 | | ''STORM_GRIDFTP_POOL_LIST'' | O | GRIDFTP servers pool list (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_GRIDFTP_POOL_LIST'' variable.
<BR/> *ATTENTION*: this variable defines a space-separated list of comma-separated pairs ''hostname,weight'', e.g.: <pre>STORM_GRIDFTP_POOL_LIST="host1,weight1 host2,weight2 host3,weight3"</pre> The weight is in the 0-100 range; if not specified, it defaults to 100. <BR/> Default value: ''$STORM_BACKEND_HOST'' | 4.0.7-0 | | ''STORM_GRIDFTP_POOL_STRATEGY'' | O | Load balancing strategy for the GRIDFTP servers pool (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_GRIDFTP_POOL_STRATEGY'' variable. <BR/> Available values: ''[round-robin random weight metric-1 metric-2]'' <BR/> Default value: ''round-robin'' | 4.0.7-0 | | ''STORM_INFO_FILE_SUPPORT'' <BR/> ''STORM_INFO_GRIDFTP_SUPPORT'' <BR/> ''STORM_INFO_RFIO_SUPPORT'' <BR/> ''STORM_INFO_ROOT_SUPPORT'' | O | If set to ''false'', these variables prevent the corresponding protocol from being published by the StoRM gip. <BR/> Available values: ''[true false]'' <BR/> Default value: ''true'' | 4.0.2-9 | | ''STORM_INFO_OVERWRITE'' | O | This parameter tells YAIM to overwrite the ''static-file-StoRM.ldif'' configuration file. <BR/> Available values: ''[true false]'' <BR/> Default value: ''true'' | 4.0.2-9 | | ''STORM_NAMESPACE_OVERWRITE'' | O | This parameter tells YAIM to overwrite the ''namespace.xml'' configuration file. <BR/> Available values: ''[true false]'' <BR/> Default value: ''true'' | 4.0.2-9 | | ''STORM_PROXY_HOME'' | O | Directory used to exchange proxies. <BR/> Default value: ''/opt/storm/backend/tmp'' | 4.0.2-9 | | ''STORM_RFIO_HOST'' | O | Rfio server (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_RFIO_HOST'' variable. <BR/> Default value: ''$STORM_BACKEND_HOST'' | 4.0.9-0 | | ''STORM_ROOT_HOST'' | O | Root server (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_ROOT_HOST'' variable.
<BR/> Default value: ''$STORM_BACKEND_HOST'' | 4.0.9-0 | | ''STORM_SIZE_LIMIT'' | O | Limits the maximum available space on the Storage Area (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_SIZE_LIMIT'' variable. <BR/> Available values: ''[true false]'' <BR/> Default value: ''true'' | 4.0.7-0 | | ''STORM_STORAGEAREA_LIST'' | O | List of supported Storage Areas. <BR/> Usually at least one Storage Area should be created for each VO specified in ''$VOS''. <BR/> Default value: ''$VOS'' | 4.0.2-9 | | ''STORM_STORAGECLASS'' | O | Storage Class type (default value for all Storage Areas). <BR/> Note: you may change the settings for each SA acting on the ''$STORM_<SA>_STORAGECLASS'' variable. <BR/> Available values: ''[T0D1 T1D0 T1D1]'' - No default value. | 4.0.9-0 | | ''STORM_SURL_ENDPOINT_LIST'' | O | StoRM SURL endpoint list. <BR/> Default value: ''srm://${STORM_FRONTEND_PUBLIC_HOST}:${STORM_FRONTEND_PORT}${STORM_FRONTEND_PATH}'' | 4.0.9-0 | | ''STORM_USER'' | O | Service user. <BR/> Default value: ''storm'' | 4.0.2-9 | Then, *for each* ''<SA>'' "Storage Area" listed in the ''STORM_STORAGEAREA_LIST'' variable you have to edit the following compulsory variables: *NOTE:* ''<SA>'' has to be written in capital letters, as the other ''<site-info.def>'' variables, otherwise default values will be used! <BR/> *ATTENTION*: for DNS-like names (containing special characters such as "." (dot) or "-" (minus)) you have to remove the "." and "-": e.g. for ''STORM_STORAGEAREA_LIST="enmr.eu"'' the ''<SA>'' should be ''ENMREU'', like: <pre>STORM_ENMREU_VONAME=enmr.eu</pre> | *Variable name* | *Type* | *Description* | *>= ig-yaim version* | | ''STORM_<SA>_VONAME'' | *C* | Name of the VO that will use the Storage Area (use the complete name, e.g. "''lights.infn.it''").
| 4.0.2-9 | and optionally the following variables: | *Variable name* | *Type* | *Description* | *>= ig-yaim version* | | ''STORM_<SA>_ACCESSPOINT'' | O | Path exposed by the SRM in the SURL. <BR/> Default value: ''/<sa>'' | 4.0.2-9 | | ''STORM_<SA>_ACLMODE'' | O | See ''STORM_ACLMODE'' definition. <BR/> Default value: ''$STORM_ACLMODE'' | 4.0.2-9 | | ''STORM_<SA>_AUTH'' | O | See ''STORM_AUTH'' definition. <BR/> Default value: ''$STORM_AUTH'' | 4.0.7-0 | | ''STORM_<SA>_DEFAULT_ACL_LIST'' | O | A list of ACL entries that specifies a set of local groups with corresponding permissions (R,W,RW) using the following syntax: <pre>groupname1:permissions1 [groupname2:permissions2 [...]]</pre> | 4.0.7-0 | | ''STORM_<SA>_FILE_SUPPORT'' <BR/> ''STORM_<SA>_GRIDFTP_SUPPORT'' <BR/> ''STORM_<SA>_RFIO_SUPPORT'' <BR/> ''STORM_<SA>_ROOT_SUPPORT'' | O | Enable the corresponding protocol. <BR/> Default value: ''$STORM_INFO_<PROTOCOL>_SUPPORT'' | 4.0.2-9 | | ''STORM_<SA>_FSTYPE'' | O | See ''STORM_FSTYPE'' definition. <BR/> Default value: ''$STORM_FSTYPE'' | 4.0.2-9 | | ''STORM_<SA>_GRIDFTP_POOL_LIST'' | O | See ''STORM_GRIDFTP_POOL_LIST'' definition. <BR/> Default value: ''$STORM_GRIDFTP_POOL_LIST'' | 4.0.7-0 | | ''STORM_<SA>_GRIDFTP_POOL_STRATEGY'' | O | See ''STORM_GRIDFTP_POOL_STRATEGY'' definition. <BR/> Default value: ''$STORM_GRIDFTP_POOL_STRATEGY'' | 4.0.7-0 | | ''STORM_<SA>_QUOTA'' | O | Enables quota management for the Storage Area; it works only on a GPFS filesystem. <BR/> Available values: ''[true false]'' <BR/> Default value: ''false'' | 4.0.2-9 | | ''STORM_<SA>_QUOTA_DEVICE'' | O | GPFS device on which the quota is enabled. It is mandatory if the ''STORM_<SA>_QUOTA'' variable is set. <BR/> No default value. | 4.0.9-0 | | ''STORM_<SA>_QUOTA_USER'' <BR/> ''STORM_<SA>_QUOTA_GROUP'' <BR/> ''STORM_<SA>_QUOTA_FILESET'' | O | GPFS quota scope.
Only one of the three will be used, with priority in this order: USER, then GROUP, then FILESET. <BR/> No default value. | 4.0.9-0 | | ''STORM_<SA>_RFIO_HOST'' | O | See ''STORM_RFIO_HOST'' definition. <BR/> Default value: ''$STORM_RFIO_HOST'' | 4.0.9-0 | | ''STORM_<SA>_ROOT'' | O | Physical storage path for the VO. <BR/> Default value: ''$STORM_DEFAULT_ROOT/<sa>'' | 4.0.2-9 | | ''STORM_<SA>_ROOT_HOST'' | O | See ''STORM_ROOT_HOST'' definition. <BR/> Default value: ''$STORM_ROOT_HOST'' | 4.0.9-0 | | ''STORM_<SA>_SIZE_LIMIT'' | O | See ''STORM_SIZE_LIMIT'' definition. <BR/> Default value: ''$STORM_SIZE_LIMIT'' | 4.0.7-0 | | ''STORM_<SA>_STORAGECLASS'' | O | See ''STORM_STORAGECLASS'' definition. <BR/> Available values: ''[T0D1 T1D0 T1D1 <null>]'' <BR/> No default value. | 4.0.2-9 | | ''STORM_<SA>_TOKEN'' | O | Storage Area token, e.g.: ''LHCb_RAW'', ''INFNGRID_DISK'', etc. <BR/> No default value. | 4.0.2-9 | ---+++++SE StoRM Frontend *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/ig-se_storm_frontend'' Please copy and edit that file in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]). | *Variable name* | *Type* | *Description* | *>= ig-yaim version* | | ''STORM_BACKEND_HOST'' <BR/> (was ''STORM_HOST'') | *C* | In ig-site-info.def template. Host name of the StoRM Backend server. | 4.0.9-0 | | ''STORM_CERT_DIR'' <BR/> (was ''STORM_HOSTCERT'' and ''STORM_HOSTKEY'') | O | Host certificate directory for the StoRM Frontend service. <BR/> Default value: ''/etc/grid-security/${STORM_USER}'' | 4.0.9-0 | | ''STORM_DB_HOST'' | O | Host for database connection. <BR/> Default value: ''$STORM_BACKEND_HOST'' | 4.0.2-9 | | ''STORM_DB_PWD'' | *C* | Password for database connection. | 4.0.2-9 | | ''STORM_DB_USER'' | O | User for database connection.
<BR/> Default value: ''storm'' | 4.0.2-9 | | ''STORM_FE_BE_XMLRPC_HOST'' | O | StoRM Backend hostname. <BR/> Default value: ''localhost'' | 4.0.7-0 | | ''STORM_FE_BE_XMLRPC_PATH'' | O | StoRM Backend XMLRPC server path. <BR/> Default value: ''/RPC2'' | 4.0.7-0 | | ''STORM_FE_BE_XMLRPC_PORT'' | O | StoRM Backend XMLRPC server port. <BR/> Default value: ''8080'' | 4.0.7-0 | | ''STORM_FE_DISABLE_MAPPING'' | O | Disable the gridmapfile check on the client DN. <BR/> Available values: ''[true false]'' <BR/> Default value: ''false'' | 4.0.7-0 | | ''STORM_FE_DISABLE_VOMSCHECK'' | O | Disable the gridmapfile check on the client VOMS attributes. <BR/> Available values: ''[true false]'' <BR/> Default value: ''false'' | 4.0.7-0 | | ''STORM_FE_GSOAP_MAXPENDING'' | O | Maximum number of requests pending in the GSOAP queue. <BR/> Default value: ''2000'' | 4.0.7-0 | | ''STORM_FE_LOG_LEVEL'' | O | StoRM Frontend log level. <BR/> Available values: ''[KNOWN ERROR WARNING INFO DEBUG DEBUG2]'' <BR/> Default value: ''INFO'' | 4.0.7-0 | | ''STORM_FE_THREADS_MAXPENDING'' | O | Maximum number of requests pending in the Threads queue. <BR/> Default value: ''200'' | 4.0.7-0 | | ''STORM_FE_THREADS_NUMBER'' | O | Maximum number of threads to manage users' requests. <BR/> Default value: ''50'' | 4.0.7-0 | | ''STORM_FE_WSDL'' | O | WSDL to be returned to a GET request. <BR/> Default value: ''/opt/storm/frontend/wsdl/srm.v2.2.wsdl'' | 4.0.7-0 | | ''STORM_FRONTEND_OVERWRITE'' | O | This parameter tells YAIM to overwrite the ''storm-frontend.conf'' configuration file. <BR/> Available values: ''[true false]'' <BR/> Default value: ''true'' | 4.0.2-9 | | ''STORM_PROXY_HOME'' | O | Directory used to exchange proxies. <BR/> Default value: ''/opt/storm/backend/tmp'' | 4.0.7-0 | | ''STORM_USER'' | O | User the StoRM service will run as.
<BR/> Default value: ''storm'' | 4.0.2-9 | ---+++++UI * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#UI][gLite UI variables]] ---+++++VOBOX *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/glite-vobox'' Please copy and edit that file in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]). * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#VOBOX][gLite VOBOX variables]] ---+++++WMS *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/glite-wms'' Please copy and edit that file in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]). * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#WMS][gLite WMS variables]] ---+++++WN *Configuration file:* Specific variables are in: * ''/opt/glite/yaim/examples/siteinfo/services/glite-mpi'' * ''/opt/glite/yaim/examples/siteinfo/services/glite-mpi_wn'' Please copy and edit those files in your ''<confdir>/services'' directory (have a look at [[#Configuration_files][Yaim configuration files]]). * [[https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#WN][gLite WN variables]] | *Variable name* | *Type* | *Description* | *>= ig-yaim version* | | ''GRIDICE_HIDE_USER_DN'' | *C* | Publish user certificate subject. | 3.0.1-0 | | ''GRIDICE_MON_WN'' | *C* | Enable monitoring on the WN of generic processes/daemons using GridICE. | 3.0.1-0 |
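Both GridICE variables above are compulsory and are set in ''<site-info.def>''; they are presumed to take ''yes''/''no'' values, as with other YAIM toggles. A minimal sketch (the values shown are illustrative choices, not documented defaults; check your site policy on publishing user DNs):
<pre>
# Illustrative WN GridICE settings in <site-info.def> (values are assumptions)
GRIDICE_HIDE_USER_DN=no   # controls publication of the user certificate subject
GRIDICE_MON_WN=yes        # GridICE monitoring of generic processes/daemons on the WN
</pre>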
Topic revision: r12 - 2012-02-02 - SergioTraldi