Profiles | INSTALLATION Metapackages | CONFIGURATION Nodetypes | Release date | Required in a grid site? |
---|---|---|---|---|
SGE_utils | ''ig_SGE_utils'' | ''ig_SGE_utils'' | | *NO* |
VOMS_mysql | ''ig_VOMS_mysql'' | ''ig_VOMS_mysql'' | | *NO* |
BDII Site | ''ig_BDII_site'' | ''ig_BDII_site'' | | *YES* |
BDII Top | ''ig_BDII_top'' | ''ig_BDII_top'' | | *NO* |
CREAM CE | ''ig_CREAM'' ''ig_CREAM_LSF'' ''ig_CREAM_torque'' | ''ig_CREAM'' ''ig_CREAM_LSF'' ''ig_CREAM_torque'' | | *YES* |
SE dCache | ''ig_SE_dcache_info'' ''ig_SE_dcache_nameserver_chimera'' ''ig_SE_dcache_pool'' ''ig_SE_dcache_srm'' | ''ig_SE_dcache_info'' ''ig_SE_dcache_nameserver_chimera'' ''ig_SE_dcache_pool'' ''ig_SE_dcache_srm'' | | *NO* |
VOMS_oracle | ''ig_VOMS_oracle'' | ''ig_VOMS_oracle'' | | *NO* |
GLEXEC_wn | ''ig_GLEXEC_wn'' | ''ig_GLEXEC_wn'' | | *NO* |
FTA_oracle | ''ig_FTA_oracle'' | ''ig_FTA_oracle'' | | *NO* |
FTM* | ''ig_FTM'' | ''ig_FTM'' | | *NO* |
FTS_oracle | ''ig_FTS_oracle'' | ''ig_FTS_oracle'' | | *NO* |
SE DPM | ''ig_SE_dpm_mysql'' ''ig_SE_dpm_disk'' | ''ig_SE_dpm_mysql'' ''ig_SE_dpm_disk'' | | *NO* |
VOBOX | ''ig_VOBOX'' | ''ig_VOBOX'' | | *NO* |
LFC | ''ig_LFC_mysql'' ''ig_LFC_oracle'' | ''ig_LFC_mysql'' ''ig_LFC_oracle'' | 08/02/2010 | *NO* |
SE STORM | ''ig_SE_storm_backend'' ''ig_SE_storm_frontend'' ''ig_SE_storm_checksum'' ''ig_SE_storm_gridftp'' | ''ig_SE_storm_backend'' ''ig_SE_storm_frontend'' ''ig_SE_storm_checksum'' ''ig_SE_storm_gridftp'' | | *NO* |
WN | ''ig_WN'' ''ig_WN_noafs'' ''ig_WN_LSF'' ''ig_WN_LSF_noafs'' ''ig_WN_torque'' ''ig_WN_torque_noafs'' | ''ig_WN'' ''ig_WN_noafs'' ''ig_WN_LSF'' ''ig_WN_LSF_noafs'' ''ig_WN_torque'' ''ig_WN_torque_noafs'' | | *YES* |
ARGUS | ''ig_ARGUS'' | ''ig_ARGUS_server'' or ''ARGUS_server'' | | *NO* |
UI | ''ig_UI'' ''ig_UI_noafs'' | ''ig_UI'' ''ig_UI_noafs'' | | *NO*, but recommended |
LB | ''ig_LB'' | ''ig_LB'' | | *NO* |
HLR | ''ig_HLR'' | ''ig_HLR'' | | *NO* |
Check that NTP, cron and logrotate are installed; otherwise install them. Then check the hostname:

hostname -f

It should print the fully qualified domain name (e.g. prod-ce.mydomain.it). Correct your network configuration if it prints only the hostname without the domain. If you are installing WNs on a private network, the command must return the external FQDN for the CE and the SE (e.g. prod-ce.mydomain.it) and the internal FQDN for the WNs (e.g. node001.myintdomain).
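The checks above can be scripted; a minimal sketch (the is_fqdn helper is our own, not part of the middleware, and the tool names assume standard SL packaging):

```shell
# is_fqdn: succeeds if the given hostname contains a domain part
# (a rough heuristic: at least one dot). Helper name is hypothetical.
is_fqdn() {
  case "$1" in
    *.*) return 0 ;;
    *)   return 1 ;;
  esac
}

# Check the prerequisite tools and the hostname on this node
for tool in ntpd crond logrotate; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
if is_fqdn "$(hostname -f)"; then
  echo "hostname OK"
else
  echo "hostname is not fully qualified: fix your network configuration"
fi
```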
Disable the EPEL repository if present:

# mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.disabled

otherwise you'll get the following error:

Missing Dependency: libcares.so.0()(64bit) is needed by package glite-security-gss-2.0.0-3.sl5.x86_64 (glite-generic_sl5_x86_64_release)

because of the presence of a newer version of c-ares (1.4.0-1.el5); the middleware needs 1.3.0-4.sl5! Remember to enable the dag.repo: with a standard installation of SL5 it is possible that you already have the DAG repository. Check whether it is enabled and, if not, enable it:

cat /etc/yum.repos.d/dag.repo
....
enabled=1
....
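Both repository adjustments can be done as follows; a sketch shown on throwaway copies under /tmp so it can be tried safely (on a real node operate as root on /etc/yum.repos.d):

```shell
# Work on stand-in copies of the repo files (assumption: on a real SL5
# node these live in /etc/yum.repos.d).
REPODIR=/tmp/yum.repos.d.demo
mkdir -p "$REPODIR"
printf '[epel]\nenabled=1\n' > "$REPODIR/epel.repo"
printf '[dag]\nenabled=0\n' > "$REPODIR/dag.repo"

# Disable EPEL by renaming the file so yum ignores it
mv "$REPODIR/epel.repo" "$REPODIR/epel.repo.disabled"

# Enable the DAG repository in place
sed -i 's/^enabled=0/enabled=1/' "$REPODIR/dag.repo"
grep '^enabled=' "$REPODIR/dag.repo"
```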
Common repositories *x86_64* |
---|
dag.repo |
ig.repo |
egi-trustanchors.repo |
yum clean all
yum install ca-policy-egi-core
yum groupinstall <WN_profile>

where <WN_profile> could be one of: ig_WN, ig_WN_noafs, ig_WN_torque, ig_WN_torque_noafs, ig_WN_LSF, ig_WN_LSF_noafs, or

yum groupinstall <UI_profile>

where <UI_profile> could be one of: ig_UI, ig_UI_noafs. If you are installing any other profile use:

yum install <metapackage>

where <metapackage> is one of those reported in the table above (Metapackages column).
IMPORTANT NOTE:
When you are installing ig_CREAM, ig_CREAM_LSF or ig_CREAM_torque, add an exclude line to the .repo file (e.g. in /etc/yum.repos.d/slc5-updates.repo or /etc/yum.repos.d/sl5-security.repo):

exclude=c-ares

Install the first CE together with the batch server software:

yum install ig_CREAM_torque

Install the other secondary CEs, without batch server software, as follows:

yum install ig_CREAM glite-TORQUE_utils

If the batch server runs on a separate machine, install it there with:

yum install glite-TORQUE_server glite-TORQUE_utils

and for the CEs (creamCEs), add the glite-torque_utils.repo and install:

yum install ig_CREAM glite-TORQUE_utils

Please pay attention also to the configuration for this special case.
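Adding the exclude line can also be scripted idempotently; a sketch on a /tmp stand-in file (on a real node edit e.g. /etc/yum.repos.d/sl5-security.repo):

```shell
# Append exclude=c-ares to a repo file, but only if not already present.
# The file under /tmp is a placeholder for the real repo file.
REPO=/tmp/sl5-security.repo.demo
printf '[sl-security]\nenabled=1\n' > "$REPO"
grep -q '^exclude=c-ares$' "$REPO" || echo 'exclude=c-ares' >> "$REPO"
grep '^exclude=' "$REPO"
```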
Create a configuration directory, <confdir>, in a safe place which is not world readable. This directory should contain:
File | Scope | Details |
---|---|---|
<your-site-info.def> | whole-site | List of configuration variables in the format of key-value pairs. It's a mandatory file and it's a parameter passed to the ig_yaim command. IMPORTANT: always check that your <your-site-info.def> is up-to-date by comparing it with the latest /opt/glite/yaim/examples/siteinfo/ig-site-info.def template deployed with ig-yaim, and merge in any differences you find. For example you may use vimdiff: vimdiff /opt/glite/yaim/examples/siteinfo/ig-site-info.def <confdir>/<your-site-info.def> |
<your-wn-list.conf> | whole-site | Worker node list, one hostname.domainname per row. It's a mandatory file, defined by the WN_LIST variable in <your-site-info.def>. |
<your-users.conf> | whole-site | Pool account user mapping. It's a mandatory file, defined by the USERS_CONF variable in <your-site-info.def>. IMPORTANT: you may create <your-users.conf> starting from the /opt/glite/yaim/examples/ig-users.conf template deployed with ig-yaim, but you will probably have to fill it in according to your site policy on uids/gids. We suggest to proceed as explained here: "Whole site: How to create local users.conf and configure users". |
<your-groups.conf> | whole-site | VOMS group mapping. It's a mandatory file, defined by the GROUPS_CONF variable in <your-site-info.def>. IMPORTANT: you may create <your-groups.conf> starting from the /opt/glite/yaim/examples/ig-groups.conf template deployed with ig-yaim. |
Directory | Scope | Details |
---|---|---|
services/ | service-specific | It contains one file per nodetype, with the name format ig-node-type. The file contains a list of configuration variables specific to that nodetype. Each yaim module distributes a configuration file in /opt/glite/yaim/examples/siteinfo/services/[ig or glite]-node-type. It's a mandatory directory if required by the profile, and you should copy it under the same directory where <your-site-info.def> is. |
nodes/ | host-specific | It contains one file per host, with the name format hostname.domainname. The file contains host-specific variables that differ from one host to another within a site. It's an optional directory. |
vo.d/ | VO-specific | It contains one file per VO, with the name format vo_name, but most VO settings are still placed in the ig-site-info.def template. For example, for "lights.infn.it": # cat vo.d/lights.infn.it It's an optional directory for "normal" VOs (like atlas, alice, babar), mandatory only for "fqdn-like" VOs. In case you support such VOs you should copy the structure vo.d/<vo.specific.file> under the same directory where <your-site-info.def> is. |
group.d/ | VO-specific | It contains one file per VO, with the name format groups-<vo_name>.conf. The file contains VO-specific groups and replaces the former <your-groups.conf> file where all the VO groups were specified together. It's an optional directory. |
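The <confdir> layout described in the tables above can be sketched as follows (the /tmp path and the my-* file names are placeholders; pick your own safe, non-world-readable location):

```shell
# Create the skeleton of a YAIM configuration directory.
CONFDIR=/tmp/siteinfo.demo
mkdir -p "$CONFDIR/services" "$CONFDIR/nodes" "$CONFDIR/vo.d" "$CONFDIR/group.d"
touch "$CONFDIR/my-site-info.def" "$CONFDIR/my-wn-list.conf" \
      "$CONFDIR/my-users.conf" "$CONFDIR/my-groups.conf"
chmod -R o-rwx "$CONFDIR"   # keep it out of reach of other users
ls "$CONFDIR"
```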
Variable name | Type | Description | >= ig-yaim version |
---|---|---|---|
''BASE_SW_DIR'' | O | Directory exported for SGM (it will be mounted by the WN on ''VO_SW_DIR'', see below). Comment it out if you have your own mounting tool. | 3.0.1-0 |
''CE_INT_HOST'' | O | If PRIVATE_NETWORK=true, uncomment and set the internal FQDN of the CE. | 3.0.1-0 |
''CE_LOGCPU'' | C | Total number of logical CPUs in the system (i.e. number of cores/hyperthreaded CPUs). | 4.0.3-0 |
''CE_OS'' | C | OS type. Set using the output of the following command run on your WNs: # lsb_release -i | cut -f2 (e.g. ScientificSL). More details here: "How to publish the OS name". | 3.0.1-0 |
''CE_OS_ARCH'' | C | OS architecture. Set using the output of the following command run on your WNs: # uname -m (e.g. i686). More details here: "How to publish my machine architecture". | 3.0.1-0 |
''CE_OS_RELEASE'' | C | OS release. Set using the output of the following command run on your WNs: # lsb_release -r | cut -f2 (e.g. 4.5). More details here: "How to publish the OS name". | 3.0.1-0 |
''CE_OS_VERSION'' | C | OS version. Set using the output of the following command run on your WNs: # lsb_release -c | cut -f2 (e.g. Beryllium). More details here: "How to publish the OS name". | 3.0.1-0 |
''CE_PHYSCPU'' | C | Total number of physical CPUs in the system (i.e. number of sockets). ATTENTION: if you have more than one CE and shared WNs, set this variable to "0". More details here: "GIISQuery_Usage". | 4.0.3-0 |
''HOST_SW_DIR'' | O | Host exporting the directory for SGM (usually a CE or an SE). | 3.0.1-0 |
''INT_NET'' | O | If PRIVATE_NETWORK=true, uncomment and set your internal network. | 3.0.1-0 |
''INT_HOST_SW_DIR'' | O | If PRIVATE_NETWORK=true, uncomment and set the internal hostname of the host exporting the directory used to install the application software. | 3.0.1-0 |
''MY_INT_DOMAIN'' | O | If PRIVATE_NETWORK=true, uncomment and set the internal domain name. | 3.0.1-0 |
''NTP_HOSTS_IP'' | C | Space-separated list of the IP addresses of the NTP servers (preferably set a local NTP server and a public one, e.g. pool.ntp.org). | 3.0.1-0 |
''PRIVATE_NETWORK'' | O | Set PRIVATE_NETWORK=true to use WNs on a private network. | 3.0.1-0 |
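CE_LOGCPU and CE_PHYSCPU can be derived from /proc/cpuinfo on a WN; a sketch shown on a sample file (the cpuinfo_counts helper is our own, and the counting heuristic assumes the standard Linux /proc/cpuinfo format):

```shell
# Count logical CPUs ("processor" lines) and physical sockets
# (unique "physical id" values) in a cpuinfo-style file.
cpuinfo_counts() {
  local log phys
  log=$(grep -c '^processor' "$1")
  phys=$(grep '^physical id' "$1" | sort -u | grep -c '^')
  echo "CE_LOGCPU=$log CE_PHYSCPU=$phys"
}

# Sample with one socket exposing two logical CPUs; on a real WN
# pass /proc/cpuinfo instead.
cat > /tmp/cpuinfo.sample <<'EOF'
processor : 0
physical id : 0
processor : 1
physical id : 0
EOF
cpuinfo_counts /tmp/cpuinfo.sample   # prints CE_LOGCPU=2 CE_PHYSCPU=1
```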
Variable name | Type | Description | >= ig-yaim version |
---|---|---|---|
''BATCH_LOG_DIR'' | C | Path for the batch-system log files: "/var/torque" (for Torque); "/lsf_install_path/work/cluster_name/logdir" (for LSF). In case of a separate batch master (not on the same machine as the CE), PLEASE make sure that these directories are READABLE from the CEs. | 3.0.1-0 |
''BATCH_CONF_DIR'' | C | Only for LSF. Set the path where the ''lsf.conf'' file is located. | 3.0.1-0 |
Variable name | Type | Description | >= ig-yaim version |
---|---|---|---|
''BDII_ | C | If you are configuring a 3.1 node, change the port to ''2170'' and ''mds-vo-name'' to ''resource''. For example: BDII_CE_URL="ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid" | 3.0.1-0 |
''CLOSE_SE_HOST'' | C | Set the close SE for the site, chosen among the site's SEs. | 4.0.3-0 |
''SITE_BDII_HOST'' | C | Host name of the site BDII. | 3.0.1-0 |
''SITE_DESC'' | C | A long-format name for your site. | 4.0.4-0 |
''SITE_SECURITY_EMAIL'' | C | Contact email for security. | 4.0.4-0 |
''SITE_OTHER_GRID'' | C | Grid to which your site belongs, i.e. WLCG or EGEE. Use: SITE_OTHER_GRID="EGEE" | 4.0.4-0 |
''SITE_OTHER_EGEE_ROC'' | C | Agree within your ROC what the field should be. Use: SITE_OTHER_EGI_ROC="Italy" | 4.0.4-0 |
[configuration]
EGI = FALSE
OSG = FALSE
manual = True
manual_file = /opt/glite/yaim/config/<file_containing_your_sites.conf>
output_file = /opt/glite/etc/gip/top-urls.conf
cache_dir = /var/cache/glite/glite-info-update-endpoints

The file containing the list of sites "known" to the BDII_top is updated every hour by the cron job /etc/cron.hourly/glite-info-update-endpoints.
Variable name | Type | Description | >= ig-yaim version |
---|---|---|---|
''DGAS_ACCT_DIR'' | C | Path for the batch-system log files. For torque/pbs: DGAS_ACCT_DIR=/var/torque/server_priv/accounting; for LSF: DGAS_ACCT_DIR=lsf_install_path/work/cluster_name/logdir | 4.0.8-3 |
''DGAS_IGNORE_JOBS_LOGGED_BEFORE'' | O | Bound date for backward processing of jobs: jobs prior to that date are not considered. Use the format as in this example: DGAS_IGNORE_JOBS_LOGGED_BEFORE="2007-01-01". Default value: ''2008-01-01'' | 4.0.2-9 |
''DGAS_JOBS_TO_PROCESS'' | O | Type of jobs the CE has to process. ATTENTION: set "all" on "the main CE" of the site (the one with the best hardware), "grid" on the others. Default value: ''all'' | 4.0.2-9 |
''DGAS_HLR_RESOURCE'' | C | Reference Resource HLR hostname. There is no need to specify the port as in previous yaim versions (the default value "''56568''" will be set by yaim). | 4.0.2-9 |
''DGAS_USE_CE_HOSTNAME'' | O | Only for LSF. Main CE of the site. ATTENTION: set this variable only for a site with a single LSF batch master (no need to set it in the Torque case) where there is more than one CE or local submission host (i.e. a host from which you may submit jobs directly to the batch system). In this case ''DGAS_USE_CE_HOSTNAME'' must be set to the same value on all hosts sharing the "lrms", and this value can be arbitrarily chosen among the submitting hostnames (you may choose the best one). Otherwise leave it commented. For example: DGAS_USE_CE_HOSTNAME="my-ce.my-domain" | 4.0.2-9 |
''DGAS_MAIN_POLL_INTERVAL'' | O | UR Box parse interval: once all jobs have been processed, seconds to wait before looking for new jobs in the UR Box. Default value: "5" | 4.0.11-4 |
''DGAS_JOB_PER_TIME_INTERVAL'' | O | Number of jobs to process at each processing step (several steps per mainPollInterval, depending on the number of jobs found in the chocolateBox). Default value: "40" | 4.0.11-4 |
''DGAS_TIME_INTERVAL'' | O | Time in seconds to sleep after each processing step (if there are still jobs to process; otherwise a new mainPollInterval starts). Default value: "3" | 4.0.11-4 |
''DGAS_QUEUE_POLL_INTERVAL'' | O | Garbage clean-up interval in seconds. Default value: "2" | 4.0.11-4 |
''DGAS_SYSTEM_LOG_LEVEL'' | O | Log verbosity, from 0 (no logging) to 9 (maximum verbosity). Default value: "7" | 4.0.11-4 |