---+!! Regression Test Work Plan

%TOC%

---+ BLAH

---++ Fixes provided with BLAH 1.16.4

---+++ [[https://savannah.cern.ch/bugs/?88974][Bug #88974]] BUpdaterSGE and BNotifier don't start if sge_helperpath var is not fixed %RED%Not implemented%ENDCOLOR%

Install and configure (via yaim) a CREAM-CE using GE as batch system. Make sure that in =/etc/blah.config= the variable =sge_helperpath= is commented out or missing. Try to restart the blparser:

=/etc/init.d/glite-ce-blahparser restart=

It should work without problems. In particular it should not report the following error:

<verbatim>
Starting BNotifier: /usr/bin/BNotifier: sge_helperpath not defined. Exiting
[FAILED]
Starting BUpdaterSGE: /usr/bin/BUpdaterSGE: sge_helperpath not defined. Exiting
[FAILED]
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?89859][Bug #89859]] There is a memory leak in the updater for LSF, PBS and Condor %RED%Not implemented%ENDCOLOR%

Configure a CREAM CE using the new blparser. Submit 1000 jobs using e.g. this JDL:

<verbatim>
[
executable="/bin/sleep";
arguments="100";
]
</verbatim>

Keep monitoring the memory used by the bupdaterxxx process: it should remain essentially constant. The test should be done for both LSF and Torque/PBS. A simple sampling loop is sketched below.
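The bug report does not prescribe a monitoring procedure; a minimal sampling loop such as the following sketch can be used (the process name =BUpdaterLSF= is an example, substitute the updater matching the batch system under test):

<verbatim>
# Hypothetical helper: append a timestamped RSS sample (in kB) for the
# updater process to /tmp/bupdater_mem.log once per minute.
while true; do
    echo "$(date '+%F %T') $(ps -C BUpdaterLSF -o pid=,rss=)" >> /tmp/bupdater_mem.log
    sleep 60
done
</verbatim>

The RSS column should stay roughly flat while the 1000 jobs flow through the system.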
---++ Fixes provided with BLAH 1.16.3

---+++ [[https://savannah.cern.ch/bugs/?75854][Bug #75854]] Problems related to the growth of the blah registry %RED%Not implemented%ENDCOLOR%

Configure a CREAM CE using the new BLparser. Verify that in =/etc/blah.config= there is (default scenario):

<verbatim>
job_registry_use_mmap=yes
</verbatim>

Submit 5000 jobs on a CREAM CE using the following JDL:

<verbatim>
[
executable="/bin/sleep";
arguments="100";
]
</verbatim>

Monitor the BLAH processes and verify that each of them uses no more than 50 MB. A submission loop such as the sketch below can be used.
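The plan does not prescribe a submission method; assuming the JDL above is saved as =sleep.jdl= and submission happens from a UI, a loop like this sketch does the bulk submission (the CE endpoint is an example, replace it with the one under test):

<verbatim>
# Hypothetical sketch: submit the same JDL 5000 times with automatic delegation.
for i in $(seq 1 5000); do
    glite-ce-job-submit -a -r cream-38.pd.infn.it:8443/cream-pbs-creamtest2 sleep.jdl
done
</verbatim>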
---+++ [[https://savannah.cern.ch/bugs/?77776][Bug #77776]] (BUpdater should have an option to use cached batch system commands) %RED%Not implemented%ENDCOLOR%

Add:

<verbatim>
lsf_batch_caching_enabled=yes
batch_command_caching_filter=/usr/bin/runcmd.pl
</verbatim>

in =/etc/blah.config=. Create =/usr/bin/runcmd.pl= with the following content:

<verbatim>
#!/usr/bin/perl
#---------------------#
#  PROGRAM: argv.pl   #
#---------------------#

# Append every command line passed through the caching filter to /tmp/xyz.
$numArgs = $#ARGV + 1;
open (MYFILE, '>>/tmp/xyz');
foreach $argnum (0 .. $#ARGV) {
    print MYFILE "$ARGV[$argnum] ";
}
print MYFILE "\n";
close (MYFILE);
</verbatim>

Submit some jobs. Check that in =/tmp/xyz= the queries to the batch system are recorded. E.g. for LSF something like this should be reported:

<verbatim>
/opt/lsf/7.0/linux2.6-glibc2.3-x86/bin/bjobs -u all -l
/opt/lsf/7.0/linux2.6-glibc2.3-x86/bin/bjobs -u all -l
...
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?80805][Bug #80805]] (BLAH job registry permissions should be improved) %RED% Not implemented %ENDCOLOR%

Check permissions and ownership under =/var/blah=. They should be:

<verbatim>
/var/blah:
total 12
-rw-r--r-- 1 tomcat tomcat    5 Oct 18 07:32 blah_bnotifier.pid
-rw-r--r-- 1 tomcat tomcat    5 Oct 18 07:32 blah_bupdater.pid
drwxrwx--t 4 tomcat tomcat 4096 Oct 18 07:38 user_blah_job_registry.bjr

/var/blah/user_blah_job_registry.bjr:
total 16
-rw-rw-r-- 1 tomcat tomcat 1712 Oct 18 07:38 registry
-rw-r--r-- 1 tomcat tomcat  260 Oct 18 07:38 registry.by_blah_index
-rw-rw-rw- 1 tomcat tomcat    0 Oct 18 07:38 registry.locktest
drwxrwx-wt 2 tomcat tomcat 4096 Oct 18 07:38 registry.npudir
drwxrwx-wt 2 tomcat tomcat 4096 Oct 18 07:38 registry.proxydir
-rw-rw-r-- 1 tomcat tomcat    0 Oct 18 07:32 registry.subjectlist

/var/blah/user_blah_job_registry.bjr/registry.npudir:
total 0

/var/blah/user_blah_job_registry.bjr/registry.proxydir:
total 0
</verbatim>
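The listing above can be reproduced with a recursive long listing:

<verbatim>
ls -lR /var/blah
</verbatim>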
---+++ [[https://savannah.cern.ch/bugs/?81354][Bug #81354]] (Missing 'Iwd' attribute when transferring files with the 'TransferInput' attribute causes thread to loop) %RED% Not implemented %ENDCOLOR%

Log on a CREAM CE as user tomcat. Create a proxy of yours and copy it as =/tmp/proxy= (change the ownership to tomcat.tomcat). Create the file =/home/dteam001/dir1/fstab= (you can copy /etc/fstab). Submit a job directly via blah (in the following, change pbs and creamtest2 to the relevant batch system and queue names):

<verbatim>
$ /usr/bin/blahpd
$GahpVersion: 1.16.2 Mar 31 2008 INFN\ blahpd\ (poly,new_esc_format) $
BLAH_SET_SUDO_ID dteam001
S Sudo\ mode\ on
blah_job_submit 1 [cmd="/bin/cp";Args="fstab\ fstab.out";TransferInput="/home/dteam001/dir1/fstab";TransferOutput="fstab.out";TransferOutputRemaps="fstab.out=/home/dteam001/dir1/fstab.out";gridtype="pbs";queue="creamtest2";x509userproxy="/tmp/proxy"]
S
results
S 1
1 0 No\ error pbs/20111010/304.cream-38.pd.infn.it
</verbatim>

Finally check the content of =/home/dteam001/dir1/=, where you should see both =fstab= and =fstab.out=:

<verbatim>
$ ls /home/dteam001/dir1/
fstab  fstab.out
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?81824][Bug #81824]] (yaim-cream-ce should manage the attribute bupdater_loop_interval) %RED%Not implemented%ENDCOLOR%

Set =BUPDATER_LOOP_INTERVAL= to 30 in siteinfo.def and reconfigure via yaim. Then verify that in =blah.config= there is:

<verbatim>
bupdater_loop_interval=30
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?82281][Bug #82281]] (blahp.log records should always contain CREAM job ID) %RED%Not implemented%ENDCOLOR%

Submit a job directly to CREAM using the CREAM CLI. Then submit a job to CREAM through the WMS. In the accounting log file (=/var/log/cream/accounting/blahp.log-<date>=), in both cases the clientID field should end with the numeric part of the CREAM jobid, e.g.:

<verbatim>
"timestamp=2011-10-10 14:37:38" "userDN=/C=IT/O=INFN/OU=Personal Certificate/L=Padova/CN=Massimo Sgaravatto" "userFQAN=/dteam/Role=NULL/Capability=NULL" "userFQAN=/dteam/NGI_IT/Role=NULL/Capability=NULL" "ceID=cream-38.pd.infn.it:8443/cream-pbs-creamtest2" "jobID=CREAM956286045" "lrmsID=300.cream-38.pd.infn.it" "localUser=18757" "clientID=cre38_956286045"

"timestamp=2011-10-10 14:39:57" "userDN=/C=IT/O=INFN/OU=Personal Certificate/L=Padova/CN=Massimo Sgaravatto" "userFQAN=/dteam/Role=NULL/Capability=NULL" "userFQAN=/dteam/NGI_IT/Role=NULL/Capability=NULL" "ceID=cream-38.pd.infn.it:8443/cream-pbs-creamtest2" "jobID=https://devel19.cnaf.infn.it:9000/dLvm84LvD7w7QXtLZK4L0A" "lrmsID=302.cream-38.pd.infn.it" "localUser=18757" "clientID=cre38_315532638"
</verbatim>
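A quick way to eyeball the correspondence is to pull the two fields out of the log; a hypothetical sketch (adjust the =<date>= suffix to the actual file name):

<verbatim>
# Print the jobID and clientID fields of every accounting record.
grep -o '"jobID=[^"]*"\|"clientID=[^"]*"' /var/log/cream/accounting/blahp.log-<date>
</verbatim>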
---+++ [[https://savannah.cern.ch/bugs/?82297][Bug #82297]] (blahp.log rotation period is too short) %RED%Not implemented%ENDCOLOR%

Check that in =/etc/logrotate.d/blahp-logrotate= rotate is equal to 365:

<verbatim>
# cat /etc/logrotate.d/blahp-logrotate
/var/log/cream/accounting/blahp.log {
    copytruncate
    rotate 365
    size = 10M
    missingok
    nomail
}
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?83275][Bug #83275]] (Problem in updater with very short jobs that can cause no notification to cream) %RED%Not implemented%ENDCOLOR%

Configure a CREAM CE using the new blparser. Submit a job using the following JDL:

<verbatim>
[
executable="/bin/echo";
arguments="ciao";
]
</verbatim>

Check in the bnotifier log file (=/var/log/cream/glite-ce-bnotifier.log=) that at least one notification is sent for this job, e.g.:

<verbatim>
2011-11-04 14:11:11 Sent for Cream:[BatchJobId="927.cream-38.pd.infn.it"; JobStatus=4; ChangeTime="2011-11-04 14:08:55"; JwExitCode=0; Reason="reason=0"; ClientJobId="622028514"; BlahJobName="cre38_622028514";]
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?83347][Bug #83347]] (Incorrect special character handling for BLAH Arguments and Environment attributes) %RED%Not implemented%ENDCOLOR%

Log on a CREAM CE as user tomcat. Create a proxy of yours and copy it as =/tmp/proxy= (change the ownership to tomcat.tomcat). Submit a job directly via blah (in the following, change pbs and creamtest1 to the relevant batch system and queue names):

<verbatim>
BLAH_JOB_SUBMIT 1 [Cmd="/bin/echo";Args="$HOSTNAME";Out="/tmp/stdout_l15367";In="/dev/null";GridType="pbs";Queue="creamtest1";x509userproxy="/tmp/proxy";Iwd="/tmp";TransferOutput="output_file";TransferOutputRemaps="output_file=/tmp/stdout_l15367";GridResource="blah"]
</verbatim>

Verify that the output file contains the hostname of the WN.

---+++ [[https://savannah.cern.ch/bugs/?87419][Bug #87419]] (blparser_master adds some spurious characters in the BLParser command line) %RED%Not implemented%ENDCOLOR%

Configure a CREAM CE using the old blparser. Check the blparser process using ps. It shouldn't show spurious characters:

<verbatim>
root     26485  0.0  0.2 155564  5868 ?   Sl   07:36   0:00 /usr/bin/BLParserPBS -d 1 -l /var/log/cream/glite-pbsparser.log -s /var/torque -p 33333 -m 56565
</verbatim>

--------------------------

---+ CREAM

---++ Fixes provided with CREAM 1.14

---+++ [[https://savannah.cern.ch/bugs/?59871][Bug #59871]] lcg-info-dynamic-software must split tag lines on white space - %RED% Not Implemented %ENDCOLOR%

To verify the fix, edit a VO.list file under =/opt/glite/var/info/cream-38.pd.infn.it/VO= adding:

<verbatim>
tag1 tag2 tag3
</verbatim>

Then query the resource bdii, where you should see:

<verbatim>
...
GlueHostApplicationSoftwareRunTimeEnvironment: tag1
GlueHostApplicationSoftwareRunTimeEnvironment: tag2
GlueHostApplicationSoftwareRunTimeEnvironment: tag3
...
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?68968][Bug #68968]] lcg-info-dynamic-software should protect against duplicate RTE tags - %RED% Not Implemented %ENDCOLOR%

To verify the fix, edit a VO.list file under =/opt/glite/var/info/cream-38.pd.infn.it/VO= adding:

<verbatim>
tag1
tag2
TAG1
tag1
</verbatim>

Then query the resource bdii:

<verbatim>
ldapsearch -h <CE host> -x -p 2170 -b "o=grid" | grep -i tag
</verbatim>

This should return:

<verbatim>
GlueHostApplicationSoftwareRunTimeEnvironment: tag1
GlueHostApplicationSoftwareRunTimeEnvironment: tag2
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?69854][Bug #69854]] CreamCE should publish non-production state when job submission is disabled - %RED% Not Implemented %ENDCOLOR%

Disable job submission with =glite-ce-disable-submission=. Wait 3 minutes and then perform the following ldap query:

<verbatim>
# ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=grid" | grep GlueCEStateStatus
</verbatim>

For each GlueCE this should return:

<verbatim>
GlueCEStateStatus: Draining
</verbatim>

---+++ [[http://savannah.cern.ch/bugs/?69857][Bug #69857]] Job submission to CreamCE is enabled by restart of service even if it was previously disabled - %GREEN% Implemented %ENDCOLOR%

STATUS: %GREEN% Implemented %ENDCOLOR%

To test the fix (the full command sequence is sketched after this list):

   * Disable the submission on the CE via the =glite-ce-disable-submission host:port= command (provided by the CREAM CLI package installed on the UI). It can be issued only by a CREAM CE administrator, i.e. by a person whose DN is listed in the =/etc/grid-security/admin-list= file of the CE. Output should be: "Operation for disabling new submissions succeeded".
   * Restart tomcat on the CREAM CE (=service tomcat restart= on the CE).
   * Verify that the submission is still disabled via the =glite-ce-allowed-submission host:port= command (also provided by the CREAM CLI package). Output should be: "Job submission to this CREAM CE is disabled".
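A consolidated sketch of the sequence above (the hostname is an example):

<verbatim>
# On the UI, as a CREAM CE administrator:
glite-ce-disable-submission cream-38.pd.infn.it:8443
# On the CE (wait for tomcat to come back up):
service tomcat restart
# On the UI again; it must still report that submission is disabled:
glite-ce-allowed-submission cream-38.pd.infn.it:8443
</verbatim>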
---+++ [[https://savannah.cern.ch/bugs/?77791][Bug #77791]] CREAM installation does not fail if sudo is not installed - %RED% Not Implemented %ENDCOLOR%

Try to configure via yaim a CREAM-CE where the sudo executable is not installed. The configuration should fail saying:

<verbatim>
ERROR: sudo probably not installed !
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?79362][Bug #79362]] location of python files provided with lcg-info-dynamic-scheduler-generic-2.3.5-0.sl5 - %RED% Not Implemented %ENDCOLOR%

To verify the fix, do a:

<verbatim>
rpm -ql dynsched-generic
</verbatim>

and verify that the files are installed in =/usr/lib/python2.4= and no longer in =/usr/lib/python=.

---+++ [[https://savannah.cern.ch/bugs/?80410][Bug #80410]] CREAM bulk submission CLI is desirable - %RED% Not Implemented %ENDCOLOR%

To test the fix, specify multiple JDLs in the =glite-ce-job-submit= command, e.g.:

<verbatim>
glite-ce-job-submit --debug -a -r cream-47.pd.infn.it:8443/cream-lsf-creamtest1 jdl1.jdl jdl2.jdl jdl3.jdl
</verbatim>

Considering the above example, verify that 3 jobs are submitted and 3 jobids are returned.

---+++ [[https://savannah.cern.ch/bugs/?81734][Bug #81734]] removed conf file retrieve from old path that is not EMI compliant - %RED% Not Implemented %ENDCOLOR%

To test the fix, create the conf file =/etc/glite_cream.conf= with the following content:

<verbatim>
[
CREAM_URL_PREFIX="abc://";
]
</verbatim>

Then try e.g. the following command:

<verbatim>
glite-ce-job-list --debug cream-47.pd.infn.it
</verbatim>

It should report that it is trying to contact =abc://cream-47.pd.infn.it:8443//ce-cream/services/CREAM2=:

<verbatim>
2012-01-13 14:44:39,028 DEBUG - Service address=[abc://cream-47.pd.infn.it:8443//ce-cream/services/CREAM2]
</verbatim>

Move the conf file to =/etc/VO/glite_cream.conf= and repeat the test, which should give the same result. Then move the conf file to =~/.glite/VO/glite_cream.conf= and repeat the test, which again should give the same result.

---+++ [[https://savannah.cern.ch/bugs/?82206][Bug #82206]] yaim-cream-ce: BATCH_LOG_DIR missing among the required attributes - %RED% Not Implemented %ENDCOLOR%

Try to configure a CREAM CE with Torque using yaim without setting =BLPARSER_WITH_UPDATER_NOTIFIER= and without setting =BATCH_LOG_DIR=. It should fail saying:

<verbatim>
INFO: Executing function: config_cream_blah_check
ERROR: BATCH_LOG_DIR is not set
ERROR: Error during the execution of function: config_cream_blah_check
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?83314][Bug #83314]] Information about the RTEpublisher service should be available also in glue2 - %RED% Not Implemented %ENDCOLOR%

Check if the resource BDII publishes GLUE 2 GLUE2ComputingEndPoint objectclasses with GLUE2EndpointInterfaceName equal to org.glite.ce.ApplicationPublisher. If the CE is configured in no-cluster mode, there should be one such objectclass. If the CE is configured in cluster mode and the gLite-CLUSTER is deployed on a different node, there shouldn't be any such objectclass.

<verbatim>
ldapsearch -h <CREAM CE hostname> -x -p 2170 -b "o=glue" "(&(objectclass=GLUE2ComputingEndPoint)(GLUE2EndpointInterfaceName=org.glite.ce.ApplicationPublisher))"
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?83338][Bug #83338]] endpointType (in GLUE2ServiceComplexity) hardwired to 1 in CREAM CE is not always correct - %RED% Not Implemented %ENDCOLOR%

Perform the following query on the resource bdii of the CREAM CE:

<verbatim>
ldapsearch -x -h <CREAM CE hostname> -p 2170 -b "o=glue" | grep -i endpointtype
</verbatim>

endpointtype should be 3 if CEMon is deployed (=USE_CEMON= set to true), and 2 otherwise.

---+++ [[https://savannah.cern.ch/bugs/?83474][Bug #83474]] Some problems concerning glue2 publications of CREAM CE configured in cluster mode - %RED% Not Implemented %ENDCOLOR%

Configure a CREAM CE in cluster mode, with the gLite-CLUSTER configured on a different host.

   * Check if the resource BDII publishes GLUE 2 GLUE2ComputingService objectclasses. There should be one GLUE2ComputingService objectclass:
<verbatim>
ldapsearch -h <CREAM CE hostname> -x -p 2170 -b "o=glue" objectclass=GLUE2ComputingService
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2ComputingEndPoint objectclasses with GLUE2EndpointInterfaceName equal to org.glite.ce.CREAM. There should be one such GLUE2ComputingEndPoint objectclass:
<verbatim>
ldapsearch -h <CREAM CE hostname> -x -p 2170 -b "o=glue" "(&(objectclass=GLUE2ComputingEndPoint)(GLUE2EndpointInterfaceName=org.glite.ce.CREAM))"
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2Manager objectclasses. There shouldn't be any GLUE2Manager objectclass:
<verbatim>
ldapsearch -h <CREAM CE hostname> -x -p 2170 -b "o=glue" objectclass=GLUE2Manager
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2Share objectclasses. There shouldn't be any GLUE2Share objectclass:
<verbatim>
ldapsearch -h <CREAM CE hostname> -x -p 2170 -b "o=glue" objectclass=GLUE2Share
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2ExecutionEnvironment objectclasses. There shouldn't be any GLUE2ExecutionEnvironment objectclass:
<verbatim>
ldapsearch -h <CREAM CE hostname> -x -p 2170 -b "o=glue" objectclass=GLUE2ExecutionEnvironment
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2ComputingEndPoint objectclasses with GLUE2EndpointInterfaceName equal to org.glite.ce.ApplicationPublisher. There shouldn't be any such objectclass:
<verbatim>
ldapsearch -h <CREAM CE hostname> -x -p 2170 -b "o=glue" "(&(objectclass=GLUE2ComputingEndPoint)(GLUE2EndpointInterfaceName=org.glite.ce.ApplicationPublisher))"
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/index.php?83592][Bug #83592]] CREAM client doesn't allow the delegation of RFC proxies - %RED% Not Implemented %ENDCOLOR%

Create an RFC proxy, e.g.:

<verbatim>
voms-proxy-init -voms dteam -rfc
</verbatim>

and then submit via =glite-ce-job-submit= a job using ISB and OSB, e.g.:

<verbatim>
[
executable="ssh1.sh";
inputsandbox={"file:///home/sgaravat/JDLExamples/ssh1.sh", "file:///home/sgaravat/a"};
stdoutput="out3.out";
stderror="err2.err";
outputsandbox={"out3.out", "err2.err", "ssh1.sh", "a"};
outputsandboxbasedesturi="gsiftp://localhost";
]
</verbatim>

Verify that the final status is =DONE-OK=.

---+++ [[https://savannah.cern.ch/bugs/index.php?83593][Bug #83593]] Problems limiting RFC proxies in CREAM - %RED% Not Implemented %ENDCOLOR%

Consider the same test done for bug #83592.
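For the two RFC-proxy tests above, the final state can be watched with a simple polling loop; a hypothetical sketch (replace the jobid with the one returned by =glite-ce-job-submit=):

<verbatim>
# Poll the job status every 30 seconds until interrupted.
JOBID="https://cream-47.pd.infn.it:8443/CREAM123456789"
while true; do
    glite-ce-job-status "$JOBID" | grep -i status
    sleep 30
done
</verbatim>

The last reported status should be =DONE-OK=.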
---+++ [[https://savannah.cern.ch/bugs/?84308][Bug #84308]] Error on glite_cream_load_monitor if cream db is on another host - %RED% Not Implemented %ENDCOLOR%

Configure a CREAM CE with the database installed on a different host than the CREAM CE. Run:

<verbatim>
/usr/bin/glite_cream_load_monitor --show
</verbatim>

which shouldn't report any error.

---+++ [[https://savannah.cern.ch/bugs/?86522][Bug #86522]] glite-ce-job-submit authorization error message difficult to understand - %RED% Not Implemented %ENDCOLOR%

TBD

---+++ [[https://savannah.cern.ch/bugs/?86609][Bug #86609]] yaim variable CE_OTHERDESCR not properly managed for Glue2 - %RED% Not Implemented %ENDCOLOR%

Try to set the yaim variable =CE_OTHERDESCR= to:

<verbatim>
CE_OTHERDESCR="Cores=1"
</verbatim>

Perform the following ldap query on the resource bdii:

<verbatim>
ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=glue" objectclass=GLUE2ExecutionEnvironment GLUE2EntityOtherInfo
</verbatim>

This should also return:

<verbatim>
GLUE2EntityOtherInfo: Cores=1
</verbatim>

Then try to set the yaim variable =CE_OTHERDESCR= to:

<verbatim>
CE_OTHERDESCR="Cores=1,Benchmark=150-HEP-SPEC06"
</verbatim>

and reconfigure via yaim. Perform the following ldap query on the resource bdii:

<verbatim>
ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=glue" objectclass=GLUE2ExecutionEnvironment GLUE2EntityOtherInfo
</verbatim>

This should also return:

<verbatim>
GLUE2EntityOtherInfo: Cores=1
</verbatim>

Then perform the following ldap query on the resource bdii:

<verbatim>
ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=glue" objectclass=Glue2Benchmark
</verbatim>

This should return something like:

<verbatim>
dn: GLUE2BenchmarkID=cream-47.pd.infn.it_hep-spec06,GLUE2ResourceID=cream-47.pd.infn.it,GLUE2ServiceID=cream-47.pd.infn.it_ComputingElement,GLUE2GroupID=resource,o=glue
GLUE2BenchmarkExecutionEnvironmentForeignKey: cream-47.pd.infn.it
GLUE2BenchmarkID: cream-47.pd.infn.it_hep-spec06
GLUE2BenchmarkType: hep-spec06
objectClass: GLUE2Entity
objectClass: GLUE2Benchmark
GLUE2EntityCreationTime: 2012-01-13T14:04:48Z
GLUE2BenchmarkValue: 150
GLUE2EntityOtherInfo: InfoProviderName=glite-ce-glue2-benchmark-static
GLUE2EntityOtherInfo: InfoProviderVersion=1.0
GLUE2EntityOtherInfo: InfoProviderHost=cream-47.pd.infn.it
GLUE2BenchmarkComputingManagerForeignKey: cream-47.pd.infn.it_ComputingElement_Manager
GLUE2EntityName: Benchmark hep-spec06
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?86694][Bug #86694]] A different port number than 9091 should be used for LRMS_EVENT_LISTENER - %RED% Not Implemented %ENDCOLOR%

On a running CREAM CE, perform the following command:

<verbatim>
netstat -an | grep -i 9091
</verbatim>

This shouldn't return anything. Then perform the following command:

<verbatim>
netstat -an | grep -i 49152
</verbatim>

This should return:

<verbatim>
tcp        0      0 :::49152                    :::*                        LISTEN
</verbatim>

The following commands should likewise return nothing:

<verbatim>
[root@cream-47 ~]# netstat -an | grep -i 49153
[root@cream-47 ~]# netstat -an | grep -i 49154
[root@cream-47 ~]# netstat -an | grep -i 9091
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?86697][Bug #86697]] User application's exit code not recorded in the CREAM log file - %RED% Not Implemented %ENDCOLOR%

Submit a job and wait for its completion. Then check the glite-ce-cream.log file on the CREAM CE. The user exit code should be reported (field =exitCode=), e.g.:

<verbatim>
13 Jan 2012 15:22:52,966 org.glite.ce.creamapi.jobmanagement.cmdexecutor.AbstractJobExecutor - JOB CREAM124031222 STATUS CHANGED: REALLY-RUNNING => DONE-OK [failureReason=reason=0] [exitCode=23] [localUser=dteam004] [workerNode=prod-wn-001.pn.pd.infn.it] [delegationId=7a52772caaeea96628a1ff9223e67a1f6c6dde9f]
</verbatim>
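Assuming the standard log location under =/var/log/cream=, the check can be done with a simple grep:

<verbatim>
grep exitCode /var/log/cream/glite-ce-cream.log
</verbatim>

The matching lines for completed jobs should carry an =[exitCode=...]= field.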
---+++ [[https://savannah.cern.ch/bugs/?86737][Bug #86737]] A different port number than 9909 should be used for CREAM_JOB_SENSOR - %RED% Not Implemented %ENDCOLOR%

On a running CREAM CE, perform the following command:

<verbatim>
netstat -an | grep -i 9909
</verbatim>

This shouldn't return anything.

---+++ [[https://savannah.cern.ch/bugs/?86773][Bug #86773]] wrong /etc/glite-ce-cream/cream-config.xml with multiple ARGUS servers set - %RED% Not Implemented %ENDCOLOR%

To test the fix, set in siteinfo.def:

<verbatim>
USE_ARGUS=yes
ARGUS_PEPD_ENDPOINTS="https://cream-46.pd.infn.it:8154/authz https://cream-46-1.pd.infn.it:8154/authz"
CREAM_PEPC_RESOURCEID="http://pd.infn.it/cream-47"
</verbatim>

i.e. 2 values for =ARGUS_PEPD_ENDPOINTS=. Then configure via yaim. In =/etc/glite-ce-cream/cream-config.xml= there should be:

<verbatim>
<argus-pep name="pep-client1"
           resource_id="http://pd.infn.it/cream-47"
           cert="/etc/grid-security/tomcat-cert.pem"
           key="/etc/grid-security/tomcat-key.pem"
           passwd=""
           mapping_class="org.glite.ce.cream.authz.argus.ActionMapping">
  <endpoint url="https://cream-46.pd.infn.it:8154/authz" />
  <endpoint url="https://cream-46-1.pd.infn.it:8154/authz" />
</argus-pep>
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?87690][Bug #87690]] Not possible to map different queues to different clusters for CREAM configured in cluster mode - %RED% Not Implemented %ENDCOLOR%

Configure via yaim a CREAM CE in cluster mode with different queues mapped to different clusters, e.g.:

<verbatim>
CREAM_CLUSTER_MODE=yes
CE_HOST_cream_47_pd_infn_it_QUEUES="creamtest1 creamtest2"
QUEUE_CREAMTEST1_CLUSTER_UniqueID=cl1id
QUEUE_CREAMTEST2_CLUSTER_UniqueID=cl2id
</verbatim>

Then query the resource bdii of the CREAM CE and check the =GlueForeignKey= attributes of the different GlueCEs: they should refer to the specified clusters:

<verbatim>
ldapsearch -h cream-47.pd.infn.it -p 2170 -x -b o=grid objectclass=GlueCE GlueForeignKey
# extended LDIF
#
# LDAPv3
# base <o=grid> with scope subtree
# filter: objectclass=GlueCE
# requesting: GlueForeignKey
#

# cream-47.pd.infn.it:8443/cream-lsf-creamtest2, resource, grid
dn: GlueCEUniqueID=cream-47.pd.infn.it:8443/cream-lsf-creamtest2,Mds-Vo-name=resource,o=grid
GlueForeignKey: GlueClusterUniqueID=cl2id

# cream-47.pd.infn.it:8443/cream-lsf-creamtest1, resource, grid
dn: GlueCEUniqueID=cream-47.pd.infn.it:8443/cream-lsf-creamtest1,Mds-Vo-name=resource,o=grid
GlueForeignKey: GlueClusterUniqueID=cl1id
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?87799][Bug #87799]] Add yaim variables to configure the GLUE 2 WorkingArea attributes - %RED% Not Implemented %ENDCOLOR%

Set all (or some) of the following yaim variables:

<verbatim>
WORKING_AREA_SHARED
WORKING_AREA_GUARANTEED
WORKING_AREA_TOTAL
WORKING_AREA_FREE
WORKING_AREA_LIFETIME
WORKING_AREA_MULTISLOT_TOTAL
WORKING_AREA_MULTISLOT_FREE
WORKING_AREA_MULTISLOT_LIFETIME
</verbatim>

and then configure via yaim. Then query the resource bdii of the CREAM CE and verify that the relevant attributes of the glue2 ComputingManager object are set.
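The GLUE 2 ComputingManager object can be inspected with a query in the same style as the others in this plan; the WorkingArea attribute names all contain the string =WorkingArea=, so a grep is enough:

<verbatim>
ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=glue" objectclass=GLUE2ComputingManager | grep -i workingarea
</verbatim>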
---+++ [[https://savannah.cern.ch/bugs/?88078][Bug #88078]] CREAM DB names should be configurable - %RED% Not Implemented %ENDCOLOR%

Configure a CREAM CE from scratch setting the yaim variables =CREAM_DB_NAME= and =DELEGATION_DB_NAME=, e.g.:

<verbatim>
CREAM_DB_NAME=abc
DELEGATION_DB_NAME=xyz
</verbatim>

and then configure via yaim. Then check that the two databases have been created:

<verbatim>
# mysql -u xxx -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7176
Server version: 5.0.77 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| abc                |
| test               |
| xyz                |
+--------------------+
4 rows in set (0.02 sec)
</verbatim>

Try also a job submission to verify that everything works properly.

---+++ [[https://savannah.cern.ch/bugs/?89489][Bug #89489]] yaim plugin for CREAM CE does not execute a check function due to name mismatch - %RED% Not Implemented %ENDCOLOR%

Configure a CREAM CE via yaim and save the yaim output. It should contain the string:

<verbatim>
INFO: Executing function: config_cream_gip_scheduler_plugin_check
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?89664][Bug #89664]] yaim-cream-ce doesn't manage spaces in CE_OTHERDESCR - %RED% Not Implemented %ENDCOLOR%

Try to set the yaim variable =CE_OTHERDESCR= to:

<verbatim>
CE_OTHERDESCR="Cores=1"
</verbatim>

Perform the following ldap query on the resource bdii:

<verbatim>
ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=glue" objectclass=GLUE2ExecutionEnvironment GLUE2EntityOtherInfo
</verbatim>

This should also return:

<verbatim>
GLUE2EntityOtherInfo: Cores=1
</verbatim>

Then try to set the yaim variable =CE_OTHERDESCR= to:

<verbatim>
CE_OTHERDESCR="Cores=2, Benchmark=4-HEP-SPEC06"
</verbatim>

and reconfigure via yaim. Perform the following ldap query on the resource bdii:

<verbatim>
ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=glue" objectclass=GLUE2ExecutionEnvironment GLUE2EntityOtherInfo
</verbatim>

This should also return:

<verbatim>
GLUE2EntityOtherInfo: Cores=2
</verbatim>

Then perform the following ldap query on the resource bdii:

<verbatim>
ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=glue" objectclass=Glue2Benchmark
</verbatim>

This should return something like:

<verbatim>
# cream-47.pd.infn.it_hep-spec06, cream-47.pd.infn.it, ppp, resource, glue
dn: GLUE2BenchmarkID=cream-47.pd.infn.it_hep-spec06,GLUE2ResourceID=cream-47.pd.infn.it,GLUE2ServiceID=ppp,GLUE2GroupID=resource,o=glue
GLUE2BenchmarkExecutionEnvironmentForeignKey: cream-47.pd.infn.it
GLUE2BenchmarkID: cream-47.pd.infn.it_hep-spec06
GLUE2BenchmarkType: hep-spec06
objectClass: GLUE2Entity
objectClass: GLUE2Benchmark
GLUE2EntityCreationTime: 2012-01-13T17:07:52Z
GLUE2BenchmarkValue: 4
GLUE2EntityOtherInfo: InfoProviderName=glite-ce-glue2-benchmark-static
GLUE2EntityOtherInfo: InfoProviderVersion=1.0
GLUE2EntityOtherInfo: InfoProviderHost=cream-47.pd.infn.it
GLUE2BenchmarkComputingManagerForeignKey: ppp_Manager
GLUE2EntityName: Benchmark hep-spec06
</verbatim>

---+++ [[https://savannah.cern.ch/bugs/?89784][Bug #89784]] Improve client side description of authorization failure - %RED% Not Implemented %ENDCOLOR%

Remove the lsc files for your VO and try a submission to that CE. It should return an authorization error. Then check the glite-ce-cream.log. It should report something like:

<verbatim>
13 Jan 2012 18:21:21,270 org.glite.voms.PKIVerifier - Cannot find usable certificates to validate the AC. Check that the voms server host certificate is in your vomsdir directory.
13 Jan 2012 18:21:21,602 org.glite.ce.commonj.authz.gjaf.LocalUserPIP - glexec error: [gLExec]: LCAS failed, see '/var/log/glexec/lcas_lcmaps.log' for more info.
13 Jan 2012 18:21:21,603 org.glite.ce.commonj.authz.gjaf.ServiceAuthorizationChain - Failed to get the local user id via glexec: glexec error: [gLExec]: LCAS failed, see '/var/log/glexec/lcas_lcmaps.log' for more info.
org.glite.ce.commonj.authz.AuthorizationException: Failed to get the local user id via glexec: glexec error: [gLExec]: LCAS failed, see '/var/log/glexec/lcas_lcmaps.log' for more info.
</verbatim>
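The lsc files normally live under =/etc/grid-security/vomsdir/<VO>/=; a hypothetical way to set them aside (dteam used as an example VO):

<verbatim>
# On the CE: move the VO's lsc directory out of the way, then retry a submission.
mv /etc/grid-security/vomsdir/dteam /root/vomsdir-dteam.bak
</verbatim>

Remember to move the directory back after the test.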
---++ Fixes provided with CREAM 1.13.3

---+++ [[http://savannah.cern.ch/bugs/?81561][Bug #81561]] Make JobDBAdminPurger script compliant with CREAM EMI environment. - %GREEN% Implemented %ENDCOLOR%

STATUS: %GREEN% Implemented %ENDCOLOR%

To test the fix, simply run JobDBAdminPurger.sh on the CREAM CE as root. E.g.:

<verbatim>
# JobDBAdminPurger.sh -c /etc/glite-ce-cream/cream-config.xml -u <user> -p <passwd> -s DONE-FAILED,0
START jobAdminPurger
</verbatim>

It should work without reporting error messages:

<verbatim>
-----------------------------------------------------------
Job CREAM595579358 is going to be purged ...
- Job deleted. JobId = CREAM595579358
CREAM595579358 has been purged!
-----------------------------------------------------------
STOP jobAdminPurger
</verbatim>

---+++ [[http://savannah.cern.ch/bugs/?83238][Bug #83238]] Sometimes CREAM does not update the state of a failed job. - %GREEN% Implemented %ENDCOLOR%

STATUS: %GREEN% Implemented %ENDCOLOR%

To test the fix, try to kill a job by hand (see the sketch below). The status of the job should eventually be:

<verbatim>
Status        = [DONE-FAILED]
ExitCode      = [N/A]
FailureReason = [Job has been terminated (got SIGTERM)]
</verbatim>
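One way to kill the job by hand, sketched for Torque/PBS (for LSF, =bkill= plays the same role): look up the batch id with =glite-ce-job-status= (or in blahp.log) and delete the job behind CREAM's back:

<verbatim>
# <batch jobid> is e.g. 927.cream-38.pd.infn.it on a Torque system.
qdel <batch jobid>
</verbatim>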
---+++ [[http://savannah.cern.ch/bugs/?83749][Bug #83749]] JobDBAdminPurger cannot purge jobs if configured sandbox dir has changed. - %GREEN% Implemented %ENDCOLOR%

STATUS: %RED% Not implemented %ENDCOLOR%

To test the fix, submit some jobs and then reconfigure the service with a different value of =CREAM_SANDBOX_PATH=. Then try, with the =JobDBAdminPurger.sh= script, to purge some jobs submitted before the switch. It must be verified:

   * that the jobs have been purged from the CREAM DB (i.e. a =glite-ce-job-status= should not find them anymore)
   * that the relevant CREAM sandbox directories have been deleted

---+++ [[http://savannah.cern.ch/bugs/?84374][Bug #84374]] yaim-cream-ce: GlueForeignKey: GlueCEUniqueID: published using : instead of =. - %GREEN% Implemented %ENDCOLOR%

STATUS: %GREEN% Implemented %ENDCOLOR%

To test the fix, query the resource bdii of the CREAM-CE:

<verbatim>
ldapsearch -h <CREAM CE host> -x -p 2170 -b "o=grid" | grep -i foreignkey | grep -i glueceuniqueid
</verbatim>

Entries such as:

<verbatim>
GlueForeignKey: GlueCEUniqueID=cream-35.pd.infn.it:8443/cream-lsf-creamtest1
</verbatim>

i.e.:

<verbatim>
GlueForeignKey: GlueCEUniqueID=<CREAM CE ID>
</verbatim>

should appear.

---+++ [[http://savannah.cern.ch/bugs/?86191][Bug #86191]] No info published by the lcg-info-dynamic-scheduler for one VOView - %GREEN% Implemented %ENDCOLOR%

STATUS: %GREEN% Implemented %ENDCOLOR%

To test the fix, issue the following ldapsearch query towards the resource bdii of the CREAM-CE:

<verbatim>
$ ldapsearch -h cream-35 -x -p 2170 -b "o=grid" | grep -i GlueCEStateWaitingJobs | grep -i 444444
</verbatim>

It should not find anything.

---+++ [[http://savannah.cern.ch/bugs/?87361][Bug #87361]] The attribute cream_concurrency_level should be configurable via yaim. - %GREEN% Implemented %ENDCOLOR%

STATUS: %GREEN% Implemented %ENDCOLOR%

To test the fix, set in =siteinfo.def= the variable =CREAM_CONCURRENCY_LEVEL= to a certain number (n). After configuration verify that in =/etc/glite-ce-cream/cream-config.xml= there is:

<verbatim>
cream_concurrency_level="n"
</verbatim>

---+++ [[http://savannah.cern.ch/bugs/?87492][Bug #87492]] CREAM doesn't handle correctly the jdl attribute "environment". - %RED% Not implemented %ENDCOLOR%

STATUS: %RED% Not implemented %ENDCOLOR%

To test the fix, submit the following JDL using =glite-ce-job-submit=:

<verbatim>
Environment = { "GANGA_LCG_VO='camont:/camont/Role=lcgadmin'", "LFC_HOST='lfc0448.gridpp.rl.ac.uk'", "GANGA_LOG_HANDLER='WMS'" };
executable="/bin/env";
stdoutput="out.out";
outputsandbox={"out.out"};
outputsandboxbasedesturi="gsiftp://localhost";
</verbatim>

When the job is done, retrieve the output and check that in =out.out= the variables =GANGA_LCG_VO=, =LFC_HOST= and =GANGA_LOG_HANDLER= have exactly the values defined in the JDL.
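The output can be retrieved with the CREAM CLI and checked with a grep; a hypothetical sketch (=glite-ce-job-output= writes the sandbox into a directory named after the jobid):

<verbatim>
glite-ce-job-output <jobid>
grep -E 'GANGA_LCG_VO|LFC_HOST|GANGA_LOG_HANDLER' <output directory>/out.out
</verbatim>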
---+ gLite-CLUSTER

---++ [[https://savannah.cern.ch/bugs/?69318][Bug #69318]] The cluster publisher needs to publish in GLUE 2 too %RED% Not implemented %ENDCOLOR%

   * Check if the resource BDII publishes GLUE 2 GLUE2ComputingService objectclasses. There should be one GLUE2ComputingService objectclass:
<verbatim>
ldapsearch -h <gLite-CLUSTER hostname> -x -p 2170 -b "o=glue" objectclass=GLUE2ComputingService
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2Manager objectclasses. There should be one GLUE2Manager objectclass:
<verbatim>
ldapsearch -h <gLite-CLUSTER hostname> -x -p 2170 -b "o=glue" objectclass=GLUE2Manager
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2Share objectclasses. There should be one GLUE2Share objectclass per VOView:
<verbatim>
ldapsearch -h <gLite-CLUSTER hostname> -x -p 2170 -b "o=glue" objectclass=GLUE2Share
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2ExecutionEnvironment objectclasses. There should be at least one GLUE2ExecutionEnvironment objectclass:
<verbatim>
ldapsearch -h <gLite-CLUSTER hostname> -x -p 2170 -b "o=glue" objectclass=GLUE2ExecutionEnvironment
</verbatim>
   * Check if the resource BDII publishes GLUE 2 GLUE2ComputingEndPoint objectclasses with GLUE2EndpointInterfaceName equal to org.glite.ce.ApplicationPublisher. There should be one such objectclass:
<verbatim>
ldapsearch -h <gLite-CLUSTER hostname> -x -p 2170 -b "o=glue" "(&(objectclass=GLUE2ComputingEndPoint)(GLUE2EndpointInterfaceName=org.glite.ce.ApplicationPublisher))"
</verbatim>

---++ [[https://savannah.cern.ch/bugs/?86512][Bug #86512]] YAIM Cluster Publisher incorrectly configures GlueClusterService and GlueForeignKey for CreamCEs - %RED% Not implemented %ENDCOLOR%

To test the fix, issue an ldapsearch such as:

<verbatim>
ldapsearch -h <gLite-CLUSTER> -x -p 2170 -b "o=grid" | grep GlueClusterService
</verbatim>

Then issue an ldapsearch such as:

<verbatim>
ldapsearch -h <gLite-CLUSTER> -x -p 2170 -b "o=grid" | grep GlueForeignKey | grep -v Site
</verbatim>

Verify that for each returned line, the format is:

<verbatim>
<hostname>:8443/cream-<lrms>-<queue>
</verbatim>

---++ [[https://savannah.cern.ch/bugs/?87691][Bug #87691]] Not possible to map different queues of the same CE to different clusters - %RED% Not implemented %ENDCOLOR%

To test this fix, configure a gLite-CLUSTER with at least two different queues mapped to different clusters (use the yaim variables =QUEUE_<queue>_CLUSTER_UniqueID=), e.g.:

<verbatim>
QUEUE_CREAMTEST1_CLUSTER_UniqueID=cl1id
QUEUE_CREAMTEST2_CLUSTER_UniqueID=cl2id
</verbatim>

Then query the resource bdii of the gLite-CLUSTER and verify that:

   * for the GlueCluster objectclass with =GlueClusterUniqueID= equal to =cl1id=, the attributes =GlueClusterService= and =GlueForeignKey= refer to CEIds with =creamtest1= as queue
   * for the GlueCluster objectclass with =GlueClusterUniqueID= equal to =cl2id=, the attributes =GlueClusterService= and =GlueForeignKey= refer to CEIds with =creamtest2= as queue

---++ [[https://savannah.cern.ch/bugs/?87799][Bug #87799]] Add yaim variables to configure the GLUE 2 WorkingArea attributes - %RED% Not implemented %ENDCOLOR%

Set all (or some) of the following yaim variables:

<verbatim>
WORKING_AREA_SHARED
WORKING_AREA_GUARANTEED
WORKING_AREA_TOTAL
WORKING_AREA_FREE
WORKING_AREA_LIFETIME
WORKING_AREA_MULTISLOT_TOTAL
WORKING_AREA_MULTISLOT_FREE
WORKING_AREA_MULTISLOT_LIFETIME
</verbatim>

and then configure via yaim. Then query the resource bdii of the gLite-CLUSTER and verify that the relevant attributes of the glue2 ComputingManager object are set (the same kind of query shown for the CREAM CE above applies).

---+ CREAM Torque module

---++ [[https://savannah.cern.ch/bugs/?17325][Bug #17325]] Default time limits not taken into account - %RED% Not implemented %ENDCOLOR%

To test the fix for this bug, consider a PBS installation where for a certain queue both default and max values are specified, e.g.:

<verbatim>
resources_max.cput = A
resources_max.walltime = B
resources_default.cput = C
resources_default.walltime = D
</verbatim>

Verify that the published value for GlueCEPolicyMaxCPUTime is C and that the published value for GlueCEPolicyMaxWallClockTime is D. A sketch of how to set such limits with qmgr is shown below.
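On Torque, the limits above can be inspected and set with qmgr on the batch server; a sketch with hypothetical queue name and values:

<verbatim>
# Show the current settings of the queue.
qmgr -c "print queue creamtest1"
# Set max and default limits (hypothetical values).
qmgr -c "set queue creamtest1 resources_max.cput = 48:00:00"
qmgr -c "set queue creamtest1 resources_default.cput = 24:00:00"
qmgr -c "set queue creamtest1 resources_max.walltime = 72:00:00"
qmgr -c "set queue creamtest1 resources_default.walltime = 36:00:00"
</verbatim>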
---++ [[https://savannah.cern.ch/bugs/?49653][Bug #49653]] lcg-info-dynamic-pbs should check pcput in addition to cput - %RED% Not implemented %ENDCOLOR%

To test the fix for this bug, consider a PBS installation where for a certain queue both cput and pcput max values are specified, e.g.:

<verbatim>
resources_max.cput = A
resources_max.pcput = B
</verbatim>

Verify that the published value for GlueCEPolicyMaxCPUTime is the minimum between A and B. Then consider a PBS installation where for a certain queue both cput and pcput max and default values are specified, e.g.:

<verbatim>
resources_max.cput = C
resources_default.cput = D
resources_max.pcput = E
resources_default.pcput = F
</verbatim>

Verify that the published value for GlueCEPolicyMaxCPUTime is the minimum between D and F.

---++ [[https://savannah.cern.ch/bugs/?76162][Bug #76162]] YAIM for APEL parsers to use the BATCH_LOG_DIR for the batch system log location - %RED% Not implemented %ENDCOLOR%

To test the fix for this bug, set the yaim variable =BATCH_ACCT_DIR= and configure via yaim. Check the file =/etc/glite-apel-pbs/parser-config-yaim.xml= and verify the section:

<verbatim>
<Logs searchSubDirs="yes" reprocess="no">
    <Dir>X</Dir>
</verbatim>

X should be the value specified for =BATCH_ACCT_DIR=. Then reconfigure without setting =BATCH_ACCT_DIR=. Check the file =/etc/glite-apel-pbs/parser-config-yaim.xml= and verify that the directory name is =${TORQUE_VAR_DIR}/server_priv/accounting=.

---++ [[https://savannah.cern.ch/bugs/?77106][Bug #77106]] PBS info provider doesn't allow - in a queue name - %RED% Not implemented %ENDCOLOR%

To test the fix, configure a CREAM CE in a PBS installation where at least one queue has a '-' in its name. Then log in as root on the CREAM CE and run:

<verbatim>
/sbin/runuser -s /bin/sh ldap -c "/var/lib/bdii/gip/plugin/glite-info-dynamic-ce"
</verbatim>

Check if the returned information is correct.

---+ CREAM LSF module

---++ [[https://savannah.cern.ch/bugs/?88720][Bug #88720]] Too many '9' in GlueCEPolicyMaxCPUTime for LSF - %RED% Not implemented %ENDCOLOR%

To test the fix, query the CREAM CE resource bdii in the following way:

<verbatim>
ldapsearch -h <CREAM CE node> -x -p 2170 -b "o=grid" | grep GlueCEPolicyMaxCPUTime | grep 9999999999
</verbatim>

This shouldn't return anything.

---++ [[https://savannah.cern.ch/bugs/?89767][Bug #89767]] The LSF dynamic infoprovider shouldn't publish GlueCEStateFreeCPUs and GlueCEStateFreeJobSlots - %RED% Not implemented %ENDCOLOR%

To test the fix, log in as root on the CREAM CE and run:

<verbatim>
/sbin/runuser -s /bin/sh ldap -c "/var/lib/bdii/gip/plugin/glite-info-dynamic-ce"
</verbatim>

Among the returned information, there shouldn't be GlueCEStateFreeCPUs and GlueCEStateFreeJobSlots.

---++ [[https://savannah.cern.ch/bugs/?89794][Bug #89794]] LSF info provider doesn't allow - in a queue name - %RED% Not implemented %ENDCOLOR%

To test the fix, configure a CREAM CE in an LSF installation where at least one queue has a '-' in its name. Then log in as root on the CREAM CE and run:

<verbatim>
/sbin/runuser -s /bin/sh ldap -c "/var/lib/bdii/gip/plugin/glite-info-dynamic-ce"
</verbatim>

Check if the returned information is correct.

---++ [[https://savannah.cern.ch/bugs/index.php?90113][Bug #90113]] missing yaim check for batch system - %RED% Not implemented %ENDCOLOR%

To test the fix, configure a CREAM CE without having also installed LSF. The yaim configuration should fail, reporting that there are problems with the LSF installation.

-- Main.MassimoSgaravatto - 2011-11-07