Difference: ReleaseNotes1841 (1 vs. 32)

Revision 32 (2011-02-10) - MassimoSgaravatto

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Release notes for Patch #1841

Line: 16 to 16
 

    Changed:
    <
    <
• LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take on the order of ten seconds (note: only about one second with the subsequent patches). Using the filter on the VO name, as in the aforementioned use case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter in the WorkloadManager section of the configuration file, as shown in the following example:
  • >
    >
• LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take on the order of ten seconds (note: only about one second with the subsequent patches). Using the filter on the VO name, as in the aforementioned use case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter in the WorkloadManager section of the configuration file, as shown in the following example:
  •  
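The example line itself is not visible in this diff excerpt; based on the IsmIiLDAPCEFilterExt value quoted in the older revisions further down, the relevant piece of the WorkloadManager section would look roughly like the following (a sketch assuming the usual ClassAd-style glite_wms.conf layout; the VO name cms is just the example used there):

    WorkloadManager = [
        // pre-filter the BDII query so that only the CEs of the given VO enter the ISM
        IsmIiLDAPCEFilterExt = "(|(GlueCEAccessControlBaseRule=VO:cms)(GlueCEAccessControlBaseRule=VOMS:/cms/*))";
    ];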

Revision 31 (2011-02-10) - MarcoCecchi

    Line: 1 to 1
     
    META TOPICPARENT name="WebHome"

    Release notes for Patch #1841

    Line: 16 to 16
     

      Changed:
      <
      <
    • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter in the WorkloadManager section of the configuration file, as shown in the following example:
    • >
      >
• LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take on the order of ten seconds (note: only about one second with the subsequent patches). Using the filter on the VO name, as in the aforementioned use case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter in the WorkloadManager section of the configuration file, as shown in the following example:
    •  

Revision 30 (2009-02-12) - MarcoCecchi

      Line: 1 to 1
       
      META TOPICPARENT name="WebHome"

      Release notes for Patch #1841

      Line: 16 to 16
       

        Changed:
        <
        <
      • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter within the WorkloadManager section of the configuration file, as shown in the following example:
      • >
        >
      • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter in the WorkloadManager section of the configuration file, as shown in the following example:
      •  

Revision 29 (2009-01-22) - MarcoCecchi

        Line: 1 to 1
         
        META TOPICPARENT name="WebHome"

        Release notes for Patch #1841

        Line: 13 to 13
         
        • "JobDir" is a mailbox-based persistent communication mechanism, for the moment adopted between the WM proxy and the WM. In the present release it is enabled by default. A tool is available for converting from the former mechanism based on filelist (conversion in the opposite way is also supported). At the moment this not done automatically. Of course, another option to handle this transition will consist in putting the WMS in drain and wait for the filelist to be empty.
        Deleted:
        <
        <
        • Modified design to allow for DNS-based load balancing mechanism

         

Revision 28 (2008-10-29) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 63 to 63
           
• Bug #42590: the WM terminates unexpectedly handling a cancel request.
        • Added:
          >
          >
        • Bug #43368: Long Nordugrid ARC Jobs go into the HELD state and get resubmitted
        •  

Revision 27 (2008-10-24) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 54 to 54
           
        • Bug #35244: Can't submit jobs using voms proxies with roles due to a mapping problem
        • Changed:
          <
          <
          >
          >
          *fixed by patch 2055*
           
        • Bug #40951: Cleared event is not logged for nodes
        • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2
Revision 26 (2008-10-23) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 55 to 55
           
        • Bug #35244: Can't submit jobs using voms proxies with roles due to a mapping problem
        • Deleted:
          <
          <
        • Bug #39641: User proxy mixup for job submissions too close in time
        •  
        • Bug #40951: Cleared event is not logged for nodes
        • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2
Revision 25 (2008-10-23) - FabioCapannini

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 61 to 61
           
        • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2
        • Added:
          >
          >
        • Bug #42587: Error processing DAG dependencies while generating the ISB for final node
• Bug #42590: the WM terminates unexpectedly handling a cancel request.
        •  

Revision 24 (2008-09-19) - FrancescoGiacomini

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 32 to 32
           
          • Purchasing from CEMon has been temporarily disabled
          Changed:
          <
          <
          • Purchasing from R-GMA has been dismissed
          >
          >
          • Purchasing from R-GMA has been removed
           
• Added support for MPI jobs according to the latest specifications from the MPI working group. The value "MPICH" for the JDL attribute JobType is deprecated from now on; just set it to "Normal" and follow the new guidelines instead
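As a rough illustration of the new scheme (only JobType = "Normal" is prescribed by the note above; the remaining attribute values and the wrapper script name are hypothetical), a JDL for an MPI job might look like:

    JobType      = "Normal";
    CpuNumber    = 8;                        // hypothetical number of requested CPUs
    Executable   = "mpi-start-wrapper.sh";   // hypothetical wrapper following the MPI working group recipe
    Arguments    = "my_mpi_app";
    InputSandbox = {"mpi-start-wrapper.sh", "my_mpi_app"};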

Revision 23 (2008-09-19) - MassimoSgaravatto

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 38 to 38
           
• Support for interactive jobs has been dismissed. However, the functionality is not compromised because it can be achieved using a tool called i2glogin (formerly known as glogin). This different approach is actually more flexible, the user being totally in charge, and it follows the trend set by the new handling for MPI jobs.
          Changed:
          <
          <
          • Known issues:
            • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. This is not about a memory leak, but simply the effect of a well-known problem with the allocator which comes with the glibc (the so called ptmalloc2). See tcmalloc for a more detailed explanation. This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
              • install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case),
              • enable the malloc redirection for the WM by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
          >
          >
          • Known issues:
            • Performance problems in the newly introduced ICE component when it has to deal with several CEs
            • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. This is not about a memory leak, but simply the effect of a well-known problem with the allocator which comes with the glibc (the so called ptmalloc2). See tcmalloc for a more detailed explanation. This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
              • install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case),
              • enable the malloc redirection for the WM by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
           #use_google_perf_tools=1
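Put together, the recipe could be scripted roughly as follows (the rpm file names and the location of the glite-wms-wm script are assumptions; only the package names and the commented-out line come from the text above):

    # install the Google perftools packages (pick the latest versions available)
    rpm -ivh google-perftools-*.rpm google-perftools-devel-*.rpm

    # enable the malloc redirection by uncommenting the line in the glite-wms-wm script
    sed -i 's/^#use_google_perf_tools=1/use_google_perf_tools=1/' /opt/glite/etc/init.d/glite-wms-wm

    # restart the workload manager so that the new allocator is picked up
    /opt/glite/etc/init.d/glite-wms-wm restart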
          Changed:
          <
          <
            • Bug #35244: Can't submit jobs using voms proxies with roles due to a mapping problem
            • Bug #39641: User proxy mixup for job submissions too close in time
            • Bug #40951: Cleared event is not logged for nodes
            • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2
          >
          >
        • Bug #35244: Can't submit jobs using voms proxies with roles due to a mapping problem
        • Bug #39641: User proxy mixup for job submissions too close in time
        • Bug #40951: Cleared event is not logged for nodes
        • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2
-- AlessioGianelle - 27 May 2008

Revision 22 (2008-09-09) - AlessioGianelle

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 44 to 44
           
              • enable the malloc redirection for the WM by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
                #use_google_perf_tools=1
                
          Added:
          >
          >
            • Bug #35244: Can't submit jobs using voms proxies with roles due to a mapping problem
           
            • Bug #39641: User proxy mixup for job submissions too close in time
            • Bug #40951: Cleared event is not logged for nodes
            • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2

Revision 21 (2008-09-09) - AlessioGianelle

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 41 to 41
           
          • Known issues:
            • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. This is not about a memory leak, but simply the effect of a well-known problem with the allocator which comes with the glibc (the so called ptmalloc2). See tcmalloc for a more detailed explanation. This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
              • install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case),
          Changed:
          <
          <
              • enable the malloc redirection for the WM by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
          >
          >
              • enable the malloc redirection for the WM by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
           #use_google_perf_tools=1
          Deleted:
          <
          <
           
            • Bug #39641: User proxy mixup for job submissions too close in time
            • Bug #40951: Cleared event is not logged for nodes
            • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2

Revision 20 (2008-09-08) - AlessioGianelle

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 47 to 47
           #use_google_perf_tools=1
          Added:
          >
          >
            • Bug #39641: User proxy mixup for job submissions too close in time
           
            • Bug #40951: Cleared event is not logged for nodes
            • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2

Revision 19 (2008-09-03) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 28 to 28
           
          Added:
          >
          >
          • LDAP queries to the BDII can now be done asynchronously (attribute IsmIiLDAPSearchAsync = true in the WM section). This mode is typically faster than the usual synchronous one.
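A minimal sketch of the corresponding setting, assuming the same ClassAd-style WorkloadManager section used for the other ISM parameters:

    WorkloadManager = [
        // fetch information from the BDII with asynchronous LDAP queries
        IsmIiLDAPSearchAsync = true;
    ];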
           
          • Purchasing from CEMon has been temporarily disabled

          • Purchasing from R-GMA has been dismissed

Revision 18 (2008-09-03) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 13 to 13
           
          • "JobDir" is a mailbox-based persistent communication mechanism, for the moment adopted between the WM proxy and the WM. In the present release it is enabled by default. A tool is available for converting from the former mechanism based on filelist (conversion in the opposite way is also supported). At the moment this not done automatically. Of course, another option to handle this transition will consist in putting the WMS in drain and wait for the filelist to be empty.
          Changed:
          <
          <
          • Modified design to allow for DNS-based load balancing mechanisms
          >
          >
          • Modified design to allow for DNS-based load balancing mechanism

          • The output sandbox can be limited: How the OSB limit works
           
          • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter within the WorkloadManager section of the configuration file, as shown in the following example:

Revision 17 (2008-09-03) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 6 to 6
           
• Enabled submission to CREAM CE. A newly introduced component in the WMS internal architecture, called ICE, implements the job submission service to CREAM. Its functionality can be compared to what the three components JC, LM and CondorG do for the submission to the LCG CE
          Changed:
          <
          <
          >
          >
Important: 1) if the recovery is not enabled, simply starting and stopping the glite-wms-workload_manager process (and of course restarting after whatever kind of interruption) might cause duplicate requests; 2) the recovery only works with "JobDir" (see below)
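The recovery switch referred to here is the EnableRecovery parameter mentioned in the older revisions below; a minimal sketch, assuming it sits in the WorkloadManager section of the configuration file:

    WorkloadManager = [
        // replay pending requests after a restart; per the note above, this only works with "JobDir"
        EnableRecovery = true;
    ];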
          Line: 15 to 15
           
          • Modified design to allow for DNS-based load balancing mechanisms
          Changed:
          <
          <
          • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter within the WorkloadManager section of the configuration file, as shown in the following example:
            • IsmIiLDAPCEFilterExt="(|(GlueCEAccessControlBaseRule=VO:cms)(GlueCEAccessControlBaseRule=VOMS:/cms/*))"
          >
          >
          • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter within the WorkloadManager section of the configuration file, as shown in the following example:
            • IsmIiLDAPCEFilterExt="(|(GlueCEAccessControlBaseRule=VO:cms)(GlueCEAccessControlBaseRule=VOMS:/cms/*))"
              
           
          • Purchasing from CEMon has been temporarily disabled
          Line: 30 to 39
           
            • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. This is not about a memory leak, but simply the effect of a well-known problem with the allocator which comes with the glibc (the so called ptmalloc2). See tcmalloc for a more detailed explanation. This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
              • install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case),
              • enable the malloc redirection for the WM by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
          Changed:
          <
          <
          #use_google_perf_tools=1
          .
          >
          >
          #use_google_perf_tools=1
          
           
            • Bug #40951: Cleared event is not logged for nodes
          Added:
          >
          >
            • Bug #40982: When a collection is aborted the "Abort" event should be logged for the sub-nodes as well /2
            -- AlessioGianelle - 27 May 2008

Revision 16 (2008-09-02) - AlessioGianelle

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 31 to 31
            * install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case), * enable the malloc redirection for the WM by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
          #use_google_perf_tools=1
          .
          Changed:
          <
          <
          >
          >
            • Bug #40951: Cleared event is not logged for nodes
            -- AlessioGianelle - 27 May 2008

Revision 15 (2008-08-29) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Changed:
          <
          <
          Release 08_98 of the WMS for gLite3.1/SL4. Changes with respect to the current production version (patch #1726*):
          >
          >
          Release 08_98 of the WMS for gLite3.1/SL4. Changes with respect to the current production version (patch #1726):
           
• Enabled submission to CREAM CE. A newly introduced component in the WMS internal architecture, called ICE, implements the job submission service to CREAM. Its functionality can be compared to what the three components JC, LM and CondorG do for the submission to the LCG CE
          Line: 11 to 11
           1) if the recovery is not enabled, simply starting and stopping the glite-wms-workload_manager process (and of course restarting after whatever kind of interruption) might cause duplicating requests. 2) the recovery only works with "JobDir" (see below)
          Changed:
          <
          <
          • "JobDir", the newest mailbox-based persistent communication mechanism between the WM proxy and the WM is with the present release enabled by default. A tool is available for converting from the former mechanism based on filelist (conversion in the opposite way is also supported). At the moment this not done automatically. Of course, another option to handle this transition will consist in putting the WMS in drain and wait for the filelist to be empty.
          >
          >
          • "JobDir" is a mailbox-based persistent communication mechanism, for the moment adopted between the WM proxy and the WM. In the present release it is enabled by default. A tool is available for converting from the former mechanism based on filelist (conversion in the opposite way is also supported). At the moment this not done automatically. Of course, another option to handle this transition will consist in putting the WMS in drain and wait for the filelist to be empty.
           
          • Modified design to allow for DNS-based load balancing mechanisms
          Line: 20 to 20
           
          • Purchasing from CEMon has been temporarily disabled
          Changed:
          <
          <
          • Purchasing from RGMA has been dismissed
          >
          >
          • Purchasing from R-GMA has been dismissed
           
• Added support for MPI jobs according to the latest specifications from the MPI working group. The value "MPICH" for the JDL attribute JobType is deprecated from now on; just set it to "Normal" and follow the new guidelines instead
          Changed:
          <
          <
• Support for interactive jobs has been dismissed. However, the functionality is not compromised at all because it can be achieved using a tool called i2glogin (formerly known as glogin). This different approach is actually more flexible, the user being totally in charge, and it follows the trend set by the aforementioned MPI jobs. Since the support has been dismissed (not deprecated), JobType = "Interactive" in the JDL would result in an error

          • Build system changes
            • the component org.glite.wms.partitioner has been removed (*)
            • the two components org.glite.wms.interactive and glite-wms-thirdparty-bypass have been deprecated (according to the removed support for interactive jobs)
          >
          >
• Support for interactive jobs has been dismissed. However, the functionality is not compromised because it can be achieved using a tool called i2glogin (formerly known as glogin). This different approach is actually more flexible, the user being totally in charge, and it follows the trend set by the new handling for MPI jobs.
           
          • Known issues:
            • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. This is not about a memory leak, but simply the effect of a well-known problem with the allocator which comes with the glibc (the so called ptmalloc2). See tcmalloc for a more detailed explanation. This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license: * install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case),
          Changed:
          <
          <
          * enable the performance tools by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
          >
          >
          * enable the malloc redirection for the WM by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
           
          #use_google_perf_tools=1
          .

Revision 14 (2008-08-28) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Changed:
          <
          <
          Release 08_98 of the WMS for gLite3.1/SL4. Changes with respect to the current production version (patch #1726?):
          >
          >
          Release 08_98 of the WMS for gLite3.1/SL4. Changes with respect to the current production version (patch #1726*):
           
• Enabled submission to CREAM CE. A newly introduced component in the WMS internal architecture, called ICE, implements the job submission service to CREAM. Its functionality can be compared to what the three components JC, LM and CondorG do for the submission to the LCG CE

          Important:
          Changed:
          <
          <
          1) if the recovery is not enabled, simply starting and stopping the glite-wms-workload_manager process (and of course restarting after whatever kind of crash) might cause duplicating requests.
          >
          >
          1) if the recovery is not enabled, simply starting and stopping the glite-wms-workload_manager process (and of course restarting after whatever kind of interruption) might cause duplicating requests.
           2) the recovery only works with "JobDir" (see below)
          Changed:
          <
          <
          • "JobDir", the newest mailbox-based persistent communication mechanism between the WM proxy and the WM is now enabled by default. A tool is available for converting from the former mechanism based on filelist (conversion in the opposite way is also supported). At the moment this not done automatically. Of course, one can always think of handling this transition by putting the WMS in drain and wait for the filelist to be empty.
          >
          >
          • "JobDir", the newest mailbox-based persistent communication mechanism between the WM proxy and the WM is with the present release enabled by default. A tool is available for converting from the former mechanism based on filelist (conversion in the opposite way is also supported). At the moment this not done automatically. Of course, another option to handle this transition will consist in putting the WMS in drain and wait for the filelist to be empty.
           
          Changed:
          <
          <
          • Modified design to allow DNS load balancing
          >
          >
          • Modified design to allow for DNS-based load balancing mechanisms
           
          • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter within the WorkloadManager section of the configuration file, as shown in the following example:
            • IsmIiLDAPCEFilterExt="(|(GlueCEAccessControlBaseRule=VO:cms)(GlueCEAccessControlBaseRule=VOMS:/cms/*))"
          Line: 22 to 22
           
          • Purchasing from RGMA has been dismissed
          Changed:
          <
          <
          • Deprecated??? components:
            • xxxcheckpointing (glite-wms-checkpointing) (already in patch 1726)
            • partitioner (glite-wms-partitioner)
            • interactive (glite-wms-interactive and glite-wms-thirdparty-bypass), suggest to use i2glogin with appropriate links
          >
          >
• Added support for MPI jobs according to the latest specifications from the MPI working group. The value "MPICH" for the JDL attribute JobType is deprecated from now on; just set it to "Normal" and follow the new guidelines instead

• Support for interactive jobs has been dismissed. However, the functionality is not compromised at all because it can be achieved using a tool called i2glogin (formerly known as glogin). This different approach is actually more flexible, the user being totally in charge, and it follows the trend set by the aforementioned MPI jobs. Since the support has been dismissed (not deprecated), JobType = "Interactive" in the JDL would result in an error

          • Build system changes
            • the component org.glite.wms.partitioner has been removed (*)
            • the two components org.glite.wms.interactive and glite-wms-thirdparty-bypass have been deprecated (according to the removed support for interactive jobs)
           
          • Known issues:
          Changed:
          <
          <
          • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. This is not about a memory leak, but simply the effect of a well-known problem with the allocator which comes with the glibc (the so called ptmalloc2). See tcmalloc for a more detailed explanation.
          This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
          >
          >
            • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. This is not about a memory leak, but simply the effect of a well-known problem with the allocator which comes with the glibc (the so called ptmalloc2). See tcmalloc for a more detailed explanation. This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
            * install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case),
          Changed:
          <
          <
          * enable the performance tools by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
          #use_google_perf_tools=1
          .
          >
          >
          * enable the performance tools by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
          #use_google_perf_tools=1
          .
           

          -- AlessioGianelle - 27 May 2008

Revision 13 (2008-08-28) - SalvatoreMonforte

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 15 to 15
           
          • Modified design to allow DNS load balancing
          Changed:
          <
          <
          • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set using an environment variable, as shown in the following example:
          >
          >
          • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set by assigning the relevant parameter within the WorkloadManager section of the configuration file, as shown in the following example:
            • IsmIiLDAPCEFilterExt="(|(GlueCEAccessControlBaseRule=VO:cms)(GlueCEAccessControlBaseRule=VOMS:/cms/*))"
           
          • Purchasing from CEMon has been temporarily disabled
          Line: 32 to 32
           This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license: * install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case),
          Changed:
          <
          <
          * enable redirection to tcmalloc with: export LD_PRELOAD="/usr/lib/libtcmalloc.so" in the glite-wms-wm script. Even if not specifically recommended (actually it may not work with everything) run-time malloc redirection has proven to get along well with the glite workload manager.
          >
          >
          * enable the performance tools by editing the glite-wms-wm script. It is just a matter of removing the comment in the following line:
          #use_google_perf_tools=1
          .
           

-- AlessioGianelle - 27 May 2008

Revision 12 (2008-08-28) - AlessandroCavalli

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 22 to 22
           
          • Purchasing from RGMA has been dismissed
          Deleted:
          <
          <
          * UpdateRate*
           
          • Deprecated??? components:
            • xxxcheckpointing (glite-wms-checkpointing) (already in patch 1726)
            • partitioner (glite-wms-partitioner)

Revision 11 (2008-08-27) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Changed:
          <
          <
          New release (08_98) of the WMS for gLite3.1/SL4. Changes with respect to the current production version:
          >
          >
          Release 08_98 of the WMS for gLite3.1/SL4. Changes with respect to the current production version (patch #1726?):
           
          Changed:
          <
          <
          • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. Despite what one might think at first sight, this is not about a memory leak, but simply the effect of the so called per-thread arenas. The WM in fact, keeps in memory requests and a very large data structure containing up-to-date resource information, the so called information supermarket, which is accessed by several threads. With the linux standard malloc implementation (ptmalloc2) all this data cause the memory to grow large. See tcmalloc for a detailed explanation. Especially wherever RAM is less than or equal to 4Gb, it is highly suggested to workaround this problem using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
            • after installing the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (whichever version should be fine),
            • you can use tcmalloc this way: export LD_PRELOAD="/usr/lib/libtcmalloc.so" in the glite-wms-wm script. Even if not specifically recommended (actually it may not work with everything) run-time malloc redirection has proven to get well along with the glite workload manager, so that it can be used without any problem in such circumstances.
          >
          >
• Enabled submission to CREAM CE. A newly introduced component in the WMS internal architecture, called ICE, implements the job submission service to CREAM. Its functionality can be compared to what the three components JC, LM and CondorG do for the submission to the LCG CE
           
          Changed:
          <
          <
          • LDAP queries used to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This is very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making time for a job can take up to ten seconds. Using the filter on the VO name, as for the mentioned use-case, significantly reduces the times for MM. The filter expression is specified using an environment variable, as shown in this example:
          >
          >
          Important: 1) if the recovery is not enabled, simply starting and stopping the glite-wms-workload_manager process (and of course restarting after whatever kind of crash) might cause duplicating requests. 2) the recovery only works with "JobDir" (see below)
           
          Changed:
          <
          <
          • Temporarily removed the purchasing from CEMon.
          >
          >
          • "JobDir", the newest mailbox-based persistent communication mechanism between the WM proxy and the WM is now enabled by default. A tool is available for converting from the former mechanism based on filelist (conversion in the opposite way is also supported). At the moment this not done automatically. Of course, one can always think of handling this transition by putting the WMS in drain and wait for the filelist to be empty.
           
          Changed:
          <
          <
          • Added the possibility to submit to a CREAM-CE through a new component called ICE
          >
          >
          • Modified design to allow DNS load balancing
           
          Changed:
          <
          <
          • DNS load balancing
          >
          >
          • LDAP queries to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This can be very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making for a job can take a time of the order of ten seconds. Using the filter on the VO name, as for the aforementioned use-case, significantly reduces the MM time. The filtering expression has to be set using an environment variable, as shown in the following example:
           
          Changed:
          <
          <
          • JobDir as input mechanism to WM in place of FileList. Tool available for conversion. Should it be done automatically? Or would it be better to put the WMS in drain during the transition?
          >
          >
          • Purchasing from CEMon has been temporarily disabled
           
          Changed:
          <
          <
          • Add recovery procedure for the WM (EnableRecovery = true). Notice: if the recovery is not enabled, starting and stopping the workload_manager might cause duplicating requests.
          >
          >
          • Purchasing from RGMA has been dismissed
           
          Changed:
          <
          <
          • No more support for RGMA as information system.
          >
          >
          * UpdateRate*
           
          Changed:
          <
          <
          • Deprecated components:
            • checkpointing (glite-wms-checkpointing)
          >
          >
          • Deprecated??? components:
            • xxxcheckpointing (glite-wms-checkpointing) (already in patch 1726)
           
            • partitioner (glite-wms-partitioner)
            • interactive (glite-wms-interactive and glite-wms-thirdparty-bypass), suggest to use i2glogin with appropriate links
          Changed:
          <
          <
            • rgma (info system)
          >
          >
          • Known issues:
          • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. This is not about a memory leak, but simply the effect of a well-known problem with the allocator which comes with the glibc (the so called ptmalloc2). See tcmalloc for a more detailed explanation.
          This problem can be avoided using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. It is highly suggested doing so wherever RAM is less than or equal to 4Gb. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license: * install the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (just pick up the latest version, older versions should work anyway, just in case), * enable redirection to tcmalloc with: export LD_PRELOAD="/usr/lib/libtcmalloc.so" in the glite-wms-wm script. Even if not specifically recommended (actually it may not work with everything) run-time malloc redirection has proven to get along well with the glite workload manager.
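A minimal sketch of how the redirection could be wired into the glite-wms-wm start script (the script layout and daemon path are assumptions; the LD_PRELOAD line and library path come from the recipe above):

    # excerpt of a hypothetical glite-wms-wm start script
    if [ -f /usr/lib/libtcmalloc.so ]; then
        # redirect malloc to TCMalloc before launching the workload manager
        export LD_PRELOAD="/usr/lib/libtcmalloc.so"
    fi
    exec /opt/glite/bin/glite-wms-workload_manager "$@"    # assumed daemon path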
            -- AlessioGianelle - 27 May 2008

Revision 10 (2008-07-29) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 8 to 8
           
            • after installing the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (whichever version should be fine),
            • you can use tcmalloc this way: export LD_PRELOAD="/usr/lib/libtcmalloc.so" in the glite-wms-wm script. Even if not specifically recommended (actually it may not work with everything) run-time malloc redirection has proven to get well along with the glite workload manager, so that it can be used without any problem in such circumstances.
          Changed:
          <
          <
          • No more support for RGMA as information system.
          >
          >
          • LDAP queries used to fetch information in the Information Supermarket from the BDII can now be pre-filtered. This is very helpful whenever a WMS instance is dedicated to only one VO. Typically, using a production BDII, the ISM reaches a size of 6-7000 entries, with the consequence that the match-making time for a job can take up to ten seconds. Using the filter on the VO name, as for the mentioned use-case, significantly reduces the times for MM. The filter expression is specified using an environment variable, as shown in this example:
           
          • Temporarily removed the purchasing from CEMon.
          Line: 18 to 19
           
          • JobDir as input mechanism to WM in place of FileList. Tool available for conversion. Should it be done automatically? Or would it be better to put the WMS in drain during the transition?
          Added:
          >
          >
          • Add recovery procedure for the WM (EnableRecovery = true). Notice: if the recovery is not enabled, starting and stopping the workload_manager might cause duplicating requests.

          • No more support for RGMA as information system.
           
          • Deprecated components:
            • checkpointing (glite-wms-checkpointing)
            • partitioner (glite-wms-partitioner)
            • interactive (glite-wms-interactive and glite-wms-thirdparty-bypass), suggest to use i2glogin with appropriate links
            • rgma (info system)
          Deleted:
          <
          <
          • Add recovery procedure for the WM (EnableRecovery = true). Notice: if the recovery is not enabled, starting and stopping the workload_manager might cause duplicating requests.
           -- AlessioGianelle - 27 May 2008

Revision 9 (2008-07-16) - MarcoCecchi

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 16 to 16
           
          • DNS load balancing
          Changed:
          <
          <
          • JobDir as input mechanism to WM in place of FileList. Tool available for conversion. Should it be done automatically?
          >
          >
          • JobDir as input mechanism to WM in place of FileList. Tool available for conversion. Should it be done automatically? Or would it be better to put the WMS in drain during the transition?
           
          • Deprecated components:
            • checkpointing (glite-wms-checkpointing)
          Line: 24 to 24
           
            • interactive (glite-wms-interactive and glite-wms-thirdparty-bypass), suggest to use i2glogin with appropriate links
            • rgma (info system)
          Changed:
          <
          <
          >
          >
          • Add recovery procedure for the WM (EnableRecovery = true). Notice: if the recovery is not enabled, starting and stopping the workload_manager might cause duplicating requests.
-- AlessioGianelle - 27 May 2008

Revision 8 (2008-07-03) - FrancescoGiacomini

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 21 to 21
           
          • Deprecated components:
            • checkpointing (glite-wms-checkpointing)
            • partitioner (glite-wms-partitioner)
          Changed:
          <
          <
            • interactive (glite-wms-interactive and glite-wms-thirdparty-bypass)
          >
          >
            • interactive (glite-wms-interactive and glite-wms-thirdparty-bypass), suggest to use i2glogin with appropriate links
           
            • rgma (info system)

Revision 7 (2008-06-18) - AlessioGianelle

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 18 to 18
           
          • JobDir as input mechanism to WM in place of FileList. Tool available for conversion. Should it be done automatically?
          Added:
          >
          >
          • Deprecated components:
            • checkpointing (glite-wms-checkpointing)
            • partitioner (glite-wms-partitioner)
            • interactive (glite-wms-interactive and glite-wms-thirdparty-bypass)
            • rgma (info system)

-- AlessioGianelle - 27 May 2008

Revision 6 (2008-06-04) - FrancescoGiacomini

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 10 to 10
           
          • No more support for RGMA as information system.
          Added:
          >
          >
          • Temporarily removed the purchasing from CEMon.
           
          • Added the possibility to submit to a CREAM-CE through a new component called ICE

          • DNS load balancing

Revision 5 (2008-05-30) - FrancescoGiacomini

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 12 to 12
           
          • Added the possibility to submit to a CREAM-CE through a new component called ICE
          Added:
          >
          >
          • DNS load balancing

          • JobDir as input mechanism to WM in place of FileList. Tool available for conversion. Should it be done automatically?
           -- AlessioGianelle - 27 May 2008

Revision 4 (2008-05-27) - AlessioGianelle

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Added:
          >
          >
          New release (08_98) of the WMS for gLite3.1/SL4. Changes with respect to the current production version:
           
          • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. Despite what one might think at first sight, this is not about a memory leak, but simply the effect of the so called per-thread arenas. The WM in fact, keeps in memory requests and a very large data structure containing up-to-date resource information, the so called information supermarket, which is accessed by several threads. With the linux standard malloc implementation (ptmalloc2) all this data cause the memory to grow large. See tcmalloc for a detailed explanation. Especially wherever RAM is less than or equal to 4Gb, it is highly suggested to workaround this problem using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
            • after installing the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (whichever version should be fine),
            • you can use tcmalloc this way: export LD_PRELOAD="/usr/lib/libtcmalloc.so" in the glite-wms-wm script. Even if not specifically recommended (actually it may not work with everything) run-time malloc redirection has proven to get well along with the glite workload manager, so that it can be used without any problem in such circumstances.
          Changed:
          <
          <
          • no more support for RGMA as information system
          >
          >
          • No more support for RGMA as information system.

          • Added the possibility to submit to a CREAM-CE through a new component called ICE
            -- AlessioGianelle - 27 May 2008

Revision 3 (2008-05-27) - AlessioGianelle

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

Revision 2 (2008-05-27) - FrancescoGiacomini

          Line: 1 to 1
           
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          Line: 7 to 7
           
            • you can use tcmalloc this way: export LD_PRELOAD="/usr/lib/libtcmalloc.so" in the glite-wms-wm script. Even if not specifically recommended (actually it may not work with everything) run-time malloc redirection has proven to get well along with the glite workload manager, so that it can be used without any problem in such circumstances.
          Added:
          >
          >
          • no more support for RGMA as information system
           -- AlessioGianelle - 27 May 2008

Revision 1 (2008-05-27) - AlessioGianelle

          Line: 1 to 1
          Added:
          >
          >
          META TOPICPARENT name="WebHome"

          Release notes for Patch #1841

          • Very often, especially under high loads, the virtual memory occupation for the glite-wms-workload_manager process may reach very high values, such as one Gigabyte and more. Despite what one might think at first sight, this is not about a memory leak, but simply the effect of the so called per-thread arenas. The WM in fact, keeps in memory requests and a very large data structure containing up-to-date resource information, the so called information supermarket, which is accessed by several threads. With the linux standard malloc implementation (ptmalloc2) all this data cause the memory to grow large. See tcmalloc for a detailed explanation. Especially wherever RAM is less than or equal to 4Gb, it is highly suggested to workaround this problem using run-time redirection to whatever lock-free, optimized alternative allocator, to avoid excessive swap activity. Here is our recipe which makes use of the TCmalloc, such an alternate allocator distributed by Google under BSD license:
            • after installing the two rpms, google-perftools-devel-???.rpm and google-perftools-???.rpm (whichever version should be fine),
            • you can use tcmalloc this way: export LD_PRELOAD="/usr/lib/libtcmalloc.so" in the glite-wms-wm script. Even if not specifically recommended (actually it may not work with everything) run-time malloc redirection has proven to get well along with the glite workload manager, so that it can be used without any problem in such circumstances.

          -- AlessioGianelle - 27 May 2008

           