Difference: Applications (1 vs. 24)

Revision 24 - 2012-10-26 - DanieleCesini

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 21 to 21
 The NEMO 3.4 oofs2 multiscale simulation package has been ported to the Grid environment in order to run parallel calculations. The NEMO code has significant CPU and memory demands (from our calculations we estimated 1 GB/core for an 8-core simulation). The NEMO code can be used for production as well as for testing purposes by modifying the model (which means recompiling the source code) or by varying the input parameters. The user community was interested in exploiting the Grid for the second use case (model testing and tuning), which implies that the package must be executed several times in a parameter-sweeping approach. An additional benefit of this work is that scientists working in the field of oceanography who are interested in running the code can share the application and its results using the Grid data management and sharing facilities.
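The parameter-sweeping approach described above can be sketched as follows. This is an illustrative outline only, not the project's actual submission scripts; the parameter names and values are hypothetical.

```python
# Illustrative sketch of a parameter-sweep campaign: one independent Grid
# job per parameter combination. Parameter names/values are hypothetical,
# not taken from the actual NEMO configuration.
from itertools import product

def sweep(grid):
    """Yield one parameter dictionary per combination in the sweep."""
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

if __name__ == "__main__":
    grid = {"timestep_s": [300, 600], "viscosity": [1.0e-4, 2.0e-4]}
    for params in sweep(grid):
        # each dictionary would become the input of one independent job
        print(params)
```

Each combination corresponds to one run of the unmodified executable, which is what makes the model-tuning use case suitable for the Grid: no recompilation is needed between jobs.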
Changed:
<
<
Namd - Alessandro Venturini (CNR-BO)
>
>
Namd - Alessandro Venturini (ISOF-CNR-BO)
 NAMD is a powerful parallel Molecular Mechanics (MM)/Molecular Dynamics (MD) code particularly suited to the study of large biomolecules. However, it is also compatible with different force fields, making it possible to simulate systems with quite different characteristics. NAMD can be used efficiently on large multi-core platforms and clusters.

The NAMD use case was a simulation of a 36,000-atom lipid system provided by a CNR-ISOF [21] group located in Bologna. To have a real-life use case, the simulation had to be run for at least 25 nanoseconds of simulated time, resulting in a wall-clock time of about 40 days if run on an 8-core machine.

Line: 33 to 33
 
  • the length of the simulation implied many computation checkpoints, given the time limits on the batch-system queues of the sites matching the requirements. We decided to split the simulation into 50 steps, each 500 ps of simulated time long, allowing each step to complete without reaching the queue time limits.
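The checkpointing scheme above amounts to simple arithmetic: 25 ns of simulated time divided into 500 ps steps gives 50 restart-chained jobs. A minimal sketch, with illustrative file names (the actual NAMD restart files and configuration keywords are not shown):

```python
# Sketch of the 50-step restart chain described above: 25 ns split into
# 500 ps steps, each starting from the previous step's checkpoint.
# File names are illustrative, not the actual NAMD restart files.
TOTAL_NS = 25
STEP_PS = 500

def plan_steps(total_ns=TOTAL_NS, step_ps=STEP_PS):
    n_steps = (total_ns * 1000) // step_ps  # 25 ns = 25000 ps -> 50 steps
    plan = []
    for i in range(1, n_steps + 1):
        plan.append({
            "step": i,
            "restart_from": None if i == 1 else f"step{i - 1:02d}.restart",
            "output": f"step{i:02d}.restart",
        })
    return plan

steps = plan_steps()
print(len(steps))  # 50
```

Each entry completes well within a typical queue time limit, and the chain can resume from the last saved checkpoint if a step fails.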
Added:
>
>
Gaussian - Stefano Ottani (ISOF-CNR-BO)
Gaussian 09 provides state-of-the-art capabilities for electronic structure modeling and can run on single-CPU systems as well as in parallel on shared-memory multiprocessor systems. Starting from the fundamental laws of quantum mechanics, Gaussian 09 predicts the energies, molecular structures, vibrational frequencies and molecular properties of molecules and reactions in a wide variety of chemical environments. It parallelizes the code using either a custom system or Linda.

The use case involves an umbrella sampling calculation with many short parallel simulations whose output is statistically analysed.
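The aggregation step of such a campaign can be sketched as follows. This is only a schematic illustration of collecting one scalar result per short run and analysing the set statistically; real umbrella-sampling analysis (e.g. with WHAM) is considerably more involved, and the sample values below are invented.

```python
# Schematic aggregation of many short parallel runs: one scalar result per
# run, then simple statistics over the set. The values are hypothetical.
import statistics

def aggregate(results):
    """Return the mean and sample standard deviation of per-run results."""
    return statistics.mean(results), statistics.stdev(results)

per_run_energy = [-102.4, -101.9, -102.7, -102.1]  # hypothetical outputs
mean, spread = aggregate(per_run_energy)
print(mean, spread)
```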

Globo - (ISAC-CNR-BO)
 
RegCM - Stefano Cozzini (SISSA)

RegCM is the first limited-area model developed for long-term regional climate simulation; it is currently developed at the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy. RegCM4 is a regional climate model based on the concept of one-way nesting, in which large-scale meteorological fields from a Global Circulation Model (GCM) run provide initial and time-dependent meteorological boundary conditions for high-resolution simulations on a specific region. The RegCM4 computational engine is CPU-intensive and uses MPI parallel software.

Revision 23 - 2012-10-22 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 57 to 57
 
Deleted:
<
<

Requirements Outline:

GAIA Einstein-Tk NEMO NAMD QuantumEspresso RegCM
1 Code Developer - E.g.: P(=personal) ,OS(=OpenSource), C(=Commercial), ...
OS+P OS+P OS+P OS OS OS
2 Parallelization Model - E.g.: MPI, openMP, Hybrid, GPU, ..
MPI/openMP MPI/openMP MPI/openMP MPI/openMP MPI/openMP GPU MPI
3 CPU requirements - E.g.: 1-10 WholeNodes, 16-24 CPUs, ..
8-64 WholeNodes 1-64 WholeNodes 1 or more WholeNode Node 1-1024 8-128 WholeNodes 1 WholeNode
4 Memory requirements - E.g.: No, 1-2 GB per core, Memory-Bound(=whole avail. mem.), ..
Mem-bound 1GB per core 1GB per core no 1-2 GB No
5 Match Making - Yes(wms can find a suitable resource),  No( I select the resource)
No No No Yes (no preinstalled exe) No Yes
6 Compiler/Version needed on the WN - E.g.: No=compiled on the UI, gcc/g++, icc, Java ..)
gcc-c++ f90, gcc-c++ NO NO No No
7 Libraries/version needed on the WN (COMPILE time) - E.g.: no(=compiled on the UI), Blas, Lapack, Gsl, ..
cfitsio blas, lapack, gsl, hdf5, fftw3 if compiled, netcdf no no no
8 Libraries needed on the WN (EXECUTION time) - E.g.: Blas, Lapack, Gsl, ..
cfitsio blas, lapack, gsl, hdf5, fftw3 no (netcdf statically linked) no no no
9 Interpreter needed on the WN - E.g.: Python, perl, ..
no no Python wrapper created No no no
10 Applications on the WN - Name and version (if important) of a needed Application (QE, gaussian,..)
no no no no QE-latest-vers no
11 Large amount of Data - E.g.: No, 10GB Input, 20-30 GB output, ...
400GB 20GB output 10GB input 10GB output 10 GB output 100GB
12 Data Intensive - E.g.: No(=don't care), SAN(=Shared High speed SAN), Local(=1 Local scratch disk per Node, .. )
No SAN No No SAN/Local No, files are saved to custom storage via OpenDAP
13 Expected Execution time - E .g.: No=within a few hours, 24-48h, 72h or more,..
? >3weeks >72h >3 weeks 72 or more 72 or more
14 Checkpoints - E.g.: No(=don't care) Yes(=I need a safe local storage area to save CheckPoints )
Yes on the same site Yes can continue on different sites No, output is retrieved via SE Yes, on the same site No, files are saved to custom storage via OpenDAP
15 Communication Intensive - E.g.: No (=don't care) , IB(=Infiniband needed) , ...
No IB No IB if more than 3 physical nodes used IB no
16 Web portal - E.g.: No(not needed), D(=yes for Data management), CI ( yes with Customized Interface), CHK=(Checkpoint support)...
CI CI CI No No D
17 Main requirements and Expected outcomes (e.g.: optimized compilers/libraries, distributed File-systems, execution speedup, ..)
           

Revision 22 - 2012-10-19 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 93 to 93
 
No IB No IB if more than 3 physical nodes used IB no
16 Web portal - E.g.: No(not needed), D(=yes for Data management), CI ( yes with Customized Interface), CHK=(Checkpoint support)...
CI CI CI No No D
\ No newline at end of file
Added:
>
>
17 Main requirements and Expected outcomes (e.g.: optimized compilers/libraries, distributed File-systems, execution speedup, ..)
           

Revision 21 - 2012-10-16 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 82 to 82
 
10 Applications on the WN - Name and version (if important) of a needed Application (QE, gaussian,..)
no no no no QE-latest-vers no
11 Large amount of Data - E.g.: No, 10GB Input, 20-30 GB output, ...
Changed:
<
<
No 20GB output 10GB input 10GB output 10 GB output 100GB
>
>
400GB 20GB output 10GB input 10GB output 10 GB output 100GB
 
12 Data Intensive - E.g.: No(=don't care), SAN(=Shared High speed SAN), Local(=1 Local scratch disk per Node, .. )
No SAN No No SAN/Local No, files are saved to custom storage via OpenDAP
13 Expected Execution time - E .g.: No=within a few hours, 24-48h, 72h or more,..
Changed:
<
<
? >72h >72h >3 weeks 72 or more 72 or more
>
>
? >3weeks >72h >3 weeks 72 or more 72 or more
 
14 Checkpoints - E.g.: No(=don't care) Yes(=I need a safe local storage area to save CheckPoints )
Changed:
<
<
No Yes can continue on different sites No, output is retrieved via SE Yes, on the same site No, files are saved to custom storage via OpenDAP
>
>
Yes on the same site Yes can continue on different sites No, output is retrieved via SE Yes, on the same site No, files are saved to custom storage via OpenDAP
 
15 Communication Intensive - E.g.: No (=don't care) , IB(=Infiniband needed) , ...
No IB No IB if more than 3 physical nodes used IB no
16 Web portal - E.g.: No(not needed), D(=yes for Data management), CI ( yes with Custmized Interface), CHK=(Checkpoint support)...

Revision 20 - 2012-10-16 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 86 to 86
 
12 Data Intensive - E.g.: No(=don't care), SAN(=Shared High speed SAN), Local(=1 Local scratch disk per Node, .. )
No SAN No No SAN/Local No, files are saved to custom storage via OpenDAP
13 Expected Execution time - E .g.: No=within a few hours, 24-48h, 72h or more,..
Changed:
<
<
? >72h >72h >3 weeks 72 or more 72 or mode
>
>
? >72h >72h >3 weeks 72 or more 72 or more
 
14 Checkpoints - E.g.: No(=don't care) Yes(=I need a safe local storage area to save CheckPoints )
No Yes can continue on different sites No, output is retrieved via SE Yes, on the same site No, files are saved to custom storage via OpenDAP
15 Communication Intensive - E.g.: No (=don't care) , IB(=Infiniband needed) , ...

Revision 19 - 2012-10-13 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 60 to 60
 

Requirements Outline:

Changed:
<
<
GAIA Einstein-Tk NEMO NAMD QuantumEspresso RegCM
1 - Parallelization Model (MPI, openMP, Hybrid, GPU, .. )
MPI/openMP MPI/openMP MPI/openMP MPI/openMP MPI/openMP GPU MPI
2- CPU requirements (1-10 WholeNodes, 16-24 CPUs, ..)
8-64 WholeNodes 1-64 WholeNodes SingleNode or Multinode with SharedHome 1-1024 WholeNodes 8-128 WN 1
3 - Memory requirements ( No, 1-2 GB per core, Memory-Bound=whole avail. mem. )
Mem-bound 1GB per core 1GB per core no 1-2 GB No
4 - Match Making (Yes=WMS can find a suitable resource, No = I select the resource )
No No No Yes (no preinstalled exe) No Yes
5 - Code Developer (P=personal ,OS=OpenSource, C=Commercial, ...)
OS+P OS+P P+OS OS OS OS
6 - Compiler/Version needed on the WN (No=compiled on the UI, gcc/g++, icc, Java ..)
gcc-c++ f90, gcc-c++ NO NO No No
7 - Libraries/version needed on the WN (COMPILE time) (no=compiled on the UI, Blas, Lapack, Gsl, ..)
cfitsio blas, lapack, gsl, hdf5, fftw3 if compiled, netcdf no no no
8 - Libraries needed on the WN (EXECUTION time) (Blas, Lapack, Gsl, ..)
cfitsio blas, lapack, gsl, hdf5, fftw3 no (netcdf statically linked) no no no
9 - Interpreter needed on the WN (Python, perl, ..)
no no Python wrapper created No
10 - Applications needed on the WN. Specify version if important (QE, gaussian,..)
no no no no QE-yes no
11 - Large amount of Data (No, 10GB Input, 20-30 GB output)
No 20GB output 10GB input 10GB output 10 GB output 100GB
12 - Data Intensive (No, SAN=High speed SAN, Local=Local scratch disks, .. )
No SAN No No SAN/Local No
13 - Expected Execution time (No=within a few hours, 24-48h, 72h or more )
? >72h >72h >3 weeks 72 or more 72 or mode
14 - Checkpoints (No, Yes=I need a safe local storage area to save CheckPoints )
No Yes can continue on different sites No, output is retrieved via SE Yes, on the same site No, files are saved to custom storage via OpenDAP
15 - Communication Intensive (No, Infiniband, ... )
No Infiniband no (single node) infiniband if more than 3 physical nodes used IB no
16 - Web portal (No=not needed, D=yes for Data management, CI= yes with Customized Interface, ... )
CI CI CI No No D
>
>
GAIA Einstein-Tk NEMO NAMD QuantumEspresso RegCM
1 Code Developer - E.g.: P(=personal) ,OS(=OpenSource), C(=Commercial), ...
OS+P OS+P OS+P OS OS OS
2 Parallelization Model - E.g.: MPI, openMP, Hybrid, GPU, ..
MPI/openMP MPI/openMP MPI/openMP MPI/openMP MPI/openMP GPU MPI
3 CPU requirements - E.g.: 1-10 WholeNodes, 16-24 CPUs, ..
8-64 WholeNodes 1-64 WholeNodes 1 or more WholeNode Node 1-1024 8-128 WholeNodes 1 WholeNode
4 Memory requirements - E.g.: No, 1-2 GB per core, Memory-Bound(=whole avail. mem.), ..
Mem-bound 1GB per core 1GB per core no 1-2 GB No
5 Match Making - Yes(wms can find a suitable resource),  No( I select the resource)
No No No Yes (no preinstalled exe) No Yes
6 Compiler/Version needed on the WN - E.g.: No=compiled on the UI, gcc/g++, icc, Java ..)
gcc-c++ f90, gcc-c++ NO NO No No
7 Libraries/version needed on the WN (COMPILE time) - E.g.: no(=compiled on the UI), Blas, Lapack, Gsl, ..
cfitsio blas, lapack, gsl, hdf5, fftw3 if compiled, netcdf no no no
8 Libraries needed on the WN (EXECUTION time) - E.g.: Blas, Lapack, Gsl, ..
cfitsio blas, lapack, gsl, hdf5, fftw3 no (netcdf statically linked) no no no
9 Interpreter needed on the WN - E.g.: Python, perl, ..
no no Python wrapper created No no no
10 Applications on the WN - Name and version (if important) of a needed Application (QE, gaussian,..)
no no no no QE-latest-vers no
11 Large amount of Data - E.g.: No, 10GB Input, 20-30 GB output, ...
No 20GB output 10GB input 10GB output 10 GB output 100GB
12 Data Intensive - E.g.: No(=don't care), SAN(=Shared High speed SAN), Local(=1 Local scratch disk per Node, .. )
No SAN No No SAN/Local No, files are saved to custom storage via OpenDAP
13 Expected Execution time - E .g.: No=within a few hours, 24-48h, 72h or more,..
? >72h >72h >3 weeks 72 or more 72 or mode
14 Checkpoints - E.g.: No(=don't care) Yes(=I need a safe local storage area to save CheckPoints )
No Yes can continue on different sites No, output is retrieved via SE Yes, on the same site No, files are saved to custom storage via OpenDAP
15 Communication Intensive - E.g.: No (=don't care) , IB(=Infiniband needed) , ...
No IB No IB if more than 3 physical nodes used IB no
16 Web portal - E.g.: No(not needed), D(=yes for Data management), CI ( yes with Customized Interface), CHK=(Checkpoint support)...
CI CI CI No No D

Revision 18 - 2012-10-11 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 62 to 62
 
GAIA Einstein-Tk NEMO NAMD QuantumEspresso RegCM
1 - Parallelization Model (MPI, openMP, Hybrid, GPU, .. )
Changed:
<
<
MPI/openMP MPI/openMP MPI/openMP MPI/openMP    
>
>
MPI/openMP MPI/openMP MPI/openMP MPI/openMP MPI/openMP GPU MPI
 
2- CPU requirements (1-10 WholeNodes, 16-24 CPUs, ..)
Changed:
<
<
8-64 WholeNodes 1-64 WholeNodes SingleNode or Multinode with SharedHome 1-1024    
>
>
8-64 WholeNodes 1-64 WholeNodes SingleNode or Multinode with SharedHome 1-1024 WholeNodes 8-128 WN 1
 
3 - Memory requirements ( No, 1-2 GB per core, Memory-Bound=whole avail. mem. )
Changed:
<
<
Mem-bound 1GB per core 1GB per core no    
>
>
Mem-bound 1GB per core 1GB per core no 1-2 GB No
 
4 - Match Making (Yes=WMS can find a suitable resource, No = I select the resource )
Changed:
<
<
No No No Yes (no preinstalled exe)    
>
>
No No No Yes (no preinstalled exe) No Yes
 
5 - Code Developer (P=personal ,OS=OpenSource, C=Commercial, ...)
Changed:
<
<
OS+P OS+P P+OS OS    
>
>
OS+P OS+P P+OS OS OS OS
 
6 - Compiler/Version needed on the WN (No=compiled on the UI, gcc/g++, icc, Java ..)
Changed:
<
<
gcc-c++ f90, gcc-c++ NO NO    
>
>
gcc-c++ f90, gcc-c++ NO NO No No
 
7 - Libraries/version needed on the WN (COMPILE time) (no=compiled on the UI, Blas, Lapack, Gsl, ..)
Changed:
<
<
cfitsio blas, lapack, gsl, hdf5, fftw3 if compiled, netcdf no    
>
>
cfitsio blas, lapack, gsl, hdf5, fftw3 if compiled, netcdf no no no
 
8 - Libraries needed on the WN (EXECUTION time) (Blas, Lapack, Gsl, ..)
Changed:
<
<
cfitsio blas, lapack, gsl, hdf5, fftw3 no (netcdf statically linked) no    
>
>
cfitsio blas, lapack, gsl, hdf5, fftw3 no (netcdf statically linked) no no no
 
9 - Interpreter needed on the WN (Python, perl, ..)
Changed:
<
<
no no Python wrapper created No
>
>
no no Python wrapper created No no no
 
10 - Applications needed on the WN. Specify version if important (QE, gaussian,..)
Changed:
<
<
no no no no    
>
>
no no no no QE-yes no
 
11 - Large amount of Data (No, 10GB Input, 20-30 GB output)
Changed:
<
<
No 20GB output 10GB input 10GB output    
>
>
No 20GB output 10GB input 10GB output 10 GB output 100GB
 
12 - Data Intensive (No, SAN=High speed SAN, Local=Local scratch disks, .. )
Changed:
<
<
No SAN No No  
>
>
No SAN No No SAN/Local No
 
13 - Expected Execution time (No=within a few hours, 24-48h, 72h or more )
Changed:
<
<
? >72h >72h >3 weeks    
14 - Checkpoints (No, Yes=I need a safe storage area to save checkpoints )
No Yes can continue on different sites can continue on different site    
>
>
? >72h >72h >3 weeks 72 or more 72 or mode
14 - Checkpoints (No, Yes=I need a safe local storage area to save CheckPoints )
No Yes can continue on different sites No, output is retrieved via SE Yes, on the same site No, files are saved to custom storage via OpenDAP
 
15 - Communication Intensive (No, Infiniband, ... )
Changed:
<
<
No Infiniband no (single node) infiniband if more than 3 physical nodes used    
>
>
No Infiniband no (single node) infiniband if more than 3 physical nodes used IB no
 
16 - Web portal (No=not needed, D=yes for Data management, CI= yes with Customized Interface, ... )
Deleted:
<
<
CI CI CI No    
 \ No newline at end of file
Added:
>
>
CI CI CI No No D

Revision 17 - 2012-10-11 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 64 to 64
 
1 - Parallelization Model (MPI, openMP, Hybrid, GPU, .. )
MPI/openMP MPI/openMP MPI/openMP MPI/openMP    
2- CPU requirements (1-10 WholeNodes, 16-24 CPUs, ..)
Changed:
<
<
8-64 WholeNodes 1-64 WholeNodes SingleNode or SharedHome 1-1024    
>
>
8-64 WholeNodes 1-64 WholeNodes SingleNode or Multinode with SharedHome 1-1024    
 
3 - Memory requirements ( No, 1-2 GB per core, Memory-Bound=whole avail. mem. )
Mem-bound 1GB per core 1GB per core no    
4 - Match Making (Yes=WMS can find a suitable resource, No = I select the resource )

Revision 16 - 2012-10-11 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Einstein Tk - Roberto De Pietri (INFN-Parma)
Changed:
<
<
The Einstein Toolkit [18] is open software that provides the core computational tools needed by relativistic astrophysics, i.e., to solve Einstein's equations coupled to matter and magnetic fields. In practice, the toolkit solves time-dependent partial differential equations on mesh-refined three-dimensional grids. The code has been parallelized using MPI/OpenMP and is currently in production with simulations involving up to 256 cores at the PISA site.
>
>
The Einstein Toolkit is open software that provides the core computational tools needed by relativistic astrophysics, i.e., to solve Einstein's equations coupled to matter and magnetic fields. In practice, the toolkit solves time-dependent partial differential equations on mesh-refined three-dimensional grids. The code has been parallelized using MPI/OpenMP and is currently in production with simulations involving up to 256 cores at the PISA site.
 
Line: 61 to 61
 

Requirements Outline:

GAIA Einstein-Tk NEMO NAMD QuantumEspresso RegCM
Changed:
<
<
Parallelization Model (MPI, openMP, Hybrid, GPU, .. )
>
>
1 - Parallelization Model (MPI, openMP, Hybrid, GPU, .. )
 
MPI/openMP MPI/openMP MPI/openMP MPI/openMP    
Changed:
<
<
CPU requirements (1-10 WholeNodes, 16-24 CPUs, ..)
>
>
2- CPU requirements (1-10 WholeNodes, 16-24 CPUs, ..)
 
8-64 WholeNodes 1-64 WholeNodes SingleNode or SharedHome 1-1024    
Changed:
<
<
Memory requirements ( No, 1-2 GB per core, Memory-Bound=whole avail. mem. )
Mem-bound 1GB per core 1GB per core no mem bound    
Match Making (Yes=WMS can find a suitable resource, No = I select the resource )
>
>
3 - Memory requirements ( No, 1-2 GB per core, Memory-Bound=whole avail. mem. )
Mem-bound 1GB per core 1GB per core no    
4 - Match Making (Yes=WMS can find a suitable resource, No = I select the resource )
 
No No No Yes (no preinstalled exe)    
Changed:
<
<
Code Developer (P=personal ,OS=OpenSource, C=Commercial, ...)
>
>
5 - Code Developer (P=personal ,OS=OpenSource, C=Commercial, ...)
 
OS+P OS+P P+OS OS    
Changed:
<
<
Specific Compiler/Version needed on the WN (No=compiled on the UI, gcc/g++, icc, Java ..)
>
>
6 - Compiler/Version needed on the WN (No=compiled on the UI, gcc/g++, icc, Java ..)
 
gcc-c++ f90, gcc-c++ NO NO    
Changed:
<
<
Interpreter needed on the WN (Python, perl, ..)
>
>
7 - Libraries/version needed on the WN (COMPILE time) (no=compiled on the UI, Blas, Lapack, Gsl, ..)
cfitsio blas, lapack, gsl, hdf5, fftw3 if compiled, netcdf no    
8 - Libraries needed on the WN (EXECUTION time) (Blas, Lapack, Gsl, ..)
cfitsio blas, lapack, gsl, hdf5, fftw3 no (netcdf statically linked) no    
9 - Interpreter needed on the WN (Python, perl, ..)
 
no no Python wrapper created No
Changed:
<
<
Applications needed on the WN. Specify version if important (QE, gaussian,..)
>
>
10 - Applications needed on the WN. Specify version if important (QE, gaussian,..)
 
no no no no    
Changed:
<
<
Libraries needed on the WN at EXECUTION time (Blas, Lapack, Gsl, ..)
cfitsio blas, lapack, gsl, hdf5, fftw3 netcdf statically linked no    
Libraries needed on the WN at COMPILE time (Blas, Lapack, Gsl, ..)
cfitsio blas, lapack, gsl, hdf5, fftw3 if compiled, netcdf    
Large amount of Data (No, 10GB Input, 20-30 GB output)
>
>
11 - Large amount of Data (No, 10GB Input, 20-30 GB output)
 
No 20GB output 10GB input 10GB output  
Changed:
<
<
Data Intensive (No, SAN=High speed SAN, Local=Local scratch disks, .. )
>
>
12 - Data Intensive (No, SAN=High speed SAN, Local=Local scratch disks, .. )
 
No SAN No No  
Changed:
<
<
Expected Execution time (No=within a few hours, 24-48h, 72h or more )
>
>
13 - Expected Execution time (No=within a few hours, 24-48h, 72h or more )
 
? >72h >72h >3 weeks  
Changed:
<
<
Checkpoints (No, Yes=I need a safe storage area to save checkpoints )
>
>
14 - Checkpoints (No, Yes=I need a safe storage area to save checkpoints )
 
No Yes can continue on different sites can continue on different site  
Changed:
<
<
Communication Intensive (No, Infiniband, ... )
No Infiniband we used only a single node up to now infiniband if more than 3 physical nodes used  
Web portal (No=not needed, D=yes for Data management, CI= yes with Customized Interface, ... )
>
>
15 - Communication Intensive (No, Infiniband, ... )
No Infiniband no (single node) infiniband if more than 3 physical nodes used    
16 - Web portal (No=not needed, D=yes for Data management, CI= yes with Customized Interface, ... )
 
CI CI CI No    
\ No newline at end of file

Revision 15 - 2012-10-11 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 80 to 80
 
Libraries needed on the WN at EXECUTION time (Blas, Lapack, Gsl, ..)
cfitsio blas, lapack, gsl, hdf5, fftw3 netcdf statically linked no    
Libraries needed on the WN at COMPILE time (Blas, Lapack, Gsl, ..)
Changed:
<
<
cfitsio blas, lapack, gsl, hdf5, fftw3 if comiled, netcdf    
>
>
cfitsio blas, lapack, gsl, hdf5, fftw3 if compiled, netcdf    
 
Large amount of Data (No, 10GB Input, 20-30 GB output)
No 20GB output 10GB input 10GB output  
Data Intensive (No, SAN=High speed SAN, Local=Local scratch disks, .. )
No SAN No No  
Expected Execution time (No=within a few hours, 24-48h, 72h or more )
? >72h >72h >3 weeks  
Changed:
<
<
Checkpoints (No, Yes=I'll continue the run on the same site )
>
>
Checkpoints (No, Yes=I need a safe storage area to save checkpoints )
 
No Yes can continue on different sites can continue on different site  
Communication Intensive (No, Infiniband, ... )
No Infiniband we used only a single node up to now infiniband if more than 3 physical nodes used  

Revision 14 - 2012-10-09 - DanieleCesini

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 62 to 62
 
GAIA Einstein-Tk NEMO NAMD QuantumEspresso RegCM
Parallelization Model (MPI, openMP, Hybrid, GPU, .. )
Changed:
<
<
MPI/openMP MPI/openMP        
>
>
MPI/openMP MPI/openMP MPI/openMP MPI/openMP    
 
CPU requirements (1-10 WholeNodes, 16-24 CPUs, ..)
Changed:
<
<
8-64 WholeNodes 1-64 WholeNodes        
>
>
8-64 WholeNodes 1-64 WholeNodes SingleNode or SharedHome 1-1024    
 
Memory requirements ( No, 1-2 GB per core, Memory-Bound=whole avail. mem. )
Changed:
<
<
Mem-bound 1GB per core        
>
>
Mem-bound 1GB per core 1GB per core no mem bound    
 
Match Making (Yes=WMS can find a suitable resource, No = I select the resource )
Changed:
<
<
No No        
>
>
No No No Yes (no preinstalled exe)    
 
Code Developer (P=personal ,OS=OpenSource, C=Commercial, ...)
Changed:
<
<
OS+P OS+P        
>
>
OS+P OS+P P+OS OS    
 
Specific Compiler/Version needed on the WN (No=compiled on the UI, gcc/g++, icc, Java ..)
Changed:
<
<
gcc-c++ f90, gcc-c++        
>
>
gcc-c++ f90, gcc-c++ NO NO    
 
Interpreter needed on the WN (Python, perl, ..)
Changed:
<
<
no no        
>
>
no no Python wrapper created No
 
Applications needed on the WN. Specify version if important (QE, gaussian,..)
Changed:
<
<
no no        
>
>
no no no no    
 
Libraries needed on the WN at EXECUTION time (Blas, Lapack, Gsl, ..)
Changed:
<
<
cfitsio blas, lapack, gsl, hdf5, fftw3        
>
>
cfitsio blas, lapack, gsl, hdf5, fftw3 netcdf statically linked no    
 
Libraries needed on the WN at COMPILE time (Blas, Lapack, Gsl, ..)
Changed:
<
<
cfitsio blas, lapack, gsl, hdf5, fftw3      
>
>
cfitsio blas, lapack, gsl, hdf5, fftw3 if comiled, netcdf    
 
Large amount of Data (No, 10GB Input, 20-30 GB output)
Changed:
<
<
No 20GB output      
>
>
No 20GB output 10GB input 10GB output  
 
Data Intensive (No, SAN=High speed SAN, Local=Local scratch disks, .. )
Changed:
<
<
No SAN      
>
>
No SAN No No  
 
Expected Execution time (No=within a few hours, 24-48h, 72h or more )
Changed:
<
<
? >72h      
>
>
? >72h >72h >3 weeks  
 
Checkpoints (No, Yes=I'll continue the run on the same site )
Changed:
<
<
No Yes      
>
>
No Yes can continue on different sites can continue on different site  
 
Communication Intensive (No, Infiniband, ... )
Changed:
<
<
No Infiniband      
>
>
No Infiniband we used only a single node up to now infiniband if more than 3 physical nodes used  
 
Web portal (No=not needed, D=yes for Data management, CI= yes with Customized Interface, ... )
Deleted:
<
<
CI CI        
 \ No newline at end of file
Added:
>
>
CI CI CI No    
 \ No newline at end of file

Revision 13 - 2012-10-08 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

MPI-multicore: Applications

Line: 6 to 6
 The Einstein Toolkit [18] is open software that provides the core computational tools needed by relativistic astrophysics, i.e., to solve Einstein's equations coupled to matter and magnetic fields. In practice, the toolkit solves time-dependent partial differential equations on mesh-refined three-dimensional grids. The code has been parallelized using MPI/OpenMP and is currently in production with simulations involving up to 256 cores at the PISA site.
Changed:
<
<
>
>
 
GAIA Mission - Ugo Becciani (INAF)
The parallel application is for the development and testing of the core part of the AVU-GSR (Astrometric Verification Unit - Global Sphere Reconstruction) software developed for the ESA Gaia Mission. The main goal of this mission is the production of a microarcsecond-level, five-parameter astrometric catalog - i.e., including positions, parallaxes and the two components of the proper motions - of about 1 billion stars of our Galaxy, by means of high-precision astrometric measurements conducted by a satellite continuously sweeping the celestial sphere during its 5-year mission.

Revision 12 - 2012-10-08 - RobertoAlfieri

Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Changed:
<
<

MPI-multicore: Applicazioni

>
>

MPI-multicore: Applications

 
Changed:
<
<
  • SISSA, Stefano Cozzini
    • Quantum/espresso: usage will mainly be on multicore nodes (wholenode); 32/64 cores may be needed. The code requires the Intel compiler to be efficient, plus MKL + FFTW3
    • RegCM: here we will continue experimenting within a multicore node (wholenodes). The idea is to run model verification and validation on the Grid: hence many short, but not extremely long, runs. There are no software dependencies here because we will send to the Grid the relocatable package that contains everything
    • LAMMPS: a molecular dynamics code used by SISSA students, with which we would like to run some experiments on the Grid. The idea here too is to use it on multicore architectures for many independent runs.
    • Refs: Calculation of Phonon Dispersions on the Grid Using Quantum ESPRESSO - Protein Folding by Bias Exchange Metadynamics on a Grid Infrastructure

  • CNR-BO
    • Namd (Venturini)
    • Quantum/espresso in MPI mode: first test on the Napoli site, with Degli Esposti joining the unina.it VO
>
>
Einstein Tk - Roberto De Pietri (INFN-Parma)
 
Added:
>
>
The Einstein Toolkit [18] is open software that provides the core computational tools needed by relativistic astrophysics, i.e., to solve Einstein's equations coupled to matter and magnetic fields. In practice, the toolkit solves time-dependent partial differential equations on mesh-refined three-dimensional grids. The code has been parallelized using MPI/OpenMP and is currently in production with simulations involving up to 256 cores at the PISA site.
 
Deleted:
<
<
  • Parma: Roberto De Pietri, with a quantum chromodynamics application and a numerical gravitation application, already in production on the Theophys cluster in Pisa.
 
Changed:
<
<
  • INAF: Ugo Becciani, with parallel/serial "GAIA Mission" applications, available from the beginning of May
>
>
GAIA Mission - Ugo Becciani (INAF)
The parallel application is for the development and testing of the core part of the AVU-GSR (Astrometric Verification Unit - Global Sphere Reconstruction) software developed for the ESA Gaia Mission. The main goal of this mission is the production of a microarcsecond-level, five-parameter astrometric catalog - i.e., including positions, parallaxes and the two components of the proper motions - of about 1 billion stars of our Galaxy, by means of high-precision astrometric measurements conducted by a satellite continuously sweeping the celestial sphere during its 5-year mission.

The memory required to solve the AVU-GSR module depends on the number of stars, the number of observations and the number of computing nodes available in the system. During the mission, the code will be used in a range of 300,000 to at most 50 million stars. The estimated memory requirements are between 5 GB and 8 TB of RAM. The parallel code uses MPI and OpenMP (where available) and is characterized by an extremely low level of communication between the processes, so that preliminary speed-up tests show behavior close to the theoretical speed-up.

Since AVU-GSR is very demanding on hardware resources, the typical execution environment is provided by supercomputers, but the resources provided by IGI are very attractive for debugging purposes and for exploring the simulation behaviour for a limited number of stars.

NEMO - Massimiliano Drudi (INGV)

NEMO is an ocean modelling framework composed of "engines" nested in an "environment". The "engines" provide numerical solutions of the ocean, sea-ice, tracer and biochemistry equations and their related physics. The "environment" consists of the pre- and post-processing tools, the interface to the other components of the Earth System, the user interface, the computer-dependent functions and the documentation of the system. The NEMO 3.4 oofs2 multiscale simulation package has been ported to the Grid environment in order to run parallel calculations. The NEMO code has significant CPU and memory demands (from our calculations we estimated 1 GB/core for an 8-core simulation). The NEMO code can be used for production as well as for testing purposes by modifying the model (which means recompiling the source code) or by varying the input parameters. The user community was interested in exploiting the Grid for the second use case (model testing and tuning), which implies that the package must be executed several times in a parameter-sweeping approach. An additional benefit of this work is that scientists working in the field of oceanography who are interested in running the code can share the application and its results using the Grid data management and sharing facilities.

Namd - Alessandro Venturini (ISOF-CNR-BO)
NAMD is a powerful parallel Molecular Mechanics (MM)/Molecular Dynamics (MD) code particularly suited to the study of large biomolecules. However, it is also compatible with different force fields, making it possible to simulate systems with quite different characteristics. NAMD can be used efficiently on large multi-core platforms and clusters.

The NAMD use case was the simulation of a 36,000-atom lipid system provided by a CNR-ISOF [21] group located in Bologna. To have a real-life use case, the simulation had to be run for at least 25 nanoseconds of simulated time, resulting in a wall-clock time of about 40 days if run on an 8-core machine.

To port NAMD to the Grid environment, the whole application was rebuilt on Scientific Linux 5 with OpenMPI libraries linked dynamically. Sites supporting the MPI-Start framework and OpenMPI were selected to run the jobs through JDL requirements.
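The site selection mentioned above was expressed through JDL requirements on the runtime-environment tags published by the sites. A sketch of the kind of JDL used follows; the wrapper and configuration file names are illustrative:

```
JobType      = "Normal";
CpuNumber    = 8;
Executable   = "mpi-start-wrapper.sh";
Arguments    = "namd2 OPENMPI";
InputSandbox = {"mpi-start-wrapper.sh", "step01.conf"};
Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
               && Member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
```

The two `Member` clauses restrict matchmaking to sites that publish both the MPI-Start framework and an OpenMPI installation.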

The porting was challenging for two main reasons:

  • a data management strategy was needed, because the output files had to be made available to the ISOF researchers and the size of the output could not be easily handled via the WMS. This was achieved through “pre-run” and “post-run” scripts, both enabled via MPI-Start.
  • the length of the simulation implied many computation checkpoints, given the time limits on the batch-system queues of the sites matching the requirements. We decided to split the simulation into 50 steps of 500 ps of simulated time each, allowing each step to complete without reaching the queue time limits.
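The checkpointing scheme above can be verified with a quick calculation. The 2 fs integration timestep below is a typical NAMD choice assumed for illustration; it is not stated in the use case.

```shell
# Sanity check of the checkpointing scheme: 25 ns of simulated time
# split into 500 ps restartable chunks (POSIX shell integer arithmetic).
total_ps=$((25 * 1000))   # total simulated time, ns -> ps
chunk_ps=500              # simulated time per restartable step
timestep_fs=2             # assumed MD integration timestep (typical for NAMD)

n_chunks=$((total_ps / chunk_ps))
md_steps_per_chunk=$((chunk_ps * 1000 / timestep_fs))   # ps -> fs

echo "$n_chunks chunks, $md_steps_per_chunk MD steps each"
```

This reproduces the 50-step split, with each chunk short enough to restart from the previous checkpoint within a queue wall-time limit.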
 
RegCM - Stefano Cozzini (SISSA)

RegCM is the first limited-area model developed for long-term regional climate simulation, currently being developed at the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy. RegCM4 is a regional climate model based on the concept of one-way nesting, in which large-scale meteorological fields from a Global Circulation Model (GCM) run provide the initial and time-dependent meteorological boundary conditions for high-resolution simulations on a specific region. The RegCM4 computational engine is CPU-intensive MPI parallel software.

Standard climate RegCM simulations require large datasets (ranging from a few gigabytes for small regions up to hundreds of gigabytes for the largest ones) to be downloaded onto the Grid and transferred back and forth several times during the model execution. There are, however, other kinds of computational experiments that can be conducted in a Grid environment: validation runs. This kind of experiment requires running many different short simulations with different initial conditions. Such a mixed HTC/HPC approach can be carried out efficiently on the multiple SMP resources made available by the Grid.

We therefore provide the possibility to run RegCM (or, in fact, any other MPI parallel application) through a “relocatable package” approach. With this approach all the needed software, starting from a minimal OpenMPI distribution, is moved to the CEs by the job. All the libraries needed by the program have to be precompiled elsewhere and packaged for easy deployment on whatever architecture the job lands on. The main advantage of this solution is that it will run on almost every machine available on the Grid, and the user does not even need to know which resource the Grid has assigned to him or her. The code itself needs to be compiled against the same “relocatable” libraries and shipped to the CE by the job. This alternative approach allows a user to run a small RegCM simulation on any kind of SMP resource available, which is quite common nowadays. The main drawback of this solution is that a precompiled MPI distribution will not take advantage of any high-speed network available and will generally not be able to use more than one computing node.
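A minimal sketch of how such a relocatable OpenMPI bundle can be activated on an arbitrary worker node follows. The bundle path and layout are assumptions; `OPAL_PREFIX` is the standard OpenMPI environment variable for telling a moved installation where its new root is.

```shell
# Activate a relocatable OpenMPI bundle shipped in the job sandbox.
# The "openmpi-bundle" directory name is hypothetical; the env-var
# mechanism is the point of the sketch.
setup_relocatable_mpi() {
    prefix="$1"
    export OPAL_PREFIX="$prefix"                      # new OpenMPI install root
    export PATH="$prefix/bin:$PATH"                   # pick up mpirun, mpicc, ...
    export LD_LIBRARY_PATH="$prefix/lib:${LD_LIBRARY_PATH:-}"
}

setup_relocatable_mpi "$PWD/openmpi-bundle"
echo "$OPAL_PREFIX"
```

The application binaries, compiled against the same bundled libraries, then run with this `mpirun` regardless of what is installed on the worker node.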

Quantum Espresso - Stefano Cozzini (SISSA)

QUANTUM ESPRESSO (Q/E) is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory. The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization [23]. The suite contains several heterogeneous codes covering a wide range of simulation techniques in the area of quantum simulation for materials science. Typical CPU and memory requirements for Q/E vary by orders of magnitude depending on the type of system and on the calculated physical property, but in general both CPU and memory usage increase quickly with the number of atoms simulated. Only tightly-coupled MPI parallelization with memory distributed across processors allows solving large problems, i.e. systems with a large number of atoms. The MPI programs that compose the Q/E suite need fast, low-latency communications, which requires accessing HPC cluster resources via the Grid. Our goal here is to check and evaluate which kinds of highly intensive parallel production runs can be done on top of the IGI MPI infrastructure.

* Refs: Calculation of Phonon Dispersions on the Grid Using Quantum ESPRESSO

Quantum Espresso - Cristian Degli Esposti (CNR-BO)

First test on the Naples site, with Degli Esposti enrolled in the unina.it VO.

 
Requirements Outline:

| Requirement | GAIA | Einstein-Tk | NEMO | NAMD | QuantumEspresso | RegCM |
| Parallelization Model (MPI, openMP, Hybrid, GPU, ..) | MPI/openMP | MPI/openMP |  |  |  |  |
| CPU requirements (1-10 WholeNodes, 16-24 CPUs, ..) | 8-64 WholeNodes | 1-64 WholeNodes |  |  |  |  |
| Memory requirements (No, 1-2 GB per core, Memory-Bound=whole avail. mem.) | Mem-bound | 1 GB per core |  |  |  |  |
| Match Making (Yes=WMS can find a suitable resource, No=I select the resource) | No | No |  |  |  |  |
| Code Developer (P=personal, OS=OpenSource, C=Commercial, ...) | OS+P | OS+P |  |  |  |  |
| Specific Compiler/Version needed on the WN (No=compiled on the UI, gcc/g++, icc, Java, ..) | gcc-c++ | f90, gcc-c++ |  |  |  |  |
| Interpreter needed on the WN (Python, perl, ..) | no | no |  |  |  |  |
| Applications needed on the WN, specify version if important (QE, gaussian, ..) | no | no |  |  |  |  |
| Libraries needed on the WN at EXECUTION time (Blas, Lapack, Gsl, ..) | cfitsio | blas, lapack, gsl, hdf5, fftw3 |  |  |  |  |
| Libraries needed on the WN at COMPILE time (Blas, Lapack, Gsl, ..) | cfitsio | blas, lapack, gsl, hdf5, fftw3 |  |  |  |  |
| Large amount of Data (No, 10GB Input, 20-30 GB output) | No | 20GB output |  |  |  |  |
| Data Intensive (No, SAN=High speed SAN, Local=Local scratch disks, ..) | No | SAN |  |  |  |  |
| Expected Execution time (No=within a few hours, 24-48h, 72h or more) | ? | >72h |  |  |  |  |
| Checkpoints (No, Yes=I'll continue the run on the same site) | No | Yes |  |  |  |  |
| Communication Intensive (No, Infiniband, ...) | No | Infiniband |  |  |  |  |
| Web portal (No=not needed, D=yes for Data management, CI=yes with Customized Interface, ...) | CI | CI |  |  |  |  |
