Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Changed: | ||||||||
< < | Here is presented an overview of the applications currently ported in the GRID environment . | |||||||
> > | This page presents an overview of the Computational Chemistry applications currently ported to the GRID environment. | |||||||
Installation and porting guides |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
| ||||||||
Changed: | ||||||||
< < | Users which their main field is Computational Chemistry may consult the following Application Database | |||||||
> > | Computational Chemistry users may also consult the EGI.EU Application Database | |||||||
Changed: | ||||||||
< < |
| |||||||
> > | This page presents an overview of the applications currently ported to the GRID environment. | |||||||
Installation and porting guides | ||||||||
Changed: | ||||||||
< < | The following best practices document aims to provide some hints and examples on how to configure and compile some Computational Chemistry related applications on a grid based infrastructure. | |||||||
> > | The following best-practices document provides hints and examples on how to configure and compile some Computational Chemistry applications on a grid-based infrastructure. | |||||||
DL_POLY
Application description | ||||||||
Changed: | ||||||||
< < | DL_POLY is a package of subroutines, programs and data files, designed to facilitate molecular dynamics simulations of macromolecules, polymers, ionic systems, solutions and other molecular systems on a distributed memory parallel computer. The package was written to support the UK project CCP5 by Bill Smith and Tim Forester under grants from the Engineering and Physical Sciences Research Council and is the property of the Science and Technology Facilities Council (STFC). | |||||||
> > | DL_POLY is a package of subroutines, programs and data files, designed to facilitate molecular dynamics simulations of macromolecules, polymers, ionic systems, solutions and other molecular systems on a distributed memory parallel computer. The package was written to support the UK project CCP5 by Bill Smith and Tim Forester under grants from the Engineering and Physical Sciences Research Council and is the property of the Science and Technology Facilities Council (STFC). | |||||||
Two forms of DL_POLY exist. DL_POLY_2 is the earlier version and is based on a replicated data parallelism. It is suitable for simulations of up to 30,000 atoms on up to 100 processors. DL_POLY_3 is a domain decomposition version, written by I.T. Todorov and W. Smith, and is designed for systems beyond the range of DL_POLY_2 - up to 10,000,000 atoms (and beyond) and 1000 processors.
| ||||||||
Changed: | ||||||||
< < |
| |||||||
> > | ||||||||
<-- * VO Contact: Pacifici Leonardo, University of Perugia (Italy) – xleo@dyn.unipg.it --> DL_POLY 2.20
Sequential executable | ||||||||
Changed: | ||||||||
< < | Needed for compilation are:
| |||||||
> > | To compile it, the following are required:
| |||||||
Contact your System Admin if the needed software is missing or not available. | ||||||||
Changed: | ||||||||
< < | A. Download or copy the tar file of DL_POLY_2.20 MD package in a machine with the gLite3.2 middleware installed, untar it in an appropriate sub-directory. Copy the file named MakeSEQ and stored in the build directory into the srcmod directory # cp build/MakeSEQ srcmod/MakefileThe file enable to compile the source code to obtain the sequential version of the executable. | |||||||
> > | A. Download or copy the tar file of the DL_POLY_2.20 MD package to a machine with the gLite3.2 middleware installed, and untar the package in an appropriate sub-directory. Copy the file MakeSEQ, stored in the build directory, into the srcmod directory: # cp build/MakeSEQ srcmod/Makefile. This file enables compiling the source code to obtain the sequential version of the executable. | |||||||
B. Edit the Makefile as follows
| ||||||||
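As a sketch, the edit in step B amounts to selecting the serial build settings in the Makefile target for your compiler. For the Intel compiler, the ifort target looks like the following fragment (the -static flag in LDFLAGS produces a statically linked binary, which is what grid worker nodes need):

```make
#======== ifort (serial) =======================================
ifort:
	$(MAKE) LD="ifort -o " LDFLAGS="-static" FC=ifort \
	FFLAGS="-c -O2" \
	EX=$(EX) BINROOT=$(BINROOT) $(TYPE)
```

Running # make ifort then builds the serial executable with these settings.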
Line: 57 to 55 | ||||||||
After the compilation you should find the DL_POLY executable in the executable directory. To be sure that the executable is statically linked, run the following command
# ldd < executable_name >" not a dynamic executable " should be displayed. | ||||||||
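The check can be wrapped in a tiny helper; a sketch (the function name is ours, not part of DL_POLY), relying on the "not a dynamic executable" message that ldd prints for static binaries:

```shell
# Sketch: report whether a binary is statically linked, based on the
# "not a dynamic executable" message ldd prints for static binaries.
is_static() {
  if ldd "$1" 2>&1 | grep -q "not a dynamic executable"; then
    echo static
  else
    echo dynamic
  fi
}

# Example: most system shells are dynamically linked.
is_static /bin/sh
```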
Changed: | ||||||||
< < | You can use the executable and submit it to the GRID environment. | |||||||
> > | You can now use the executable and submit it to the GRID environment. | |||||||
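With the gLite 3.2 tools, a grid job is described by a JDL file. A minimal sketch, with hypothetical file names (CONTROL, CONFIG and FIELD are DL_POLY's standard input files; adapt everything to your own job):

```
// dlpoly.jdl -- hypothetical example, names are placeholders
Executable = "DLPOLY.X";
StdOutput = "dlpoly.out";
StdError = "dlpoly.err";
InputSandbox = {"DLPOLY.X", "CONTROL", "CONFIG", "FIELD"};
OutputSandbox = {"dlpoly.out", "dlpoly.err", "OUTPUT"};
```

The job can then be submitted with glite-wms-job-submit -a dlpoly.jdl.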
Parallel executable | ||||||||
Changed: | ||||||||
< < | Needed for compilation are:
| |||||||
> > | The following are needed:
| |||||||
Contact your Admin if the needed software is missing or not available. | ||||||||
Changed: | ||||||||
< < | A. Download or copy the tar file of DL_POLY_2.20 MD package in a machine with the gLite3.2 middleware installed, untar it in an appropriate sub-directory. | |||||||
> > | A. Download or copy the tar file of the DL_POLY_2.20 MD package to a machine with the gLite3.2 middleware installed, and untar it in an appropriate sub-directory. | |||||||
Copy the file MakePAR, stored in the build directory, into the srcmod directory
# cp build/MakePAR srcmod/Makefile. This file enables compiling the source code to obtain the parallel version of the executable. | ||||||||
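As a sketch, the ifort target of the parallel Makefile uses the MPI compiler wrapper instead of the bare compiler, and leaves LDFLAGS empty, so the resulting binary is dynamically linked:

```make
#======== ifort (parallel) =======================================
ifort:
	$(MAKE) LD=" mpif90 -o " LDFLAGS=" " FC=mpif90 \
	FFLAGS="-c -O2" \
	EX=$(EX) BINROOT=$(BINROOT) $(TYPE)
```

A statically linked binary can be requested by setting LDFLAGS="-static", as noted later in this guide for the gfortran target.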
Line: 103 to 101 | ||||||||
Sequential executable
Needed for compilation are: | ||||||||
Changed: | ||||||||
< < |
| |||||||
> > |
| |||||||
Contact your System Admin if the needed software is missing or not available. | ||||||||
Changed: | ||||||||
< < | A. Download or copy the tar file of DL_POLY_4.02 MD package in a machine with the EMI1 middleware installed, untar it in an appropriate sub-directory. Copy the file named Makefile_SRL1 and stored in the build directory into the source directory # cp build/Makefile_SRL1 srcmod/MakefileThe file enable to compile the source code to obtain the sequential version of the executable. | |||||||
> > | A. Download or copy the tar file of the DL_POLY_4.02 MD package to a machine with the EMI middleware installed, and untar it in an appropriate sub-directory. Copy the file Makefile_SRL1, stored in the build directory, into the source directory: # cp build/Makefile_SRL1 srcmod/Makefile. This file enables compiling the source code to obtain the sequential version of the executable. | |||||||
B. Edit the Makefile as follows
| ||||||||
Line: 137 to 135 | ||||||||
Parallel executable
Needed for compilation are: | ||||||||
Changed: | ||||||||
< < |
| |||||||
> > |
| |||||||
Contact your Admin if the needed software is missing or not available. | ||||||||
Changed: | ||||||||
< < | A. Download or copy the tar file of DL_POLY_4.02 MD package in a machine with the EMI1 middleware installed, untar it in an appropriate sub-directory. | |||||||
> > | A. Download or copy the tar file of the DL_POLY_4.02 MD package to a machine with the EMI1 middleware installed, and untar it in an appropriate sub-directory. | |||||||
Copy the file Makefile_MPI, stored in the build directory, into the source directory
# cp build/Makefile_MPI srcmod/Makefile. This file enables compiling the source code to obtain the parallel version of the executable. | ||||||||
Line: 171 to 169 | ||||||||
Changed: | ||||||||
< < | GROMCAS | |||||||
> > | GROMACS | |||||||
Application description
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers. GROMACS supports all the usual algorithms you expect from a modern molecular dynamics implementation (check the online reference or manual for details). | ||||||||
Changed: | ||||||||
< < |
| |||||||
> > | ||||||||
<-- VO Contact: Alessandro Costantini, University of Perugia (Italy) – alessandro.costantini@dmi.unipg.it --> | ||||||||
Line: 185 to 183 | ||||||||
Sequential executable | ||||||||
Changed: | ||||||||
< < | Needed for compilation are:
| |||||||
> > | To compile it, the following are needed:
| |||||||
Contact your System Admin if the needed software is missing or not available. A. Download or copy the tar file gromacs-4.5.5.tar.gz to a machine with the gLite3.2 middleware installed, and untar it in an appropriate sub-directory. |
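The build then follows the usual autotools flow; a sketch assuming FFTW is installed under $FFTPATH and the install prefix is $GROMACSPATH (both are placeholders, and fftw3 is one of the FFT backends the configure script accepts):

```shell
# Point the build at the FFTW headers and libraries (paths are placeholders)
export CPPFLAGS=-I$FFTPATH/include
export LDFLAGS=-L$FFTPATH/lib

# Configure for an x86_64 host, pick one FFT backend, build and install
./configure --prefix=$GROMACSPATH/gromacs --disable-x86-64-sse \
            --with-fft=fftw3 --enable-all-static
make
make install
```

The mdrun executable ends up in $GROMACSPATH/gromacs/bin.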
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
| ||||||||
Added: | ||||||||
Users whose main field is Computational Chemistry may consult the following Application Database
| |||||||
Installation and porting guides |
Line: 1 to 1 | ||||||||
---|---|---|---|---|---|---|---|---|
Added: | ||||||||
> > |
Installation and porting guides
The following best practices document aims to provide some hints and examples on how to configure and compile some Computational Chemistry related applications on a grid based infrastructure.
DL_POLY
Application description
DL_POLY is a package of subroutines, programs and data files, designed to facilitate molecular dynamics simulations of macromolecules, polymers, ionic systems, solutions and other molecular systems on a distributed memory parallel computer. The package was written to support the UK project CCP5 by Bill Smith and Tim Forester under grants from the Engineering and Physical Sciences Research Council and is the property of the Science and Technology Facilities Council (STFC). Two forms of DL_POLY exist. DL_POLY_2 is the earlier version and is based on a replicated data parallelism. It is suitable for simulations of up to 30,000 atoms on up to 100 processors. DL_POLY_3 is a domain decomposition version, written by I.T. Todorov and W. Smith, and is designed for systems beyond the range of DL_POLY_2 - up to 10,000,000 atoms (and beyond) and 1000 processors.
<-- * VO Contact: Pacifici Leonardo, University of Perugia (Italy) – xleo@dyn.unipg.it -->
DL_POLY 2.20
Sequential executable
Needed for compilation are:
# cp build/MakeSEQ srcmod/Makefile
This file enables compiling the source code to obtain the sequential version of the executable. B. Edit the Makefile as follows
#======== ifort (serial) =======================================
ifort: $(MAKE) LD="ifort -o " LDFLAGS="-static" FC=ifort \ FFLAGS="-c -O2" \ EX=$(EX) BINROOT=$(BINROOT) $(TYPE)
C. Compile the source code
# make < target_architecture >
Note: for other architectures, please refer to the appropriate OS user guide or contact the System Admin. After the compilation you should find the DL_POLY executable in the executable directory. To be sure that the executable is statically linked, run the following command
# ldd < executable_name >
" not a dynamic executable " should be displayed. You can now use the executable and submit it to the GRID environment.
Parallel executable
Needed for compilation are:
# cp build/MakePAR srcmod/Makefile
This file enables compiling the source code to obtain the parallel version of the executable. B. Edit the Makefile as follows
#======== ifort (parallel) =======================================
ifort: $(MAKE) LD=" mpif90 -o " LDFLAGS=" " FC=mpif90 \ FFLAGS="-c -O2" \ EX=$(EX) BINROOT=$(BINROOT) $(TYPE)
C. Compile the source code
# make < target_architecture >
Note: for other architectures, please refer to the appropriate OS user guide or contact the System Admin. After the compilation you should find the DL_POLY executable in the executable directory. In this case the executable is dynamically linked. To obtain a statically linked executable, add “-static” to the LDFLAGS variable under the “gfortran” target architecture: LDFLAGS="-static". You can now use the executable and submit it to the GRID environment.
DL_POLY 4.02
Sequential executable
Needed for compilation are:
# cp build/Makefile_SRL1 srcmod/Makefile
This file enables compiling the source code to obtain the sequential version of the executable. B. Edit the Makefile as follows
Generic target template as follows
ifort: $(MAKE) LD="ifort -o " LDFLAGS="-static" FC=ifort \ FFLAGS="-c -O2" \ EX=$(EX) BINROOT=$(BINROOT) $(TYPE)
C. Compile the source code
# make < target_architecture >
Note: for other architectures, please refer to the appropriate OS user guide or contact the System Admin. After the compilation you should find the DL_POLY executable in the executable directory. To be sure that the executable is statically linked, run the following command
# ldd < executable_name >
" not a dynamic executable " should be displayed. You can now use the executable and submit it to the GRID environment.
Parallel executable
Needed for compilation are:
# cp build/Makefile_MPI srcmod/Makefile
This file enables compiling the source code to obtain the parallel version of the executable. B. Edit the Makefile as follows
#======== ifort (parallel) =======================================
ifort: $(MAKE) LD=" mpif90 -o " LDFLAGS=" " FC=mpif90 \ FFLAGS="-c -O2" \ EX=$(EX) BINROOT=$(BINROOT) $(TYPE)
C. Compile the source code
# make < target_architecture >
Note: for other architectures, please refer to the appropriate OS user guide or contact the System Admin. After the compilation you should find the DL_POLY executable in the executable directory. In this case the executable is dynamically linked. To obtain a statically linked executable, add “-static” to the LDFLAGS variable under the “gfortran” target architecture: LDFLAGS="-static". You can now use the executable and submit it to the GRID environment.
GROMACS
Application description
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers. GROMACS supports all the usual algorithms you expect from a modern molecular dynamics implementation (check the online reference or manual for details).
<-- VO Contact: Alessandro Costantini, University of Perugia (Italy) – alessandro.costantini@dmi.unipg.it -->
GROMACS 4.5.5
Sequential executable
Needed for compilation are:
# export CPPFLAGS=-I$FFTPATH/include
# export LDFLAGS=-L$FFTPATH/lib
C. Compile the source code on a X86_64 architecture
# ./configure --prefix=$GROMACSPATH/gromacs --disable-x86-64-sse --with-fft={fftw3,fftw2,mkl} --enable-all-static
# make
# make install
Note: for other architectures, please refer to the appropriate OS user guide or contact the System Admin. After the compilation you should find the mdrun executable in the $GROMACSPATH/gromacs/bin directory. To be sure that the executable is statically linked, run the following command
# ldd mdrun
" not a dynamic executable " should be displayed. You can now use the executable and submit it to the GRID environment.
Parallel executable
Needed for compilation are:
# export CPPFLAGS=-I$FFTPATH/include $MPIPATH/include
# export LDFLAGS=-L$FFTPATH/lib $MPIPATH/lib
C. Compile the source code on a X86_64 architecture
# ./configure --prefix=$GROMACSPATH/gromacs --program-suffix=-mpi --disable-x86-64-sse --with-fft={fftw3,fftw2,mkl} --enable-all-static --enable-mpi
# make
# make install
Note: for other architectures, please refer to the appropriate OS user guide or contact the System Admin. After the compilation you should find the mdrun-mpi executable in the $GROMACSPATH/gromacs/bin directory. To be sure that the executable is statically linked, run the following command
# ldd mdrun-mpi
" not a dynamic executable " should be displayed. You can now use the executable and submit it to the GRID environment.
NAMD (2.9)
Application description
NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD is distributed free of charge and includes source code.
Parallel executable
Building a complete NAMD binary from source code requires working C and C++ compilers, Charm++/Converse, TCL, and FFTW. NAMD will compile without TCL or FFTW but certain features will be disabled.
A. Unpack NAMD and the matching Charm++ source code and enter the directory:
tar xzf NAMD_2.9_Source.tar.gz
cd NAMD_2.9_Source
tar xf charm-6.4.0.tar
cd charm-6.4.0
B. Build and test the Charm++/Converse library (multicore version):
./build charm++ mpi-linux-x86_64 --with-production
cd mpi-linux-x86_64/tests/charm++/megatest
make pgm
./pgm +p4 (multicore implementation does not support multiple nodes)
cd ../../../../..
C. Build and test the Charm++/Converse library (MPI version):
env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 --with-production
cd mpi-linux-x86_64/tests/charm++/megatest
make pgm
mpirun -n 4 ./pgm (run as any other MPI program on your cluster)
cd ../../../../..
D.
Download and install the TCL and FFTW libraries: (cd to NAMD_2.9_Source if you're not already there)
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz
tar xzf fftw-linux-x86_64.tar.gz
mv linux-x86_64 fftw
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64.tar.gz
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64-threaded.tar.gz
tar xzf tcl8.5.9-linux-x86_64.tar.gz
tar xzf tcl8.5.9-linux-x86_64-threaded.tar.gz
mv tcl8.5.9-linux-x86_64 tcl
mv tcl8.5.9-linux-x86_64-threaded tcl-threaded
E. Edit the configuration files as follows, filling in the paths of the needed libraries:
$ cat arch/Linux-x86_64-grid.arch
NAMD_ARCH = Linux-x86_64
CHARMARCH = mpi-linux-x86_64
CXX = /opt/openmpi-1.4.3-gfortran44/bin/mpic++ -m64 -O3
CXXOPTS = -fexpensive-optimizations -ffast-math
CC = /opt/openmpi-1.4.3-gfortran44/bin/mpicc -m64 -O3
COPTS = -fexpensive-optimizations -ffast-math
F. Set up the build directory and compile: MPI version:
./config Linux-x86_64-grid --charm-arch mpi-linux-x86_64
cd Linux-x86_64-grid
make (or gmake -j4, which should run faster)
G. Quick tests using one and two processes: (this is a 66-atom simulation so don't expect any speedup)
./namd2 src/alanin
(for the MPI version, run the namd2 binary as any other MPI executable)
Gaussian
To be completed by Daniele
CRYSTAL
To be completed by Alessandro
Tools
Links to GRIF and GCRES
-- DanieleCesini - 2012-11-16 |