Compiling with OpenMPI

Qbox installation issues
Posts: 6
Joined: Fri Dec 13, 2013 10:31 am

Compiling with OpenMPI

Post by ckande »

For anyone interested in compiling with OpenMPI, here is the .mk file I used. The overall combination was:

- Intel 13.1
- OPENMPI-1.6.5 (./configure --with-tm --with-openib)
- FFTW-2.1.5 (./configure) (MKL FFTW2XF did not work for me)
- XERCES-2.8.0 (./runConfigure -p linux -r none -s -c icc -x icpc)
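The configure options listed above correspond roughly to the following build sequence. This is a sketch under assumptions: the post gives only the configure flags, so the install prefixes, `make` invocations, and directory names here are placeholders to adjust for your own environment.

```shell
# Hedged sketch of the dependency builds listed above.
# All --prefix values and directory names are assumptions, not from the post.

# OpenMPI 1.6.5 with Torque (--with-tm) and InfiniBand (--with-openib) support
cd openmpi-1.6.5
./configure --with-tm --with-openib --prefix=$HOME/opt/openmpi/1.6.5
make && make install

# FFTW 2.1.5 with default options (the MKL FFTW2 interface did not work here)
cd ../fftw-2.1.5
./configure --prefix=$HOME/opt/fftw/2.1.5
make && make install

# Xerces-C 2.8.0 built with the Intel compilers via its runConfigure script
cd ../xerces-c-src_2_8_0/src/xercesc
./runConfigure -p linux -r none -s -c icc -x icpc
make
```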

mk file

Code: Select all

# modules required: intel impi mkl fftw2
# Before using make, use:
# $ module load intel mkl fftw2
# $ module swap mvapich2 impi 
# Copyright (c) 2013 The Regents of the University of California
# This file is part of Qbox
# Qbox is distributed under the terms of the GNU General Public License 
# as published by the Free Software Foundation, either version 2 of 
# the License, or (at your option) any later version.
# See the file COPYING in the root directory of this distribution
# or <>.
MKLROOT = $(HOME)/opt/intel/mkl
FFTWDIR = $(HOME)/opt/fftw/2.1.5

CXX = mpicxx
LD = mpicxx

#PLTFLAGS += -D__linux__

INCLUDE = -I$(FFTWDIR)/include -I$(XERCESCDIR)/include

CXXFLAGS= -O3 $(INCLUDE) $(PLTFLAGS) $(DFLAGS) #-g -debug parallel #-openmp #-vec-report1

LDFLAGS = $(LIBPATH) $(LIBS) #-g -debug parallel #-openmp

LIBS =  -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64  \
         -lmkl_sequential -lmkl_core -lmkl_blacs_openmpi_lp64 -lpthread \
         -lmkl_lapack95_lp64 -lm \
         -L$(FFTWDIR)/lib -lfftw \
         -L$(XERCESCDIR)/lib -lxerces-c
         #-luuid \
         #-Bstatic \
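Qbox's Makefile selects a platform file through the `TARGET` variable, so the file above would be saved as `<name>.mk` in the source directory and referenced by that name. The file name, Xerces path, and job size below are placeholders, not from the post:

```shell
# Build Qbox using the .mk file above, saved as src/mycluster.mk (name is a placeholder).
# XERCESCDIR is referenced by the .mk file and must point at your Xerces install.
cd qbox/src
export XERCESCDIR=$HOME/opt/xerces-c/2.8.0
make TARGET=mycluster
```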

Site Admin
Posts: 139
Joined: Tue Jun 17, 2008 7:03 pm

Re: Compiling with OpenMPI

Post by fgygi »

Thanks for the post.
Also note a particular feature of openmpi: the parameters used with the mpirun command are important to determine which transport layer is used (e.g. infiniband vs gigabit ethernet). On our cluster, using the default parameters causes openmpi to use gigabit ethernet, which gives very poor performance for Qbox. In order to get infiniband to be used, the following parameters must be used with the mpirun command:

Code: Select all

--mca btl sm,self,openib

Note that these parameters may depend on the OpenMPI installation on your cluster.
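For example, a full launch with these parameters might look like the following. The process count, input/output file names, and the path to the `qb` executable are placeholders for illustration:

```shell
# Run Qbox over InfiniBand instead of the default Gigabit Ethernet transport.
# The --mca btl list selects byte-transfer layers:
#   sm = shared memory (on-node), self = loopback, openib = InfiniBand verbs.
mpirun -np 32 --mca btl sm,self,openib ./qb input.i > output.r
```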