Qbox for effective polarizabilities of large systems
Posted: Fri Sep 27, 2024 1:59 am
Hello,
I am currently trying to use Qbox to compute the effective molecular polarizability of individual water molecules, this time in a periodic system of 1024 water molecules. Essentially, this boils down to computing the MLWF centers around each water molecule and taking a finite difference of the molecular dipoles with respect to an applied electric field, as in my previous topic: viewtopic.php?t=306
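For reference, the quantity being computed is the standard central-difference estimate, where mu is the molecular dipole assembled from the MLWF centers (a charge of -2e per doubly occupied center) plus the ionic point charges, and E is the magnitude of the field applied along direction j:

alpha_ij ≈ [ mu_i(+E) - mu_i(-E) ] / (2E)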
I am currently running my workflow on DOE NERSC Perlmutter, but a single frame of the 1024-molecule system has been running for over 19 hours and has still not finished. Below is the job script I am using:
Code:
#!/bin/bash -l
#SBATCH --time=24:00:00
#SBATCH --constraint=cpu
#SBATCH --qos=regular
#SBATCH --nodes=1
#SBATCH --ntasks=128
#SBATCH --tasks-per-node=128
#SBATCH --cpus-per-task=2
#SBATCH -A m4026
#SBATCH -J Qbox_W1024
export OMP_NUM_THREADS=1
export OMP_PLACES=threads
export OMP_PROC_BIND=spread
# default job name is the job script file name
# To change the job name, submit with: sbatch -J jobname file.job
module load PrgEnv-gnu cray-fftw
PROJECTDIR=/global/cfs/projectdirs/qbox
exe=$PROJECTDIR/bin/qbox-1.76.3_prl
export XERCES_C_DIR=$PROJECTDIR/software/xerces/xerces-c-3.1.4_gnu
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$XERCES_C_DIR/src/.libs
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PROJECTDIR/software/AMD/amd-libflame/lib/LP64:$PROJECTDIR/software/AMD/amd-blis/lib/LP64
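# -nstb sets the number of blocks of states in the Qbox process grid
# (currently 2; value still to be tuned, see note below the script)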
export QBOX_OPTS='-nstb 2'
cd /pscratch/sd/f/frankhu/qbox_calcs/092524_W1024_test_frame/W1024_single_frame_test
echo $PWD
infile=inp_0.i
outfile=inp_0.r
srun --cpu_bind=cores $exe $QBOX_OPTS $infile > $outfile
This job requests 128 MPI tasks with one physical core per task, essentially consuming an entire Perlmutter CPU node. Looking at the output file, the value of np2v is 360, so I have set nstb to 2 for now.
Besides running timing tests to optimize the value of the nstb parameter, are there any other suggestions for accelerating the calculation? Ideally, I would like to use Qbox to compute the MLWFs (and consequently the polarizabilities) of many such frames, not just a single one. Any help would be greatly appreciated, and thank you in advance!
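For the nstb timing tests, I was planning a simple scan along these lines (just a sketch; the candidate values are guesses, and I am assuming the task count should be divisible by nstb):

Code:
# rough timing scan over nstb (values are guesses, not tested)
for nstb in 1 2 4 8; do
    srun --cpu_bind=cores $exe -nstb $nstb inp_0.i > timing_nstb${nstb}.r
done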