Hi,
I am trying to learn ITensor in Julia, and I plan to use an HPC cluster to run my code. My problem is as follows -
I am using the dmrg code available on the ITensor documentation website. When I run the code through SBATCH it takes much longer (more than 10 times longer) than when I run it from a bash script or directly with the command julia filename.jl. Following is the code that I have used -
using ITensors
using Dates
t_start = time_ns()
N = 100 # number of sites
sites = siteinds("S=1",N) # create N sites with spin 1
os = OpSum() # create an empty operator sum
for j=1:N-1
  global os += "Sz",j,"Sz",j+1
  global os += 1/2,"S+",j,"S-",j+1
  global os += 1/2,"S-",j,"S+",j+1
end
H = MPO(os,sites) # create the Hamiltonian MPO
psi0 = randomMPS(sites,10) # create a random initial wavefunction in MPS form
println("time taken to create MPO and MPS = ", (time_ns() - t_start)/1e9, "s")
nsweeps = 5 # number of sweeps to perform
maxdim = [10,20,100,100,200] # max bond dimension to keep after each sweep
cutoff = [1E-10]
t_dmrg_start = time_ns()
energy, psi = dmrg(H,psi0; nsweeps, maxdim, cutoff);
println("time taken to run DMRG = ", (time_ns() - t_dmrg_start)/1e9, "s")
Following is the serial job submission script that I use -
#!/bin/bash
# Job name
#SBATCH --job-name=test_dmrg2
#
# Set partition
#SBATCH --partition=short
#
# STDOUT file; "N" is node name and "j" is job id number
#SBATCH --output=error_files/%x_check%N_%j.out
# STDERR file; "N" is node name and "j" is job id number
#SBATCH --error=error_files/%x_check%N_%j.err
#
# Number of processes
# SBATCH --ntasks=1
# Number of nodes
#SBATCH --nodes=1
# Memory requirement per CPU
#SBATCH --mem-per-cpu=50G
#
# Total wall-time
### SBATCH --time=06:30:00
#
# Uncomment to get email alert
# SBATCH --mail-user=tamoghna.ray@icts.res.in
# SBATCH --mail-type=ALL
#SBATCH --array=0
time julia ~/MPS/test.jl $SLURM_ARRAY_TASK_ID
date
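One thing I am not sure about is whether the Slurm job actually gets the same number of CPU cores as a direct run, since the script above does not request CPUs explicitly. Would it make sense to add something like the following? (The value 16 is just an arbitrary example, and the exported variables are the usual Slurm/Julia/OpenMP environment variables, not anything ITensor-specific.)

# Request a fixed number of cores for the single task
# (this directive would go with the other #SBATCH lines at the top)
#SBATCH --cpus-per-task=16
# Let Julia and the underlying BLAS/OpenMP libraries use the allocated cores
export JULIA_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
time julia ~/MPS/test.jl $SLURM_ARRAY_TASK_ID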
You can find the details of the HPC used here.
Please let me know if there is something that I am doing wrong, or if this is expected behavior.