I am facing some problems with memory allocation on the HPC cluster at my institute. Each node in this cluster has 44 GB of memory and eight CPUs (NProcShared=8), so if I increase the number of nodes (LindaWorkers), the available memory should grow by 44 GB per additional node.
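For context, the Link 0 section of my input looks something like this (node1 and node2 are placeholders for whatever hostnames the scheduler assigns me):

%NProcShared=8
%LindaWorkers=node1,node2
%mem=44GB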
Now, if I try to request all of a single node's memory with the input line:
%mem=44GB
I get the error:
Number of steps in this run= 648 maximum allowed number of steps= 648.
Out-of-memory error in routine Optmz4 (IEnd= 6770633057 MxCore= 5905580032)
Use %mem=6457MW to provide the minimum amount of memory required to complete this step.
Error termination via Lnk1e in /opt/software/g09/l103.exe at Wed Mar 6 04:08:32 2013.
Job cpu time: 0 days 0 hours 6 minutes 17.0 seconds.
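If I am reading the units correctly, these numbers are in 8-byte words: MxCore= 5905580032 words × 8 bytes is exactly the 44 GB I requested, while IEnd= 6770633057 words is about 50.4 GB, which is what this step apparently needs. The suggested %mem=6457MW (6457 × 2^20 words × 8 bytes) likewise comes to roughly 50.5 GB, more than any single node here has.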
When I try to use 2 nodes (so 88 GB should now be available to me) with the input line:
%mem=44GB
I get the same error:
Number of steps in this run= 648 maximum allowed number of steps= 648.
Out-of-memory error in routine Optmz4 (IEnd= 6770633057 MxCore= 5905580032)
Use %mem=6457MW to provide the minimum amount of memory required to complete this step.
Error termination via Lnk1e in /opt/software/g09/l103.exe at Thu Mar 7 10:42:59 2013.
Job cpu time: 0 days 0 hours 6 minutes 3.6 seconds.
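The IEnd and MxCore values are identical to the single-node run, so it looks as though %mem is interpreted per Linda worker rather than as a total across nodes, meaning that adding workers does not raise the limit for this step.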
When I try to use 3 nodes (so 132 GB should now be available to me) and change the input line to:
%mem=132GB
This ends with the error:
Leave Link 1 at Fri Mar 8 01:49:58 2013, MaxMem=17716740096 cpu: 0.8
galloc: could not allocate memory.
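Again assuming 8-byte words, MaxMem=17716740096 words × 8 bytes is exactly 132 GB, so it seems Link 1 tried to allocate the entire 132 GB on a single node (which only has 44 GB), and the allocation failed.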
Some system details:
Operating System: Red Hat Enterprise Linux Server release 5.4
Processor: Intel Xeon Nehalem processors (2.93 GHz) with 8 cores per node
Can anyone help me out with this?