FastFieldSolvers Forum
 FasterCap not going out of core

chgad

Posted - May 03 2019 :  10:08:11
Hello everyone,

I have encountered a setup where my machine's available RAM (approx. 27 GB) isn't enough to solve the problem, and at a certain point of FasterCap's discretization process my machine halts (losing all information about previous results).

This is the moment where I re-read the FasterCap documentation and thought about modifying the "Out-of-Core free memory to link memory condition" option -f. But I'm not quite sure I understand it correctly:

Let the value for -f be n.

Whenever FasterCap creates a new discretization block, it checks whether n * (link memory) is greater than the free RAM.
If this condition is fulfilled, FasterCap goes out of core; if not, it allocates RAM.

Is this understanding correct?

If so, I still encounter FasterCap causing my machine to halt, again losing all previous results.

Furthermore, I read in a post on this forum about modifying the "Direct potential interaction coefficient to mesh refinement ratio" option -d. This parameter should control how many panels are considered when calculating the contribution of another panel.

Is this correct?

I'd really like to understand those options to get my models working.
Processing time isn't really a problem right now and I'm well aware that going out of core will increase the time needed.

Thanks in advance for any advice and answers.

Enrico

Posted - May 06 2019 :  13:17:19

quote:
Whenever FasterCap creates a new discretization block, it checks whether n * (link memory) is greater than the free RAM.
If this condition is fulfilled, FasterCap goes out of core; if not, it allocates RAM.


Yes, this is correct, except that the check is not done when creating a new discretization block: it is done when estimating the memory required for the total of the links for a given discretization and interaction value set.
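
Said more explicitly, the decision can be pictured roughly like this (a minimal sketch with made-up names, not FasterCap's actual source code):

    # Rough sketch of the -f out-of-core condition; the names are hypothetical.
    def should_go_out_of_core(link_memory_estimate, free_ram, f):
        # Go out-of-core when f times the estimated link memory exceeds the
        # free RAM reported by the OS, i.e. when the links would need more
        # than 1/f of the free memory.
        return f * link_memory_estimate > free_ram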

However, you need to consider that with modern operating systems (i.e. all of them), memory management is fully in charge of the OS, which usually also leverages some swap space on the hard disk. So the sum of the memory required by all applications / processes can be larger than the actual physical memory, as the OS virtualizes it and swaps the least-used sections of memory to the hard disk. Unfortunately, for out-of-core applications this is an issue if you cannot instruct the OS to avoid the swap. You want to control that because the intended serialization of the memory out-of-core is much more efficient than the generic (even if intelligent) algorithms used by the OS to handle memory allocation requests. So instead you end up getting closer to the physical memory limit BEFORE going out-of-core (as the OS still accepts memory allocation requests and/or reports more free memory than it actually has), and you start using the swap, with a great slow-down. Only when the swap is no longer sufficient do you go out-of-core, but this is doubly slow, as the memory you are carrying out of core is actually sitting in the hard disk swap file / partition.

This is the ultimate reason why the condition to go out-of-core in FasterCap triggers when the links need only a *fraction* of the overall free memory as reported by the OS. You may try to lower this threshold, i.e. increase the -f value. Usually a value of 5 will do, but it really depends. The con of this approach is of course that you go out-of-core much earlier than needed, with an overall unneeded slow-down. Note, by the way, that if you have an SSD hard disk the penalty for going out-of-core is much reduced, as access is faster.
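
As a rough worked example of what raising -f does (the numbers below are purely illustrative, not measurements from FasterCap):

    # Illustration only: how the -f value shifts the in-core threshold,
    # assuming the condition f * link_memory > free_memory discussed above.
    free_ram_gb = 27.0               # free RAM reported by the OS
    for f in (2, 5, 10):             # example -f values (not defaults)
        threshold_gb = free_ram_gb / f
        print(f"-f {f}: links above ~{threshold_gb:.1f} GB trigger out-of-core")

So with about 27 GB free, -f 5 makes FasterCap go out-of-core once the links need roughly 5.4 GB, well before the OS would start swapping.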

quote:
"Direct potential interaction coefficient to mesh refinement ration -d". This parameter should control the "correlation" of how many
panels are considered when calculating the contribution of another panel.


Yes, or said in other words, how many 'links' per panel are considered.

However, if you still run out of memory even after reducing the number of links and going out-of-core, the problem possibly lies in an excessive number of panels altogether. The out-of-core algorithm serializes the links (which are usually linear in the number of panels, but through a multiplication coefficient, so they are the dominant contribution; if you cannot fit the panels, you certainly cannot fit the links in memory, while the vice versa is most of the time possible). So if you end up with too many panels, I'm afraid you cannot solve your problem with your current memory configuration.
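
To get a feeling for why the panel count itself can be the real limit, here is a back-of-the-envelope estimate; the per-link and per-panel byte counts are invented for illustration and are not FasterCap's internal figures:

    # Hypothetical sizes, for intuition only (not FasterCap internals).
    num_panels = 5_000_000
    links_per_panel = 100            # roughly governed by the -d interaction ratio
    bytes_per_link = 8               # e.g. one double per interaction coefficient
    bytes_per_panel = 200            # geometry and bookkeeping per panel (made up)

    link_gb = num_panels * links_per_panel * bytes_per_link / 1e9
    panel_gb = num_panels * bytes_per_panel / 1e9
    print(f"links:  ~{link_gb:.0f} GB (can be serialized out-of-core)")
    print(f"panels: ~{panel_gb:.0f} GB (must still fit in RAM)")

The links dominate and can be moved out-of-core, but the panels have to stay resident, so beyond a certain panel count no -f or -d setting will save the run.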

Note however that if you select the '-i' and '-v' options you should get more detailed information about what is happening, including the actual memory consumption. This should help you understand where the bottleneck is.

Best Regards,
Enrico