PrePoMax has 6 types of solvers.
I read the manual, but the characteristics of the 6 solvers are not described there.
Which solver is suitable for static analysis, contact analysis, and non-linear analysis?
The descriptions below are extracted from the reference document (p. 538):
SPOOLES is very fast, but has no out-of-core capability: the size of the systems you can solve is limited by your RAM.
PARDISO is the Intel proprietary solver and is about a factor of two faster than SPOOLES.
PaStiX is from INRIA. It is really fast and can use the GPU, giving an acceleration by a factor between 3 and 8 compared to PARDISO.
ITERATIVE CHOLESKY: better convergence and maybe shorter execution times; however, it requires additional storage.
ITERATIVE SCALING: a last resort when you are short of memory.
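For reference, in a CalculiX input deck the solver is chosen per step with the SOLVER parameter on the step procedure card (PrePoMax writes this keyword when you change the solver in the analysis settings). A minimal sketch, using the solver names from the ccx manual:

```
*STEP
** SOLVER can be SPOOLES, PARDISO, PASTIX,
** ITERATIVESCALING or ITERATIVECHOLESKY
*STATIC, SOLVER=PASTIX
** ... loads and boundary conditions for the step ...
*END STEP
```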
So it depends on your resources and how large your model is. And right, in my experience with large multipart contact analyses with plasticity, PaStiX converges faster than MKL Pardiso.
I want to ask if all of these solvers are available in PPM because the last time I tried changing the default one (not sure if I attempted Spooles or Pardiso instead of Pastix) I received an error message saying that this solver was not found.
The PaStiX and Spooles solvers are ready to run from the official distribution. The same goes for Pardiso; however, it requires additional dynamic-link libraries (DLLs) from Intel MKL to be copied into the same directory as the executable.
My notes above are for the Intel MKL 2020/2021 versions. Sorry, I have not yet updated to the latest version (2022). But it seems the folder you showed contains only extracted files, not an installation folder; is this right?
Another easy and straightforward method is to use the 'search' feature of Windows Explorer in the active or selected directories.
**edited
Reading other forums, for the latest version (2022) the substituted DLLs need their suffix changed from 1 to 2:
libiomp5md
mkl_core.2
mkl_def.2
mkl_intel_thread.2
mkl_rt.2 or mkl_rt.1(by renames)
mkl_mc3.2
mkl_avx2.2 or mkl_avx512.2 (choose one)
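As a quick sanity check before launching the solver, a small script can report which of the DLLs listed above are actually present next to the ccx executable. This is only a sketch: the helper name check_mkl_dlls is hypothetical, and the file names assume the MKL 2022 naming from the list above.

```shell
# Sketch: report which of the MKL DLLs listed above exist in a folder.
# check_mkl_dlls is a hypothetical helper, not part of PrePoMax or MKL.
check_mkl_dlls() {
  dir="$1"
  for f in libiomp5md.dll mkl_core.2.dll mkl_def.2.dll \
           mkl_intel_thread.2.dll mkl_rt.2.dll mkl_mc3.2.dll mkl_avx2.2.dll; do
    if [ -e "$dir/$f" ]; then
      echo "found   $f"
    else
      echo "missing $f"
    fi
  done
}

# Example: check the current directory (replace "." with your ccx folder).
check_mkl_dlls "."
```

Any "missing" line points at a DLL still to be copied (or renamed, as with mkl_rt).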
Note: environment variable settings play an important role (e.g. processor type, Intel (generation) or AMD; amount of RAM; drive type/speed and free space).
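One concrete example of such settings is the thread count: the ccx documentation describes OMP_NUM_THREADS and CCX_NPROC_EQUATION_SOLVER for controlling multithreading. A minimal launch-script sketch, where the thread count and job name are placeholders:

```shell
# Sketch of a ccx launch script; the values below are placeholders.
export OMP_NUM_THREADS=8             # threads for multithreaded code paths
export CCX_NPROC_EQUATION_SOLVER=8   # threads used by the equation solver
echo "solver threads: $CCX_NPROC_EQUATION_SOLVER"
# ccx -i model                       # uncomment on a machine with ccx installed
```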
@synt I tried the newer MKL library (2022.0.1). However, I get a segmentation fault error (see below). Have you tried the newer version at all?
[Thread 4600.0x1f18 exited with code 0]
Factoring the system of equations using the symmetric pardiso solver
number of threads = 16
gdb: unknown target exception 0xc06d007e at 0x7ffd6ecc4fd9
Thread 1 received signal ?, Unknown signal.
0x00007ffd6ecc4fd9 in RaiseException () from C:\WINDOWS\System32\KernelBase.dll
IMHO the best practical information about CCX solvers is available at this Mecway forum page. There you can find detailed and relatively easy-to-follow instructions on how to compile the Out-Of-Core Pardiso version.
@i_rokach I read the pages and graphs via Google Translate, and also ran some comparisons on my machine with the Iterative Cholesky solver, but its running time was not faster than PaStiX as described there.
Another curiosity: did you manage to compare the Out-Of-Core and dynamically linked versions of the Pardiso solver?
It is typical for the performance of iterative methods to be highly problem-dependent. In my case, a very simple static problem was solved; for this reason the Cholesky solver was abnormally good. That is an educational web page and some of its examples are purely educational. When more real-life FE models are used, the performance of the CCX iterative solvers is usually poor. BTW, the PaStiX solver is a mixed direct-iterative one. This is one of the reasons it is so good for small and medium size problems.
I have never tried to compare OOC and in-core versions of Pardiso.
Thanks for sharing your experiences. I have not personally tried very big models; however, another user reported success on large problems with the "_i8" subset of the PaStiX builds.