I’m thinking of getting a machine for work, specifically for FEA with PrePoMax, but I’m not sure what to look for. Should I get a server with 128 cores at 2.5 GHz (3.5 GHz boost), or a desktop with only 24 cores at 3 GHz (6 GHz boost)? I’ve never maxed out the RAM on our current machine, so I’m focusing on core count. I assume faster RAM would let the processor be fed faster, so a higher core count would pay off more? Unfortunately, we would need to run the PrePoMax test to know for sure how many cores would be ideal, but I have yet to see anyone saying they use more than, say, 20 cores efficiently. Right now I am using 16 cores.
And what about the RAM, which is as important as the CPU? If you don’t have enough memory, the system will swap, which means a drastic slowdown.
Indeed, it’s recommended to use as much memory as possible; after that comes processor speed. The number of cores is also known to help accelerate the Pardiso solver.
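If you want to see how much Pardiso actually gains from extra cores, you can cap the thread count per run. A minimal sketch in Python, assuming a multithreaded ccx build that honors the OMP_NUM_THREADS environment variable and is on the PATH; the job name "bracket" is just a hypothetical .inp basename:

```python
import os
import subprocess

# Copy the current environment and cap the solver's thread count.
# Multithreaded CalculiX builds (e.g. Pardiso-linked) read OMP_NUM_THREADS.
env = os.environ.copy()
env["OMP_NUM_THREADS"] = "8"  # try 4, 8, 16, ... and compare wall times

# "bracket" is a hypothetical job name; ccx takes the .inp basename after -i.
subprocess.run(["ccx", "-i", "bracket"], env=env, check=True)
```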
I had a PC custom built for CAE in 2012, still working fine.
RAM wins hands down over n_CPUs & speed.
The main job in a CAE solve is to fit the job in RAM (in-core) first; after that, if the model has been built correctly, you barely care about speed, you fine-tune later. But running out-of-core (not enough RAM) is a total kill-joy.
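One way to make the “fit it in RAM first” rule concrete is to check free physical memory before launching, so a big job never silently pushes the machine into swap. A rough sketch using the psutil package; the 40 GB figure and the 10% headroom are assumptions you’d tune per solver and machine:

```python
import psutil

# Rough estimate of the job's in-core footprint in bytes; in practice you'd
# take this from a previous run's solver log rather than guess.
estimated_job_bytes = 40 * 1024**3  # hypothetical 40 GB model

available = psutil.virtual_memory().available
if estimated_job_bytes > 0.9 * available:  # leave ~10% headroom for the OS
    raise SystemExit("Job would likely go out-of-core (swap); shrink the model or add RAM.")
print(f"OK: {available / 1024**3:.1f} GB free, job needs ~{estimated_job_bytes / 1024**3:.0f} GB")
```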
OK, I see various people voting for more RAM. But at least for PrePoMax, I haven’t gotten to a point where I can reliably think to myself… hmm, this run will solve even if it takes 3 days. That situation borders on just going to the shop and using a force/strain gauge to figure it out in an actual experiment. I exaggerate, but I usually run problems where I don’t want to spend more than 15 minutes waiting for an answer, especially if the answer is that the program failed to compute somewhere. I guess I should then ask: how big/complex a model do any of you run? What is the longest successful run you perform regularly?
As for CPU speed, I think single-core performance is more important than multi-core performance if you are running a single job at a time. If you run multiple jobs, this might be different.
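That trade-off is easy to test: instead of one job on all cores, several independent jobs can each run on a slice of the machine. A sketch under the same OMP_NUM_THREADS assumption as above, with hypothetical job names:

```python
import os
import subprocess

jobs = ["lug_v1", "lug_v2", "lug_v3", "lug_v4"]  # hypothetical design variants
threads_each = 4  # e.g. 4 jobs x 4 threads on a 16-core box

procs = []
for job in jobs:
    env = os.environ.copy()
    env["OMP_NUM_THREADS"] = str(threads_each)
    procs.append(subprocess.Popen(["ccx", "-i", job], env=env))

# Wait for all variants; total throughput can beat one 16-thread run.
for p in procs:
    p.wait()
```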
In FEA it is very hard to summarise model size/complexity vs. time.
Unfortunately, it is very application dependent. What kind of thing are you after?
Linear / non-linear, static / dynamic, contacts / no-contacts?
You can run a linear model (in general, dunno about CCX) with 2M nodes in 20 mins, or take a full day to solve a very non-linear one with a small mesh.
The CalculiX solver seems fast enough at converging. I ran a model with a large number of bolt contacts, including plasticity, in reasonable time: under one hour with only a dual-core processor and 16 GB of RAM. The solver I selected was Pardiso, since PaStiX was not stable at the time, even though it’s known to be faster.
A symmetric load in tension like this took under half an hour to converge.
I usually keep models as small and linear as possible. But now, thinking ahead to new hardware, maybe RAM would become a limitation on large assemblies. Our FEA machine has 128 GB, I think. It’s an HP Z workstation from 2015 or so. But the problem I see with this HP model is the 2 GHz Xeon Gold processor, which is relatively slow compared to our updated workstation model. Sure, it’s got cores galore, but if I assign more than, say, 8 cores to CalculiX, the machine runs the problem significantly slower than the newer single-CPU/32-core machines.
So that’s where my question is really coming from. If we buy machines with dual CPUs and end up only ever using 8 cores out of 32 or more, it would be a wasted investment. I would love to have a reliable benchmark like PassMark, where all CPUs have a nice ranking to compare… but based on PrePoMax.
You probably need to test personally how the number of cores relates to computational time on the same machine, from a single core up to the maximum physically available (something like the sketch below). Different machines can give inconsistent results due to memory type and processor speed. Comparing the Pardiso and PaStiX solvers would also be interesting.
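A simple harness for that scaling test: rerun the same job at increasing thread counts and record wall time. Job name hypothetical; again this assumes the ccx build reads OMP_NUM_THREADS:

```python
import os
import subprocess
import time

job = "benchmark_model"  # hypothetical .inp basename, identical for every run
for n in [1, 2, 4, 8, 16, 32]:  # sweep up to the physical core count
    env = os.environ.copy()
    env["OMP_NUM_THREADS"] = str(n)
    t0 = time.perf_counter()
    subprocess.run(["ccx", "-i", job], env=env, check=True,
                   stdout=subprocess.DEVNULL)
    print(f"{n:3d} threads: {time.perf_counter() - t0:7.1f} s")
```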
BTW, I used quadratic tetrahedral elements with the default medium mesh size in the previous example.