Hello everyone.
I ran a thermal simulation with and without the amplitude function. I set all the values in the Amplitude table to 1, so I should get the same result as with the Amplitude set to Default. But the results are not the same!
Should I look at the construction level of my model?
In the attached drawing, on the left you can see the Amplitude table applied to the load of a BodyFlux. At the bottom left are the temperature values obtained at point 534.
On the right, the Amplitude field is set to Default, and at the top the resulting graph is plotted at the same point.
It seems to me that the two curves should have been the same. What don’t I understand?
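Here is a minimal sketch of the kind of input I am comparing (the amplitude name, times, element set and flux value are made up, and the mesh, material and initial conditions are left out), just to show the idea that a table of constant 1.0 values should behave like the Default amplitude:

*AMPLITUDE, NAME=A1
0., 1., 10., 1., 20., 1., 30., 1.
*STEP
*HEAT TRANSFER
1., 30.
** run 1: body flux scaled by the constant amplitude table
*DFLUX, AMPLITUDE=A1
Eall, BF, 1000.
*NODE FILE
NT
*END STEP
** run 2: identical, but *DFLUX without the AMPLITUDE parameter (Amplitude = Default)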
OK, thank you. I think I have understood the role of the ramp in establishing the steady state of the system, as you proposed. The fact remains that I don’t understand a result showing curves with temperatures of more than 500 degrees in a transient simulation, which makes no sense in my opinion.
Maybe my simulation is not consistent either.
Thank you for your patience, I will make a basic case to better understand what is happening.
Daniel.
Is this a ccx bug, or maybe some missing card from PrePoMax?
When using the Default incrementation, ccx doesn’t manage the amplitude card correctly.
From my point of view, it should at least use a maximum time increment equal to the spacing of the amplitude sample points, right? That way it would properly sample the user-defined points. Otherwise, if it can solve the problem in just a few increments, it does so and ends up solving a completely different problem.
Abaqus does the same; it’s the user’s responsibility to adjust the incrementation and output frequency accordingly:
If the amplitude varies rapidly—as with the ground acceleration in an earthquake, for example—you must ensure that the time increment used in the analysis is small enough to pick up the amplitude variation accurately since Abaqus samples the amplitude definition only at the times corresponding to the increments being used.
I guess that’s for custom user settings.
But Default = Automatic, right?
Meaning the software assumes control and is supposed to do it “better”.
Imagine it’s your first time driving. Your autopilot tells you, “Be careful, you must drive slower than 60 on this road or you might have an accident.”
You respond, “OK, I think it’s better if you do it for me this time. I’m not experienced enough yet.”
One would expect the automatic driver not to drive faster than 60. Anything else doesn’t make sense.
It can be as easy as using the same time steps as in the sample points of the amplitude card.
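As an illustration (the amplitude values and the 2 s spacing below are made up), capping the maximum increment at the sampling interval of the amplitude points would look like this:

*AMPLITUDE, NAME=LOADCURVE
0., 0., 2., 0.5, 4., 1., 6., 0.5
8., 0., 10., 1.
*STEP
*HEAT TRANSFER
** initial increment, time period, minimum increment, maximum increment = 2 s (the amplitude spacing)
0.5, 10., 1.e-5, 2.
*DFLUX, AMPLITUDE=LOADCURVE
Eall, BF, 1000.
*END STEP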
In Abaqus, there’s no Default setting but those are the values used if the user doesn’t change anything:
*Heat transfer
1, 1, 1e-5, 1
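(For reference, the fields on that data line are: initial time increment, step time period, minimum increment and maximum increment.)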
So PrePoMax and Abaqus use pretty much the same settings. And if you leave those default settings in Abaqus but use an arbitrary amplitude, it may solve the problem in one increment and skip the amplitude values too.
Maybe it should be “smarter”, but if Abaqus does that, I wouldn’t blame CalculiX too much.
Don’t get me wrong, I’m not saying anybody isn’t smart enough. You know I’m not referring to that. I mean we can improve that.
Note that those parameters are completely different.
With 1e30, ccx will always try to advance as fast as possible, which forces it to solve in one increment if possible. Abaqus at least limits that value to 1, so if the user enters 32 seconds for the transient, there will be at least 32 time increments, solving it correctly.
I don’t know your inputs, but Abaqus also gives a different result in an analysis with an amplitude definition when the default incrementation settings are used (and it can solve the problem in one increment), compared with the case where the incrementation is manually adjusted to fit the amplitude.
OK. If PrePoMax is following the ccx default values, then the ccx forum is a more appropriate place to raise this issue.
I think the “default” incrementation scheme should help the user in some way.
If someone defines an amplitude card, from my point of view, it anticipates that the load is introduced into the system in some peculiar way.
A linear approach advancing with a time step as large as possible may not be the best.
I would suggest using the user’s time discretization from the Amplitude card.
Regarding Dapineau’s problem, it can most probably be solved correctly by setting the incrementation to Automatic and choosing a maximum time step that is small enough (e.g. Time period / 50).
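Assuming a 100 s step (so Time period / 50 = 2 s), the step definition would then look something like this:

*STEP, INC=1000
*HEAT TRANSFER
** initial 1 s, period 100 s, minimum 1.e-5 s, maximum 2 s
1., 100., 1.e-5, 2.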
Right. Or maybe even CalculiX GitHub, since Guido rarely checks the forum, and bug reports or improvement ideas tend to get lost there among other posts. GitHub issues are at least kept separate from everything else, and I remind Guido to check them once in a while. On the other hand, the forum is better for collecting opinions when it’s not an obvious bug that just needs to be reported to the dev.
Hello, just as an example of how a non-specialist was able to get by: taking inspiration from your comments, I managed to obtain a satisfactory result. I haven’t checked its “precision”, but at least it seems coherent.
Direct means fixed incrementation, so you force a specific increment size. This should be avoided in most cases since you can, e.g., encounter convergence issues. It would be better to adjust the settings of automatic incrementation - reduce the initial, maximum and minimum increment sizes appropriately.
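To illustrate the difference (the step time and increment sizes here are arbitrary):

** Direct (fixed) incrementation - every increment is exactly 0.5 s:
*HEAT TRANSFER, DIRECT
0.5, 50.
** Automatic incrementation with tightened bounds instead:
*HEAT TRANSFER
0.1, 50., 1.e-6, 1.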