The regenerate button will regenerate a little too much

I usually re-use a previously working analysis by doing “save as” to a new name, then removing the geometry and loading a new model. Well, I just tried the regen button, and it seems to have re-created the old deleted geometry, re-meshed it, re-deleted it, re-imported the new geometry, and then re-meshed it. I wouldn’t consider it a high priority, just thought to make a note of it.

Hm, Regenerate is there to regenerate the whole model. At the moment it repeats all the steps the user performed from the point of creating a new model, so it works as intended. Deleting the geometry leaves a lot of things behind, like sets, materials, steps, … And it is not possible to determine which user actions were used to create them and keep those in the regenerate procedure.
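For illustration, the replay behaviour described above can be pictured as a simple command history that is re-executed from an empty model. This is a hypothetical sketch, not PrePoMax’s actual code or API; all class and method names here are made up:

```python
# Hypothetical sketch of a command-history "regenerate": every user action is
# recorded as a command, and Regenerate replays the full list in order,
# including steps whose effects were later removed by a delete.

class Command:
    def __init__(self, name):
        self.name = name

    def execute(self, model):
        model.append(self.name)  # stand-in for the real side effect

class History:
    def __init__(self):
        self.commands = []

    def record(self, cmd):
        self.commands.append(cmd)

    def regenerate(self):
        model = []  # start again from an empty model, as described above
        for cmd in self.commands:
            cmd.execute(model)
        return model

history = History()
for step in ["ImportGeometry(old)", "Mesh(old)", "DeleteGeometry(old)",
             "ImportGeometry(new)", "Mesh(new)"]:
    history.record(Command(step))

# Regenerate replays everything, including the deleted geometry's steps:
print(history.regenerate())
```

This matches the behaviour reported above: the old geometry is re-imported and re-meshed before being re-deleted, because those commands are still in the history.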

Any idea on how to regenerate less?

I was thinking that since I use “save as” to a new file, for example, maybe it could take a snapshot of what is there currently and cull the rest? It certainly would not be good to do a full regen during save; that would feel really clunky/slow. Definitely, if I delete an item, like a BC or a part of the geometry, I wouldn’t want to see it in the regen if at all possible. But this is just my opinion; I don’t really know what exactly the regen button is actually useful for when it comes down to FEA. I literally just pressed the button by accident, trying to figure out what it did. There may be a really obvious reason for having it that I just don’t know about.

One other interesting thing that could be done is a “replicate study” button. Instead of creating a whole new file and deleting the geometry step by step, this replicate button would remove the history and open a settings window with check boxes where you could choose whether to delete geometry, analyses, results, etc., so you can pick what to keep in the copy. Basically some sort of generic template generator for new studies similar to previous ones. You could keep the same materials and parts, for example, and do everything else differently.

Just ideas of mine.

Regenerate was initially added to enable macro recording and automation. The idea behind it is that a user can create a model (and record a macro at the same time) and then use the same steps to create a model on another geometry (Regenerate using other files). The problem at the moment is that the user cannot change any recorded commands since there is no “command editor”. So for now, it only serves for debugging by repeating all commands.

At the moment, there is no way to record a “user macro” that would contain only the commands the user would like to repeat at other times, like the creation of a material…

Well, it is not so straightforward to determine all the commands connected to an item that a user deleted and remove them (the commands). Many commands could be involved if the user modified, propagated… the item.
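To illustrate why pruning is hard: a deleted item’s commands cannot simply be dropped, because later commands may depend on things those commands created, so the whole chain has to be traced. The following is a hypothetical sketch with invented command data, not how PrePoMax actually stores its history:

```python
# Hypothetical sketch: each recorded command creates one item and may use
# items created earlier. Removing a deleted item's commands requires walking
# this dependency chain, and any modify/propagate command that touched the
# item would drag in further commands still.

commands = [
    {"id": 1, "creates": "geometry", "uses": []},
    {"id": 2, "creates": "mesh",     "uses": ["geometry"]},
    {"id": 3, "creates": "set",      "uses": ["mesh"]},
    {"id": 4, "creates": "material", "uses": []},
    {"id": 5, "creates": "section",  "uses": ["set", "material"]},
]

def dependent_commands(deleted_item):
    """Return ids of all commands transitively depending on deleted_item."""
    tainted = {deleted_item}
    affected = []
    for cmd in commands:  # commands are in recorded (chronological) order
        if any(u in tainted for u in cmd["uses"]):
            tainted.add(cmd["creates"])
            affected.append(cmd["id"])
    return affected

# Deleting the geometry taints the mesh, the set, and the section,
# but not the material, which was created independently:
print(dependent_commands("geometry"))  # [2, 3, 5]
```

Even in this toy version, deleting one item invalidates three later commands, while independent ones (the material) survive, which is why an automatic cleanup is not trivial to get right.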

I don’t think this is a bug, so I will change the category of this topic.

Does the history keep adding to the model size? For example, does it store all the meshes I have tried, or does it only store the way to re-create those meshes from scratch?

Just wondering if my files are getting large due to history.

The history does contain some additional information, and there was an issue with the file size increasing due to some non-empty pointers. But I thought I fixed that a while ago. Can you share a file that got really big?

I think the regenerate button does have some drawbacks. A simple example: I created a bar as an assembly of 1 mm diameter by 0.5 m height pucks, each puck a different material. I apply a simple compressive displacement. Initially I make the mesh really fine and run it for 14 hours. As it completes, it generates a 1 GB .pmx file and a 3 GB .frd file. That was fantastic, by the way. But then I wanted to compare the same result with fewer cells. So I reduce the cells to a few thousand, but my .pmx file is still 1 GB, which makes subsequent work as slow as working with the larger model. I think the issue is that the history is retained, so the file just keeps getting bigger?
What are your thoughts on this? Could we get a “keep history” button?

In my experience, a large .pmx file is caused by saving results in the file. If I go to the File menu and select “Close All Results” the results are deleted from the .pmx file and it returns to a reasonable size. The results can still be viewed by opening the .frd file.

But I mean, the file is still very large after running the analysis with the smaller number of elements. I do understand what you are saying, though; I did set it to save the results in the .pmx. If not, I thought there was currently a bug in version 1.3.5.1?

This may or may not help:

I also save results in the .pmx file. But I have noticed that if you run multiple analyses (with different names) from the same .pmx file, the results of all analyses are retained in the .pmx file until they are closed. Consequently, if you run the smaller analysis without closing the results from the large analysis, the results of both analyses are still stored in the .pmx file resulting in a large file. The results from a single analysis can be closed, or all results can be closed at the same time.

Okay, this may be it. I will have to try.

PrePoMax stores multiple results in the newest versions. That means you can have multiple results open, and if one of them is huge, the .pmx file will be huge and the program’s performance will also suffer. You can close all results, or just the currently viewed results, to reduce the file size.

To reduce the results file size, some measures were taken in version 1.4.0, and you can also delete some of the result fields, like strains, if you will not need them later on.

I was able to test this in version 1.3.5.1 today, and it seems to work well. Thank you guys so much!