Being somewhat obsessed with MOO… and the high volume of models needing to be run when using E+ with MOO…
I had a question/idea:
I’ve used Ensims JESS before; they provide an ‘accelerated annual simulation’ option that breaks an 8760-hour model into 12 monthly models, runs them all individually, and concatenates/sums the results into a single E+ results file.
The question/idea is about applying the same approach, en masse, to sets of models for MOO:
Let’s say the starting population is 100:
run the 1,200 ‘deconstructed by month’ models simultaneously, then continue with the next generations produced from the results, etc…
Is that something that could perhaps be facilitated (setting aside any MOO specifics, as I’m still researching the applicability of this approach to MOO (lol))?
Understanding there may be some deviation in the ‘monthly convergence’ of 12 post-sim-combined models vs. a single 8760-hour sim: the notion is that, at the stages where this would be applied, the theoretical loss in ‘accuracy’ (I have faith in the E+ convergence deities) would be very acceptable.
If you’ve made it this far: thanks for reading the ideas of a nutter!
This is something that can be done as a recipe. After creating the IDF file, you need a for loop that loops through the different months as analysis periods. I’m not sure we want to support this as an official Ladybug Tools recipe, exactly for the reason you mentioned: it comes with some inaccuracies, and I would not use it unless you are only looking for monthly results.
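As a rough sketch of that loop: the snippet below just generates 12 `RunPeriod` objects as raw IDF text, one per calendar month. Treat the field order as an assumption — it varies between E+ versions, so check your IDD — and note a real recipe would more likely edit the object with opyplus, eppy, or honeybee-energy’s simulation-parameter objects rather than formatting strings.

```python
import calendar

# Sketch only: RunPeriod emitted as raw IDF text. The field order below
# loosely follows recent E+ versions -- verify against your IDD before use.
RUN_PERIOD_TEMPLATE = """\
RunPeriod,
  Month{month:02d},   !- Name
  {month},            !- Begin Month
  1,                  !- Begin Day of Month
  ,                   !- Begin Year
  {month},            !- End Month
  {last_day},         !- End Day of Month
  ;                   !- End Year
"""

def monthly_run_periods(year=2021):
    """Return 12 RunPeriod strings, one per calendar month of the given year."""
    return [
        RUN_PERIOD_TEMPLATE.format(
            month=month, last_day=calendar.monthrange(year, month)[1]
        )
        for month in range(1, 13)
    ]
```

Each string would replace the annual `RunPeriod` in a copy of the base IDF, giving the 12 ‘deconstructed by month’ models (1,200 of them for a population of 100).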
@chriswmackey, what are your thoughts about this?
I’ve done the acceleration via multi-runs with Jupyter and opyplus… in a directory with 12 E+ install dirs 🤣
I know where the recipe templates are on git; I wish I’d connected the dots that a recipe would facilitate this, otherwise I’d not have shown up empty-handed.
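For what it’s worth, the dispatch side of those multi-runs doesn’t strictly need separate install dirs: since each job is its own OS process, a simple thread pool shelling out to one `energyplus` executable with a per-job output directory works. This is a sketch assuming a reasonably recent E+ on `PATH` (the `-w`/`-d` flags are the standard weather-file and output-directory options of the E+ CLI); the file names are placeholders.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_job(cmd):
    """Run one command line and return its exit code."""
    return subprocess.run(cmd, capture_output=True, text=True).returncode

def run_all(commands, max_workers=12):
    """Run many independent simulations concurrently. Threads are fine here:
    each job spawns its own OS process that does the heavy lifting."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_job, commands))

# Hypothetical job list: one command per monthly IDF.
commands = [
    ["energyplus", "-w", "weather.epw", "-d", f"out_{m:02d}", f"model_{m:02d}.idf"]
    for m in range(1, 13)
]
# exit_codes = run_all(commands)  # not executed here; requires E+ on PATH
```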
Hey @tfedyna and @mostapha ,
If you wanted to create your own recipe for this, generating the simulation parameters with the 12 run periods should be pretty straightforward. The potentially tougher part would be merging the results back together, but there are some functions you could already use to process multiple .sql files into a combined value, like the energy-use-intensity function.
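As a rough illustration of that merge step, one could sum a single meter or variable across the 12 monthly .sql files with plain sqlite3. The table and column names below (`ReportData` joined to `ReportDataDictionary`) are my reading of the standard E+ SQLite output schema, so verify them against the Output Details documentation for your version; the existing honeybee .sql-processing functions would be the more robust route.

```python
import sqlite3

# Assumed E+ SQLite schema: ReportData holds the timestep values,
# ReportDataDictionary maps each series to its variable/meter name.
QUERY = """
SELECT SUM(rd.Value)
FROM ReportData rd
JOIN ReportDataDictionary rdd
  ON rd.ReportDataDictionaryIndex = rdd.ReportDataDictionaryIndex
WHERE rdd.Name = ?
"""

def combined_total(sql_paths, variable_name):
    """Sum one output variable/meter over several E+ .sql result files."""
    total = 0.0
    for path in sql_paths:
        with sqlite3.connect(path) as conn:
            value = conn.execute(QUERY, (variable_name,)).fetchone()[0]
            total += value or 0.0  # a month may lack the variable entirely
    return total
```

Something like `combined_total(monthly_sqls, "Electricity:Facility")` would then stand in for the annual total of the single 8760-hour run.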
My thoughts on this as an official LBT recipe are so-so, and I’m inclined to follow the direction that the E+ developers took with this type of parallelization. That is, this type of parallelization was natively supported in E+ 7.2, but the developers decided to abandon it in EnergyPlus 8.0 because it really didn’t change the simulation time by much. Given that the EnergyPlus warmup calculation is so long, you usually don’t get a speed increase of more than 1.5x to 2x with this method. Taken altogether, the small speed increase just doesn’t seem to be worth the drop in accuracy. Or, to put it another way, you might be better off testing other ways to cut the simulation time down that also affect accuracy, for example using a lower timestep (e.g. 4 per hour instead of 6).
The other reason why I don’t want to sink too much time into this is that the E+ 10X project has found a much better way to make use of parallel processing in E+ that does not change the simulation results at all. From what I understand, the calculation involves some zone-by-zone parallelization and then brings the results back together when it comes time to run the building-wide heat transfer calculation. From their tests, the speed increases they are getting seem a bit better than the 12-months method, around 3x (depending on the model, of course). I’m not sure if their improvements have made it into E+ 9.6 but, if not, they should be implemented soon. So I’d rather focus on preparing to support that than try to recreate the old E+ 7.2 parallelization methods.
I’ve got lots of tooling I’ve put together (somewhere… I need to purge my HDs lol) from when I was using Ensims, for post-processing/concatenating broken-up sims and pulling stuff from the *.sql files and other outputs.
thanks devs lol.
Yeah… for context: I was using Ensims for some enormous DesignBuilder models with ridiculous HVAC systems chock-full of EMS, and with how oddly heavy and slow DB models simulate, the time difference was notable; but we’ve stopped all work with DB (yay lol).
HOLY CRAP, I read the presentation at the bottom of the E+ 10X hyperlink like 5 times last night! That’s so exciting; I’m super happy.
I’d been meaning to put a Linux VM together to try some Nvidia HPC compilers, to see if perchance standard E+ would be any faster (probably not, since it isn’t currently coded to utilize them).
That’s so exciting! After the past couple of years of trying to figure out how to make E+ sim as fast as possible, this has to be one of my favorite reads ever! Super stoked.
Side note: some gouge I just got: the Athena Impact Estimator should have their web app and API up ~'22 Q3, so that’s cool.
But yeah, if that’s the case then, other than my own curiosity, there’s no tasking for me to do the 12-month acceleration, so I’m probably just going to wait on E+ 10.