Different Runs Outputting Same Results

Dear @mostapha and @antoinedao,

I’ve been having a huge issue with a parameter study I’m trying to run, in which the results I bring back from different runs within a Pollination job are all identical when they should be different.

After much debugging I’m now thinking the issue might lie with Pollination. I was able to recreate the issue with the simple attached file:

unnamed.gh (102.9 KB)

As you can see, the two upper simulations are run locally, each with a different glazing fraction, which leads to different simulation results as expected.

The lower one is a parameter study where I varied the glazing fraction using the same two values as above (you can confirm this by looking at the data recorder).

  1. The order of runs was WWR=0.3, then WWR=0.4, but I can see that when I fetch the data from Pollination the order is reversed. To prevent confusion, I suggest changing this if possible.

  2. You can see that the results from Pollination are identical for both runs. They are close to the results of the second local one, but not exactly.

Please let me know if I’m missing something, or whether this is a bug.

Hi @Max! As usual thank you for reporting this issue.

Can you try to download the SQL file for each run manually and get the results from the downloaded files instead of using the Grasshopper plugin? I want to see if the issue happens in the Pollination runs themselves or in how the files are downloaded by the Grasshopper plugin.

If it’s easier, you can also add me to your organization and give me access to this project. I see that it’s a public project, so I’ll have a look tomorrow if you don’t get a chance to try this first.

I’ll give my 2 cents here. I had something similar. The reason I found is that the name of the model was the same for all parametric cases, so all the results were the same. I just gave each case a different name and the issue was solved.
Hope it helps,


This worked for me, thanks @ayezioro!

@mostapha, the results are still slightly different from what I get when I run locally, though. Do you know why this is?

Thanks @ayezioro! This is correct. Just to clarify: when you create the runs in Grasshopper, if you don’t rename the model, the models will overwrite each other and only the last one will be uploaded when you submit the job.

The reason for this design is that in a study where the model doesn’t change, we don’t want to upload the same model several times. That said, I see this as a recurring issue in parametric studies on Pollination.
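To make the workaround concrete, here is a minimal sketch of how one might stage a uniquely named model file per parametric case before uploading, so the copies don’t overwrite each other. The function and file names are hypothetical illustrations, not the actual Pollination plugin API:

```python
# Sketch: copy the base model once per case, embedding the varied
# parameter in the file name (e.g. model_wwr_0.3.hbjson) so each run
# uploads a distinct file. Names and paths here are hypothetical.
import shutil
from pathlib import Path

def stage_models(base_model: Path, wwr_values, staging_dir: Path):
    """Return one uniquely named copy of base_model per WWR value."""
    staging_dir.mkdir(parents=True, exist_ok=True)
    staged = []
    for wwr in wwr_values:
        target = staging_dir / f"{base_model.stem}_wwr_{wwr}{base_model.suffix}"
        shutil.copy(base_model, target)
        staged.append(target)
    return staged
```

The same idea applies inside Grasshopper: as long as each case writes its model under a distinct name, the job uploads one model per run.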

@mingbo, what do you think about giving a warning if there are several runs but a single model file? It’s a bit tricky to generalize this case but for our current recipes it should work.

It might be hacky to add a warning for multiple runs that share the same input model. The “Pollinate” component doesn’t know which recipe input is the model, and we cannot guarantee that all future recipes will name the model input “model” rather than “HBModel” or “HBJson”.

I just exposed the file name for all path-type arguments, and I think this should be good enough for users to review the input arguments and catch the same-model issue.
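For what it’s worth, the warning discussed above could also be checked generically without knowing the recipe’s input names, by flagging any path-type argument whose value repeats across runs. This is only a sketch under that assumption; the run/argument structure here is hypothetical, not the actual Pollinate component internals:

```python
# Sketch: warn when several runs in a job point at the same input file,
# regardless of what the path argument is named. The data structure is
# a hypothetical stand-in for the component's run inputs.
from collections import Counter

def duplicate_path_warnings(runs):
    """runs: list of dicts mapping argument names to file paths.
    Returns a warning for any (name, path) pair shared by >1 run."""
    counts = Counter(
        (name, path) for run in runs for name, path in run.items()
    )
    return [
        f"Warning: {count} runs share the same file for '{name}': {path}"
        for (name, path), count in counts.items()
        if count > 1
    ]
```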


Thanks @mingbo! Let’s see how it pans out.

Maybe we should add a note to the YouTube video that goes over the parametric study. I don’t think I mentioned changing the name of the file as a requirement. cc: @jankivyas

A post was split to a new topic: How to generate models with readable unique names