Support list input for LoadAssets component

@Mingbo, is there a way you could also make the LoadAssets component work with this?

The error message is:

  1. Solution exception:Input _run only takes the first item, there are 27 items are connected!

I realise that with larger parameter studies this will become unfeasible, but in some cases I’d really like to view all the variations side by side in Rhino.

What do you think about this, @Mostapha? I don’t think it’s a good idea to download the files for all runs within the GH environment, as I could foresee someone trying to download hundreds of runs that contain gigabytes of files for a big model.

But I understand what you want to do. I am proposing a solution: on the web UI we could have an option to zip all files for a job, or for selected runs, and users could then download the archive to the folder where the GH component looks for cached files. That way all files are available on the local machine and the GH components are just loading them. What do you think, @antoine, @tykom?

Isn’t @Max only trying to get the model asset from all 27 runs in this example? It seems like it should be possible to implement a hypothetical “asset selector” that takes a set of runs, offers some interface to filter the assets present in all of them, and downloads only those without having to download all of the files the runs produce.

So in this case, it should just be a GET to /projects/max/demo/runs/{run id}/artifacts/download?path=model for each run in [list of 27 run UUIDs]
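The loop above could be sketched in Python roughly like this. To be clear, this is a sketch under assumptions: the base URL and the auth header name are placeholders I made up, and only the URL pattern itself comes from the endpoint quoted above.

```python
# Hypothetical sketch: fetch the "model" artifact for each run in a list.
# The URL pattern follows the endpoint quoted above; BASE and the token
# header name are assumptions, not the documented API.
import os
import urllib.request

BASE = "https://api.pollination.cloud"  # assumed base URL


def artifact_url(owner, project, run_id, path="model"):
    """Build the per-run artifact download URL quoted in this thread."""
    return (
        f"{BASE}/projects/{owner}/{project}/runs/{run_id}"
        f"/artifacts/download?path={path}"
    )


def download_models(owner, project, run_ids, out_dir="models", api_key=None):
    """Download the model artifact for each run ID into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    for run_id in run_ids:
        req = urllib.request.Request(artifact_url(owner, project, run_id))
        if api_key:
            # Header name is a guess for illustration only.
            req.add_header("x-pollination-token", api_key)
        with urllib.request.urlopen(req) as resp:
            with open(os.path.join(out_dir, f"{run_id}.model"), "wb") as f:
                f.write(resp.read())
```

For the example in the screenshot this would be called with the project owner, the project name, and the list of 27 run UUIDs.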

Of course, we could add an endpoint to allow bulk downloads like this so that the loop is unnecessary.

Your point about downloading a huge amount of data is valid, though. Maybe we need a “bulk artifacts” API group that can take one of these “bulk download” requests and get the total size before executing the download?
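The preflight check could look something like the following. This is purely illustrative: no such bulk-artifacts endpoint exists yet, and the listing shape (a list of dicts with a `size` field in bytes) is invented for the sake of the example.

```python
# Hypothetical preflight for a "bulk download" request: sum the sizes
# reported by a (not yet existing) bulk-artifacts listing and only
# proceed when the total stays under a user-set budget.


def total_size_bytes(artifact_listing):
    """artifact_listing: list of dicts like
    {"run_id": ..., "path": ..., "size": <bytes>} (invented shape)."""
    return sum(item["size"] for item in artifact_listing)


def should_download(artifact_listing, budget_bytes=500 * 1024 * 1024):
    """Return True only when the total download fits the budget."""
    return total_size_bytes(artifact_listing) <= budget_bytes
```

A GH component could surface the total to the user and refuse to start the download until they raise the budget explicitly.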

Hi @tykom, as you can see in the screenshot I was also trying to download the results.
Getting the total size before downloading might be helpful, as you say, and maybe the number of items too, since the results output may well have millions of items that could freeze up Grasshopper.

On the other hand, it’s not really in Grasshopper’s nature to warn you about these things; it generally just lets you crash and burn and that’s life (like when you graft one input and forget to graft another). So at this point I see it more as a bonus feature than a development priority.

Sure, I just meant that you weren’t trying to download all of the files produced by all runs in the project, correct? It looks like only the model output is connected so I assumed that the expected output would be 27 different models, one for each run.

We can do better than Grasshopper :wink:

My thinking was that, if this were SQL, one would want to be able to express something like select outputs.model from run where id in {set of run UUIDs} .

Thank you all for your comments. I agree with @Mingbo that this can potentially become a dangerous path. Some of these recipes generate tons of data, and we don’t really want to limit what one might need to generate. That being said, we don’t want to encourage loading all those results in one go!

@Max, I would say for now iterating through the results and using the record component is the way to go. I know it’s some extra work, but it will ensure that we don’t end up with crazy cases.

Also once we have the web-based recipe builder and visualization the need to download all the results to Grasshopper should be minimized.
