UTCI comfort map failed

Hello, I created a GH definition to calculate a UTCI comfort map, but after 25 minutes the job failed.
What's wrong with the definition?

GH file:

https://app.pollination.cloud/projects/seghier/demo/jobs/72dfe816-4d98-4d7f-a7a0-d01dc4d31adf/runs/b4e58126-424c-5b6b-9da1-b6de27b804e8?path=&tab=debug

The same file works fine on my laptop, but the job fails when I use Pollination.

Hi @seghier :wave:

I looked into the jobs you have submitted, and it looks like the run-comfort-map step is the one that fails every time, with a log message saying “Killed”.

This indicates that the run-comfort-map step is using more resources than it is allowed (1 vCPU and 3.5 GB of memory). This feels like a problem with either the recipe or the cloud platform, which doesn’t let users set custom resource allocations for specific tasks.
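To put that limit in perspective, here is a rough back-of-the-envelope sketch; the sensor count and float64 storage are assumptions for illustration, not numbers measured from this job:

```python
# Back-of-the-envelope estimate of the result matrix a single comfort-map
# worker has to hold. Assumptions (not measured from this job): ~40,000
# sensors, a full annual run of 8,760 hours, 64-bit floats.
sensors = 40_000
hours = 8_760
bytes_per_value = 8  # float64

matrix_gb = sensors * hours * bytes_per_value / 1e9
print(f"one annual UTCI matrix: {matrix_gb:.1f} GB")  # ~2.8 GB

# The worker also needs the air temperature, MRT, wind speed and humidity
# inputs in memory at the same time, so the real footprint is a multiple of
# this -- easily past a 3.5 GB limit.
```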

@chriswmackey and @mostapha, how do you think we should resolve this issue at the moment? Can we optimise the run-comfort-map step to be less resource-hungry? Do we need to start working on enabling more customized resource allocation for a given task in a recipe?


Thanks. This also happened with another simple file: just a cube.

Thank you, @antoinedao.

I agree with you that I wrote that particular recipe step poorly, and rewriting that step has been on my agenda for the last few months since I know it can be parallelized across several workers instead of dumping all of the results onto a single worker. However, I also think that the App should be able to handle these cases gracefully, since I am sure this is not the last time we’ll encounter a situation like this.
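Very roughly, the split I have in mind is something like the sketch below; `run_comfort_for_chunk` and the sensor count are hypothetical stand-ins, not the actual recipe code:

```python
# Minimal sketch of splitting one big sensor grid across several workers
# instead of dumping everything onto a single worker. The chunking is plain
# Python; run_comfort_for_chunk is a hypothetical stand-in for the real
# per-chunk comfort calculation in the recipe.
from concurrent.futures import ProcessPoolExecutor


def run_comfort_for_chunk(sensor_chunk):
    """Hypothetical: compute hourly UTCI results for a subset of sensors."""
    return [0.0 for _ in sensor_chunk]  # placeholder result per sensor


def split(items, n_chunks):
    """Split the sensor list into roughly equal chunks, one per worker."""
    size = max(1, len(items) // n_chunks)
    return [items[i:i + size] for i in range(0, len(items), size)]


if __name__ == "__main__":
    sensors = list(range(100_000))        # assumed sensor count, for illustration
    chunks = split(sensors, n_chunks=50)  # 50 workers instead of 1

    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_comfort_for_chunk, chunks))

    # Each worker only ever holds its own chunk; the merged results are
    # reassembled in sensor order afterwards.
    merged = [value for chunk_result in results for value in chunk_result]
    print(len(merged))
```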

While we have this example, can we use it to test some better reporting and error messages for when the worker runs out of resources? The “Killed” message is a little vague at the moment. Let’s discuss internally whether we’d be able to implement some solution for customizing resource allocation before I am able to revise this particular recipe step. We’ll get back to you with a plan once we have it, @seghier.
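For reference, an OOM-killed process on Linux exits on SIGKILL, which Python’s subprocess machinery reports as a return code of -9, so a sketch of a friendlier message could be as simple as this (the wording and the 3.5 GB figure are just examples):

```python
# Sketch: turn a bare "Killed" into a more descriptive error message.
# On Linux, a process terminated by the OOM killer exits on SIGKILL, which
# Python's subprocess module reports as a return code of -9.
import signal


def explain_return_code(returncode: int) -> str:
    """Translate a raw return code from a simulation step into plain language."""
    if returncode == -signal.SIGKILL:  # -9: killed by the OS, usually out of memory
        return (
            "The step was killed by the operating system, most likely because it "
            "exceeded the worker's memory limit (3.5 GB). Try a smaller sensor "
            "grid or a recipe version that splits the work across workers."
        )
    if returncode != 0:
        return f"The step failed with return code {returncode}."
    return "The step finished successfully."


print(explain_return_code(-9))
```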


If you mean my example, yes, of course. I hope you fix this soon.


Hi @seghier,

I apologize that it took so long, but I have finally gotten around to doing a full refactor of the comfort mapping recipes. The new version has a number of improvements, one of which is that the comfort calculation is now much more parallelized across different nodes. This greatly mitigates the issue you experienced here, where a single node performed the comfort calculation and ran out of memory.

I verified that I was able to run your HBJSON model through the latest UTCI comfort map and the simulation finished successfully:

I’ll warn that it is likely still possible to run out of memory with this recipe, but it would probably have to be a huge simulation with hundreds of thousands of sensors. In any event, this addresses your case here; let me know if you experience any future difficulties running the comfort map.
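For context on where the memory goes, the recipe ultimately evaluates one UTCI value per sensor per hour. A minimal sketch of that single evaluation, assuming ladybug-comfort’s `universal_thermal_climate_index` function and invented input values:

```python
# Sketch of the per-sensor, per-hour UTCI evaluation that the recipe scales up.
# The input values here are invented for illustration; a real run feeds every
# sensor's MRT and air conditions for all 8,760 hours of the year.
from ladybug_comfort.utci import universal_thermal_climate_index

air_temp = 28.0       # air temperature [C]
mrt = 34.0            # mean radiant temperature [C] -- varies per sensor
wind_speed = 1.5      # wind speed at 10 m [m/s]
rel_humidity = 45.0   # relative humidity [%]

utci = universal_thermal_climate_index(air_temp, mrt, wind_speed, rel_humidity)
print(f"UTCI: {utci:.1f} C")

# 100,000 sensors x 8,760 hours is ~876 million of these values, which is why
# a very large sensor grid can still exhaust a single worker's memory.
```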


Thank you very much, @chriswmackey.
I will check it and try to improve the definition.

Hi @chriswmackey,
I tested the file. The result of the local calculation appears without problems, but from the cloud I get two different errors.


Hey @seghier,

It looks like the run was successful on the cloud, but you are not using the latest version of the Rhino/Grasshopper plugin to load the results. If you use the latest version of the “Read Thermal Matrix” component, everything loads successfully:

So either get the latest Rhino plugin installer or run the LB Versioner component.

Also, the raw UTCI data is a lot to load into Grasshopper. I really recommend using the TCP, HSP, and CSP outputs as much as you can before diving into such a big data set.
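If you want to sanity-check those condensed outputs outside of Grasshopper, something like the sketch below works; it assumes a plain text file with one TCP value per sensor per line, and the file name and path are hypothetical, so the actual recipe output layout may differ:

```python
# Sketch: summarize a Thermal Comfort Percent (TCP) result file outside of
# Grasshopper. Assumes a plain text file with one value per sensor per line;
# the actual file name and folder layout of the recipe output may differ.
from pathlib import Path
from statistics import mean

tcp_file = Path("results/TCP/grid_1.csv")  # hypothetical path

values = [float(line) for line in tcp_file.read_text().splitlines() if line.strip()]

print(f"sensors: {len(values)}")
print(f"average TCP: {mean(values):.1f}%")
print(f"worst sensor TCP: {min(values):.1f}%")
```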