Sensor count recommendation

Hi,
I wonder if there are any recommendations on how to split the sensors in a Pollination run.
I remember a similar question on the LBT Discourse, and the answer was inconclusive. But in Pollination I think the conditions are different, hence my question.
In one job I had about 90K sensors. Would it be better to have many result files or just a few? Does this affect the calculation time?

Thanks,
-A.


Hi @ayezioro,

The answer will be similar to the one on the LBT Discourse. It depends on the number and the size of the sensor grids, the available compute resources and the length of each step (which, in this case, depends on the input Radiance parameters).

This blog post explains how the overall system for parallelization works, but here I will be specific to the daylight studies and the sensor count input.

What does sensor count do?

Sensor count is a threshold number that splits the input sensor grids into smaller sensor grids. The split happens separately for each sensor grid. For example, if you have a sensor grid with 1000 sensors and you set a sensor count of 200, the grid will be broken down into 5 smaller sensor grids with 200 sensors each.
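
To make that concrete, here is a minimal Python sketch of the chunking logic. The `split_grid` function and the integer stand-ins for sensors are mine for illustration, not the actual recipe code:

```python
def split_grid(sensors, sensor_count):
    """Split a list of sensors into sub-grids of at most `sensor_count`."""
    return [
        sensors[i:i + sensor_count]
        for i in range(0, len(sensors), sensor_count)
    ]

grid = list(range(1000))           # stand-in for a grid with 1000 sensors
sub_grids = split_grid(grid, 200)  # sensor count = 200
print(len(sub_grids))              # 5 sub-grids, 200 sensors each
```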

Why does it matter?

Parallelization! By breaking down a large sensor grid into smaller grids, the recipe can distribute the calculation for the sensor grid among several CPUs. To use the example above, the simulation for 1000 sensors will now run on 5 CPUs, and each CPU will run the simulation for 200 sensors, which results in a faster calculation. The results are merged back together in a follow-up step.
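
Conceptually this is the classic scatter/gather pattern. A toy local version, using Python's `multiprocessing` as a stand-in for Pollination's compute pods and a fake `simulate` function in place of the Radiance run:

```python
from multiprocessing import Pool

def simulate(sub_grid):
    # placeholder for the real Radiance calculation on one sub-grid
    return [sensor * 2 for sensor in sub_grid]

if __name__ == '__main__':
    grid = list(range(1000))
    sub_grids = [grid[i:i + 200] for i in range(0, len(grid), 200)]

    with Pool(processes=5) as pool:  # one worker per sub-grid
        partial_results = pool.map(simulate, sub_grids)

    # the follow-up step: merge the results back in the original order
    results = [value for part in partial_results for value in part]
    assert len(results) == len(grid)
```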

What is the best number for sensor count input?

Like every other multi-parameter question, the answer is: it depends. It's easy to think that breaking the grid down to a single sensor per CPU should give the best results, but that is not the case, for two reasons:

  1. Limited compute resources
  2. Overhead of parallelization

When we break down a grid into smaller grids, there is a good chance that we have to wait for the extra resources to become available. Pollination always has a number of pods ready to be used, but at some point, depending on the load, we will need to start new compute pods for the jobs. The more you break down the sensor grids, the more likely it is that at some point you will have to wait for these resources. This can take up to a couple of minutes.

We have also introduced a limitation of 100 CPUs per account for parallel execution during the beta testing. This is to make sure that no single job delays the jobs of the rest of the users. This means that if you break down the simulation too much, you may have to wait longer for each smaller step to be executed. And that is extra overhead.
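
A back-of-envelope calculation shows how these two factors add up. The 60 seconds of overhead per scheduling "wave" below is an assumed placeholder, not a measured value; the 100-CPU cap and the 90K sensors come from this thread:

```python
import math

TOTAL_SENSORS = 90_000
CPU_CAP = 100                # beta limit per account
OVERHEAD_PER_WAVE = 60       # assumed seconds to schedule a batch of pods

for sensor_count in (1, 100, 1000):
    tasks = math.ceil(TOTAL_SENSORS / sensor_count)
    waves = math.ceil(tasks / CPU_CAP)   # at most 100 tasks run at a time
    overhead_min = waves * OVERHEAD_PER_WAVE / 60
    print(f'sensor_count={sensor_count:>5}: {tasks:>6} tasks, '
          f'{waves:>4} waves, ~{overhead_min:,.0f} min of pure overhead')
```

Under these assumptions, a sensor count of 1 costs roughly 900 minutes of pure scheduling overhead, while a sensor count of 1000 costs about a minute.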

How should one think about it?

Knowing all the above, this is how I would strategize about the number.

If you have one large grid, then you want to think about the amount of resources available. In this case, with 90K sensors, I would go for 1000. That will create 90 sensor grids.

If you already have multiple sensor grids, the question is whether they are about the same size or all over the place. If they are all about the same size and the number of sensor grids is smaller than the number of available resources, then I would split them into smaller grids to use all the available resources efficiently. If the size of the sensor grids varies, you want to check the larger ones and make a decision based on those. You don't want one large grid to run forever while all the other grids have finished calculating. In this case, you can break down the larger grids a bit more, since there is a chance that the smaller grids finish quickly and more compute resources become available.
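
If you want a starting point instead of eyeballing it, a hypothetical helper along these lines captures the strategy above. The `available_cpus` and `min_count` parameters are assumed knobs that you would tune to your own account limits and studies:

```python
import math

def suggest_sensor_count(grid_sizes, available_cpus=100, min_count=200):
    """Aim for roughly one task per CPU, with a floor on the task size."""
    total_sensors = sum(grid_sizes)
    count = math.ceil(total_sensors / available_cpus)
    return max(count, min_count)

print(suggest_sensor_count([90_000]))        # -> 900 (about 100 tasks)
print(suggest_sensor_count([5_000, 5_000]))  # -> 200 (the floor kicks in)
```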

This is too complicated for a normal user! Can't you automate this?

Yes. It is complicated, and I think we can do better. One solution is to merge all the sensor grids together initially and then split them based on the amount of resources. This makes the case simple by always assuming that all the input sensor grids are part of one single large grid. This should work fine until we get to cases like 3- or 5-phase studies with aperture groups. This is also how HB[+] used to work.
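
A rough sketch of what that could look like; the helper name is hypothetical, and a real recipe would also need to record where each original grid starts so the merged results can be mapped back:

```python
import math

def merge_then_split(grids, available_cpus=100):
    """Treat all grids as one large grid and split it across the CPUs."""
    merged = [sensor for grid in grids for sensor in grid]
    chunk_size = math.ceil(len(merged) / available_cpus)
    return [
        merged[i:i + chunk_size]
        for i in range(0, len(merged), chunk_size)
    ]

grids = [list(range(300)), list(range(7_000)), list(range(50))]
tasks = merge_then_split(grids, available_cpus=10)
print([len(t) for t in tasks])  # 10 evenly sized tasks of 735 sensors
```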

The challenge with this approach was that we would end up with a very large file/database after merging the results back together and loading them back for each sensor grid. Now, with the recipes, we can do much better. It's just a matter of time for implementation and testing. With that change, you will be able to set a single number for the number of CPUs and forget about all the complexities that I mentioned above.


Thanks for the comprehensive answer, @mostapha!
-A.