Jobs Running for Hours


I have been trying to run some point-in-time daylight jobs since the day before yesterday. They keep running for more than 20 hours with no results. I have cancelled several of my jobs in the last two days and tried simplifying my model again and again. I am in a bit of a crunch and don’t know what else to do.

Any help would be great.

Follow-up on the problem!
Two of my jobs ran for almost 4 hours. But those cases now show that the run finished in 20 minutes.

Hi @monkspaces :wave:

Thanks for the feedback! I have some detailed answers for you below and some requests for a bit more information so I can best help you. That being said, it is worth bearing in mind that during the early access period every account is limited to 100 concurrent CPUs. This means that for very large models or large parametric studies your jobs will still take a while to execute. We have implemented this so we don’t run out of $$ before actually getting users to pay for our services :grinning_face_with_smiling_eyes:

There are a few reasons why the jobs might be executing slower than expected. It is difficult for me to know which one is more likely without knowing which job you are talking about specifically.

I had a quick look through your profile and I think you are talking about this job, right?

From what I can see, the ray-tracing parts of your job executed over a very long period of time (~18 hours). Let’s have a quick look at the Radiance parameters you used for this job:

  • Radiance Parameters: -aa 0.1 -ab 6 -ad 4096 -ar 128 -as 4096 -dc 0.75 -dj 1.0 -dp 512 -dr 3 -ds 0.05 -dt 0.15 -lr 8 -lw 0.005 -ss 1.0 -st 0.15
  • Grid Size: 200 pts

At a glance, it seems strange to me that it would take 18 hours to run a point-in-time Radiance calculation for 200 points (even with 6 ambient bounces). I am not a Radiance expert, so it might be worth getting @mostapha to provide additional feedback here.

I also noticed that the model used for this Job does not have any windows. Is this intentional?

That’s annoying! Could you let me know which Job this happened for and I will investigate ASAP, let you know what happened and give you a timeline for when we will have it fixed :raised_hands:

Hi @antoinedao

I understand the 100 CPU access. It makes total sense. I only started questioning this because previous runs on similar models ran pretty quickly.

I had to cancel 20-hour runs for multiple jobs (Job 1, Job 2 and Job 3) actually. The one you checked is definitely one of them. When these didn’t work out, I rebuilt models in Honeybee+ and ran them on my system. They ran in less than a minute there.

Yes, I did not model windows. I had modelled faces with glass properties because it’s fully glazed. Is that a problem?
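For context, Radiance’s `glass` primitive takes three real arguments (RGB transmissivity) plus an optional fourth: the index of refraction, which defaults to 1.52 when omitted. A minimal sketch of formatting such a primitive (the helper name is hypothetical, not part of any library):

```python
def glass_primitive(name, t_r, t_g, t_b, refraction_index=1.52):
    """Format a Radiance 'glass' primitive string. The optional fourth
    real argument is the index of refraction; Radiance assumes 1.52
    when only three arguments are given."""
    return 'void glass {}\n0\n0\n4 {} {} {} {}'.format(
        name, t_r, t_g, t_b, refraction_index)

print(glass_primitive('glazing', 0.96, 0.96, 0.96))
```

Writing the fourth argument explicitly makes an unusual index easy to spot in the exported scene file.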

That happened to this job.

Thanks for your promptness. Do let me know if I can help in any way.


Thanks @monkspaces for reporting this. I’ll have a look at this job tomorrow.

When I looked into this yesterday, I noticed that it took a relatively long time (considering the simulation parameters) to throw the message -nan(ind) for only three sensor points. Like in the link above, the results of this job are also -nan. Perhaps this explains why it takes ~18 hours for the full grid? @monkspaces, can you check the refractive index of your glass modifier?

If the root of the problem is that the refractive index of the glass modifier is below 1, it makes sense that HB[+] can run it, since you cannot assign the refractive index in HB[+]; it will always be the default of 1.52.
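To see why an index below 1 produces NaNs rather than just wrong numbers: Snell’s law divides by the index, so for a ray entering the glass from air the sine of the refracted angle exceeds 1 at most incidence angles and there is no real solution. A small illustrative sketch of the failing step (not rtrace’s actual code):

```python
import math

def refracted_cos(n, incidence_deg):
    """Cosine of the refracted angle for a ray passing from air into a
    medium of refractive index n (Snell's law with n_air = 1). Returns
    NaN when there is no real solution, mirroring rtrace's -nan output.
    Illustrative sketch only, not rtrace's implementation."""
    sin_t = math.sin(math.radians(incidence_deg)) / n
    if abs(sin_t) > 1.0:
        return float('nan')  # sine of refracted angle > 1: no real ray
    return math.sqrt(1.0 - sin_t ** 2)

print(refracted_cos(1.52, 45.0))  # ~0.885 for typical glass
print(refracted_cos(0.23, 45.0))  # nan: sin(45°) / 0.23 ≈ 3.07 > 1
```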


Hi @mikkel
The refractive index was actually set to 0.23. I have changed it to 1.52 and am running this job. Hoping this works :crossed_fingers:


I just checked and it ran fine. Thanks @mikkel for debugging.

@chriswmackey, we may want to put a check and give a warning for the refractive index in the core libraries.

Thank you for all the help from you and @mikkel :grinning:


Thanks @mikkel and that’s a good idea, @mostapha .

As Radiance gurus, what value of refractive index do you think merits a warning? Is it just any value that’s below or equal to 1? Should there also be an upper limit where the user gets a warning?

I’m no expert in this topic, but giving a warning for any value smaller than 1.0 makes sense. In reality, except for rare cases, when the user is trying to model typical glass the value should be left at the default (1.52). Considering that, it might make sense to give a warning whenever the value is not set to 1.52, just to ensure the user understands the implications of changing this number.
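The suggestion above could look something like the following (a hypothetical sketch, not the actual honeybee-radiance code): reject values at or below 1.0 outright, since they break rtrace, and warn for anything other than typical glass.

```python
import warnings

def check_refraction_index(value):
    """Hypothetical validator: raise for values that break rtrace and
    warn for any value other than typical glass (1.52)."""
    value = float(value)
    if value <= 1.0:
        raise ValueError(
            'refraction_index must be greater than 1. Got {}.'.format(value))
    if abs(value - 1.52) > 1e-9:
        warnings.warn(
            'refraction_index of {} differs from typical glass (1.52). '
            'Make sure this is intentional.'.format(value))
    return value
```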

This conversation on Ladybug Tools Discourse can also be a good reference/reminder:

I agree with Mostapha, but will just add that in terms of rtrace giving results and not -nan, the lower limit seems to be 1.0 while there is no upper limit.

I truly appreciate how you are all trying to solve this problem even though it’s due to negligence on my part. I don’t mean to complicate your jobs, but I thought of two suggestions that might help me avoid such issues. As an intermediate-level user, I think it would also be helpful if the job could show two things:

  1. An expected run time. If I see a run time much larger than what I expect, I will be more considerate of the inputs I am giving. Also, the servers would be saved from unnecessary runs like these 18-hour failed jobs.

  2. Before running the job, is it possible to see inputs and an expected range of inputs in a table? Users might be able to catch inaccurate values here.

Thanks :slight_smile:

Thank you @mostapha and @mikkel ,

It seems like we are certain that we want to avoid refractive indices of 1 or less if they make rtrace go haywire. In this case, I think it would be best for us to enforce refractive indices greater than 1 via a “greater than” property in the schema and with an exception raised in the core libraries. This should help avoid cases like the one above whether they originate from Grasshopper, Rhino, Revit, etc. I also support a warning for refractive indices that are far from that of glass, but I sense that it’s still a common use case for people to change the refractive index to 1.4 in order to model ETFE, or even 1.33 in order to model water. Let me first implement the “greater than one” property and then we can circle back to the need for a warning for other cases.

Thanks also for the suggestions, @monkspaces
Unfortunately, the only way that we could realistically give you an estimate of run time before the simulation is by severely limiting the freedom to change the parameters of the simulation. At some point in the future, maybe we can make a recipe that’s a bit like a “walled garden”, limited enough in functionality that we could estimate its runtime beforehand, but our first priority is to give people the “freedom and flexibility to model real-world complexity,” as we say. So we can come back to this after we have a full set of recipes that are customizable and flexible enough to handle a wide range of design questions.

As for a table of inputs, are you asking for something different than what you already see in the Pollination App?

I realize that you see it after the simulation has already been initialized but, if you see something wrong in the table, you can cancel the run.

I implemented the check in honeybee-radiance and honeybee-schema, which will ensure refraction_index is always greater than 1:
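The core-library side of such a check might look like a property setter along these lines (a simplified sketch with assumed names, not the actual honeybee-radiance class):

```python
class Glass(object):
    """Simplified sketch of a glass modifier that enforces the
    'greater than 1' rule on refraction_index (names are assumed)."""

    def __init__(self, identifier, refraction_index=1.52):
        self.identifier = identifier
        self.refraction_index = refraction_index  # runs the setter below

    @property
    def refraction_index(self):
        return self._refraction_index

    @refraction_index.setter
    def refraction_index(self, value):
        value = float(value)
        if value <= 1.0:
            raise ValueError(
                'refraction_index must be greater than 1. '
                'Got {}.'.format(value))
        self._refraction_index = value
```

On the schema side, the equivalent constraint is a “greater than 1” bound on the field, so invalid values are rejected before a job is ever scheduled.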