Working with Pollination through Jupyter Lab

Hi Pollination team,

Firstly I’m pretty new to using the cloud computing aspect of Pollination :slight_smile:

I’m currently trying to run a parametric daylight factor simulation. I’ve used Ladybug Fly to generate 64 different HB models that I’ve saved to HBJSON. I’m now trying to load them all into Grasshopper to run, but it’s currently churning.

Are there any example scripts out there that load models in Jupyter and send them to Pollination from there, to avoid waiting on Grasshopper loading times and data handling?

Thanks!
Charlie


Hi, @charliebrooker - yes, you can use Python to do all of that. Since you have a Professional account with a limited number of CPUs (48), you will probably want to submit the runs in smaller chunks, and you can adjust the code below to do that.

Here are a few notes:

  1. My suggestion is to upload all the files to your project first so you don’t have to upload them one by one as part of the script. You can then simply reference their paths in the project.
  2. This code assumes you have already created a project and added the daylight factor recipe to the project.
  3. You need to have the pollination-streamlit library installed. You can do that with pip install pollination-streamlit

Here is the study that I submitted using the code below: Pollination Cloud App

"""A script to submit a daylight factor study to Pollination using HBJSON files in a folder."""
import os
import pathlib
import time

from pollination_streamlit.api.client import ApiClient
from pollination_streamlit.interactors import NewJob, Recipe
from queenbee.job.job import JobStatusEnum


# It is better to read the API key from an environment variable or a file,
# but you can also set the value directly here.
# api_key = os.getenv('POLLINATION_TOKEN')
api_key = 'copy-the-api-key-here-or-set-it-as-an-env-var'
assert api_key is not None, 'You must provide a valid Pollination API key.'

# project owner and project name - change them to your account and project names
owner = 'ladybug-tools'
project = 'demo'

api_client = ApiClient(api_token=api_key)

# We assume that the recipe has been added to the project manually
recipe = Recipe('ladybug-tools', 'daylight-factor', '0.7.14-viz', client=api_client)

# daylight factor doesn't have that many inputs
# for file and folder inputs we have to provide the relative path to the
# uploaded artifact in the project
recipe_inputs = {
    'model': None,  # this will be replaced for each run
    'model_id': None,  # this is useful for adding information so you know which run is which
    'min-sensor-count': 1000,  # making sure it doesn't break each study into too many small grids
    # 'radiance-parameters': None,  # you can also overwrite other inputs like the radiance parameters
}

# create a new study
new_study = NewJob(owner, project, recipe, client=api_client)

new_study.name = 'Parametric study submitted from Python'

root_folder = r'c:\ladybug\sample_models'  # path to the folder with all the HBJSON files

study_inputs = []
for model in pathlib.Path(root_folder).glob('*.hbjson'):
    inputs = dict(recipe_inputs)  # create a copy of the recipe inputs
    # upload this model to the project
    # It is better to upload the files to a subfolder not to overwrite other files in
    # the project. In this case I call it dataset_1.
    # you can find them here: https://app.pollination.cloud/ladybug-tools/projects/demo?tab=files&path=dataset_1
    uploaded_path = new_study.upload_artifact(model, target_folder='dataset_1')
    inputs['model'] = uploaded_path
    inputs['model_id'] = model.stem  # I'm using the file name as the id.
    study_inputs.append(inputs)

# add the inputs to the study
# each set of inputs creates a new run
new_study.arguments = study_inputs

# create the study
running_study = new_study.create()

job_url = f'https://app.pollination.cloud/{running_study.owner}/projects/{running_study.project}/jobs/{running_study.id}'
print(job_url)
time.sleep(5)

status = running_study.status.status

while True:
    status_info = running_study.status
    print('\t# ------------------ #')
    print(f'\t# pending runs: {status_info.runs_pending}')
    print(f'\t# running runs: {status_info.runs_running}')
    print(f'\t# failed runs: {status_info.runs_failed}')
    print(f'\t# completed runs: {status_info.runs_completed}')
    if status in [
        JobStatusEnum.pre_processing, JobStatusEnum.running, JobStatusEnum.created,
        JobStatusEnum.unknown
        ]:
        time.sleep(30)
        running_study.refresh()
        # read the updated status after refreshing the study
        status = running_study.status.status
    else:
        # study is finished
        time.sleep(2)
        break
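As mentioned above, with a 48-CPU plan you might want to submit the runs in smaller batches rather than one 64-run study. Here is a minimal sketch of the batching logic (the batch size of 16 and the placeholder inputs are assumptions for illustration); each batch would then get its own NewJob:

```python
# A minimal sketch of batching study inputs. The batch size of 16 and
# the placeholder inputs below are illustrative only.

def chunk(items, size):
    """Yield successive batches of `size` items from `items`."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# placeholder inputs standing in for the study_inputs list built above
study_inputs = [{'model': f'model_{i}.hbjson'} for i in range(64)]
batches = list(chunk(study_inputs, 16))

# each batch could then be submitted as its own study, e.g.:
# for i, batch in enumerate(batches, start=1):
#     job = NewJob(owner, project, recipe, client=api_client)
#     job.name = f'Parametric study - batch {i}'
#     job.arguments = batch
#     job.create()
```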
