Another "clarifications of explicit capabilities of plugins//apps//recipes"

Hi! I know I’ve sort of danced around this topic a number of times, but I think I’m now in a place where I can ask my questions more clearly.
TL;DR:
Do I have to run MOO / ML on my own machine/VM, or can we tap into the Argo workflow explicitly and use PO cloud CPU resources for it? Like this:


First a “from what I understand” context summary:

(All of these bullets should be read with a question mark at the end; they are guesses, not statements.)
Pollination Plugin:

  • The Honeybee-energy plugin, for instance, has a very familiar Honeybee-energy Grasshopper model-to-OSM feel; it’s like a raw-code HB Grasshopper component! (Noice)
  • PO plugins are like “interfaces to the main LBT packages, enabling their use on the cloud”(?)
  • Plugins utilize pollination-dsl to turn Python code describing LBT computational processes into “Queenbee recipes”: basically an LBT Grasshopper script in raw code that gets translated to a *.yaml file, which is the “what to do” input file for the Pollination cloud itself (rough sketch below)(?)
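For concreteness, here is a rough sketch of the kind of Python that pollination-dsl compiles into Queenbee YAML, loosely following the Function pattern in the pollination-dsl README; the specific command and names are just for illustration, not copy-paste:

```python
from dataclasses import dataclass
from pollination_dsl.function import Function, command, Inputs, Outputs


@dataclass
class CreateOctree(Function):
    """Wrap a honeybee-radiance CLI command as a reusable Function."""

    # input: a folder containing the Radiance model
    model = Inputs.folder(description='Radiance model folder.', path='model')

    @command
    def create_octree(self):
        # the raw CLI command this Function standardizes; pollination-dsl
        # serializes the whole class into a Queenbee *.yaml definition
        return 'honeybee-radiance octree from-folder model --output scene.oct'

    # output: the generated octree file
    scene = Outputs.file(description='Generated octree file.', path='scene.oct')
```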


With the above in mind, and using the pollination-dsl repo as a reference, here is the question about the Queenbee YAML ==> Pollination translation ==> Argo step:
(this bit:) [image]
The Queenbee YAML is fundamentally transformed into “Kubernetes containers / container-friendly data”, which is then handled as defined by the overall cloud infrastructure, i.e. Argo stuff?


(the point… long-winded, I know… sorry! Just trying to get a full handle on everything!)

That being said:

Is the only ‘dynamically capable’ part of the workflow (i.e., the part that can do things on the fly, without the overall infrastructure’s codebase being modified to facilitate it) the portion here:
[image]
on the left of, utilizing, or written into the plugin itself? That would facilitate things like brute-force studies (e.g. Colibri ==> Pollination): iterate through design options and send the payload of all the runs to the cloud as a single job (tiny sketch below):
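(Just to illustrate what I mean by the brute-force payload; `submit_job` here is a hypothetical stand-in for whatever Pollination API/SDK call actually creates the job, not a real function:)

```python
from itertools import product

# hypothetical design parameters for the brute-force sweep
wwr_options = [0.2, 0.4, 0.6]
orientations = [0, 90, 180, 270]

# enumerate the whole design space up front (Colibri-style)...
runs = [{'wwr': w, 'orientation': o} for w, o in product(wwr_options, orientations)]

# ...and send the entire payload to the cloud as one job;
# submit_job() is a hypothetical placeholder, not a real SDK call
job_id = submit_job(runs)
```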

I think this is the actual question:

Were there to be any sort of “feedback loop”, i.e. MOO (U-NSGA-III <== my fav :smiley: ), would it have to be facilitated via API calls back and forth? I.e.:

  1. Make generation one, create a job.
  2. Send it to the cloud for simulation.
  3. API call: “get generation 1’s results”.
  4. Locally (on some VM, a Pollination Streamlit application, etc.): process gen 1’s results and make gen 2.
  5. Send gen 2 for simulation.
  6. Repeat.
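(Something like this sketch of the outer loop; every helper here is a hypothetical placeholder for the real API round trips, not an actual SDK call:)

```python
def optimize(n_generations: int, pop_size: int):
    """Hypothetical outer loop for MOO (e.g. U-NSGA-III) via the cloud."""
    population = make_initial_population(pop_size)         # 1. make generation one
    for gen in range(n_generations):
        job_id = submit_job(population)                    # 2. create job, send to cloud for sim
        results = download_results(job_id)                 # 3. poll/fetch this generation's results
        population = next_generation(population, results)  # 4. select + vary locally to make gen n+1
    return population                                      # 5./6. the loop repeats until done
```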

…Or:

Now, I know this is kind of open-ended, as are most things involving code (“you can do anything if you manifest it in code”), but:
Can recipes/plugins facilitate such a data feedback loop “in-cloud” itself? I think that would be something like:
[image]


… I think that’s about all the questions/clarifications I have… hope that makes some sense!!

Hi @tfedyna,

I already replied to this discussion on the call, but I’m documenting it here for future reference. Your overall understanding of the infrastructure is correct. We have several different layers of abstraction to be able to provide customizable and reusable automated solutions.

From the lowest level:

  • Commands make the execution of the core logic possible from a command line.
  • Pollination Functions make the commands reusable by standardizing inputs and outputs. You can think about a Function as a Grasshopper component.
  • Pollination Plugins are a collection of Functions that run on the same Docker image. They are similar to Grasshopper plugins, which are a collection of Grasshopper components.
  • Pollination Recipes describe an execution logic by defining the relationship between several Functions. You can think about them as Grasshopper scripts that connect different Grasshopper components to one another.

If you want to use ML workflows, you need to:

  1. Identify the commands.
  2. Create Functions to make those commands reusable.
  3. Package and release those Functions as a Plugin.
  4. Put the Functions together in a Recipe.
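As a loose illustration of step 4, here is a Recipe sketch following the DAG pattern from the pollination-dsl README; `TrainModel`, `EvaluateModel`, and the `pollination_ml_plugin` package are hypothetical, standing in for whatever your ML Plugin would actually expose:

```python
from dataclasses import dataclass
from pollination_dsl.dag import DAG, task, Inputs, Outputs

# hypothetical plugin package exposing the two Functions
from pollination_ml_plugin.functions import TrainModel, EvaluateModel


@dataclass
class MLStudy(DAG):
    """Chain two Functions: train a model, then evaluate it."""

    data = Inputs.folder(description='Training data folder.')

    @task(template=TrainModel)
    def train(self, data=data):
        return [{'from': TrainModel()._outputs.model, 'to': 'model'}]

    # needs=[train] tells the DAG this task waits on the training task
    @task(template=EvaluateModel, needs=[train])
    def evaluate(self, model=train._outputs.model):
        return [{'from': EvaluateModel()._outputs.report, 'to': 'report'}]

    report = Outputs.file(source='report')
```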

Your initial workflow is what is possible at the moment. The second scenario is also possible inside Argo itself, but we haven’t exposed it in Queenbee, which means it can’t be used in Pollination.

I think using the first workflow with an App will give you the opportunity to visualize the solution as it converges, which will be much better than letting the optimization run forever and checking the results at the end. Reminds me of my good old days! :sunglasses:


Awesome!! Thank you! Excellent reference!!