Hi! I know I’ve sort of danced around this topic a number of times, but I think I’m now in a place where I can ask my questions more clearly.
TLDR:
Do I have to do MOO / ML on my own machine/VM, or can we tap into the Argo workflow explicitly and use PO cloud CPU resources for it, like this?
First a “from what I understand” context summary:
(All of these bullets should be read with a question mark at the end, not as statements.)
Pollination Plugin:
- The Honeybee-energy plugin, for instance, has a very familiar Honeybee-energy Grasshopper model-to-OSM feel; it’s like a raw-code HB Grasshopper component! (Noice)
- PO plugins are like “interfaces to the main LBT packages, enabling their use on the cloud”(?)
- Plugins utilize pollination-dsl to turn Python code describing LBT computational processes into “Queenbee recipes”: basically an LBT Grasshopper script, but in raw code, that gets translated to a *.yaml file, which is the “what to do” input file for the Pollination cloud itself(?) (rough sketch after this list)
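For my own sanity check, here is roughly what I picture such a recipe function looking like, loosely following the pattern in the pollination-dsl README; the class name, the CLI command string, and the file paths are my own illustrative guesses, not a real plugin:

```python
# A minimal sketch of a pollination-dsl Function, loosely following the pattern
# in the pollination-dsl README. The class name, CLI command string, and file
# paths here are illustrative guesses, not a real plugin.
from dataclasses import dataclass
from pollination_dsl.function import Function, command, Inputs, Outputs


@dataclass
class ModelToOsm(Function):
    """Hypothetical function: translate an HBJSON model to an OSM."""

    model = Inputs.file(
        description='An input Honeybee model.', path='model.hbjson'
    )

    @command
    def translate(self):
        # the raw CLI call that runs inside the container (command is a guess)
        return 'honeybee-energy translate model-to-osm model.hbjson --folder output'

    osm = Outputs.file(
        description='The translated OSM file.', path='output/in.osm'
    )
```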
With the above in mind, using the pollination-dsl repo as a reference, and the below as the question:
Queenbee YAML ==> Pollination translation ==> Argo
(this bit) Is the Queenbee YAML fundamentally being transformed into Kubernetes “containers / container-friendly data”, which is then handled as defined by the overall cloud infrastructure, i.e. Argo-stuff? (shape sketched below)
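Just to make that question concrete, this is roughly the shape of a bare-bones Argo Workflow manifest as I understand it; it is not the actual output of the Pollination translation (which I haven’t seen), and the image name and command are made up:

```python
# Roughly the shape of a bare-bones Argo Workflow manifest, just to make the
# question concrete. NOT the actual output of the Pollination translation;
# the image name and command are made up.
import yaml

workflow = {
    'apiVersion': 'argoproj.io/v1alpha1',
    'kind': 'Workflow',
    'metadata': {'generateName': 'annual-energy-'},
    'spec': {
        'entrypoint': 'main',
        'templates': [
            {
                'name': 'main',
                'container': {
                    'image': 'example/honeybee-energy:latest',  # made-up image
                    'command': ['honeybee-energy', 'translate', 'model-to-osm', 'model.hbjson'],
                },
            }
        ],
    },
}

print(yaml.safe_dump(workflow, sort_keys=False))
```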
(the point… long-winded, I know… Sorry! Just trying to get a full handle on everything!)
That being said:
Is the only “dynamically capable” portion of the workflow (i.e. the part that can do stuff on the fly, without the overall infrastructure’s codebase being modified to facilitate it) everything to the left of that translation, i.e. utilizing, or written into, the plugin itself? For instance, to facilitate things like brute-force studies (e.g. Colibri ==> Pollination): iterate through design options, then send the payload of all the runs to the cloud as one job (rough sketch below):
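For concreteness, this is the kind of thing I mean, with a hypothetical `submit_job()` standing in for whatever the real submission call is (Colibri on the Grasshopper side, the Pollination API, etc.); the parameter names and ranges are made up:

```python
# Minimal sketch of a brute-force parametric study. submit_job() is a
# hypothetical placeholder for the real submission mechanism (Colibri in
# Grasshopper, the Pollination API, etc.). Parameters and ranges are made up.
from itertools import product


def submit_job(runs):
    """Hypothetical placeholder: send all run arguments to the cloud as one job."""
    print(f'Submitting a job with {len(runs)} runs...')


# enumerate every design option combination locally (the "dynamic" client-side part)
wwr_options = [0.2, 0.4, 0.6]            # window-to-wall ratios
orientation_options = [0, 90, 180, 270]  # building rotations in degrees

runs = [
    {'wwr': wwr, 'orientation': rot}
    for wwr, rot in product(wwr_options, orientation_options)
]

# one payload, one job, many runs
submit_job(runs)
```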
I think this is the actual question:
Were there to be any sort of “feedback loop”, e.g. MOO (U-NSGA-III <== my fav): would this have to be facilitated via API calls back and forth?
i.e. (a rough code sketch follows this list):
- make generation one, create job
- send to cloud for sim
- API call: “get generation 1’s results”
- locally, on some VM, a Pollination Streamlit application, etc.: process gen 1’s results, make gen 2
- send gen 2 for sim
- repeat.
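To make that loop concrete, this is roughly what I picture on the local/VM side, using pymoo’s U-NSGA-III through its ask-and-tell interface; `submit_generation()` and `wait_for_results()` are hypothetical stand-ins for the actual Pollination API calls, and the variable/objective counts and bounds are made up:

```python
# Sketch of the local/VM side of the loop, using pymoo's U-NSGA-III via its
# ask-and-tell interface. submit_generation() and wait_for_results() are
# hypothetical stand-ins for the real Pollination API calls; the number of
# variables/objectives and the bounds are made up.
import numpy as np
from pymoo.algorithms.moo.unsga3 import UNSGA3
from pymoo.core.evaluator import Evaluator
from pymoo.core.problem import Problem
from pymoo.problems.static import StaticProblem
from pymoo.util.ref_dirs import get_reference_directions


def submit_generation(X):
    """Hypothetical: create a job from one generation of design vectors."""
    return 'job-id-placeholder'


def wait_for_results(job_id, n_designs):
    """Hypothetical: poll the cloud, then return one objective row per design."""
    return np.random.random((n_designs, 2))  # stand-in for real objective values


# the "real" evaluation happens in the cloud, so the problem only declares its shape
problem = Problem(n_var=4, n_obj=2, xl=np.zeros(4), xu=np.ones(4))

ref_dirs = get_reference_directions('das-dennis', 2, n_partitions=12)
algorithm = UNSGA3(ref_dirs=ref_dirs, pop_size=20)
algorithm.setup(problem, termination=('n_gen', 5), seed=1, verbose=False)

while algorithm.has_next():
    pop = algorithm.ask()                     # make generation N
    X = pop.get('X')
    job_id = submit_generation(X)             # send to cloud for sim
    F = wait_for_results(job_id, len(X))      # API call: get generation N's results
    Evaluator().eval(StaticProblem(problem, F=F), pop)
    algorithm.tell(infills=pop)               # process results, make generation N+1

result = algorithm.result()
print(result.F)  # objective values of the final non-dominated solutions
```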
…Or:
Now, I know this is kind of open-ended, as most things involving code are (“you can do anything if you manifest it in code”), but:
Can recipes / plugins facilitate such a data feedback loop “in-cloud” itself? I think that would be something like the sketch below:
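(Purely to illustrate what I mean by an “in-cloud” feedback step, and absolutely not implying recipes actually support such cycles: something like a small containerized step between generations that reads gen N’s results and writes gen N+1’s parameters. Every path and name here is made up, and the optimizer is a toy stand-in.)

```python
# Purely illustrative of an "in-cloud" feedback step: a small script that could
# (hypothetically) run as a containerized step between generations, reading the
# previous generation's results and writing the next generation's design
# parameters. This does NOT imply recipes actually support such cycles;
# all paths and names are made up, and the optimizer is a toy stand-in.
import json
from pathlib import Path

import numpy as np

RESULTS_IN = Path('results/generation_n.json')         # made-up path
PARAMS_OUT = Path('params/generation_n_plus_1.json')   # made-up path


def next_generation(designs, objectives, pop_size=20):
    """Toy stand-in for a real optimizer step (e.g. one U-NSGA-III iteration)."""
    order = np.argsort(objectives[:, 0])        # naive: keep the "best" half
    parents = designs[order[: pop_size // 2]]
    children = parents + np.random.normal(0.0, 0.05, parents.shape)  # mutate
    return np.clip(np.vstack([parents, children]), 0.0, 1.0)


if __name__ == '__main__':
    data = json.loads(RESULTS_IN.read_text())
    designs = np.array(data['designs'])
    objectives = np.array(data['objectives'])

    new_designs = next_generation(designs, objectives)

    PARAMS_OUT.parent.mkdir(parents=True, exist_ok=True)
    PARAMS_OUT.write_text(json.dumps({'designs': new_designs.tolist()}))
```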
… I think that’s about all the questions/clarification I have… hope that makes some sense!!