Feature: See if we can parallelise using a plumber API to reduce overhead from spawning multiple shiny instances.
Description of feature:
Currently, using callr for multi-processing in a shiny session seems to use more memory than intended. It also doesn't appear to garbage collect or close sessions reliably. It currently works for all regions, but seems to run out of memory when augmenting the population.
Going to explore the possibility of having a plumber API running as a low-memory instance that doesn't copy the larger dataset to each worker, but instead only the data for each region. All of the parallelisation and associated management could then be handled by the plumber API.
If the plumber API doesn't work out, then some other approach to parallelisation.
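As a rough illustration of the idea, a minimal plumber API could expose one endpoint per task and subset the data to a single region inside the API process. Everything here is a sketch: the endpoint name `/pop_est`, the `region` parameter, and the `estimate_population()` / `load_region_data()` helpers are hypothetical placeholders, not existing dpm.dashboard functions.

```r
# plumber.R -- hypothetical sketch of a low-memory region endpoint
library(plumber)

#* Return a population estimate for one region only.
#* The API process holds the data; the shiny session never copies it.
#* @param region Region identifier (hypothetical parameter)
#* @get /pop_est
function(region) {
  # load_region_data() and estimate_population() are placeholders for
  # whatever dpm.dashboard actually uses to subset and estimate
  region_data <- load_region_data(region)
  estimate_population(region_data)
}
```

The key design point is that the large dataset lives once in the plumber process, and each request only touches one region's slice, which is what should keep per-request memory low.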
Definition of Done (DoD):
Successful parallelisation in dpm_dash that handles pop ests and mig ests for all regions.
Function/s affected (if known):
Shiny observer which handles pop ests
The requested feature would mean we'd have to launch a plumber API at the same time as the shiny server. We can handle this in the run_region_dash function, but we'd need a way to find the port on which the plumber API has been launched (or use a static port).
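One way the launch could look, sketched under assumptions (the port, the `plumber.R` path, and the endpoint are all hypothetical): start the API in a background process with callr::r_bg, then have the shiny observer call it over HTTP. httpuv::randomPort() can pick a free port if a static one is undesirable.

```r
library(callr)

# Pick a free port dynamically (or hard-code a static one instead)
port <- httpuv::randomPort()

# Launch the plumber API in a background R process alongside the shiny server,
# e.g. from within run_region_dash (sketch, not the actual implementation)
api_proc <- callr::r_bg(
  function(port) {
    pr <- plumber::plumb("plumber.R")  # assumed API definition file
    pr$run(port = port)
  },
  args = list(port = port)
)

# Inside the shiny observer, request one region's estimate from the API
# ("/pop_est" and the "region" query parameter are assumptions)
res <- httr::GET(
  sprintf("http://127.0.0.1:%d/pop_est", port),
  query = list(region = "region_A")
)
pop_est <- httr::content(res)
```

If a dynamic port is used, run_region_dash would need to pass it through to the session (for example via an option or an environment variable) so the observer knows where to send requests; a static port avoids that plumbing at the cost of possible clashes.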
Links to related/relevant information:
https://www.rplumber.io/
Package version/s (if behaviour required across multiple dpm.dashboard versions):
[dpm.dashboard versions]