Explore Plumber API for parallel processing #2

@Stephen1397

Description

Feature: See if we can parallelise using a plumber API to reduce overhead from spawning multiple shiny instances.

Description of feature:

Currently, using callr for multi-processing in a shiny session seems to use more memory than intended. It also doesn't appear to garbage collect or close sessions cleanly. It currently works for all regions, but seems to run out of memory when augmenting the population.

Going to explore the possibility of a plumber API, run as a low-memory instance that doesn't copy the larger dataset but instead receives each region individually. All of the parallelisation and associated management could then be handled by the plumber API.

If not on the plumber API, then some other approach for parallelisation.
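To make the region-per-request idea concrete, here is a minimal sketch of what such a plumber endpoint could look like. The endpoint path and the two helper functions are assumptions for illustration, not the actual dpm.dashboard API; the stubs below just stand in for the real loading and estimation code.

```r
# plumber.R -- hypothetical sketch of a per-region endpoint.
# load_region_data() and estimate_population() are placeholder names,
# not real dpm.dashboard functions; they are stubbed here so the file runs.
library(plumber)

load_region_data <- function(region) {
  # Stub: in practice this would read only one region's data from disk,
  # so the API process never holds the full dataset in memory.
  data.frame(region = region, population = 0)
}

estimate_population <- function(region_data) {
  # Stub: in practice this would run the population estimation for the region.
  region_data
}

#* Estimate population for a single region
#* @param region Region identifier
#* @post /pop_est
function(region) {
  estimate_population(load_region_data(region))
}
```

The file would then be served with something like `plumber::pr("plumber.R") |> plumber::pr_run(port = 8000)`, and the shiny session would POST one request per region, letting plumber manage the concurrent workers.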

Definition of Done (DoD):

Successful parallelisation in dpm_dash, handling pop ests and mig ests for all regions.

Function/s affected (if known):

Shiny observer which handles pop ests

The requested feature would mean launching a plumber API at the same time as the shiny server. We can handle this in the run_region_dash function, but we'd need a way to discover the port the plumber API has been launched on (or use a static port).
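One way to sidestep the port-discovery problem is to pick a free port before launching the API, so the shiny session already knows the address. A sketch, assuming a `plumber.R` file exists and using `callr::r_bg()` to keep the API in a separate low-memory process (the `start_api` wrapper and its integration into run_region_dash are hypothetical):

```r
# Sketch: launch the plumber API in a background process alongside shiny.
# start_api() is a hypothetical helper, not an existing dpm.dashboard function.
library(callr)

start_api <- function(api_file = "plumber.R") {
  # Choose a free port up front so the parent shiny session knows the URL,
  # instead of discovering the port after the API has started.
  port <- httpuv::randomPort()
  proc <- callr::r_bg(
    function(file, port) {
      plumber::pr_run(plumber::pr(file), port = port)
    },
    args = list(file = api_file, port = port)
  )
  list(process = proc, port = port,
       url = sprintf("http://127.0.0.1:%d", port))
}

# Inside run_region_dash(), something like:
#   api <- start_api()
#   shiny::onStop(function() api$process$kill())  # API dies with the session
#   # ...pass api$url to the observers that request pop ests...
```

Keeping the process handle also gives a natural place to kill the API when the shiny session stops, which may help with the session-cleanup issues seen with the current callr approach.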

Links to related/relevant information:

https://www.rplumber.io/

Package version/s (if behaviour required across multiple dpm.dashboard versions):

[dpm.dashboard versions]


Labels: enhancement (New feature or request)
