

@Mag-D-Anas

feat(rpc_loader): Add async support for RPC function calls

Summary

This PR implements the previously unimplemented function_rpc_interface_await function in the RPC loader, enabling non-blocking asynchronous remote procedure calls. This is a foundational step towards the Function Mesh concept: allowing applications to be split across multiple servers while maintaining transparent function call semantics.

Motivation

The RPC loader previously only supported synchronous function calls, which block the calling thread until the remote server responds. For long-running remote operations, this creates performance bottlenecks. Async support enables:

  • Non-blocking remote calls
  • Better resource utilization
  • Foundation for distributed function orchestration

Changes

source/loaders/rpc_loader/source/rpc_loader_impl.cpp

  • Implemented function_rpc_interface_await using std::thread for async HTTP calls
  • Added #include <thread> for threading support
  • Each async call spawns a detached thread that:
    • Creates its own CURL handle (CURL handles are not thread-safe)
    • Performs the HTTP POST to /await/{function_name}
    • Deserializes the response
    • Calls resolve_callback on success or reject_callback on failure
  • Proper memory management with deep-copied arguments for thread safety

source/tests/metacall_rpc_test/source/server.js

  • Added async_divide function with "async": true in inspect response
  • Added /await/async_divide POST endpoint handler
  • Simulates async processing with 100ms delay

Threading Strategy: Uses std::thread with detached threads; a minimal sketch of this flow follows the steps below. Each async call:

  1. Deep copies arguments to avoid use-after-free
  2. Serializes arguments to JSON
  3. Creates thread-local CURL handle
  4. Performs blocking HTTP call inside thread
  5. Deserializes response and invokes callback
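
For concreteness, here is a minimal sketch of that flow, assuming the arguments have already been deep-copied and serialized to a JSON string. The names rpc_await and rpc_write_cb are illustrative, and MetaCall's value types and resolve/reject callbacks are stood in by comments; this is not the PR's actual code.

```cpp
#include <curl/curl.h>

#include <string>
#include <thread>

// libcurl write callback: append the response body into a std::string.
static size_t rpc_write_cb(char *data, size_t size, size_t nmemb, void *userp)
{
    static_cast<std::string *>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

// Illustrative entry point; the real function_rpc_interface_await operates on
// MetaCall function/value types and its resolve/reject callback signatures.
static void rpc_await(const std::string &base_url, const std::string &func_name,
                      const std::string &json_args /* deep-copied, serialized arguments */)
{
    const std::string url = base_url + "/await/" + func_name;

    // Detached worker: owns its own copies of url/json_args and its own CURL handle,
    // so nothing in it can outlive or race the caller's buffers.
    std::thread([url, json_args]() {
        CURL *curl = curl_easy_init(); // easy handles must not be shared across threads

        if (curl == NULL)
        {
            // reject_callback("failed to initialize CURL", data);
            return;
        }

        std::string response;
        struct curl_slist *headers = curl_slist_append(NULL, "Content-Type: application/json");

        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, json_args.c_str()); // captured copy outlives the transfer
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, rpc_write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

        CURLcode res = curl_easy_perform(curl); // blocking, but only inside this worker thread

        if (res == CURLE_OK)
        {
            // Deserialize `response` and invoke resolve_callback(result, data).
        }
        else
        {
            // Invoke reject_callback(curl_easy_strerror(res), data).
        }

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
    }).detach();
}
```

The lambda's by-value captures are what step 1 refers to: the worker owns its own copies, so the caller can release its originals as soon as rpc_await returns.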

Testing

Future Considerations

  • Consider CURL multi-handle for more efficient async I/O (single thread, event-driven)
  • Add timeout handling for async calls
  • Thread pool instead of spawning new threads per call

Related

@Mag-D-Anas (Author)

Update: Replaced std::thread with CURL Multi-Handle

Replaced the initial std::thread-per-call async design with CURL's multi-handle API.

Before: Each metacall_await() spawned a detached thread with its own CURL handle. N async calls = N threads.

After: A single poll thread drives all async transfers via curl_multi_perform(). Each async call just adds a lightweight easy handle to the shared multi handle.

Key changes in rpc_loader_impl.cpp (a sketch of this design follows the list below):

  • rpc_async_context struct — per-call state (callbacks, response buffer, easy handle)
  • rpc_poll_loop() — one thread ticking every 50ms, drives + checks all transfers
  • Mutex held only during CURL operations (microseconds), released during sleep
  • CURLOPT_COPYPOSTFIELDS for safe POST body lifetime
  • Clean shutdown via std::atomic<bool> + thread::join()
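
Under the same assumptions as the earlier sketch, here is roughly how the producer side and the poll thread fit together; rpc_async_context is trimmed to what the example needs, and error handling plus the write callback wiring are omitted.

```cpp
#include <curl/curl.h>

#include <atomic>
#include <chrono>
#include <mutex>
#include <string>
#include <thread>

// Per-call state kept alive until the transfer completes (simplified).
struct rpc_async_context
{
    CURL *easy;           // one easy handle per async call
    std::string response; // filled by the write callback
    // resolve/reject callbacks and user data live here in the real loader
};

static CURLM *rpc_multi = NULL;
static std::mutex rpc_multi_mutex;
static std::atomic<bool> rpc_exit_flag(false);

// Producer side: build an easy handle and attach it to the shared multi handle.
static void rpc_await_add(const std::string &url, const std::string &json_args)
{
    rpc_async_context *ctx = new rpc_async_context();

    ctx->easy = curl_easy_init();
    curl_easy_setopt(ctx->easy, CURLOPT_URL, url.c_str());
    curl_easy_setopt(ctx->easy, CURLOPT_COPYPOSTFIELDS, json_args.c_str()); // CURL copies the POST body
    curl_easy_setopt(ctx->easy, CURLOPT_PRIVATE, ctx); // recover ctx when the transfer finishes
    // CURLOPT_WRITEFUNCTION / CURLOPT_WRITEDATA would point at ctx->response

    std::lock_guard<std::mutex> lock(rpc_multi_mutex);
    curl_multi_add_handle(rpc_multi, ctx->easy);
}

// Poll thread: tick every 50 ms, drive all transfers, fire callbacks on completion.
static void rpc_poll_loop()
{
    while (!rpc_exit_flag.load())
    {
        {
            std::lock_guard<std::mutex> lock(rpc_multi_mutex); // held only for the CURL calls

            int running = 0;
            curl_multi_perform(rpc_multi, &running);

            int msgs = 0;
            while (CURLMsg *msg = curl_multi_info_read(rpc_multi, &msgs))
            {
                if (msg->msg != CURLMSG_DONE)
                {
                    continue;
                }

                char *priv = NULL;
                curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, &priv);
                rpc_async_context *ctx = reinterpret_cast<rpc_async_context *>(priv);

                // msg->data.result == CURLE_OK: deserialize ctx->response and resolve,
                // otherwise reject with the CURL error.

                curl_multi_remove_handle(rpc_multi, msg->easy_handle);
                curl_easy_cleanup(msg->easy_handle);
                delete ctx;
            }
        }

        std::this_thread::sleep_for(std::chrono::milliseconds(50)); // mutex released while sleeping
    }
}
```

Because every CURL call happens under rpc_multi_mutex and the 50 ms sleep happens outside it, producers are only ever blocked for the time it takes to add one handle.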

Tested:

  • ctest -VV -R metacall-rpc-test
  • Function Mesh PoC (Python server + C client, sync + async calls) ✅

@Mag-D-Anas (Author)

Update: MPSC Lock-Free Queue Refactor

Replaced the mutex-based thread-safety mechanism with a lock-free architecture using moodycamel::ConcurrentQueue + curl_multi_wakeup().

What Changed

| Before | After |
| --- | --- |
| std::mutex around CURLM* | Zero mutexes |
| std::condition_variable | curl_multi_wakeup() (thread-safe) |
| sleep_for(50ms) polling | curl_multi_poll() (event-driven) |
| Lock contention under load | Lock-free MPSC queue |

Architecture

Producers (any thread)              Consumer (poll thread)
┌─────────────────────┐             ┌──────────────────────────┐
│ metacall_await()    │             │ curl_multi_poll()        │
│ → queue.enqueue()   │──────────→  │ → queue.try_dequeue()    │
│ → curl_multi_wakeup │             │ → curl_multi_add_handle  │
└─────────────────────┘             │ → curl_multi_perform     │
                                    │ → callbacks              │
                                    └──────────────────────────┘
  • Producers enqueue async contexts into a lock-free queue and wake the poll thread
  • Consumer (single poll thread) is the only thread that touches CURLM*
  • Graceful shutdown: exit_flag → wakeup → join(), which drains the queue and completes all in-flight transfers (see the sketch below)
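
With the same caveats as the earlier sketches, this is roughly what the queue/wakeup handshake looks like. The concurrentqueue include path depends on how FetchContent wires the target, curl_multi_poll()/curl_multi_wakeup() require a reasonably recent libcurl, and the resolve/reject plumbing is again left as comments.

```cpp
#include <curl/curl.h>

#include <atomic>
#include <string>
#include <thread>

#include "concurrentqueue.h" // moodycamel::ConcurrentQueue; exact include path depends on the CMake setup

// Simplified per-call state, as in the earlier sketch.
struct rpc_async_context
{
    CURL *easy;
    std::string response;
};

static CURLM *rpc_multi = NULL;
static moodycamel::ConcurrentQueue<rpc_async_context *> rpc_pending; // MPSC: many producers, one consumer
static std::atomic<bool> rpc_exit_flag(false);

// Producer (any thread): never touches CURLM*, just enqueues and wakes the poll thread.
static void rpc_await_submit(rpc_async_context *ctx)
{
    rpc_pending.enqueue(ctx);     // lock-free enqueue
    curl_multi_wakeup(rpc_multi); // thread-safe; interrupts curl_multi_poll() below
}

// Consumer: the single poll thread, the only code that uses the multi handle.
static void rpc_poll_loop()
{
    while (!rpc_exit_flag.load())
    {
        int numfds = 0;
        curl_multi_poll(rpc_multi, NULL, 0, 1000, &numfds); // sleeps until socket activity or a wakeup

        // Drain newly submitted calls into the multi handle.
        rpc_async_context *ctx = NULL;
        while (rpc_pending.try_dequeue(ctx))
        {
            curl_multi_add_handle(rpc_multi, ctx->easy);
        }

        int running = 0;
        curl_multi_perform(rpc_multi, &running);

        int msgs = 0;
        while (CURLMsg *msg = curl_multi_info_read(rpc_multi, &msgs))
        {
            if (msg->msg != CURLMSG_DONE)
            {
                continue;
            }

            // Deserialize the response, resolve/reject, then release the handle.
            curl_multi_remove_handle(rpc_multi, msg->easy_handle);
            curl_easy_cleanup(msg->easy_handle);
        }
    }

    // Shutdown path (after exit_flag + wakeup): drain rpc_pending and finish
    // in-flight transfers before returning, so join() sees a clean state.
}

// Shutdown, from the destroy path: flag, wake, join.
static void rpc_shutdown(std::thread &poll_thread)
{
    rpc_exit_flag.store(true);
    curl_multi_wakeup(rpc_multi);
    poll_thread.join();
}
```

Since only rpc_poll_loop() ever calls into the multi handle, no lock around CURLM* is needed at all; the queue plus curl_multi_wakeup() is the entire cross-thread contract.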

Files Modified

  • source/loaders/rpc_loader/CMakeLists.txt — Added moodycamel::ConcurrentQueue via FetchContent
  • source/loaders/rpc_loader/source/rpc_loader_impl.cpp — Refactored poll loop, await, init, destroy

Testing

  • ctest -VV -R metacall-rpc-test — PASSED
  • ✅ Function Mesh PoC — sync + async calls verified, clean shutdown
