Pull Request #285: Infrastructure Hardening: Metrics, Queues & Rate Limiting (#243-246)
Merged
JerryIdoko merged 2 commits into Vesting-Vault:main on Apr 24, 2026
Conversation
📝 Description
This PR transitions the Vesting-Vault API from a basic backend into a production-grade, observable, and resilient architecture. Critical bottlenecks are addressed by offloading heavy tasks to background workers, and the API is now protected against abuse via distributed rate limiting.
🎯 Key Changes by Module
Metrics Endpoint: Integrated @willsoto/nestjs-prometheus to expose /metrics.
Standard Collectors: Now tracking Node.js heap usage, API response latency, and database connection pool health.
Custom Counters: Added custom metrics for total_ledger_blocks_indexed and vesting_schedules_created.
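A minimal sketch of how one of these custom counters might be wired up with `@willsoto/nestjs-prometheus`; the service and method names here are illustrative, only the metric name comes from this PR:

```typescript
// Hypothetical sketch (service/provider names are illustrative).
import { Injectable } from '@nestjs/common';
import { InjectMetric, makeCounterProvider } from '@willsoto/nestjs-prometheus';
import { Counter } from 'prom-client';

// Registered in a module's `providers` array so the metric is
// created once and exposed on /metrics.
export const ledgerBlocksCounter = makeCounterProvider({
  name: 'total_ledger_blocks_indexed',
  help: 'Total number of ledger blocks indexed',
});

@Injectable()
export class IndexerMetrics {
  constructor(
    @InjectMetric('total_ledger_blocks_indexed')
    private readonly blocksIndexed: Counter<string>,
  ) {}

  // Called by the indexer after each block is persisted.
  recordBlockIndexed(): void {
    this.blocksIndexed.inc();
  }
}
```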
Asynchronous Jobs: Heavy computations (PDF report generation and CSV exports) are now offloaded from the main event loop.
Architecture: Implemented a Producer/Consumer pattern using Redis as the broker. This ensures the main API remains responsive even during high-volume report requests.
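The producer/consumer split can be sketched roughly as below, assuming BullMQ via `@nestjs/bullmq` (the verification section mentions bull-board); the queue name, job name, and class names are illustrative:

```typescript
// Hypothetical sketch of the producer/consumer pattern over Redis.
import { InjectQueue, Processor, WorkerHost } from '@nestjs/bullmq';
import { Injectable } from '@nestjs/common';
import { Job, Queue } from 'bullmq';

@Injectable()
export class ReportProducer {
  constructor(@InjectQueue('reports') private readonly queue: Queue) {}

  // Called from the HTTP layer: enqueue and return immediately,
  // keeping the main event loop responsive.
  async requestPdf(scheduleId: string): Promise<void> {
    await this.queue.add('generate-pdf', { scheduleId });
  }
}

@Processor('reports')
export class ReportConsumer extends WorkerHost {
  // Executes in the Worker context, not the App context.
  async process(job: Job<{ scheduleId: string }>): Promise<void> {
    if (job.name === 'generate-pdf') {
      // ...render the vesting report PDF here...
    }
  }
}
```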
Throttler Module: Implemented ThrottlerModule with a Redis storage provider to maintain rate limits across multiple API instances.
Tiered Limits:
* Public API: 100 requests / 60 seconds.
* Auth/Sensitive: 5 requests / 60 seconds.
Storage: Leveraged existing Redis infrastructure to ensure state persistence and high-speed limit checks.
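For reviewers: the stricter auth tier can be applied per route with `@nestjs/throttler`'s `@Throttle` decorator (v5+ object syntax). A sketch with an illustrative controller, not the exact code in this PR:

```typescript
// Hypothetical sketch: per-route override for the Auth/Sensitive tier.
import { Controller, Post } from '@nestjs/common';
import { Throttle } from '@nestjs/throttler';

@Controller('auth')
export class AuthController {
  // Override the global limits: 5 requests per 60 s on this route.
  // Excess requests receive 429 Too Many Requests.
  @Throttle({ default: { limit: 5, ttl: 60000 } })
  @Post('login')
  login(): string {
    return 'ok';
  }
}
```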
💻 Implementation Snippet: Rate Limiting Config
```typescript
// src/app.module.ts
import { ConfigModule, ConfigService } from '@nestjs/config';
import { ThrottlerModule } from '@nestjs/throttler';
// Redis storage adapter (package assumed: @nest-lab/throttler-storage-redis)
import { ThrottlerStorageRedisService } from '@nest-lab/throttler-storage-redis';

// Inside the AppModule `imports` array:
ThrottlerModule.forRootAsync({
  imports: [ConfigModule],
  inject: [ConfigService],
  useFactory: (config: ConfigService) => ({
    storage: new ThrottlerStorageRedisService(config.get<string>('REDIS_URL')),
    throttlers: [
      { name: 'short', ttl: 1000, limit: 3 },
      { name: 'medium', ttl: 60000, limit: 100 },
    ],
  }),
}),
```
✅ Acceptance Criteria Checklist
[x] Metrics: /metrics is accessible and returning valid Prometheus format data.
[x] Queues: PDF generation logs show execution within the Worker context, not the App context.
[x] Security: Repeated fast requests to /auth/login result in a 429 Too Many Requests response.
[x] Persistence: Rate limits and queues are successfully utilizing the Redis cluster.
🚀 How to Verify
Check Metrics: Run curl http://localhost:3000/metrics and verify block-indexing counters.
Test PDF Queue: Request a vesting report and check the bull-board (if enabled) or logs to see the background process finish.
Stress Test Limits: Use ab or locust to hit the login endpoint and verify the 429 status code triggers as expected.
🔗 Linked Issues
Closes #243
Closes #244
Closes #245
Closes #246