Architecture overview



BullMQ Proxy is implemented using a relatively simple architecture, yet it provides a great deal of flexibility.

At the core of the proxy we have the Bun JavaScript runtime. The choice of this runtime is mostly due to its much better HTTP and WebSocket performance and lower memory consumption compared to other popular runtimes such as Node.js.

The proxy encapsulates the BullMQ library (powered by a Redis™ or compatible instance) and provides an HTTP RESTful API (with a WebSocket API coming soon), allowing any language or framework that supports HTTP clients and servers to interact with the queues as well as to process jobs.

For example, adding a batch of jobs to a queue amounts to POSTing an array of jobs to /queues/:queue-name/jobs, whereas to process jobs we register an HTTP endpoint (also known as a webhook) that will be called every time there is a job to process. This effectively allows BullMQ to be used on a multi-language, multi-framework platform, unlocking easy communication and job management between any services.
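The job-adding flow could be sketched as follows in TypeScript. The /queues/:queue-name/jobs path comes from the description above; the host, port, bearer token, queue name, and job payload fields are illustrative assumptions, not part of the proxy's documented defaults:

```typescript
// Build the jobs endpoint URL for a given queue.
// Only the /queues/:queue-name/jobs path comes from the docs above;
// the rest of this sketch (host, token, payload shape) is assumed.
function jobsEndpoint(proxyUrl: string, queueName: string): string {
  return `${proxyUrl}/queues/${encodeURIComponent(queueName)}/jobs`;
}

// POST an array of jobs to the proxy in a single request.
async function addJobs(
  proxyUrl: string,
  queueName: string,
  token: string,
  jobs: Array<{ name: string; data: unknown; opts?: object }>,
) {
  const response = await fetch(jobsEndpoint(proxyUrl, queueName), {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // hypothetical auth header
    },
    body: JSON.stringify(jobs),
  });
  if (!response.ok) {
    throw new Error(`Failed to add jobs: ${response.status}`);
  }
  return response.json();
}

// Example usage, assuming a proxy listening on localhost:8080:
// await addJobs("http://localhost:8080", "paint", "my-token", [
//   { name: "red-car", data: { color: "red" } },
//   { name: "blue-truck", data: { color: "blue" }, opts: { attempts: 3 } },
// ]);
```

On the processing side, the service would expose its own HTTP endpoint and register it with the proxy as a worker, so the proxy can call it whenever a job is available.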
