Kue

Kue is no longer maintained

Please see e.g. Bull as an alternative. Thank you!


Kue is a priority job queue backed by redis, built for node.js.

PROTIP This is the latest Kue documentation, make sure to also read the changelist.

Upgrade Notes (Please Read)

Installation

  • Latest release:

    $ npm install kue
    
  • Master branch:

    $ npm install http://github.com/Automattic/kue/tarball/master
    


Features

  • Delayed jobs
  • Distribution of parallel work load
  • Job event and progress pubsub
  • Job TTL
  • Optional retries with backoff
  • Graceful workers shutdown
  • Full-text search capabilities
  • RESTful JSON API
  • Rich integrated UI
  • Infinite scrolling
  • UI progress indication
  • Job specific logging
  • Powered by Redis

Overview

Creating Jobs

First create a job Queue with kue.createQueue():

var kue = require('kue')
  , queue = kue.createQueue();

Calling queue.create() with the type of job ("email"), and arbitrary job data will return a Job, which can then be save()ed, adding it to redis, with a default priority level of "normal". The save() method optionally accepts a callback, responding with an error if something goes wrong. The title key is special-cased, and will display in the job listings within the UI, making it easier to find a specific job.

var job = queue.create('email', {
    title: 'welcome email for tj'
  , to: '[email protected]'
  , template: 'welcome-email'
}).save( function(err){
   if( !err ) console.log( job.id );
});

Job Priority

To specify the priority of a job, simply invoke the priority() method with a number, or priority name, which is mapped to a number.

queue.create('email', {
    title: 'welcome email for tj'
  , to: '[email protected]'
  , template: 'welcome-email'
}).priority('high').save();

The default priority map is as follows:

{
    low: 10
  , normal: 0
  , medium: -5
  , high: -10
  , critical: -15
};

Failure Attempts

By default jobs have only one attempt; that is, when they fail they are marked as a failure and remain that way until you intervene. However, Kue allows you to specify the number of attempts, which is important for jobs such as transferring an email, which upon failure can usually be retried without issue. To do this invoke the .attempts() method with a number.

 queue.create('email', {
     title: 'welcome email for tj'
   , to: '[email protected]'
   , template: 'welcome-email'
 }).priority('high').attempts(5).save();

Failure Backoff

Job retry attempts are made as soon as the job fails, with no delay, even if your job had a delay set via Job#delay. If you want to delay job re-attempts upon failure (known as backoff), you can use the Job#backoff method in several ways:

    // Honor job's original delay (if set) at each attempt, defaults to fixed backoff
    job.attempts(3).backoff( true )

    // Override delay value, fixed backoff
    job.attempts(3).backoff( {delay: 60*1000, type:'fixed'} )

    // Enable exponential backoff using original delay (if set)
    job.attempts(3).backoff( {type:'exponential'} )

    // Use a function to get a customized next attempt delay value
    job.attempts(3).backoff( function( attempts, delay ){
      //attempts will correspond to the nth attempt failure so it will start with 0
      //delay will be the amount of the last delay, not the initial delay unless attempts === 0
      return my_customized_calculated_delay;
    })

In the last scenario, the provided function will be executed (via eval) on each re-attempt to get the next attempt's delay value, which means you can't reference external/context variables within it.

Job TTL

Job producers can set an expiry value for the time their job can live in the active state, so that if a worker doesn't reply in a timely fashion, Kue will fail the job with a TTL exceeded error message, preventing it from being stuck in the active state and spoiling concurrency.

queue.create('email', {title: 'email job with TTL'}).ttl(milliseconds).save();
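
For instance, a minimal sketch with illustrative values, giving workers 60 seconds to finish the job and allowing up to 3 attempts:

queue.create('email', {title: 'email job with TTL'})
  .ttl(60 * 1000)   // fail the job if done() isn't called within 60 seconds
  .attempts(3)
  .save();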

Job Logs

Job-specific logs enable you to expose information to the UI at any point in the job's life-time. To do so simply invoke job.log(), which accepts a message string as well as variable-arguments for sprintf-like support:

job.log('$%d sent to %s', amount, user.name);

or anything else (uses util.inspect() internally):

job.log({key: 'some key', value: 10});
job.log([1,2,3,5,8]);
job.log(10.1);

Job Progress

Job progress is extremely useful for long-running jobs such as video conversion. To update the job's progress simply invoke job.progress(completed, total [, data]):

job.progress(frames, totalFrames);

data can be used to pass extra information about the job, for example a message or an object with some extra contextual data about the current status.
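
For example, a sketch (the message format is illustrative) passing a contextual message along with the progress update:

// report progress plus an illustrative status message for consumers of the progress event
job.progress(frames, totalFrames, {message: 'encoded frame ' + frames});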

Job Events

Job-specific events are fired on the Job instances via Redis pubsub. The following events are currently supported:

  • enqueue the job is now queued
  • start the job is now running
  • promotion the job is promoted from delayed state to queued
  • progress the job's progress ranging from 0-100
  • failed attempt the job has failed, but has remaining attempts yet
  • failed the job has failed and has no remaining attempts
  • complete the job has completed
  • remove the job has been removed

For example this may look something like the following:

var job = queue.create('video conversion', {
    title: 'converting loki\'s to avi'
  , user: 1
  , frames: 200
});

job.on('complete', function(result){
  console.log('Job completed with data ', result);

}).on('failed attempt', function(errorMessage, doneAttempts){
  console.log('Job failed');

}).on('failed', function(errorMessage){
  console.log('Job failed');

}).on('progress', function(progress, data){
  console.log('\r  job #' + job.id + ' ' + progress + '% complete with data ', data );

});

Note that Job level events are not guaranteed to be received upon process restarts, since a restarted node.js process will lose the reference to the specific Job object. If you want a more reliable event handler, look at Queue Events.

Note Kue stores job objects in memory until they are complete/failed in order to emit events on them. If you have a large number of concurrent uncompleted jobs, turn this feature off and use queue-level events for better memory scaling.

kue.createQueue({jobEvents: false})

Alternatively, you can use the job level function events to control whether events are fired for a job at the job level.

var job = queue.create('test').events(false).save();

Queue Events

Queue-level events provide access to the job-level events previously mentioned, however scoped to the Queue instance to apply logic at a "global" level. An example of this is removing completed jobs:

queue.on('job enqueue', function(id, type){
  console.log( 'Job %s got queued of type %s', id, type );

}).on('job complete', function(id, result){
  kue.Job.get(id, function(err, job){
    if (err) return;
    job.remove(function(err){
      if (err) throw err;
      console.log('removed completed job #%d', job.id);
    });
  });
});

The events available are the same as mentioned in "Job Events", however prefixed with "job ".
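
For instance, a sketch listening for failures at the queue level; the logging is illustrative:

queue.on('job failed', function(id, errorMessage){
  // the job id comes first, followed by the same arguments as the job-level event
  console.log('Job %s failed: %s', id, errorMessage);
});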

Delayed Jobs

Delayed jobs may be scheduled to be queued for an arbitrary distance in time by invoking the .delay(ms) method, passing the number of milliseconds relative to now. Alternatively, you can pass a JavaScript Date object with a specific time in the future. This automatically flags the Job as "delayed".

var email = queue.create('email', {
    title: 'Account renewal required'
  , to: '[email protected]'
  , template: 'renewal-email'
}).delay(milliseconds)
  .priority('high')
  .save();
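
A sketch using a Date instead of a millisecond offset (the date itself is illustrative):

queue.create('email', {
    title: 'Account renewal required'
  , to: '[email protected]'
  , template: 'renewal-email'
}).delay(new Date('2030-01-01T09:00:00Z')) // schedule for a specific future time
  .priority('high')
  .save();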

Kue checks delayed jobs with a timer, promoting them once the scheduled delay has been exceeded; by default it checks the top 1000 delayed jobs every second.

Processing Jobs

Processing jobs is simple with Kue. First create a Queue instance, much as we do for creating jobs, providing us access to redis etc., then invoke queue.process() with the associated type. Note that unlike what the name createQueue suggests, it currently returns a singleton Queue instance, so you can configure and use only a single Queue object within your node.js process.

In the following example we pass the callback done to email. When an error occurs we invoke done(err) to tell Kue something happened; otherwise we invoke done() only when the job is complete. If this function responds with an error it will be displayed in the UI and the job will be marked as a failure. The error object passed to done should be a standard Error.

var kue = require('kue')
 , queue = kue.createQueue();

queue.process('email', function(job, done){
  email(job.data.to, done);
});

function email(address, done) {
  if(!isValidEmail(address)) {
    //done('invalid to address') is possible but discouraged
    return done(new Error('invalid to address'));
  }
  // email send stuff...
  done();
}

Workers can also pass a job result as the second parameter to done, done(null, result), to store it in the Job.result key. result is also passed through complete event handlers so that job producers can receive it if they wish.
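
For example, a sketch (the result object is illustrative) where the worker reports a result and the producer reads it from the complete event:

queue.process('email', function(job, done){
  // ... send the email, then report an illustrative result object
  done(null, {deliveredAt: Date.now()});
});

job.on('complete', function(result){
  console.log('delivered at', result.deliveredAt);
});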

Processing Concurrency

By default a call to queue.process() will only accept one job at a time for processing. For small tasks like sending emails this is not ideal, so we may specify the maximum active jobs for this type by passing a number:

queue.process('email', 20, function(job, done){
  // ...
});

Pause Processing

Workers can temporarily pause and resume their activity. That is, after calling pause they will receive no jobs in their process callback until resume is called. The pause function gracefully shuts down this worker and uses the same internal functionality as the shutdown method in Graceful Shutdown.

queue.process('email', function(job, ctx, done){
  ctx.pause( 5000, function(err){
    console.log("Worker is paused... ");
    setTimeout( function(){ ctx.resume(); }, 10000 );
  });
});

Note As of Kue >=0.9.0, the ctx parameter is the second argument of the process callback function and done is idiomatically always the last.

Note The pause method signature changed in Kue >=0.9.0 to move the callback function to the last position.

Updating Progress

For a "real" example, let's say we need to compile a PDF from numerous slides with node-canvas. Our job may consist of the following data, note that in general you should not store large data in the job it-self, it's better to store references like ids, pulling them in while processing.

queue.create('slideshow pdf', {
    title: user.name + "'s slideshow"
  , slides: [...] // keys to data stored in redis, mongodb, or some other store
});

We can access this same arbitrary data within a separate process while processing, via the job.data property. In the example we render each slide one-by-one, updating the job's log and progress.

queue.process('slideshow pdf', 5, function(job, done){
  var slides = job.data.slides
    , len = slides.length;

  function next(i) {
    if (i === len) return done(); // all slides rendered
    var slide = slides[i]; // pretend we did a query on this slide id ;)
    job.log('rendering %dx%d slide', slide.width, slide.height);
    renderSlide(slide, function(err){
      if (err) return done(err);
      job.progress(i + 1, len, {nextSlide : i + 1 === len ? 'itsdone' : i + 1});
      next(i + 1);
    });
  }

  next(0);
});

Graceful Shutdown

Queue#shutdown([timeout,] fn) signals all workers to stop processing after their current active job is done. Workers wait timeout milliseconds for their active job's done to be called, otherwise they mark the active job failed with a shutdown error reason. When all workers have told Kue they are stopped, fn is called.

var queue = require('kue').createQueue();

process.once( 'SIGTERM', function ( sig ) {
  queue.shutdown( 5000, function(err) {
    console.log( 'Kue shutdown: ', err||'' );
    process.exit( 0 );
  });
});

Note that the shutdown method signature changed in Kue >=0.9.0 to move the callback function to the last position.

Error Handling

All errors, whether in the Redis client library or in the Queue, are emitted to the Queue object. You should bind to error events to prevent uncaught exceptions, or to debug kue errors.

var queue = require('kue').createQueue();

queue.on( 'error', function( err ) {
  console.log( 'Oops... ', err );
});

Prevent from Stuck Active Jobs

Kue marks a job complete/failed when done is called by your worker, so you should use proper error handling to prevent uncaught exceptions in your worker's code and the node.js process exiting before in-progress jobs get done. This can be achieved in two ways:

  1. Wrapping your worker's process function in Domains
queue.process('my-error-prone-task', function(job, done){
  var domain = require('domain').create();
  domain.on('error', function(err){
    done(err);
  });
  domain.run(function(){ // your process function
    throw new Error( 'bad things happen' );
    done();
  });
});

Notice - Domains are deprecated in Node.js (stability 0) and their use is not recommended.

This is the softest and best solution; however, it is not built into Kue. Please refer to this discussion. You can comment on this feature in the related open Kue issue.

You can also use promises to do something like

queue.process('my-error-prone-task', function(job, done){
  Promise.method( function(){ // your process function
    throw new Error( 'bad things happen' );
  })().nodeify(done)
});

but this won't catch exceptions in your async call stack as domains do.

  2. Binding to uncaughtException and gracefully shutting down Kue, however this is not a recommended error handling idiom in JavaScript since you lose the error context.
process.once( 'uncaughtException', function(err){
  console.error( 'Something bad happened: ', err );
  queue.shutdown( 1000, function(err2){
    console.error( 'Kue shutdown result: ', err2 || 'OK' );
    process.exit( 0 );
  });
});

Unstable Redis connections

Kue currently manages job state on the client side, and when Redis crashes in the middle of those operations, stuck jobs or index inconsistencies can occur. The consequence is that a certain number of jobs will be stuck, and will be pulled out by a worker only when new jobs are created; if no more new jobs are created, they stay stuck forever. So we strongly suggest that you run the watchdog to fix this issue by calling:

queue.watchStuckJobs(interval)

interval is in milliseconds and defaults to 1000ms
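
For example, to check for stuck jobs every 6 seconds (the interval is illustrative):

queue.watchStuckJobs(6 * 1000);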

Kue will be refactored to fully atomic job state management in version 1.0, using Lua scripts and/or a BRPOPLPUSH combination. You can read more here and here.

Queue Maintenance

The Queue object has two types of methods to tell you about the number of jobs in each state:

queue.inactiveCount( function( err, total ) { // others are activeCount, completeCount, failedCount, delayedCount
  if( total > 100000 ) {
    console.log( 'We need some back pressure here' );
  }
});

You can also query on a specific job type:

queue.failedCount( 'my-critical-job', function( err, total ) {
  if( total > 10000 ) {
    console.log( 'This is tOoOo bad' );
  }
});

and you can iterate over job ids:

queue.inactive( function( err, ids ) { // others are active, complete, failed, delayed
  // you may want to fetch each id to get the Job object out of it...
});

However, the latter doesn't scale to large deployments; there you can use the more specific Job static methods:

kue.Job.rangeByState( 'failed', 0, n, 'asc', function( err, jobs ) {
  // you have an array of maximum n Job objects here
});

or

kue.Job.rangeByType( 'my-job-type', 'failed', 0, n, 'asc', function( err, jobs ) {
  // you have an array of maximum n Job objects here
});

Note that the last two methods are subject to change in later Kue versions.

Programmatic Job Management

If you did none of the above in the Error Handling section, or your process lost active jobs in any other way, you can recover them when your process is restarted. A blind approach would be to re-queue all stuck jobs:

queue.active( function( err, ids ) {
  ids.forEach( function( id ) {
    kue.Job.get( id, function( err, job ) {
      // Your application should check if job is a stuck one
      job.inactive();
    });
  });
});

Note that in a clustered deployment your application should be careful not to touch a job that is valid and currently in process by other workers.

Job Cleanup

Job data and search indexes eat up Redis memory, so you will need some housekeeping process in real-world deployments. Your first option is automatic job removal on completion:

queue.create( ... ).removeOnComplete( true ).save()
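
A sketch, with illustrative job data, of a job that is dropped from Redis as soon as it completes:

queue.create('email', {title: 'transient email job'})
  .removeOnComplete(true)
  .save();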

But if you eventually/temporarily need completed job data, you can set up an on-demand job removal script like the one below to remove the top n completed jobs:

kue.Job.rangeByState( 'complete', 0, n, 'asc', function( err, jobs ) {
  jobs.forEach( function( job ) {
    job.remove( function(){
      console.log( 'removed ', job.id );
    });
  });
});

Note that you should provide enough time for the .remove calls on each job object to complete before your process exits, or job indexes will leak.
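
One way to do this is to count the outstanding removals and exit only after they all finish; a sketch, assuming your cleanup script should terminate the process itself:

kue.Job.rangeByState('complete', 0, n, 'asc', function(err, jobs){
  var pending = jobs.length;
  if (!pending) return process.exit(0);
  jobs.forEach(function(job){
    job.remove(function(){
      console.log('removed', job.id);
      // exit only once every removal has completed, so indexes don't leak
      if (--pending === 0) process.exit(0);
    });
  });
});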

Redis Connection Settings

By default, Kue will connect to Redis using the client default settings (port defaults to 6379, host defaults to 127.0.0.1, prefix defaults to q). Queue#createQueue(options) accepts redis connection options in options.redis key.

var kue = require('kue');
var q = kue.createQueue({
  prefix: 'q',
  redis: {
    port: 1234,
    host: '10.0.50.20',
    auth: 'password',
    db: 3, // if provided select a non-default redis db
    options: {
      // see https://github.com/mranney/node_redis#rediscreateclient
    }
  }
});

prefix controls the key names used in Redis. By default, this is simply q. Prefix generally shouldn't be changed unless you need to use one Redis instance for multiple apps. It can also be useful for providing an isolated testbed alongside your main application.
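
For example, a test process could use a distinct prefix so that its keys never collide with production data (the prefix name is illustrative):

var testQueue = kue.createQueue({ prefix: 'q-test' });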

You can also specify the connection information as a URL string.

var q = kue.createQueue({
  redis: 'redis://example.com:1234?redis_option=value&redis_option=value'
});

Connecting using Unix Domain Sockets

Since node_redis supports Unix Domain Sockets, you can tell Kue to connect over one as well. See unix-domain-socket for your redis server configuration.

var kue = require('kue');
var q = kue.createQueue({
  prefix: 'q',
  redis: {
    socket: '/data/sockets/redis.sock',
    auth: 'password',
    options: {
      // see https://github.com/mranney/node_redis#rediscreateclient
    }
  }
});

Replacing Redis Client Module

Any node.js redis client library that conforms (or can be adapted) to the node_redis API can be injected into Kue. You should only provide a createClientFactory function as a redis connection factory instead of providing node_redis connection options.

Below is sample code using redis-sentinel to connect to Redis Sentinel for automatic master/slave failover.

var kue = require('kue');
var Sentinel = require('redis-sentinel');
var endpoints = [
  {host: '192.168.1.10', port: 6379},
  {host: '192.168.1.11', port: 6379}
];
var opts = {}; // Standard node_redis client options
var masterName = 'mymaster';
var sentinel = Sentinel.Sentinel(endpoints);

var q = kue.createQueue({
   redis: {
      createClientFactory: function(){
         return sentinel.createClient(masterName, opts);
      }
   }
});

Note that all <0.8.x client code should be refactored to pass redis options to Queue#createQueue instead of overriding redis#createClient monkey-patch style, or it will break from Kue 0.8.x onwards.

Using ioredis client with cluster support

var Redis = require('ioredis');
var kue = require('kue');

// using https://github.com/72squared/vagrant-redis-cluster

var queue = kue.createQueue({
    redis: {
      createClientFactory: function () {
        return new Redis.Cluster([{
          port: 7000
        }, {
          port: 7001
        }]);
      }
    }
  });

User-Interface

The UI is a small Express application. A script is provided in bin/ for running the interface as a standalone application with default settings. You may pass in options for the port, redis-url, and prefix. For example:

node_modules/kue/bin/kue-dashboard -p 3050 -r redis://127.0.0.1:3000 -q prefix

You can fire it up from within another application too:

var kue = require('kue');
kue.createQueue(...);
kue.app.listen(3000);

The title defaults to "Kue". To alter this, invoke:

kue.app.set('title', 'My Application');

Note that if you are using non-default Kue options, kue.createQueue(...) must be called before accessing kue.app.

Third-party interfaces

You can also use Kue-UI web interface contributed by Arnaud Bénard

JSON API

Along with the UI Kue also exposes a JSON API, which is utilized by the UI.

GET /job/search?q=

Query jobs, for example "GET /job/search?q=avi video":

["5", "7", "10"]

By default kue indexes the whole Job data object for search, but this can be customized by calling Job#searchKeys to tell kue which keys on the Job data to create an index for:

var kue = require('kue');
queue = kue.createQueue();
queue.create('email', {
    title: 'welcome email for tj'
  , to: '[email protected]'
  , template: 'welcome-email'
}).searchKeys( ['to', 'title'] ).save();

The search feature is turned off by default from Kue >=0.9.0. Read more about this here. If you need it, you should enable search indexes and add reds to your dependencies:

var kue = require('kue');
q = kue.createQueue({
    disableSearch: false
});

npm install reds --save

GET /stats

Currently responds with state counts, and worker activity time in milliseconds:

{"inactiveCount":4,"completeCount":69,"activeCount":2,"failedCount":0,"workTime":20892}

GET /job/:id

Get a job by :id:

{"id":"3","type":"email","data":{"title":"welcome email for tj","to":"[email protected]","template":"welcome-email"},"priority":-10,"progress":"100","state":"complete","attempts":null,"created_at":"1309973155248","updated_at":"1309973155248","duration":"15002"}

GET /job/:id/log

Get job :id's log:

['foo', 'bar', 'baz']

GET /jobs/:from..:to/:order?

Get jobs with the specified range :from to :to, for example "/jobs/0..2", where :order may be "asc" or "desc":

[{"id":"12","type":"email","data":{"title":"welcome email for tj","to":"[email protected]","template":"welcome-email"},"priority":-10,"progress":0,"state":"active","attempts":null,"created_at":"1309973299293","updated_at":"1309973299293"},{"id":"130","type":"email","data":{"title":"welcome email for tj","to":"[email protected]","template":"welcome-email"},"priority":-10,"progress":0,"state":"active","attempts":null,"created_at":"1309975157291","updated_at":"1309975157291"}]

GET /jobs/:state/:from..:to/:order?

Same as above, restricting by :state which is one of:

- active
- inactive
- failed
- complete

GET /jobs/:type/:state/:from..:to/:order?

Same as above, however restricted to :type and :state.

DELETE /job/:id

Delete job :id:

$ curl -X DELETE http://localhost:3000/job/2
{"message":"job 2 removed"}

POST /job

Create a job:

$ curl -H "Content-Type: application/json" -X POST -d \
    '{
       "type": "email",
       "data": {
         "title": "welcome email for tj",
         "to": "[email protected]",
         "template": "welcome-email"
       },
       "options" : {
         "attempts": 5,
         "priority": "high"
       }
     }' http://localhost:3000/job
{"message": "job created", "id": 3}

You can create multiple jobs at once by passing an array. In this case, the response will be an array too, preserving the order:

$ curl -H "Content-Type: application/json" -X POST -d \
    '[{
       "type": "email",
       "data": {
         "title": "welcome email for tj",
         "to": "[email protected]",
         "template": "welcome-email"
       },
       "options" : {
         "attempts": 5,
         "priority": "high"
       }
     },
     {
       "type": "email",
       "data": {
         "title": "followup email for tj",
         "to": "[email protected]",
         "template": "followup-email"
       },
       "options" : {
         "delay": 86400,
         "attempts": 5,
         "priority": "high"
       }
     }]' http://localhost:3000/job
[
  {"message": "job created", "id": 4},
  {"message": "job created", "id": 5}
]

Note: when inserting multiple jobs in bulk, if one insertion fails Kue will keep processing the remaining jobs in order. The response array will contain the ids of the jobs added successfully, and any failed element will be an object describing the error: {"error": "error reason"}.

Parallel Processing With Cluster

The example below shows how you may use Cluster to spread the job processing load across CPUs. Please see Cluster module's documentation for more detailed examples on using it.

When cluster.isMaster is true, the file is being executed in the context of the master process, in which case you may perform tasks that you only want once, such as starting the web app bundled with Kue. The logic in the else block is executed per worker.

var kue = require('kue')
  , cluster = require('cluster')
  , queue = kue.createQueue();

var clusterWorkerSize = require('os').cpus().length;

if (cluster.isMaster) {
  kue.app.listen(3000);
  for (var i = 0; i < clusterWorkerSize; i++) {
    cluster.fork();
  }
} else {
  queue.process('email', 10, function(job, done){
    var pending = 5
      , total = pending;

    var interval = setInterval(function(){
      job.log('sending!');
      job.progress(total - pending, total);
      --pending || done();
      pending || clearInterval(interval);
    }, 1000);
  });
}

This will create an email job processor (worker) for each of your machine's CPU cores, each of which can handle 10 concurrent email jobs, for a total of 10 * N concurrent email jobs on your N-core machine.

Now when you visit Kue's UI in the browser you'll see that jobs are being processed roughly N times faster! (if you have N cores).

Securing Kue

Through the use of app mounting you may customize the web application, enabling TLS, or adding additional middleware like basic-auth-connect.

$ npm install --save basic-auth-connect

var basicAuth = require('basic-auth-connect');
var app = express.createServer({ ... tls options ... });
app.use(basicAuth('foo', 'bar'));
app.use(kue.app);
app.listen(3000);

Testing

Enable test mode to push all jobs into a jobs array. Make assertions against the jobs in that array to ensure code under test is correctly enqueuing jobs.

queue = require('kue').createQueue();

before(function() {
  queue.testMode.enter();
});

afterEach(function() {
  queue.testMode.clear();
});

after(function() {
  queue.testMode.exit()
});

it('does something cool', function() {
  queue.createJob('myJob', { foo: 'bar' }).save();
  queue.createJob('anotherJob', { baz: 'bip' }).save();
  expect(queue.testMode.jobs.length).to.equal(2);
  expect(queue.testMode.jobs[0].type).to.equal('myJob');
  expect(queue.testMode.jobs[0].data).to.eql({ foo: 'bar' });
});

IMPORTANT: By default jobs aren't processed when created during test mode. You can enable job processing by passing true to testMode.enter:

before(function() {
  queue.testMode.enter(true);
});

Screencasts

Contributing

We love contributions!

When contributing, follow the simple rules:

  • Don't violate DRY principles.
  • Boy Scout Rule needs to have been applied.
  • Your code should look like all the other code – this project should look like it was written by one person, always.
  • If you want to propose something – just create an issue and describe your question with as much detail as you can.
  • If you think you have some general improvement, consider creating a pull request with it.
  • If you add new code, it should be covered by tests. No tests – no code.
  • If you add a new feature, don't forget to update the documentation for it.
  • If you find a bug (or at least you think it is a bug), create an issue with the library version and a test case that we can run to see what you are talking about, or at least full steps by which we can reproduce it.

License

(The MIT License)

Copyright (c) 2011 LearnBoost <[email protected]>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
