Nano: The official Apache CouchDB library for Node.js

Overview


Features:

  • Minimalistic - There is only a minimum of abstraction between you and CouchDB.
  • Pipes - Proxy requests from CouchDB directly to your end user. ( ...AsStream functions only)
  • Promises - The vast majority of library calls return native Promises.
  • TypeScript - Detailed TypeScript definitions are built in.
  • Errors - Errors are proxied directly from CouchDB: if you know CouchDB you already know nano.

Installation

  1. Install npm
  2. npm install nano

or save nano as a dependency of your project with

npm install --save nano

Note the minimum required version of Node.js is 10.

Getting started

To use nano you first need to connect it to your CouchDB installation:

const nano = require('nano')('http://localhost:5984');

Note: The URL you supply may also contain authentication credentials e.g. http://admin:mypassword@localhost:5984.

To create a new database:

nano.db.create('alice');

and to use an existing database:

const alice = nano.db.use('alice');

Under the hood, calls like nano.db.create make HTTP API calls to the CouchDB service. Such operations are asynchronous. There are two ways to receive the asynchronous data back from the library:

  1. Promises
nano.db.create('alice').then((data) => {
  // success - response is in 'data'
}).catch((err) => {
  // failure - error information is in 'err'
})

or in the async/await style:

try {
  const response = await nano.db.create('alice')
  // succeeded
  console.log(response)
} catch (e) {
  // failed
  console.error(e)
}
  2. Callbacks
nano.db.create('alice', (err, data) => {
  // errors are in 'err' & response is in 'data'
})

In nano the callback function always receives three arguments:

  • err - The error, if any.
  • body - The HTTP response body from CouchDB, if no error. JSON parsed body, binary for non JSON responses.
  • header - The HTTP response header from CouchDB, if no error.
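If you prefer Promises but are dealing with the three-argument callback signature above, a small wrapper can bridge the two styles. The sketch below is illustrative only (promisify3 and fakeCreate are not part of nano; fakeCreate merely stands in for a nano-style call):

```javascript
// Hypothetical helper: wraps a function whose last argument is a
// nano-style callback (err, body, header) and returns a Promise
// resolving to { body, header }.
function promisify3(fn) {
  return (...args) => new Promise((resolve, reject) => {
    fn(...args, (err, body, header) => {
      if (err) {
        reject(err)
      } else {
        resolve({ body, header })
      }
    })
  })
}

// Stub standing in for a nano call, for illustration only.
const fakeCreate = (name, callback) => {
  callback(null, { ok: true, db: name }, { statusCode: 201 })
}

promisify3(fakeCreate)('alice').then(({ body, header }) => {
  console.log(body.ok, header.statusCode) // true 201
})
```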

The documentation will follow the async/await style.


A simple but complete example in the async/await style:

async function asyncCall() {
  await nano.db.destroy('alice')
  await nano.db.create('alice')
  const alice = nano.use('alice')
  const response = await alice.insert({ happy: true }, 'rabbit')
  return response
}
asyncCall()

Running this example inserts a document with an _id of rabbit and produces:

{ ok: true,
  id: 'rabbit',
  rev: '1-6e4cb465d49c0368ac3946506d26335d' }

You can also see your document in Fauxton, CouchDB's web admin interface.

Configuration

Configuring nano to use your database server is as simple as:

const nano = require('nano')('http://localhost:5984')
const db = nano.use('foo');

If you don't need to instrument database objects you can simply:

// nano parses the URL and knows this is a database
const db = require('nano')('http://localhost:5984/foo');

To specify further configuration options, you can pass an object literal instead:

// nano parses the URL and knows this is a database
const opts = {
  url: 'http://localhost:5984/foo',
  requestDefaults: { proxy: { 'protocol': 'http', 'host': 'myproxy.net' } }
};
const db = require('nano')(opts);

Please check the axios documentation for more information on the request defaults. They support features like proxies, timeouts etc.

You can tell nano to not parse the URL (maybe the server is behind a proxy, is accessed through a rewrite rule or other):

// nano does not parse the URL and returns the server api
// "http://localhost:5984/prefix" is the CouchDB server root
const couch = require('nano')(
  { url : "http://localhost:5984/prefix",
    parseUrl : false
  });
const db = couch.use('foo');

Pool size and open sockets

A very important configuration parameter if you have a high-traffic website and are using nano is the HTTP pool size. By default, the Node.js HTTP global agent allows an infinite number of active connections to run simultaneously. This can be limited to a user-defined number (maxSockets) of requests that are "in flight", while others are kept in a queue. Here's an example explicitly using the Node.js HTTP agent configured with custom options:

const http = require('http')
const myagent = new http.Agent({
  keepAlive: true,
  maxSockets: 25
})

const db = require('nano')({ 
  url: 'http://localhost:5984/foo',
  requestDefaults : { 
    agent : myagent 
  }
});

TypeScript

There is a full TypeScript definition included in the nano package. Your TypeScript editor will show you hints as you write your code with the nano library and your own custom classes:

import * as Nano  from 'nano'

let n = Nano('http://USERNAME:PASSWORD@localhost:5984')
let db = n.db.use('people')

interface iPerson extends Nano.MaybeDocument {
  name: string,
  dob: string
}

class Person implements iPerson {
  _id: string
  _rev: string
  name: string
  dob: string

  constructor(name: string, dob: string) {
    this._id = undefined
    this._rev = undefined
    this.name = name
    this.dob = dob
  }

  processAPIResponse(response: Nano.DocumentInsertResponse) {
    if (response.ok === true) {
      this._id = response.id
      this._rev = response.rev
    }
  }
}

let p = new Person('Bob', '2015-02-04')
db.insert(p).then((response) => {
  p.processAPIResponse(response)
  console.log(p)
})

Database functions

nano.db.create(name, [callback])

Creates a CouchDB database with the given name:

await nano.db.create('alice')

nano.db.get(name, [callback])

Get information about the database name:

const info = await nano.db.get('alice')

nano.db.destroy(name, [callback])

Destroys the database name:

await nano.db.destroy('alice')

nano.db.list([callback])

Lists all the CouchDB databases:

const dblist = await nano.db.list()

nano.db.listAsStream()

Lists all the CouchDB databases as a stream:

nano.db.listAsStream()
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

nano.db.compact(name, [designname], [callback])

Compacts name; if designname is specified, also compacts its views.
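Compaction requests are issued per database, so compacting several databases is a simple loop. A minimal sketch, assuming a nano-style client as described above (compactAll is an illustrative helper, not a nano function):

```javascript
// Hypothetical helper: compacts each named database in turn and
// collects the responses.
async function compactAll(nano, names) {
  const results = []
  for (const name of names) {
    // nano.db.compact(name) triggers compaction for one database
    results.push(await nano.db.compact(name))
  }
  return results
}

// e.g. await compactAll(nano, ['alice', 'bob'])
```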

nano.db.replicate(source, target, [opts], [callback])

Replicates source to target with options opts. The target database has to exist; add create_target:true to opts to create it prior to replication:

const response = await nano.db.replicate('alice', 
                  'http://admin:[email protected]:5984/alice',
                  { create_target:true })

nano.db.replication.enable(source, target, [opts], [callback])

Enables replication using the new CouchDB API from source to target with options opts. target has to exist; add create_target:true to opts to create it prior to replication. Replication will survive server restarts.

const response = await nano.db.replication.enable('alice', 
                  'http://admin:[email protected]:5984/alice',
                  { create_target:true })

nano.db.replication.query(id, [opts], [callback])

Queries the state of replication using the new CouchDB API. The id comes from the response given by the call to replication.enable:

const r = await nano.db.replication.enable('alice', 
                  'http://admin:[email protected]:5984/alice',
                   { create_target:true })
const q = await nano.db.replication.query(r.id)

nano.db.replication.disable(id, [opts], [callback])

Disables replication using the new CouchDB API. The id comes from the response given by the call to replication.enable:

const r = await nano.db.replication.enable('alice', 
                   'http://admin:[email protected]:5984/alice',
                   { create_target:true })
await nano.db.replication.disable(r.id);

nano.db.changes(name, [params], [callback])

Asks for the changes feed of name; params contains additions to the query string.

const c = await nano.db.changes('alice')

nano.db.changesAsStream(name, [params])

Same as nano.db.changes but returns a stream.

nano.db.changesAsStream('alice').pipe(process.stdout);

nano.db.info([callback])

Gets database information:

const info = await nano.db.info()

nano.use(name)

Returns a database object that allows you to perform operations against that database:

const alice = nano.use('alice');
await alice.insert({ happy: true }, 'rabbit')

The database object can be used to access the Document Functions.

nano.db.use(name)

Alias for nano.use

nano.db.scope(name)

Alias for nano.use

nano.scope(name)

Alias for nano.use

nano.request(opts, [callback])

Makes a custom request to CouchDB. This can be used to create your own HTTP request to the CouchDB server, to perform operations where there is no nano function that encapsulates it. The available opts are:

  • opts.db – the database name
  • opts.method – the http method, defaults to get
  • opts.path – the full path of the request, overrides opts.doc and opts.att
  • opts.doc – the document name
  • opts.att – the attachment name
  • opts.qs – query string parameters, appended after any existing opts.path, opts.doc, or opts.att
  • opts.content_type – the content type of the request, defaults to json
  • opts.headers – additional http headers, overrides existing ones
  • opts.body – the document or attachment body
  • opts.encoding – the encoding for attachments
  • opts.multipart – array of objects for multipart request
  • opts.stream - if true, a request object is returned; the default is false, in which case a Promise is returned.
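The precedence between opts.path, opts.doc and opts.att described above can be sketched as a small path-building function. This is illustrative only, under the stated rules; buildPath is not a nano export and the real library's internals may differ:

```javascript
// Hypothetical sketch of how a request path might be assembled:
// opts.path overrides opts.doc and opts.att; otherwise db, doc and
// att are joined in order.
function buildPath(opts) {
  const tail = opts.path
    ? [opts.path]
    : [opts.doc, opts.att].filter(Boolean)
  return '/' + [opts.db, ...tail].filter(Boolean).join('/')
}

console.log(buildPath({ db: 'alice', doc: 'rabbit', att: 'rabbit.png' })) // /alice/rabbit/rabbit.png
console.log(buildPath({ db: 'alice', doc: 'rabbit', path: '_all_docs' })) // /alice/_all_docs
```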

nano.relax(opts, [callback])

Alias for nano.request

nano.config

An object containing the nano configuration; possible keys are:

  • url - the CouchDB URL
  • db - the database name

nano.updates([params], [callback])

Listen to db updates; the available params are:

  • params.feed – Type of feed. Can be one of:
    • longpoll: Closes the connection after the first event.
    • continuous: Sends a line of JSON per event. Keeps the socket open until timeout.
    • eventsource: Like continuous, but sends the events in EventSource format.
  • params.timeout – Number of seconds until CouchDB closes the connection. Default is 60.
  • params.heartbeat – Whether CouchDB will send a newline character (\n) on timeout. Default is true.

Document functions

db.insert(doc, [params], [callback])

Inserts doc in the database with optional params. If params is a string, it's assumed it is the intended document _id. If params is an object, it's passed as query string parameters and docName is checked for defining the document _id:

const alice = nano.use('alice');
const response = await alice.insert({ happy: true }, 'rabbit')

The insert function can also be used with the method signature db.insert(doc,[callback]), where the doc contains the _id field e.g.

const alice = nano.use('alice')
const response = await alice.insert({ _id: 'myid', happy: true })

and also used to update an existing document, by including the _rev token in the document being saved:

const alice = nano.use('alice')
const response = await alice.insert({ _id: 'myid', _rev: '1-23202479633c2b380f79507a776743d5', happy: false })
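Fetching the current _rev before writing, as shown above, is a common pattern; it can be wrapped in an "upsert" helper. This is a minimal sketch, not a nano function (upsert is an illustrative name; db is assumed to behave like the object returned by nano.use):

```javascript
// Hypothetical upsert: write the doc under the given id, reusing the
// current _rev when the document already exists.
async function upsert(db, id, doc) {
  try {
    const existing = await db.get(id)
    doc._rev = existing._rev
  } catch (e) {
    // not found - insert as a new document
  }
  return db.insert({ ...doc, _id: id })
}

// e.g. await upsert(alice, 'myid', { happy: false })
```

Note that this read-then-write is not atomic: a concurrent writer can still cause a conflict between the get and the insert, so callers may want to retry on a 409 response.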

db.destroy(docname, rev, [callback])

Removes a document from CouchDB whose _id is docname and whose revision is rev:

const response = await alice.destroy('rabbit', '3-66c01cdf99e84c83a9b3fe65b88db8c0')

db.get(docname, [params], [callback])

Gets a document from CouchDB whose _id is docname:

const doc = await alice.get('rabbit')

or with optional query string params:

const doc = await alice.get('rabbit', { revs_info: true })

db.head(docname, [callback])

Same as get, but a lightweight version that returns headers only:

const headers = await alice.head('rabbit')

Note: if you call alice.head in the callback style, the headers are returned to you as the third argument of the callback function.

db.bulk(docs, [params], [callback])

Bulk operations (update/delete/insert) on the database; refer to the CouchDB doc, e.g.:

const documents = [
  { a:1, b:2 },
  { _id: 'tiger', striped: true}
];
const response = await alice.bulk({ docs: documents })
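Very large arrays of documents are often split into smaller batches before being handed to db.bulk, to keep request bodies manageable. A minimal chunking sketch (chunk is an illustrative helper, not part of nano):

```javascript
// Hypothetical helper: split an array of documents into fixed-size
// batches suitable for successive db.bulk calls.
function chunk(docs, size) {
  const batches = []
  for (let i = 0; i < docs.length; i += size) {
    batches.push(docs.slice(i, i + size))
  }
  return batches
}

// e.g. write documents in batches of 500:
// for (const batch of chunk(documents, 500)) {
//   await alice.bulk({ docs: batch })
// }
```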

db.list([params], [callback])

List all the docs in the database.

const doclist = await alice.list()
doclist.rows.forEach((doc) => {
  console.log(doc);
});

or with optional query string additions params:

const doclist = await alice.list({include_docs: true})

db.listAsStream([params])

List all the docs in the database as a stream.

alice.listAsStream()
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.fetch(docnames, [params], [callback])

Bulk fetch of the database documents; docnames are specified as per the CouchDB doc. Additional query string params can be specified; include_docs is always set to true.

const keys = ['tiger', 'zebra', 'donkey'];
const data = await alice.fetch({keys: keys})

db.fetchRevs(docnames, [params], [callback])

** changed in version 6 **

Bulk fetch of the revisions of the database documents; docnames are specified as per the CouchDB doc. Additional query string params can be specified; this is the same method as fetch but include_docs is not automatically set to true.

db.createIndex(indexDef, [callback])

Create an index on database fields, as specified in the CouchDB doc.

const indexDef = {
  index: { fields: ['foo'] },
  name: 'fooindex'
};
const response = await alice.createIndex(indexDef)

Reading Changes Feed

Nano provides a low-level API for making calls to CouchDB's changes feed, or if you want a reliable, resumable changes feed follower, then you need the changesReader.

There are three ways to start listening to the changes feed:

  1. changesReader.start() - to listen to changes indefinitely by repeated "long poll" requests. This mode continues to poll for changes forever.
  2. changesReader.get() - to listen to changes until the end of the changes feed is reached, by repeated "long poll" requests. Once a response with zero changes is received, the 'end' event will indicate the end of the changes and polling will stop.
  3. changesReader.spool() - listen to changes in one long HTTP request (as opposed to repeated round trips). spool is faster but less reliable.

Note: for .get() & .start(), the sequence of API calls can be paused by calling changesReader.pause() and resumed by calling changesReader.resume().

Set up your database connection and then choose changesReader.start() to listen to that database's changes:

const db = nano.db.use('mydb')
db.changesReader.start()
  .on('change', (change) => { console.log(change) })
  .on('batch', (b) => {
    console.log('a batch of', b.length, 'changes has arrived');
  }).on('seq', (s) => {
    console.log('sequence token', s);
  }).on('error', (e) => {
    console.error('error', e);
  })

Note: you probably want to monitor either the change or batch event, not both.

If you want changesReader to hold off making the next _changes API call until you are ready, then supply wait:true in the options to get/start. The next request will only fire when you call changesReader.resume():

db.changesReader.get({wait: true})
  .on('batch', (b) => {
    console.log('a batch of', b.length, 'changes has arrived');
    // do some asynchronous work here and call "changesReader.resume()" 
    // when you're ready for the next API call to be dispatched.
    // In this case, wait 5s before the next changes feed request.
    setTimeout( () => {
      db.changesReader.resume()
    }, 5000)
  }).on('end', () => {
    console.log('changes feed monitoring has stopped');
  });

You may supply a number of options when you start to listen to the changes feed:

Parameter | Description | Default | e.g.
batchSize | The maximum number of changes to ask CouchDB for per HTTP request. This is the maximum number of changes you will receive in a batch event. | 100 | 500
since | The position in the changes feed to start from, where 0 means the beginning of time, now means the current position, and a string token indicates a fixed position in the changes feed. | now | 390768-g1AAAAGveJzLYWBgYMlgTmGQ
includeDocs | Whether to include document bodies or not. | false | true
wait | For get/start mode, automatically pause the changes reader after each request. When the user calls resume(), the changes reader will resume. | false | true
fastChanges | Adds a seq_interval parameter to fetch changes more quickly. | false | true
selector | Filters the changes feed with the supplied Mango selector. | null | {"name":"fred"}
timeout | The number of milliseconds a changes feed request waits for data. | 60000 | 10000

The events it emits are as follows:

Event | Description | Data
change | Each detected change is emitted individually. Only available in get/start modes. | A change object
batch | Each batch of changes is emitted in bulk in quantities up to batchSize. | An array of change objects
seq | Each new sequence token (per HTTP request). This token can be passed into ChangesReader as the since parameter to resume changes feed consumption from a known point. Only available in get/start modes. | String
error | On a fatal error, a descriptive object is returned and change consumption stops. | Error object
end | Emitted when the end of the changes feed is reached. ChangesReader.get() mode only. | Nothing

The ChangesReader library will handle many transient errors, such as network connectivity problems, service capacity limits and malformed data, but it will emit an error event and exit when fed incorrect authentication credentials or an invalid since token.

The change event delivers a change object that looks like this:

{
	"seq": "8-g1AAAAYIeJyt1M9NwzAUBnALKiFOdAO4gpRix3X",
	"id": "2451be085772a9e588c26fb668e1cc52",
	"changes": [{
		"rev": "4-061b768b6c0b6efe1bad425067986587"
	}],
	"doc": {
		"_id": "2451be085772a9e588c26fb668e1cc52",
		"_rev": "4-061b768b6c0b6efe1bad425067986587",
		"a": 3
	}
}

N.B

  • doc is only present if includeDocs:true is supplied
  • seq is not present for every change

The id is the unique identifier of the document that changed and the changes array contains the document revision tokens that were written to the database.

The batch event delivers an array of change objects.
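Across successive batches the same document id can appear more than once; a consumer that only cares about the latest state can collapse changes per document. A sketch under that assumption (latestPerDoc is illustrative, not part of nano):

```javascript
// Hypothetical helper: keep only the last change seen for each doc id
// across one or more batches of change objects.
function latestPerDoc(batch) {
  const byId = new Map()
  for (const change of batch) {
    byId.set(change.id, change) // later entries overwrite earlier ones
  }
  return [...byId.values()]
}
```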

Partition Functions

Functions related to partitioned databases.

Create a partitioned database by passing { partitioned: true } to db.create:

await nano.db.create('my-partitioned-db', { partitioned: true })

The database can be used as normal:

const db = nano.db.use('my-partitioned-db')

but documents must have a two-part _id made up of <partition key>:<document id>. They are inserted with db.insert as normal:

const doc = { _id: 'canidae:dog', name: 'Dog', latin: 'Canis lupus familiaris' }
await db.insert(doc)
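Since every _id in a partitioned database must follow the <partition key>:<document id> pattern, a small helper can build and sanity-check them. This is an illustrative sketch, not part of nano (the validation rules shown are an assumption: a non-empty partition key without a colon, and a non-empty document id):

```javascript
// Hypothetical helper: builds a two-part _id for a partitioned database.
function partitionedId(partitionKey, docId) {
  // the first colon separates partition key from doc id, so the key
  // itself must not contain one
  if (!partitionKey || partitionKey.includes(':')) {
    throw new Error('invalid partition key')
  }
  if (!docId) {
    throw new Error('invalid document id')
  }
  return `${partitionKey}:${docId}`
}

console.log(partitionedId('canidae', 'dog')) // canidae:dog
```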

Documents can be retrieved by their _id using db.get:

const doc = await db.get('canidae:dog')

Mango indexes can be created to operate on a per-partition basis by supplying partitioned: true on creation:

const i = {
  ddoc: 'partitioned-query',
  index: { fields: ['name'] },
  name: 'name-index',
  partitioned: true,
  type: 'json'
}
 
// instruct CouchDB to create the index
await db.createIndex(i)

Search indexes can be created by writing a design document with opts.partitioned = true:

// the search definition
const func = function(doc) {
  index('name', doc.name)
  index('latin', doc.latin)
}
 
// the design document containing the search definition function
const ddoc = {
  _id: '_design/search-ddoc',
  indexes: {
    'search-index': {
      index: func.toString()
    }
  },
  options: {
    partitioned: true
  }
}
 
await db.insert(ddoc)

MapReduce views can be created by writing a design document with opts.partitioned = true:

const func = function(doc) {
  emit(doc.family, doc.weight)
}
 
// Design Document
const ddoc = {
  _id: '_design/view-ddoc',
  views: {
    'family-weight': {
      map: func.toString(),
      reduce: '_sum'
    }
  },
  options: {
    partitioned: true
  }
}
 
// create design document
await db.insert(ddoc)

db.partitionInfo(partitionKey, [callback])

Fetch the stats of a single partition:

const stats = await alice.partitionInfo('canidae')

db.partitionedList(partitionKey, [params], [callback])

Fetch documents from a database partition:

// fetch document id/revs from a partition
const docs = await alice.partitionedList('canidae')

// add document bodies but limit size of response
const docs = await alice.partitionedList('canidae', { include_docs: true, limit: 5 })

db.partitionedListAsStream(partitionKey, [params])

Fetch documents from a partition as a stream:

// fetch document id/revs from a partition
nano.db.partitionedListAsStream('canidae')
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

// add document bodies but limit size of response
nano.db.partitionedListAsStream('canidae', { include_docs: true, limit: 5 })
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.partitionedFind(partitionKey, query, [params])

Query documents from a partition by supplying a Mango selector:

// find document whose name is 'wolf' in the 'canidae' partition
await db.partitionedFind('canidae', { 'selector' : { 'name': 'Wolf' }})

db.partitionedFindAsStream(partitionKey, query)

Query documents from a partition by supplying a Mango selector as a stream:

// find document whose name is 'wolf' in the 'canidae' partition
db.partitionedFindAsStream('canidae', { 'selector' : { 'name': 'Wolf' }})
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.partitionedSearch(partitionKey, designName, searchName, params, [callback])

Search documents from a partition by supplying a Lucene query:

const params = {
  q: 'name:\'Wolf\''
}
await db.partitionedSearch('canidae', 'search-ddoc', 'search-index', params)
// { total_rows: ... , bookmark: ..., rows: [ ...] }

db.partitionedSearchAsStream(partitionKey, designName, searchName, params)

Search documents from a partition by supplying a Lucene query as a stream:

const params = {
  q: 'name:\'Wolf\''
}
db.partitionedSearchAsStream('canidae', 'search-ddoc', 'search-index', params)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)
// { total_rows: ... , bookmark: ..., rows: [ ...] }

db.partitionedView(partitionKey, designName, viewName, params, [callback])

Fetch documents from a MapReduce view from a partition:

const params = {
  startkey: 'a',
  endkey: 'b',
  limit: 1
}
await db.partitionedView('canidae', 'view-ddoc', 'view-name', params)
// { rows: [ { key: ... , value: [Object] } ] }

db.partitionedViewAsStream(partitionKey, designName, viewName, params)

Fetch documents from a MapReduce view from a partition as a stream:

const params = {
  startkey: 'a',
  endkey: 'b',
  limit: 1
}
db.partitionedViewAsStream('canidae', 'view-ddoc', 'view-name', params)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)
// { rows: [ { key: ... , value: [Object] } ] }

Multipart functions

db.multipart.insert(doc, attachments, params, [callback])

Inserts a doc together with attachments and params. If params is a string, it's assumed to be the intended document _id. If params is an object, it's passed as query string parameters and docName is checked for defining the _id. Refer to the doc for more details. The attachments parameter must be an array of objects with name, data and content_type properties.

const fs = require('fs');

fs.readFile('rabbit.png', async (err, data) => {
  if (!err) {
    await alice.multipart.insert({ foo: 'bar' }, [{name: 'rabbit.png', data: data, content_type: 'image/png'}], 'mydoc')
  }
});
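Building the attachments array separately can keep the insert call readable. A minimal sketch (toAttachment is an illustrative helper, not part of nano; it simply packages the name/data/content_type shape described above):

```javascript
// Hypothetical helper: builds one entry of the attachments array from
// a name, a Buffer and a MIME type.
function toAttachment(name, data, contentType) {
  return { name: name, data: data, content_type: contentType }
}

// e.g. (assuming rabbit.png exists on disk):
// const fs = require('fs').promises
// const data = await fs.readFile('rabbit.png')
// await alice.multipart.insert({ foo: 'bar' },
//   [toAttachment('rabbit.png', data, 'image/png')], 'mydoc')
```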

db.multipart.get(docname, [params], [callback])

Get docname together with its attachments via multipart/related request with optional query string additions params. Refer to the doc for more details. The multipart response body is a Buffer.

const response = await alice.multipart.get('rabbit')

Attachments functions

db.attachment.insert(docname, attname, att, contenttype, [params], [callback])

Inserts an attachment attname to docname, in most cases params.rev is required. Refer to the CouchDB doc for more details.

const fs = require('fs');

fs.readFile('rabbit.png', async (err, data) => {
  if (!err) {
    await alice.attachment.insert('rabbit', 
      'rabbit.png', 
      data, 
      'image/png',
      { rev: '12-150985a725ec88be471921a54ce91452' })
  }
});

db.attachment.insertAsStream(docname, attname, att, contenttype, [params])

As of Nano 9.x, the function db.attachment.insertAsStream is deprecated. Simply pass a readable stream to db.attachment.insert as the third parameter.

db.attachment.get(docname, attname, [params], [callback])

Get docname's attachment attname with optional query string additions params.

const fs = require('fs');

const body = await alice.attachment.get('rabbit', 'rabbit.png')
fs.writeFile('rabbit.png', body, (err) => {
  if (err) console.error(err)
})

db.attachment.getAsStream(docname, attname, [params])

const fs = require('fs');
alice.attachment.getAsStream('rabbit', 'rabbit.png')
  .on('error', (e) => console.error('error', e))
  .pipe(fs.createWriteStream('rabbit.png'));

db.attachment.destroy(docname, attname, [params], [callback])

changed in version 6

Destroy attachment attname of docname's revision rev.

const response = await alice.attachment.destroy('rabbit', 'rabbit.png', {rev: '1-4701d73a08ce5c2f2983bf7c9ffd3320'})

Views and design functions

db.view(designname, viewname, [params], [callback])

Calls a view of the specified designname with optional query string params. If you're looking to filter the view results by key(s), pass an array of keys, e.g. { keys: ['key1', 'key2', 'key_n'] }, as params.

const body = await alice.view('characters', 'happy_ones', { key: 'Tea Party', include_docs: true })
body.rows.forEach((doc) => {
  console.log(doc.value)
})

or

const body = await alice.view('characters', 'soldiers', { keys: ['Hearts', 'Clubs'] })

When params is not supplied, or no keys are specified, it will simply return all documents in the view:

const body = await alice.view('characters', 'happy_ones')
const body = await alice.view('characters', 'happy_ones', { include_docs: true })

db.viewAsStream(designname, viewname, [params])

Same as db.view but returns a stream:

alice.viewAsStream('characters', 'happy_ones', {reduce: false})
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

db.viewWithList(designname, viewname, listname, [params], [callback])

Calls a list function fed by the given view from the specified design document.

const body = await alice.viewWithList('characters', 'happy_ones', 'my_list')

db.viewWithListAsStream(designname, viewname, listname, [params], [callback])

Calls a list function fed by the given view from the specified design document as a stream.

alice.viewWithListAsStream('characters', 'happy_ones', 'my_list')
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

db.show(designname, showname, doc_id, [params], [callback])

Calls a show function from the specified design for the document specified by doc_id with optional query string additions params.

const doc = await alice.show('characters', 'format_doc', '3621898430')

Take a look at the CouchDB wiki for possible query parameters and more information on show functions.

db.atomic(designname, updatename, docname, [body], [callback])

Calls the design's update function with the specified doc as input.

const response = await db.atomic('update', 'inplace', 'foobar', {field: 'foo', value: 'bar'})

Note that the data is sent in the body of the request. An example update handler follows:

"updates": {
  "in-place" : "function(doc, req) {
      var request_body = JSON.parse(req.body)
      var field = request_body.field
      var value = request_body.value
      var message = 'set ' + field + ' to ' + value
      doc[field] = value
      return [doc, message]
  }"
}

db.search(designname, searchname, params, [callback])

Calls a search index of the specified design document with optional query string additions params.

const response = await alice.search('characters', 'happy_ones', { q: 'cat' })

or

const drilldown = [['author', 'Dickens'], ['publisher', 'Penguin']]
const response = await alice.search('inventory', 'books', { q: '*:*', drilldown: drilldown })

Check out the tests for a fully functioning example.

db.searchAsStream(designname, searchname, params)

Calls a search index of the specified design document with optional query string additions params. Returns a stream.

alice.searchAsStream('characters', 'happy_ones', { q: 'cat' }).pipe(process.stdout);

db.find(selector, [callback])

Perform a "Mango" query by supplying a JavaScript object containing a selector:

// find documents where the name = "Brian" and age > 25.
const q = {
  selector: {
    name: { "$eq": "Brian"},
    age : { "$gt": 25 }
  },
  fields: [ "name", "age", "tags", "url" ],
  limit:50
};
const response = await alice.find(q)
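Because selectors are plain JavaScript objects, they can be assembled programmatically before being passed to db.find. A sketch combining simple per-field conditions (buildSelector is an illustrative helper, not part of nano; CouchDB treats multiple top-level fields as an implicit $and):

```javascript
// Hypothetical helper: turn { field: [operator, value] } pairs into a
// Mango selector object.
function buildSelector(conditions) {
  const selector = {}
  for (const [field, [op, value]] of Object.entries(conditions)) {
    selector[field] = { [op]: value }
  }
  return selector
}

const selector = buildSelector({ name: ['$eq', 'Brian'], age: ['$gt', 25] })
// { name: { '$eq': 'Brian' }, age: { '$gt': 25 } }
// e.g. await alice.find({ selector, limit: 50 })
```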

db.findAsStream(selector)

Perform a "Mango" query by supplying a JavaScript object containing a selector, but return a stream:

// find documents where the name = "Brian" and age > 25.
const q = {
  selector: {
    name: { "$eq": "Brian"},
    age : { "$gt": 25 }
  },
  fields: [ "name", "age", "tags", "url" ],
  limit:50
};
alice.findAsStream(q)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

Using cookie authentication

Nano supports making requests using CouchDB's cookie authentication functionality. If you initialise Nano so that it is cookie-aware, you may call nano.auth first to get a session cookie. Nano will behave like a web browser, remembering your session cookie and refreshing it if a new one is received in a future HTTP response.

const nano = require('nano')({
  url: 'http://localhost:5984',
  requestDefaults: {
    jar: true
  }
})
const username = 'user'
const userpass = 'pass'
const db = nano.db.use('mydb')

// authenticate
await nano.auth(username, userpass)

// requests from now on are authenticated
const doc = await db.get('mydoc')
console.log(doc)

The second request works because the nano library has remembered the AuthSession cookie that was invisibly returned by the nano.auth call.

When you have a session, you can see what permissions you have by calling the nano.session function:

const doc = await nano.session()
// { userCtx: { roles: [ '_admin', '_reader', '_writer' ], name: 'rita' },  ok: true }

Advanced features

Getting uuids

If your application needs to generate UUIDs, then CouchDB can provide some for you:

const response = await nano.uuids(3)
// { uuids: [
// '5d1b3ef2bc7eea51f660c091e3dffa23',
// '5d1b3ef2bc7eea51f660c091e3e006ff',
// '5d1b3ef2bc7eea51f660c091e3e007f0',
//]}

The first parameter is the number of uuids to generate. If omitted, it defaults to 1.

Extending nano

nano is minimalistic but you can add your own features with nano.request(opts)

For example, to create a function to retrieve a specific revision of the rabbit document:

function getrabbitrev(rev) {
  return nano.request({ db: 'alice',
                 doc: 'rabbit',
                 method: 'get',
                 qs: { rev: rev }
               });
}

getrabbitrev('4-2e6cdc4c7e26b745c2881a24e0eeece2').then((body) => {
  console.log(body);
});

Pipes

You can pipe the return values of certain nano functions like any other stream. For example, if our rabbit document has an attachment named picture.png, you can pipe it to a writable stream:

const fs = require('fs');
const nano = require('nano')('http://127.0.0.1:5984/');
const alice = nano.use('alice');
alice.attachment.getAsStream('rabbit', 'picture.png')
  .on('error', (e) => console.error('error', e))
  .pipe(fs.createWriteStream('/tmp/rabbit.png'));

then open /tmp/rabbit.png and you will see the rabbit picture.

Functions that return streams instead of a Promise are:

  • nano.db.listAsStream

attachment functions:

  • db.attachment.getAsStream
  • db.attachment.insertAsStream

and document level functions

  • db.listAsStream

Logging

When instantiating Nano, you may supply the function that will perform the logging of requests and responses. In its simplest form, simply pass console.log as your logger:

const nano = Nano({ url: process.env.COUCH_URL, log: console.log })
// all requests and responses will be sent to console.log

You may supply your own logging function to format the data before output:

const url = require('url')
const logger = (data) => {
  // only output logging if there is an environment variable set
  if (process.env.LOG === 'nano') {
    // if this is a request
    if (typeof data.err === 'undefined') {
      const u = new url.URL(data.uri)
      console.log(data.method, u.pathname, data.qs)
    } else {
      // this is a response
      const prefix = data.err ? 'ERR' : 'OK'
      console.log(prefix, data.headers.statusCode, JSON.stringify(data.body).length)
    }
  }
}
const nano = Nano({ url: process.env.COUCH_URL, log: logger })
// all requests and responses will be formatted by my code
// GET /cities/_all_docs { limit: 5 }
// OK 200 468
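Since the log function sees full request URIs, credentials embedded in the URL would end up in your logs. Here is a sketch of a redacting variant; the shape of data (method, uri, qs / err, headers) is an assumption based on the example above, not a guaranteed contract:

```javascript
// Sketch: strip embedded credentials from the request URI before logging.
// The 'data' shape (method, uri, qs / err, headers) is assumed to match the
// logging example above; treat it as an assumption, not a guaranteed contract.
function redactUri(uri) {
  const u = new URL(uri)
  u.username = ''
  u.password = ''
  return u.toString()
}

const logger = (data) => {
  if (typeof data.err === 'undefined') {
    // this is a request: log it with credentials removed from the URI
    console.log(data.method, redactUri(data.uri), data.qs)
  } else {
    // this is a response
    console.log(data.err ? 'ERR' : 'OK', data.headers.statusCode)
  }
}

logger({ method: 'GET', uri: 'http://admin:secret@localhost:5984/cities/_all_docs', qs: { limit: 5 } })
// GET http://localhost:5984/cities/_all_docs { limit: 5 }
```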

Tutorials, examples in the wild & screencasts

Roadmap

Check issues

Tests

To run (and configure) the test suite simply:

cd nano
npm install
npm run test

Meta

http://freenode.org/

Release

To create a new release of nano, run the following commands on the main branch:

  npm version {patch|minor|major}
  git push origin main --tags
  npm publish
Comments
  • version 8.2.0 did not work with typescript (3.7.5) node version 13.9.0


    Errors:

    node_modules/nano/lib/nano.d.ts(48,5): error TS7010: 'followUpdates', which lacks return-type annotation, implicitly has an 'any' return type.
    node_modules/nano/lib/nano.d.ts(49,5): error TS7010: 'followUpdates', which lacks return-type annotation, implicitly has an 'any' return type.
    node_modules/nano/lib/nano.d.ts(54,5): error TS7010: 'follow', which lacks return-type annotation, implicitly has an 'any' return type.
    node_modules/nano/lib/nano.d.ts(69,16): error TS7006: Parameter 'source' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(69,24): error TS7006: Parameter 'target' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(69,32): error TS7006: Parameter 'opts0' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(69,39): error TS7006: Parameter 'callback0' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(70,17): error TS7006: Parameter 'id' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(70,21): error TS7006: Parameter 'rev' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(70,26): error TS7006: Parameter 'opts0' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(70,33): error TS7006: Parameter 'callback0' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(71,15): error TS7006: Parameter 'id' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(71,19): error TS7006: Parameter 'opts0' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(71,26): error TS7006: Parameter 'callback0' implicitly has an 'any' type.
    node_modules/nano/lib/nano.d.ts(108,5): error TS7010: 'follow', which lacks return-type annotation, implicitly has an 'any' return type.
    node_modules/nano/lib/nano.d.ts(110,5): error TS7010: 'followUpdates', which lacks return-type annotation, implicitly has an 'any' return type.
    node_modules/nano/lib/nano.d.ts(111,5): error TS7010: 'followUpdates', which lacks return-type annotation, implicitly has an 'any' return type.
    node_modules/nano/lib/nano.d.ts(140,5): error TS7010: 'follow', which lacks return-type annotation, implicitly has an 'any' return type.
    node_modules/nano/lib/nano.d.ts(141,5): error TS7010: 'follow', which lacks return-type annotation, implicitly has an 'any' return type.

    opened by AZchVVT 10
  • The `queries` parameter is no longer supported at this endpoint


    Expected Behavior

    The view method is no longer working since my upgrade to CouchDB 3.0. Apparently, Nano sends the request to a view as a POST with search parameters in the body, in the form { "queries": ... }.

    Current Behavior

    Here is the error I get: "Error: The queries parameter is no longer supported at this endpoint"

    And the stack trace:

        at Request._callback (***/node_modules/nano/lib/nano.js:168:15)
        at Request.self.callback (***/node_modules/request/request.js:185:22)
        at Request.emit (events.js:198:13)
        at Request.EventEmitter.emit (domain.js:466:23)
        at Request.<anonymous> (***/node_modules/request/request.js:1154:10)
        at Request.emit (events.js:198:13)
        at Request.EventEmitter.emit (domain.js:466:23)
        at IncomingMessage.<anonymous> (***/node_modules/request/request.js:1076:12)
        at Object.onceWrapper (events.js:286:20)
        at IncomingMessage.emit (events.js:203:15)
        at IncomingMessage.EventEmitter.emit (domain.js:466:23)
        at endReadableNT (_stream_readable.js:1145:12)
        at process._tickCallback (internal/process/next_tick.js:63:19)

    Possible Solution

    I'll roll back to CouchDB 2.3 in the meantime.

    Steps to Reproduce (for bugs)

    Here is the content of the request:

        { method: 'POST',
          headers: { 'content-type': 'application/json', accept: 'application/json' },
          uri: 'http://localhost:5984//_design/v/_view/',
          body: '{"queries":[{"startkey":["25ea7f29-3246-41d9-a45b-280837c087fe"],"endkey":["25ea7f29-3246-41d9-a45b-280837c087fe",{}],"include_docs":true},{"startkey":["12bcdbc6-57dd-48bb-a45f-32c9731ebcf9"],"endkey":["12bcdbc6-57dd-48bb-a45f-32c9731ebcf9",{}],"include_docs":true}]}',
          qsStringifyOptions: { arrayFormat: 'repeat' } }

    and the error details:

        headers: { uri: 'http://localhost:5984//_design/v*/_view/*',
          statusCode: 400,
          'cache-control': 'must-revalidate',
          connection: 'close',
          'content-type': 'application/json',
          date: 'Tue, 10 Mar 2020 10:15:04 GMT',
          'x-couch-request-id': '27e5536e4a',
          'x-couchdb-body-time': '0' },
        errid: 'non_200',
        description: 'couch returned 400'

    Your Environment

    • Version used: Nano 8.2.2 / CouchDB 3.0.0
    • Browser Name and version: Firefox 73.0.1
    • Operating System and version (desktop or mobile): Ubuntu 18.04
    • Link to your project:

    Thanks for all ;)

    opened by creativityjuice 9
  • Remote Memory Exposure in nano@6.3.0 > follow@0.12.1 > request@2.55.0


    Hi,

    I am checking my projects with nsp and get the following findings.


    Could you please update dependencies or remove the follow module if possible? It looks like the project is no longer active; the same issue was reported on Jun 10, 2016 at the follow project https://github.com/iriscouch/follow/issues/84. The last commit on master was on May 24, 2015.

    https://nodesecurity.io/advisories/309 https://nodesecurity.io/advisories/77

    Thanks in advance Konrad

    opened by kohms 9
  • Adding replication using the "_replicator" database

    Starting with 1.2.0, CouchDB added a new system database called "_replicator" to handle replication jobs.

    Replications are now created as entries on that database and the server will schedule and perform the replication accordingly. Entries in the "_replicator" db will be updated.

    This means that replication now is a completely asynchronous job that is not guaranteed to run right after the replication was started.

    This commit adds a new object with three methods to handle this new type of replication:

    • replication.enable: To enable the replication of a database.
    • replication.query: To query the status of a replication job.
    • replication.disable: To disable the replication of a database.

    More information on this type of replication can be found:

    • https://wiki.apache.org/couchdb/Replication#from_1.2.0_onward
    • http://guide.couchdb.org/draft/replication.html
    • https://gist.github.com/fdmanana/832610

    [Addendum after merging with the new repo] Fixing tests for uuids, since they were not passing.

    opened by carlosduclos 8
  • Error: Invalid operator: $regex


    Facing issue while doing something like

    await test.find({
      selector: { name: { "$regex": /cat/ } }
    })

    Expected Behavior

    it should return names containing cat

    Current Behavior

    Current error stack

     Error: Invalid operator: $regex
        at Request._callback (/node_modules/nano/lib/nano.js:154:15)
        at Request.self.callback (node_modules/request/request.js:185:22)
        at Request.emit (events.js:198:13)
        at Request.<anonymous> (/node_modules/request/request.js:1161:10)
        at Request.emit (events.js:198:13)
        at IncomingMessage.<anonymous> (/node_modules/request/request.js:1083:12)
        at Object.onceWrapper (events.js:286:20)
        at IncomingMessage.emit (events.js:203:15)
        at endReadableNT (_stream_readable.js:1129:12)
    

    Possible Solution

    Steps to Reproduce (for bugs)

    1. Just use $regex; it will throw an error

    Context

    I am trying to do a wildcard search based on name in CouchDB
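
    One point worth ruling out (this is an editorial sketch of a possible cause, not a confirmed diagnosis from the maintainers): a JavaScript RegExp object does not survive JSON serialization, whereas CouchDB's $regex operator expects the pattern as a string:

```javascript
// A RegExp serializes to an empty object, so the server never sees the pattern
const asRegExp = JSON.stringify({ selector: { name: { $regex: /cat/ } } })
console.log(asRegExp) // {"selector":{"name":{"$regex":{}}}}

// Passing the pattern as a string keeps it intact
const asString = JSON.stringify({ selector: { name: { $regex: 'cat' } } })
console.log(asString) // {"selector":{"name":{"$regex":"cat"}}}
```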

    Your Environment

    • Version used: 8.0.1
    • Browser Name and version: Chrome
    • Operating System and version (desktop or mobile): OSX
    opened by thatshailesh 7
  • Support _selector based filtering for changes feed


    It appears that the use of selectors (same as is used in Mango queries) to filter a changes feed is not supported. This would be a useful addition for cases where results of a Mango query are kept up to date by a client.

    This is described in Section 10.3.9.2.2 of the CouchDB current documentation (http://docs.couchdb.org/en/stable/api/database/changes.html).

    Your Environment

    • Version used: 7.0.0
    • Browser Name and version: Node.js
    • Operating System and version (desktop or mobile): Windows x64
    • Link to your project:
    opened by danielpaull 7
  • bring follow library under Nano's wing


    Overview

    Currently Nano uses the follow library as a dependency to handle changes-feed subscription. Unfortunately, follow is not maintained. The nice people at Cloudant have taken the code, fixed it up and published it here. One option would be to depend on the cloudant-follow fork, but I am proposing something different:

    1. Bring the follow codebase into the Nano project (it is published under an Apache-2.0 license)
    2. Remove follow as a dependency
    3. In the future refactor the follow code to be more in keeping with this project

    Testing recommendations

    This should be a drop-in replacement of the previous version. The only difference is that the follow code lives in this repo and not as a dependency.

    I've brought in the follow tests too. Unfortunately they rely on CouchDB 1.6, so I've altered the testing sequence to run our core tests with CouchDB 2 then switch over to CouchDB 1.6 for the follow tests.

    See npm run coretests & npm run followtests in package.json.

    GitHub issue number

    Related Pull Requests

    See issue #28 & pull #51

    Checklist

    • [x] Code is written and works correctly;
    • [x] Changes are covered by tests;
    • [x] Documentation reflects the changes;
    opened by glynnbird 7
  • Error ECONNREFUSED when connecting to database


    Expected Behavior

    couchdb-nano should connect to the database.

    Current Behavior

    My couchdb server is running at localhost:5984. curl requests from terminal work fine.

    In a next.js api endpoint:

    const nano = require("nano")(`http://${user}:${pass}@localhost:5984`);
    const db = nano.db.use("patients")
    db.info().then(console.log)
    

    when I run this I get:

    error - unhandledRejection: Error: error happened in your connection
        at responseHandler (/Users/omoscow/Desktop/couch-test/node_modules/nano/lib/nano.js:137:16)
        at /Users/omoscow/Desktop/couch-test/node_modules/nano/lib/nano.js:427:13
        at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    

    Possible Solution

    Steps to Reproduce (for bugs)

    1. Create a fresh next.js application with npx create-next-app@latest
    2. npm install nano
    3. In the hello.ts file type:
    const nano = require("nano")(`http://admin:couchhie8394@localhost:5984`);
    const db = nano.db.use("patients")
    db.info().then(console.log)
    
    4. npm run dev

    Context

    Your Environment

    • Version used: 10.1.0
    • Browser Name and version: Chrome 106
    • Operating System and version (desktop or mobile): Mac OS Monterey
    • Link to your project:
    opened by OliverMoscow 6
  • Add detailed error message (fixes issue #58)


    Overview

    When encountering issues with connecting to CouchDB, the only error you get is "error happened in your connection". This is extremely unhelpful in debugging the issues. So instead, I've added the actual reason things are failing, so that it's easier for people to figure out what's wrong.

    Testing recommendations

    There's a unit test that already covers it.

    GitHub issue number

    Fixes #58

    Checklist

    • [x] Code is written and works correctly;
    • [x] Changes are covered by tests;
    • [x] Documentation reflects the changes;
    opened by gboer 6
  • scrubLog functionality can fail on JSON circular reference


    Consider the following code in relax:

        req.httpAgent = cfg.requestDefaults.agent || defaultHttpAgent
        req.httpsAgent = cfg.requestDefaults.agent || defaultHttpsAgent

        // scrub and log
        const scrubbedReq = JSON.parse(JSON.stringify(req))

    I'm having issues where the JSON.stringify can fail due to circular references in Socket. Traditionally I've avoided logging Socket objects in my code because I've run into this problem before.

    Expected Behavior

    Code functions as designed.

    Current Behavior

    TypeError: Converting circular structure to JSON --> starting at object with constructor 'Socket' | property '_httpMessage' -> object with constructor 'ClientRequest' --- property 'socket' closes the circle {"timestamp":"2021-01-14T14:13:20.913Z"}

    Possible Solution

    I believe the scrubLog should happen BEFORE setting the req.httpAgent and req.httpsAgent fields. If you need to get that socket information, debug it.
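
    Another option (an editorial sketch, not the library's actual code; safeStringify is a hypothetical helper) is a cycle-tolerant stringify, which survives circular references wherever they appear:

```javascript
// Sketch: a cycle-safe JSON.stringify using a replacer, which avoids the
// "Converting circular structure to JSON" TypeError. Hypothetical helper.
function safeStringify(obj) {
  const seen = new WeakSet()
  return JSON.stringify(obj, (key, value) => {
    if (typeof value === 'object' && value !== null) {
      if (seen.has(value)) return '[Circular]' // break the cycle
      seen.add(value)
    }
    return value
  })
}

const req = { method: 'GET' }
req.socket = req // circular reference, like Socket._httpMessage
console.log(safeStringify(req)) // {"method":"GET","socket":"[Circular]"}
```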

    Steps to Reproduce (for bugs)

    I'm still trying to figure out why this happens sometimes and not others. For example, I was getting the error on changesReader.start at one point, but figured out my code was improperly calling the function multiple times. My current code seems fine though:

        this.myDb.changesReader.start({ includeDocs: true })
          .on("batch", (batch) => { ... })
          .on("error", (error) => { console.error(error) })

    Context

    I'm listening for any event in the database.

    Your Environment

    • Version used: 9.0.2
    • Browser Name and version: NodeJS 14
    • Operating System and version (desktop or mobile): Docker running CentOS 7
    • Link to your project: private internal
    opened by thescrublet 6
  • Axios


    Overview

    The request module is deprecated. This pull request aims to replace request with axios. I did this work inside the "relax" function so that all callers of relax stay the same. Only db.copy had to be removed, because the COPY method is not supported by axios.

    As this project's dependency "cloudant-follow" is also a user of request (and is becoming deprecated itself), the 'follow*' functions of this library are also deprecated and replaced with new 'changesReader' functions which I wrote myself and have plumbed into the library.

    I also took the opportunity of removing extraneous dependencies (Nano now only has 2 dependencies), fixed up the logging module and improved the README.

    Note it looks like a lot of commits, but the first bunch are from a previous branch that were 'squashed' into master. All of this change will get squashed when merged.

    Testing recommendations

    npm run test

    GitHub issue number

    https://github.com/apache/couchdb-nano/issues/196

    Checklist

    • [x] Code is written and works correctly;
    • [x] Changes are covered by tests;
    • [x] Documentation reflects the changes;
    opened by glynnbird 6
  • `changesReader.get` doesn't call the 'end' callback


    Expected Behavior

    On calling changesReader.get, the request is expected to pull changes up to the last change, after which the on('end') handler should be called.

    Current Behavior

    Currently, on calling changesReader.get({ includeDocs: true, since: changesSince, batchSize: 500 }), the changes are pulled but on('end', () => {...}) never gets called, so I can't tell when all the changes have been pulled.

    Your Environment

    • Version used: 10.1.0
    • Browser Name and version: N/A
    • Running in node app: 16.17.0
    • Operating System and version (desktop or mobile):
    • Link to your project:
    opened by Delink-D 0
  • Refactor to use fetch instead of axios


    Overview

    Replaces the axios HTTP library with fetch, powered by undici. The upshot is that we reduce the number of dependencies to 1, and possibly 0 in the future.

    Note this is for merging after April 2023 when Node 14 becomes end-of-life but discussion as to whether Nano should go in this direction is welcome in this PR.

    Comments and advice welcome.

    fetch

    Some history: originally Nano was built on top of the request library, which was later deprecated. At that point I refactored it to use axios instead. This PR eliminates axios and other axios-related dependencies and instead uses the new kid on the block: the fetch API.

    The fetch feature has found widespread adoption in web browsers as a means of handling outbound HTTP requests. It has found its way into Node.js as a global function and is marked as an experimental feature in Node 18/19 and will likely be mainstream in Node 20.

    Note: there's a small chance that fetch is removed, or implemented differently, before it loses its experimental status.

    Node.js's fetch capability is powered by the undici package which in turn uses Node's low-level network libraries instead of being based on the higher-level http/https built-in modules. It purports to be significantly faster (according to its own benchmarks) than traffic routed through http/https modules, as is the case with other HTTP libraries like axios & request.

    As we're using a feature marked 'experimental' by Node.js, executing a Node.js script using this branch's Nano would produce the following warning:

    (node:43358) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time

    Automated testing

    Replacing axios with fetch means also ditching the nock library for mocking HTTP requests and responses, because nock works by intercepting requests originating from the http/https layer, which is bypassed by undici. Fortunately, undici provides its own Mocking tooling.

    The outcome

    This branch has a single runtime dependency: the undici npm module. It is required because we use its Agent class to allow nano users to override timeouts and connection pooling parameters, and during testing to mock requests with its MockAgent. These classes are not exposed by Node.js, so we need the undici dependency - for now.

    Current dependencies:

      "dependencies": {
        "http-cookie-agent": "^4.0.2",
        "@types/tough-cookie": "^4.0.2",
        "axios": "^1.1.3",
        "qs": "^6.11.0",
        "tough-cookie": "^4.1.2",
        "node-abort-controller": "^3.0.1"
      },
      "devDependencies": {
        "@types/node": "^18.11.9",
        "jest": "^29.2.2",
        "nock": "^13.2.9",
        "standard": "^17.0.0",
        "typescript": "^4.8.4"
      }
    

    Post-PR dependencies:

      "dependencies": {
        "undici": "^5.14.0"
      },
      "devDependencies": {
        "@types/node": "^18.11.15",
        "typescript": "^4.9.4"
      }
    

    Backwards compatibility

    None of Nano's API has changed except when a user is supplying non-default connection handling parameters. Gone is requestDefaults, which dates back to the "request" days; instead an optional agentOptions can be provided, which is documented in the README and in TypeScript.

    const agentOptions = {
      bodyTimeout: 30000,
      headersTimeout: 30000,
      keepAliveMaxTimeout: 600000,
      pipelining: 6
    }
    const nano = Nano({ url: 'http://127.0.0.1:5984', agentOptions })
    

    Node versioning

    It's not all plain sailing. The undici library only works on Node 16.8 onwards. As it happens, Node 14 is end-of-life in April 2023, so that might be a good time to merge this PR and release a version 11 of Nano: older versions of Nano will still work for folks with older Nodes, but Nano 11+ would be for Node 16+, i.e. all Long-Term Supported Nodes.
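
    As a small sketch of that constraint (atLeast is a hypothetical helper, not part of Nano), code could guard on the running Node version against undici's 16.8 floor:

```javascript
// Sketch: compare a Node version string against undici's minimum (16.8).
// 'atLeast' is a hypothetical helper for illustration only.
function atLeast(version, major, minor) {
  const [maj, min] = version.split('.').map(Number)
  return maj > major || (maj === major && min >= minor)
}

console.log(atLeast('16.8.0', 16, 8))  // true
console.log(atLeast('14.20.1', 16, 8)) // false
// e.g. check atLeast(process.versions.node, 16, 8) before requiring Nano 11+
```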

    In summary

    If Node.js ends up sticking with undici's implementation of fetch, then this PR would allow this project to have fewer dependencies and potentially faster performance. I'm reluctant to merge this while fetch is still experimental and Node.js 14 is still supported.

    Testing recommendations

    The test suite has been rewritten to use the built-in Node.js test runner (one fewer dependency!) and uses the undici.MockAgent to simulate responses for each of Nano's API calls, just as Nock did previously.

    Run with:

    npm run test
    

    GitHub issue number

    Fixes https://github.com/apache/couchdb-nano/issues/307

    Related Pull Requests

    n/a

    Checklist

    • [x] Code is written and works correctly;
    • [x] Changes are covered by tests;
    • [x] Documentation reflects the changes;
    opened by glynnbird 1
  • fetch instead of axios


    If this library used fetch instead of axios (with cross-fetch as a polyfill for environments that don't have fetch), it would be usable in other JS environments (Deno, Bun, Cloudflare Workers, maybe even the browser!)

    Possible Solution

    Remove dependencies on axios, and use cross-fetch and standard fetch API.

    I have seen browser-nano but haven't tested it; it may already support this. This might be a nice way to merge these projects: it has a very thin wrapper to monkey-patch nano.

    Context

    For me specifically, it would enable me to use nano/couch more easily, directly with cloudflare workers (which have fetch built-in.)

    opened by konsumer 1
  • How to set unit tests using in-memory database for couchdb-nano


    Looking to implement unit test coverage for nano functionality in my application (a Node.js server with CouchDB). I've been transitioning from PouchDB to nano. With PouchDB it's possible to use 'pouchdb-adapter-memory' (see https://pouchdb.com/adapters.html). I was wondering if it's possible to do something similar for nano, so I can run unit tests against an in-memory database. Any tips/suggestions would be greatly appreciated!

    opened by alexmathew98 0
  • db.head does not return content-length header


    Trying to use db.head to get the document size quickly, but it returns headers without content-length. I switched to axios just for that call to access the document size.

    Here is a simple piece of code :

    const db = nano.db.use('accounts');
    const nano_account_headers = await db.head('00000000-0000-0000-0000-ffffffffffff');
    console.log('nano_account_headers -', nano_account_headers);
    const axios_account = await axios.head(`${db_url}/accounts/00000000-0000-0000-0000-ffffffffffff`);
    console.log('axios_account.headers -', axios_account.headers);
    

    Expected Behavior

    Here is what I get with axios, and therefore what I expect from nano:

    axios_account.headers - {
      server: 'nginx',
      date: 'Mon, 20 Jun 2022 13:24:14 GMT',
      'content-type': 'application/json',
      'content-length': '1156',
      connection: 'close',
      'cache-control': 'must-revalidate',
      etag: '"xx-xxxxxxxxxxxxxxxxxxxxxxxxx"',
      'x-couch-request-id': 'xxxxxxxxxx',
      'x-couchdb-body-time': '0',
      'strict-transport-security': 'max-age=31536000; includeSubdomains; preload'
    }
    

    Current Behavior

    Here is what I get with nano:

    nano_account_headers - {
      uri: 'https://xxxxxxxx.com/accounts/00000000-0000-0000-0000-ffffffffffff',
      statusCode: 200,
      date: 'Mon, 20 Jun 2022 13:24:14 GMT',
      'content-type': 'application/json',
      connection: 'close',
      'cache-control': 'must-revalidate',
      etag: '"xx-xxxxxxxxxxxxxxxxxxxx"',
      'x-couch-request-id': 'xxxxxxxxxx',
      'x-couchdb-body-time': '0',
      'strict-transport-security': 'max-age=31536000; includeSubdomains; preload'
    }
    

    Your Environment

    • Version used: 7.1.1
    • Browser Name and version: Node 16.5.1
    • Operating System and version (desktop or mobile): Ubuntu 20.04
    opened by creativityjuice 1
  • Supports multiple instances using different credentials


    Overview

    Although a program can currently create multiple instances of nano, they all share the same cookie jar. Authorizing with a set of credentials in one instance changes the credentials (the AuthSession cookie) used by all instances.

    This change gives every instance of nano its own cookie jar, so each instance can use a different set of credentials. One application server can thus service the requests of multiple CouchDB users.

    The cookie jar is visible to the code instantiating nano, so it can re-create cookies for the following architecture: An application server can accept a CouchDB username and password from a client web app, pass them to a CouchDB cluster, then pass the value of the AuthSession cookie back to the client web app. In future requests, the client can then pass the value of the AuthSession cookie back to the application server, which re-creates the AuthSession cookie. As long as a client web app retains the value of the AuthSession cookie, the application server can thus handle requests without requiring the client web app to pass the username and password, even if the application server was restarted after the client web app authenticated, or didn't handle the authentication.

    This supports application servers implementing the adapter pattern.

    My current work is implementing an adapter for Armadietto, which implements the remoteStorage protocol.

    Testing recommendations

    The test 'should be able to authenticate - POST /_session - nano.auth' has been extended to cover using two instances of nano and verifying that they retain separate credentials. (Running this extended test without the code changes demonstrates that the current implementation cannot maintain separate sets of credentials.)

    It also works for an actual adapter: https://github.com/DougReeder/armadietto/tree/couchdb-auth

    opened by DougReeder 2
Releases (v10.1.0)
  • v10.1.0(Nov 3, 2022)

    • update dependencies, including using the latest, post v1, Axios
    • aborting in-flight HTTP requests initiated by ChangesReader when stop is called. cc @insidewhy
    • remove axios-cookiejar-support dependency which causes some users problems
    • ensure callbacks are called with Error objects cc @revington
    • various small typos and Typescript fixes from @lukashass @insidewhy @DougReeder
  • v10.0.0(Mar 25, 2022)

    • Properly escape partition ids - Thanks @swansontec. This is a potentially breaking change for anyone encoding the partition key before passing to Nano. See https://github.com/apache/couchdb-nano/issues/283
    • Fix up broken badge links - Thanks @brnnnfx
    • Typescript fixes - Thanks @sziladriana
    • More Typescript fixes - Thanks @adipascu
    • Yet more Typescript fixes - Thanks @vividn
    • Dependency bump to get latest axios and other dependencies
  • v9.0.5(Sep 14, 2021)

  • v9.0.4(Sep 2, 2021)

  • 9.0.3(Jan 15, 2021)

  • 9.0.2(Jan 13, 2021)

    • vastly improved TypeScript definition comments which show up as hints in VSCode
    • some TypeScript definition bug fixes
    • switched README to use async/await examples
    • scrub credentials from logged messages
  • v9.0.1(Nov 9, 2020)

  • 9.0.0(Nov 4, 2020)

  • v8.2.3(Nov 2, 2020)

    Maintenance release on v8.x.x including various Typescript fixes.

    Released in preparation for v9.0.0 release based on axios instead of request.

  • v8.2.2(Mar 5, 2020)

  • v8.2.1(Mar 2, 2020)

  • 8.2.0(Feb 27, 2020)

    • added new functions to deal with Partitioned Databases for CouchDB 3.
    • rewritten test suite using jest, with the focus less on testing CouchDB's functionality and more on ensuring that Nano dispatches the correct HTTP call. Most tests are "mocked", although some are performed against CouchDB 3 in a container.
    • dependency bump
    • fixed many TypeScript definitions which folks kindly raised as GitHub issues
  • 8.1.0(May 2, 2019)

    • TypeScript definitions
    • Typos in code and docs
    • allow db.create to take an opts object, to specify q/r
    • db.search does an HTTP POST instead of GET, which simplifies processing of parameters as they don't need to be URL encoded, just passed as JSON
    • dependency bump
  • v8.0.1(Mar 18, 2019)

  • v7.0.0(Jul 24, 2018)

  • 6.4.1(Sep 20, 2017)

Owner
The Apache Software Foundation

Please use version 1.x as prior versions has a security flaw if you use user generated data to concat your SQL strings instead of providing them as a

Andrey Gershun 6.1k Jan 9, 2023
TypeScript ORM for Node.js based on Data Mapper, Unit of Work and Identity Map patterns. Supports MongoDB, MySQL, MariaDB, PostgreSQL and SQLite databases.

TypeScript ORM for Node.js based on Data Mapper, Unit of Work and Identity Map patterns. Supports MongoDB, MySQL, MariaDB, PostgreSQL and SQLite datab

MikroORM 5.4k Dec 31, 2022
A pure node.js JavaScript Client implementing the MySQL protocol.

mysql Table of Contents Install Introduction Contributors Sponsors Community Establishing connections Connection options SSL options Connection flags

null 17.6k Jan 1, 2023