Postgres.js - The fastest full-featured PostgreSQL client for Node.js and Deno

Overview

The fastest full-featured PostgreSQL client for Node.js


Getting started




Installation

$ npm install postgres

Usage

Create your sql database instance

// db.js
import postgres from 'postgres'

const sql = postgres({ /* options */ }) // will use psql environment variables

export default sql

Simply import for use elsewhere

// users.js
import sql from './db.js'

async function getUsersOver(age) {
  const users = await sql`
    select
      name,
      age
    from users
    where age > ${ age }
  `
  // users = Result [{ name: "Walter", age: 80 }, { name: 'Murray', age: 68 }, ...]
  return users
}


async function insertUser({ name, age }) {
  const users = await sql`
    insert into users
      (name, age)
    values
      (${ name }, ${ age })
    returning name, age
  `
  // users = Result [{ name: "Murray", age: 68 }]
  return users
}

Table of Contents

Connection

postgres([url], [options])

You can use either a postgres:// url connection string or the options to define your database connection properties. Options in the object will override any present in the url. Options will fall back to the same environment variables as psql.

const sql = postgres('postgres://username:password@host:port/database', {
  host                 : '',            // Postgres ip address[es] or domain name[s]
  port                 : 5432,          // Postgres server port[s]
  database             : '',            // Name of database to connect to
  username             : '',            // Username of database user
  password             : '',            // Password of database user
  ...and more
})

More options can be found in the Connection details section.

Queries

await sql`...` -> Result[]

Postgres.js utilizes Tagged template functions to process query parameters before interpolation. Using tagged template literals benefits developers by:

  1. Enforcing safe query generation
  2. Giving the sql`` function powerful utility and query building features.

Any generic value will be serialized according to an inferred type, and replaced by a PostgreSQL protocol placeholder $1, $2, .... The parameters are then sent separately to the database which handles escaping & casting.

All queries will return a Result array, with objects mapping column names to each row.

const xs = await sql`
  insert into users (
    name, age
  ) values (
    'Murray', 68
  )

  returning *
`

// xs = [{ user_id: 1, name: 'Murray', age: 68 }]

Please note that queries are first executed when awaited – or manually by using .execute().
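
For example, a minimal sketch of this behavior:

const query = sql`select 1 as x` // nothing has been sent yet

query.execute()                  // sends the query now
const rows = await query         // resolves with the result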

Query parameters

Parameters are automatically extracted and handled by the database so that SQL injection isn't possible. No special handling is necessary, simply use tagged template literals as usual.

const name = 'Mur'
    , age = 60

const users = await sql`
  select
    name,
    age
  from users
  where
    name like ${ name + '%' }
    and age > ${ age }
`
// users = [{ name: 'Murray', age: 68 }]

Be careful with quotation marks here. Because Postgres infers column types, you do not need to wrap your interpolated parameters in quotes like '${name}'. This will cause an error because the tagged template replaces ${name} with $1 in the query string, leaving Postgres to do the interpolation. If you wrap that in a string, Postgres will see '$1' and interpret it as a string as opposed to a parameter.
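
A minimal illustration of the difference (assuming a users table):

const name = 'Murray'

await sql`select * from users where name = ${ name }`   // ok: name is sent as the parameter $1
await sql`select * from users where name = '${ name }'` // wrong: Postgres sees the literal string '$1'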

Dynamic column selection

const columns = ['name', 'age']

sql`
  select
    ${ sql(columns) }
  from users
`

// Which results in:
select "name", "age" from users

Dynamic inserts

const user = {
  name: 'Murray',
  age: 68
}

sql`
  insert into users ${
    sql(user, 'name', 'age')
  }
`

// Which results in:
insert into users ("name", "age") values ($1, $2)

You can omit column names and simply execute sql(user) to get all the fields from the object as columns. Be careful not to allow users to supply columns that you do not want to be inserted.
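
For example, passing the whole object (assuming it only contains insertable columns):

const user = {
  name: 'Murray',
  age: 68
}

sql`insert into users ${ sql(user) }`

// Which results in:
insert into users ("name", "age") values ($1, $2)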

Multiple inserts in one query

If you need to insert multiple rows at the same time it's also much faster to do it with a single insert. Simply pass an array of objects to sql().

const users = [{
  name: 'Murray',
  age: 68,
  garbage: 'ignore'
},
{
  name: 'Walter',
  age: 80
}]

sql`insert into users ${ sql(users, 'name', 'age') }`

// Is translated to:
insert into users ("name", "age") values ($1, $2), ($3, $4)

// Here you can also omit column names which will use object keys as columns
sql`insert into users ${ sql(users) }`

// Which results in:
insert into users ("name", "age") values ($1, $2), ($3, $4)

Dynamic columns in updates

This is also useful for update queries

const user = {
  id: 1,
  name: 'Murray',
  age: 68
}

sql`
  update users set ${
    sql(user, 'name', 'age')
  }
  where user_id = ${ user.id }
`

// Which results in:
update users set "name" = $1, "age" = $2 where user_id = $3

Dynamic values and where in

Value lists can also be created dynamically, making where in queries simple too.

const users = await sql`
  select
    *
  from users
  where age in ${ sql([68, 75, 23]) }
`

or

const [{ a, b, c }] = await sql`
  select
    *
  from (values ${ sql(['a', 'b', 'c']) }) as x(a, b, c)
`

Building queries

Postgres.js features a simple dynamic query builder by conditionally appending/omitting query fragments. It works by nesting sql`` fragments within other sql`` calls or fragments. This allows you to build dynamic queries safely without risking sql injections through usual string concatenation.

Partial queries

const olderThan = x => sql`and age > ${ x }`

const filterAge = true

sql`
  select
   *
  from users
  where name is not null ${
    filterAge
      ? olderThan(50)
      : sql``
  }
`
// Which results in:
select * from users where name is not null
// Or
select * from users where name is not null and age > 50

Dynamic filters

sql`
  select
    *
  from users ${
    id
      ? sql`where user_id = ${ id }`
      : sql``
  }
`

// Which results in:
select * from users
// Or
select * from users where user_id = $1

SQL functions

Using keywords or calling functions dynamically is also possible by using sql`` fragments.

const date = null

sql`
  update users set updated_at = ${ date || sql`now()` }
`

// Which results in:
update users set updated_at = now()

Table names

Dynamic identifiers like table names and column names are also supported like so:

const table = 'users'
    , column = 'id'

sql`
  select ${ sql(column) } from ${ sql(table) }
`

// Which results in:
select "id" from "users"

Advanced query methods

Cursors

await sql``.cursor([rows = 1], [fn])

Use cursors if you need to throttle the amount of rows being returned from a query. You can use a cursor either as an async iterable or with a callback function. For a callback function new results won't be requested until the promise / async callback function has resolved.

callback function

await sql`
  select
    *
  from generate_series(1,4) as x
`.cursor(async([row]) => {
  // row = { x: 1 }
  await http.request('https://example.com/wat', { row })
})

for await...of

const cursor = sql`select * from generate_series(1,4) as x`.cursor()

for await (const [row] of cursor) {
  // row = { x: 1 }
  await http.request('https://example.com/wat', { row })
}

A single row will be returned by default, but you can also request batches by setting the number of rows desired in each batch as the first argument to .cursor:

await sql`
  select
    *
  from generate_series(1,1000) as x
`.cursor(10, async rows => {
  // rows = [{ x: 1 }, { x: 2 }, ... ]
  await Promise.all(rows.map(row =>
    http.request('https://example.com/wat', { row })
  ))
})

If an error is thrown inside the callback function no more rows will be requested and the outer promise will reject with the thrown error.

You can close the cursor early either by breaking out of the for await...of loop, or by returning the token sql.CLOSE from the callback function.

await sql`
  select * from generate_series(1,1000) as x
`.cursor(row => {
  return Math.random() > 0.9 && sql.CLOSE // or sql.END
})

Instant iteration

await sql``.forEach(fn)

If you want to handle rows returned by a query one by one, you can use .forEach which returns a promise that resolves once there are no more rows.

await sql`
  select created_at, name from events
`.forEach(row => {
  // row = { created_at: '2019-11-22T14:22:00Z', name: 'connected' }
})

// No more rows

Query Descriptions

await sql``.describe([rows = 1], fn) -> Result[]

Rather than executing a given query, .describe will return information utilized in the query process. This information can include the query identifier, column types, etc.

This is useful for debugging and analyzing your Postgres queries. Furthermore, .describe will give you access to the final generated query string that would be executed.
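
A minimal sketch, assuming the resolved value exposes the same fields as the .statement object documented under Result Array below:

const description = await sql`
  select name, age from users where age > ${ 50 }
`.describe()

// description.string  - the generated query string with $1 placeholders (assumed to mirror .statement)
// description.types   - oids of the expected input parameters
// description.columns - column definitions, as in Result.columns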

Rows as Array of Values

sql``.values()

Using .values will return rows as an array of values for each column, instead of objects.

This can be useful to receive identically named columns, or for specific performance/transformation reasons. The column definitions are still included on the result array, plus access to parsers for each column.
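
For example, building on the users rows from earlier:

const rows = await sql`select name, age from users`.values()

// rows = [['Murray', 68], ['Walter', 80]]
// rows.columns still contains the column definitions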

Rows as Raw Array of Buffers

sql``.raw()

Using .raw will return rows as an array with Buffer values for each column, instead of objects.

This can be useful for specific performance/transformation reasons. The column definitions are still included on the result array, plus access to parsers for each column.

Queries in Files

await sql.file(path, [args], [options]) -> Result[]

Using a file for a query is also supported, with optional parameters to use if the file includes $1, $2, etc.

const result = await sql.file('query.sql', ['Murray', 68])

Copy to/from as Streams

Postgres.js supports COPY ... queries, which are exposed as Node.js streams.

await sql`copy ... from stdin`.writable() -> Writable

import { pipeline } from 'node:stream/promises'

// Stream of users with the default tab-delimited cells and newline-delimited rows
const userStream = Readable.from([
  'Murray\t68\n',
  'Walter\t80\n'
])

const query = await sql`copy users (name, age) from stdin`.writable()
await pipeline(userStream, query);

await sql`copy ... to stdout`.readable() -> Readable

Using Stream Pipeline

import { pipeline } from 'node:stream/promises'
import { createWriteStream } from 'node:fs'

const readableStream = await sql`copy users (name, age) to stdout`.readable()
await pipeline(readableStream, createWriteStream('output.tsv'))
// output.tsv content: `Murray\t68\nWalter\t80\n`

Using for await...of

const readableStream = await sql`
  copy (
    select name, age 
    from users 
    where age = 68
  ) to stdout
`.readable()
for await (const chunk of readableStream) {
  // chunk.toString() === `Murray\t68\n`
}

NOTE This is a low-level API which does not provide any type safety. To make this work, you must match your copy query parameters correctly to your Node.js stream read or write code. Ensure Node.js stream backpressure is handled correctly to avoid memory exhaustion.

Canceling Queries in Progress

Postgres.js supports canceling queries in progress. It works by opening a new connection with a protocol-level startup message to cancel the current query running on a specific connection. That means there is no guarantee that the query will be canceled, and due to possible race conditions it might even result in canceling another query. This is fine for long-running queries, but in the case of high load and fast queries it might be better to simply ignore results instead of canceling.

const query = sql`select pg_sleep(100)`.execute()
setTimeout(() => query.cancel(), 100)
const result = await query

Unsafe raw string queries

Advanced unsafe use cases

await sql.unsafe(query, [args], [options]) -> Result[]

If you know what you're doing, you can use unsafe to pass any string you'd like to postgres. Please note that this can lead to SQL injection if you're not careful.

sql.unsafe('select ' + danger + ' from users where id = ' + dragons)

Transactions

BEGIN / COMMIT await sql.begin([options = ''], fn) -> fn()

Use sql.begin to start a new transaction. Postgres.js will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function.

BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.

const [user, account] = await sql.begin(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
  `

  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${ user.user_id }
    )
  `

  return [user, account]
})

It's also possible to pipeline the requests in a transaction if needed by returning an array with queries from the callback function like this:

const result = await sql.begin(sql => [
  sql`update ...`,
  sql`update ...`,
  sql`insert ...`
])

SAVEPOINT await sql.savepoint([name], fn) -> fn()

sql.begin('read write', async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
  `

  const [account] = (await sql.savepoint(sql =>
    sql`
      insert into accounts (
        user_id
      ) values (
        ${ user.user_id }
      )
    `
  ).catch(err => {
    // Account could not be created. ROLLBACK SAVEPOINT is called because we caught the rejection.
  })) || []

  return [user, account]
})
.then(([user, account]) => {
  // great success - COMMIT succeeded
})
.catch(() => {
  // not so good - ROLLBACK was called
})

Do note that you can often achieve the same result using WITH queries (Common Table Expressions) instead of using transactions.
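
For instance, a single statement with a data-modifying CTE can replace the transaction example above; a sketch assuming the same users and accounts tables:

const [account] = await sql`
  with new_user as (
    insert into users (name) values ('Murray')
    returning user_id
  )
  insert into accounts (user_id)
  select user_id from new_user
  returning *
`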

Data Transformation

Postgres.js comes with a number of built-in data transformation functions that can be used to transform the data returned from a query or when inserting data. They are available under the transform option in the postgres() function connection options.

Like - postgres('connectionURL', { transform: {...} })

Parameters

  • to: The function to transform the outgoing query column name, e.g. SELECT ${ sql('aName') } becomes SELECT a_name when using postgres.fromCamel (as in the example below).
  • from: The function to transform the incoming query result column name, see the example below.

Both parameters are optional; if not provided, the default transformation function will be used.

Built in transformation functions are:

  • For camelCase - postgres.toCamel and postgres.fromCamel
  • For PascalCase - postgres.toPascal and postgres.fromPascal
  • For Kebab-Case - postgres.toKebab and postgres.fromKebab

These functions can be passed in as options when calling postgres(). For example -

// this will transform the column names to camel case back and forth
(async function () {
  const sql = postgres('connectionURL', { transform: { column: { to: postgres.fromCamel, from: postgres.toCamel } }});
  await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER, b_test TEXT)`;
  await sql`INSERT INTO camel_case ${ sql([{ aTest: 1, bTest: 1 }]) }`
  const data = await sql`SELECT ${ sql('aTest', 'bTest') } FROM camel_case`;
  console.log(data) // [ { aTest: 1, bTest: '1' } ]
  process.exit(1)
})();

Note that if a column name is registered as snake_case in the database, then to transform it between camelCase and snake_case when querying or inserting, the camelCase column name must be passed through sql('columnName') as in the example above. Postgres.js does not rewrite anything inside the static parts of the tagged templates.

Listen & notify

When you call .listen, a dedicated connection will be created to ensure that you receive notifications instantly. This connection will be used for any further calls to .listen. The connection will automatically reconnect according to a backoff reconnection pattern to not overload the database server.

Listen await sql.listen(channel, onnotify, [onlisten]) -> { state }

.listen takes the channel name, a function to handle each notify, and an optional function to run every time listen is registered and ready (happens on initial connect and reconnects). It returns a promise which resolves once the LISTEN query to Postgres completes, or if there is already a listener active.

await sql.listen('news', payload => {
  const json = JSON.parse(payload)
  console.log(json.this) // logs 'is'
})

The optional onlisten method is great to use for a very simple queue mechanism:

await sql.listen(
  'jobs', 
  (x) => run(JSON.parse(x)),
  ( ) => sql`select unfinished_jobs()`.forEach(run)
)

function run(job) {
  // And here you do the work you please
}

Notify await sql.notify(channel, payload) -> Result[]

Notify can be done as usual in SQL, or by using the sql.notify method.

sql.notify('news', JSON.stringify({ no: 'this', is: 'news' }))

Realtime subscribe

Postgres.js implements the logical replication protocol of PostgreSQL to support subscription to real-time updates of insert, update and delete operations.

NOTE To make this work you must create the proper publications in your database, enable logical replication by setting wal_level = logical in postgresql.conf, and connect using a role with either replication or superuser privileges.

Quick start

Create a publication (eg. in migration)

CREATE PUBLICATION alltables FOR ALL TABLES

Subscribe to updates

const sql = postgres({ publications: 'alltables' })

const { unsubscribe } = await sql.subscribe(
  'insert:events', 
  (row, { command, relation, key, old }) => {
    // Callback function for each row change
    // tell about new event row over eg. websockets or do something else
  },
  () => {
    // Callback on initial connect and potential reconnects
  }
)

Subscribe pattern

You can subscribe to specific operations, tables, or even rows with primary keys.

operation : schema . table = primary_key

operation is one of * | insert | update | delete and defaults to *

schema defaults to public

table is a specific table name and defaults to *

primary_key can be used to only subscribe to specific rows

Examples

sql.subscribe('*',                () => /* everything */ )
sql.subscribe('insert',           () => /* all inserts */ )
sql.subscribe('*:users',          () => /* all operations on the public.users table */ )
sql.subscribe('delete:users',     () => /* all deletes on the public.users table */ )
sql.subscribe('update:users=1',   () => /* all updates on the users row with a primary key = 1 */ )

Numbers, bigint, numeric

Number in JavaScript can only safely represent integers up to 2^53 - 1, which means that PostgreSQL types like bigint and numeric won't fit into Number.

Since Node.js v10.4 we can use BigInt to match the PostgreSQL type bigint which is returned for eg. count(*). Unfortunately, it doesn't work with JSON.stringify out of the box, so Postgres.js will return it as a string.

If you want to use BigInt you can add this custom type:

const sql = postgres({
  types: {
    bigint: postgres.BigInt
  }
})

There is currently no guaranteed way to handle numeric / decimal types in native Javascript. These [and similar] types will be returned as a string. The best way in this case is to use custom types.
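
As an example, here is a hypothetical custom type that parses numeric (oid 1700) into a JavaScript Number; note that this silently loses precision for values outside the safely representable range:

const sql = postgres({
  types: {
    numeric: {
      to        : 1700,          // oid of numeric
      from      : [1700],        // parse incoming numeric values
      serialize : x => '' + x,   // send as text
      parse     : x => Number(x) // lossy for very large / very precise values
    }
  }
})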

Result Array

The Result Array returned from queries is a custom array allowing for easy destructuring or passing on directly to JSON.stringify or general Array usage. It includes the following properties.

.count

The count property is the number of affected rows returned by the database. This is useful for insert, update and delete operations to know the number of affected rows, since .length will be 0 in these cases if not using RETURNING ....
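
For example:

const result = await sql`update users set age = age + 1 where age > 60`

// result.length = 0  (no rows returned without RETURNING)
// result.count  = number of updated rows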

.command

The command run by the query - eg. one of SELECT, UPDATE, INSERT, DELETE

.columns

The columns returned by the query, useful for determining types or for mapping to the result values when using .values().

{
  name  : String,    // Column name,
  type  : oid,       // PostgreSQL oid column type
  parser: Function   // The function used by Postgres.js for parsing
}

.statement

The statement contains information about the statement implicitly created by Postgres.js.

{
  name    : String,  // The auto generated statement name
  string  : String,  // The actual query string executed
  types   : [oid],   // An array of oid expected as input parameters
  columns : [Column] // Array of columns - same as Result.columns
}

.state

This is the state { pid, secret } of the connection that executed the query.

Connection details

All Postgres options

const sql = postgres('postgres://username:password@host:port/database', {
  host                 : '',            // Postgres ip address[es] or domain name[s]
  port                 : 5432,          // Postgres server port[s]
  path                 : '',            // unix socket path (usually '/tmp')
  database             : '',            // Name of database to connect to
  username             : '',            // Username of database user
  password             : '',            // Password of database user
  ssl                  : false,         // true, prefer, require, tls.connect options
  max                  : 10,            // Max number of connections
  max_lifetime         : null,          // Max lifetime in seconds (more info below)
  idle_timeout         : 0,             // Idle connection timeout in seconds
  connect_timeout      : 30,            // Connect timeout in seconds
  prepare              : true,          // Automatic creation of prepared statements
  types                : [],            // Array of custom types, see more below
  onnotice             : fn,            // Defaults to console.log
  onparameter          : fn,            // (key, value) when server param change
  debug                : fn,            // Is called with (connection, query, params, types)
  socket               : fn,            // fn returning custom socket to use
  transform            : {
    undefined          : undefined,     // Transforms undefined values (eg. to null)
    column             : fn,            // Transforms incoming column names
    value              : fn,            // Transforms incoming row values
    row                : fn             // Transforms entire rows
  },
  connection           : {
    application_name   : 'postgres.js', // Default application_name
    ...                                 // Other connection parameters
  },
  target_session_attrs : null,          // Use 'read-write' with multiple hosts to
                                        // ensure only connecting to primary
  fetch_types          : true,          // Automatically fetches types on connect
                                        // on initial connection.
})

Note that max_lifetime = 60 * (30 + Math.random() * 30) by default. This resolves to an interval between 45 and 90 minutes to optimize for the benefits of prepared statements and working nicely with Linux's OOM killer.

SSL

Although vulnerable to MITM attacks, a common configuration for the ssl option for some cloud providers is to set rejectUnauthorized to false (if NODE_ENV is production):

const sql =
  process.env.NODE_ENV === 'production'
    ? // "Unless you're using a Private or Shield Heroku Postgres database, Heroku Postgres does not currently support verifiable certificates"
      // https://help.heroku.com/3DELT3RK/why-can-t-my-third-party-utility-connect-to-heroku-postgres-with-ssl
      postgres({ ssl: { rejectUnauthorized: false } })
    : postgres()

For more information regarding ssl with postgres, check out the Node.js documentation for tls.

Multi-host connections - High Availability (HA)

Multiple connection strings can be passed to postgres() in the form of postgres('postgres://localhost:5432,localhost:5433', ...). This works the same as the native psql command. Read more at multiple host URIs.

Connections will be attempted in order of the specified hosts/ports. On a successful connection, all retries will be reset. This ensures that hosts can come up and down seamlessly.

If you specify target_session_attrs: 'primary' or PGTARGETSESSIONATTRS=primary Postgres.js will only connect to the primary host, allowing for zero downtime failovers.
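
A sketch combining both (the database name is just a placeholder):

const sql = postgres('postgres://localhost:5432,localhost:5433/mydb', {
  target_session_attrs: 'primary'
})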

The Connection Pool

Connections are created lazily once a query is created. This means that simply doing const sql = postgres(...) won't have any effect other than instantiating a new sql instance.

No connection will be made until a query is made.

For example:

const sql = postgres() // no connections are opened

await sql`...` // one connection is now opened
await sql`...` // previous opened connection is reused

// two connections are opened now
await Promise.all([
  sql`...`,
  sql`...`
])

When there is a high number of concurrent queries, Postgres.js will open as many connections as needed, up until the max number of connections is reached. By default max is 10. This can be changed by setting max in the postgres() call. Example - postgres('connectionURL', { max: 20 }).

This means that we get a much simpler story for error handling and reconnections. Queries will be sent over the wire immediately on the next available connection in the pool. Connections are automatically taken out of the pool if you start a transaction using sql.begin(), and automatically returned to the pool once your transaction is done.

Any query which was already sent over the wire will be rejected if the connection is lost. It'll automatically defer to the error handling you have for that query, and since connections are lazy it'll automatically try to reconnect the next time a query is made. The benefit of this is no weird generic "onerror" handler that tries to get things back to normal, and also simpler application code since you don't have to handle errors out of context.

There are no guarantees about queries executing in order unless using a transaction with sql.begin() or setting max: 1. Of course doing a series of queries, one awaiting the other will work as expected, but that's just due to the nature of js async/promise handling, so it's not necessary for this library to be concerned with ordering.

Since this library automatically creates prepared statements, it also has a default max lifetime for connections to prevent memory bloat on the database itself. This is a random interval for each connection between 45 and 90 minutes. This allows multiple connections to come up and down seamlessly without user interference.

Connection timeout

By default, connections will not close until .end() is called. However, it may be useful to have them close automatically when:

  • re-instantiating multiple sql`` instances
  • using Postgres.js in a Serverless environment (Lambda, etc.)
  • using Postgres.js with a database service that automatically closes connections after some time (see ECONNRESET issue)

This can be done using the idle_timeout or max_lifetime options. These configuration options specify the number of seconds to wait before automatically closing an idle connection and the maximum time a connection can exist, respectively.

For example, to close a connection that has either been idle for 20 seconds or existed for more than 30 minutes:

const sql = postgres({
  idle_timeout: 20,
  max_lifetime: 60 * 30
})

Auto fetching of array types

Postgres.js will automatically fetch table/array-type information when it first connects to a database.

If you have revoked access to pg_catalog this feature will no longer work and will need to be disabled.

You can disable this feature by setting fetch_types to false.
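
For example:

const sql = postgres({
  fetch_types: false
})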

Environmental variables

It is also possible to connect to the database without a connection string or any options. Postgres.js will fall back to the common environment variables used by psql as in the table below:

const sql = postgres()

Option            Environment Variables
host              PGHOST
port              PGPORT
database          PGDATABASE
username          PGUSERNAME or PGUSER
password          PGPASSWORD
idle_timeout      PGIDLE_TIMEOUT
connect_timeout   PGCONNECT_TIMEOUT

Prepared statements

Prepared statements will automatically be created for any queries where it can be inferred that the query is static. This can be disabled by setting the prepare option to false. This is useful, for instance, when using PgBouncer in transaction mode.
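
For example:

const sql = postgres({
  prepare: false
})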

Custom Types

You can add ergonomic support for custom types, or simply use sql.typed(value, type) inline, where type is the PostgreSQL oid for the type and value is the correctly serialized value. (oid values for types can be found in the pg_catalog.pg_type table.)

Adding Query helpers is the cleanest approach which can be done like this:

const sql = postgres({
  types: {
    rect: {
      // The pg_types oid to pass to the db along with the serialized value.
      to        : 1337,

      // An array of pg_types oids to handle when parsing values coming from the db.
      from      : [1337],

      // Function that transforms values before sending them to the db.
      serialize : ({ x, y, width, height }) => [x, y, width, height],

      // Function that transforms values coming from the db.
      parse     : ([x, y, width, height]) => ({ x, y, width, height })
    }
  }
})

// Now you can use sql.typed.rect() as specified above
const [custom] = await sql`
  insert into rectangles (
    name,
    rect
  ) values (
    'wat',
    ${ sql.typed.rect({ x: 13, y: 37, width: 42, height: 80 }) }
  )
  returning *
`

// custom = { name: 'wat', rect: { x: 13, y: 37, width: 42, height: 80 } }

Custom socket

Easily do in-process ssh tunneling to your database by providing a custom socket for Postgres.js to use. The function (optionally async) must return a socket-like duplex stream.

Here's a sample using ssh2

import ssh2 from 'ssh2'

const sql = postgres({
  ...options,
  socket: ({ host: [host], port: [port] }) => new Promise((resolve, reject) => {
    const ssh = new ssh2.Client()
    ssh
    .on('error', reject)
    .on('ready', () => 
      ssh.forwardOut('127.0.0.1', 12345, host, port, 
        (err, socket) => err ? reject(err) : resolve(socket)
      )
    )
    .connect(sshOptions)
  })
})

Teardown / Cleanup

To ensure proper teardown and cleanup on server restarts use await sql.end() before process.exit().

Calling sql.end() will reject new queries and return a Promise which resolves when all queries are finished and the underlying connections are closed. If a { timeout } option is provided any pending queries will be rejected once the timeout (in seconds) is reached and the connections will be destroyed.

Sample shutdown using Prexit

import prexit from 'prexit'

prexit(async () => {
  await sql.end({ timeout: 5 })
  await new Promise(r => server.close(r))
})

Error handling

Errors are all thrown to related queries and never globally. Errors coming from the database itself are always in the native Postgres format, and the same goes for any Node.js errors, eg. coming from the underlying connection.

Query errors will contain a stored error with the origin of the query to aid in tracing errors.

Query errors will also contain the query string and the parameters. These are not enumerable to avoid accidentally leaking confidential information in logs. To log these it is required to specifically access error.query and error.parameters, or set debug: true in options.
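
For example, a sketch of logging these fields when a query fails:

try {
  await sql`select * from unknown_table`
} catch (err) {
  console.error(err.message)
  console.error(err.query)      // the query string (not enumerable)
  console.error(err.parameters) // the parameters (not enumerable)
}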

There are also the following errors specifically for this library.

UNSAFE_TRANSACTION

Only use sql.begin or max: 1

To ensure statements in a transaction run on the same connection (which is required for them to run inside the transaction), you must use sql.begin(...) or only allow a single connection in options (max: 1).

UNDEFINED_VALUE

Undefined values are not allowed

Postgres.js won't accept undefined as values in tagged template queries since it becomes ambiguous what to do with the value. If you want to set something to null, use null explicitly.
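
For example, coalescing a possibly undefined value to null before interpolation (maybeAge is a hypothetical variable):

const age = maybeAge ?? null // passing undefined directly would throw UNDEFINED_VALUE

await sql`insert into users (name, age) values (${ name }, ${ age })`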

MESSAGE_NOT_SUPPORTED

X (X) is not supported

Whenever a message is received from Postgres which is not supported by this library. Feel free to file an issue if you think something is missing.

MAX_PARAMETERS_EXCEEDED

Max number of parameters (65534) exceeded

The postgres protocol doesn't allow more than 65534 (16-bit) parameters. If you run into this issue there are various workarounds, such as using sql([...]) to escape values instead of passing them as parameters.

SASL_SIGNATURE_MISMATCH

Message type X not supported

When using SASL authentication the server responds with a signature at the end of the authentication flow which needs to match the one on the client. This is to avoid man-in-the-middle attacks. If you receive this error the connection was canceled because the server did not reply with the expected signature.

NOT_TAGGED_CALL

Query not called as a tagged template literal

Making queries has to be done using the sql function as a tagged template. This is to ensure parameters are serialized and passed to Postgres as query parameters with correct types and to avoid SQL injection.

AUTH_TYPE_NOT_IMPLEMENTED

Auth type X not implemented

Postgres supports many different authentication types. This one is not supported.

CONNECTION_CLOSED

write CONNECTION_CLOSED host:port

This error is thrown if the connection was closed without an error. This should not happen during normal operations, so please create an issue if this was unexpected.

CONNECTION_ENDED

write CONNECTION_ENDED host:port

This error is thrown if the user has called sql.end() and performed a query afterward.

CONNECTION_DESTROYED

write CONNECTION_DESTROYED host:port

This error is thrown for any queries that were pending when the timeout to sql.end({ timeout: X }) was reached.

CONNECTION_CONNECT_TIMEOUT

write CONNECTION_CONNECT_TIMEOUT host:port

This error is thrown if the startup phase of the connection (tcp, protocol negotiation, and auth) took more than the default 30 seconds or what was specified using connect_timeout or PGCONNECT_TIMEOUT.

TypeScript support

postgres has TypeScript support. You can pass a row list type for your queries in this way:

interface User {
  id: number
  name: string
}

const users = await sql<User[]>`SELECT * FROM users`
users[0].id // ok => number
users[1].name // ok => string
users[0].invalid // fails: `invalid` does not exist on `User`

However, be sure to check the array length to avoid accessing properties of undefined rows:

const users = await sql<User[]>`SELECT * FROM users WHERE id = ${id}`
if (!users.length)
  throw new Error('Not found')
return users[0]

You can also use destructuring when you only care about a fixed number of rows. In this case, we recommend using tuples to handle undefined properly:

const [user]: [User?] = await sql`SELECT * FROM users WHERE id = ${id}`
if (!user) // => User | undefined
  throw new Error('Not found')
return user // => User

// NOTE:
const [first, second]: [User?] = await sql`SELECT * FROM users WHERE id = ${id}` // fails: `second` does not exist on `[User?]`
const [first, second] = await sql<[User?]>`SELECT * FROM users WHERE id = ${id}` // doesn't fail: `second: User | undefined`

We do our best to type all the public API, however types are not always updated when features are added or changed. Feel free to open an issue if you have trouble with types.

Migration tools

Postgres.js doesn't come with any migration solution since it's way out of scope, but here are some modules that support Postgres.js for migrations:

Thank you

A really big thank you to @JAForbes who introduced me to Postgres and still holds my hand navigating all the great opportunities we have.

Thanks to @ACXgit for initial tests and dogfooding.

Also thanks to Ryan Dahl for letting me have the postgres npm package name.

Comments
  • v3

    v3

    This is a complete rewrite to better support all the features that I was trying to get into v2. There are a few breaking changes from v2 beta, which some (myself included) are using in production, so I'm skipping a stable v2 release and going to v3.

    Here are some of the new things available..

    • Converted to ESM with CJS support
    • Deno support
    • Dynamic query builder based on raw sql
    • Realtime subscribe to db changes through logical replication
    • Cursors as async iterators
    • Multi-host support for High Availability setups
    • Postgres input parameter types from ParameterDescription
    • .describe() to only get query input types and column definitions
    • Support for Large Objects
    • max_lifetime for connections
    • Cancellation of requests

    Breaking changes from v2 -> v3

    • Cursors are always called with Result arrays (previously cursor 1 would return a row object, where > 1 would return an array of rows)
    • .writable() and .readable() are now async (they return a Promise that resolves to the stream)
    • Queries now return a lazy promise instead of being executed immediately. This means the query won't be sent until awaited (.then, .catch or .finally is called) or until .execute() is called manually.
    • .stream() is renamed to .forEach
    • Returned results are now their own Result class extending Array instead of an Array with extra properties (actually shouldn't be breaking unless you're doing something funny)
    • Parameters are now cast using the types returned from Postgres ParameterDescription with a fallback to the previously inferred types
    • Only tested with node v12 and up
    • Implicit array value to multiple parameter expansion removed (use sql([...]) instead)

    Breaking changes from v1 -> v2 (v2 never moved on from beta)

    • All identifiers from sql() in queries are now always quoted
    • Undefined parameters are no longer allowed
    • Numbers that cannot be safely cast to JS Number are returned as strings. This happens for e.g. select count(*), because count() returns a 64-bit integer (int8), so if you know your count() won't be too big for a JS number, just cast to int4 in your query, like select count(*)::int

    Fixes #12, Fixes #30, Fixes #63, Fixes #65, Fixes #67, Fixes #89, Fixes #156, Fixes #159, Fixes #179, Fixes #201, Fixes #221, Fixes #230, Fixes #234, Fixes #248, Fixes #250, Fixes #252, Fixes #254 Closes #98, Closes #101, Closes #231, Closes #233

    opened by porsager 54
  • Typescript support

    Typescript support

    Fix #7 :smile:

    Typescript definitions inferred from the code and the documentation.

    I relied on tests/index.js to improve these definitions. Just add // @ts-check at the beginning to enable type validation in js files.

    opened by Minigugus 50
  • Better dynamic query building

    Better dynamic query building

    Hi, First and foremost, I think this is a great start at making something really simple and effective. Thank you!!

    However... I have always found template strings a little odd especially in this context. Let us say we have

    const query = `SELECT ${distinct}, ${columns} FROM ${table} ${innerJoins} WHERE ${where}`; (Where distinct, columns, table etc. are their own strings made up of other functions elsewhere) which basically outputs...

    SELECT DISTINCT(person.uid), person.name, movie.title, movie.year
    FROM person 
    INNER JOIN movie_ref ON movie_ref.person_id = person.id
    INNER JOIN movie ON movie.id = movie_ref.movie_id
    WHERE person.name ILIKE '%luke%'
    

    I should then be able to do sql`${query}`.then((data) => console.log(data) or await etc..

    But instead I receive a syntax error: Error: syntax error at or near "$1" at Object.generic (../node_modules/postgres/lib/types.js:200:5)

    Gonna take a look at how this tagged template function is actually working as Im aware it takes the values $1, $2 etc.. and so forth to evaluate them and pass them to the statement e.g. select * from table where id = $1 => select * from table where id = somevalue

    At the moment, it means I have to directly place all my select statements big or small in this method. Which will start looking very messy especially when queries become more complex and the app grows.

    enhancement good first issue 
    opened by okbrown 36
  • Result of query is mixed array

    Result of query is mixed array

    According to docs:

    // other.js
    const sql = require('./db.js')
    
    const users = await sql`
      select name, age from users
    `
    // users: [{ name: 'Murray', age: 68 }, { name: 'Walter', age: 78 }]
    

    the result of a sql query call should be an array of rows, but in my case (v1.0.2) this array is returned:

    [
      { name: 'Murray', age: 68 },
      { name: 'Walter', age: 78 },
      count: 2,
      command: 'SELECT'
    ]
    

    This is really unfortunate because I can't easily return the data without count and command attributes.

    opened by slaviczavik 35
  • CONNECTION_CLOSED error on listen()

    CONNECTION_CLOSED error on listen()

    I'm getting this error kinda randomly:

    node:internal/process/promises:265
                triggerUncaughtException(err, true /* fromPromise */);
                ^
    Error: write CONNECTION_CLOSED undefined:undefined
        at Socket.closed (file:///.../node_modules/postgres/src/connection.js:448:57)
        at Socket.emit (node:events:520:28)
        at Socket.emit (node:domain:475:12)
        at TCP.<anonymous> (node:net:687:12)
        at cachedError (file:///.../node_modules/postgres/src/query.js:160:23)
        at new Query (file:///.../node_modules/postgres/src/query.js:36:24)
        at sql (file:///.../node_modules/postgres/src/index.js:104:11)
    

    I'm not sure if I should do anything to handle it myself, but I'm posting the issue since the docs mention:

    This should not happen during normal operations, so please create an issue if this was unexpected.

    opened by PierBover 24
  • Implement COPY table TO/FROM STDOUT/STDIN

    Implement COPY table TO/FROM STDOUT/STDIN

    Hello. We are using node-pg-copy-streams https://github.com/brianc/node-pg-copy-streams

    COPY table FROM STDIN - for importing a large csv file into a table. COPY table TO STDOUT - for exporting from a large table to a csv file.

    We, and I think many others, cannot use COPY table FROM/TO '/local/file.csv' (the option where the Postgres server reads the file from its local file system), because the web server (which handles the upload or download) does not have permission to read and write files on the database server.

    This is the only reason why we cannot use your library on our project. Do you think the implementation of this functionality is possible in the foreseeable future? Thanks.

    enhancement 
    opened by uasan 23
  • Error: write CONNECTION_CLOSED false

    Error: write CONNECTION_CLOSED false

    First of all, thank you so much for this amazing library! It is the best postgres driver so far.

    We have a service with ~60 req/s to the database. The postgres package is set to defaults; only connection settings (host, port, username, password, dbName, ...) are set. Everything works like a charm, but after more or less 2 hours, a small number of errors occur:

    Error: write CONNECTION_CLOSED false
        at Object.connection (/node-reverse/node_modules/postgres/lib/types.js:191:5)
        at close (/node-reverse/node_modules/postgres/lib/connection.js:173:18)
        at Socket.onclose (/node-reverse/node_modules/postgres/lib/connection.js:199:5)
        at Object.onceWrapper (events.js:286:20)
        at Socket.emit (events.js:198:13)
        at TCP._handle.close (net.js:607:12)
    

    The number of the errors increases with the time of application runtime. It seems like bad pool handling to me. We are testing it right now with the timeout option, so I will let you know if it helps or not.

    opened by mdorda 22
  • Throwing exceptions from sql.begin has some caching issue

    Throwing exceptions from sql.begin has some caching issue

    This code should fail with exception 'throws from tx', but instead fails with 'SELECTexception' syntax error:

    async function main() {
            try {
                    await sql.begin(async sql => {
                            try {
                                    let r = await sql`SELECTexception`;
                            } catch (ex) {
                                    throw new Error('throws from tx');
                            }
                    });
            } finally {
                    await sql.end();
            }
    }
    

    Looks like the original exception is cached somewhere in sql.begin()...

    Other than that, v3 is beautiful. :) I've made a bit of a quick and dirty compat layer for migrating from vitaly-t/pg-promise:

    https://megous.com/dl/tmp/compat.patch

    And was able to use your DB connector on a fairly large project that uses pg-promise just like that, and it works nicely. :) I've only found this issue so far during porting.

    Great job with v3 :) I'm looking forward to using those lovely batched cursors, and streaming replication protocol support. :)

    opened by megous 20
  • Using Postgres functions, like NOW(), in query helpers

    Using Postgres functions, like NOW(), in query helpers

    Consider the code:

    const fields = {
      name: 'John Doe',
      created_at: ...
    };
    await sql`
      INSERT INTO users ${sql(fields)}
    `;
    

    What would I use for created_at to send NOW()?

    I know I could probably use the following, but my hope is to keep the SQL compact with the dynamic query helper:

    await sql`
      INSERT INTO users (
        name,
        created_at
      ) VALUES (
        ${fields.name},
        NOW()
      )
    `;
    
    enhancement 
    opened by jamiesyme 19
  • Query timeout

    Query timeout

    If shutdown Postgres server and make a query:

    req.sql`select ...`;
    

    it runs forever

    Is there a parameter that would be responsible for the maximum query execution time in such a situation, after which an error would be generated?

    opened by budarin 18
  • Stream speed management using async/await (cursor)

    Stream speed management using async/await (cursor)

    The current stream mechanics make it easy to process rows as they are read from the query rather than return them in a block; however in ultra-large result set cases it's possible the rows will be read faster than they can be processed. See this Reddit thread for more comprehensive specifics. An example use case would be executing a query that yields 1 million+ rows then sending them in blocks of 200 to an external web service that takes several seconds to process each request. In this case it could be possible to send hundreds or thousands of concurrent outbound requests overwhelming heap space and the external service.

    There are two potential fixes:

    1. Allow iterator functions to return a promise which is then awaited to ensure more data is not sent to the iterator function than it can handle. The following is a hypothetical use case that buffers up to 200 rows at a time then sends it to an external service with limited speed:
    	var rowBuffer = [];
    	await sql`select created_at, name from events`.stream(async row => {
    		if (rowBuffer.push(row) >= 200) await transformAndSend(rowBuffer.splice(0,200));
    	});
    	if (rowBuffer.length > 0) await transformAndSend(rowBuffer.splice(0,200));
    

    From a syntax standpoint all that would change from existing functionality is the addition of the optional async. But from the backend the return from the result of the iterator would need to be checked if it is a promise, then awaited. This is an overtly simplified example:

    	while (moreRows) {
    		var row = await getRow();
    		var P = StreamIterator(row);
    		if (P instanceof Promise) await P;
    	}
    

    I'm not sure how the rows are received, whether each is electively read or they come in as events which can be paused. If they are purposefully read then this would be pretty easy. You'd just await the callback and get the next row. If they come in as events you'd need to buffer them to be sent to the callback and pause the incoming results if that buffer gets too large.

    *Ideally, the promise resolving to true or a specified enum should stop the query from completing. So if, for example, the user executed a query that botches a join and results in 10 billion rows of nonsense, or the destination service won't accept the results in the callback, it would be nice to have a means to gracefully and forcefully stop it from reading more rows.

    2. Alternatively (or additionally), making use of Symbol.asyncIterator would allow a standard "for await" loop as a means to asynchronously process data from an async source. This would be (very) easy to add in after altering the existing stream functionality to watch for returned promises, and could be officially extended from the query result as an .iterate() that returns a Symbol.asyncIterator object to manage this flow for the end user. That would look something like the following in practice:
    	var rowBuffer = [];
    	for await (let row of sql`select created_at, name from events`.iterate()) {
    		if (rowBuffer.push(row) >= 200) await transformAndSend(rowBuffer.splice(0,200));
    	});
    	if (rowBuffer.length > 0) await transformAndSend(rowBuffer.splice(0,200));
    

    I'm at your disposal if you need assistance in implementing this feature or need to test it once complete. If you can make efficiently iterating through large queries as easy as a "for await" loop you'll have introduced a quality-of-life improvement well ahead of the curve in terms of future proofing.

    enhancement 
    opened by Blitzsturm 18
  • the bulk/ multiple inserts not always work: ERROR: 42601 PostgresError: syntax error at or near

    the bulk/ multiple inserts not always work: ERROR: 42601 PostgresError: syntax error at or near ""0""

    my program is getting multiple rows from another query:

    for await (const rows of sql`SELECT <...some-query...>`.cursor(10)) {
      // some checking/processing of the rows data,
      // then insert to another table, with
      await sql`INSERT <...another-table> ${sql(rows)}`
    }
    

    it turns:

    ERROR: 42601 PostgresError: syntax error at or near ""0""
    

    in debugging logs:

    INSERT INTO <...another-table> "0"=$1,"1"=$2,"2"=$3
    

    with parameters:

    params: [
      { ... }, { ... }, { ... }
    ]
    

    in this case rows is a query Result array of 3 elements, but it seems sql(rows) did not recognize it as an array of multiple inserts; it considered it a single object insert instead, and derived the 3 keys as 0, 1 and 2

    the same multiple-insert code seemed to work before, but I'm not sure what change triggered this; when it happens, what can I do?

       // notice here when the problem happens, the `params` is passed with each raw objects in,
    

    but when the same multiple inserts worked before, all objects' values were flattened and passed in as values:

     params: [
       row0's column0, row0's column1, row0's column2, ...
       row1's column0, row1's column1, row1's column2, ...
       ...
     ]
    

    what's the problem here? How can I tell sql(rows) it's a real array of objects instead of a single insert object?

    opened by tx0c 3
  • How to connect database custom schema

    How to connect database custom schema

    This is more a question than an issue, and it is basically about whether it is possible to connect to a custom PostgreSQL database schema instead of the default "public" schema.

    I have looked in the docs, but I could not find any connection option to specify a database schema like:

    const sql = postgres('postgres://username:password@host:port/database?current_schema=custom');
    
    opened by osieltorres 0
  • COPY...FROM stream doesn't work with interpolated values

    COPY...FROM stream doesn't work with interpolated values

    I'm trying to import values using COPY FROM piped from a readable stream, but the sql helper only gives me errors.

    The example from the docs works as written:

    const userStream = Readable.from([
      'Murray\t68\n',
      'Walter\t80\n'
    ])
    const query = await sql`copy test_table ( name, age ) from stdin`.writable()
    await pipeline(userStream, query);
    

    But, if I want to interpolate the table and/or column names, it doesn't work:

    const tableName = 'test_table'
    const cols = [ 'name', 'age' ]
    
    // Interpolate both table name and columns
    const query = await sql`copy ${sql(tableName)} (${sql(cols)}) from stdin`.writable()
    // --> Error: Could not infer helper mode
    
    // Only interpolate columns
    const query = await sql`copy test_table (${sql(cols)}) from stdin`.writable()
    // --> Error: Could not infer helper mode
    
    // Only interpolate table name
    const query = await sql`copy ${sql(tableName)} (name, age) from stdin`.writable()
    // --> PostgresError: syntax error at end of input
    

    There's a full (non) working example here: https://gist.github.com/partap/fefc18a0d294dae033c893f2d2828187

    opened by partap 0
  • Fix writing host and port on connection timeouts

    Fix writing host and port on connection timeouts

    Received a write CONNECT_TIMEOUT undefined:undefined when the configuration was correct, but something in the network was incorrectly configured.

    The Deno polyfill already sets these options, but the Node.js socket doesn't expose them. Defaulting to the options object would also be an option here, but then we need to know which options.host[idx] is currently used.

    opened by dirkdev98 0
  • Fix transform function logic for deeply nested jsonb

    Fix transform function logic for deeply nested jsonb

    The createJsonTransform() function is not transforming deeply nested objects and arrays in jsonb

    function createJsonTransform(fn) {
      return function jsonTransform(x, column) {
        return typeof x === 'object' && x !== null && (column.type === 114 || column.type === 3802)
          ? Array.isArray(x)
            ? x.map(x => jsonTransform(x, column))
            : Object.entries(x).reduce((acc, [k, v]) => Object.assign(acc, { [fn(k)]: v }), {})
          : x
      }
    }
    

    Problem

    It wasn't transforming deeply nested arrays and objects because the reduce() method only iterates over the keys and values of the top-level object. It does not recursively transform the keys of any subsequent nested objects.

    Solution

    The jsonTransform() function needs to be called on the values of each key-value pair in the object so that it can keep iterating and transforming them regardless of how deeply nested they are. Example:

    function createJsonTransform(fn) {
      return function jsonTransform(x, column) {
        return typeof x === 'object' && x !== null && (column.type === 114 || column.type === 3802)
          ? Array.isArray(x)
            // Recursively transform array elements
            ? x.map(x => jsonTransform(x, column))
            // Recursively transform object keys and values
            : Object.entries(x).reduce((acc, [k, v]) => {
              // Use fn() to transform the key
              const transformedKey = fn(k);
              return Object.assign(acc, { [transformedKey]: jsonTransform(v, column) });
            }, {})
          // Return non-object values as-is
          : x;
      };
    }
    
    opened by Eprince-hub 0
  • Postgres not serialize byte[] on insert and update

    Postgres not serialize byte[] on insert and update

    It works properly if we need to insert/update bytea, but on arrays it fails with the error: column "columname" is of type bytea[] but expression is of type bytea

    opened by el-gato-de-nieve 0
Releases(v3.3.2)
  • v3.3.2(Nov 16, 2022)

    • Fix null json array transform error b153202
    • Fix nested json array transform - fixes #506 5dea953
    • Allow period in listen channel names - fix #495 a12108a

    https://github.com/porsager/postgres/compare/v3.3.1...v3.3.2

  • v3.3.1(Oct 3, 2022)

  • v3.3.0(Sep 30, 2022)

    • Fix subscribe stream close 19c9684
    • Support transform in subscribe messages 4e28de9
    • Prevent transform of columns when using subscribe - fixes #474 4ea4a59
    • Add simple() - fixes #472 e1a21ad
    • Add nested transforms (#460) 65dcf0d
    • Fix query being executed when using describe on unsafe - fixes #424 3e47f0a
    • Support empty arrays in dynamic in - fixes #417 85bca49
    • expose table oid and column number (#409) 599d0e7
    • Ensure listen doesn't reconnect if first connect throws edfa360
    • Only call setKeepAlive if present - fixes #404 84583e0

    Typescript related

    • Minor typescript organization (#416) 9a61b9f
    • Fix some undefined types issues (#447) 097d272
    • Add typescript support for "exports" (#439) 3415278
    • Add .values() typings - fixes #385 (#393) ac1bca4
    • Remove dead code that breaks types - fixes #382 (#383) 218a7d4

    https://github.com/porsager/postgres/compare/v3.2.4...v3.3.0

    Source code(tar.gz)
    Source code(zip)
  • v3.2.4(May 25, 2022)

    • Allow setting keep_alive: false bee62f3
    • Fix support for null in arrays - fixes #371 b04c853

    https://github.com/porsager/postgres/compare/v3.2.3...v3.2.4

    Source code(tar.gz)
    Source code(zip)
  • v3.2.3(May 23, 2022)

    • Fix Only use setKeepAlive in Deno if available 28fbbaf
    • Fix wrong helper match on multiple occurrences 02f3854

    Typescript related

    • Fix Deno assertRejects compatibility (#365) 0f0af92
    • Fix include missing boolean type in JSONValue union (#373) 1817387

    https://github.com/porsager/postgres/compare/v3.2.2...v3.2.3

    Source code(tar.gz)
    Source code(zip)
  • v3.2.2(May 15, 2022)

  • v3.2.1(May 15, 2022)

  • v3.2.0(May 15, 2022)

    • Add sslmode=verify-full support e67da29
    • Add support for array of fragments 342bf55
    • Add uri decode of host in url - fixes #346 1adc113
    • Add passing of rest url params to connection (ootb support cockroach urls) 41ed84f
    • Fix Deno partial writes 452a30d
    • Fix as dynamic helper 3300c40
    • Fix some nested fragments usage 9bfa902
    • Fix missing columns on Result when using simple protocol - fixes #350 1e2e298
    • Fix fragments in transactions - fixes #333 75914c7

    Typescript related

    • Upgrade/fix types (#357) 1e6d312
    • Add optional onlisten callback to listen() on TypeScript (#360) 6b749b2
    • Add implicit custom type inference (#361) 28512bf
    • Fix and improve sql() helper types (#338) c1de3d8
    • Fix update query type def for .writable() and .readable() to return promises (#347) 51269ce
    • Add bigint to typescript Serializable - fixes #330 f1e41c3

    https://github.com/porsager/postgres/compare/v3.1.0...v3.2.0

    Source code(tar.gz)
    Source code(zip)
  • v3.1.0(Apr 22, 2022)

    • Add close method to close but not end connections forever 94fea8f
    • Add .values() method to return rows as arrays of values (see the sketch after this list) 56873c2
    • Support transform.undefined - fixes #314 eab71e5
    • Support nested fragments values and dynamics - fixes #326 86445ca
    • Fix deno close sequence f76af24
    • Fix subscribe reconnect and add onsubscribe method - fixes #315 5097345
    • Deno ts fix - fixes #327 50403a1
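
    As a rough illustration of the new .values() method (column names follow the users example from the docs):

    const rows = await sql`select name, age from users`.values()
    // rows = [['Walter', 80], ['Murray', 68], ...] instead of objects keyed by column name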

    https://github.com/porsager/postgres/compare/v3.0.6...v3.1.0

    Source code(tar.gz)
    Source code(zip)
  • v3.0.6(Apr 19, 2022)

    • Properly close connections in Deno 13950af
    • Only write end message if open 408a2fb
    • Improve query cancellation d69e264
    • Use monotonically increasing time for timeout - fixes #316 04644c0
    • Add support for dynamic columns with returning - fixes #317 822fb21
    • Fix type errors in TypeScript deno projects (#313) 6a631b7
    • Execute forEach instantly ea6ccd4

    https://github.com/porsager/postgres/compare/v3.0.5...v3.0.6

    Source code(tar.gz)
    Source code(zip)
  • v3.0.5(Apr 6, 2022)

    • Fix transaction execution timing 28bb0b3
    • Add optional onlisten function to listen 1dc2fd2
    • Fix dynamic in helper after insert #305 4d63a59

    https://github.com/porsager/postgres/compare/v3.0.4...v3.0.5

    Source code(tar.gz)
    Source code(zip)
  • v3.0.4(Apr 5, 2022)

  • v3.0.3(Apr 4, 2022)

    • Run tests with github actions b536d0d
    • Add custom socket option - fixes #284 5413f0c
    • Fix sql function overload type inference (#294) 3c4e90a
    • Update deno std to 0.132 and enable last tests 50762d4
    • Send proper client-encoding - Fixes #288 e5b8554

    https://github.com/porsager/postgres/compare/v3.0.2...v3.0.3

    Source code(tar.gz)
    Source code(zip)
  • v3.0.2(Mar 31, 2022)

    • Fix BigInt handling 36a70df
    • Fix unsubscribing (#300) b6c597f
    • Parse update properly with identity full - Fixes #296 3ed11e7

    https://github.com/porsager/postgres/compare/v3.0.1...v3.0.2

    Source code(tar.gz)
    Source code(zip)
  • v3.0.1(Mar 30, 2022)

    • Improve connection queue handling + fix leak cee1a57
    • Use publications option - fixes #295 b5ceecc
    • Add types to debug signature dbb668c
    • Throw proper query error if destroyed e148a0a
    • Transaction rejects with rethrown error - fixes #289 f7c8ae6
    • Only create origin stacktrace for tagged and debug - fixes #290 a782edf
    • Include types and readme in deno release - fixes #287 9068820
    • Disable fetch_types for Subscribe options 72e0cdb
    • Update TypeScript types with v3 changes (#293) db05836

    https://github.com/porsager/postgres/compare/v3.0.0...v3.0.1

    Source code(tar.gz)
    Source code(zip)
  • v3.0.0(Mar 25, 2022)

    This is a complete rewrite to better support all the features that I was trying to get into v2. There are a few breaking changes from v2 beta, which some (myself included) were using in production, so I'm skipping a stable v2 release and going straight to v3.

    Here are some of the new things available, but check the updated docs.

    • Dynamic query builder based on raw sql
    • Realtime subscribe to db changes through logical replication
    • Multi-host support for High Availability setups
    • Postgres input parameter types from ParameterDescription
    • Deno support
    • Cursors as async iterators (see the sketch after this list)
    • .describe() to only get query input types and column definitions
    • Support for Large Objects
    • max_lifetime for connections
    • Cancellation of requests
    • Converted to ESM (with CJS support)
    • Typescript support (Credit @minigugus)
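
    For example, a minimal sketch of consuming a cursor as an async iterator (generate_series stands in for a real table):

    for await (const rows of sql`select * from generate_series(1, 1000) as n`.cursor(100)) {
      // rows is a Result array of up to 100 rows per iteration
    }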

    Breaking changes from v2 -> v3

    • Cursors are always called with Result arrays (previously a cursor size of 1 would return a single row object, whereas a size > 1 would return an array of rows)
    • .writable() and .readable() are now async (they return a Promise that resolves to the stream)
    • Queries now return a lazy promise instead of being executed immediately. This means the query won't be sent until it is awaited (.then, .catch, or .finally is called) or until .execute() is called manually (see the sketch after this list).
    • .stream() is renamed to .forEach
    • Returned results are now their own Result class extending Array, instead of an Array with extra properties (this shouldn't actually be breaking unless you're doing something unusual)
    • Parameters are now cast using the types returned from Postgres ParameterDescription with a fallback to the previously inferred types
    • Only tested with node v12 and up
    • Implicit array value to multiple parameter expansion removed (use sql([...]) instead)
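
    A minimal sketch of the lazy query behaviour described above (the query itself is just illustrative):

    const query = sql`select 1 as one` // nothing is sent to the server yet
    const result = await query         // the query is executed here
    // ...or trigger it explicitly without awaiting:
    sql`select 1 as one`.execute().then(rows => console.log(rows))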

    Breaking changes from v1 -> v2 (v2 never moved on from beta)

    • All identifiers from sql() in queries are now always quoted
    • Undefined parameters are no longer allowed
    • Rename timeout option to idle_timeout
    • Default to 10 connections instead of number of CPUs
    • Numbers that cannot be safely represented as a JS Number are returned as strings. This happens for e.g. select count(*) because count() returns a 64-bit integer (int8), so if you know your count() won't be too big for a JS number, just cast it in your query to int4, e.g. select count(*)::int.
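
    For instance, a small sketch of that cast (the users table is just an example):

    const [{ count }] = await sql`select count(*)::int as count from users`
    // count is a regular JS number; without the ::int cast it would arrive as a string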

    https://github.com/porsager/postgres/compare/v1.0.2...v3.0.0

    Source code(tar.gz)
    Source code(zip)
  • v3.0.0-rc.3(Mar 24, 2022)

  • v3.0.0-rc.2(Mar 23, 2022)

  • v3.0.0-rc.1(Mar 20, 2022)

    This is a complete rewrite to better support all the features that I was trying to get into v2. There are a few breaking changes from v2 beta, which some (myself included) are using in production, so I'm skipping a stable v2 release and going straight to v3.

    Here are some of the new things available:

    • Converted to ESM with CJS support
    • Deno support
    • Dynamic query builder based on raw sql
    • Realtime subscribe to db changes through logical replication
    • Cursors as async iterators
    • Multi-host support for High Availability setups
    • Postgres input parameter types from ParameterDescription
    • .describe() to only get query input types and column definitions
    • Support for Large Objects
    • max_lifetime for connections
    • Cancellation of requests

    Breaking changes from v2 -> v3

    • Cursors are always called with Result arrays (previously a cursor size of 1 would return a single row object, whereas a size > 1 would return an array of rows)
    • .writable() and .readable() are now async (they return a Promise that resolves to the stream)
    • Queries now return a lazy promise instead of being executed immediately. This means the query won't be sent until it is awaited (.then, .catch, or .finally is called) or until .execute() is called manually.
    • .stream() is renamed to .forEach
    • Returned results are now their own Result class extending Array, instead of an Array with extra properties (this shouldn't actually be breaking unless you're doing something unusual)
    • Parameters are now cast using the types returned from Postgres ParameterDescription with a fallback to the previously inferred types
    • Only tested with node v12 and up
    • Implicit array value to multiple parameter expansion removed (use sql([...]) instead)

    Breaking changes from v1 -> v2 (v2 never moved on from beta)

    • All identifiers from sql() in queries are now always quoted
    • Undefined parameters are no longer allowed
    • Numbers that cannot be safely represented as a JS Number are returned as strings. This happens for e.g. select count(*) because count() returns a 64-bit integer (int8), so if you know your count() won't be too big for a JS number, just cast it in your query to int4, e.g. select count(*)::int.

    https://github.com/porsager/postgres/compare/v2.0.0-beta.11...v3.0.0-rc.1

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.11(Nov 17, 2021)

    • Fix listen reconnect on connection errors + simple backoff 09592ea

    https://github.com/porsager/postgres/compare/v2.0.0-beta.10...v2.0.0-beta.11

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.10(Sep 29, 2021)

  • v2.0.0-beta.9(Sep 19, 2021)

    • Subscribe to changes using logical replication (#220) 51d033c
    • Fix #85 ce80061
    • Fix slow parsing of large messages (#229) 2c3353b
    • Expose PostgresError on module (#227) e954dba
    • transform.column.to support (#212) a02de67

    https://github.com/porsager/postgres/compare/v2.0.0-beta.8...v2.0.0-beta.9

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.8(Aug 28, 2021)

  • v2.0.0-beta.7(Aug 16, 2021)

    • Fix potential endless loop 8150005
    • Recreate prepared statements if invalidated (#195) 591d47a
    • Copy (#211) 8e675a3
    • Add raw() method - fixes #194 (#196) 8b405b3
    • Use correct SSL mode description (#210) 1e7713e
    • Prevent obscuring errors when querying pg_catalog (#204) e3ace38
    • Optionally disable fetching of array types. (#205) 779771a
    • Fix return type definition f900c58

    https://github.com/porsager/postgres/compare/v2.0.0-beta.6...v2.0.0-beta.7

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.6(Jul 7, 2021)

    • Use keepalive with 1 minute idle time 60c1f8e
    • Include custom types when fetching array types. (#103) 174e4fd
    • Make cursor work with unsafe - fixes #149 b15861d
    • Properly throw initial cursor errors 1b7e9b9
    • Include missing last set in cursor - fixes #150 #81 cea5f73
    • Fix #188 Support escaping dynamic schemas and tables 2e3652b
    • Handle disable and false sslmode url strings - fixes #177 0eff017
    • Add support for prepared statements in sql.unsafe (#176) ae6c350

    https://github.com/porsager/postgres/compare/v2.0.0-beta.5...v2.0.0-beta.6

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.5(Apr 12, 2021)

    • Feature: Add raw input property to error and debug parameters (#167) 9e83e7f
    • Feature: Multihost support for High Availability setups (#158) 16bc7db
    • Feature: Add unlisten support (#155) 6ef71c8
    • Feature: Add sslmode prefer and require 0cb2981
    • Fix: Catch connection errors in begin() (transactions) - fixes #162 fff6640
    • Fix: Escaping columns for helpers and listen 00a10b2
    • Fix: Properly cleanup sockets (#136) c5bfcf2

    https://github.com/porsager/postgres/compare/v2.0.0-beta.4...v2.0.0-beta.5

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.4(Feb 12, 2021)

    • Better sql`` types (#84) f82d679
    • Fallback to user for db name instead of 'postgres' - fixes #141 0b33b71

    https://github.com/porsager/postgres/compare/v2.0.0-beta.3...v2.0.0-beta.4

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.3(Jan 19, 2021)

    • Fix parallel SCRAM-SHA-256 authentication (#145) 0709db4
    • Expose the PostgresError type for TypeScript (#99) 4d5fdc1
    • Fix type of timeout (#118) c2c0bf1

    https://github.com/porsager/postgres/compare/v2.0.0-beta.2...v2.0.0-beta.3

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.2(Oct 2, 2020)

    • double precision can be represented safely as number in js 5e5ae5a
    • Fix .end() hanging as pending forever a5daa3e
    • Ensure end is called for connections not ready 219fa27
    • Fix support for node 8 and 10 3f6bbab

    https://github.com/porsager/postgres/compare/v2.0.0-beta.1...v2.0.0-beta.2

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0-beta.1(Sep 29, 2020)

    • Fix wrong api usage errors being swallowed a891603
    • Fix connection config errors being swallowed 869ab9b
    • Add no_prepare option - fixes #93 and #76 ce14949

    https://github.com/porsager/postgres/compare/v2.0.0-beta.0...v2.0.0-beta.1

    Source code(tar.gz)
    Source code(zip)
Owner
Rasmus Porsager