A PostgreSQL client with strict types, detailed logging and assertions.

Overview

Slonik

A battle-tested PostgreSQL client with strict types, detailed logging and assertions.

(Animated GIF: Slonik producing query logs. Slonik produces logs using Roarr. The logs include the stack trace of the actual query invocation location and the values used to execute the query.)

Sponsors

If you value my work and want to see Slonik and many of my other open-source projects continuously improved, then please consider becoming a patron.

Principles

  • Promotes writing raw SQL.
  • Discourages ad-hoc dynamic generation of SQL.

Read: Stop using Knex.js

Note: Using this project does not require TypeScript. It is a regular ES6 module. Ignore the type definitions used in the documentation if you do not use a type system.

About Slonik

Battle-Tested

Slonik began as a collection of utilities designed for working with node-postgres. We continue to use node-postgres as it provides a robust foundation for interacting with PostgreSQL. However, what once was a collection of utilities has since grown into a framework that abstracts repeating code patterns, protects against unsafe connection handling and value interpolation, and provides a rich debugging experience.

Slonik has been battle-tested with large data volumes and queries ranging from simple CRUD operations to data-warehousing needs.

Origin of the name

Slonik

The name of the elephant depicted in the official PostgreSQL logo is Slonik. The name itself is derived from the Russian word for "little elephant".

Read: The History of Slonik, the PostgreSQL Elephant Logo

Repeating code patterns and type safety

Among the primary reasons for developing Slonik was the motivation to reduce repeating code patterns and add a level of type safety. This is primarily achieved through methods such as one, many, etc. But what is the issue? It is best illustrated with an example.

Suppose the requirement is to write a method that retrieves a resource ID given values defining (what we assume to be) a unique constraint. If we did not have the aforementioned convenience methods available, then it would need to be written as:

import {
  sql
} from 'slonik';
import type {
  DatabaseConnectionType
} from 'slonik';

type DatabaseRecordIdType = number;

const getFooIdByBar = async (connection: DatabaseConnectionType, bar: string): Promise<DatabaseRecordIdType> => {
  const fooResult = await connection.query(sql`
    SELECT id
    FROM foo
    WHERE bar = ${bar}
  `);

  if (fooResult.rowCount === 0) {
    throw new Error('Resource not found.');
  }

  if (fooResult.rowCount > 1) {
    throw new Error('Data integrity constraint violation.');
  }

  return fooResult.rows[0].id;
};

The oneFirst method abstracts all of the above logic into:

const getFooIdByBar = (connection: DatabaseConnectionType, bar: string): Promise<DatabaseRecordIdType> => {
  return connection.oneFirst(sql`
    SELECT id
    FROM foo
    WHERE bar = ${bar}
  `);
};

oneFirst throws:

  • NotFoundError if query returns no rows
  • DataIntegrityError if query returns multiple rows
  • DataIntegrityError if query returns multiple columns

This becomes particularly important when writing routines where multiple queries depend on the previous result. Using methods with inbuilt assertions ensures that, in case of an error, the error points to the original source of the problem. In contrast, unless assertions for all possible outcomes are typed out as in the previous example, the unexpected result of the query will be fed to the next operation. If you are lucky, the next operation will simply break; if you are unlucky, you risk data corruption and hard-to-locate bugs.
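
For example, a minimal sketch (the table and column names are assumed) of a routine where the second query depends on the first; the oneFirst assertion fails at the source rather than feeding a bad value forward:

const updateBazQux = async (connection, bar, qux) => {
  // Throws NotFoundError / DataIntegrityError here, before the value
  // can reach the UPDATE below.
  const fooId = await connection.oneFirst(sql`
    SELECT id
    FROM foo
    WHERE bar = ${bar}
  `);

  await connection.query(sql`
    UPDATE baz
    SET qux = ${qux}
    WHERE foo_id = ${fooId}
  `);
};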

Furthermore, using methods that guarantee the shape of the results allows us to leverage static type checking and catch some of the errors even before executing the code, e.g.

const fooId = await connection.many(sql`
  SELECT id
  FROM foo
  WHERE bar = ${bar}
`);

await connection.query(sql`
  DELETE FROM baz
  WHERE foo_id = ${fooId}
`);

A static type check of the above example will produce a warning, as fooId is guaranteed to be an array while the binding of the last query expects a primitive value.
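
The fix is to use a method whose result shape matches the binding, e.g. oneFirst, which is guaranteed to return a primitive value:

const fooId = await connection.oneFirst(sql`
  SELECT id
  FROM foo
  WHERE bar = ${bar}
`);

await connection.query(sql`
  DELETE FROM baz
  WHERE foo_id = ${fooId}
`);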

Protecting against unsafe connection handling

Slonik only allows checking out a connection for the duration of the promise routine supplied to the pool#connect() method.

The primary reason for implementing only this connection pooling method is that the alternative is inherently unsafe, e.g.

// Note: This example is using unsupported API.

const main = async () => {
  const connection = await pool.connect();

  await connection.query(sql`SELECT foo()`);

  await connection.release();
};

In this example, if SELECT foo() produces an error, then the connection is never released, i.e. it remains checked out of the pool indefinitely.

A fix to the above is to ensure that connection#release() is always called, i.e.

// Note: This example is using unsupported API.

const main = async () => {
  const connection = await pool.connect();

  let lastExecutionResult;

  try {
    lastExecutionResult = await connection.query(sql`SELECT foo()`);
  } finally {
    await connection.release();
  }

  return lastExecutionResult;
};

Slonik abstracts the latter pattern into the pool#connect() method.

const main = () => {
  return pool.connect((connection) => {
    return connection.query(sql`SELECT foo()`);
  });
};

The connection is always released back to the pool after the promise produced by the function supplied to the connect() method is either resolved or rejected.

Protecting against unsafe transaction handling

Just like in the unsafe connection handling described above, Slonik only allows creating a transaction for the duration of the promise routine supplied to the connection#transaction() method.

connection.transaction(async (transactionConnection) => {
  await transactionConnection.query(sql`INSERT INTO foo (bar) VALUES ('baz')`);
  await transactionConnection.query(sql`INSERT INTO qux (quux) VALUES ('quuz')`);
});

This pattern ensures that the transaction is either committed or aborted the moment the promise is either resolved or rejected.
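
For example, in this sketch the rejection aborts the transaction, so the first INSERT is rolled back rather than left dangling:

await connection.transaction(async (transactionConnection) => {
  await transactionConnection.query(sql`INSERT INTO foo (bar) VALUES ('baz')`);

  // Rejecting the promise issues ROLLBACK; the INSERT above is undone.
  throw new Error('Business logic failed.');
});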

Protecting against unsafe value interpolation

SQL injections are one of the most well known attack vectors. Some of the biggest data leaks were the consequence of improper user-input handling. In general, SQL injections are easily preventable by using parameterization and by restricting database permissions, e.g.

// Note: This example is using unsupported API.

connection.query('SELECT $1', [
  userInput
]);

In this example, the query text (SELECT $1) and the parameters (the value of userInput) are passed to the PostgreSQL server, where the parameters are safely substituted into the query. This is a safe way to execute a query using user input.

The vulnerabilities appear when developers cut corners or when they do not know about parameterization, i.e. there is a risk that someone will instead write:

// Note: This example is using unsupported API.

connection.query('SELECT \'' + userInput + '\'');

As evidenced by the history of data leaks, this happens more often than anyone would like to admit. It is an especially big risk in the Node.js community, where a predominant number of developers come from a frontend background and have not had training in working with RDBMSes. Therefore, one of the key selling points of Slonik is that it adds multiple layers of protection to prevent unsafe handling of user input.

To begin with, Slonik does not allow running plain-text queries.

connection.query('SELECT 1');

The above invocation would produce an error:

TypeError: Query must be constructed using sql tagged template literal.

This means that the only way to run a query is by constructing it using sql tagged template literal, e.g.

connection.query(sql`SELECT 1`);

To add a parameter to the query, the user must use template literal placeholders, e.g.

connection.query(sql`SELECT ${userInput}`);

Slonik takes over from here and constructs a query with value bindings, then sends the resulting query text and parameters to PostgreSQL. As the sql tagged template literal is the only way to execute a query, it adds a strong layer of protection against accidental unsafe user-input handling due to limited knowledge of the SQL client API.

As Slonik restricts the user's ability to generate and execute dynamic SQL, it provides helper functions used to generate fragments of the query and the corresponding value bindings, e.g. sql.identifier, sql.join and sql.unnest. These methods generate tokens that the query executor interprets to construct a safe query, e.g.

connection.query(sql`
  SELECT ${sql.identifier(['foo', 'a'])}
  FROM (
    VALUES
    (
      ${sql.join(
        [
          sql.join(['a1', 'b1', 'c1'], sql`, `),
          sql.join(['a2', 'b2', 'c2'], sql`, `)
        ],
        sql`), (`
      )}
    )
  ) foo(a, b, c)
  WHERE foo.b IN (${sql.join(['c1', 'a2'], sql`, `)})
`);

This (contrived) example generates a query equivalent to:

SELECT "foo"."a"
FROM (
  VALUES
    ($1, $2, $3),
    ($4, $5, $6)
) foo(a, b, c)
WHERE foo.b IN ($7, $8)

That is executed with the parameters provided by the user.

To sum up, Slonik is designed to prevent accidental creation of queries vulnerable to SQL injections.

Documentation

Usage

Create connection

Use createPool to create a connection pool, e.g.

import {
  createPool,
} from 'slonik';

const pool = createPool('postgres://');

An instance of a Slonik connection pool can then be used to create a new connection, e.g.

pool.connect(async (connection) => {
  await connection.query(sql`SELECT 1`);
});

The connection will be kept alive until the promise resolves (the result of the method supplied to connect()).

Refer to query method documentation to learn about the connection methods.

If you do not require having a persistent connection to the same backend, then you can directly use pool to run queries, e.g.

pool.query(sql`SELECT 1`);

Beware that in the latter example, the connection picked to execute the query is a random connection from the connection pool, i.e. using the latter method (without explicit connect()) does not guarantee that multiple queries will refer to the same backend.
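
For example, statements that depend on shared session state, such as a temporary table, must run through connect() to guarantee a single backend (a sketch; the table name is assumed):

await pool.connect(async (connection) => {
  await connection.query(sql`CREATE TEMPORARY TABLE foo_staging (id int4)`);

  // Guaranteed to run on the same backend, so the temporary table is visible.
  await connection.query(sql`INSERT INTO foo_staging (id) VALUES (1)`);
});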

End connection pool

Use pool.end() to end idle connections and prevent creation of new connections.

The result of pool.end() is a promise that is resolved when all connections are ended.

import {
  createPool,
  sql,
} from 'slonik';

const pool = createPool('postgres://');

const main = async () => {
  await pool.query(sql`
    SELECT 1
  `);

  await pool.end();
};

main();

Note: pool.end() does not terminate active connections/ transactions.

Describing the current state of the connection pool

Use pool.getPoolState() to find out whether the pool is alive, how many connections are active and idle, and how many clients are waiting for a connection.

import {
  createPool,
  sql,
} from 'slonik';

const pool = createPool('postgres://');

const main = async () => {
  pool.getPoolState();

  // {
  //   activeConnectionCount: 0,
  //   ended: false,
  //   idleConnectionCount: 0,
  //   waitingClientCount: 0,
  // }

  await pool.connect(() => {
    pool.getPoolState();

    // {
    //   activeConnectionCount: 1,
    //   ended: false,
    //   idleConnectionCount: 0,
    //   waitingClientCount: 0,
    // }
  });

  pool.getPoolState();

  // {
  //   activeConnectionCount: 0,
  //   ended: false,
  //   idleConnectionCount: 1,
  //   waitingClientCount: 0,
  // }

  await pool.end();

  pool.getPoolState();

  // {
  //   activeConnectionCount: 0,
  //   ended: true,
  //   idleConnectionCount: 0,
  //   waitingClientCount: 0,
  // }
};

main();

Note: pool.end() does not terminate active connections/ transactions.

API

/**
 * @param connectionUri PostgreSQL [Connection URI](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING).
 */
createPool(
  connectionUri: string,
  clientConfiguration: ClientConfigurationType
): DatabasePoolType;

/**
 * @property captureStackTrace Dictates whether to capture stack trace before executing query. Middlewares access stack trace through query execution context. (Default: true)
 * @property connectionRetryLimit Number of times to retry establishing a new connection. (Default: 3)
 * @property connectionTimeout Timeout (in milliseconds) after which an error is raised if a connection cannot be established. (Default: 5000)
 * @property idleInTransactionSessionTimeout Timeout (in milliseconds) after which idle clients are closed. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 60000)
 * @property idleTimeout Timeout (in milliseconds) after which idle clients are closed. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 5000)
 * @property interceptors An array of [Slonik interceptors](https://github.com/gajus/slonik#slonik-interceptors).
 * @property maximumPoolSize Do not allow more than this many connections. (Default: 10)
 * @property preferNativeBindings Uses libpq bindings when `pg-native` module is installed. (Default: true)
 * @property statementTimeout Timeout (in milliseconds) after which database is instructed to abort the query. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 60000)
 * @property transactionRetryLimit Number of times a transaction failing with Transaction Rollback class error is retried. (Default: 5)
 * @property typeParsers An array of [Slonik type parsers](https://github.com/gajus/slonik#slonik-type-parsers).
 */
type ClientConfigurationInputType = {|
  +captureStackTrace?: boolean,
  +connectionRetryLimit?: number,
  +connectionTimeout?: number | 'DISABLE_TIMEOUT',
  +idleInTransactionSessionTimeout?: number | 'DISABLE_TIMEOUT',
  +idleTimeout?: number | 'DISABLE_TIMEOUT',
  +interceptors?: $ReadOnlyArray<InterceptorType>,
  +maximumPoolSize?: number,
  +preferNativeBindings?: boolean,
  +statementTimeout?: number | 'DISABLE_TIMEOUT',
  +transactionRetryLimit?: number,
  +typeParsers?: $ReadOnlyArray<TypeParserType>,
|};

Example:

import {
  createPool
} from 'slonik';

const pool = createPool('postgres://');

await pool.query(sql`SELECT 1`);

Default configuration

Default interceptors

None.

Check out slonik-interceptor-preset for an opinionated collection of interceptors.

Default type parsers

These type parsers are enabled by default:

  • date: Produces a literal date as a string (format: YYYY-MM-DD).
  • int8: Produces an integer.
  • interval: Produces the interval in seconds (an integer).
  • numeric: Produces a float.
  • timestamp: Produces a Unix timestamp (in milliseconds).
  • timestamptz: Produces a Unix timestamp (in milliseconds).

To disable the default type parsers, pass an empty array, e.g.

createPool('postgres://', {
  typeParsers: []
});

You can create default type parser collection using createTypeParserPreset, e.g.

import {
  createTypeParserPreset
} from 'slonik';

createPool('postgres://', {
  typeParsers: [
    ...createTypeParserPreset()
  ]
});

Default timeouts

There are 4 types of configurable timeouts:

  • connectionTimeout: Timeout (in milliseconds) after which an error is raised if a connection cannot be established. Default: 5000.
  • idleInTransactionSessionTimeout: Timeout (in milliseconds) after which idle clients are closed. Use the 'DISABLE_TIMEOUT' constant to disable the timeout. Default: 60000.
  • idleTimeout: Timeout (in milliseconds) after which idle clients are closed. Use the 'DISABLE_TIMEOUT' constant to disable the timeout. Default: 5000.
  • statementTimeout: Timeout (in milliseconds) after which the database is instructed to abort the query. Use the 'DISABLE_TIMEOUT' constant to disable the timeout. Default: 60000.

Slonik sets aggressive timeouts by default. These timeouts are designed to provide a safe interface to the database. They might not work for all programs. If your program has long-running statements, consider adjusting the timeouts just for those statements instead of changing the defaults.
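
One way to do that is to raise statement_timeout only for the scope of a single transaction using SET LOCAL (a sketch; the function name is hypothetical):

await pool.transaction(async (transactionConnection) => {
  // SET LOCAL only lasts until the end of the current transaction.
  await transactionConnection.query(sql`SET LOCAL statement_timeout = 300000`);

  await transactionConnection.query(sql`SELECT generate_monthly_report()`);
});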

Using native libpq bindings

In order to use the native libpq PostgreSQL bindings, install pg-native.

$ npm install pg-native

By default, Slonik uses native bindings when pg-native is installed. To use JavaScript bindings when pg-native is installed, configure preferNativeBindings: false.
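
For example:

import {
  createPool
} from 'slonik';

// Force the JavaScript driver even if pg-native is installed.
const pool = createPool('postgres://', {
  preferNativeBindings: false
});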

Checking out a client from the connection pool

Slonik only allows checking out a connection for the duration of the promise routine supplied to the pool#connect() method.

import {
  createPool,
} from 'slonik';

const pool = createPool('postgres://localhost');

const result = await pool.connect(async (connection) => {
  await connection.query(sql`SELECT 1`);
  await connection.query(sql`SELECT 2`);

  return 'foo';
});

result;
// 'foo'

The connection is released back to the pool after the promise produced by the function supplied to the connect() method is either resolved or rejected.

Read: Protecting against unsafe connection handling

Mocking Slonik

Slonik provides a way to mock queries against the database.

  • Use createMockPool to create a mock connection.
  • Use createMockQueryResult to create a mock query result.

import {
  createMockPool,
  createMockQueryResult,
} from 'slonik';

type OverridesType = {|
  +query: (sql: string, values: $ReadOnlyArray<PrimitiveValueExpressionType>,) => Promise<QueryResultType<QueryResultRowType>>,
|};

createMockPool(overrides: OverridesType): DatabasePoolType;
createMockQueryResult(rows: $ReadOnlyArray<QueryResultRowType>): QueryResultType<QueryResultRowType>;

Example:

import {
  createMockPool,
  createMockQueryResult,
  sql,
} from 'slonik';

const pool = createMockPool({
  query: async () => {
    return createMockQueryResult([
      {
        foo: 'bar',
      },
    ]);
  },
});

await pool.connect(async (connection) => {
  const results = await connection.query(sql`
    SELECT ${'foo'}
  `);
});

How are they different?

pg vs slonik

pg is built intentionally to provide unopinionated, minimal abstraction and encourages use of other modules to implement convenience methods.

Slonik is built on top of pg and provides convenience methods for building queries and querying data.

Work on pg began on Tue Sep 28 22:09:21 2010. It is authored by Brian Carlson.

pg-promise vs slonik

As the name suggests, pg-promise was originally built to enable use of pg module with promises (at the time, pg only supported Continuation Passing Style (CPS), i.e. callbacks). Since then pg-promise added features for connection/ transaction handling, a powerful query-formatting engine and a declarative approach to handling query results.

The primary difference between Slonik and pg-promise:

  • Slonik queries can only be constructed using the sql tagged template literal, which guarantees protection against unsafe value interpolation.
  • Slonik implements an interceptor API (middleware), which allows extending library functionality using multiple plugins.

Note: The author of pg-promise has objected to the above claims. I have removed a difference that was clearly wrong. I maintain that the above two differences remain valid: even though pg-promise might have substitute functionality for variable interpolation and interceptors, it implements them in a way that does not provide the same benefits that Slonik provides, namely: guaranteed security and support for extending library functionality using multiple plugins.

Other differences are primarily in how the equivalent features are implemented, e.g.

  • Custom type formatting (pg-promise): Not available in Slonik. The current proposal is to create an interceptor that would have access to the query fragment constructor.
  • Formatting filters (pg-promise): In Slonik, use tagged template value expressions to construct query fragments and bind parameter values.
  • Query files (pg-promise): Use slonik-sql-tag-raw.
  • Tasks (pg-promise): Use pool.connect.
  • Configurable transactions (pg-promise): Not available in Slonik. Track this issue.
  • Events (pg-promise): Use interceptors.

When weighing which abstraction to use, it would be unfair not to consider that pg-promise is a mature project with dozens of contributors. Meanwhile, Slonik is a young project (started in March 2017) that until recently was developed without active community input. However, if you value the unique features that Slonik adds, and its opinionated API design, and are not afraid of adopting a technology in its early days, then I warmly invite you to adopt Slonik and become a contributor to what I intend to make the standard PostgreSQL client in the Node.js community.

Work on pg-promise began Wed Mar 4 02:00:34 2015. It is authored by Vitaly Tomilov.

Type parsers

Type parsers describe how to parse PostgreSQL types.

type TypeParserType = {|
  +name: string,
  +parse: (value: string) => *
|};

Example:

{
  name: 'int8',
  parse: (value) => {
    return parseInt(value, 10);
  }
}

Note: Unlike pg-types that uses OIDs to identify types, Slonik identifies types using their names.

Use this query to find type names:

SELECT
  typname
FROM pg_type
ORDER BY typname ASC

Type parsers are configured using typeParsers client configuration.

Read: Default type parsers.
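
For example, a sketch of a custom parser that returns int8 values as BigInt instead of Number; note that supplying typeParsers replaces the default preset described above:

import {
  createPool
} from 'slonik';

const pool = createPool('postgres://', {
  typeParsers: [
    {
      name: 'int8',
      parse: (value) => {
        // value arrives as a string, e.g. '9007199254740993'.
        return BigInt(value);
      }
    }
  ]
});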

Built-in type parsers

  • date: Produces a literal date as a string (format: YYYY-MM-DD). Factory: createDateTypeParser.
  • int8: Produces an integer. Factory: createBigintTypeParser.
  • interval: Produces the interval in seconds (an integer). Factory: createIntervalTypeParser.
  • numeric: Produces a float. Factory: createNumericTypeParser.
  • timestamp: Produces a Unix timestamp (in milliseconds). Factory: createTimestampTypeParser.
  • timestamptz: Produces a Unix timestamp (in milliseconds). Factory: createTimestampWithTimeZoneTypeParser.

Built-in type parsers can be created using the exported factory functions, e.g.

import {
  createTimestampTypeParser
} from 'slonik';

createTimestampTypeParser();

// {
//   name: 'timestamp',
//   parse: (value) => {
//     return value === null ? value : Date.parse(value);
//   }
// }

Interceptors

Functionality can be added to the Slonik client by adding interceptors (middleware).

Interceptors are configured using client configuration, e.g.

import {
  createPool
} from 'slonik';

const interceptors = [];

const connection = createPool('postgres://', {
  interceptors
});

Interceptors are executed in the order they are added.

Read: Default interceptors.

Interceptor methods

An interceptor is an object that implements methods that can change the behaviour of the database client at different stages of the connection life-cycle:

type InterceptorType = {|
  +afterPoolConnection?: (
    connectionContext: ConnectionContextType,
    connection: DatabasePoolConnectionType
  ) => MaybePromiseType<null>,
  +afterQueryExecution?: (
    queryContext: QueryContextType,
    query: QueryType,
    result: QueryResultType<QueryResultRowType>
  ) => MaybePromiseType<QueryResultType<QueryResultRowType>>,
  +beforePoolConnection?: (
    connectionContext: ConnectionContextType
  ) => MaybePromiseType<?DatabasePoolType>,
  +beforePoolConnectionRelease?: (
    connectionContext: ConnectionContextType,
    connection: DatabasePoolConnectionType
  ) => MaybePromiseType<null>,
  +beforeQueryExecution?: (
    queryContext: QueryContextType,
    query: QueryType
  ) => MaybePromiseType<QueryResultType<QueryResultRowType>> | MaybePromiseType<null>,
  +beforeQueryResult?: (
    queryContext: QueryContextType,
    query: QueryType,
    result: QueryResultType<QueryResultRowType>
  ) => MaybePromiseType<null>,
  +beforeTransformQuery?: (
    queryContext: QueryContextType,
    query: QueryType
  ) => Promise<null>,
  +queryExecutionError?: (
    queryContext: QueryContextType,
    query: QueryType,
    error: SlonikError
  ) => MaybePromiseType<null>,
  +transformQuery?: (
    queryContext: QueryContextType,
    query: QueryType
  ) => QueryType,
  +transformRow?: (
    queryContext: QueryContextType,
    query: QueryType,
    row: QueryResultRowType,
    fields: $ReadOnlyArray<FieldType>
  ) => QueryResultRowType
|};

afterPoolConnection

Executed after a connection is acquired from the connection pool (or a new connection is created), e.g.

const pool = createPool('postgres://');

// Interceptor is executed here. ↓
pool.connect();

afterQueryExecution

Executed after the query has been executed and before the rows are transformed using transformRow.

Note: When a query is executed using stream, then afterQueryExecution is called with an empty result set.

beforeQueryExecution

This function can optionally return a direct result of the query, in which case the actual query is never executed.
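
For example, a (much simplified, hypothetical) cache can be built on this behaviour; note that it is keyed on the query text alone, so a real implementation must also account for the bound values:

const cache = new Map();

const cachingInterceptor = {
  beforeQueryExecution: (queryContext, query) => {
    // Returning a result short-circuits the query; null proceeds as normal.
    return cache.get(query.sql) || null;
  },
  beforeQueryResult: (queryContext, query, result) => {
    cache.set(query.sql, result);

    return null;
  }
};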

beforeQueryResult

Executed just before the result is returned to the client.

Use this method to capture the result that will be returned to the client.

beforeTransformQuery

Executed before transformQuery. Use this interceptor to capture the original query (e.g. for logging purposes).

beforePoolConnectionRelease

Executed before the connection is released back to the connection pool, e.g.

const pool = await createPool('postgres://');

pool.connect(async () => {
  await 1;

  // Interceptor is executed here. ↓
});

queryExecutionError

Executed if query execution produces an error.

Use queryExecutionError to log and/ or re-throw another error.

transformQuery

Executed before beforeQueryExecution.

Transforms query.
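
For example, a sketch of an interceptor that prefixes every query with a comment for log correlation (assuming queryContext.queryId is available, as seen in the query logs):

const queryCommentInterceptor = {
  transformQuery: (queryContext, query) => {
    // Return a new query object; query objects are frozen.
    return {
      ...query,
      sql: `/* query-id: ${queryContext.queryId} */ ${query.sql}`
    };
  }
};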

transformRow

Executed for each row.

Transforms row.

Use transformRow to modify the query result.

Community interceptors

  • slonik-interceptor-field-name-transformation: Transforms Slonik query result field names.
  • slonik-interceptor-query-benchmarking: Benchmarks Slonik queries.
  • slonik-interceptor-query-cache: Caches Slonik queries.
  • slonik-interceptor-query-logging: Logs Slonik queries.
  • slonik-interceptor-query-normalisation: Normalises Slonik queries.

Check out slonik-interceptor-preset for an opinionated collection of interceptors.

Recipes

Inserting large number of rows

Use sql.unnest to create a set of rows using unnest. The unnest approach requires only one variable per column; the values for each column are passed as an array, e.g.

await connection.query(sql`
  INSERT INTO foo (bar, baz, qux)
  SELECT *
  FROM ${sql.unnest(
    [
      [1, 2, 3],
      [4, 5, 6]
    ],
    [
      'int4',
      'int4',
      'int4'
    ]
  )}
`);

Produces:

{
  sql: 'INSERT INTO foo (bar, baz, qux) SELECT * FROM unnest($1::int4[], $2::int4[], $3::int4[])',
  values: [
    [
      1,
      4
    ],
    [
      2,
      5
    ],
    [
      3,
      6
    ]
  ]
}

Inserting data this way ensures that the query is stable and reduces the amount of time it takes to parse the query.

Routing queries to different connections

If the connection is initiated by a query (as opposed to a connection obtained explicitly using pool#connect()), then the beforePoolConnection interceptor can be used to change the pool that will be used to execute the query, e.g.

const slavePool = createPool('postgres://slave');
const masterPool = createPool('postgres://master', {
  interceptors: [
    {
      beforePoolConnection: (connectionContext, pool) => {
        if (connectionContext.query && connectionContext.query.sql.includes('SELECT')) {
          return slavePool;
        }

        return pool;
      }
    }
  ]
});

// This query will use `postgres://slave` connection.
masterPool.query(sql`SELECT 1`);

// This query will use `postgres://master` connection.
masterPool.query(sql`UPDATE 1`);

sql tag

sql tag serves two purposes:

  • it is used to construct queries with bound parameter values (see Value placeholders), and
  • it is used to generate dynamic query fragments (see Query building).

sql tag can be imported from Slonik package:

import {
  sql
} from 'slonik';

Sometimes it may be desirable to construct a custom instance of sql tag. In those cases, you can use the createSqlTag factory, e.g.

import {
  createSqlTag
} from 'slonik';

/**
 * @typedef SqlTagConfiguration
 */

/**
 * @param {SqlTagConfiguration} configuration
 */
const sql = createSqlTag(configuration);

Value placeholders

Tagged template literals

Slonik query methods can only be executed using sql tagged template literal, e.g.

import {
  sql
} from 'slonik'

connection.query(sql`
  SELECT 1
  FROM foo
  WHERE bar = ${'baz'}
`);

The above is equivalent to evaluating:

SELECT 1
FROM foo
WHERE bar = $1

query with 'baz' value binding.

Manually constructing the query

Manually constructing queries is not allowed.

There is an internal mechanism that checks to see if query was created using sql tagged template literal, i.e.

const query = {
  sql: 'SELECT 1 FROM foo WHERE bar = $1',
  type: 'SQL',
  values: [
    'baz'
  ]
};

connection.query(query);

Will result in an error:

Query must be constructed using sql tagged template literal.

This is a security measure designed to prevent unsafe query execution.

Furthermore, a query object constructed using sql tagged template literal is frozen to prevent further manipulation.

Nesting sql

sql tagged template literals can be nested, e.g.

const query0 = sql`SELECT ${'foo'} FROM bar`;
const query1 = sql`SELECT ${'baz'} FROM (${query0})`;

Produces:

{
  sql: 'SELECT $1 FROM (SELECT $2 FROM bar)',
  values: [
    'baz',
    'foo'
  ]
}

Query building

Queries are built using methods of the sql tagged template literal.

If this is your first time using Slonik, read Dynamically generating SQL queries using Node.js.

sql.array

(
  values: $ReadOnlyArray<PrimitiveValueExpressionType>,
  memberType: TypeNameIdentifierType | SqlTokenType
) => ArraySqlTokenType;

Creates an array value binding, e.g.

await connection.query(sql`
  SELECT (${sql.array([1, 2, 3], 'int4')})
`);

Produces:

{
  sql: 'SELECT $1::"int4"[]',
  values: [
    [
      1,
      2,
      3
    ]
  ]
}

sql.array memberType

If memberType is a string (TypeNameIdentifierType), then it is treated as a type name identifier and will be quoted using double quotes, i.e. sql.array([1, 2, 3], 'int4') is equivalent to $1::"int4"[]. The implication is that keywords that are often used interchangeably with type names are not going to work, e.g. int4 is a type name identifier and will work; however, int is a keyword and will not. You can either use type name identifiers or construct a custom member type using the sql tag, e.g.

await connection.query(sql`
  SELECT (${sql.array([1, 2, 3], sql`int[]`)})
`);

Produces:

{
  sql: 'SELECT $1::int[]',
  values: [
    [
      1,
      2,
      3
    ]
  ]
}

sql.array vs sql.join

Unlike sql.join, sql.array generates a stable query of a predictable length, i.e. regardless of the number of values in the array, the generated query remains the same:

  • Having a stable query enables pg_stat_statements to aggregate all query execution statistics.
  • Keeping the query length short reduces query parsing time.

Example:

sql`SELECT id FROM foo WHERE id IN (${sql.join([1, 2, 3], sql`, `)})`;
sql`SELECT id FROM foo WHERE id NOT IN (${sql.join([1, 2, 3], sql`, `)})`;

Is equivalent to:

sql`SELECT id FROM foo WHERE id = ANY(${sql.array([1, 2, 3], 'int4')})`;
sql`SELECT id FROM foo WHERE id != ALL(${sql.array([1, 2, 3], 'int4')})`;

Furthermore, unlike sql.join, sql.array can be used with an empty array of values. In short, sql.array should be preferred over sql.join when possible.

sql.binary

(
  data: Buffer
) => BinarySqlTokenType;

Binds binary (bytea) data, e.g.

await connection.query(sql`
  SELECT ${sql.binary(Buffer.from('foo'))}
`);

Produces:

{
  sql: 'SELECT $1',
  values: [
    Buffer.from('foo')
  ]
}

sql.identifier

(
  names: $ReadOnlyArray<string>
) => IdentifierSqlTokenType;

Delimited identifiers are created by enclosing an arbitrary sequence of characters in double-quotes ("). To create a delimited identifier, create an sql tag function placeholder value using sql.identifier, e.g.

sql`
  SELECT 1
  FROM ${sql.identifier(['bar', 'baz'])}
`;

Produces:

{
  sql: 'SELECT 1 FROM "bar"."baz"',
  values: []
}

sql.json

(
  value: SerializableValueType
) => JsonSqlTokenType;

Serializes value and binds it as a JSON string literal, e.g.

await connection.query(sql`
  SELECT (${sql.json([1, 2, 3])})
`);

Produces:

{
  sql: 'SELECT $1',
  values: [
    '[1,2,3]'
  ]
}

Difference from JSON.stringify

  • undefined: sql.json throws an InvalidInputError error; JSON.stringify produces undefined.
  • null: sql.json produces null; JSON.stringify produces "null" (a string literal).
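
For example, a sketch of binding a serializable object to a jsonb column (the table name is assumed):

await connection.query(sql`
  INSERT INTO event (payload)
  VALUES (${sql.json({foo: 'bar'})})
`);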

sql.join

(
  members: $ReadOnlyArray<SqlTokenType>,
  glue: SqlTokenType
) => ListSqlTokenType;

Concatenates SQL expressions using glue separator, e.g.

await connection.query(sql`
  SELECT ${sql.join([1, 2, 3], sql`, `)}
`);

Produces:

{
  sql: 'SELECT $1, $2, $3',
  values: [
    1,
    2,
    3
  ]
}

sql.join is the primary building block for most of the SQL, e.g.

Boolean expressions:

sql`
  SELECT ${sql.join([1, 2], sql` AND `)}
`

// SELECT $1 AND $2

Tuple:

sql`
  SELECT (${sql.join([1, 2], sql`, `)})
`

// SELECT ($1, $2)

Tuple list:

sql`
  SELECT ${sql.join(
    [
      sql`(${sql.join([1, 2], sql`, `)})`,
      sql`(${sql.join([3, 4], sql`, `)})`,
    ],
    sql`, `
  )}
`

// SELECT ($1, $2), ($3, $4)

sql.unnest

(
  tuples: $ReadOnlyArray<$ReadOnlyArray<PrimitiveValueExpressionType>>,
  columnTypes: $ReadOnlyArray<string>
): UnnestSqlTokenType;

Creates an unnest expression, e.g.

await connection.query(sql`
  SELECT bar, baz
  FROM ${sql.unnest(
    [
      [1, 'foo'],
      [2, 'bar']
    ],
    [
      'int4',
      'text'
    ]
  )} AS foo(bar, baz)
`);

Produces:

{
  sql: 'SELECT bar, baz FROM unnest($1::int4[], $2::text[]) AS foo(bar, baz)',
  values: [
    [
      1,
      2
    ],
    [
      'foo',
      'bar'
    ]
  ]
}

Query methods

any

Returns result rows.

Example:

const rows = await connection.any(sql`SELECT foo`);

#any is similar to #query except that it returns rows without fields information.

anyFirst

Returns value of the first column of every row in the result set.

  • Throws DataIntegrityError if query returns multiple columns.

Example:

const fooValues = await connection.anyFirst(sql`SELECT foo`);

exists

Returns a boolean value indicating whether query produces results.

The query that is passed to this function is wrapped in SELECT exists() prior to being executed, i.e.

pool.exists(sql`
  SELECT LIMIT 1
`)

is equivalent to:

pool.oneFirst(sql`
  SELECT exists(
    SELECT LIMIT 1
  )
`)
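
Example (table and column names assumed):

const userExists = await pool.exists(sql`
  SELECT 1
  FROM user_account
  WHERE email = ${email}
`);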

copyFromBinary

(
  streamQuery: TaggedTemplateLiteralInvocationType,
  tupleList: $ReadOnlyArray<$ReadOnlyArray<any>>,
  columnTypes: $ReadOnlyArray<TypeNameIdentifierType>
) => Promise<null>;

Copies from a binary stream.

The binary stream is constructed using user supplied tupleList and columnTypes values.

Example:

const tupleList = [
  [
    1,
    'baz'
  ],
  [
    2,
    'baz'
  ]
];

const columnTypes = [
  'int4',
  'text'
];

await connection.copyFromBinary(
  sql`
    COPY foo
    (
      id,
      baz
    )
    FROM STDIN BINARY
  `,
  tupleList,
  columnTypes
);

Limitations

  • Tuples cannot contain NULL values.

Implementation notes

copyFromBinary implementation is designed to minimize the query execution time at the cost of increased script memory usage and execution time. This is achieved by separating data encoding from feeding data to PostgreSQL, i.e. all data passed to copyFromBinary is first encoded and then fed to PostgreSQL (contrast this to using a stream with encoding transformation to feed data to PostgreSQL).

many

Returns result rows.

  • Throws NotFoundError if query returns no rows.

Example:

const rows = await connection.many(sql`SELECT foo`);

manyFirst

Returns value of the first column of every row in the result set.

  • Throws NotFoundError if query returns no rows.
  • Throws DataIntegrityError if query returns multiple columns.

Example:

const fooValues = await connection.manyFirst(sql`SELECT foo`);

maybeOne

Selects the first row from the result.

  • Returns null if row is not found.
  • Throws DataIntegrityError if query returns multiple rows.

Example:

const row = await connection.maybeOne(sql`SELECT foo`);

// row.foo is the result of the `foo` column value of the first row.

maybeOneFirst

Returns value of the first column from the first row.

  • Returns null if row is not found.
  • Throws DataIntegrityError if query returns multiple rows.
  • Throws DataIntegrityError if query returns multiple columns.

Example:

const foo = await connection.maybeOneFirst(sql`SELECT foo`);

// foo is the result of the `foo` column value of the first row.

one

Selects the first row from the result.

  • Throws NotFoundError if query returns no rows.
  • Throws DataIntegrityError if query returns multiple rows.

Example:

const row = await connection.one(sql`SELECT foo`);

// row.foo is the result of the `foo` column value of the first row.

Note:

I've been asked "What makes this different from knex.js knex('foo').limit(1)?". knex('foo').limit(1) simply generates the "SELECT * FROM foo LIMIT 1" query. knex is a query builder; it does not assert the value of the result. Slonik #one adds assertions about the result of the query.

oneFirst

Returns value of the first column from the first row.

  • Throws NotFoundError if query returns no rows.
  • Throws DataIntegrityError if query returns multiple rows.
  • Throws DataIntegrityError if query returns multiple columns.

Example:

const foo = await connection.oneFirst(sql`SELECT foo`);

// foo is the result of the `foo` column value of the first row.

query

API and the result shape are equivalent to pg#query.

Example:

await connection.query(sql`SELECT foo`);

// {
//   command: 'SELECT',
//   fields: [],
//   notices: [],
//   rowCount: 1,
//   rows: [
//     {
//       foo: 'bar'
//     }
//   ]
// }

stream

Streams query results.

Example:

await connection.stream(sql`SELECT foo`, (stream) => {
  stream.on('data', (datum) => {
    datum;
    // {
    //   fields: [
    //     {
    //       name: 'foo',
    //       dataTypeId: 23,
    //     }
    //   ],
    //   row: {
    //     foo: 'bar'
    //   }
    // }
  });
});

Note: Implemented using pg-query-stream.

transaction

The transaction method is used to wrap the execution of queries in START TRANSACTION and COMMIT or ROLLBACK. COMMIT is called if the transaction handler returns a promise that resolves; ROLLBACK is called otherwise.

The transaction method can be used together with the createPool method. When used to create a transaction from an instance of a pool, a new connection is allocated for the duration of the transaction.

const result = await connection.transaction(async (transactionConnection) => {
  await transactionConnection.query(sql`INSERT INTO foo (bar) VALUES ('baz')`);
  await transactionConnection.query(sql`INSERT INTO qux (quux) VALUES ('corge')`);

  return 'FOO';
});

result === 'FOO';

Transaction nesting

Slonik uses SAVEPOINT to automatically nest transactions, e.g.

await connection.transaction(async (t1) => {
  await t1.query(sql`INSERT INTO foo (bar) VALUES ('baz')`);

  return t1.transaction((t2) => {
    return t2.query(sql`INSERT INTO qux (quux) VALUES ('corge')`);
  });
});

is equivalent to:

START TRANSACTION;
INSERT INTO foo (bar) VALUES ('baz');
SAVEPOINT slonik_savepoint_1;
INSERT INTO qux (quux) VALUES ('corge');
COMMIT;

Slonik automatically rolls back to the last savepoint if a query belonging to a transaction results in an error, e.g.

await connection.transaction(async (t1) => {
  await t1.query(sql`INSERT INTO foo (bar) VALUES ('baz')`);

  try {
    await t1.transaction(async (t2) => {
      await t2.query(sql`INSERT INTO qux (quux) VALUES ('corge')`);

      return Promise.reject(new Error('foo'));
    });
  } catch (error) {
    // The inner transaction has been rolled back to the savepoint;
    // the outer transaction can continue.
  }
});

is equivalent to:

START TRANSACTION;
INSERT INTO foo (bar) VALUES ('baz');
SAVEPOINT slonik_savepoint_1;
INSERT INTO qux (quux) VALUES ('corge');
ROLLBACK TO SAVEPOINT slonik_savepoint_1;
COMMIT;

If the error is unhandled, then the entire transaction is rolled back, e.g.

await connection.transaction(async (t1) => {
  await t1.query(sql`INSERT INTO foo (bar) VALUES ('baz')`);

  await t1.transaction(async (t2) => {
    await t2.query(sql`INSERT INTO qux (quux) VALUES ('corge')`);

    await t1.transaction(async (t3) => {
      await t3.query(sql`INSERT INTO uier (grault) VALUES ('garply')`);

      return Promise.reject(new Error('foo'));
    });
  });
});

is equivalent to:

START TRANSACTION;
INSERT INTO foo (bar) VALUES ('baz');
SAVEPOINT slonik_savepoint_1;
INSERT INTO qux (quux) VALUES ('corge');
SAVEPOINT slonik_savepoint_2;
INSERT INTO uier (grault) VALUES ('garply');
ROLLBACK TO SAVEPOINT slonik_savepoint_2;
ROLLBACK TO SAVEPOINT slonik_savepoint_1;
ROLLBACK;

Transaction retrying

Transactions that are failing with Transaction Rollback class errors are automatically retried.

A failing transaction will be rolled back and all queries up to the failing query will be replayed.

The number of times a transaction is retried is controlled using the transactionRetryLimit configuration (default: 5).
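
For example, a sketch of tightening the retry behaviour:

import {
  createPool
} from 'slonik';

const pool = createPool('postgres://', {
  // Retry a transaction failing with a Transaction Rollback class error
  // at most once.
  transactionRetryLimit: 1
});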

Error handling

All Slonik errors extend from SlonikError, i.e. you can catch Slonik-specific errors using the following logic.

import {
  SlonikError
} from 'slonik';

try {
  await query();
} catch (error) {
  if (error instanceof SlonikError) {
    // This error is thrown by Slonik.
  }
}

Original node-postgres error

When error originates from node-postgres, the original error is available under originalError property.

This property is exposed for debugging purposes only. Do not use it for conditional checks – it can change.

If you need to extract metadata about a specific type of error (e.g. the constraint violation name), raise a GitHub issue describing your use case.

Handling BackendTerminatedError

BackendTerminatedError is thrown when the backend is terminated by the user, i.e. pg_terminate_backend.

BackendTerminatedError must be handled at the connection level, i.e.

await pool.connect(async (connection0) => {
  try {
    await pool.connect(async (connection1) => {
      const backendProcessId = await connection1.oneFirst(sql`SELECT pg_backend_pid()`);

      setTimeout(() => {
        connection0.query(sql`SELECT pg_cancel_backend(${backendProcessId})`)
      }, 2000);

      try {
        await connection1.query(sql`SELECT pg_sleep(30)`);
      } catch (error) {
        // This code will not be executed.
      }
    });
  } catch (error) {
    if (error instanceof BackendTerminatedError) {
      // Handle backend termination.
    } else {
      throw error;
    }
  }
});

Handling CheckIntegrityConstraintViolationError

CheckIntegrityConstraintViolationError is thrown when PostgreSQL responds with check_violation (23514) error.

Handling ConnectionError

ConnectionError is thrown when a connection cannot be established to the PostgreSQL server.

Handling DataIntegrityError

To handle the case where the data result does not match the expectations, catch DataIntegrityError error.

import {
  DataIntegrityError
} from 'slonik';

let row;

try {
  row = await connection.one(sql`SELECT foo`);
} catch (error) {
  if (error instanceof DataIntegrityError) {
    console.error('There is more than one row matching the select criteria.');
  } else {
    throw error;
  }
}

Handling ForeignKeyIntegrityConstraintViolationError

ForeignKeyIntegrityConstraintViolationError is thrown when PostgreSQL responds with foreign_key_violation (23503) error.

Handling NotFoundError

To handle the case where query returns less than one row, catch NotFoundError error.

import {
  NotFoundError
} from 'slonik';

let row;

try {
  row = await connection.one(sql`SELECT foo`);
} catch (error) {
  if (!(error instanceof NotFoundError)) {
    throw error;
  }
}

if (row) {
  // row.foo is the result of the `foo` column value of the first row.
}

Handling NotNullIntegrityConstraintViolationError

NotNullIntegrityConstraintViolationError is thrown when PostgreSQL responds with not_null_violation (23502) error.

Handling StatementCancelledError

StatementCancelledError is thrown when a query is cancelled by the user (i.e. pg_cancel_backend) or in case of a timeout.

It should be safe to use the same connection if StatementCancelledError is handled, e.g.

await pool.connect(async (connection0) => {
  await pool.connect(async (connection1) => {
    const backendProcessId = await connection1.oneFirst(sql`SELECT pg_backend_pid()`);

    setTimeout(() => {
      connection0.query(sql`SELECT pg_cancel_backend(${backendProcessId})`)
    }, 2000);

    try {
      await connection1.query(sql`SELECT pg_sleep(30)`);
    } catch (error) {
      if (error instanceof StatementCancelledError) {
        // Safe to continue using the same connection.
      } else {
        throw error;
      }
    }
  });
});

Handling StatementTimeoutError

StatementTimeoutError inherits from StatementCancelledError and is thrown only in case of a timeout.

Handling UniqueIntegrityConstraintViolationError

UniqueIntegrityConstraintViolationError is thrown when PostgreSQL responds with unique_violation (23505) error.
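
For example, a sketch of treating a duplicate insert as a no-op (the table name is assumed):

import {
  UniqueIntegrityConstraintViolationError
} from 'slonik';

try {
  await connection.query(sql`INSERT INTO foo (bar) VALUES ('baz')`);
} catch (error) {
  if (!(error instanceof UniqueIntegrityConstraintViolationError)) {
    throw error;
  }

  // Duplicate row; safe to ignore.
}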

Handling TupleMovedToAnotherPartitionError

TupleMovedToAnotherPartitionError is thrown when the affected tuple has been moved into a different partition.

Types

This package is using TypeScript types.

Refer to ./src/types.js.

The public interface exports the following types:

  • DatabaseConnectionType
  • DatabasePoolConnectionType
  • DatabaseSingleConnectionType

Use these types to annotate connection instance in your code base, e.g.

import type {
  DatabaseConnectionType
} from 'slonik';

export default async (
  connection: DatabaseConnectionType,
  code: string
): Promise<number> => {
  const countryId = await connection.oneFirst(sql`
    SELECT id
    FROM country
    WHERE code = ${code}
  `);

  return countryId;
};

Debugging

Logging

Slonik uses roarr to log queries.

To enable logging, set the ROARR_LOG=true environment variable.

By default, Slonik logs only connection events, e.g. when a connection is created or acquired, and notices.

Query-level logging can be added using slonik-interceptor-query-logging interceptor.

Capture stack trace

Note: Requires slonik-interceptor-query-logging.

Enabling captureStackTrace configuration will create a stack trace before invoking the query and include the stack trace in the logs, e.g.

{"context":{"package":"slonik","namespace":"slonik","logLevel":20,"executionTime":"357 ms","queryId":"01CV2V5S4H57KCYFFBS0BJ8K7E","rowCount":1,"sql":"SELECT schedule_cinema_data_task();","stackTrace":["/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:162:28","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:314:12","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:361:20","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist/utilities:17:13","/Users/gajus/Documents/dev/applaudience/data-management-program/src/bin/commands/do-cinema-data-tasks.js:59:21","/Users/gajus/Documents/dev/applaudience/data-management-program/src/bin/commands/do-cinema-data-tasks.js:590:45","internal/process/next_tick.js:68:7"],"values":[]},"message":"query","sequence":4,"time":1540915127833,"version":"1.0.0"}
{"context":{"package":"slonik","namespace":"slonik","logLevel":20,"executionTime":"66 ms","queryId":"01CV2V5SGS0WHJX4GJN09Z3MTB","rowCount":1,"sql":"SELECT cinema_id \"cinemaId\", target_data \"targetData\" FROM cinema_data_task WHERE id = ?","stackTrace":["/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:162:28","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:285:12","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist/utilities:17:13","/Users/gajus/Documents/dev/applaudience/data-management-program/src/bin/commands/do-cinema-data-tasks.js:603:26","internal/process/next_tick.js:68:7"],"values":[17953947]},"message":"query","sequence":5,"time":1540915127902,"version":"1.0.0"}

Use @roarr/cli to pretty-print the output.
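
A typical invocation might look like this (the script name is assumed; refer to @roarr/cli for exact usage):

$ ROARR_LOG=true node app.js | roarr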

Syntax Highlighting

Atom Syntax Highlighting Plugin

Using the Atom IDE, you can leverage the language-babel package in combination with language-sql to enable highlighting of the SQL strings in the codebase.

To enable highlighting, you need to:

  1. Install language-babel and language-sql packages.
  2. Configure language-babel "JavaScript Tagged Template Literal Grammar Extensions" setting to use language-sql to highlight template literals with sql tag (configuration value: sql:source.sql).
  3. Use sql helper to construct the queries.

For more information, refer to the JavaScript Tagged Template Literal Grammar Extensions documentation of language-babel package.

VS Code Syntax Highlighting Extension

The vscode-sql-template-literal extension provides syntax highlighting for VS Code.

Comments
  • Allow Date object as a parameter/ result

    It has been proposed that date types that describe timestamp should be parsed into Date object, and that when Date object is provided, it should be converted into a timestamp that PostgreSQL understands.

    In general, I am opposed to using Date object at all, because of the mutable API. However, with regards to reading values from the database, I can see some value in returning Date as it provides a uniform data type for arbitrary accuracy timestamp values (in contrast, Slonik assumes millisecond accuracy).

    With regards to accepting Date object as a parameter, would it require to simply translate Date to string and use that as a value?

    enhancement 
    opened by gajus 23
  • inserting large number of rows using unnest with an array column

    If I try to insert an array column, I get error: type "varchar[][]" does not exist

    await connection.query(sql`
          INSERT INTO xx (id, txt)
          SELECT *
          FROM ${sql.unnest(
            [
              [1, ['"', "'"]],
              [2, ["'", '"']],
            ],
            [
              'int4',
              'varchar[]'
            ]
          )}
        `)
    
    enhancement 
    opened by retorquere 22
  • slonik and notice log from database

    Hi, I’m trying Slonik and Roarr, by the way.

    I migrated some queries from pg-async to Slonik, and my tests work.

    Now I am trying to get all logs from Slonik. I have installed roarr and roarr-cli.

    I am trying to get NOTICE logs from the database and print them to the console. It works, but it is too verbose. I use notices a lot in the database, and I just want to print a simple line for a notice; I am interested in more details only when it is an error.

    Is it possible to read the "log NOTICE" stream in Node.js, and not with bash?

    Currently I get 5 lines for a notice message, output from roarr.

    [1] [2019-10-08T15:55:25.916Z] INFO (30) (@slonik): notice message
    [1] poolId: 01DPP1A3NNYA7JN6VGFTE4N234
    [1] notice:
    [1]   level:   notice
    [1]   message: hello world
    

    My goal is to connect notice message in the NodeJS console, with all others messages and have this format

    [1] [2019-10-08T15:55:25.916Z] INFO (30) (@slonik): hello world
    

    Any help could be very nice!

    bug 
    opened by dmpvost 19
  • PostGIS functions in sql.unnest

    Hello,

    While I appreciate a lot the philosophy of this library and its effort to get rid of ORMs like Sequelize, TypeORM and similar ones, I still do not understand how to call a SQL function when performing sql.unnest.

    I am using PostGIS extension, although I assume the problem may occur with any Postgres function.

    Now, assuming I have a table named metrics like this

        timestamp TIMESTAMPTZ NOT NULL,
        geolocation geography (POINT),
        data JSONB NOT NULL,
    

    The queries I am trying to execute are like the following ones:

    Query 1

    INSERT INTO metrics (timestamp, data, geolocation)
    SELECT *
    FROM unnest(
    	ARRAY['2019-08-07T09:02:53.174Z']::"timestamptz"[], 
    	ARRAY['{"someInt": 0, "someFloat": 0.3, "someString": "str_0"}']::"jsonb"[],
    	ARRAY[ST_GeographyFromText('POINT EMPTY')]::"geography"[]
    )
    

    Query 2

    INSERT INTO metrics (timestamp, data, geolocation)
    SELECT *
    FROM unnest(
    	ARRAY['2019-08-07T08:02:53.174Z']::"timestamptz"[], 
    	ARRAY['{"someInt": 0, "someFloat": 0.3, "someString": "str_0"}']::"jsonb"[],
    	ARRAY[NULL]::"geography"[]
    )
    

    I tried those in my client (Postico) and they work just fine.

    This is how I'm trying it via slonik

    // Generate random data
    function genRecord(i = 0) {
      return {
        timestamp: new Date(),
        data: {
          someInt: i,
          someFloat: Date.now() * Math.random(),
          someString: `str_${i}`,
        },
        geolocation: null // I could generate some random lat lon, but this has not to do with the issue
      };
    }
    
    // Serialize data so that they can be chewed by slonik
    function serializeRecord(record) {
      const wkt = `POINT ${
        record.geolocation
          ? `(${record.geolocation.longitude} ${record.geolocation.latitude})`
           : 'EMPTY'
       }`;
      const geolocation = sql`${sql.raw(`ST_GeographyFromText('$1')`, [wkt])}`; 
      return [record.timestamp, sql.json(record.data), geolocation];
    }
    const SIZE = 3;
    const records = Array.from(Array(SIZE)).map((un, i) => genRecord(i));
    
    
    const testInsert = records.map(serializeRecord);
    
    const pool = createPool(connectionString);
    
    async function runQuery(query) {
      await pool.connect(async connection => {
        try {
          await connection.query(query);
        } catch (err) {
          console.error('Query Error', err);
        }
      });
    }
    
    let query = sql`
    INSERT INTO metrics (timestamp, data, geolocation)
    SELECT *
    FROM ${sql.unnest(testInsert, ['timestamptz', 'jsonb', 'geography'])}
    `;
    
    runQuery(query);
    

    If I try to use ST_GeographyFromText I always get a "malformed array literal" error. Even weirder, if I set NULL values for geometry I get "parse error - invalid geometry".

    I also have tried the following approaches, all failing (yeah, kind of brute-force attempts):

    const geolocation = null;
    const geolocation = `ST_GeographyFromText('${wkt}')`;
    const geolocation = sql.raw(`ST_GeographyFromText('$1')`, wkt)
    const geolocation = sql.raw(`ST_GeographyFromText('$1')`, [wkt]);
    

    The final Question

    How to use functions like ST_GeographyFromText ? How to make it work with unnest?

    Thanks

    enhancement help wanted 
    opened by lamuertepeluda 17
  • Allow json types in query methods

    In #265 some type safety was enforced, but it caused json types to be considered invalid.

    connection.any<{ col1: number, col2: {myJsonColumn: string}}>(sql`...`)
    
    opened by alitaheri 16
  • Pass a custom pg connection object

    You're using Postgres connection strings, which are more than sufficient for the general use case; however, configuring Slonik to play nice with more special SQL offerings is a challenge.

    For example, connecting to an instance in Google Cloud Platform requires this string:

    socket://username:passwort@/cloudsql/projectId:location:instance-name?db=databse-name
    

    And it gets really complex because https://github.com/iceddev/pg-connection-string does not implement the full specification of https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING

    I also haven't found a way to pass TLS options for a socket configuration at all.

    My suggestion is to just allow passing of an object that would be used directly by pg.Pool as opposed to passing it through https://github.com/iceddev/pg-connection-string first. I could write a PR but I work mostly with Typescript so I hesitate a bit how to implement it correctly in flow.

    I know you are planning on reimplementing pg.Pool yourself but it would seem prudent to keep the same config parameters. That would allow for greater backwards compatibility. I presume you would be keeping Postgres uri syntax as well.

    The typescript types are a bit misleading maybe, because they state

    export type DatabaseConfigurationType =
        | string
        | {
            database?: string;
            host?: string;
            idleTimeoutMillis?: number;
            max?: number;
            password?: string;
            port?: number;
            user?: string;
        };
    

    but if you pass an object to createPool, you get an error, even leaving aside the fact that there is no TLS config in the types as well.

    Slonik Version 19.2.0

    enhancement 
    opened by ivank 15
  • feat: omit undefined values from sql.assignmentList generated sql code

    This simplifies cases where you want to pass an object with possibly undefined values through to sql.assignmentList, without having to filter out the undefined values via delete object.key or cloning.

    Before:

    With delete

    const input = {
      foo: "bar",
      lel: undefined
    };
    
    if (input.hasOwnProperty("lel") && input.lel === undefined) {
      delete input.lel;
    }
    
    sql.assignmentList(input);
    

    Cloning

    const input = {
      foo: "bar",
      lel: undefined
    };
    
    const sanitizedInput = Object.fromEntries(
      Object.entries(input)
        .filter(([, value]) => value !== undefined)
    );
    
    sql.assignmentList(sanitizedInput);
    

    After:

    const input = {
      foo: "bar",
      lel: undefined
    };
    
    sql.assignmentList(input);
    
    opened by n1ru4l 15
  • integrate zod into Slonik

    At the moment, we can provide type suggestions the following way:

    const persons = await pool.any<{id: number, name: string}>(sql`
      SELECT id, name
      FROM person
    `);
    

    While this works, there are no guarantees that this is what the query actually returns.

    In order to add stronger type safety, including runtime safety, Slonik is going to adopt zod natively.

    The current thinking is that the API is going to be:

    const persons = await pool.any(sql`
      ${z.object({
        id: z.number(),
        name: z.string(),
      })}
      SELECT id, name
      FROM person
    `);
    

    The reason for adding z.object to the sql template itself is that, ideally, we want the query and its result schema to live together.

    We can then toggle runtime validation to ensure that queries return exactly what we type them to return, i.e. if this query were to change to:

    const persons = await pool.any(sql`
      ${z.object({
        id: z.number(),
        name: z.string(),
      })}
      SELECT id, name, description
      FROM person
    `);
    

    We would throw an error about the unknown description property.

    In contrast to build-time type safety, this mechanism of validation ensures that the application continues to work as expected (or breaks loudly) if the database schema drifts from whatever we typed at build time. E.g. in the earlier example, we may have assumed that name is non-nullable based on our current knowledge of the schema, but if that changed during the lifetime of the application, we'd want to know about it.
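
    For comparison, a sketch of the sql.type form that later versions expose (consistent with the sql.type references elsewhere in this document):

    import { sql } from 'slonik';
    import { z } from 'zod';

    const personSchema = z.object({
      id: z.number(),
      name: z.string(),
    });

    // The schema travels with the query; rows are validated at runtime,
    // so an unexpected column or a nullability drift fails loudly.
    const persons = await pool.any(sql.type(personSchema)`
      SELECT id, name
      FROM person
    `);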

    enhancement 
    opened by gajus 14
  • How to do a graceful shutdown (close open connection)

    Is there any method for closing active database connections?

    When using jest with the following test:

    it("test", async () => {
      const slonik = createPool(
        `postgres://postgres:[email protected]:7002/postgres`
      );
      await slonik.connect(async conn => {
        await conn.many(sql`SELECT * FROM "users"`);
      });
    });
    

    I get the following console output:

    Test Suites: 1 passed, 1 total
    Tests:       1 passed, 1 total
    Snapshots:   0 total
    Time:        5.968s, estimated 6s
    Ran all test suites.
    Jest did not exit one second after the test run has completed.
    

    It seems it is currently not possible to close the connections of a pg.Pool instance. If there is no such method, would you accept a pull request that implements this functionality?
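
    For reference, later Slonik versions expose pool.end(), which waits for in-flight queries to settle and then closes all pooled connections; a minimal teardown sketch, assuming a version that provides it:

    import { createPool } from 'slonik';

    const pool = createPool('postgres://postgres@localhost:5432/postgres');

    afterAll(async () => {
      // Resolves once active queries complete and every pooled
      // connection is closed, letting the Jest process exit.
      await pool.end();
    });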

    question 
    opened by n1ru4l 14
  • Canonical way of partially updating a table

    Hey, I was thinking about what should be the canonical way to partially update a table.

    Setup

    For example, we have the following table:

    books
    - id
    - name
    - description
    - author
    

    What I want to do?

    I want to partially update a row from the books table, e.g. update name and description, but not the author.

    Maybe it goes against the principle of the lib, namely:

    Discourages ad-hoc dynamic generation of SQL.

    but I think it's really a common problem that users need to solve.

    How can you do it if you update all columns?

    const bookId = 1;
    const newName = "New Book Name";
    const newDescription = "New Book Description";
    const newAuthor = "New Book Author";
    const updateNameStatement = sql`name = ${newName}`;
    const updateDescriptionStatement = sql`description = ${newDescription}`;
    const updateAuthorStatement = sql`author = ${newAuthor}`;
    
    const updateBookSql = sql`UPDATE books
    SET
      ${updateNameStatement},
      ${updateDescriptionStatement},
      ${updateAuthorStatement}
    WHERE id = ${bookId};`;
    

    How can you do it if you update a subset of columns?

    The most obvious way is to use sql.raw, but then you need to take care of the placeholder index yourself when generating the query. A better solution is to construct it using the sql tag literal, for example:

    const bookId = 1;
    const newName = "New Book Name";
    const newDescription = "New Book Description";
    const updateNameStatement = sql`name = ${newName}`;
    const updateDescriptionStatement = sql`description = ${newDescription}`;
    const updateStatements = [updateNameStatement, updateDescriptionStatement];
    
    const updateBookSqlWithoutWhere = updateStatements.reduce(
      (updateQuery, currentUpdateStatement, index) => {
        if (index === updateStatements.length - 1) {
          return sql`${updateQuery} ${currentUpdateStatement}`;
        }
    
        return sql`${updateQuery} ${currentUpdateStatement},`;
      },
      sql`UPDATE books SET`
    );
    
    const updateBookSql = sql`${updateBookSqlWithoutWhere} WHERE id = ${bookId};`;
    

    I think it's better than the sql.raw approach described in the docs. Do you see anything grossly wrong with this approach?

    Note: I am not currently using the library; I'm using sequelize with mostly raw SQL queries, but I am considering migrating to your library. :)
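
    For what it's worth, Slonik's sql.join can express the same thing without the manual reduce; a minimal sketch, assuming the patch object's keys come from a trusted allow-list of column names (buildPartialUpdate is a hypothetical helper):

    import { sql } from 'slonik';

    const buildPartialUpdate = (bookId, patch) => {
      // Keep only the fields that were actually provided.
      const assignments = Object.entries(patch)
        .filter(([, value]) => value !== undefined)
        .map(([column, value]) => sql`${sql.identifier([column])} = ${value}`);

      return sql`
        UPDATE books
        SET ${sql.join(assignments, sql`, `)}
        WHERE id = ${bookId}
      `;
    };

    // Updates name and description; author is left untouched.
    const query = buildPartialUpdate(1, {
      description: 'New Book Description',
      name: 'New Book Name',
    });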

    enhancement 
    opened by steliyan 14
  • Additional `#` in the sql string

    Expected Behavior

    Before upgrading to Slonik 26.1.0, the tagged template literal generated the SQL string correctly.

    Current Behavior

    Interpolating a number into the tagged template literal generates a # before and after the $ of the SQL placeholder.

    Possible Solution

    Steps to Reproduce

    import { sql } from 'slonik';
    const limit = 1;
    const offset = 2;
    const query = sql`SELECT 1 FROM "user" LIMIT ${limit} OFFSET ${offset}`;
    

    The resulting query.sql yields 'SELECT 1 FROM "user" LIMIT #$#1 OFFSET #$#2'.

    Logs

    question 
    opened by shernshiou 13
  • Error When Connecting to Redshift

    I get the following error when I try to connect to Redshift using the createPool method. Does this library work with the older Postgres specification that Redshift uses (8.0.2)?

    column "typarray" does not exist in pg_type

    Expected Behavior

    Should connect successfully.

    Current Behavior

    Connection errors out with: "column "typarray" does not exist in pg_type"

    Steps to Reproduce

    Try to connect using createPool method:

    const pool = await slonik.createPool(
      "postgresql://user:[email protected]:5439/somedefaultdb",
    );
    

    Logs

    err: {
      "type": "DatabaseError",
      "message": "column \"typarray\" does not exist in pg_type",
      "stack":
        error: column "typarray" does not exist in pg_type
            at Parser.parseErrorMessage (/app/node_modules/pg-protocol/dist/parser.js:287:98)
            at Parser.handlePacket (/app/node_modules/pg-protocol/dist/parser.js:126:29)
            at Parser.parse (/app/node_modules/pg-protocol/dist/parser.js:39:38)
            at Socket.<anonymous> (/app/node_modules/pg-protocol/dist/index.js:11:42)
            at Socket.emit (events.js:400:28)
            at Socket.emit (domain.js:475:12)
            at addChunk (internal/streams/readable.js:293:12)
            at readableAddChunk (internal/streams/readable.js:267:9)
            at Socket.Readable.push (internal/streams/readable.js:206:10)
            at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
    }

    bug 
    opened by valeeum 0
  • Consider replacing the dependency on hyperid with native crypto module

    Feature Request

    The hyperid module pulls in the uuid library for generating IDs. Since Node.js 14.17 this is supported natively via the "crypto" module's "randomUUID" method.

    Node.js 14.x docs for randomUUID: https://nodejs.org/docs/latest-v14.x/api/crypto.html#crypto_crypto_randomuuid_options

    Desired Behavior

    Remove an external dependency in favor of native Node.js UUID generation

    Motivation

    Simplicity and reduced asset sizes

    Implementation

    Replace the body of /src/utilities/createUid.ts with the following:

    import { randomUUID } from 'crypto';
    
    // Generates an RFC 4122 version 4 UUID using the built-in crypto
    // module (available since Node.js 14.17), replacing hyperid.
    export const createUid = (): string => {
      return randomUUID();
    };

    I can open a PR if that would be helpful.

    enhancement 
    opened by mdstaff 0
  • Using RLS with Slonik

    I'm evaluating Slonik for a multi-tenant application using Postgres RLS. However, I have not found a good way to set Postgres session variables with the tenant id before executing queries. Is there a way to do it with Slonik?

    Desired Behavior

    A way to execute SET app.current_tenant = TENANT_ID before each query so that Postgres RLS works properly

    Motivation

    Postgres RLS is gaining a lot of popularity, yet most major libraries/frameworks don't offer a good way of setting session variables per connection and resetting them on release

    Implementation
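
    One workable pattern is to scope each tenant's work to a transaction and set the variable with transaction-local semantics at the start; a minimal sketch, using set_config so the tenant id can be passed as a bound parameter (withTenant is a hypothetical helper name):

    import { createPool, sql } from 'slonik';

    const pool = createPool('postgres://localhost/app');

    const withTenant = async (tenantId, handler) => {
      return pool.transaction(async (transaction) => {
        // The third argument (is_local = true) scopes the setting to this
        // transaction, so it cannot leak to other users of the pooled connection.
        await transaction.query(
          sql`SELECT set_config('app.current_tenant', ${tenantId}, true)`
        );

        return handler(transaction);
      });
    };

    // RLS policies referencing current_setting('app.current_tenant') now apply.
    const orders = await withTenant('tenant-42', (transaction) => {
      return transaction.any(sql`SELECT * FROM orders`);
    });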

    enhancement 
    opened by victorlcm 1
  • Backwards Compatibility to allow migrating to Runtime-Validation (Zod)

    Give long-time Slonik users a way to migrate to the value of runtime validation without having to stay pinned to an older version of Slonik. I have working code in production that I'd be glad to share in a PR.

    Desired Behavior

    Allow non-Zod types for queries (like we've historically had)

    interface DataTest { foo: string; bar: string }
    const qry = sql.legacy<DataTest>`SELECT * FROM data_tests WHERE ${whereClause}`;
    

    Allow non-Zod types for query helper functions (like we've historically had):

    const results = await db.many<DataTest>(sql.unsafe`SELECT * FROM data_tests WHERE id = ${inserted.id}`);
    
    • I went with this approach because it still leaves "marker" names in place to help us find where to go back and fix things up properly (the things to search for being sql.legacy and sql.unsafe)
    • This did require some mass-renaming in the codebase (sql` => sql.unsafe` and sql<` => sql.legacy<`)

    Motivation

    It appears that the breaking changes in v33 and beyond are mostly TypeScript related. I put together the types below to give us backwards compatibility in our codebase, as migrating our entire data access layer to Zod models was not something we could do in one shot. We are migrating, as runtime validation is a huge need, but it's just not an overnight thing.

    Implementation

    I am currently using the following types in a ./db/index.ts file that all of my consuming code imports from. I re-export the slonik types from here (overridden) and then make sure none of the consuming code is importing slonik directly.

    import { z } from 'zod';
    import {
      createPool,
      DatabasePool as SlonikDatabasePool,
      DatabaseTransactionConnection as SlonikDatabaseTransactionConnection,
      PrimitiveValueExpression,
      QueryResult,
      QuerySqlToken,
      sql as slonikSql,
      TypeParser,
      ValueExpression,
    } from 'slonik';
    
    type LegacyCommonQueryMethods = {
      query: <T>(sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<QueryResult<T>>;
      one: <T>(sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<T>;
      oneFirst: <T>(sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<T[keyof T]>;
      many: <T>(sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<ReadonlyArray<T>>;
      manyFirst: <T>(sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<ReadonlyArray<T[keyof T]>>;
      maybeOne: <T>(sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<T | null>;
      maybeOneFirst: <T>(sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<T[keyof T] | null>;
      exists: (sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<boolean>;
      any: <T>(sql: QuerySqlToken, values?: PrimitiveValueExpression[]) => Promise<ReadonlyArray<T>>;
    };
    
    // TODO: legacy compatibility. Remove this once we're fully using Zod runtime validation
    type DatabaseTransactionConnection = SlonikDatabaseTransactionConnection & LegacyCommonQueryMethods;
    type DatabasePool = SlonikDatabasePool &
      LegacyCommonQueryMethods & {
        transaction: <T>(
          handler: (connection: DatabaseTransactionConnection) => Promise<T>,
          transactionRetryLimit?: number
        ) => Promise<T>;
      };
    
    // slonik: add backwards compatibility for old code
    type LegacySql = typeof slonikSql & {
      legacy: <T>(
        parts: readonly string[],
        ...args: readonly ValueExpression[]
      ) => Readonly<{
        parser: z.ZodAny;
        legacyType: T; // <--- access to non-zod type
        type: 'SLONIK_TOKEN_QUERY';
        sql: string;
        values: PrimitiveValueExpression[];
      }>;
    };
    
    const sql: LegacySql = {
      ...slonikSql,
      legacy: <T>(parts: readonly string[], ...args: readonly ValueExpression[]) => {
        const query = sql.unsafe(parts, ...args);
        return query as Readonly<{
          parser: z.ZodAny;
          legacyType: T;
          type: 'SLONIK_TOKEN_QUERY';
          sql: string;
          values: PrimitiveValueExpression[];
        }>;
      },
    };
    
    // make the imports tidier
    export { DatabasePool, sql, DatabaseTransactionConnection };
    

    I also have unit tests that I'd be willing to contribute.

    opened by ericmbarnard 2
  • Drop ZodType constraints on sql.type, QuerySqlToken, type aliases, etc

    Remove the ZodType type constraints on sql.type (and the QuerySqlToken that it generates, etc.) as well as the type alias configuration of createSqlTag, to enable users to use whatever runtime validation implementation they choose without the need for excessive casting. The library could still promote the use of Zod, and use Zod internally for things such as the interval tag validation, but allow other libraries to be used for result validation without needing to jump through hoops.

    Motivation

    The rollout of runtime schema validation has been somewhat contentious, and in the end the implementation of the validation interceptor has been left to the user to fit their needs. Thus the QuerySqlToken does not necessarily need to be a Zod schema. There have been requests to support other validation providers, and it has been pointed out that a user can implement that with their own interceptor. However, to do so the user needs to work around the current Zod type constraints, casting the input to sql.type and generally the result type as well.

    Desired Behavior

    The ability to input validation schemas for validators other than Zod without needing to cast.

    For example a developer wishing to use class-transformer/class-validator as is common in the NestJS community should be able to define an interceptor like:

    {
      transformRow: (queryContext: QueryContext, query: Query, row: QueryResultRow): QueryResultRow => {
        const { resultParser } = queryContext;
    
        if (resultParser) {
          const result = plainToInstance(resultParser, row);
          const errors = validateSync(result);
    
          if (errors.length) {
            throw new SchemaValidationError(query, row, errors);
          }
    
          return result as QueryResultRow;
        }
    
        return row;
      },
    }
    

    To be used with

    class Test {
        @IsNumber()
        id!: number;
    
        @IsOptional()
        @IsString()
        name?: string;
    
        @IsDate()
        @Expose({ name: 'create_time'})
        @Type(() => Date)
        createdAt!: Date;
    }
    
    const res = await db.any(sql.type(Test)`SELECT id, name, create_time FROM test`);
    

    without having to cast the validation annotated type and the result as is currently the case:

    const res = (await db.any(sql.type(Test as any)`SELECT id, name, create_time FROM test`)) as Test[];
    
    enhancement 
    opened by pgrocholski 2