Brittle TAP test framework

Overview

brittle

tap à la mode

A fullstack TAP test runner built for modern times.

API

Initializers

To create a test, the exported test method can be used in a few different ways. In addition, variations of the test method such as solo and skip can be used to filter which tests are executed.

Every initializer accepts the same optional options object.

Options

  • timeout (5000) - milliseconds to wait before ending a stalling test
  • output (process.stderr) - stream to write TAP output to
  • skip - skip this test, alternatively use the skip() function
  • todo - mark this test as todo and skip it, alternatively use the todo() function
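
As a minimal sketch (descriptions and values here are illustrative), options are passed between the description and the test function:

import test from 'brittle'

test('slow test', { timeout: 30000 }, async (assert) => {
  assert.is(true, true)
})

test('not ready yet', { todo: true }, async (assert) => {
  assert.fail('unreachable') // skipped because the test is marked todo
})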

test(description[, opts], async (assert) => {})

Create a test trad-style. The async function will be passed an object, which provides the assertions and utilities interface.

import test from 'brittle'
test('some test', async (assert) => {
  assert.is(true, true)
})

For convenience, the test method is available both as the default export and as a named export:

import { test } from 'brittle'

test(description[, opts]) => assert

Create an inverted test. An object is returned providing the assertions and utilities interface. This object is also a promise and can be awaited; it will resolve at test completion.

import test from 'brittle'

const assert = test('some test')

assert.plan(1)

setTimeout(() => {
  assert.is(true, true)
}, 1000)


await assert // won't proceed past here until plan is fulfilled

assert.test(description[, opts]) => assert

assert.test(description[, opts], async (assert) => {})

A subtest can be created by calling test on an assert object. This will provide a new sub-assert object. Using this in inverted style can be very useful for flow control within a test:

import test from 'brittle'
test('some test', async ({ test, ok }) => {
  const assert1 = test('some sub test')
  const assert2 = test('some other sub test')
  assert1.plan(1)
  assert2.plan(1)

  setTimeout(() => { assert1.is(true, true) }, Math.random() * 1000)

  setTimeout(() => { assert2.is(true, true) }, Math.random() * 1000)
  
  // won't proceed past here until both assert1 and assert2 plans are fulfilled
  await assert1
  await assert2

  ok('cool')
})

The assert object also has a done property, which is a circular reference to the assert object itself; this can instead be awaited to determine sub test completion:

import test from 'brittle'
test('some test', async ({ test, ok }) => {
  const { plan, is, done } = test('some sub test')
  const assert2 = test('some other sub test')
  plan(1)
  assert2.plan(1)

  setTimeout(() => { is(true, true) }, Math.random() * 1000)

  setTimeout(() => { assert2.is(true, true) }, Math.random() * 1000)

  // won't proceed past here until both sub test plans are fulfilled
  await done
  await assert2

  ok('cool')
})

solo(description, async function)

Filter out other tests by using the solo method:

import { test, solo } from 'brittle'
test('some test', async ({ is }) => {
  is(true, true)
})
solo('another test', async ({ is }) => {
  is(true, false)
})
solo('yet another test', async ({ is }) => {
  is(false, false)
})

Note that there can be more than one solo test.

If a solo function is called, only solo tests will execute; plain test functions will not.

The solo method is also available on the test method and, like test, can be used without a function:

import test from 'brittle' 
const { is } = test.solo('another test')
is(true, false)

The detection of a solo function is based on execution flow, so there may be cases where brittle needs to be explicitly informed to enter solo mode. Use solo.enable() to explicitly enable solo mode:

import { test, solo } from 'brittle'
solo.enable()
await test('some test', async ({ is }) => {
  is(true, true)
})
solo('another test', async ({ is }) => {
  is(true, false)
})

skip(description, async function)

Skip a test:

import { test, skip } from 'brittle'
skip('some test', async ({ is }) => {
  is(true, true)
})
test('another test', async ({ is }) => {
  is(true, false)
})

The first test will not be executed.

The skip method is also available on the test method:

import test from 'brittle' 
test.skip('some test', async ({ is }) => {
  is(true, true)
})

Assertions

is(actual, expected, [ message ])

Compare actual to expected with ===

not(actual, expected, [ message ])

Compare actual to expected with !==

alike(actual, expected, [ message ])

Object comparison, comparing all primitives on the actual object to those on the expected object using ===.

unlike(actual, expected, [ message ])

Object comparison, comparing all primitives on the actual object to those on the expected object using !==.

ok(value, [ message ])

Checks that value is truthy: !!value === true

absent(value, [ message ])

Checks that value is falsy: !!value === false

pass([ message ])

Asserts success. Useful for explicitly confirming that a function was called, or that behavior is as expected.

fail([ message ])

Asserts failure. Useful for explicitly checking that a function should not be called.
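
For instance, a hedged sketch using pass to confirm a callback fires (the emitter here is an illustrative stand-in for a real event source):

import test from 'brittle'

test('callback fires', async ({ plan, pass }) => {
  plan(1)
  const emitter = { on (event, fn) { fn() } } // illustrative stand-in
  emitter.on('ready', () => pass('ready fired'))
})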

exception(Promise|function|async function, [ error, message ])

Verify that a function throws, or a promise rejects.

exception(() => { throw Error('an err') }, /an err/)
exception(async () => { throw Error('an err') }, /an err/)
exception(Promise.reject(Error('an err')), /an err/)

execution(Promise|function|async function, [ message ])

Assert that a function executes instead of throwing or that a promise resolves instead of rejecting.

execution(() => { })
execution(async () => { })
execution(Promise.resolve('cool'))

snapshot(actual, [ message ])

On the first run, this assertion automatically creates a fixture in the __snapshots__ folder at the project root. On subsequent runs the actual value is asserted against the previously captured fixture as the expected value. If the input value matches the snapshot, the test passes. A failure means either the code should be fixed or the snapshot should be updated. See Updating Snapshots for how to regenerate snapshots.
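
A minimal sketch, assuming some deterministic program output to capture (the output object and message are illustrative):

import test from 'brittle'

test('render output is stable', async ({ snapshot }) => {
  const output = { header: 'hello', items: [1, 2, 3] } // stands in for real program output
  snapshot(output, 'render output')
})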

is.coercively(actual, expected, [ message ])

Compare actual to expected with ==

not.coercively(actual, expected, [ message ])

Compare actual to expected with !=

alike.coercively(actual, expected, [ message ])

Object comparison, comparing all primitives on the actual object to those on the expected object using ==.

unlike.coercively(actual, expected, [ message ])

Object comparison, comparing all primitives on the actual object to those on the expected object using !=.
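
A quick sketch exercising several of the assertions above (values are illustrative; absent is the falsy check documented earlier):

import test from 'brittle'

test('assertion tour', async ({ is, not, alike, ok, absent }) => {
  is(1 + 1, 2)               // strict equality
  not('a', 'b')              // strict inequality
  alike({ n: 1 }, { n: 1 })  // primitive-wise object comparison
  ok('truthy')
  absent(null)
  is.coercively('1', 1)      // loose equality
})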

Utilities

plan(n)

Constrain a test to an explicit number of assertions.
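
A minimal sketch: the test resolves once the planned assertions have run.

import test from 'brittle'

test('planned', async ({ plan, is }) => {
  plan(2)
  is(1, 1)
  setTimeout(() => { is(2, 2) }, 100) // the second assertion fulfills the plan
})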

teardown(function|async function)

The function passed to teardown is called right after a test ends:

import test from 'brittle'
test('some test', async ({ ok, teardown }) => {
  teardown(async () => {
    await doSomeCleanUp()
  })
  const assert = test('some sub test')
  setTimeout(() => { assert.is(true, true) }, Math.random() * 1000)
  
  await assert

  ok('cool')
})

timeout(ms)

Fail the test after a given timeout.

comment(message)

Inject a TAP comment into the output.

end()

Force end a test. This mostly shouldn't be needed, as end is determined by assert resolution or when a containing async function completes.
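
A hedged sketch combining these utilities, assuming they can be destructured from the assert object like the assertions shown earlier:

import test from 'brittle'

test('utility demo', async ({ timeout, comment, pass, end }) => {
  timeout(2000)              // fail if the test runs longer than 2 seconds
  comment('about to assert') // injected into the TAP output
  pass('made it')
  end()                      // force the test to end now
})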

Metadata

The object returned from an initializer (test, solo, skip), or passed into an async function given to an initializer, is referred to as the assert object.

This assert object is a promise; when it resolves it provides information about the test.

The resulting information object has the following shape:

{
  start: BigInt, // time when the test started in nanoseconds
  description: String, // test description
  planned: Number, // the number of assertions planned
  count: Number, // the number of assertions executed
  error: Error || null, // an error object or null if successful
  ended: Boolean // whether the test ended
}

These same properties are available on the assert object directly, but the values are only final once the assert promise has resolved.

Examples:

import test from 'brittle' 
const assert = test('describe')
assert.plan(1)
assert.pass()
const result = await assert
console.log(result)

import test from 'brittle'
test('describe', async (assert) => {
  assert.plan(1)
  assert.pass()
  const result = await assert
  console.log(result)
})

import test from 'brittle'
const result = await test('describe', async ({ plan, pass }) => {
  plan(1)
  pass()
})
console.log(result)

Runner

Tests can be executed directly with node:

node path/to/my/test.js

A brittle runner is supplied for enhanced functionality:

npm install -g brittle
brittle path/to/tests/*.test.js

Note globbing is supported.

For usage information run brittle -h

Brittle

brittle [flags] [<files>]

--help | -h         Show this help
--watch | -w        Rerun tests when a file changes
--reporter | -R     Set test reporter: tap, spec, dot
--snap-all          Update all snapshots
--snap <name>       Update specific snapshot by name
--no-cov            Turn off coverage
--100               Fail if coverage is not 100%
--90                Fail if coverage is not 90%
--85                Fail if coverage is not 85%
--cov-report        Set coverage reporter:
                    text, html, text-summary...

--cov-help          Show advanced coverage options

Updating snapshots

If a snapshot assert fails, it is up to the developer either to verify that the current input is incorrect and fix it, or to establish that the input is a legitimate update and therefore correct. In the event that the input is correct, the SNAP environment variable or the brittle CLI tool can be used to update the snapshot.

Directly with Node

To update all snapshots:

SNAP=1 node path/to/test.js

To update a specific snapshot:

SNAP="name of snapshot" node path/to/test.js

The string is converted into a regular expression with global matching so partial matches and multiple matches are possible.

brittle command-line

To update all snapshots:

brittle --snap-all path/to/*.test.js

To update a specific snapshot:

brittle --snap "name of snapshot" path/to/*.test.js

The string is converted into a regular expression with global matching so partial matches and multiple matches are possible.

brittle interactive watch mode

If a snapshot assert fails in watch mode, an additional function key is provided: Press s to manage snapshots.

This will provide a menu where individual failing snapshots can be selected and individually updated.

Example package.json test field setup

The following runs all .js files in the test folder, outputs test results using the spec reporter, and re-tests the project every time a file changes, while also enforcing an 85% coverage constraint. In a CI environment the watch functionality would be turned off and the reporter would be the tap reporter.

{
  "name": "my-app",
  "version": "1.0.0",
  "scripts": {
    "test": "brittle -R spec --85 -w test/*.js"
  },
  "devDependencies": {
    "brittle": "^1.0.0"
  }
}

License

MIT

Comments
  • Add TypeScript declarations

    This pull request intends to add TypeScript declarations for Brittle. Initially, the declarations have been autogenerated based on the inference capabilities of TypeScript. The only things currently inferred by TypeScript are the names of the top-level exports, so this approach won't produce useful declarations unless the JavaScript source is also adjusted. The other option is to author the type declarations by hand, which of course means that any change to the public API will have to be manually reflected in the type declarations.

    To summarise, I see two approaches to introducing TypeScript declarations:

    1. Adjust the JavaScript source to improve type inference, adding JSDoc annotations as needed (https://www.typescriptlang.org/docs/handbook/jsdoc-supported-types.html).

    2. Write the declarations by hand.

    Which one would be preferred?

    opened by kasperisager 4
  • Questions

    Looks great! A couple questions:

    • Any plans to include a cli test runner?
    • Any plans to have a default glob that discovers *.test.js or *.spec.js tests throughout a project?
    • Any plans for built in coverage reporting?

    Thanks!

    opened by bcomnes 3
  • spec, dot reporters not including expected/actual output

    Ran into this issue the other weekend and didn't get a chance to dig in, but I figured I should open an issue sooner rather than later.

    Basically, on a test failure when running with spec or dot reporters, it only lists the test that failed, not the expected/actual value. It's in the tap report, but the other reporters aren't including that.

    I can look into it more soon, but if anyone knows a quick fix in the meantime, feel free.

    opened by bcomnes 2
  • TAP ok but DOT and SPEC return 1 failed with test count !== plan:

    TAP output

        ok 20 - should be equald
    # time=4190.529833ms
    ---------------|---------|----------|---------|---------|-------------------
    File           | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s 
    ---------------|---------|----------|---------|---------|-------------------
    All files      |     100 |      100 |     100 |     100 |                   
     rover_wrap.js |     100 |      100 |     100 |     100 |                   
    ---------------|---------|----------|---------|---------|-------------------
    

    DOT output

    
      49 passing (4s)
      1 failing
    
      1) test count !== plan:
    
          test count !== plan
          + expected - actual
    
          -0
          +3
          
      
    
    ․
    ․---------------|---------|----------|---------|---------|-------------------
    File           | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s 
    ---------------|---------|----------|---------|---------|-------------------
    All files      |     100 |      100 |     100 |     100 |                   
     rover_wrap.js |     100 |      100 |     100 |     100 |                   
    

    my tests look like this

    import { getLatest } from './wrap.js'
    import { test, configure } from 'brittle'
    
    configure({ serial: true })
    test('getLatest()', async (t) => {
      const data = await getLatest()
      isDataType(t, data)
    })

    function isDataType (t, data) {
      t.is(typeof data.metadata.id, 'number')
    }
    
    opened by timcash 1
  • json output

    Output buffering hinders debuggability, particularly where chronology is important.

    By design nested child output requires reordering (so the subtest asserts go into the correct nested place).

    However, nested child tap is useful for representing the assertion structures.

    Outputting a JSON structure as the core functionality of brittle can solve the debugging issues while preserving the prettification aspect of child tests.

    The JSON output should be:

    • self described
    • flat
    • time ordered
    • without any user objects - e.g. diffs inlined, don't serialize user objects

    The JSON output could then be processed to show flat tap output (ala tape) which can be useful in a debugging mode or nested tap output (ala node-tap) to show assertion structure.

    opened by davidmarkclements 1
  • after invoking solo in one file, all later tests in other files are skipped

    In testing the gui lib of you know what, we import solo from brittle and apply it with no arguments at the top of the file. All tests in all other files are then skipped. Note: this is when invoking the test file with brittle, e.g. brittle my-test-dir/*.test.js.

    Expected Behavior: The use of solo in one test file does not affect the behavior of any other test file.

    opened by utanapishtim 1
  • Feature: console comments

    Support an opt-in that converts console.log/info/error/etc messages into TAP comments.

    Benefits:

    • clean tap output
    • in concurrent mode, console.log messages will appear as tap comments among the relevant surrounding TAP, instead of being logged out-of-order relative to the TAP output

    Enabling:

    • configure({ consoleComments: true })
    • CC=1 node test.js
    • brittle --cc test.js
    opened by davidmarkclements 1
  • non planned test passes when never ending

    Running this:

    const test = require('brittle')
    
    test('a', async function (t) {
      await new Promise((resolve) => {}) // never resolves
      t.pass('a')
    })
    

    Results in:

    TAP version 13
    # a
    ok 1 - a # time=0.485583ms
    
    1..1
    # time=1.749166ms
    

    It should have failed with a "no end" error equivalent.

    opened by mafintosh 1
  • Add `engines` to `package.json`

    To enable programmatic discovery of supported engines through a variety of tools

    Maybe replace >=16.0.0 with ^16.0.0 to be explicit that you don't currently support anything newer than 16, but I think that would be a bit too strict for programmatic uses

    opened by voxpelli 0
  • Exception capturing issue with global installation

    When brittle is installed globally npm i brittle -g, there seems to be the following issue with exception capturing:

    ❯ brittle test/*.js
    TAP version 13
    # test/basic.js
    Brittle: Fatal Error
    Error [ERR_UNCAUGHT_EXCEPTION_CAPTURE_ALREADY_SET]: `process.setupUncaughtExceptionCapture()` was called while a capture callback was already active
        at new NodeError (internal/errors.js:322:7)
        at process.setUncaughtExceptionCaptureCallback (internal/process/execution.js:115:11)
        at Object.<anonymous> (/home/andrewosh/Development/@hypercore-skunkworks/hypercore-next/node_modules/brittle/index.js:50:9)
        at Module._compile (internal/modules/cjs/loader.js:1085:14)
        at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
        at Module.load (internal/modules/cjs/loader.js:950:32)
        at Function.Module._load (internal/modules/cjs/loader.js:790:12)
        at Module.require (internal/modules/cjs/loader.js:974:19)
        at require (internal/modules/cjs/helpers.js:93:18)
        at Object.<anonymous> (/home/andrewosh/Development/@hypercore-skunkworks/hypercore-next/test/basic.js:1:14) {
      code: 'ERR_UNCAUGHT_EXCEPTION_CAPTURE_ALREADY_SET'
    }
    ----------|---------|----------|---------|---------|-------------------
    File      | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s 
    ----------|---------|----------|---------|---------|-------------------
    All files |       0 |        0 |       0 |       0 |                   
    ----------|---------|----------|---------|---------|-------------------
    
    opened by andrewosh 0
  • serial makes all tests timeout when one does it

    const test = require('brittle')
    
    // timeout here doesn't matter, just to speed up the test case
    test.configure({ serial: true, timeout: 1000 })
    
    test('a', async function (t) {
      await new Promise((resolve) => {
        setTimeout(resolve, 5000)
      })
      t.pass('a')
    })
    
    test('b', async function (t) {
      t.pass('b')
    })
    
    test('c', async function (t) {
      t.pass('c')
    })
    

    Running this results in all tests failing instead of just the first one:

    TAP version 13
    # a
        not ok 0 - test timed out after 1000ms
          ---
          actual:
            !error
            name: Error
            message: test timed out after 1000ms
            stack: |-
              Error: test timed out after 1000ms
                  at listOnTimeout (node:internal/timers:557:17)
                  at processTimers (node:internal/timers:500:7)
            code: ERR_TIMEOUT
            test: a
            plan: 0
            count: 0
            ended: false
          expected: null
          operator: execution
          ...
    
    not ok 1 - a # time=1018.982875ms
    
    # b
        not ok 0 - test timed out after 1000ms
          ---
          actual:
            !error
            name: Error
            message: test timed out after 1000ms
            stack: |-
              Error: test timed out after 1000ms
                  at listOnTimeout (node:internal/timers:557:17)
                  at processTimers (node:internal/timers:500:7)
            code: ERR_TIMEOUT
            test: b
            plan: 0
            count: 0
            ended: false
          expected: null
          operator: execution
          ...
    
    not ok 2 - b # time=1020.934458ms
    
    # c
        not ok 0 - test timed out after 1000ms
          ---
          actual:
            !error
            name: Error
            message: test timed out after 1000ms
            stack: |-
              Error: test timed out after 1000ms
                  at listOnTimeout (node:internal/timers:557:17)
                  at processTimers (node:internal/timers:500:7)
            code: ERR_TIMEOUT
            test: c
            plan: 0
            count: 0
            ended: false
          expected: null
          operator: execution
          ...
    
    not ok 3 - c # time=1022.618125ms
    
    1..3
    # time=5008.828917ms
    # failing=3
    
    opened by mafintosh 0
  • Create a VS Code test provider

    Just adding here to keep track of this request. I may perhaps be able to contribute this eventually. Been wanting to make a VS Code extension for some time.

    Links:

    • Relevant documentation: https://code.visualstudio.com/api/extension-guides/testing
    • Example provider: https://github.com/microsoft/vscode-selfhost-test-provider
    • Relevant vscode issue: https://github.com/microsoft/vscode/issues/107467
    • Related Twitter discussion: https://twitter.com/voxpelli/status/1465693796960665614?s=20
    opened by voxpelli 4
Releases(v2.0.0)
  • v2.0.0(Dec 3, 2021)

    🥜 🥜

    Brittle The Second

    Major/Breaking Changes

    Serial test execution

    In version 1, tests run concurrently by default; output is buffered and released at various checkpoints. This can mean that console.log (or any writing to I/O) appears out-of-order with the TAP output. One way around this is to use the comment feature, but that isn't ideal when debugging a dependency using console.log. Therefore serial execution is the default in Brittle version 2. To opt in to concurrent mode, use configure({ concurrent: true }), or set the concurrency limit with configure({ concurrency: <LIMIT> }).

  • v1.0.0(Dec 3, 2021)

Owner
David Mark Clements
Consultant, Principal Architect, Author of Node Cookbook, Technical Lead of OpenJS Certifications