TensorJS

Overview

This is a JS/TS library for accelerated tensor computation intended to run in the browser. It contains an implementation of numpy-style multidimensional arrays and their operators.

It also supports executing Onnx models. For examples, check the examples folder.

There are three execution backends available:

  • CPU: This is implemented in plain JavaScript and is thus not very fast. It is intended as a reference implementation; big optimizations are avoided for simplicity.
  • Web Assembly: This is implemented in Rust and compiled to WebAssembly. It is optimized for faster execution (although there is still a lot of work to be done).
  • GPU: This uses WebGL to enable very fast execution and should be used whenever a GPU is available. Most of the development focus went into this backend, making it by far the fastest: typically ~10-100 times faster than the WASM backend (except for a few operators).

How to use

Install with

$ npm install @hoff97/tensor-js

and then import

import * as tjs from '@hoff97/tensor-js';

or import only the parts you need directly.

Tensors

You can create tensors of the respective backend like this:

  • CPU:
    const tensor = new tjs.tensor.cpu.CPUTensor([2,2], [1,2,3,4]);
  • WASM:
    const tensor = new tjs.tensor.wasm.WASMTensor(new Float32Array([1,2,3,4]), [2,2]);
  • GPU:
    const tensor = new tjs.tensor.gpu.GPUTensor(new Float32Array([1,2,3,4]), [2,2]);
    or directly from an image/video element:
    const video = document.querySelector("#videoElement") as HTMLVideoElement;
    const tensor = tjs.tensor.gpu.GPUTensor.fromData(video);
    which will be a tensor with shape [height,width,4] and data type float32. Creating a GPU tensor from a video element is usually pretty fast; creating one from an image is not necessarily fast, since the image data first has to be transferred to the GPU.

Tensor operations

Once you have created a tensor, you can do operations on it, for example:

  • Add two tensors
    const res = a.add(b);
  • Matrix multiplication
    const res = a.matMul(b);
  • Find the maximum
    const res = a.max(1);

For a list of all operators, see the docs. Most operators will behave like their numpy/pytorch counterparts.
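
For example, combining a few of these operators (a minimal sketch using the CPU backend):

const a = new tjs.tensor.cpu.CPUTensor([2,2], [1,2,3,4]);
const b = new tjs.tensor.cpu.CPUTensor([2,2], [5,6,7,8]);

const sum = a.add(b);        // elementwise sum, shape [2,2]
const prod = a.matMul(b);    // matrix product, shape [2,2]
const maxima = prod.max(1);  // maximum along axis 1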

Reading values

When you want to read data from a tensor:

const values = await tensor.getValues();

which will give you the values as a flat array. For CPU tensors you can also get the value at a specific index:

const value = tensor.get([1,2,3,4]);

Data types

Tensors are created with 32-bit float values by default. You can specify another data type on creation:

  const tensor = new tjs.tensor.cpu.CPUTensor([2,2], [1,2,3,4], 'float16');

or cast to another data type with:

  const casted = tensor.cast('float16');

The available data types are float64, float32, float16, int32, int16, int8, uint32, uint16, uint8. Note that not all backends support all data types:

  • CPU: Supports all data types, but float16 will be represented as float32 internally
  • WASM: Supports all except float16
  • GPU: Supports all except float64. Note that apart from float16, all data types are represented as float32 internally, since WebGL1 does not allow writing anything other than floats to frame buffers. This means that for int32 and uint32, the full range of the data type is not available.

The data type of a tensor can be accessed via tensor.dtype. Additionally, each tensor has a generic type argument, which will carry its data type:

  const tensor: Tensor<'float16'> = new tjs.tensor.cpu.CPUTensor([2,2], [1,2,3,4], 'float16');

This allows type checking tensor operations: when using TypeScript, only operations on tensors with the same data type will compile. The generic type defaults to float32. If you want to represent a tensor with an unknown data type, write for example

  const tensor: Tensor<any> = a.add(b);

or alternatively

  const tensor: Tensor<DType> = a.add(b);
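
With these types, mixing data types is caught at compile time. A small sketch:

const a = new tjs.tensor.cpu.CPUTensor([2,2], [1,2,3,4], 'float16');
const b = new tjs.tensor.cpu.CPUTensor([2,2], [5,6,7,8]); // defaults to 'float32'

// const bad = a.add(b);          // does not compile: 'float16' vs 'float32'
const ok = a.add(b.cast('float16')); // cast first, then operate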

Converting between backends

You can convert a tensor to a different backend like so:

const cpuTensor = await tjs.util.convert.toCPU(tensor);
const wasmTensor = await tjs.util.convert.toWASM(tensor);
const gpuTensor = await tjs.util.convert.toGPU(tensor);

Note that converting between backends (especially from/to WebGL) is an expensive operation and should be avoided if possible!
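
A typical pattern is therefore to convert once, run all operations on the target backend, and only read values back at the end. A minimal sketch:

const aGPU = await tjs.util.convert.toGPU(a);
const bGPU = await tjs.util.convert.toGPU(b);

// Stay on the GPU for all intermediate operations
const resGPU = aGPU.matMul(bGPU).add(aGPU);

const values = await resGPU.getValues();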

Onnx model support

You can load an onnx model like this:

const response = await fetch('model.onnx');
const buffer = await response.arrayBuffer();

const model = new tjs.onnx.model.OnnxModel(buffer);

To see all supported operators, check the supported operator list.

You will very likely want to run this model on the GPU. To do this:

await model.toGPU();
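
Putting the pieces together, loading and running a model could look like the following sketch. Note that the name and signature of the forward call, as well as the input shape, are assumptions here; check the docs for the exact inference API.

const response = await fetch('model.onnx');
const buffer = await response.arrayBuffer();

const model = new tjs.onnx.model.OnnxModel(buffer);
await model.toGPU();

// Assumption: inference takes a list of input tensors and returns a list of outputs
const input = new tjs.tensor.gpu.GPUTensor(new Float32Array([1,2,3,4]), [1,4]);
const [output] = await model.forward([input]);
console.log(await output.getValues());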

Optimizations

There are a few optimization passes that can be applied to an Onnx model to speed up execution. To do this, run

model.optimize()

Running with half precision

By default, full-precision floats (32 bits) are used for model execution. On the GPU backend, you can try executing with half precision, but be aware that this might not work for all models. To use half precision, specify it when loading the model:

const model = new tjs.onnx.model.OnnxModel(buffer, {
  precision: 16
});
await model.toGPU();

For the best performance, you should also create your GPU tensors with half precision:

const tensor = new tjs.tensor.gpu.GPUTensor(new Float32Array([1,2,3,4]), [2,2], 'float16');

The outputs of the model will be half-precision tensors as well. To read the values of a half-precision GPU tensor, you have to convert it to full precision first, which can be done with:

const values = await tensor.cast('float32').getValues();

Other performance considerations

Try to run your models with static input sizes. TensorJS will compile specialized versions of all operations after enough forward passes, but for this the input shapes of the tensors have to stay constant.
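
For example, when processing a stream of inputs, keep the shape fixed across passes (a sketch; frames and the forward call are assumptions, as above):

const shape = [1, 3, 224, 224]; // hypothetical input shape, must match your model
for (const frame of frames) {   // frames: hypothetical Float32Array inputs
  const input = new tjs.tensor.gpu.GPUTensor(frame, shape, 'float16');
  const [output] = await model.forward([input]); // inference API assumed as above
  // ... consume output ...
}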

Autograd functionality

Automatic differentiation is supported. For this, create variables from all your tensors:

const a = new tjs.tensor.cpu.CPUTensor([2,2], [1,2,3,4]);
const b = new tjs.tensor.cpu.CPUTensor([2,2], [5,6,7,8]);

const varA = new tjs.autograd.Variable(a);
const varB = new tjs.autograd.Variable(b);

Or use the utility methods:

const varA = tjs.autograd.Variable.create([2,2], [1,2,3,4], 'GPU');
const videoElement = document.querySelector("#videoElement") as HTMLVideoElement;
const varB = tjs.autograd.Variable.fromData(videoElement);

Afterwards you can perform normal tensor operations:

const mul = varA.matMul(varB);
const sum = mul.sum();

To perform a backward pass, call backward on a scalar tensor (a tensor with shape [1]). All variables will then have an attribute .grad, which holds the gradient:

sum.backward();

console.log(varA.grad);

Multiple backward passes will add up the gradients. Once you are done with the variables, free the computation graph by calling delete().
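
A complete round trip might look like this (a sketch; here delete() is called on the final result, and 'CPU' as a backend string for Variable.create is an assumption, analogous to 'GPU' above):

const varA = tjs.autograd.Variable.create([2,2], [1,2,3,4], 'CPU');
const varB = tjs.autograd.Variable.create([2,2], [5,6,7,8], 'CPU');

const loss = varA.matMul(varB).sum(); // scalar result with shape [1]
loss.backward();

console.log(varA.grad); // gradient of loss with respect to varA
console.log(varB.grad);

loss.delete(); // free the computation graph when done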

Sparse tensors

Sparse tensors are tensors where most entries are zero, for example the following one:

// Assuming CPUTensor and SparseTensor have been imported from the library
const a = new CPUTensor([3,3],
  [1,0,0,
   0,2,0,
   0,3,4]);

TensorJS supports sparse tensors in coordinate format, where we store the coordinates and values of the nonzero entries in two tensors:

  const indices = [
    0,0,  // Corresponds to value 1
    1,1,  // Corresponds to value 2
    2,1,  // Corresponds to value 3
    2,2   // Corresponds to value 4
  ];
  const indexTensor = new CPUTensor([4, 2], indices, 'uint32');

  const values = [1,2,3,4];
  const valueTensor = new CPUTensor([4], values);
  const sparseTensor = new SparseTensor(valueTensor, indexTensor, [3,3]);

The implementations of the operators for sparse tensors only consider the nonzero entries and are thus faster than their dense counterparts.

Note that some operators make specific assumptions about the sparse tensor; for details, check the corresponding documentation.

Backend support for sparse tensors

As of now, most sparse operators are only supported on the CPU and WASM backends. If an operation is not supported, this is noted in the docs.

Documentation

You can find the documentation here.

Contributing

See Contributing.md

Development

See Development.md
