Deep Learning in JavaScript. Train Convolutional Neural Networks (or ordinary ones) in your browser.

Overview

ConvNetJS

ConvNetJS is a JavaScript implementation of neural networks, together with nice browser-based demos. It currently supports:

  • Common Neural Network modules (fully connected layers, non-linearities)
  • Classification (SVM/Softmax) and Regression (L2) cost functions
  • Ability to specify and train Convolutional Networks that process images
  • An experimental Reinforcement Learning module, based on Deep Q Learning

For much more information, see the main page at convnetjs.com

Note: I am not actively maintaining ConvNetJS anymore because I simply don't have time. I think the npm repo might not work at this point.

Online Demos

Example Code

Here's a minimal example of defining a 2-layer neural network and training it on a single data point:

// specifies a 2-layer neural network with one hidden layer of 20 neurons
var layer_defs = [];
// input layer declares size of input. here: 2-D data
// ConvNetJS works on 3-Dimensional volumes (sx, sy, depth), but if you're not dealing with images
// then the first two dimensions (sx, sy) will always be kept at size 1
layer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2});
// declare 20 neurons, followed by ReLU (rectified linear unit non-linearity)
layer_defs.push({type:'fc', num_neurons:20, activation:'relu'}); 
// declare the linear classifier on top of the previous hidden layer
layer_defs.push({type:'softmax', num_classes:10});

var net = new convnetjs.Net();
net.makeLayers(layer_defs);

// forward a random data point through the network
var x = new convnetjs.Vol([0.3, -0.5]);
var prob = net.forward(x); 

// prob is a Vol. Vols have a field .w that stores the raw data, and .dw that stores gradients
console.log('probability that x is class 0: ' + prob.w[0]); // prints 0.50101

var trainer = new convnetjs.SGDTrainer(net, {learning_rate:0.01, l2_decay:0.001});
trainer.train(x, 0); // train the network, specifying that x is class zero

var prob2 = net.forward(x);
console.log('probability that x is class 0: ' + prob2.w[0]);
// now prints 0.50374, slightly higher than previous 0.50101: the networks
// weights have been adjusted by the Trainer to give a higher probability to
// the class we trained the network with (zero)
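For intuition about the SGDTrainer options above, here is a small pure-JavaScript sketch of one SGD step with L2 weight decay, the update that learning_rate and l2_decay control. This is an illustration of the general technique, not the library's actual internals, and sgdStep is a hypothetical helper name:

```javascript
// Illustrative sketch, not the library's internals: one SGD update on a
// single weight, with the L2 penalty folded into the gradient.
// learning_rate and l2_decay play the same roles as in the SGDTrainer
// options above.
function sgdStep(w, grad, learning_rate, l2_decay) {
  // The L2 term adds l2_decay * w to the raw gradient,
  // gently pulling the weight toward zero.
  return w - learning_rate * (grad + l2_decay * w);
}

var w = 0.5;
w = sgdStep(w, 0.2, 0.01, 0.001);
console.log(w); // ~0.497995: a small step against the gradient
```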

And here is a small Convolutional Neural Network, if you wish to predict on images:

var layer_defs = [];
layer_defs.push({type:'input', out_sx:32, out_sy:32, out_depth:3}); // declare size of input
// output Vol is of size 32x32x3 here
layer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});
// the layer will perform convolution with 16 kernels, each of size 5x5.
// the input will be padded with 2 pixels on all sides to make the output Vol of the same size
// output Vol will thus be 32x32x16 at this point
layer_defs.push({type:'pool', sx:2, stride:2});
// output Vol is of size 16x16x16 here
layer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});
// output Vol is of size 16x16x20 here
layer_defs.push({type:'pool', sx:2, stride:2});
// output Vol is of size 8x8x20 here
layer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});
// output Vol is of size 8x8x20 here
layer_defs.push({type:'pool', sx:2, stride:2});
// output Vol is of size 4x4x20 here
layer_defs.push({type:'softmax', num_classes:10});
// output Vol is of size 1x1x10 here

net = new convnetjs.Net();
net.makeLayers(layer_defs);

// helpful utility for converting images into Vols is included
var x = convnetjs.img_to_vol(document.getElementById('some_image'));
var output_probabilities_vol = net.forward(x);
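The per-layer output sizes noted in the comments above follow the standard convolution/pooling size arithmetic. A small pure-JavaScript helper (illustrative only, not part of the library's API) can be used to check them:

```javascript
// Output spatial size of a conv or pool layer:
//   floor((in_size + 2*pad - filter_size) / stride) + 1
function outSize(inSize, sx, pad, stride) {
  return Math.floor((inSize + 2 * pad - sx) / stride) + 1;
}

console.log(outSize(32, 5, 2, 1)); // conv, 5x5, pad 2, stride 1 -> 32 (size preserved)
console.log(outSize(32, 2, 0, 2)); // 2x2 pool, stride 2 -> 16
console.log(outSize(16, 2, 0, 2)); // next pool -> 8
console.log(outSize(8, 2, 0, 2));  // final pool -> 4
```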

Getting Started

A Getting Started tutorial is available on the main page.

The full Documentation can also be found there.

See the releases page for this project to get the minified, compiled library. A direct link is also available below for convenience (but please host your own copy).

Compiling the library from src/ to build/

If you would like to add features to the library, you will have to change the code in src/ and then compile the library into the build/ directory. The compilation script simply concatenates files in src/ and then minifies the result.

The compilation is done using an ant task: it compiles build/convnet.js by concatenating the source files in src/ and then minifies the result into build/convnet-min.js. Make sure you have ant installed (on Ubuntu you can simply run sudo apt-get install ant), then cd into the compile/ directory and run:

$ ant -lib yuicompressor-2.4.8.jar -f build.xml

The output files will be in build/

Use in Node

The library is also available for Node.js:

  1. Install it: $ npm install convnetjs
  2. Use it: var convnetjs = require("convnetjs");

License

MIT

Comments
  • Typed arrays


    I had a go at using typed arrays (where available) for w & dw in Vol (and return value of zeros function). Running in Chrome Canary, there seems to be a bit of a performance increase - the first 1000 examples in the MNIST demo went from taking ~16s to ~12s on my machine (Chrome 34.0.1769.2, Windows 8.0).

    opened by JimAllanson 8
  • Fix external dependencies for MagicNet demo


Just ripped the files from the cs.stanford.edu/people/karpathy/... Tried to minimise changes; personally, I think it's best to avoid hosting any of these external libs locally. For the data files, because of the requests for the .txt, one must still run a server to load the demo page. I did not check whether other demos are similarly broken when running locally.

    opened by bittlingmayer 7
  • Speed-up of ConvLayer.forward


I noticed that ConvLayer.forward is being benchmarked by convnet-benchmarks - and I thought I'd have a go at some optimisation. With a little type-hinting here and there (plus some slight loop-order modifications, and constant extraction), I think I've got at least a 2x speed-up (YMMV, of course). It's functionally identical (AFAICT).

    Here's the run-down of the benchmark timings

          // Orig   #5 iteration : 4880ms (original)
          // Dupe   #5 iteration : 5067ms (+1 console.log!)
          // oxoy   #5 iteration : 4155ms (move oy,ox calc outside of inner loop)
          // xy|0   #5 iteration : 4155ms (type hint on x and y)
          // xyst|0 #5 iteration : 2607ms (type hint on stride_x and stride_y)
          // more|0 #5 iteration : 2662ms (type hint on f>depth - WORSE)
          // hint|0 #5 iteration : 2591ms (type hint on constructors)
          // ox out #5 iteration : 2586ms (move ox variable setting outside y loop (splitting 'if' makes it WORSE, though))
          // xy->yx #5 iteration : 2398ms (switch loop order, so that faster moving indices inside (better cache perf))
          // contru #5 iteration : 2366ms (type-hinting into constructor of A)
          // VolSet #5 iteration : 2322ms (type-hinting into Vol.set())
    

    One issue with submitting the patch, though, is that my build/convnet.js is also updated, which seems wasteful. OTOH, since you want the concatted-minimised version in the repo, I don't see how to get away from including it...

    In addition, the current state-of-play has both forward_orig and forward(new) in it - as well as some commentary about things that don't work, etc. Would you like me to clean them up before submitting?

    All the Best Martin :-)

    opened by mdda 7
  • average_loss_window has a value of NaN


    Hello,

    I've recently started using this library and it has been exciting so far. However, it appears that the backward() function is not returning a proper value and as such I'm not really able to train anything.

    Specifically, the variable avcost gets a value of NaN. After tracing the issue back to the policy function, it appears that maxval is getting a value of NaN, which seems to be because action_values.w contains NaN for all its values, which I think is because all of the layers associated with value_net have Float64Array values of NaN's pretty much across the board. I'm using the layer setup found in the rldemo demonstration so I'm not really sure how to progress past this.

    Any help is appreciated. Sorry for any formatting issues, I don't regularly use GitHub.

    Thanks.

    opened by RyleyGG 6
  • Progress thus far on using SIMD + Typed Objects.


    As per #32, I've been trying to rewrite convnetjs using SIMD and Typed Objects, which are both proposals for ECMAScript 2016 (ES7). In the end I also ended up rewriting most of the code in ES6 and changed the build process to use Babel and Browserify.

    Unfortunately, it appears as though the Typed Objects proposal is about to be changed substantially, and SIMD is going to be modified slightly so it becomes somewhat of a subset of Typed Objects (as will Typed Arrays). Given this sort of instability as far as APIs are concerned, this repo won't currently run in any browser or in node.js. It would be possible to get it to run in Firefox Nightly with some effort.

    Some changes that were necessary:

    • All Vols are instances of a VolType - so all Vols of a particular VolType have the same preset dimensions and can't be resized.
    • Support for browsers without Typed Arrays had to be dropped. I don't think this affects too many people, given it's just old versions of IE.
    • Most algorithms have been made as parallel as possible using SIMD.

    Some things I've also considered:

    • Tried using a use_webgl argument, which is either false or a reference to a WebGL context (can easily be emulated in node.js using OpenGL bindings).

    Given my branch is behind quite a few commits, and isn't actually working, I thought I'd just send through this pull request so progress thus far could be looked at/torn apart by anyone who's interested.

    opened by thomasfoster96 6
  • Refactored the demos


I extracted styles and javascript from all the demo files, explicitly declared several magic constants in the mnist and cifar demos, unified the javascript for mnist and cifar10 into one js file, moved js and css files to their respective folders, updated references in the html, and added the ability to test with your own image in the cifar10 and mnist demos.

    still left to do: refactor and/or unify the javascript files for the rest of the demos

    opened by pavelpep 6
  • npm: 404 not found


    I'm new to node and npm so I might be doing something wrong, but when I try:

    $ npm install convnetjs
    

    I get:

    npm http GET https://registry.npmjs.org/convnetjs
    npm http 304 https://registry.npmjs.org/convnetjs
    npm http GET https://registry.npmjs.org/convnetjs/-/convnetjs-0.2.0.tgz
    npm http 404 https://registry.npmjs.org/convnetjs/-/convnetjs-0.2.0.tgz
    npm ERR! fetch failed https://registry.npmjs.org/convnetjs/-/convnetjs-0.2.0.tgz
    npm ERR! Error: 404 Not Found
    npm ERR!     at WriteStream.<anonymous> (/usr/local/lib/node_modules/npm/lib/utils/fetch.js:57:12)
    npm ERR!     at WriteStream.EventEmitter.emit (events.js:117:20)
    npm ERR!     at fs.js:1598:14
    npm ERR!     at /usr/local/lib/node_modules/npm/node_modules/graceful-fs/graceful-fs.js:105:5
    npm ERR!     at Object.oncomplete (fs.js:107:15)
    npm ERR! If you need help, you may report this *entire* log,
    npm ERR! including the npm and node versions, at:
    npm ERR!     <http://github.com/npm/npm/issues>
    
    npm ERR! System Darwin 13.1.0
    npm ERR! command "node" "/usr/local/bin/npm" "install" "convnetjs"
    npm ERR! cwd /Users/christian/repos/github/2Q48/js
    npm ERR! node -v v0.10.26
    npm ERR! npm -v 1.4.7
    npm ERR!
    npm ERR! Additional logging details can be found in:
    npm ERR!     /Users/christian/repos/github/2Q48/js/npm-debug.log
    npm ERR! not ok code 0
    
    opened by cjauvin 4
  • Same output with every input after exporting and using.


I have verified that the images I am forwarding in are different. Although I am seeing the exact same weights every time I choose a different image. It is very strange...

    I know I trained the network properly (for a beginner) because I calculated and saw the average cost_loss fall from ~2 to below 1. So that is great.

    I feel I must be importing the network in wrong or possibly getting my predictions incorrectly.

    I used the .toJSON and .fromJSON methods, just as I have seen in examples.

I can get some code examples up here, but was wondering if anyone had any ideas or if anyone had seen this behavior before.

    opened by dijs 3
  • Trigger a warning when regression trainer is not given an array


When you train a network for regression and want to predict only one value, it is easy to miss that the trainer must be given an array of one value:

    var stats = trainer.train(line.x, [line.y]);
    

I just added a console log so that beginner users realise this quickly (I've seen two people fall into this trap in a short interval).

    opened by vallettea 3
  • Possible issue with batch_size parameter


    I was looking through your code and you initialize the gradient to be zero on each backward pass, won't this impact the use of batch_size? Doesn't the gradient need to be added up so that you get the average gradient for the weight update? You already set the gradient to zero on each weight update.

    opened by jgbos 3
  • Convolutional Network not training


    I'm trying to recreate a readable version of the mnist demo. My implementation uses a 28x28 array to forward into the network with the same layer defs:

    layer_defs = [];
    layer_defs.push({type:'input', out_sx:24, out_sy:24, out_depth:1});
    layer_defs.push({type:'conv', sx:5, filters:8, stride:1, pad:2, activation:'relu'});
    layer_defs.push({type:'pool', sx:2, stride:2});
    layer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});
    layer_defs.push({type:'pool', sx:3, stride:3});
    layer_defs.push({type:'softmax', num_classes:10});
    

    I don't know how to train the network properly using the Vol class. Whatever I do the network just does not learn and the cost error is always NaN. The forward pass works fine though but the trainer.train does not.

    My train function is as follows

    function train(data){
      if (stopTraining){return}
      let cop = [...data.input]
      cop.reshape(28, 28)
      cop = transposeArray(cop, cop.length)
      let x = new convnetjs.Vol(cop.length, cop[0].length, 1, 0.0)
      x.w = cop.flat()
      let stats = trainer.train(x, data.output)
      console.log(stats)
      
    }
    
    opened by Bobingstern 2
  • Update rldemo.js


    Pretty sure this is what was intended. The graph doesn't appear to work correctly in the demo. This change makes the x axis match the age of the trained agent. Tested change and appears correct to me.

    opened by realrandolph 1
  • A new active ML library.


I made a library similar to this. I'm active and making it better every day! It would mean a lot if you checked out some of the examples. It would mean even more if you submitted a PR! So far the main thing missing is an optimizer, which I will get to very soon.

    This library has Deconv layers and other new features that not even convnetjs has!

    Link

    opened by TrevorBlythe 0
  • [Suggestion for "Painting"] Use of Fourier Features?

    Hello,

    On the page titled "Image Painting" I wanted to suggest passing the (x, y) points through a Fourier feature mapping. The details on this are described here in this 2020 NeurIPS paper by Google Research (including example code).

    Admittedly I'm not sure if this would be out of scope for an intro example for this library, but it would be very cool to see the neural net be able to paint the cat clearly.

    Btw thanks for sharing this great work! I originally found this repo when I was in undergrad, it was part of what got me excited about computer vision :)

    opened by UPstartDeveloper 2
  • A 4x faster alternative to ConvNetJS


    While extending my knowledge of neural networks, I implemented a neural network library in Javascript. It has capabilities similar to ConvNetJS, but both training and testing are 4x faster (while still running in a single JS thread on the CPU).

    I did not have time to prepare such nice demos, as there are for ConvNetJS. I guess you can use ConvNetJS for learning and experimenting, and use my library when you want to train a specific network.

Also, my library can load pre-trained models from ConvNetJS (JSON) and Caffe (.caffemodel).

    https://github.com/photopea/UNN.js - it is only 18 kB.

    opened by photopea 6
Releases: 2014.08.31
Owner: Andrej ("I like to train Deep Neural Nets on large datasets.")