architecture-free neural network library for node.js and the browser

Overview

Synaptic. Join the chat at https://synapticjs.slack.com

Important: Synaptic 2.x is currently under discussion! Feel free to participate.

Synaptic is a javascript neural network library for node.js and the browser. Its generalized algorithm is architecture-free, so you can build and train basically any type of first-order or even second-order neural network architecture.

This library includes a few built-in architectures like multilayer perceptrons, multilayer long short-term memory networks (LSTM), liquid state machines, and Hopfield networks. It also includes a trainer capable of training any given network, with built-in training tasks/tests like solving an XOR, completing a Distracted Sequence Recall task, or an Embedded Reber Grammar test, so you can easily test and compare the performance of different architectures.

The algorithm implemented by this library has been taken from Derek D. Monner's paper:

A generalized LSTM-like training algorithm for second-order recurrent neural networks

References to the equations in that paper are included as comments throughout the source code.

Introduction

If you have no prior knowledge about Neural Networks, you should start by reading this guide.

If you want a practical example on how to feed data to a neural network, then take a look at this article.

You may also want to take a look at this article.

Demos

The source code of these demos can be found in this branch.

Getting started

To try out the examples, check out the gh-pages branch.

git checkout gh-pages

Other languages

This README is also available in other languages.


Installation

In node

You can install synaptic with npm:

npm install synaptic --save

In the browser

You can install synaptic with bower:

bower install synaptic

Or you can simply use the CDN link, kindly provided by cdnjs:

<script src="https://cdnjs.cloudflare.com/ajax/libs/synaptic/1.1.4/synaptic.js"></script>

Usage

var synaptic = require('synaptic'); // this line is not needed in the browser
var Neuron = synaptic.Neuron,
	Layer = synaptic.Layer,
	Network = synaptic.Network,
	Trainer = synaptic.Trainer,
	Architect = synaptic.Architect;

Now you can start to create networks, train them, or use built-in networks from the Architect.

Examples

Perceptron

This is how you can create a simple perceptron:


function Perceptron(input, hidden, output)
{
	// create the layers
	var inputLayer = new Layer(input);
	var hiddenLayer = new Layer(hidden);
	var outputLayer = new Layer(output);

	// connect the layers
	inputLayer.project(hiddenLayer);
	hiddenLayer.project(outputLayer);

	// set the layers
	this.set({
		input: inputLayer,
		hidden: [hiddenLayer],
		output: outputLayer
	});
}

// extend the prototype chain
Perceptron.prototype = new Network();
Perceptron.prototype.constructor = Perceptron;

Now you can test your new network by creating a trainer and teaching the perceptron to learn XOR:

var myPerceptron = new Perceptron(2,3,1);
var myTrainer = new Trainer(myPerceptron);

myTrainer.XOR(); // { error: 0.004998819355993572, iterations: 21871, time: 356 }

myPerceptron.activate([0,0]); // 0.0268581547421616
myPerceptron.activate([1,0]); // 0.9829673642853368
myPerceptron.activate([0,1]); // 0.9831714267395621
myPerceptron.activate([1,1]); // 0.02128894618097928
Long Short-Term Memory

This is how you can create a simple long short-term memory network with input gate, forget gate, output gate, and peephole connections:


function LSTM(input, blocks, output)
{
	// create the layers
	var inputLayer = new Layer(input);
	var inputGate = new Layer(blocks);
	var forgetGate = new Layer(blocks);
	var memoryCell = new Layer(blocks);
	var outputGate = new Layer(blocks);
	var outputLayer = new Layer(output);

	// connections from input layer
	var inputConn = inputLayer.project(memoryCell);
	inputLayer.project(inputGate);
	inputLayer.project(forgetGate);
	inputLayer.project(outputGate);

	// connections from memory cell
	var outputConn = memoryCell.project(outputLayer);

	// self-connection
	var selfConn = memoryCell.project(memoryCell);

	// peepholes
	memoryCell.project(inputGate);
	memoryCell.project(forgetGate);
	memoryCell.project(outputGate);

	// gates
	inputGate.gate(inputConn, Layer.gateType.INPUT);
	forgetGate.gate(selfConn, Layer.gateType.ONE_TO_ONE);
	outputGate.gate(outputConn, Layer.gateType.OUTPUT);

	// input to output direct connection
	inputLayer.project(outputLayer);

	// set the layers of the neural network
	this.set({
		input: inputLayer,
		hidden: [inputGate, forgetGate, memoryCell, outputGate],
		output: outputLayer
	});
}

// extend the prototype chain
LSTM.prototype = new Network();
LSTM.prototype.constructor = LSTM;

These examples are for explanatory purposes; the Architect already includes Multilayer Perceptron and Multilayer LSTM network architectures.

Contribute

Synaptic is an Open Source project that started in Buenos Aires, Argentina. Anybody in the world is welcome to contribute to the development of the project.

If you want to contribute, feel free to send PRs; just make sure to run npm run test and npm run build before submitting. This way you'll run all the test specs and build the web distribution files.

Support

If you like this project and you want to show your support, you can buy me a beer with magic internet money:

BTC: 16ePagGBbHfm2d6esjMXcUBTNgqpnLWNeK
ETH: 0xa423bfe9db2dc125dd3b56f215e09658491cc556
LTC: LeeemeZj6YL6pkTTtEGHFD6idDxHBF2HXa
XMR: 46WNbmwXpYxiBpkbHjAgjC65cyzAxtaaBQjcGpAZquhBKw2r8NtPQniEgMJcwFMCZzSBrEJtmPsTR54MoGBDbjTi2W1XmgM

<3

Comments
  • Parallelize neuron activation sequence in self-connected layers


    This PR is not about parallelizing the library to run on multiple cores/gpu. It's about a bug in the sequence of activating neurons in self-connected layers.

    We need to take great care when activating a self-connected layer (e.g. memoryCell.project(memoryCell);). Before this PR, all neurons in such a layer were activated in sequence, which means earlier neurons always overwrote the activations fed to later neurons, so the activations could get mixed up. The right way to do this is to update the activations only after all neurons in the layer have been activated.

    This PR fixes the bug in forward propagation, including the hardcoder (the standalone hardcoder is fixed but not tested yet). I'm not sure yet whether a similar bug exists in back propagation. We have to fix these bugs if we want to parallelize the library across multiple cores, because we don't want resource conflicts.

    The fixed code passes the DSR task as well as the timing task.

    Notable changes:

    • Neuron got two new properties: .newactivation and .inselfconnectedlayer
    • update_sentences was added to the hardcoder, which contains a few lines like Neuron.activation = Neuron.newactivation;
    opened by Sleepwalking 25
  • Output/Hidden to Hidden gatings in LSTM-RNN


    According to Felix Gers' dissertation [1], gates on memory cells have connections not only from the input layer but also from the memory cells themselves. However, Architect currently only projects input-to-input/forget/output gate connections for LSTMs (except peepholes),

          inputLayer.project(inputGate);
          inputLayer.project(forgetGate);
          inputLayer.project(outputGate);
    

    which means that the neural network remembers/forgets information based only on its current inputs, which could be disastrous for certain tasks that require long-term memory. In some other applications memory cells are even gated by outputs. Besides gating, first-order connections from output to hidden layers also appear in some of the literature.

    This observation provides an insight into why the Wikipedia language modeling task doesn't give promising results even after hours of training. In an informal test enabling hidden-layer-to-gates connections, the network is able to reproduce text such as "the of the of the of the of the of ..." on its own. I also trained an LSTM with 70 memory cells on some short paragraphs and the network can exactly reproduce two or three sentences on its own.

    I'm going to run further tests to compare the hidden-to-gates connected LSTMs with input-to-gates connected ones.

    [1] Gers, Felix. Long short-term memory in recurrent neural networks. PhD dissertation, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2001.

    opened by Sleepwalking 19
  • Saving network to file


    I exported my network to JSON using .toJSON() and saved it to a file after running JSON.stringify on it. When I read it back, I used JSON.parse to convert it into JSON and loaded it using Network.fromJSON, and I got this error: "Connection Error: Invalid neurons".

    Has anyone else encountered this?

    opened by Leekao 19
  • removing useless variables && declaring missed ones


    Guys, I've detected some minor issues:

    • some variables are falling into the global scope (e.g. in for (level in layers) { there should be a var level instead; in var iterations = bucketSize = 0; bucketSize should be declared separately);
    • some variables are not used (e.g. var type = connection.gatedfrom[from].type;, var xtrace = this.trace.extended[id];)
    • some code style issues: semicolons and so on.

    Also, there are significant potential issues with for... in loops: libraries such as prototype.js, sugar.js, and several others extend the prototype and may cause iteration over non-own properties. There are several ways to solve this, so if you'd like, I can work on this.

    opened by Jabher 18
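The for...in concern above can be guarded against with an own-property check inside the loop; a minimal sketch:

```javascript
// Simulate a library that extends Object.prototype (like prototype.js or sugar.js)
Object.prototype.polluted = 'surprise';

var layers = { input: 1, hidden: 2, output: 3 };
var visited = [];

for (var level in layers) {            // note: var keeps `level` out of the global scope
    if (layers.hasOwnProperty(level)) { // skip inherited properties like `polluted`
        visited.push(level);
    }
}

delete Object.prototype.polluted;      // clean up the simulated pollution
// visited contains only the object's own keys
```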
  • trainAsync does not work for me.


    Hi, I am trying the "XOR using the method trainAsync" example and trainAsync does not work for me.

    Error:

        [[PromiseStatus]]: "rejected"
        [[PromiseValue]]: TypeError: Cannot read property 'rate' of undefined
            at Perceptron.worker (https://cdnjs.cloudflare.com/ajax/libs/synaptic/1.0.8/synaptic.min.js:1:23572)
            at e.workerTrain (https://cdnjs.cloudflare.com/ajax/libs/synaptic/1.0.8/synaptic.min.js:1:28694)
            at https://cdnjs.cloudflare.com/ajax/libs/synaptic/1.0.8/synaptic.min.js:1:28101
            at e.trainAsync (https://cdnjs.cloudflare.com/ajax/libs/synaptic/1.0.8/synaptic.min.js:1:28071)
            at :23:9

        Uncaught (in promise) TypeError: Cannot read property 'rate' of undefined(…)

    In the synaptic.js file:

        1743 // Copy the options and set defaults (options might be different for each worker)
        1744 var workerOptions = {};
        1745 if(options) workerOptions = options;
        1746 workerOptions.rate = options.rate || .2; // this is where the 'rate' of undefined error occurs

    After adding the parameters {rate: 0.15, iterations: 100, error: 0.0001} to trainAsync, I get a new error:

        cb50247c-b4fe-4567-bfea-2c7a8589d99b:1 Uncaught SyntaxError: Unexpected token (

    The error is in the messy first line of the blob starting with "function (t,i){function n(t){for".

    Can someone point me to a working example?

    opened by MariasStory 16
  • Help getting going


    Newb to ML and NNets. What are some ways I could use synaptic to find hidden relationships in standard JSON objects?

    I made a Stack Overflow query but it doesn't seem very well received :ear: I have a long array of objects that are created to track daily actions.

    Example:

    [
        {name:'workout', duration:'120', enjoy: true, time:1455063275, tags:['gym', 'weights']},
        {name:'lunch', duration:'45', enjoy: false, time:1455063275, tags:['salad', 'wine']},
        {name:'sleep', duration:'420', enjoy: true, time:1455063275, tags:['bed', 'romance']}
    ]
    

    I'm having a hard time understanding how to use this data in a neural network to predict if future actions would be enjoyable. Additionally, I want to find hidden relationships between various activities.

    Not sure how to get the rubber on the road. How do I feed the network my array of objects and read the results?

    If anyone can answer this within the context of https://github.com/cazala/synaptic that would be great. It's also super if the answer is a straight machine learning lesson.

    Thanks all! https://stackoverflow.com/questions/35304800/hidden-relationships-with-javascript-machine-learning

    opened by 34r7h 15
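One common way to feed objects like those above into a network (a sketch; the feature choices and scaling constants are assumptions for illustration, not part of synaptic) is to flatten each record into a fixed-length vector of values in [0, 1]:

```javascript
// Hypothetical encoder: turns an action record into a fixed-length
// input vector with every value scaled into [0, 1].
var KNOWN_NAMES = ['workout', 'lunch', 'sleep']; // one-hot encoded
var MAX_DURATION = 1440;                         // minutes in a day

function encode(action) {
    var vector = [];
    // one-hot encoding of the action name
    KNOWN_NAMES.forEach(function (name) {
        vector.push(action.name === name ? 1 : 0);
    });
    // duration scaled to [0, 1]
    vector.push(Math.min(Number(action.duration) / MAX_DURATION, 1));
    // hour of day (UTC) scaled to [0, 1]
    vector.push(new Date(action.time * 1000).getUTCHours() / 23);
    return vector;
}

// target: 1 if the action was enjoyed, 0 otherwise
function target(action) {
    return [action.enjoy ? 1 : 0];
}

var sample = { name: 'workout', duration: '120', enjoy: true, time: 1455063275 };
var vector = encode(sample); // [1, 0, 0, 0.083..., 0]
```

Pairs of encode(action) and target(action) can then be handed to a Trainer as { input, output } objects.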
  • Same options and data, different result with different computers.


    I know the results will differ because of random numbers, but there were always more iterations on the faster computer. Is there a way to ensure the same result with equal options and training data?

    Windows 10 Intel Core i7 @ 2.60GHz:

        iterations 1000 error 0.5832954855597182 rate 0.035
        iterations 2000 error 0.47984523349249425 rate 0.035
        iterations 3000 error 0.4096766632625817 rate 0.035
        iterations 4000 error 0.38598258320892237 rate 0.035
        iterations 5000 error 0.2839244462200957 rate 0.035
        iterations 6000 error 0.21731707647510384 rate 0.035
        iterations 7000 error 0.14974001959676705 rate 0.035
        iterations 8000 error 0.0922783295181048 rate 0.035
        iterations 9000 error 0.07411138184981307 rate 0.035
        iterations 10000 error 0.06626638341129625 rate 0.035
        iterations 11000 error 0.06183108647527365 rate 0.035
        Win Win Loss Win Win Win Win Win Win Loss Win Loss Win Win Loss Win Win Win Win Win Loss Loss Loss Loss Win Loss Win Loss Win Win Loss Loss

    OS X El Capitan 1.8 GHz Intel core i7:

        iterations 1000 error 0.5473525119076491 rate 0.035
        iterations 2000 error 0.4808354441834457 rate 0.035
        iterations 3000 error 0.4521193897218217 rate 0.035
        iterations 4000 error 0.43399835084618216 rate 0.035
        iterations 5000 error 0.4050580287584716 rate 0.035
        iterations 6000 error 0.29390403125518283 rate 0.035
        iterations 7000 error 0.16752626336749996 rate 0.035
        iterations 8000 error 0.0810127505036039 rate 0.035
        iterations 9000 error 0.06531469909494222 rate 0.035
        Loss Win Win Win Loss Win Loss Win Win Loss Loss Loss Loss Win Loss Win Loss Win Win Win Loss Loss Loss Loss Win Loss Win Loss Loss Win Loss Loss

    opened by kritollm 12
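The varying iteration counts above come from the randomly initialized weights, not the hardware speed. One way to get reproducible runs (a sketch; overriding the global RNG is a hack, not a synaptic feature) is to seed Math.random before constructing the network:

```javascript
// mulberry32: a tiny, well-known deterministic PRNG (not part of synaptic)
function mulberry32(seed) {
    return function () {
        var t = (seed += 0x6D2B79F5);
        t = Math.imul(t ^ (t >>> 15), t | 1);
        t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // in [0, 1)
    };
}

// Override the global RNG before constructing the network so that
// every machine starts from identical initial weights.
Math.random = mulberry32(42);
var a = Math.random();

// Re-seeding reproduces the exact same sequence
Math.random = mulberry32(42);
var b = Math.random();
// a and b are identical
```

Note that replacing Math.random affects everything in the process, so restore the original function when you're done.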
  • Moving to ES6 codebase & drop Layer.add & mark Network.set as deprecated


    So, the big story was that I specifically asked some of my data-science friends, and they said that Network.set(layers: Layers) and Layer.add(neuron: Neuron) are actually useless for any big project: there is a stage when you create a network, and then there's a stage when you actually use it. So the data flow is config => Neuron[][] => Layer[] => Network.

    I've investigated a bit and discovered that Layer.add is used only in Network.fromJSON (and is not even mentioned in the wiki documentation). So I've overloaded the Layer constructor, so that its signature is now not only Layer(size: number, label?: string) but also Layer(neurons: Neuron[], label?: string).

    The second thing, Network.set, is a bit more complicated, as it's documented and used in the examples. The thing is that as we're extending the Network class it actually should be super(layers) inside the constructor, not network.set(layers), as there's no other use for it. I was not able to remove it (as it's already documented and is possibly in use right now), but I've added a deprecation notice and changed the examples to work with super(layers) instead of .set.

    Now that classes and inheritance are supported in node.js and in most browsers, I've converted the codebase to be ES6-compatible: I used lebab (lebab --replace src --transform arrow,let,arg-spread,obj-method,obj-shorthand,no-strict,class,template,default-param), applied some by-hand fixes (a single cyclic dependency fix, some wrong const applications and exports), and, as we're building it with webpack anyway, removed all of the legacy if (typeof module !== 'undefined') checks.

    Fun fact: without Babel-powered transpilation this code fails only in the latest Safari (I did not test IE11, though) and in Node.js v5 and lower (the currently active version is v6, and it will become LTS in October, as I mentioned previously). So a common Node.js user will simply be able to require('synaptic') or require('synaptic/src') and have native classes and so on.

    Additional change: separating Architects into separate files.

    One more additional change: dropping in-code universal module in favor of "UMD modules" of webpack - in order to simplify the code.

    One more time: this code is nearly identical to what was there previously, but it's class-based, looks nice, and works at a more native level for Node 6 and the latest browsers. I will definitely understand if you reject this PR, but you cannot resist the future for long :)

    opened by Jabher 12
  • Any thoughts / docs / examples about reinforcement learning


    I'd like to train the network with reinforcement learning. There has been some talk about that:

    • http://stackoverflow.com/questions/10722064/training-a-neural-network-with-reinforcement-learning
    • http://stackoverflow.com/questions/9010576/neural-network-learning-without-training-values

    Maybe you can provide some examples / wiki about that area? I am clearly lacking some knowledge here. In any case, any pointers, examples or snippets regarding implementation of reinforcement learning ANN using synaptic would be great!

    opened by rikkertkoppes 11
  • Homepage Demo?


    Where can I find the code for the demo on the homepage? I inspected the page but I couldn't find the script controlling the canvas, let alone the creature.js and world.js scripts. I would really like to see all the code behind the demo, is it possible for somebody to direct me to the source?

    opened by woojoo666 11
  • Training a neural network with continuous data from an accelerometer


    Hi, I found synaptic very useful for building multilayer networks and a few other things. Now I am trying to train a network with accelerometer data and want to predict the accelerometer reading 50ms ahead. But I found that the trainer uses one-hot encoding in synaptic, which is not useful for this (I guess). So is there any other way to use the trainer to train with continuous data from the accelerometer without normalizing?

    opened by SrujithPoondla 8
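Regarding the question above: the trainer accepts any numeric vectors, not just one-hot ones. A common sketch for continuous sensor data (the value range here is an assumption about the accelerometer, not something synaptic prescribes) is min-max scaling into [0, 1], then scaling predictions back:

```javascript
// Assumed accelerometer range in m/s^2; adjust to the real sensor
var MIN = -20, MAX = 20;

function scale(value) {   // sensor reading -> [0, 1]
    return (value - MIN) / (MAX - MIN);
}

function unscale(value) { // network output -> sensor units
    return value * (MAX - MIN) + MIN;
}

// A training pair: three past readings predict the next one
var readings = [0.4, -1.2, 3.5, 3.9];
var sample = {
    input: readings.slice(0, 3).map(scale),
    output: [scale(readings[3])]
};
```

Some form of scaling is hard to avoid here, because the sigmoid outputs used by the trainer live in (0, 1).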
  • Multi Hidden-Layer Network not learning appropriately.


    There is something wrong with the learning process of networks with more than 1 hidden layer.

    I have a small dataset, 500 lines with 10 inputs and 2 outputs each (all inputs/outputs are already between 0 and 1).

    When I run a simple NN with 1 hidden layer, it gets highly optimized in a few epochs and error drops gradually until it reaches a plateau (like expected).

    Ex:

        var myPerceptron = new Architect.Perceptron(10, 10, 2);
        var myTrainer = new Trainer(myPerceptron);
        var trainingSet = [
            { input: [0,0,0,0,0,0,0,0,0,0], output: [0,1]},
            { input: [1,1,1,1,1,1,1,1,1,1], output: [1,0]},
            ...
        ]

        myTrainer.train(trainingSet, { rate: .1, iterations: 10000, error: .0005, shuffle: true, log: 100, cost: Trainer.cost.CROSS_ENTROPY });

    But if I change its structure to 2 hidden layers, it never gets optimized, even with hundreds or thousands of epochs.

        var myPerceptron = new Architect.Perceptron(10, 10, 10, 2); // small change here
        var myTrainer = new Trainer(myPerceptron);
        var trainingSet = [
            { input: [0,0,0,0,0,0,0,0,0,0], output: [0,1]},
            { input: [1,1,1,1,1,1,1,1,1,1], output: [1,0]},
            ...
        ]

        myTrainer.train(trainingSet, { rate: .1, iterations: 10000, error: .0005, shuffle: true, log: 100, cost: Trainer.cost.CROSS_ENTROPY });

    There is something wrong with the learning process when you add extra hidden layers. An extra layer should, at worst, make learning slower, not stop it altogether.

    I can add the source-code or input/output data if needed.

    opened by MateusAngelo97 0
  • Can I get the H and C states of the LSTM?


    Hi. I wish to build an encoder-decoder model, but I don't know where the H and C states of the encoder are, or how to use them as the initial state of the decoder.

    Pseudocode:
    [outputs, stateH, stateC] = LSTM()
    [outputs] = LSTM(initial_state=[stateH, stateC])
    

    Thank you

    opened by ghost 0
  • Added ShuffleInplace implementation to Network's Worker source


    This PR solves this issue. The problem was that the Worker did not have the shuffleInplace function implemented.

    To replicate the error before this commit, try to train a network asynchronously with shuffle: true in the config, like the following:

        trainer.trainAsync(trainingSets[channelId], {
            iterations: 2500,
            log: 250,
            rate: 0.01,
            error: 0.01,
            shuffle: true,
            cost: Trainer.cost.CROSS_ENTROPY
        })
    

    Both npm run test and npm run build worked locally after this alteration, but I don't know how to write a failing/succeeding test with this stack, so if someone does, feel free to replace this pull request entirely. I also tested locally with the built /dist/synaptic.js file on a project and validated that it fixes the issue.

    opened by GuilhermeRossato 0
  • Running synaptic on a website (browser) that restricts 'unsafe-eval' with CSP


    I'm using synaptic in a third-party product on the client side (basically, our code is injected into our clients' websites).

    We encountered an issue with a website that uses CSP (Content-Security-Policy) and does not allow 'unsafe-eval'. Eval includes every method of executing a string as JavaScript code:

    1. eval('var x = 1');
    2. new Function('var x = 1');

    It seems that the library uses 'new Function' in a couple of places. After further investigation I realized that this code can be refactored to avoid 'new Function' in favor of plain code.

    I can help fix this issue; I just want to know whether there is a "real" reason for using this method rather than actual code.

    opened by adi-darachi 0