Deep Neural Network Sandbox for JavaScript.

Overview

Dannjs


Train a neural network with your data & save its trained state!



Installation

CDN:

<script src="https://cdn.jsdelivr.net/gh/matiasvlevi/[email protected]/build/dann.min.js"></script>

Node:

npm i dannjs

dannjs on npmjs.com

Getting started

Node Imports

Object types from the library can be imported like this:

const dn = require('dannjs');
const Dann = dn.dann;
const Layer = dn.layer;
const Matrix = dn.matrix;

The objects containing functions can be imported this way:

const dn = require('dannjs');
const lossfuncs = dn.lossfuncs;
const activations = dn.activations;
const poolfuncs = dn.poolfuncs;
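
These are plain objects that map names to functions, so you can, for example, list the available names (a quick sanity check, not an official API):

console.log(Object.keys(lossfuncs));   // available loss function names
console.log(Object.keys(activations)); // available activation names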

Basic model construction

Setting up a small (4,6,6,2) neural network.

const nn = new Dann(4, 2);           // 4 input neurons, 2 output neurons
nn.addHiddenLayer(6, 'leakyReLU');   // first hidden layer: 6 neurons
nn.addHiddenLayer(6, 'leakyReLU');   // second hidden layer: 6 neurons
nn.outputActivation('tanH');         // activation for the output layer
nn.makeWeights();                    // initialize the weights
nn.lr = 0.0001;                      // set the learning rate
nn.log({ details: true });           // print a detailed summary of the model

Train by backpropagation

Training with a dataset. Note that the XOR data below has 2 inputs and 1 output, so the model trained on it needs matching dimensions (e.g. created with new Dann(2, 1)).

//XOR 2 inputs, 1 output
const dataset = [
    {
        input: [0,0],
        output: [0]
    },
    {
        input: [1,0],
        output: [1]
    },
    {
        input: [0,1],
        output: [1]
    },
    {
        input: [1,1],
        output: [0]
    }
];

// train 1 epoch
for (const data of dataset) {
    nn.backpropagate(data.input, data.output);
    console.log(nn.loss);
}
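
After a single epoch the loss is usually still high; training normally loops over the dataset many times. A minimal continuation sketch, assuming a model whose dimensions match this dataset (2 inputs, 1 output):

// train many epochs, then inspect the predictions
for (let epoch = 0; epoch < 10000; epoch++) {
    for (const data of dataset) {
        nn.backpropagate(data.input, data.output);
    }
}
for (const data of dataset) {
    // decimals rounds the returned output values
    console.log(data.input, nn.feedForward(data.input, { decimals: 3 }));
}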

Train by mutation

For neuroevolution simulations. Works best with small models & large population size.

const populationSize = 1000;
let newGeneration = [];

for (let i = 0; i < populationSize; i++) {

    // parentNN would be the best nn from the past generation.
    // Clone it first (e.g. via toJSON/createFromJSON) so each child
    // mutates independently; assigning parentNN directly would mutate
    // the same object every iteration.
    const childNN = Dann.createFromJSON(parentNN.toJSON());
    childNN.mutateRandom(0.01, 0.65);

    newGeneration.push(childNN);
}


Demo

AI predicts San Francisco housing prices.
More examples & demos here

Contribute

Contributor docs

Report Bugs

Report an issue




Contact

[email protected]

License

MIT

Comments
  • [🔷 Feature request ]: XOR multiple inputs


    Feature

    A function that would create an XOR dataset with X number of inputs.

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [x] Datasets
    • [ ] Documentation
    • [ ] tests & examples
    • [ ] Other

    Description

We currently have a static 2-input XOR dataset for testing/example purposes. We also have a makeBinary function that creates a dataset of binary digits with X bits, so you can create a custom dataset to test a neural network. What if XOR had a similar function allowing the creation of a 3- or 4-input XOR dataset?

    Examples

    const dataset = makeXOR(3);
    console.log(dataset);
    
    [
     {
      input: [0, 0, 0],
      output: [0]
     },
     {
      input: [0, 0, 1],
      output: [1]
     },
     {
      input: [0, 1, 0],
      output: [1]
     },
     {
      input: [0, 1, 1],
      output: [0]
     },
    //...
     {
      input: [1, 1, 1],
      output: [1]
     },
    ]
    

    This is a 3-input XOR truth table, for reference.
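
    A minimal sketch of how such a helper could work, treating the N-input XOR output as the parity of the inputs (makeXOR here is the proposed name, not existing library code):

    function makeXOR(n) {
      // 2^n rows; the output is the XOR (parity) of the n binary inputs
      const dataset = [];
      for (let i = 0; i < (1 << n); i++) {
        const input = [];
        let parity = 0;
        for (let bit = n - 1; bit >= 0; bit--) {
          const v = (i >> bit) & 1;
          input.push(v);
          parity ^= v;
        }
        dataset.push({ input: input, output: [parity] });
      }
      return dataset;
    }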

    enhancement good first issue Priority: Low 
    opened by matiasvlevi 8
  • [🔺 BUG ]: Grunt error | npm ERR! code ELIFECYCLE errno3


    We are getting errors with npm run test and npm run build


    I am not sure why this is happening. I am using Node 14.17.1 on Windows, by the way.

    Originally posted by @Labnann in https://github.com/matiasvlevi/Dann/issues/22#issuecomment-869055913

    opened by matiasvlevi 6
  • Dann.load() not working


    Seems like net loading is broken. On every net.load('name') I get Uncaught ReferenceError: name is not defined onchange https://null.jsbin.com/runner:1

    Dann is loaded from https://cdn.jsdelivr.net/gh/matiasvlevi/[email protected]/build/dann.min.js

    Check it out here: https://jsbin.com/kedijibewi/edit?js,console,output

    It's the only major bug preventing me from deeper experiments with Dann.

    opened by davay42 3
  • [🔷 Feature request ] Can we add a Change Log / Release Notes ?


    Feature

    Unless I've missed it, there doesn't seem to be any public-facing documentation that describes the changes introduced in a given release.

    E.g., comparing v2.4.0 to v2.4.1c it looks like some of the recent code-level changes may include:

    • renaming the various named activation/loss functions to be all lower case (but backward compatible with the mixedCase names because of the name.toLocaleLowerCase() calls added in parallel?)
    • minor changes to the minimized toFunction output (maybe a bug fix?)
    • a new saveLoss configuration option (that populates nn.losses?)

    I suspect there are no "breaking" changes here (and didn't notice anything obviously broken when I upgraded locally) but it would be helpful to have some context for what has changed beyond spelunking in the diff between release tags. (Also if saveLoss does what I think it does that would be neat to add to the log(options) documentation.)

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [X] Documentation
    • [ ] tests & examples
    • [ ] Other

    Description

    (See above)

    Examples

    For an elaborate example of this kind of documentation, see Electron.js's release notes, but for what it's worth I'd be satisfied with a simple CHANGES.md file or whatever in the root directory that describes:

    a. breaking changes, b. new features, and c. bugs fixed

    in a few short bullet points (and the "bugs fixed" one is probably optional, since "fixed in release N" is probably being added to the actual bug report anyway, and if anyone knows or cares about a specific issue they can probably find that info there).

    It doesn't need to be exhaustive or especially detailed; I'm just looking for a clue from the contributors about the intended or expected impact of the changes in a given release.

    Additional Context

    Thanks for this library BTW. I don't mean to appear ungrateful, but even rudimentary release notes would make it easier to upgrade versions with confidence, and it's hard for someone more on the "API consumer" side of this to contribute to that. (It's probably best for the contributors directly involved with the changes, or at least the troubleshooting, to capture that info.)

    enhancement 
    opened by rodw 2
  • [🔶 Change request ]: Activations should be case insensitive


    Change

    We could use toLocaleLowerCase so the activation functions can be specified without worrying about capitalization.

    Type

    • [x] Dann
    • [ ] Matrix
    • [x] Layer
    • [x] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [ ] tests & examples
    • [ ] Other

    Examples

    These statements would all be valid if we ignore capitalization:

    nn.addHiddenLayer(x, 'leakyrelu');
    nn.addHiddenLayer(x, 'LEAKYRELU');
    nn.addHiddenLayer(x, 'leakyReLU');
    

    Additional context

    This also means all activation names need to be stored in lowercase, and when we add a new activation with Add.activation, we should also convert the input name to lowercase (see the sketch after the list below).

    Changes should be in these methods:

    • Layer.stringToFunc parses the activation names into an object containing the derivative & the activation.
    • Add.activation adds new activations, and also needs to respect the case insensitivity.
    • Other Dann methods using activation strings will need some minor adjustments.
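
    A minimal sketch of the normalization, assuming an internal dictionary keyed by lowercase names (the dictionary shape here is illustrative, not the library's actual data structure):

    // activations stored under lowercase keys
    const activations = {
      leakyrelu: {
        func: (x) => (x > 0 ? x : 0.05 * x),
        derivative: (x) => (x > 0 ? 1 : 0.05),
      },
    };

    // normalize the user-specified name before the lookup
    function stringToFunc(name) {
      return activations[name.toLocaleLowerCase()];
    }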
    enhancement good first issue 
    opened by matiasvlevi 2
  • [🔷 Feature request ]: Sin wave dataset for Rann


    Feature

    Segmented sine wave dataset for Rann.

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [x] Datasets
    • [ ] Documentation
    • [ ] tests & examples
    • [ ] Other

    Description

    Just like we have the XOR & binary digit testing datasets for Dann models, it would be nice to have a testing dataset for the upcoming Rann model. The changes would have to be applied to the origin/RNN branch. The method creating the dataset should be referenced in the module.exports in src/io/exports.js.

    The source file for this method should be in src/core/datasets/

    Context

    We train a Rann model by feeding it an array of sequences. The sequence lengths must be the same as the number of input neurons the Rann model has.

    rnn.train([
      [0, 1],
      [2, 3],
      [4, 5]
    ]);
    

    We could technically store a sine wave as an array of sequences to later train a model with.

    let data = [
      [0, sinus values..., ],
      [sinus values continuation..., ],
      [sinus values continuation..., ],
      [sinus values continuation..., ],
    ]
    

    Example

    Here is an example of how the method could work.

    let dataset = makeSinWave( sequence_length, total_length, resolution );
    console.log(dataset);
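
    A minimal sketch of such a method; makeSinWave and its parameters are the proposed names, and the exact output format would have to match what Rann.prototype.train expects:

    function makeSinWave(sequence_length, total_length, resolution) {
      // sample the sine wave at `resolution`-sized steps
      const samples = [];
      let x = 0;
      while (samples.length < total_length) {
        samples.push(Math.sin(x));
        x += resolution;
      }
      // split the samples into sequences of `sequence_length` values
      const dataset = [];
      for (let i = 0; i + sequence_length <= samples.length; i += sequence_length) {
        dataset.push(samples.slice(i, i + sequence_length));
      }
      return dataset;
    }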
    
    enhancement good first issue Priority: Medium 
    opened by matiasvlevi 2
  • [🔷 Feature request ]: Aliases for feedForward & backpropagate


    Feature

    We could have aliases for the feedForward & backpropagate methods of the Dann class.

    Type

    • [x] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [ ] tests & examples
    • [ ] Other

    Description

    These aliases would make method names a little more uniform when we add in the Rann class for RNNs, as feedForward & backpropagate would not be the most accurate terms for RNNs. See the issue on RNNs here.

    Feedforward

    Is located in src/classes/dann/methods/feedForward.js

    backpropagate

    Is located in src/classes/dann/methods/backpropagate.js

    Examples

    These would be the aliases: Dann.prototype.backpropagate to Dann.prototype.train, and Dann.prototype.feedForward to Dann.prototype.feed.
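
    Since these are pure aliases, the implementation sketch is short (assuming the aliases simply point at the existing prototype methods):

    // the aliases reuse the existing implementations
    Dann.prototype.train = Dann.prototype.backpropagate;
    Dann.prototype.feed = Dann.prototype.feedForward;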

    Note

    It is important to note that we do not want to remove the backpropagate & feedForward names. I think having machine learning terms for methods helps to get a grasp of what the neural network is doing, since you can look up the terms.

    enhancement good first issue Priority: Medium 
    opened by matiasvlevi 2
  • [🔺 malformed arrows when logging nn.toFunction ]


    Bug description

    console.log outputs a function with malformed arrow statements

    To Reproduce

    // logging the following snippet toFunction will produce malformed arrow statements
    // ex. 
    // a[1]=(1+Math.exp(-t)=>);a[2]=(1+Math.exp(-t)=>);a[3]=(1+Math.exp(-t)=>);  
    
    // initialize Dann with 1 input and 1 output  
    let nn = new Dann(1, 1)
    
    // add two hidden layers with 3 neurons each  
    nn.addHiddenLayer(3, "sigmoid")
    nn.addHiddenLayer(3, "sigmoid")
    
    // how to calculate output  
    nn.outputActivation("sigmoid")
    
    // assign random weights to layers  
    nn.makeWeights()
    
    // How fast should it learn?  
    nn.lr = 0.1
    
    // mean squared error loss function  
    nn.setLossFunction("mse")
    
    // show info about the neural network  
    nn.log()
    
    
    // Training data  
    for(let count=0; count < 1000; count++) {
        let randNum = Math.random()*10 - 5
        nn.backpropagate([randNum], [randNum < 0 ? 0 : 1])
    }
    console.log(nn.loss)
    
    // log the function
    console.log(nn.toFunction())
    
    // Logging the function produces the following -   
    // function myDannFunction(input){let w=[];w[0]=[[118.16350397261459],[125.7198197305115],[-61.353668013979465]];w[1]=[[-0.3268324018128853,-0.10547783949606436,1.2385617474541086],[-4.756040201258138,-5.530586211507047,2.0654393849840065],[-6.638909077737027,-6.373098375160245,3.4436506766914343]];w[2]=[[-0.27843549703223947,-1.8499126518203834,-2.4900563442361467]];let b=[];b[0]=[[6.431720708819896],[5.712699746115524],[-8.928168116731628]];b[1]=[[-0.8442173421026961],[-1.8347328438421329],[-1.6789862537895264]];b[2]=[[1.2994856424588022]];let c=[1,3,3,1];let a=[];a[1]=(1+Math.exp(-t)=>);a[2]=(1+Math.exp(-t)=>);a[3]=(1+Math.exp(-t)=>);let l=[];l[0]=[];for(let i=0;i<1;i++){l[0][i]=[input[i]]};for(let i=1;i<4;i++){l[i]=[];for(let j=0;j<c[i];j++){l[i][j]=[0]}};for(let m=0;m<3;m++){for(let i=0;i<w[m].length;i++){for(let j=0;j<l[m][0].length;j++){let sum=0;for(let k=0;k<w[m][0].length;k++){sum+=w[m][i][k]*l[m][k][j]};l[m+1][i][j]=sum}};for(let i=0;i<l[m+1].length;i++){for(let j=0;j<l[m+1][0].length;j++){l[m+1][i][j]=l[m+1][i][j]+b[m][i][j]}};for(let i=0;i<l[m+1].length;i++){for(let j=0;j<l[m+1][0].length;j++){l[m+1][i][j]=a[m+1](l[m+1][i][j])}}};let o=[];for(let i=0;i<1;i++){o[i]=l[3][i][0]};return o}
    
    // passing data to the model  
    nn.feedForward([25], {log: true, decimals: 3})
    
    

    Expected behavior

    Generate a usable function

    Actual behavior

    Logging produces the following statements:

    a[1]=(1+Math.exp(-t)=>);
    a[2]=(1+Math.exp(-t)=>);
    a[3]=(1+Math.exp(-t)=>);  
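
    For reference, a well-formed version of these statements would presumably be complete sigmoid arrow functions, e.g.:

    a[1] = (t) => 1 / (1 + Math.exp(-t));
    a[2] = (t) => 1 / (1 + Math.exp(-t));
    a[3] = (t) => 1 / (1 + Math.exp(-t));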
    

    Platform

    • [x] Browser
    • [ ] Nodejs

    bug 
    opened by dsmith73 1
  • [🔶 Change request ]:  Activations should be case insensitive


    #44 Activations should be case insensitive

    • Making the activations all lowercase to be case insensitive.
    • activation.js, name of activation made lowercase before inserting into activations dictionary.
    • addHiddenLayer.js, act - name of activation made lowercase before checking the activation name from dictionary.
    • outputActivation.js, act - name of activation made lowercase before checking the activation name
    • setFunc.js, act - activation name made lowercase
    • stringTofunc.js, act - activation name made lowercase
    • actfuncs.js, made each of the activation names all lowercase in the definition of activations
    • test/unit/classes/dann.js - update tests to have all activations functions be lowercase
    • test/unit/core/functions/actfuncs.js - update tests to have all activations functions be lowercase
    • test/unit/core/functions/add.js - include a test that a newly added activation has its name converted to lowercase, and fix a minor typo in a test.
    opened by and1can 1
  • [🔶 Change request ]: Mobile friendly documentation


    Change

    We could adapt the documentation for mobile devices.

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [x] Documentation
    • [ ] tests & examples
    • [ ] Other

    Examples

    Here is an example of what could be done to the documentation on mobile (a mockup image is attached to the original issue).

    Additional context: We are using Handlebars with YUIDoc for the documentation. Even if you have never used YUIDoc or Handlebars before, these tasks only require CSS knowledge, since the content/elements don't change.

    The documentation templates are located in docs/yuidoc-dannjs-theme/partials/, and all the CSS is located in docs/yuidoc-dannjs-theme/assets/css/.

    documentation enhancement good first issue Priority: High 
    opened by matiasvlevi 1
  • [🔷 Feature request ]: numToString, reverse from stringToNum.


    Feature

    A static function that takes in an array of values and returns a string, essentially Rann.stringToNum in reverse. This method should be added to the RNN branch.

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [x] Rann
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [ ] tests & examples
    • [ ] Other

    Examples

    Existing stringToNum method

    Rann.stringToNum('hey');
    // Returns [0.7604166666666666, 0.7291666666666666, 0.9375]
    

    numToString method

    Rann.numToString([0.7604166666666666, 0.7291666666666666, 0.9375]);
    // Should return 'hey'
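
    Judging from the sample values, stringToNum appears to map each character to (charCode - 31) / 96: 'h' has charCode 104, and (104 - 31) / 96 = 0.7604166666666666. Assuming that inferred encoding, a sketch of the inverse:

    // hypothetical numToString: inverts the inferred (charCode - 31) / 96 mapping
    Rann.numToString = function (values) {
      return values
        .map((v) => String.fromCharCode(Math.round(v * 96) + 31))
        .join('');
    };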
    

    Additional context: Be sure to commit changes to the RNN branch.

    enhancement good first issue Priority: Medium 
    opened by matiasvlevi 1
  • [🔷 Feature request ] Batch Back-Propagation


    Feature

    Batch Back-Propagation / Training

    Type

    • [x] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [ ] tests & examples
    • [ ] Other

    Description

    In batch back-propagation, we split the full input set into smaller batches. During training, the model feeds forward through all the input/target pairs of a batch without changing the weights/biases, accumulating the gradient instead (I am not sure of the math behind it). Once the batch is complete, it updates the weights and biases based on an overall picture of the batch rather than a single input. This could help the model find the right weights and biases much faster than single-input back-propagation.

    Examples

    How I would expect the final method to look like.

    //say arch = 1 -> 2 -> 2
    
    input = [1]
    target = [1, 0]
    nn.backpropagate(input, target)
    // for one input/target pair
    
    input = [[1], [-1], [-5], [5], [2]]
    target = [[1, 0], [0, 1], [0, 1], [1, 0], [1, 0]]
    
    nn.backpropagate(input, target)
    // the expression stays the same; internally it just needs to check
    // Array.isArray(input[0]) and batch-train if true.
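
    A minimal sketch of the dispatch idea, built on the existing single-pair backpropagate; note that this fallback still updates per sample, whereas true batch training would accumulate gradients and apply them once per batch:

    // hypothetical wrapper around the current API
    function trainBatch(nn, input, target) {
      if (Array.isArray(input[0])) {
        // batch of input/target pairs
        for (let i = 0; i < input.length; i++) {
          nn.backpropagate(input[i], target[i]);
        }
      } else {
        // single input/target pair
        nn.backpropagate(input, target);
      }
    }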
    

    Additional context

    I think this is available in all major ML libraries due to its efficiency; it would also help in creating distributed training capabilities.

    enhancement good first issue Priority: Medium 
    opened by solid-droid 4
  • [🔷 Feature request ]: Derivative of Softmax


    Feature

    Softmax activation function.

    Type

    • [x] Dann
    • [ ] Matrix
    • [ ] Layer
    • [x] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [ ] tests & examples
    • [ ] Other

    Description

    Here is the softmax function I wrote not so long ago:

    /**
    * Softmax function
    * @method softmax
    * @param z An array of numbers (vector)
    * @return An array of numbers (vector)
    **/
    function softmax(z) {
      let ans = [];
      let denom = 0;
      for (let j = 0; j < z.length; j++) {
        denom += Math.exp(z[j]);
      }
      for (let i = 0; i < z.length; i++) {
        let top = Math.exp(z[i]);
        ans.push(top / denom);
      }
      return ans;
    }
    

    This function is not implemented in the repository yet.

    For this function to work in a neural network, we would need to write the derivative of this function. This might be a difficult task, since this function takes in & outputs vectors, represented as arrays.

    These two functions would need to be implemented in src/core/functions/actfuncs.js.

    For this function to work with a Dann model, we would need to change how activations are handled, since it expects a vector instead of a number value. I could work on that once the derivative is implemented.
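
    For reference, the derivative of softmax is a Jacobian matrix rather than a single value: d softmax(z)[i] / d z[j] = s[i] * (delta(i, j) - s[j]), where s = softmax(z). A sketch building on the softmax function above:

    /**
    * Jacobian of the softmax function
    * @method softmaxDerivative
    * @param z An array of numbers (vector)
    * @return A 2D array where out[i][j] = d softmax(z)[i] / d z[j]
    **/
    function softmaxDerivative(z) {
      const s = softmax(z);
      const jac = [];
      for (let i = 0; i < s.length; i++) {
        jac.push([]);
        for (let j = 0; j < s.length; j++) {
          jac[i].push(s[i] * ((i === j ? 1 : 0) - s[j]));
        }
      }
      return jac;
    }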

    enhancement Priority: Low math 
    opened by matiasvlevi 0
  • [🔷 Feature request ]: RNNs for Dannjs


    Feature

    Recurrent Neural Networks for Dannjs.

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [ ] tests & examples
    • [ ] Other
    • [x] Rann (New category for RNNs)

    Description

    Adding a new class for RNNs would be interesting. The class would likely be called Rann for Recurrent Artificial Neural Network, instead of the Dann acronym which stands for Deep Artificial Neural Network. This would be a pretty time-consuming feature to implement. It would require lots of testing before we can publish the feature to the master branch. Maybe creating another testing/dev branch for this feature would be necessary.

    Examples

    These are really early examples and might be completely different once we implement the feature. This would create the RNN:

    const rnn = new Rann(input_neurons, hidden_neurons, output_neurons);
    

    We could feed a Rann a set of sequences

    let rnn = new Rann(2, 20, 2);
    rnn.train([
      [1, 2],
      [3, 4],
      [5, 6],
      [7, 8]
    ],
    [9, 10]
    );
    rnn.feed([
      [1, 2],
      [3, 4]
    ]);
    // would return [5, 6]
    

    Note

    This is, of course, early speculation about a big addition to the library. It might take quite some time to create & test, and usability might change a whole lot throughout development. I'm assigning this issue to myself because I want to start working on it, but help and criticism are welcome.

    enhancement Priority: High 
    opened by matiasvlevi 2
  • [🔷 Feature request ]: More manual tests & examples


    Feature

    More manual tests in test/manual-tests/browser and test/manual-tests/node.

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [x] tests & examples
    • [ ] Other

    Description

    Examples have to be simple and easy to understand. You can use the template test/manual-tests/browser/empty-example and test/manual-tests/node/empty-example to create a new one. Maybe they could also be run as tests along with mocha unit tests.

    enhancement good first issue Priority: Medium 
    opened by matiasvlevi 0
  • [🔷 Feature request ]: Inline docs example linter


    Feature

    Linter for documentation examples.

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [x] tests & examples
    • [ ] Other

    Description

    A grunt task that would lint the examples in the inline documentation & overwrite them in between the <code> tags.

    documentation enhancement Priority: Low 
    opened by matiasvlevi 0
  • [🔷 Feature request ]: Browser unit tests


    Feature

    Unit tests in a browser environment

    Type

    • [ ] Dann
    • [ ] Matrix
    • [ ] Layer
    • [ ] Activation functions
    • [ ] Loss functions
    • [ ] Pool functions
    • [ ] Datasets
    • [ ] Documentation
    • [x] tests & examples
    • [ ] Other

    Description

    Running the mocha unit tests for the browser. We currently have unit tests with mocha that run in the command line with node. Having the same tests in a browser environment would eliminate potential errors in the future.

    enhancement Priority: Medium 
    opened by matiasvlevi 0
Releases (v2.4.1e)
  • v2.4.1e(Apr 29, 2022)

    Patches
    • Fixed isEs6 missing cases of non-minified ES6 methods.
    • Removed deprecated & commented-out Dann.prototype.save & Dann.prototype.load methods in the unminified build.
  • v2.4.1c(Mar 26, 2022)

    Fixes

    • Dann.prototype.toFunction ES6 activation function fixes for browser & node, pointed out by #48.

    • Activation function names are now all lowercase, and activation names specified by the user are passed through name.toLocaleLowerCase, which allows mixed cases and backwards compatibility. Feature implemented by @and1can through issue #44

    • Removed Dann.prototype.losses, which was used to store loss values when the saveLoss option was set to true in calls to Dann.prototype.backpropagate. This feature did not need to be built into the library; the preferred way to achieve something like this would be:

    let savedLosses = [];
    for (...) {
      nn.backpropagate(input, output);
      savedLosses.push(nn.loss);
    }
    

    This allows more control over when to save a loss, as opposed to always having to save a loss value whenever Dann.prototype.backpropagate is called with saveLoss set to true.

  • v2.4.0(Nov 14, 2021)

    Changes

    • Added asLabel option for feedForward and feed.
    nn.feed([1, 1]) // Outputs an array
    nn.feed([1, 1], { asLabel: true }) // Outputs the index of the largest output value
    
    • Changed exports: classes are now capitalized; the old uncapitalized names are still available for backwards compatibility.
  • v2.3.14(Nov 9, 2021)

    Changes

    • Cleaner Dann.prototype.log, Dann.prototype.feedForward, Dann.prototype.backpropagate methods.
    • Added a validity check system to work with the error handling.
    • Restored logo in manual browser tests
    • Added a static Dann.print method to print either as a log or a table, instead of using console.log & console.table in if statements.
  • v2.3.13(Sep 5, 2021)

  • v2.3.12(Aug 30, 2021)

  • v2.3.11(Aug 29, 2021)

    Changes

    • Options now use fewer conditionals
    • Added a new dropout option, which lets you set a value between 0 and 1 to determine the chance of a neuron being idle during a backward pass.

    Here is an example with a 10% chance that a neuron becomes inactive:

    nn.backpropagate([your_inputs],[your_outputs],{ dropout : 0.1 });
    


  • v2.2.11(Aug 15, 2021)

    Changes

    • Added a quantile loss function & support for a percentile value. Here is documentation about loss functions, including quantile loss.

    Dev changes

    • Fixed missing sig task
    • Added unit tests for quantile loss & percentile value.
  • v2.2.10(Jun 28, 2021)

    Changes

    • Added the makeXOR method, which allows for the creation of X-input XOR datasets
    • Added Dann.prototype.feed alias for Dann.prototype.feedForward
    • Added Dann.prototype.train alias for Dann.prototype.backpropagate

    Dev changes

    • Fixed dev dependencies
  • v2.2.9(Jun 17, 2021)


    Changes

    • Added Add class
      • Add.activation allows the user to add custom activation functions
      • Add.loss allows the user to add custom loss functions

    Add documentation can be found here
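
    A hedged usage sketch; the exact signature is an assumption here (see the Add documentation above), but presumably it takes a name, the activation, and its derivative:

    const dn = require('dannjs');

    // hypothetical custom activation, assuming Add.activation(name, func, derivative)
    dn.add.activation('myrelu', (x) => Math.max(0, x), (x) => (x > 0 ? 1 : 0));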

    Dev changes

    • Added grunt tasks for documentation build
    • Added unit tests for Add and its methods
  • v2.2.8(Jun 8, 2021)


    Changes

    • added binary & softplus activations
    • documentation rectification

    Dev Changes

    • added unit tests for new activations
    • added deprecation message in documentation
    • added npm shortcut commands
      • npm run browser launches the empty-example html file in a browser
      • npm run doc:show launches the built documentation in a browser
  • v2.2.7(May 29, 2021)


    Changes

    • Added softsign activation function
    • Added sinc activation function
    • Added unit tests for sinc & softsign functions
    • Fixed unit tests for activation functions: changed assert.equal to assert.closeTo
  • v2.2.6e(May 22, 2021)

  • v2.2.6d(May 20, 2021)

  • v2.2.6c(May 19, 2021)

  • v2.2.6b(May 19, 2021)

  • v2.2.6(May 19, 2021)


    Changes

    • Added Dann.prototype.toFunction(); to convert a Dann model to a standalone function.

    • Added Dann.prototype.mapWeights(); to map a function onto a Dann model's weights.

    • Removed Dann.prototype.save(); for both node & browser.

    • Removed Dann.prototype.load(); for both node & browser.

    • Removed npm dependency fs

    • Removed npm dependency fast-csv

  • v2.2.5b(May 7, 2021)

  • v2.2.5(May 2, 2021)

  • v2.2.4f(Mar 19, 2021)

    JSON functions now renamed

    • createFromObject(); is now createFromJSON();
    • dannObject(); is now toJSON();
    • applyToModel(); is now fromJSON();
  • v2.2.4e(Feb 12, 2021)

    • Changes in contribution docs & devsetup.bat; you might want to rerun devsetup.bat to get the updated dev tools.
    • minor changes in Layer
  • v2.2.4d(Feb 9, 2021)

  • v2.2.4c(Feb 1, 2021)

  • v2.2.4b(Jan 31, 2021)

    Changes


    Added XOR dataset.

    Require it like so:

    const XOR = require('dannjs').xor;
    

    It looks like this:

    [
      { input: [ 1, 0 ], output: [ 1 ] },
      { input: [ 0, 1 ], output: [ 1 ] },
      { input: [ 0, 0 ], output: [ 0 ] },
      { input: [ 1, 1 ], output: [ 0 ] }
    ]
    



    Added a function that creates binary datasets following a math rule.

    Require it like so:

    const makeBinary = dn.makeBinary;
    

    Use it like this:

    const dataset = makeBinary(3);
    console.log(dataset);
    

    which outputs:

    [
      { input: [ 0, 0, 0 ], target: [ 0, 0, 1 ] },
      { input: [ 0, 0, 1 ], target: [ 0, 1, 0 ] },
      { input: [ 0, 1, 0 ], target: [ 0, 1, 1 ] },
      { input: [ 0, 1, 1 ], target: [ 1, 0, 0 ] },
      { input: [ 1, 0, 0 ], target: [ 1, 0, 1 ] },
      { input: [ 1, 0, 1 ], target: [ 1, 1, 0 ] },
      { input: [ 1, 1, 0 ], target: [ 1, 1, 1 ] }
    ]
    

    Or by specifying a math rule:

    const dataset = makeBinary(4, (x) => (2 * x));
    console.log(dataset);
    

    which outputs:

    [
      { input: [ 0, 0, 0, 0 ], target: [ 0, 0, 0, 0 ] },
      { input: [ 0, 0, 0, 1 ], target: [ 0, 0, 1, 0 ] },
      { input: [ 0, 0, 1, 0 ], target: [ 0, 1, 0, 0 ] },
      { input: [ 0, 0, 1, 1 ], target: [ 0, 1, 1, 0 ] },
      { input: [ 0, 1, 0, 0 ], target: [ 1, 0, 0, 0 ] },
      { input: [ 0, 1, 0, 1 ], target: [ 1, 0, 1, 0 ] },
      { input: [ 0, 1, 1, 0 ], target: [ 1, 1, 0, 0 ] },
      { input: [ 0, 1, 1, 1 ], target: [ 1, 1, 1, 0 ] }
    ]
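
    From the two outputs above, the default rule appears to be x -> x + 1, replaced by the custom rule when one is provided; a reconstruction sketch (illustrative, not the library source):

    function makeBinarySketch(bits, rule = (x) => x + 1) {
      // encode a number as `bits` binary digits, most significant first
      const toBits = (x) =>
        Array.from({ length: bits }, (_, i) => (x >> (bits - 1 - i)) & 1);
      const dataset = [];
      for (let x = 0; rule(x) < (1 << bits); x++) {
        dataset.push({ input: toBits(x), target: toBits(rule(x)) });
      }
      return dataset;
    }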
    
  • v2.2.4(Jan 31, 2021)

    Changes:

    • Added a static function Dann.createFromObject(data), which takes a dannData object generated by yourmodel.dataObject() and creates a Dann model from it. Ex:
    const nn = new Dann(4,4);
    nn.addHiddenLayer(16,'sigmoid');
    nn.makeWeights();
    
    const modeldata = nn.dataObject();
    
    const newNN = Dann.createFromObject(modeldata);
    newNN.log();
    

    • You can now specify the id of an HTML element when using yourmodel.load() in the browser. If the id is not specified, the input element is placed in <body>.

    Html:

    <body>
        <div id="div1"></div>
    </body>
    

    Javascript:

    let nn = new Dann();
    
    nn.load('nn','div1',function(err) {
        if (err) {
            console.log('Failed to load the model.');
        } else {
            console.log('Succesfully loaded the model.');
        }
        nn.log();
    });
    

    To add styling to the <input> element, you can reference it like so:

    #div1 {
        /* <input> element styling here */
    }
    
  • v2.2.3d(Jan 29, 2021)

  • v2.2.3c(Jan 29, 2021)


    • Added the Dann.dataObject() function, which returns a JSON-savable JavaScript object containing information about the model.

    ex:

    const nn = new Dann();
    
    // getting the object
    const data = nn.dataObject();
    
    // applying the object back to a model
    nn.applyToModel(data);
    
    
  • v2.2.3b(Jan 29, 2021)

    New Loss Function

    Added the Mean absolute exponential loss (mael).

    // New experimental function: Mean absolute exponential loss
    function mael(predictions, target) {
        let sum = 0;
        let n = target.length;
        for (let i = 0; i < n; i++) {
            let y = target[i];
            let yHat = predictions[i];
            let x = y - yHat;

            // Mean absolute exponential function
            let top = -x * (Math.exp(-x) - 1);
            let down = Math.exp(-x) + 1;
            sum += top / down;
        }
        return sum / n;
    }
    

    See the loss function docs for the formal definition and a graph of the function.

  • v2.2.3(Jan 27, 2021)

    Changes

    • The absence of Dann.makeWeights(); now produces a warning instead of an error.
    • Dann.feedForward(); now has a decimals option. If specified, the output of this function is rounded to the given number of decimals.

    ex:

    //creates a (1 input, 1 output) neural network
    const nn = new Dann();
    nn.makeWeights();
    nn.feedForward([1],{log:true,decimals:2});
    //outputs: [value rounded to 2 decimals]
    
    
  • v2.2.2f(Jan 25, 2021)

    Added callbacks with errors

    • Dann.load() callback with error

    In the browser:

    const nn = new Dann();   
    //opens a DOM file selector
    nn.load('nn',function(err) {
        if (err) {
            console.log('Error loading the Dann model');
        } else {
            console.log('Successfully loaded the Dann model');
            nn.log();
        }
    });
    

    In node:

    const nn = new Dann();
    nn.load('filename',function(err) {
        if (err) {
            console.log('Error loading the Dann model');
        } else {
            console.log('Successfully loaded the Dann model');
            nn.log();
        }
    });
    