Bluzelle is a smart, in-memory data store. It can be used as a cache or as a database.

Overview

SwarmDB


ABOUT SWARMDB

Bluzelle brings together the sharing economy and token economy. Bluzelle enables people to rent out their computer storage space to earn a token while dApp developers pay with a token to have their data stored and managed in the most efficient way.

Getting started with Docker

If you want to deploy your swarm immediately, you can use our docker-compose quickstart instructions:

Install Docker

Docker Installation Guide

  1. Set up a local docker-compose swarm with the instructions found here
  2. Run docker-compose up in the same directory as your docker-compose.yml. This command will initialize the swarm within your local docker-machine. Full docker-compose documentation can be found here
  3. Nodes are available on localhost ports 51010-51012
  4. Connect a test websocket client (see the sketch after this list)
  5. Create a node server application using our node.js library
  6. CTRL-C to terminate the docker-compose swarm
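
For step 4, a quick way to verify connectivity is to open a raw WebSocket against one of the exposed ports. The sketch below uses wscat, a third-party npm tool that is not part of SwarmDB (an assumption for illustration); it only confirms that the node accepts WebSocket connections, since real requests are protobuf messages sent by the client libraries.

$ npm install -g wscat
$ wscat -c ws://localhost:51010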

Getting started building from source

Installation - Ubuntu

CMake (version 3.10 or greater)

On Ubuntu 18.04 and newer, you can simply install via apt.

$ sudo apt-get install cmake 

If your system packages don't have a new enough version, you can install a different CMake into ~/mycmake/ to avoid overwriting your system's cmake.

$ sudo apt-get install curl libcurl4-openssl-dev
$ mkdir -p ~/mycmake
$ curl -L http://cmake.org/files/v3.11/cmake-3.11.0-Linux-x86_64.tar.gz | tar -xz -C ~/mycmake --strip-components=1

You would then use ~/mycmake/bin/cmake .. instead of cmake .. in further instructions.

Protobuf (version 3 or greater)

$ sudo apt-add-repository ppa:maarten-fonville/protobuf
$ sudo apt-get update
$ sudo apt-get install pkg-config protobuf-compiler libprotobuf-dev libsnappy-dev libbz2-dev

ccache (Optional)

If ccache (https://ccache.samba.org) is available, cmake will automatically use it to speed up compilation dramatically.

$ sudo apt-get install ccache
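
To confirm that ccache is actually being hit, its built-in statistics counters are a quick check; this is standard ccache usage, not SwarmDB-specific:

$ ccache -s    # prints cache statistics; the hit count should grow on rebuilds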

Git LFS (Optional)

Git LFS is used to speed up builds when available.

$ sudo apt-get install git-lfs
$ git lfs install

Building the Daemon from Command Line Interface (CLI)

Note: Git LFS is used by default. If you do not have it set up, you must build the dependencies by setting BUILD_DEPEND=YES in your cmake call, e.g. cmake .. -DBUILD_DEPEND=YES, and omit the git lfs commands.

Here are the steps to build the Daemon and unit test application from the command line:

$ mkdir build
$ cd build
$ cmake ..
$ git lfs install
$ git lfs pull
$ sudo make install
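
As a quick sanity check after installation, you can ask the installed binary for its options (this assumes sudo make install placed swarm on your PATH; otherwise run it from the build output directory):

$ swarm --help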

Deploying the Daemons

The Bluzelle Configuration File

The Bluzelle daemon is configured by setting the properties of a JSON configuration file provided by the user. This file is usually called bluzelle.json and resides in the current working directory. To specify a different configuration file, run the daemon with the -c command line argument:

$ swarm -c peer0.json

The configuration file is a JSON file, as in the following example:

{
    "listener_address" : "127.0.0.1",
    "listener_port" : 50000,
    "bootstrap_file" : "./peers.json",
    "debug_logging" : true,
    "log_to_stdout" : true,
    "use_pbft" : true,
    "stack" : "testnet-dev"
}

The complete documentation of the options available for this file is given by

$ swarm --help

but the properties likely useful for a minimal swarm are summarized here:

  • "bootstrap_file" - path to peers file (see below)
  • "debug_logging" - show more log info
  • "listener_address" - the ip address that SwarmDB will listen on (this should be "127.0.0.1" unless you are doing something fancy)
  • "listener_port" - the port that SwarmDB will listen on (each node running on the same host should use a different port)
  • "log_to_stdout" (optional) - log to stdout as well as log file
  • "uuid" - the universally unique identifier that this instance of SwarmDB will use to uniquely identify itself. This should be specified if and only if node cryptography is disabled (the default) - otherwise, nodes use their private keys as their identifier.
  • "stack" - software stack used by swarm

The Bluzelle Bootstrap File

The bootstrap file, identified in the config file by the "bootstrap_file" parameter (see above), provides a list of nodes in the swarm that the local instance of the SwarmDB daemon can communicate with. It acts as a "starting peers list": if the membership of the swarm has changed, these nodes will be used to introduce the node to the current swarm and catch it up to the current state.

If you are running a static testnet (i.e., nodes do not join or leave the swarm) then every node should have the same bootstrap_file, and it should include an entry for every node. Thus, each node will appear in its own bootstrap file. If a node is not already in the swarm when it starts (i.e., it should dynamically join the swarm) then it should not be in its own bootstrap file.

The bootstrap file format is a JSON array containing JSON objects that describe nodes, as in the following example:

[
    {
        "host": "127.0.0.1",
        "name": "peer0",
        "port": 49152,
        "uuid": "d6707510-8ac6-43c1-b9a5-160cf54c99f5"
    },
    {
        "host": "127.0.0.1",
        "name": "peer1",
        "port": 49153,
        "uuid": "5c63dfdc-e251-4b9c-8c36-404972c9b4ec"
    },
    ...
    {
        "host": "127.0.0.1",
        "name": "peerN",
        "port": 49160,
        "uuid": "ce4bfdc0-63c7-5b9d-1c37-567978e9b893"
    }
]

where the Peer object parameters are (all parameters must match the corresponding node's own configuration):

  • "host" - the IP address associated with the external node
  • "name" - the human readable name that the external node uses
  • "port" - the socket address that the external node will listen for protobuf and web socket requests. (listen_port in the config file)
  • "uuid" - the universally unique identifier that the external node uses to uniquely identify itself. This is required to be unique per node and consistent between the peerlist and the config.

Note that if node cryptography is enabled (see swarmdb --help), node uuids are their public keys.

Steps to set up and run the daemon:

  1. Create each of the JSON files as described above in swarmDB/build/output/, where the swarm executable resides. (bluzelle.json, bluzelle2.json, bluzelle3.json, bluzelle4.json, peers.json).
  2. Create an account with Etherscan: https://etherscan.io/register
  3. Create an Etherscan API KEY by clicking Developers -> API-KEYs.
  4. Add your Etherscan API KEY Token to the configuration files.
  5. Modify the ethereum address to be an Ethereum mainnet address that contains tokens.
  6. Ensure that each swarmdb instance is configured to listen on a different port and has a different uuid, and that the peers file contains the correct uuids and addresses for all nodes (a sketch of a second node's configuration follows these steps).
  7. Deploy your swarm of Daemons. From the swarmDB/build/output/ directory, run:
$ ./swarm -c bluzelle.json
$ ./swarm -c bluzelle2.json
$ ./swarm -c bluzelle3.json
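
As a concrete illustration of step 6, a second node's configuration (bluzelle2.json from step 1) might differ from the first only in its listener_port and uuid; the values below are placeholders, and the uuid field applies only while node cryptography is disabled:

{
    "listener_address" : "127.0.0.1",
    "listener_port" : 50001,
    "bootstrap_file" : "./peers.json",
    "debug_logging" : true,
    "log_to_stdout" : true,
    "use_pbft" : true,
    "uuid" : "5c63dfdc-e251-4b9c-8c36-404972c9b4ec",
    "stack" : "testnet-dev"
}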

Integration Tests With Bluzelle's Javascript Client

Installation - macOS

Homebrew

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Node

$ brew install node

Yarn

$ brew install yarn

Installation - Ubuntu

NPM

$ sudo apt-get install npm

Update NPM

$ sudo npm install npm@latest -g

Yarn

$ curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
$ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

$ sudo apt-get update && sudo apt-get install yarn

Running Integration Tests

The integration-tests script clones the bluzelle-js repository and either copies template configuration files or runs the tests with configuration files you provide:

$ qa/integration-tests setup   # sets up template configuration files
$ qa/integration-tests         # runs tests with configuration files you've created

Testing Locally

$ cd scripts

Follow instructions in readme.md

Connectivity Test

$ ./crud -s <SWARM-ID> -n localhost:49154 status

Client: crud-script-0
Sending:
swarm_id: "<SWARM-ID>"
sender: "crud-script-0"
status_request: ""

------------------------------------------------------------

Response: 

swarm_version: "0.3.1443"
swarm_git_commit: "0.3.1096-41-g91cef89"
uptime: "1 days, 17 hours, 29 minutes"
module_status_json: ... 
pbft_enabled: true

Response: 
{
    "module" : 
    [
        {
            "name" : "pbft",
            "status" : 
            {
                "is_primary" : false,
                "latest_checkpoint" : 
                {
                    "hash" : "",
                    "sequence_number" : 3800
                },
                "latest_stable_checkpoint" : 
                {
                    "hash" : "",
                    "sequence_number" : 3800
                },
                "next_issued_sequence_number" : 1,
                "outstanding_operations_count" : 98,
                "peer_index" : 
                [
                    {
                        "host" : "127.0.0.1",
                        "name" : "node_0",
                        "port" : 50000,
                        "uuid" : "MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAE/HIPqL97zXbPN8CW609Dddu4vSKx/xnS1sle0FTgyzaDil1UmmQkrlTsQQqpU7N/kVMbAY+/la3Rawfw6VjVpA=="
                    },
                    {
                        "host" : "127.0.0.1",
                        "name" : "node_1",
                        "port" : 50001,
                        "uuid" : "MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAELUJ3AivScRn6sfBgBsBi3I18mpOC5NZ552ma0QTFSHVdPGj98OBMhxMkyKRI6UhAeuUTDf/mCFM5EqsSRelSQw=="
                    },
                    {
                        "host" : "127.0.0.1",
                        "name" : "node_2",
                        "port" : 50002,
                        "uuid" : "MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEg+lS+GZNEOqhftj041jCjLabPrOxkkpTHSWgf6RNjyGKenwlsdYF9Xg1UH1FZCpNVkHhCLi2PZGk6EYMQDXqUg=="
                    }
                ],
                "primary" : 
                {
                    "host" : "127.0.0.1",
                    "host_port" : 50001,
                    "name" : "node_1",
                    "uuid" : "MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAELUJ3AivScRn6sfBgBsBi3I18mpOC5NZ552ma0QTFSHVdPGj98OBMhxMkyKRI6UhAeuUTDf/mCFM5EqsSRelSQw=="
                },
                "unstable_checkpoints_count" : 0,
                "view" : 1
            }
        }
    ]
}

------------------------------------------------------------

Create database


$ ./crud -s <SWARM-ID> -n localhost:50000 create-db -u myuuid

Client: crud-script-0
------------------------------------------------------------

Response: 
header {
  db_uuid: "myuuid"
  nonce: 2998754133578549919
}

------------------------------------------------------------

Create

$ ./crud -s <SWARM-ID> -n localhost:50000 create -u myuuid -k mykey -v myvalue

Client: crud-script-0
------------------------------------------------------------
  
Response: 
header {
  db_uuid: "myuuid"
  nonce: 9167923913779064632
}
  
------------------------------------------------------------

Read

$ ./crud -s <SWARM-ID> -n localhost:50000 read -u myuuid -k mykey

Client: crud-script-0
------------------------------------------------------------

Response: 
header {
  db_uuid: "myuuid"
  nonce: 1298794800698891064
}
read {
  key: "mykey"
  value: "myvalue"
}

------------------------------------------------------------

Update

$ ./crud -s <SWARM-ID> -n localhost:50000 update -u myuuid -k mykey -v mynewvalue

Client: crud-script-0
------------------------------------------------------------

Response: 
header {
  db_uuid: "myuuid"
  nonce: 9006453024945657757
}

------------------------------------------------------------

Delete

$ ./crud -s <SWARM-ID> -n localhost:50000 delete -u myuuid -k mykey

Client: crud-script-0
------------------------------------------------------------

Response: 
header {
  db_uuid: "myuuid"
  nonce: 7190311901863172254
}

------------------------------------------------------------

Subscribe

$ ./crud -s <SWARM-ID> -n localhost:50000 subscribe -u myuuid -k mykey

Client: crud-script-0
------------------------------------------------------------

Response: 
header {
  db_uuid: "myuuid"
  nonce: 8777225851310409007
}

------------------------------------------------------------

Waiting....

Response: 
header {
  db_uuid: "myuuid"
  nonce: 8777225851310409007
}
subscription_update {
  key: "mykey"
  value: "myvalue"
}

------------------------------------------------------------

Waiting....

Delete database

$ ./crud -s <SWARM-ID> -n localhost:50000 delete-db -u myuuid

Client: crud-script-0
------------------------------------------------------------

Response: 
header {
  db_uuid: "myuuid"
  nonce: 1540670102065057350
}

------------------------------------------------------------

Adding or Removing A Peer

Dynamically adding and removing peers is not supported in this release. This functionality will be available in a subsequent version of swarmDB.

Help & Options

$ ./crud --help
usage: crud [-h] [-p] [-i ID] -n NODE
            {status,create-db,delete-db,has-db,writers,add-writer,remove-writer,create,read,update,delete,has,keys,size,subscribe}
            ...

crud

positional arguments:
  {status,create-db,delete-db,has-db,writers,add-writer,remove-writer,create,read,update,delete,has,keys,size,subscribe}
    status              Status
    create-db           Create database
    delete-db           Delete database
    has-db              Has database
    writers             Database writers
    add-writer          Add database writers
    remove-writer       Remove database writers
    create              Create k/v
    read                Read k/v
    update              Update k/v
    delete              Delete k/v
    has                 Determine whether a key exists within a DB by UUID
    keys                Get all keys for a DB by UUID
    size                Determine the size of the DB by UUID
    subscribe           Subscribe and monitor changes for a key

optional arguments:
  -h, --help            show this help message and exit
  -i ID, --id ID        Crud script sender id (default 0)
  -s <SWARM-ID>, --swarm_id <SWARM-ID>
                        Swarm id

required arguments:
  -n NODE, --node NODE  node's address (ex. 127.0.0.1:51010)

Comments
  • [KEP-38] Getting token count from Ropsten network + unit test.

    Added ethereum_api class and token_balance() method to pull balance for specific token on specific account. Added unit test to check balance on well known account.

    opened by dmitrymukhin 10
  • running on Mac terminal

    I've worked my way through the README and found equivalents to the Mac terminal cmds to set up the application. I'm currently stuck at:

    RUNNING THE APPLICATION

    Create 'peers' file in the same directory where the_db executable is located. Peers file contains the list of known nodes. Here is the example of peers file for 5 nodes running on localhost (';' can be used to comment lines):

    ; Comment
    node_1=localhost:58001
    node_2=localhost:58002
    node_3=localhost:58003
    node_4=localhost:58004
    node_5=localhost:58005

    Create a simple shell script to start multiple nodes. Below is the script for Ubuntu Linux:

    #!/bin/bash

    gnome-terminal -x bash -c './the_db --address 0x006eae72077449caca91078ef78552c0cd9bce8f --port 58001'
    gnome-terminal -x bash -c './the_db --address 0x006eae72077449caca91078ef78552c0cd9bce8f --port 58002'
    gnome-terminal -x bash -c './the_db --address 0x006eae72077449caca91078ef78552c0cd9bce8f --port 58003'
    gnome-terminal -x bash -c './the_db --address 0x006eae72077449caca91078ef78552c0cd9bce8f --port 58004'
    gnome-terminal -x bash -c './the_db --address 0x006eae72077449caca91078ef78552c0cd9bce8f --port 58005'

    I've made the peers file, but it doesn't seem to pull from it. Also, I can't figure out how to rewrite the shell script so that it will function in the Mac terminal; gnome-terminal is an unknown command there. I'm still very noobish with most of this. Any help figuring this out would be greatly appreciated!

    opened by RBNasty 9
  • [2018-07-07 22:28:30.890888] [0x00007fff91d84380] [error]   (main.cpp:220) - could not find our http port setting!

    Hi, I am trying to configure the daemons' bluzelle.json files, and when I run ./swarm -c bluzelle.json I get the following error: [2018-07-07 22:28:30.890888] [0x00007fff91d84380] [error] (main.cpp:220) - could not find our http port setting!

    Thanks in advance

    opened by abdulwahidgul24085 8
  • KEP-574 pbft - configurations

    Code for storing PBFT peers in an abstract "configuration" that can be versioned and passed as a message. The configuration class stores an individual config. The config_store class stores multiple configurations in insertion order, accesses them by hash, supports marking a configuration "enabled" and selecting a current configuration, and purges old ones.

    The main PBFT code was updated to store the initial peers list in configuration and access it from there.

    opened by paularchard 5
  • install-boost.sh fails on macOS

    macOS 10.13.6

    How to reproduce. Follow the instructions in the readme to install boost from the provided bash script in swarmDB/toolchain.

    Result. ~ is taken literally and the install path will end up in /Users/username/swarmDB/toolchain/~/myboost/1_67_0/~/myboost

    A workaround is to install Boost with Homebrew.

    opened by dffffffff 4
  • KEP-973: pbft variables reside in storage

    Persistently stored variable framework and usage in PBFT.

    I recommend reading the block comment in pbft_persistent_storage.hpp first. The intention was to hide all the complexity in there and make using the variables as easy as possible, but there are a couple of scenarios that are a little tricky.

    opened by paularchard 3
  • KEP-987: Refactor node to re-use sessions; remove raft.

    The interesting files are

    swarm.cpp
    node.cpp
    session.cpp
    node_test_common.hpp
    node_test.cpp
    session_test.cpp

    Pretty much everything else is just collateral damage from the interface changes. I also want to make send_message operate by uuid instead of endpoint, but that requires some design thinking and this change was too large already.

    opened by isabelsavannah 3
  • Problem with connection (probably the leader is down)

    Hi,

    I'm getting a problem with the connection. It seems to be related to the leader going down and the other nodes either not electing a new one in time or the redirection not happening. The response is as follows:

    Could not open socket to "13.78.131.94:51010": Connection refused (61).

    It happened after I insisted a few times (maybe 10?) with the same request data:

    {
        "bzn-api": "crud",
        "cmd": "read",
        "data": {"key":"1"},
        "db-uuid": "my huge hash",
        "request-id": 4
    }
    

    Considerations:

    • My request ID is hardcoded to "4" (is this a good idea?).
    • I usually attempt new requests every 2 or 3 seconds (sometimes I loop requests; is it expected to handle these conditions?)
    • It is a really hard bug to reproduce.

    If there is any feedback on that, let me know. Thanks in advance!

    opened by lotharthesavior 3
  • Story/rnistuk/kep 1318

    SwarmDB to use the ESR to bootstrap the peers list

    • added functionality to request a peers list for a given swarm from an Ethereum contract
    • added functionality to request peer info for a given peer in a given swarm from an Ethereum contract
    • added functionality to parse the contract responses for the above requests
    • added functionality to use the ESR to populate the peers list if the swarm id is provided and the contract returns a peers list
    opened by rnistuk 2
  • kep-1144 Add Swarm ID to all swarm<>swarm and swarm<>client messages

    Added swarm id to the bzn_envelope protobuf structure and propagated the changes through the code. The swarm id parameter is set in pbft::wrap_message method.

    opened by rnistuk 2
  • Task/rnistuk/kep 488

    Here's the unit test showing that 2 yes votes out of 4 will not form a consensus. I set up a raft instance with 3 peers, have that peer become a candidate, and tally the votes.

    opened by rnistuk 2
Owner
Bluzelle
Bluzelle is a decentralized data network for dapps to manage data in a secure, tamper-proof, and highly scalable manner.