A generative engine that takes PNG layers in a sprite sheet format, combines them, and then converts the result into .gif files

Overview

Welcome to the Generative GIF Engine v2.0.4 🐤

[8 minute read]

This Python and Node app generates layer-based gifs to create NFT gif art! It is faster, simpler, and produces higher quality gifs than any other open source gif generative tool I have found. Export your animation as a png image sequence, organize your layer folders with rarity, and the code does the rest! I plan to actively maintain this repo and enhance it with various tools for months to come, so be sure to ask questions in the Discussions tab and open issues.

There are three steps:

  1. [Python] Converts layers into spritesheets using PIL. This step can be skipped if you already have the spritesheets, but is useful if you want to start with png files and makes the artist's life easier!
  2. [Node] Create generative spritesheets from the layers from step 1.
  3. [Python + gifski] Convert spritesheets to gifs using Python and gifski.

Check out this Medium post and the How does it work? section below for more information!

Here's an example final result (or you can download the code and run it and see more bouncing balls :)). It is also pushed to production on OpenSea.

EDIT: the tool now supports z-index/stacking, grouping, and if-then statements. See nftchef's docs for more information. Here is an example of having one layer that is both in front of and behind the ball.

Requirements

Install an IDE of your preference.

Install the latest version of Node.js

  • Run this command on your system terminal to check if node is installed:

      node -v
    

Install the latest version of Python 3. I am currently using 3.8.1 but anything above 3.6 should work.

  • Run this command on your system terminal to check if Python is installed:

      python3 --version
    

Install gifski. I recommend using Homebrew if you're on macOS: brew install gifski. If you don't have brew, install it first by following the instructions at brew.sh. Or if you're on Windows you can install gifski using Chocolatey: choco install gifski.

If none of those methods work, follow the instructions on the gifski GitHub page. Gifski is crucial for this tool because it produces the best gif output of all the tools I evaluated (PIL, imageio, ImageMagick, js libraries).

If you plan on developing on this repository, run pre-commit install to set up the pre-commit hooks.

Installation

  • Download this repo and extract all the files.

  • Run this command on your root folder using the terminal:

      make first_time_setup
    

If you have any issues with this command, try running each separate command:

   python3 -m pip install --upgrade Pillow && pip3 install -r requirements.txt

   cd step2_spritesheet_to_generative_sheet; npm i

   brew install gifski

Each environment is different, so try Googling your issues. I'll add a few known issues below:

Known issues:

  • M1 Mac: the canvas prebuilt binaries aren't built for ARM computers, so you need to install canvas from its GitHub page.
  • The cd command might not work on Windows depending on which terminal you are using. You may have to edit the Makefile to use CHDIR or the equivalent.
  • If you're on Windows 10 you might get a 'make' is not recognized error. Try following these instructions; otherwise you can copy and paste the commands from the Makefile and run them manually.
  • If you don't have brew installed, look at gifski docs for another way to install gifski.

How to run?

Load the png or gif files into the /layers folder, where each layer is a folder, and each layer folder contains attribute folders which hold the individual frames and have a rarity percentage in their names. For example, if you wanted a background layer you would have /layers/background/blue#20 and /layers/background/red#20.

In each attribute folder, the frames should be named 0.png -> X.png (or 0.gif for gif layers). See the code or step 1 for the folder structure. The code will handle any number of layers, so you could have one layer with two frames, another layer with one frame, and another with 20 frames; as long as you pass numberOfFrames = 20, the shorter layers will be repeated until they hit 20 frames.
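
Concretely, the repeat/trim rule amounts to something like this (a minimal sketch of the behaviour just described, not the engine's actual code):

    def pad_frames(frames, number_of_frames):
        """Repeat a layer's frame list until it is exactly number_of_frames long."""
        repeated = []
        while len(repeated) < number_of_frames:
            repeated.extend(frames)
        return repeated[:number_of_frames]  # trims layers that have too many frames

    print(pad_frames(["0.png", "1.png"], 5))  # ['0.png', '1.png', '0.png', '1.png', '0.png']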

Update global_config.json with the following fields (a sample config is shown after this list):

  1. 'totalSupply' : total number of gifs to generate.
  2. 'height' : height of one frame. This should be equal to width.
  3. 'width' : width of one frame. This should be equal to height.
  4. 'framesPerSecond' : number of frames per second. This will not be exact because PIL takes in integer milliseconds per frame (so 12fps = 83.3ms per frame but rounded to an int = 83ms). This will not be recognizable by the human eye, but worth calling out.
  5. 'numberOfFrames' : number of total frames. For example you could have 24 frames, but you want to render it 12fps.
  6. 'description' : description to be put in the metadata.
  7. 'baseUri' : baseUri to be put in the metadata.
  8. 'saveIndividualFrames' : this is if you want to save the individual final frames, for example if you want to let people pick just one frame for a profile page.
  9. 'layersFolder': this is the folder that you want to use for the layers. The default is layers, but this allows you to have multiple versions of layers and run them side by side. The current repo has four example folders, layers, layers_grouping, layers_if_then, layers_z_index which all demonstrate features from nftchef's repo.
  10. 'quality': quality of the gif, 1-100.
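
For reference, a filled-in global_config.json using the fields above might look like the following (the values here are only illustrative; check the file shipped with the repo for the real defaults):

    {
        "totalSupply": 16,
        "height": 350,
        "width": 350,
        "framesPerSecond": 12,
        "numberOfFrames": 24,
        "description": "Example description",
        "baseUri": "ipfs://yourBaseUriHere",
        "saveIndividualFrames": false,
        "layersFolder": "layers",
        "quality": 100
    }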

Update step2_spritesheet_to_generative_sheet/src/config.js with your layerConfigurations. If you want the basic configuration, just edit layersOrder, but if you want to take advantage of nftchef's repo, then scroll through the file for some examples and modify layerConfigurations accordingly.

  • To run the process end to end run:

      make all
    

Your output gifs and JSON metadata will appear in build/gifs and build/json. Try it yourself with the default settings and layers!

How does it work?

Step 1

In order to get nftchef's Generative Gif Engine to work, the input layers need to be in sprite sheet format. However, this is tedious and unintuitive for many artists, who use tools that export individual images.

Step 1 simply converts individual images into spritesheets, preserving the rarity percentage. You provide the various layers in the /layers folder with the rarity in the folder name. Each image should be numbered from 0 -> X, and only .png files are accepted.

If you do not include the rarity weight in the attribute folder name, that attribute will be ignored.

You can provide any number of frames in each layer folder; the code will repeat them until it hits numberOfFrames. It will also trim any layers that have too many frames.

Example layers folder structure with four layers and two traits each layer:

layers
└───Background
│   └───Grey#50
│       │   0.png
│   └───Pink#50
│       │   0.png
└───Ball
│   └───Blue#50
│       │   0.png
│       │   1.png
│       │   2.png
│       │   ...
│   └───Green#50
│       │   0.png
│       │   1.png
│       │   2.png
│       │   ...
└───Hat
│   └───Birthday#50
│       │   0.png
│       │   1.png
│       │   2.png
│       │   ...
│   └───Cowboy#50
│       │   0.png
│       │   1.png
│       │   2.png
│       │   ...
└───Landscape
│   └───Cupcake#50
│       │   0.png
│   └───Green Tower#50
│       │   0.png

Example layers (preview images omitted here): Background (Grey, Pink), Ball (Blue, Green), Hat (Birthday, Cowboy), Landscape (Cupcake, Green Tower).

I am using Python here instead of javascript libraries because I have found that image processing with PIL is much faster and lossless compared to the javascript alternatives I tried. These benefits are even clearer in step 3.
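
To give a feel for what that PIL work looks like, the heart of step 1 is just pasting an attribute's frames onto one canvas. A minimal sketch (my own simplified example, not the engine's actual code; it assumes equally sized frames laid out in a single row and ignores rarity parsing and gif input):

    from PIL import Image

    def frames_to_spritesheet(frame_paths, output_path):
        """Paste equally sized frames side by side into a single spritesheet png."""
        frames = [Image.open(p).convert("RGBA") for p in frame_paths]
        width, height = frames[0].size
        sheet = Image.new("RGBA", (width * len(frames), height), (0, 0, 0, 0))
        for i, frame in enumerate(frames):
            sheet.paste(frame, (i * width, 0))
        sheet.save(output_path)

    # e.g. frames_to_spritesheet(["0.png", "1.png", "2.png"], "Blue#50.png")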

You can run only step1 by running:

    make step1

This will convert the pngs into spritesheets and the output will look something like this:

Output (one spritesheet per attribute; preview images omitted here): Background/Grey#50.png, Background/Pink#50.png, Ball/Blue#50.png, Ball/Green#50.png, Hat/Birthday#50.png, Hat/Cowboy#50.png, Landscape/Cupcake#50.png, Landscape/Green Tower#50.png.

EDIT: the tool now supports z-index/stacking, grouping and if-then statements. See nftchef's docs for more information. The layers in this step will have to match the format expected in step 2. See the example layer folders for more info.

EDIT: the tool now supports gif layers. You can provide layers as gifs and the code will split each gif into frames. See layers_gif_example. It will create a temp folder in step1_layers_to_spritesheet/temp with the resulting separate frames, and then will parse through that folder to create the output. Make sure numberOfFrames is set in global_config.json.
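
Splitting a gif layer into frames is a small job for PIL's ImageSequence; roughly (a simplified sketch, not the engine's actual code):

    import os
    from PIL import Image, ImageSequence

    def split_gif(gif_path, output_folder):
        """Write each frame of a gif out as 0.png, 1.png, ... in output_folder."""
        os.makedirs(output_folder, exist_ok=True)
        with Image.open(gif_path) as gif:
            for i, frame in enumerate(ImageSequence.Iterator(gif)):
                frame.convert("RGBA").save(os.path.join(output_folder, f"{i}.png"))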

Step 2

Step 2 takes the spritesheets from step 1 and generates totalSupply unique combinations of them, choosing each trait according to its rarity. This is where all the magic happens! The output is a set of spritesheets with all the layers stacked on top of each other.
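
The rarity weights in the folder names (the #50 suffix) drive a weighted random pick for each layer. Conceptually the selection looks like this (a simplified Python sketch of the idea; the real step 2 code is JavaScript and also handles DNA uniqueness, grouping, z-index, and so on):

    import random

    def pick_trait(attribute_folders):
        """Pick one attribute name, weighted by the #N suffix in each folder name."""
        names = [folder.split("#")[0] for folder in attribute_folders]
        weights = [float(folder.split("#")[1]) for folder in attribute_folders]
        return random.choices(names, weights=weights, k=1)[0]

    print(pick_trait(["Grey#50", "Pink#50"]))  # "Grey" or "Pink" with equal probability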

The original idea came from MichaPipo's Generative Gif Engine but now most of the code in this step is forked from nftchef's Generative Engine which is forked from HashLips Generative Art Engine. Please check out Hashlip's 📺 Youtube / 👄 Discord / 🐦 Twitter / ℹ️ Website for a more in depth explanation on how the generative process works.

I recently modified this section to use the code from nftchef's Generative Engine which adds the following features:

  • if-then statements. You can write generative art logic that says: if this layer is selected, then select these layers. There is an example layers folder, layers_if_then, which has logic for: if the ball is pink, wear a birthday or cowboy hat; if the ball is purple, wear a mini ball hat. See nftchef's docs for more information.
  • grouping statements. You can now group traits into certain groups. In layers_grouping we have common balls and hats and rare balls and hats, and the first totalSupply - 1 balls are common while the last one is rare.
  • z-index, otherwise known as stack order. You can now have multiple stacks for the same layer, for example a basketball hoop landscape which has art both in front of and behind the ball. See nftchef's docs for more information.

You will need to update global_config.json and also update layerConfigurations in step2_spritesheet_to_generative_sheet/src/config.js.

You can run only step 2 by running:

    make step2

Example output with the layers folder (only first 4 displayed, but there are 16 total):

Example output with the layers_z_index folder:

Step 3

Step 3 takes the spritesheets from step 2 and creates gifs in build/gifs. This is where the Python libraries really shine. Initially I used PIL, but found some issues with pixel quality.

In MichaPipo's original repo, they used javascript libraries to create the gifs. Those libraries copied the images pixel by pixel, and the logic was a bit complicated. Creating just 15 gifs would take 4 minutes, and I noticed some of the pixel hex colors were off. Also, depending on CPU usage, the program would crash. I spent days debugging before deciding to start from scratch in another language.

Initially I tried PIL, imageio, and a few Python libraries, but they all had issues generating gifs. I spent weeks finding the best tool for this job, and came across gifski. This creates incredibly clean gifs and worked the best.

Now, generating 15 gifs takes < 30 seconds and renders with perfect pixel quality!
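
If you are curious, the essence of step 3 is: slice each generative spritesheet back into frames with PIL, then hand those frames to the gifski binary. A rough sketch of the idea (not the repo's actual build.py, which also handles quality settings, fps rounding and temp folders; it assumes frames are laid out in a single row):

    import subprocess
    from PIL import Image

    def spritesheet_to_gif(sheet_path, frame_count, fps, out_path, quality=100):
        """Slice a horizontal spritesheet into frames, then let gifski assemble the gif."""
        sheet = Image.open(sheet_path)
        frame_width = sheet.width // frame_count
        frame_paths = []
        for i in range(frame_count):
            frame = sheet.crop((i * frame_width, 0, (i + 1) * frame_width, sheet.height))
            path = f"frame_{i:03d}.png"
            frame.save(path)
            frame_paths.append(path)
        # gifski supports --fps, --quality (1-100) and -o for the output file
        subprocess.run(
            ["gifski", "--fps", str(fps), "--quality", str(quality), "-o", out_path] + frame_paths,
            check=True,
        )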

You can change the framesPerSecond in global_config.json and you can run only step 3 by running:

    make step3

This allows you to not have to regenerate everything to play around with fps.

Example output with all 16 permutations (click on each gif for the 1000x1000 version):

If you set saveIndividualFrames to true in global_config.json, it will also split the gifs into individual frames and save them in an images folder. This is useful if you want people to be able to choose a single frame for a profile picture.

Some metrics:

MichaPipo's Generative Gif Engine:

  • 15 NFT - 5 minutes with sometimes incorrect pixels.
  • 100 NFT - one hour (with the computer being almost unusable).

New Generative Gif Engine:

  • 15 NFT - 30 seconds with no pixel issues.
  • 100 NFT - 3 minutes and 17 seconds with no pixel issues.
  • 1000 NFT - 45 minutes with no pixel issues and no CPU issues.

Rarity stats

You can check the rarity stats of your collection with:

    make rarity
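
This runs nftchef's rarity script over the generated metadata. If you just want a quick sanity check of your own, the idea is simply to count how often each trait value appears in the build/json files; a rough sketch, assuming the standard trait_type/value attribute format (this is not the repo's rarityData.js):

    import glob
    import json
    from collections import Counter

    counts = Counter()
    for path in glob.glob("build/json/*.json"):
        with open(path) as f:
            meta = json.load(f)
        if not isinstance(meta, dict):
            continue  # skip combined files such as _metadata.json
        for attr in meta.get("attributes", []):
            counts[(attr["trait_type"], attr["value"])] += 1

    for (trait, value), n in sorted(counts.items()):
        print(f"{trait} / {value}: {n}")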

Provenance Hash Generation - IN PROGRESS

THIS SECTION IS STILL IN PROGRESS, IT DOES NOT GENERATE PROVENANCE HASH CORRECTLY

If you need to generate a provenance hash (and yes, you should; read about it here), make sure the following flag in config.js is set to true:

    // IF you need a provenance hash, turn this on
    const hashImages = true;

Then, after generating images and data, each metadata file will include an imageHash property, which is a Keccak256 hash of the output image.

To generate the provenance hash, run the following util:

    make provenance

The Provenance information is saved to the build directory in _prevenance.json. This file contains the final hash as well as the (long) concatenated hash string.
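
For context, the usual provenance scheme is: hash every output image, concatenate those hashes in edition order, and hash the concatenation; that final digest is the provenance hash. A conceptual Keccak256 sketch using pycryptodome (this only illustrates the idea; it is not the repo's make provenance implementation, which, as noted above, is still being fixed):

    import glob
    import os
    from Crypto.Hash import keccak  # pip install pycryptodome

    def keccak256_hex(data: bytes) -> str:
        digest = keccak.new(digest_bits=256)
        digest.update(data)
        return digest.hexdigest()

    # Assumes gifs are named <edition>.gif inside build/gifs
    gif_paths = sorted(
        glob.glob("build/gifs/*.gif"),
        key=lambda p: int(os.path.splitext(os.path.basename(p))[0]),
    )
    image_hashes = [keccak256_hex(open(p, "rb").read()) for p in gif_paths]
    provenance_hash = keccak256_hex("".join(image_hashes).encode())
    print(provenance_hash)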

Note: if you regenerate the images, you will also need to regenerate this hash.

Update your metadata info

You can change the description and base Uri of your metadata even after running the code by updating global_config.json and running:

    make update_json
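
Under the hood this just rewrites the description and image URI in every already generated metadata file, so the art itself does not need to be regenerated. Conceptually (a simplified sketch with placeholder values, not the repo's actual script; it assumes the usual edition/image fields in each metadata file):

    import glob
    import json
    import os

    NEW_DESCRIPTION = "My updated description"   # taken from global_config.json in the real tool
    NEW_BASE_URI = "ipfs://yourNewBaseUri"       # placeholder, not a real CID

    for path in glob.glob("build/json/*.json"):
        if os.path.basename(path).startswith("_"):
            continue  # skip combined files such as _metadata.json
        with open(path) as f:
            meta = json.load(f)
        meta["description"] = NEW_DESCRIPTION
        meta["image"] = f"{NEW_BASE_URI}/{meta['edition']}.gif"
        with open(path, "w") as f:
            json.dump(meta, f, indent=2)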

Solana metadata

🧪 BETA FEATURE

After running make all you can generate the Solana metadata in two steps:

  • Edit step2_spritesheet_to_generative_sheet/Solana/solana_config.js
  • Run make solana to generate the Solana metadata. This will create an output folder build/solana with the gifs and the metadata.

Most of the code comes from nftchef.

I have not tried this on any test net or production Solana chain, so please flag any issues or create a PR to fix them!

IMPORTANT NOTES

All of the code in step1 and step3 was written by me. The original idea for the repo came from MichaPipo's Generative Gif Engine but now most of the code in step 2 is forked from nftchef's Generative Engine which is forked from HashLips Generative Art Engine.

_Things to work on:_

FAQ

Q: Why did you decide to use Python for step 1 and step 3?

A: I found that Python's PIL works better and faster than the JS libraries, and the code is simpler for me. Initially I tried PIL, imageio, and a few other Python libraries, but they all had issues generating gifs. I spent weeks finding the best tool for this job, and came across gifski. It creates incredibly clean gifs and worked the best.

My philosophy is pick the right tool for the right job. If someone finds a better library for this specific job, then let me know!

Q: Why didn't you use Python for step 2?

A: The NFT dev community which writes the complicated logic for generative art mainly codes in javascript. I want to make it easy to update my code and incorporate the best features of other repos as easily as possible, and porting everything to Python would be a pain. You can imagine step 1 and step 3 are just helper tools in Python, and step 2 is where most of the business logic comes from.

Be sure to follow me for more updates on this project:

Twitter

GitHub

Medium

Comments
  • Support for Tezos metadata? =)

    Your work is exactly what I had been looking for, thanks! would be great if you could add support/code for tezos metadata as well for sites like OBJKT ? There's very little out there for tezos compared to other networks, this would literally make this an overall solution for just about everything when it comes to generating gifs.. thanks again!

    opened by bxbxyxgx 33
  • Build/output folder missing

    I tried downloading the code and it runs the first step correctly but gets hung up on step two. I noticed in your tutorial that you have a build folder that I cannot find in this or other versions of the commit. I am not an avid coder and don't know how to remedy this.

    The error message I am receiving:

    cd ./step2_spritesheet_to_generative_sheet; npm run generate
    The system cannot find the path specified.
    make[1]: *** [Makefile:11: step2] Error 1
    make[1]: Leaving directory 'C:/Users/Sean/Desktop/Generative_Gif_Engine-main
    make: *** [Makefile:24: all] Error 2

    opened by trying-my-best 22
  • Step 3 stops running and audio not working

    I'm trying to produce a collection of hi-res mp4s which requires me to use batching. Two issues:

    1. After the DNA is created for all editions in Step2, Step3 is started and suddenly quits. I received the following error:

           make[1]: Leaving directory '.../animated-art-engine-main'
           Starting step 3: Converting sprite sheets to mp4
           Starting 0 with should_generate_output flag False
           multiprocessing.pool.RemoteTraceback: 
           """
             File "...\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 125, in worker
               result = (True, func(*args, **kwds))
             File "...\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 51, in starmapstar      
               return list(itertools.starmap(args[0], args[1]))
           TypeError: generate_output() missing 1 required positional argument: 'temp_img_folder'
           """
      
           The above exception was the direct cause of the following exception:
      
           Traceback (most recent call last):
             File "...\animated-art-engine-main\batch.py", line 65, in <module>
               main()
             File "...\animated-art-engine-main\batch.py", line 58, in main
               step3_main(
             File "...\animated-art-engine-main\step3_generative_sheet_to_output\build.py", line 384, in main
               pool.starmap(
             File "...\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 375, in starmap
               return self._map_async(func, iterable, starmapstar, chunksize).get()
             File "...\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 774, in get
               raise self._value
           TypeError: generate_output() missing 1 required positional argument: 'temp_img_folder'
           make: *** [Makefile:40: all_batch] Error 1
      
    2. I tried adding a different wav file in each of my background folders. I even changed the folder in the global config to the layers_audio folder and did not hear any audio played from the exported file. I wonder if this has anything to do with batching.

    Appreciate all the hard work that goes into this. Cheers

    opened by punchingpandas 12
  • make first time setup doesn't work

    I'm having a trouble getting the 'make' command work. The documents say to "You may have to edit the Makefile to use CHDIR or the equivalent." Can someone possibly help me?

    opened by PumpSnooki 10
  • Error with the Make Rarity

    This is an amazing open source project. Thanks for making it. I am having a few issues, pls help Thanks

    rarityData.js:132 rarityDataTraits.elements.forEach((rarityDataTrait) => { ^

    TypeError: Cannot read properties of undefined (reading 'elements')

    opened by NLC1609 8
  • Attempting to render only one layer

    I am attempting to generate only one layer of PNG Sequences. I want each NFT to be exactly the same. I have a GIF that an artist created. I split each frame into it's own PNG, making it a PNG Sequence. I stored the photos under " layers > Background > Main > [0-35].png "

    Whenever I run "make step1" manually, I view the output after parsing and it ends up being transparent? image

    I changed the global configuration so it grabs 36 frames for 36 frames per second. Sadly, the output stays consistent with it being transparent.

    So the question is, how would I go by being able to execute what I am trying to accomplish for my project? :)

    opened by lnterger 6
  • Generate metadata only

    Hello ! Thank you for all the code and explanations.

    I already have my .gif ready, and I wonder if it is possible to only duplicate them and generate metadata for each, without doing the whole process. I tried to put them as my input layer but get various errors at phase 2.

    Thank you in advance for your help.

    opened by drok74 6
  • Polygon metadata

    Hello! Jalagar and thank so much for such Amazing tool and thanks to all in the nft community. I have probably a very dumb question.

    My collection is almost finish by that I mean the code its been running for little over 40 Hours (yes I have an old machine) but I watched I video from hashlips about the minting dapp on polygon and I notice the metadata being little different.. So my question is if you or anyone know the metadata created after running this fork is compatible with Polygon? I'll be selling my collection/nfts on OpenSea, so I'm not using a minting dapp at all (I'll be minting all the nfts), but I just keep watching videos because I want my project to be bugs/errors free.

    Now. my understanding is that ETH Metadata and Polygon Metadata are written in the exact same way, and that both should be 100% compatible with each other, but I can be wrong.

    Also. Jalagar once I'm finish in the next couple of days I would love to air-drop one NFT to you, so can I send it to the same ETH address you have at the end of the "readme" section?? but again my NFTs would be deployed to the Polygon Blockchain so I'm really not sure. yes I know this are very dumb and basic questions but believe me or not I'm a little head-blocked right now.. I'm very anxious now that I'm getting to close to the deploy stage.

    thanks again.

    opened by TheFunkySkullsNFT 6
  • Solana metadata is a problem.

    Hello. First of all, thank you very much for making useful tools.

    After finishing all the steps, I generated the solana data, but the file name is generated incorrectly If i change the start edition to a different number, the Solana data gets mixed up.

    image

    plz help me! thank you!

    opened by SuperSunki 6
  • Ultra-rares

    Hello!

    With the already classic NFTchef code, we can add handmade artwork with hand written json files (ultra-rare feature, not "grouping"), and then run the code to mix them with the normal collection. But this feature is not available here for now?

    opened by PxlSyl 6
  • The system cannot find the path specified.

    Whenever I try to run : makestep 2 i get the following error

    cd ./step2_spritesheet_to_generative_sheet; npm run generate
    The system cannot find the path specified.
    make: *** [Makefile:10: step2] Error 1

    Im not sure what im supposed to be doing any help would be great!

    opened by Nachi4265 6
  • ffmpeg -version block

    I have an issue when generating MP4 with ffmpeg. The try/except (build.py line 143) raise an exception. I have ffmpeg installed, when I do ffmpeg -version in my terminal it works. Commenting the try/except solved the problem and the generation ended without problems.

    I'm on Windows 10 64 bits

    I installed ffmpeg using choco install ffmpeg in Powershell in admin

    ffmpeg -version output : ffmpeg version 5.1.2-essentials_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers built with gcc 12.1.0 (Rev2, Built by MSYS2 project) configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband libavutil 57. 28.100 / 57. 28.100 libavcodec 59. 37.100 / 59. 37.100 libavformat 59. 27.100 / 59. 27.100 libavdevice 59. 7.100 / 59. 7.100 libavfilter 8. 44.100 / 8. 44.100 libswscale 6. 7.100 / 6. 7.100 libswresample 4. 7.100 / 4. 7.100 libpostproc 56. 6.100 / 56. 6.100

    opened by IshariFluttershy 1
  • Make Regenerate

    I am running the make regenerate , and the files finished parsing in step 1 but keep stoping at step 2 , I have tried debug true , it doesn’t display any Error it just stops , I have my previous metadata files and _metadata.json and _dna.json. But it’s not moving past that step

    opened by mrjod 0
  • Extra "Output" Trait added into json

    Hey there, absolutely amazing tool! Just having a small issue when randomizing collection.

    For some reason, an extra "Output" trait is being lumped into my json and I honestly have no idea why. I've rechecked my work and can't find anything that would lead to this. Would appreciate some guidance so I can stop this from happening!

    Here's a screenshot of the extra trait being added. Screen Shot 2022-11-04 at 1 27 41 PM

    opened by NFTLordX 4
  • DNA is missing for trait

    Hi, Why do i get this error?

    Error: DNA is missing for trait: Backs, is something misnamed or missing? Or is this a one of one?
        at /Users/neoreo/Downloads/animated-art-engine-main/step2_spritesheet_to_generative_sheet/src/main.js:389:15
        at Array.forEach ()
        at /Users/neoreo/Downloads/animated-art-engine-main/step2_spritesheet_to_generative_sheet/src/main.js:375:17
        at Array.map ()
        at constructLayerToDna (/Users/neoreo/Downloads/animated-art-engine-main/step2_spritesheet_to_generative_sheet/src/main.js:370:35)
        at startCreating (/Users/neoreo/Downloads/animated-art-engine-main/step2_spritesheet_to_generative_sheet/src/main.js:819:23)
        at Command. (/Users/neoreo/Downloads/animated-art-engine-main/step2_spritesheet_to_generative_sheet/index.js:39:7)
        at Command.listener [as _actionHandler] (/Users/neoreo/Downloads/animated-art-engine-main/step2_spritesheet_to_generative_sheet/node_modules/commander/lib/command.js:488:17)
        at /Users/neoreo/Downloads/animated-art-engine-main/step2_spritesheet_to_generative_sheet/node_modules/commander/lib/command.js:1227:65
        at Command._chainOrCall (/Users/neoreo/Downloads/animated-art-engine-main/step2_spritesheet_to_generative_sheet/node_modules/commander/lib/command.js:1144:12)

    opened by neoreo-terrex 1
  • spritesheet is created using 0.gif

    Hello,

    I am new to this engine and so far it has worked as instructed in the code guide mentioned. However, I ran into one problem and that is, when all the steps are successfully processed, the output is a static gif. No animations. Upon looking at the output of step 1, I have noticed that the sprites are made using only the first image. Looking at the output of step 2, the sprite sheet has the same image.

    P.S: All of the subfolders have weight #100. Example: Background >> Blue#100 >> 0.gif, 1.gif... and so on for all the layers.

    A quick help is highly appreciated.

    Thank You

    opened by Fhahroz 1