wolkenkit is an open-source CQRS and event-sourcing framework based on Node.js, and it supports JavaScript and TypeScript.

Overview

wolkenkit

wolkenkit is a CQRS and event-sourcing framework based on Node.js. It empowers you to build and run scalable distributed web and cloud services that process and store streams of domain events. It supports JavaScript and TypeScript, and is available under an open-source license; enterprise add-ons are available as well. Since it works especially well in conjunction with domain-driven design (DDD), wolkenkit is the perfect backend framework to shape, build, and run web and cloud APIs.

BEWARE: This README.md refers to the wolkenkit 4.0 community technology preview (CTP) 6. If you are looking for the latest stable release of wolkenkit, see the wolkenkit documentation.


Quick start

First you have to initialize a new application. For this, execute the following command and select a template and a language. The application is then created in a new subdirectory:

$ npx wolkenkit@4.0.0-ctp.6 init <name>

Next, you need to install the application dependencies. To do this, change to the application directory and run the following command:

$ npm install

Finally, from within the application directory, run the application in local development mode by executing the following command:

$ npx wolkenkit dev

Please note that the local development mode processes all data in-memory only, so any data will be lost when the application is closed.

Sending commands, receiving domain events, and querying views

To send commands or receive domain events, the current version offers an HTTP and a GraphQL interface.

Using the HTTP interface

In local development mode, wolkenkit provides the following primary endpoints:

  • http://localhost:3000/command/v2/:contextName/:aggregateName/:commandName submits commands for new aggregates
  • http://localhost:3000/command/v2/:contextName/:aggregateName/:aggregateId/:commandName submits commands for existing aggregates
  • http://localhost:3000/views/v2/:viewName/:queryName queries views
  • http://localhost:3000/domain-events/v2 subscribes to domain events
  • http://localhost:3000/notifications/v2 subscribes to notifications

Additionally, the following secondary endpoints are available:

  • http://localhost:3000/command/v2/cancel cancels a submitted, but not yet handled command
  • http://localhost:3000/command/v2/description fetches a JSON description of all available commands
  • http://localhost:3000/domain-events/v2/description fetches a JSON description of all available domain events
  • http://localhost:3000/open-api/v2 provides an OpenAPI description of the HTTP interface
  • http://localhost:3001/health/v2 fetches health data
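
The route scheme above can be captured in a small helper. This is a minimal sketch, not part of wolkenkit itself; `buildCommandUrl` and `BASE_URL` are hypothetical names:

```javascript
// Hypothetical helper that assembles command URLs following the route
// scheme listed above.
const BASE_URL = 'http://localhost:3000';

const buildCommandUrl = function ({ contextName, aggregateName, commandName, aggregateId }) {
  const segments = [ BASE_URL, 'command', 'v2', contextName, aggregateName ];

  // For commands that address an existing aggregate, the aggregate id goes
  // between the aggregate name and the command name.
  if (aggregateId) {
    segments.push(aggregateId);
  }
  segments.push(commandName);

  return segments.join('/');
};

console.log(buildCommandUrl({
  contextName: 'communication',
  aggregateName: 'message',
  commandName: 'send'
}));
// http://localhost:3000/command/v2/communication/message/send
```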

Sending commands

To send a command, send a POST request with the following JSON data structure in the body to the command endpoint of the runtime. Of course, the specific names of the context, the aggregate and the command itself, as well as the aggregate id and the command's data depend on the domain you have modeled:

{
  "text": "Hello, world!"
}

A sample call to curl might look like this:

$ curl \
    -i \
    -X POST \
    -H 'content-type: application/json' \
    -d '{"text":"Hello, world!"}' \
    http://localhost:3000/command/v2/communication/message/send

If you want to address an existing aggregate, you also have to provide the aggregate's id:

$ curl \
    -i \
    -X POST \
    -H 'content-type: application/json' \
    -d '{}' \
    http://localhost:3000/command/v2/communication/message/d2edbbf7-a515-4b66-9567-dd931f1690d3/like
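
The same request can be issued from Node.js. The sketch below only prepares the request; `buildCommandRequest` is a hypothetical helper, and actually sending it requires a running application:

```javascript
// Hypothetical helper that mirrors the curl calls above: a POST request
// with a JSON body, aimed at a command endpoint.
const buildCommandRequest = function (url, data) {
  return {
    url,
    options: {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify(data)
    }
  };
};

// With a running application (Node.js 18+ has fetch built in):
//
//   const { url, options } = buildCommandRequest(
//     'http://localhost:3000/command/v2/communication/message/send',
//     { text: 'Hello, world!' }
//   );
//   const response = await fetch(url, options);
```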

Cancelling a command

To cancel a command, send a POST request with the following JSON data structure in the body to the cancel endpoint of the runtime. Of course, the specific names of the context, the aggregate and the command itself, as well as the aggregate id and the command's data depend on the domain you have modeled:

{
  "contextIdentifier": { "name": "communication" },
  "aggregateIdentifier": { "name": "message", "id": "d2edbbf7-a515-4b66-9567-dd931f1690d3" },
  "name": "send",
  "id": "<command-id>"
}

A sample call to curl might look like this:

$ curl \
    -i \
    -X POST \
    -H 'content-type: application/json' \
    -d '<json>' \
    http://localhost:3000/command/v2/cancel

Please note that you can cancel commands only as long as they are not yet being processed by the domain.
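
The cancel payload can be assembled programmatically. This is a sketch; `buildCancelPayload` is a hypothetical helper, and the identifier values depend on your domain:

```javascript
// Hypothetical helper that assembles the JSON body expected by the
// cancel endpoint.
const buildCancelPayload = function ({ contextName, aggregateName, aggregateId, commandName, commandId }) {
  return {
    contextIdentifier: { name: contextName },
    aggregateIdentifier: { name: aggregateName, id: aggregateId },
    name: commandName,
    id: commandId
  };
};
```

The resulting object is then sent as JSON in a POST request to the cancel endpoint, just like in the curl example.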

Querying a view

To query a view, send a GET request to the views endpoint of the runtime. The response is a stream of newline separated JSON objects, using application/x-ndjson as its content-type. This response stream does not contain heartbeats and ends as soon as the last item is streamed.

A sample call to curl might look like this:

$ curl \
    -i \
    http://localhost:3000/views/v2/messages/all
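
Since the response is newline-separated JSON, a client has to split and parse it line by line. A minimal sketch (`parseNdjson` is a hypothetical helper; the sample lines are made up, as real view items depend on your domain):

```javascript
// Parse an application/x-ndjson body: one JSON object per non-empty line.
const parseNdjson = function (text) {
  return text
    .split('\n')
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line));
};

const sample = '{"id":"1","text":"Hello"}\n{"id":"2","text":"World"}\n';

console.log(parseNdjson(sample));
// [ { id: '1', text: 'Hello' }, { id: '2', text: 'World' } ]
```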

Subscribing to domain events

To receive domain events, send a GET request to the domain events endpoint of the runtime. The response is a stream of newline-separated JSON objects, using application/x-ndjson as its content-type. From time to time, a heartbeat will be sent by the server as well, which you may want to filter.

A sample call to curl might look like this:

$ curl \
    -i \
    http://localhost:3000/domain-events/v2
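
Heartbeats arrive as ordinary objects within the stream, so clients usually filter them out after parsing. A sketch, assuming heartbeats can be recognized by a `name` property of `'heartbeat'` (the exact heartbeat shape is an assumption; check what your server actually sends):

```javascript
// Drop heartbeat objects from a stream of parsed ndjson items. The
// predicate below is an assumption about the heartbeat shape.
const withoutHeartbeats = function (items) {
  return items.filter(item => item.name !== 'heartbeat');
};

const received = [
  { name: 'heartbeat' },
  { name: 'sent', data: { text: 'Hello, world!' } }
];

console.log(withoutHeartbeats(received).length);
// 1
```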

Subscribing to notifications

To receive notifications, send a GET request to the notifications endpoint of the runtime. The response is a stream of newline-separated JSON objects, using application/x-ndjson as its content-type. From time to time, a heartbeat will be sent by the server as well, which you may want to filter.

A sample call to curl might look like this:

$ curl \
    -i \
    http://localhost:3000/notifications/v2

Using the GraphQL interface

wolkenkit provides a GraphQL endpoint under the following address:

  • http://localhost:3000/graphql/v2

You can use it to submit and cancel commands, query views, and subscribe to domain events and notifications. If you point your browser to this endpoint, you will get an interactive GraphQL playground.

Sending commands

To send a command, send a mutation with the following data structure to the GraphQL endpoint of the runtime. Of course, the specific names of the context, the aggregate and the command itself, as well as the aggregate id and the command's data depend on the domain you have modeled:

mutation {
  command {
    communication_message_send(aggregateIdentifier: { id: "d2edbbf7-a515-4b66-9567-dd931f1690d3" }, data: { text: "Hello, world!" }) {
      id,
      aggregateIdentifier {
        id
      }
    }
  }
}
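
Over plain HTTP, a GraphQL request is a POST whose JSON body carries the query string. A sketch (`buildGraphqlRequest` is a hypothetical helper):

```javascript
// Wrap a GraphQL document into the request shape expected by a GraphQL
// HTTP endpoint: a JSON object with a `query` property.
const buildGraphqlRequest = function (query) {
  return {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ query })
  };
};

// With a running application:
//
//   const response = await fetch(
//     'http://localhost:3000/graphql/v2',
//     buildGraphqlRequest('mutation { ... }')
//   );
```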

Cancelling a command

To cancel a command, send a mutation with the following data structure to the GraphQL endpoint of the runtime. Of course, the specific names of the context, the aggregate and the command itself, as well as the aggregate id and the command's data depend on the domain you have modeled:

mutation {
  cancel(commandIdentifier: {
    contextIdentifier: { name: "communication" },
    aggregateIdentifier: { name: "message", id: "d2edbbf7-a515-4b66-9567-dd931f1690d3" },
    name: "send",
    id: "0a2d394c-2873-4643-84fd-dbcc43d80c5b"
  }) {
    success
  }
}

Please note that you can cancel commands only as long as they are not yet being processed by the domain.

Querying a view

To query a view, send a query to the GraphQL endpoint of the runtime:

query {
  messages {
    all {
      id
      timestamp
      text
      likes
    }
  }
}

Subscribing to domain events

To receive domain events, send a subscription to the GraphQL endpoint of the runtime. The response is a stream of objects, where the domain events' data has been stringified:

subscription {
  domainEvents {
    contextIdentifier { name },
    aggregateIdentifier { name, id },
    name,
    id,
    data
  }
}
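
Because the data field arrives stringified, clients typically parse it before use. A sketch (`parseDomainEvent` is a hypothetical helper; the sample event is made up):

```javascript
// Turn a received domain event with stringified data back into a usable
// object.
const parseDomainEvent = function (rawEvent) {
  return { ...rawEvent, data: JSON.parse(rawEvent.data) };
};

const received = {
  contextIdentifier: { name: 'communication' },
  aggregateIdentifier: { name: 'message', id: 'd2edbbf7-a515-4b66-9567-dd931f1690d3' },
  name: 'sent',
  id: '0a2d394c-2873-4643-84fd-dbcc43d80c5b',
  data: '{"text":"Hello, world!"}'
};

console.log(parseDomainEvent(received).data.text);
// Hello, world!
```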

Subscribing to notifications

To receive notifications, send a subscription to the GraphQL endpoint of the runtime. The response is a stream of objects, where the notifications' data has been stringified:

subscription {
  notifications {
    name,
    data
  }
}

Managing files

wolkenkit provides a file storage service that acts as a facade to a storage backend such as S3 or the local file system. It can be addressed using an HTTP API.

Using the HTTP interface

wolkenkit provides three primary endpoints in local development mode:

  • http://localhost:3000/files/v2/add-file adds a file
  • http://localhost:3000/files/v2/file/:id gets a file
  • http://localhost:3000/files/v2/remove-file removes a file

Adding files

To add a file, send a POST request with the file to be stored in its body to the add-file endpoint of the runtime. Send the file's id, its name and its content type using the x-id, x-name and content-type headers.

A sample call to curl might look like this:

$ curl \
    -i \
    -X POST \
    -H 'x-id: 03edebb0-7a36-4902-a082-ef979982a12c' \
    -H 'x-name: hello.txt' \
    -H 'content-type: text/plain' \
    -d 'Hello, world!' \
    http://localhost:3000/files/v2/add-file
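
From Node.js, the same upload boils down to a POST with the raw file content as the body and the metadata in headers. `buildAddFileRequest` is a hypothetical helper mirroring the curl call above:

```javascript
// Assemble the request for the add-file endpoint: metadata goes into the
// x-id, x-name and content-type headers, the file content into the body.
const buildAddFileRequest = function ({ id, name, contentType, content }) {
  return {
    method: 'POST',
    headers: {
      'x-id': id,
      'x-name': name,
      'content-type': contentType
    },
    body: content
  };
};
```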

Getting files

To get a file, send a GET request with the file id as part of the URL to the file endpoint of the runtime.

A sample call to curl might look like this:

$ curl \
    -i \
    http://localhost:3000/files/v2/file/03edebb0-7a36-4902-a082-ef979982a12c

You will get the file's id, name and its content-type in the x-id, x-name and content-type headers.

Removing files

To remove a file, send a POST request with the following JSON structure to the remove-file endpoint of the runtime:

{
  "id": "03edebb0-7a36-4902-a082-ef979982a12c"
}

A sample call to curl might look like this:

$ curl \
    -i \
    -X POST \
    -H 'content-type: application/json' \
    -d '{"id":"03edebb0-7a36-4902-a082-ef979982a12c"}' \
    http://localhost:3000/files/v2/remove-file

Authenticating a user

For authentication, wolkenkit relies on OpenID Connect, so to use authentication you have to set up an external identity provider such as Auth0 or Keycloak.

Configure it to use the implicit flow, copy its certificate to your application directory, and set the --identity-provider-issuer and --identity-provider-certificate flags when running npx wolkenkit dev. For details, see the CLI's integrated help. Please make sure that your identity provider issues tokens using the RS256 algorithm, otherwise wolkenkit won't be able to decode and verify them.

If a user tries to authenticate with an invalid or expired token, they will receive a 401 response. If the user doesn't send a token at all, they will be given a token that identifies them as anonymous. By default, you cannot differentiate between multiple anonymous users. If you need this, set the x-anonymous-id header in the client accordingly.

Packaging the application into a Docker image

To package the application into a Docker image, change to the application directory and run the following command. Assign a custom tag to name the Docker image:

$ docker build -t <tag> .

Then you can push the created Docker image into a registry of your choice, for example to use it in Kubernetes.

Running the application with docker-compose

Once you have built the Docker image, you can use docker-compose to run the application. The application directory contains a subdirectory named deployment/docker-compose, which contains ready-made scripts for various scenarios.

Basically, you can choose between the single-process runtime and the microservice runtime. While the former runs the entire application in a single process, the latter splits the different parts of the application into different processes, each of which you can then run on a separate machine.

Using docker-compose also allows you to connect your own databases and infrastructure components. For details see the respective scripts.

To start the microservice runtime with a PostgreSQL database for persistence, first start the stores:

$ docker-compose -f stores.postgres.yml up -d

Then run the setup service to prepare the stores and any infrastructure you have added (please note that this only needs to be done once):

$ docker-compose -f setup.postgres.yml run setup

This will show a warning about orphaned containers, which you can safely ignore. Lastly, start your application:

$ docker-compose -f microservice.postgres.yml up

Configuring data stores

wolkenkit uses a number of stores to run your application. In the local development mode, these stores are all run in-memory, but if you run the application using Docker, you will probably want to use persistent data stores. The following databases are supported for the domain event store, the lock store, and the priority queue store:

  • In-memory
  • MariaDB
  • MongoDB
  • MySQL
  • PostgreSQL
  • Redis (only for the lock store)
  • SQL Server

Please note that MongoDB must be at least version 4.2, and that you need to run it as a replica set (a single node cluster is fine).

For details on how to configure the databases, please have a look at the source code. This will be explained in more detail in the final version of the documentation.

Getting help

Please remember that this version is a community technology preview (CTP) of the upcoming wolkenkit 4.0. Therefore it is possible that not all provided features work as expected or that some features are missing completely.

BEWARE: Do not use the CTP in production; it is only meant for getting a first impression of, and evaluating, the upcoming wolkenkit 4.0.

If you experience any difficulties, please create an issue and provide any steps required to reproduce the issue, as well as the expected and the actual result. Additionally provide the versions of wolkenkit and Docker, and the type and architecture of the operating system you are using.

Ideally, also include a short but complete code sample to reproduce the issue; depending on the issue, however, this may not always be possible.

Running the build

To build this module use roboter:

$ npx roboter

Running the fuzzer

There is some fuzzing for parts of wolkenkit. To run the fuzzer, use:

$ npm run fuzzing

Running the fuzzer can take a long time. The usual time is about 5 hours. Detailed results for the fuzzing operations are written to /tmp.

Publishing an internal version

While working on wolkenkit itself, it is sometimes necessary to publish an internal version to npm, e.g. to be able to install wolkenkit from the registry. To publish an internal version run the following commands:

$ npx roboter build && npm version 4.0.0-internal.<id> && npm publish --tag internal && git push && git push --tags

Comments
  • Docker for Windows using Hyper-V

    I'm using Docker for Windows, as suggested by Docker Inc.

    Is it possible to add an installation guide like "Installing using Docker Machine", but for "Installing using Docker with Hyper-V"?

    Feature Documentation 
    opened by Pandaros 35
  • Events may not get ordered correctly

    What is this bug about?

    When storing events in the event store, we write a position field to get a global ordering for replaying. Until now we assumed that this is safe, since the position is incremented automatically (except in MongoDB), and we assumed that accessing the sequence is safe due to the surrounding transaction. As it turns out, this is not the case (see this StackOverflow question for details). Unfortunately, this not only affects PostgreSQL, but basically all databases we support.

    A simple way to solve this issue would be to acquire a table lock, but this is pretty expensive. As a cheap alternative, we may use advisory locks, which exist for all relational databases we support:

    In MongoDB, there is no such lock concept, but since MongoDB doesn't have auto incrementing fields anyway, we need a different approach here. We currently assume that if we introduced transactions for MongoDB, things would work (also see #65).

    What steps are needed to fix the bug?

    • [x] Decide whether to use advisory locks or table locking
    • [x] Implement locking for relational databases
    • [ ] Define a way how to deal with this in MongoDB
    • [ ] Implement locking in MongoDB

    What else should we know?

    This bug was brought up by @rkaw92 on Slack, and we should notify him once we have solved this issue (he's also on Twitter).

    Bug Event store 
    opened by goloroden 21
  • Introduce function-based authorization

    What is this feature about?

    Right now, authorization is configured using an object. It would be way more flexible if this was done using a function that is called to get the actual authorization options for a command or an event. The same approach could then be implemented for the read model and the file storage.

    What needs to be done to implement this feature?

    • [x] Update the core
      • [x] Replace the existing authorization mechanism
      • [x] Update tests
    • [x] Update the broker
      • [x] Replace the existing authorization mechanism
      • [x] Update tests
    • [x] Update the CLI
      • [x] Update dependencies
      • [x] Drop support wolkenkit < 4
    • [ ] Update flows
      • [ ] Update impersonation
    • [ ] Update the file storage
      • [ ] Replace the existing authorization mechanism
      • [ ] Update tests
    • [ ] Update the client SDK
      • [ ] Update impersonation
    • [ ] Run a few real-world tests
      • [ ] Boards
      • [ ] Never completed game
    • [ ] Introduce backwards compatibility layer
      • [ ] For transferOwnership
      • [ ] For authorize
    • [ ] Update documentation

    What else should we know?

    This issue was brought up by @schmuto, so we should notify him once this has been done.

    Feature Security 
    opened by goloroden 16
  • fix: bump @types/node from 14.14.12 to 14.14.14

    Bumps @types/node from 14.14.12 to 14.14.14.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

    Additionally, you can set the following in the .dependabot/config.yml file in this repo:

    • Update frequency
    • Automerge options (never/patch/minor, and dev/runtime dependencies)
    • Out-of-range updates (receive only lockfile updates, if desired)
    • Security updates (receive only security updates, if desired)
    Dependencies 
    opened by dependabot-preview[bot] 14
  • fix: bump validate-value from 8.9.9 to 8.9.11

    Bumps validate-value from 8.9.9 to 8.9.11.

    Changelog

    Sourced from validate-value's changelog.

    8.9.11 (2020-12-28)

    Bug Fixes

    8.9.10 (2020-12-19)

    Bug Fixes

    Commits

    Dependencies 
    opened by dependabot-preview[bot] 13
  • Cannot run demo application on Ubuntu 18.04

    What is this bug about?

    I am setting up wolkenkit and going through the tutorial. I cannot start the demo application.

    What steps are needed to reproduce the bug?

    Don't know if it's reproducible but I'm going through the tutorial:

    • Fresh node and docker setup (also did the Docker post-install steps)
    • Fresh wolkenkit installation
    • wolkenkit init
    • wolkenkit start

    The application stops at Building Docker images.... I got docker images but no containers.

    What is the expected result?

    The demo application starts without issues.

    What is the actual result?

    This is the output if I run wolkenkit start --verbose

      Starting the application...
      Verifying health on environment default...
      Application host local.wolkenkit.io resolves to 127.0.0.1.
      Docker server resolves to 127.0.0.1 and ::1.
      Verifying application status...
      Verifying that ports are available...
      Setting up network...
      Building Docker images...
    ✗ Failed to start the application.
      unable to prepare context: path "/tmp/p1RZVL" not found
    
      ExecutableFailed: unable to prepare context: path "/tmp/p1RZVL" not found
    
        at new CustomError (/home/cwalther/.nvm/versions/node/v10.12.0/lib/node_modules/wolkenkit/node_modules/defekt/dist/defekt.js:44:116)
        at /home/cwalther/.nvm/versions/node/v10.12.0/lib/node_modules/wolkenkit/dist/shell/exec.js:29:18
        at ChildProcess.exithandler (child_process.js:296:5)
        at ChildProcess.emit (events.js:182:13)
        at maybeClose (internal/child_process.js:962:16)
        at Socket.stream.socket.on (internal/child_process.js:381:11)
        at Socket.emit (events.js:182:13)
        at Pipe._handle.close (net.js:606:12)
    
    

    What else should we know?

    I am using:

    • Ubuntu 18.04.1
    • Node 10.12
    • Docker 18.06.1-ce
    Bug CLI Issue awaits feedback 
    opened by CarstenWalther 13
  • How flexible is the read model ?

    What is this question about?

    Hi, I'm new to wolkenkit, and while I'm looking around in the docs and on GitHub, some questions popped into my head and I couldn't find anything related to this.

    The read model is based on lists and is really close to MongoDB. But what happens if you need something other than lists, or if MongoDB isn't really the best tool for the job? For example, a graph database like Neo4j could be better for some use cases. If my app contains a lot of tables with static columns, SQL might be better here. And when I saw #83: why not use Elasticsearch, which is designed for text search?

    So my questions:

    • What types of databases are supported by wolkenkit for the read model?
    • Can we use, or are there plans to support, multiple databases at the same time for the read model?
    • Would it be easy to create a custom denormalizer for another database?
      • This probably means adding another type like lists
      • It also includes streaming data to clients

    I feel like the read model API is trying to create an abstraction layer over MongoDB, but at the same time it doesn't do a good job and doesn't give the developer anything more. Why not just give direct access to the database, or even better, remove the database specifics from the read model API?

    const projections = {
      // ...
      'communication.message.liked' (messages, event) {
        messages.update({
          where: { id: event.aggregate.id },
          set: {
            likes: event.data.likes
          }
        });
      }
    };
    

    Instead it could be something like :

    const messages = mongoose.model("Messages");
    
    const projections = {
      async 'communication.message.liked' (event) {
        const msg = await messages.updateOne({
          id: event.aggregate.id
        }, {
          $set: {
            likes: event.data.likes
          }
        });
    
        return msg; // Use this element for streaming ?
      }
    };
    

    (bear with me, my mongoose is rusty)

    Of course this would impact the client API, but in the end the whole thing is much more flexible, isn't it? This might be related to #102 and also #90.

    What else should we know?

    I haven't build an app yet with wolkenkit, but I'm in the process of.

    Read model Question 
    opened by jbeaudoin11 11
  • When trying to execute "wolkenkit init" I get the error "Template not found"

    Following the tutorial in the documentation (initializing-the-application), I get the error "Template not found" when executing "wolkenkit init".

    Console:

    wolkenkitdev@wolkenkitdev ~/jsdoc $ cd chat/
    wolkenkitdev@wolkenkitdev ~/jsdoc/chat $ ls -l
    total 0
    wolkenkitdev@wolkenkitdev ~/jsdoc/chat $ wolkenkit init
    ✗ Failed to initialize a new application.
      Template not found.
    

    This is running on a Linux Mint installation within a VirtualBox VM on a Windows host. The network is configured as NAT. Browsing the web (and the installation) worked without setting a proxy configuration (so I did not touch it).

    Bug CLI 
    opened by MuwuM 11
  • Events saved to eventstore based on transaction system

    What is this feature about?

    This feature is about saving events to the event store, with the possibility of rolling back those events if something fails. I have the following situation:

    1. I'm publishing events to the event store
    2. I need to make an ajax call to another microservice/3rd-party resource to perform some action
    3. This microservice/3rd-party resource fails with a 400
    4. I need to roll back the events that were saved in the event store and reject the whole command action

    Currently I'm not able to do that, because wolkenkit-eventstore doesn't support transactions.

    What needs to be done to implement this feature?

    • [ ] I need a separate method to start a transaction and save events, which returns some object with methods named, for example, rollback and commit, to be able to reject/finish the transaction
    Feature 
    opened by codemasternode 10
  • Lockstore implementations

    Brings implementations for:

    • MySql
    • MariaDb
    • MongoDb
    • Postgres
    • Redis
    • SqlServer.

    All implementations pass integration and unit tests. During development, the integration test renewLock#'renews the lock.' has been known to be a bit flaky with the MariaDb/MySql implementations.

    After some interesting reading about distributed locks, including this very interesting article by Redis, it may be that the current implementation is a bit loose and does not fit 100% of the final requirements of wolkenkit.

    I would recommend preferring the Redis lockstore implementation, as it is the closest to the recommended Redlock algorithm (although it works only on a single node).

    The username provided during the initialization of the RedisLockstore is used as the value, along with the hashed key, to validate that the lock is indeed held by that lockstore instance. Thus, providing unique names can guarantee, for different operations (mainly renewLock), that no other lockstore instance is trying to hijack a current lock.

    opened by damienbenon 9
  • Upload size is limited by wolkenkit-proxy

    What is this bug about?

    wolkenkit-depot-file allows you to store large files. However, since the introduction of wolkenkit-proxy, the upload size seems to be limited to 1 MB.

    What steps are needed to reproduce the bug?

    • Start a wolkenkit application
    • Upload a file larger than 1MB using the wolkenkit-depot-client-js from Node
    • Catch the error and log the output

    What is the expected result?

    • addFile should not throw an error.
    • The file should be uploaded.

    What is the actual result?

    • The wolkenkit application responds with status 413 and the following result:
    <html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor="white">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.14.2</center>\r\n</body>\r\n</html>\r\n
    

    What else should we know?

    • I've created a repo called depot-test to make this reproducible
    • This problem seems to be caused by the nginx proxy, where client_max_body_size defaults to 1 MB. This option should be added only to the nginx server that is responsible for the depot. When set to 0, the check is omitted.
    • Additionally, wolkenkit-depot-client-js needs to be adjusted, since axios also has an option called maxContentLength. This option seemed to cause errors when uploading files larger than 10 MB. If you set the option to Infinity, large uploads, e.g. of 500 MB, are possible.
    • This bug was reported by @SteffenGottschalk, and @timrach. So we should notify them once this has been done.
    Bug File storage 
    opened by mattwagl 9
  • Database load in idle

    ...as discussed with @goloroden by mail

    What is this bug about?

    In idle, there are a lot of database queries.

    What steps are needed to reproduce the bug?

    • Create a new application from chat template
    • Use PostgreSQL database (set appropriate environment variables: CONSUMER_PROGRESS_STORE_OPTIONS, ...)
    • Start backend (npx wolkenkit dev)

    What else should we know?

    [screenshot showing the database queries in idle]

    opened by kusigit 0
  • SQL Server: String or binary data would be truncated in table 'db.items-command' column 'item'

    What is this bug about?

    At the moment, it is not possible to send a command with a payload of more than about 820 bytes (SQL Server only).

    What is the expected result?

    It should be possible to send a payload of more than 820 bytes.

    What is the actual result?

    The API crashes with SQL Server error 2628 (String or binary data would be truncated in table...)

    What steps are needed to reproduce the bug?

    • Use an authenticated user (valid access token)
    • Send a command to the API with more than 820 bytes.

    What else should we know?

    Looking at the JSON object, we need about 3,180 bytes for metadata such as user, initiator, token, and so on. Most of that is taken up by the token string. This means we have only about 820 bytes left for the data object.
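    The byte budget above can be illustrated as follows; the 4,000-byte column limit is an assumption derived from 3,180 + 820, and the real column type may differ:

```javascript
// Hypothetical illustration of the budget described in this issue.
const columnLimit = 4000;      // assumed size of the 'item' column
const metadataBytes = 3180;    // user, initiator, token, ...
const remainingForData = columnLimit - metadataBytes;

console.log(remainingForData); // prints 820
```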

    Hint

    Domain Event PriorityQueueStore

    opened by kusigit 0
  • Move database drivers and similar dependencies to optional dependencies

    Move database drivers and similar dependencies to optional dependencies

    What is this task about?

    Currently we have many database drivers and similar packages as regular dependencies, e.g. mysql, pg, and mssql. We always install all of them, regardless of which ones are actually used. This increases the installed size of wolkenkit, adds bloat, and widens the potential attack surface for supply-chain attacks.

    I think we should let users decide which database drivers to install. This puts an additional burden on them, but it is probably justifiable.

    To implement this in wolkenkit, I think we should load these libraries dynamically whenever they are needed. However, this is just an early thought and might run into problems.

    What needs to be done to complete this task?

    • [ ] Research whether npm's optionalDependencies and dynamic importing of libraries are a good solution to the bloat problem
    • [ ] Move all database drivers, file-store drivers, etc. to optional dependencies and only import them dynamically

    What else should we know?

    I have no idea how to test this.

    Task Security 
    opened by yeldiRium 0
  • The CLI replay command might run into issues if a flow is already replaying

    The CLI replay command might run into issues if a flow is already replaying

    What is this bug about?

    The ConsumerProgressStore throws an exception when starting a replay for a consumer that is already replaying. The replay CLI command does not check whether the individual flows are already replaying before starting their replay.

    What is the expected result?

    After running the replay command, all flows mentioned in the command arguments should be replaying. If any of them is already replaying, its replay should either be restarted or keep going; which of the two is yet to be decided.

    What is the actual result?

    Not sure; this is just a theory. But the CLI command probably throws and fails if any of the flows it tries to replay is already replaying.

    What steps are needed to reproduce the bug?

    • Create a flow
    • Let it process some domain events
    • Start a replay for it, without actually sending it the domain events again
    • Use the replay CLI command to replay the flow again
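    A guard in the CLI could skip flows whose replay is already in progress instead of letting the consumer progress store throw; this is a hedged sketch with hypothetical function names (isReplaying, startReplay), not wolkenkit's actual API:

```javascript
// Hypothetical guard: only start replays for flows that are not already
// replaying, and report which flows were actually started.
const replayFlows = async function ({ flowNames, isReplaying, startReplay }) {
  const started = [];

  for (const flowName of flowNames) {
    if (await isReplaying(flowName)) {
      // Alternatively, restart the replay here, once the semantics are decided.
      continue;
    }
    await startReplay(flowName);
    started.push(flowName);
  }

  return started;
};
```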
    Bug Issue awaits feedback 
    opened by yeldiRium 1
  • Speed up the priority queue

    Speed up the priority queue

    What is this task about?

    For 4.0 we introduced the priority queue and focused on making things work (see #1898). However, the SQL-based implementations are not fast, since a single change in the heap requires several UPDATE calls. We should think about how to speed things up here.

    One idea is to always manage the heap in-memory, and only store snapshots (or events 😉) in the database, so that fewer database accesses are needed.
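    The in-memory idea could be sketched like this: a min-heap that lives entirely in memory and calls a persistence hook only every few mutations, instead of issuing UPDATEs per heap operation. All names here are hypothetical, and real code would also need crash recovery from the last snapshot:

```javascript
// Hypothetical sketch: in-memory min-heap with periodic snapshot persistence.
class SnapshottedPriorityQueue {
  constructor ({ persistSnapshot, snapshotInterval = 100 }) {
    this.heap = [];
    this.persistSnapshot = persistSnapshot;
    this.snapshotInterval = snapshotInterval;
    this.mutations = 0;
  }

  maybeSnapshot () {
    this.mutations += 1;
    if (this.mutations % this.snapshotInterval === 0) {
      this.persistSnapshot([ ...this.heap ]);
    }
  }

  enqueue (item, priority) {
    this.heap.push({ item, priority });
    let i = this.heap.length - 1;
    while (i > 0) {
      const parent = Math.floor((i - 1) / 2);
      if (this.heap[parent].priority <= this.heap[i].priority) { break; }
      [ this.heap[parent], this.heap[i] ] = [ this.heap[i], this.heap[parent] ];
      i = parent;
    }
    this.maybeSnapshot();
  }

  dequeue () {
    if (this.heap.length === 0) { return undefined; }
    const top = this.heap[0];
    const last = this.heap.pop();
    if (this.heap.length > 0) {
      this.heap[0] = last;
      let i = 0;
      for (;;) {
        const left = 2 * i + 1, right = 2 * i + 2;
        let smallest = i;
        if (left < this.heap.length && this.heap[left].priority < this.heap[smallest].priority) { smallest = left; }
        if (right < this.heap.length && this.heap[right].priority < this.heap[smallest].priority) { smallest = right; }
        if (smallest === i) { break; }
        [ this.heap[smallest], this.heap[i] ] = [ this.heap[i], this.heap[smallest] ];
        i = smallest;
      }
    }
    this.maybeSnapshot();
    return top.item;
  }
}
```

    The trade-off is durability: anything enqueued after the last snapshot is lost on a crash, which is why persisting events rather than snapshots may be the safer variant.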

    What needs to be done to complete this task?

    • [ ] Think about how to speed up the heap when using SQL
    • [ ] Implement it
    Task 
    opened by goloroden 1
  • Create a client SDK

    Create a client SDK

    What is this feature about?

    First, we should discuss whether we want a client SDK at all, given that we now have a GraphQL endpoint. However, we probably want one at least for the file API. If we do that, we might also want a general client SDK.
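    As a feel for the scope, a minimal SDK helper might start with something like building command URLs for the local development HTTP interface; the endpoint shape is taken from the quick start, while the concrete context, aggregate, and command names below are hypothetical:

```javascript
// Hypothetical helper: build the URL for submitting a command to the local
// development HTTP interface.
const buildCommandUrl = function ({ baseUrl, contextName, aggregateName, commandName }) {
  return `${baseUrl}/command/v2/${contextName}/${aggregateName}/${commandName}`;
};

const url = buildCommandUrl({
  baseUrl: 'http://localhost:3000',
  contextName: 'communication',
  aggregateName: 'message',
  commandName: 'send'
});
// url === 'http://localhost:3000/command/v2/communication/message/send'
```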

    What needs to be done to implement this feature?

    • [ ] Discuss what should be part of the client SDK
      • [ ] Accessing the File API
      • [ ] Sending commands, subscribing to events, and running queries
    • [ ] Create an SDK
    • [ ] Add tests
    • [ ] Decide how to build it (is it part of the official npm package, or is it a separate npm package?)
    • [ ] Publish it
    Feature 
    opened by goloroden 0