Open source infrastructure for scalable, reliable native integrations in B2B SaaS products

Overview


Open-source infrastructure for native integrations

Native, customer-facing integrations for your B2B SaaS made simple, reliable and extensible.


Explore the docs »

Quickstart 🚀 · Report Bug · Community Slack

Why

Building native integrations is costly and time-consuming, particularly as you support more integrations, deeper integrations and higher traffic. Most companies end up building the same infrastructure: scheduling, queueing, error handling, retries, authentication, logging, a local development environment, CI/CD, etc. Nango's goal is to make integration developers 10x more productive by providing them with this common infrastructure.

🎁 A packaged micro-service for native integrations

Nango is an independent micro-service that centralizes interactions with external APIs. It can be run locally, self-hosted or managed by Nango Cloud. Nango runs your own integration-specific code, abstracting away the common infrastructure across integrations. It supports integrations of arbitrary complexity and scale, while remaining simple to use, reliable and extensible.

Features

Nango comes with bullet-proof infrastructure focused on native integrations:

  • 📁 A lightweight code framework to standardize integration development
  • Built-in infrastructure for scheduling, queuing and retries
  • 🔒 Built-in OAuth support with automatic token refresh & UI components
  • 🛠 Delightful local development to test integrations as you code
  • 🔍 Powerful logging, monitoring and debugging
  • ❤️ Simple setup with a CLI and native SDKs
  • ⛔️ Automatic rate-limit detection & mitigation
  • 👥 Community-contributed Blueprints for common integration use cases
  • 🧩 Universal: works with any API, any programming language & framework
  • 💻 Self-hostable: a single Docker container for easy local development

Soon, we plan to support:

  • 📺   Central dashboard with sync history, API errors, latency, live connections, etc.
  • 🧠   Unified endpoints for multiple 3rd-party APIs & smart data transformation
  • 🚨  Advanced alerting & monitoring, exportable to Datadog, Sentry, etc.
  • ☁️  Cloud-hosted edition

…and many more capabilities.

📘 Blueprints

Our 22+ Blueprints, covering APIs such as Intercom, Airtable, Asana, Hubspot and Xero, help you kickstart your next integration. Add two lines of code to your frontend & Nango config (see Quickstart) and you get:

  • Built-in, pre-configured OAuth flow (see Quickstart)
  • Built-in, pre-configured request authorization
  • Automatic auth credentials handling & access token refresh
  • Automatic retries on timeouts
  • Automatic rate-limit handling
  • Full access to the API: use any endpoint & raw requests/responses
  • Community-contributed gotchas & learnings that cover everything the API docs missed (add yours too!)

Blueprints are optional: Nango also works with every other API. We add more Blueprints every week.

🚀  Quickstart

Follow our Quickstart guide to build a Slack integration from scratch in 10 minutes!

With Nango, your integration code will look like this (Node.js example, see other languages):

import { Nango } from '@nangohq/node-client';

const nango = new Nango();

// Actions are defined by you and live in your repo as code
// For example: Post a message to a Slack channel
nango.triggerAction('slack', 'notify', userId, {
    channelId: 'XXXXXXX',
    msg: 'Hello @channel, this is a notification triggered by Nango :tada:'
});

And in your frontend, run a full OAuth flow with a single line of code (using Nango's builtin OAuth server):

import Nango from '@nangohq/frontend';

const nango = new Nango('http://localhost:3003');

// Trigger an OAuth flow for 'slack' for the user with user-id 1
nango
    .connect('slack', '1')
    .then((result) => {
        console.log(`OAuth flow succeeded, integration has been setup for the user 🎉`);
    })
    .catch((error) => {
        console.error(`There was an error in the OAuth flow: ${error.error.type} - ${error.error.message}`);
    });

🔍  Learn more

  Like Nango? Follow our development by starring us here on GitHub

Issues
  • Implement webhook updates when new data is synced

    Implement webhook updates when new data is synced

    From a discussion with @cpursley: "Ultimately the end user might want to use that change data to update other records in their app. Perhaps a background job that queries nango tables for changes?"

    As pointed out by @cpursley, webhooks are backend agnostic (compared to callbacks) and could be the solution to avoid periodically polling Nango-synced data to update other records.

    key feature 
    opened by bastienbeurier 4
  • Upserts: Handling records deleted on the API (no longer returned)

    Upserts: Handling records deleted on the API (no longer returned)

    Currently when Nango does a refresh it fetches all the records from the remote, checks if they exist in the DB (using the unique_key) and depending on that updates the existing record or adds a new one.

    This means that if a record has been deleted on the remote, and hence no longer gets returned in the response, we don't do anything and just keep the old record around. This is one possible way to handle this scenario, another would be to also delete the record in the DB. A third option would be to mark it as deleted first and then remove it in a subsequent run (this would give the application a chance to act on the record before it gets removed).

    We are not sure yet which behaviour is best. If you have a use case for this, please leave a comment here so we can gather them.
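The third option ("mark as deleted first") could be sketched as a set difference between the unique keys already in the DB and the keys returned by the latest fetch. The function name and shapes are illustrative, not Nango's internals:

```javascript
// Sketch of the "mark as deleted first" strategy: given the unique keys
// currently stored and the keys returned by the latest full fetch,
// compute which rows should be flagged as deleted.
function findDeletedKeys(dbKeys, fetchedKeys) {
  const fetched = new Set(fetchedKeys);
  return dbKeys.filter((key) => !fetched.has(key));
}

// A later run could then hard-delete rows whose deleted flag is older
// than some grace period, giving the application time to react first.
```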

    enhancement 
    opened by rguldener 3
  • Store Nango-related data in any Postgres database & schema

    Store Nango-related data in any Postgres database & schema

    We currently store all Nango-related data in the default public schema of Postgres. We should use a separate schema for clarity, and make it configurable.

    enhancement 
    opened by bastienbeurier 3
  • Migrate to Temporal

    Migrate to Temporal

    Temporal.io is an OSS library that helps with handling scheduling, tasks and workers. This article shows how Airbyte leverages it and the types of benefits it provides.

    I am already starting to re-implement basic features around scheduling (e.g. cron) and jobs (e.g. status, retries) that would be better handled by Temporal.

    Temporal also plays well with self-hosting and with the Postgres database we already use.

    enhancement 
    opened by bastienbeurier 3
  • Webhooks not working with self-signed certificates.

    Webhooks not working with self-signed certificates.

    I am running the Nango worker in a container and connecting to my app on the host via host.docker.internal.

    The host app runs using self-signed SSL certs when debugging with Visual Studio. This is causing the webhook to fail with this message:

    [ERROR] Webhook response error from https://host.docker.internal:5001/webhooks/nango for job 3 Sync 2: self-signed certificate (check debug logs for details).
    

    To validate with curl, the following fails with the same error:

    curl -i -X POST -H "content-type: application/json"  --data '{ "type":"HELLO" }' https://host.docker.internal:5001/webhooks/nango
    

    But, if I add the --insecure flag it works.

    One suggestion would be to add an "IGNORE_SSL" flag to the env vars and handle it the same way curl does with --insecure.

    opened by Simcon 2
  • Mapped table created in 'nango' schema, not 'public'.

    Mapped table created in 'nango' schema, not 'public'.

    In the docs under "https://docs.nango.dev/nango-sync/schema-mappings" it reads:

    "If you only specify a table name in the mapped_table parameter, e.g. pokemon, Nango will use (or create) the table in the default schema of your Postgres database (by default this is called public)."

    This appears to work the opposite way: if I don't specify a schema, the table gets created under the nango schema. If I specify "public.tablename", it gets created in the public schema.

    I think I prefer the "nango as default schema" approach, so consider correcting the docs.

    opened by Simcon 2
  • Reddit example is broken

    Reddit example is broken

    From the #help-and-questions channel of the Slack community:

    npm run start syncRedditSubredditPosts redditapi throws errors and db table is not filled.

    Logs: nango_sync_error.log

    The problem seems to be that the DB string types we’re using are too small for some of the fields returned by the Reddit API.

    bug 
    opened by bastienbeurier 2
  • Support sync frequency configuration

    Support sync frequency configuration

    For simplicity's sake, we are starting with a default sync frequency of 1h. It should be easy to configure this sync frequency via:

    • The sync call (using the config object?)
    • Later on, a CLI and dashboard
    enhancement 
    opened by bastienbeurier 2
  • Pagination: Support cursors passed in response headers

    Pagination: Support cursors passed in response headers

    The Shopify Admin REST API uses cursors for pagination, but these are sent to the client as part of the response headers (instead of the JSON body): https://shopify.dev/api/usage/pagination-rest

    A real world example is the orders endpoint: https://shopify.dev/api/admin-rest/2022-10/resources/order#get-orders?since-id=123

    opened by rguldener 2
  • Add rate-limit detection & mitigation

    Add rate-limit detection & mitigation

    Objective

    When an HTTP call to the API fails due to a rate-limit error we want Nango to detect this and automatically implement an appropriate backoff strategy.

    Tasks

    • [ ] Detect rate-limit issues by looking at response status code & headers
    • [ ] Implement backoff (with X-After header if available)
    • [ ] Add configuration per Integration to parametrize rate-limit detection & mitigation tactics
    • [ ] Add proactive throttling if a potential rate-limit issue is forecast (so we don't run into it)
    • [ ] Make sure access token refresh is retried if it fails due to a rate-limit issue

    Maybe out of scope for a first version but worth thinking about: some rate limits last for very long periods, e.g. when a daily limit is exceeded. How should we handle these? If we just wait, the main application may wait a very long time for triggerAction to complete (or run into issues itself, e.g. if it is called from an AWS Lambda, which has a ~15min timeout).
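The detection and backoff steps above could be sketched as a small helper that honours the standard Retry-After header when present and otherwise falls back to capped exponential backoff (the header name and cap are assumptions, not Nango's implementation):

```javascript
// Sketch of rate-limit detection + backoff. Uses the standard
// Retry-After header when available; otherwise exponential backoff.
function backoffDelayMs(status, headers, attempt) {
  if (status !== 429) return 0; // not a rate-limit response
  const retryAfter = headers['retry-after'];
  if (retryAfter !== undefined) {
    return Number(retryAfter) * 1000; // header value is in seconds
  }
  return Math.min(1000 * 2 ** attempt, 60000); // capped exponential backoff
}
```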

    enhancement 
    opened by rguldener 2
  • Expose logs from docker container in the Nango repo (when running with docker compose)

    Expose logs from docker container in the Nango repo (when running with docker compose)

    Currently when Nango runs in the docker container the log files get written to a file that is in the docker container. This makes it hard for users to access these logs and debug issues.

    When the docker containers are run with docker compose (and we thus know the user is running it from within the Nango repo) we should write these log files to logs/combined.log and logs/error.log in the repository so the user can easily inspect them.

    One solution for this could be to use bind volumes, which would easily allow us to bind the logs folder into a folder in the container. The old Nango has an example of this here: https://github.com/NangoHQ/nango/blob/7dfa5ffb02b98af13bc608a04621daf73f6bb13e/docker-compose.yaml This will also help with production deployments as it gives users a way to specify where log files should get written.
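The bind-volume idea could look roughly like this docker-compose excerpt; the service name and container path are assumptions for illustration:

```yaml
# Hypothetical excerpt: bind-mount the repo's logs/ folder into the
# container so logs/combined.log and logs/error.log appear on the host.
services:
  nango-server:
    volumes:
      - ./logs:/usr/nango-server/logs
```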

    enhancement 
    opened by rguldener 1
  • Improve integration tests by making test Syncs run multiple jobs each

    Improve integration tests by making test Syncs run multiple jobs each

    Currently, the integration tests only test the 1st job of each Sync. This will not catch problems due to updating & deleting fields. We should run the tests so that each Sync runs multiple jobs.

    enhancement 
    opened by bastienbeurier 0
  • Trigger integration tests with CI using Github Actions

    Trigger integration tests with CI using Github Actions

    We recently added a bunch of integration tests. They need to be triggered manually at this point.

    We should integrate this into our CI. The main blocker I see is that these tests hit real APIs (e.g. Hubspot), which have already returned some errors due to rate limiting. An easy workaround would be to skip the test when we get a rate-limit response.

    enhancement 
    opened by bastienbeurier 0
  • Trigger Sync from JSON file

    Trigger Sync from JSON file

    The Sync config is basically a JSON blob.

    To simplify examples in the docs (and codebase), as well as reduce docs-updating work, we could centralise each Sync's configuration for all languages in a single JSON file.
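Such a centralised file might look roughly like this; the field names are borrowed from other parts of the docs and issues (frequency, mapped_table, unique_key), and the final format is an assumption:

```json
{
  "url": "https://api.example.com/contacts",
  "frequency": "1 hour",
  "mapped_table": "contacts",
  "unique_key": "id"
}
```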

    enhancement 
    opened by bastienbeurier 0
  • GitHub stargazers example shows wrong added row count in _nango_jobs

    GitHub stargazers example shows wrong added row count in _nango_jobs

    With Nango v0.4.0 I ran the syncGithubStargazers example:

    npm run example syncGithubStargazers NangoHQ nango 1
    

    The first job runs fine, but when I inspect the _nango_jobs table afterwards it shows that 201 records are unchanged. This is not possible, as this is the first ever run of this Sync; correct would be that 201 records have been added.

    bug 
    opened by rguldener 0
  • Rate limit error (status 429) makes the entire job fail

    Rate limit error (status 429) makes the entire job fail

    There is a retry policy with back-off between jobs, but not within one job that fetches many pages.

    The default fix for hitting a rate limit when fetching many pages is to adjust the page size + sync frequency.

    As an additional measure, we could keep the job alive and let it wait to fetch the remaining pages.
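The "keep the job alive" option could be sketched as a paging loop that waits and retries the current page on a 429 instead of failing the whole job. fetchPage here is a placeholder for the real per-page request:

```javascript
// Sketch: retry the current page on a 429 instead of failing the job.
// fetchPage(cursor) is a placeholder returning { status, records, nextCursor }.
async function fetchAllPages(fetchPage, waitMs = 1000) {
  const results = [];
  let cursor;
  let done = false;
  while (!done) {
    const res = await fetchPage(cursor);
    if (res.status === 429) {
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      continue; // retry the same page after backing off
    }
    results.push(...res.records);
    cursor = res.nextCursor;
    done = cursor == null; // no cursor means the last page was fetched
  }
  return results;
}
```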

    enhancement 
    opened by bastienbeurier 0
Releases (v0.4.0)
  • v0.4.0(Dec 20, 2022)

    We are thrilled to announce the improvements coming with the new release v0.4.0 :star-struck:

    Starting with the biggest one: be notified in real-time via a webhook on Sync job completion. We send along detailed information about added/updated/deleted rows. You also get notified when a job fails, so you can set up alerting.

    Another significant improvement: Pizzly (OAuth support) embedded in Nango! You now don't need to run Pizzly & Nango separately anymore. This will make it much easier for you to use Nango with OAuth APIs, while Pizzly can still be used in isolation.

    Other improvements this week:

    • Support for soft deletions, maintaining the record in your db with a deletion date flag
    • More job logging about added/updated/deleted/unchanged/fetched objects + fetched pages
    • Customize the Pizzly OAuth callback
    • Semantic versioning going forward + aligning Pizzly/Nango versions (hence the big version jump)
    • Better performance for detecting row changes using hashes
    • Fixed bug affecting the Reddit example
    • Fixed bug about pausing Syncs

    As always, feel free to share feedback or tell us what you want for following releases!

    Source code(tar.gz)
    Source code(zip)
  • v0.2.11(Dec 13, 2022)

    We have a new wave of improvements this week with v0.2.11 :gift:

    The most significant improvement is that you now can pause, restart, cancel and re-trigger Syncs from the API & SDK. This is useful if you want to control Sync jobs yourself, but also if you want to embed this capability in your product, e.g. to let users trigger a Sync job manually when they need to.

    Here is the list of other significant improvements for this week’s release:

    • Better structure for the documentation
    • Use cron notation for Syncs to run at fixed times
    • Use natural language for Syncs frequency, e.g. ‘3 minutes’
    • More efficient row updates
    • Specify any Postgres schema for your Sync destination table
    • Optionally use different schemas for Nango’s config vs. Synced data
    • Better log formatting (plain text vs. JSON)
    • Deployment on GCP
    • Add info about updated/new rows in Sync jobs
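For illustration, parsing a natural-language frequency like '3 minutes' into milliseconds could look like this (a hypothetical helper, not Nango's parser):

```javascript
// Hypothetical parser for natural-language frequencies, e.g. '3 minutes'.
function frequencyToMs(text) {
  const [amount, unit] = text.trim().split(/\s+/);
  const unitMs = { second: 1000, minute: 60000, hour: 3600000, day: 86400000 };
  return Number(amount) * unitMs[unit.replace(/s$/, '')]; // accept singular or plural
}
```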

    Have a great week :v:

  • v0.2.10(Dec 5, 2022)

    Today is the release day of v0.2.10 which brings a series of improvements.

    The most significant is that you can now have multiple Syncs send data to the same SQL table :exploding_head: (docs, issue). This is useful if you want to sync data of the same type (e.g. CRM contacts) across multiple customers in a single SQL table.

    The other big improvement area is observability :eyes:. You can now view information about Sync jobs both in the DB and the logs (docs).

    Other significant improvements:

    • Attach metadata to synced rows (docs, issue)
    • Cleaner logging (docs, issue)
    • Tentative public roadmap on Github here

    Other issues closed: #51, #39

  • v0.2.9(Nov 30, 2022)

    We have a special release today (v0.2.9) that solves one of the biggest hurdles of in-app integrations: OAuth.

    After becoming the official maintainers of the Pizzly OAuth repo last week, we built a way to leverage Pizzly within Nango. It now only takes 2 params in the Nango.sync config to delegate authentication (cf. docs).

    We also made progress with Pizzly, which we’ll communicate about separately in the #pizzly channel.

  • v0.2.7(Nov 21, 2022)

    The past week has been focused on making Nango more sturdy & compatible with a production environment.

    As part of this effort, we migrated to Temporal for background-processing orchestration. This has a couple of significant benefits:

    • It provides a clean admin/debugging dashboard for Sync jobs at http://localhost:8011
    • It is fully open source and easy to host on a single machine for quick deployments
    • It is battle-hardened for scale by leading tech enterprises
    • It will improve reliability and maintain state despite failures
    • It will accelerate development related to handling background jobs (e.g. retry policies, etc.)

    Temporal also replaces the previous implementation of RabbitMQ and stores job-related data in the same Postgres database that Nango already uses.

    Additionally, this week we released two important features for production environments:

    • The ability to configure how often Syncs run
    • The ability to configure your own custom Postgres database
  • v0.2.6(Nov 14, 2022)

    This week we are releasing an important feature that will make Nango much more usable, with no additional work on your side: automatic JSON-to-SQL mapping.

    For each Sync, an (optional) SQL table is created containing the synced data, mapped to a clean SQL schema with the right data types (in addition to the JSON blobs stored in the raw table).

    You can find additional information about how we perform the mapping (and more) in the documentation.
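The idea of deriving SQL column types from JSON values can be illustrated with a sketch like this (not Nango's actual mapping logic; names are illustrative):

```javascript
// Illustrative JSON-to-SQL type inference for a flat record.
function sqlType(value) {
  if (typeof value === 'boolean') return 'boolean';
  if (typeof value === 'number') {
    return Number.isInteger(value) ? 'integer' : 'double precision';
  }
  if (typeof value === 'string') return 'text';
  return 'jsonb'; // objects, arrays and null fall back to a JSON column
}

function inferColumns(record) {
  return Object.fromEntries(
    Object.entries(record).map(([key, value]) => [key, sqlType(value)])
  );
}
```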

    Additionally, this week we release:

  • v0.2.5(Nov 7, 2022)

    We are very excited to release v0.2.5, the very 1st official version of Nango 🎉

    This version contains all the basics to easily synchronize data from any external API:

    • Node SDK to create a continuous sync with a single call to Nango.sync(url, config)
    • HTTP API to create a continuous sync with a single HTTP request
    • <3mins Quickstart to see Nango in action
    • Detailed documentation for Nango
    • Support for various types of endpoint pagination (cursors, URL, link header, etc.)
    • Support for any query, body and header parameters in external API requests
    • Log sync/job information in a Postgres database
    • Containerize all components for easy single-machine deployment
    • Implement examples for Github, Hubspot, Pokémon, Reddit, Slack and Typeform
    • In-browser Postgres GUI to visualize the synced data and logs