A tool set for CSS including fast detailed parser, walker, generator and lexer based on W3C specs and browser implementations

Overview


CSSTree


CSSTree is a tool set for CSS: fast detailed parser (CSS → AST), walker (AST traversal), generator (AST → CSS) and lexer (validation and matching) based on specs and browser implementations. The main goal is to be efficient and W3C spec compliant, with focus on CSS analyzing and source-to-source transforming tasks.

NOTE: The library isn't in its final shape and needs further improvements (e.g. the AST format and API are subject to change in next major versions). However, it's stable and used in production by projects like CSSO (CSS minifier) and SVGO (SVG optimizer). The master branch contains changes for the next major version; for the stable published version see the 1.0 branch.

Features

  • Detailed parsing with an adjustable level of detail

    By default CSSTree parses CSS in as much detail as possible, i.e. each single logical part is represented by its own AST node (see AST format for all possible node types). The parsing detail level can be changed through parser options; for example, you can disable parsing of selectors or declaration values into component parts.

  • Tolerant to errors by design

    The parser behaves as the spec says: "When errors occur in CSS, the parser attempts to recover gracefully, throwing away only the minimum amount of content before returning to parsing as normal". The only departure from the specification is that the parser doesn't throw away bad content, but wraps it in a special node type (Raw) that allows processing it later.

  • Fast and efficient

    CSSTree is created with a focus on performance and efficient memory consumption. As a result, it's one of the fastest CSS parsers at the moment.

  • Syntax validation

    The built-in lexer can test CSS against syntaxes defined by W3C. CSSTree uses mdn/data as a basis for the lexer's dictionaries and extends it with vendor-specific and legacy syntaxes. Currently the lexer can only check declaration values, but this feature will be extended to other parts of CSS in the future.

Documentation

Tools

Related projects

Usage

Install with npm:

> npm install css-tree

Basic usage:

var csstree = require('css-tree');

// parse CSS to AST
var ast = csstree.parse('.example { world: "!" }');

// traverse AST and modify it
csstree.walk(ast, function(node) {
    if (node.type === 'ClassSelector' && node.name === 'example') {
        node.name = 'hello';
    }
});

// generate CSS from AST
console.log(csstree.generate(ast));
// .hello{world:"!"}

Syntax matching:

// parse CSS to AST as a declaration value
var ast = csstree.parse('red 1px solid', { context: 'value' });

// match to syntax of `border` property
var matchResult = csstree.lexer.matchProperty('border', ast);

// check first value node is a <color>
console.log(matchResult.isType(ast.children.first(), 'color'));
// true

// get a type list matched to a node
console.log(matchResult.getTrace(ast.children.first()));
// [ { type: 'Property', name: 'border' },
//   { type: 'Type', name: 'color' },
//   { type: 'Type', name: 'named-color' },
//   { type: 'Keyword', name: 'red' } ]

Top level API

API map

License

MIT

Comments
  • Union with `css-values` and `known-css-properties` repos

    Union with `css-values` and `known-css-properties` repos

    Ref to repo: https://github.com/ben-eb/css-values Ref to repo: https://github.com/betit/known-css-properties

    Why:

    • I don't think it's a good idea to keep everything in one project; its code can be divided into independent modules (this regards only csstree).
    • It would allow the open source community to have a good package.
    • As a participant in stylelint, I'm very interested in supporting one well-maintained package; it would be convenient for everyone.
    • @ben-eb and @vio currently have little time to contribute (judging by the number and frequency of their commits); together, all three of us (and maybe more :smile: ) could do it much faster and better.

    @ben-eb @vio what do you think about this?

    I would suggest creating a separate organization and performing the merge there. It would also give impetus to the creation of new and stable rules for stylelint and powerful postcss plugins.

    discussion 
    opened by alexander-akait 18
  • I want spacing between values in csstree 2.0.0

    I want spacing between values in csstree 2.0.0

    version: csstree 2.0.0

    expected

    border-bottom: 2px #f0f0f0 solid;
    box-shadow: 0 -5px 5px 0 rgba(0, 0, 0, 0.05);
    

    result

    border-bottom: 2px#f0f0f0 solid;
    box-shadow: 0-5px 5px 0 rgba(0, 0, 0, 0.05);
    

    The result works in modern browsers, but unfortunately I have to support IE11, so I want spacing between the values.

    How can I do this??

    Help me please.

    Thanks :)

    opened by demonguyj 11
  • Add a bunch of Grid Layout properties

    Add a bunch of Grid Layout properties

    1. Add named lines syntax detection
    2. Add justify-self and justify-item properties from the spec. MDN: justify-self, justify-item is not there yet.
    3. Add gap shorthand from the spec, which is an alias to the [grid-gap](https://developer.mozilla.org/en-US/docs/Web/CSS/grid-gap) property.
    syntax 
    opened by pepelsbey 11
  • Please republish alpha.39 as beta.1

    Please republish alpha.39 as beta.1

    Earlier tags were of the form alpha9 (compare with the current alpha.39, which features a dot). The problem is that, per semver spec, it makes alpha9 a higher version than alpha.39:

    Precedence for two pre-release versions with the same major, minor, and patch version MUST be determined by comparing each dot separated identifier from left to right until a difference is found as follows:

    • Identifiers consisting of only digits are compared numerically.
    • Identifiers with letters or hyphens are compared lexically in ASCII sort order.
    • Numeric identifiers always have lower precedence than non-numeric identifiers.
    • A larger set of pre-release fields has a higher precedence than a smaller set, if all of the preceding identifiers are equal.

    Since alpha < alpha9 in the ASCII sort order, the semantically highest version on the registry is 1.0.0-alpha9... Since package managers may install the highest available release, in some cases they'll install the four-year-old build. The fix is fortunately simple: please just republish the last build as beta.1, which will resolve the ambiguity.

    Ref https://github.com/npm/node-semver/issues/335, https://github.com/yarnpkg/berry/issues/1548 (after digging, it turns out semver is technically correct, and the problem comes from css-tree).

    opened by arcanis 10
  • add back support for browserify and webpack

    add back support for browserify and webpack

    Thanks for a great repository!

    I'm using it at philschatz/css-plus to implement some features in https://www.w3.org/TR/css-content-3/ and https://www.w3.org/TR/css-gcpm-3/ but noticed there is a file missing when building for the browser.

    I'm not sure but it seems this could fix it.

    opened by philschatz 10
  • Add tolerance mode to parser

    Add tolerance mode to parser

    .a {
      display: block;
    }.
    

    Gives

    Parse error: Identifier is expected
        1 |.a{
        2 |  display: block;
        3 |}.
    ---------^
    

    I expect the parser to notify me about this error, but then ignore it and continue consuming tokens.

    parser feature request 
    opened by smelukov 9
  • Generate walker code at build time

    Generate walker code at build time

    Let me start by saying that CSSTree is an awesome library. The AST design is great, and the speed is impressive. Thanks so much for writing it, @lahmatiy!

    At the moment, though, I'm running into an issue that prevents me from using it. Currently lib/walker/create.js generates some iterator code at runtime using new Function(). This unfortunately makes it impossible to use this library in the browser on many sites, because eval-like functions are generally blocked by content security policy when CSP is in use.

    This PR fixes that by generating that iterator code at build time instead of runtime. A new gen-walker-code.js script does the work; it places its output in dist/walker-generated.js. I moved some of the other data structures that lib/walker/create.js generates into the same file so that all of the related code could be moved out of lib/walker/create.js. It seems like a win to generate as much stuff at build time as possible.

    @lahmatiy, it has been quite a while since I've written code in the style used in this project, so I probably haven't done things in the best way. I'm happy to make any changes you'd like, so feel free to be as critical as possible. I hope you approve of the general idea at least; I'd love to get this upstreamed and get rid of my fork.

    I didn't bump the version in package.json because I thought you might want to handle that, but let me know if you'd like me to include that change in the patch.

    opened by sethfowler 8
  • !important without whitespace considered invalid in some cases

    !important without whitespace considered invalid in some cases

    It seems like !important immediately preceded by a ) is considered an invalid value, e.g.:

    width:calc(100% - 10px)!important;
    
    color:rgb(0,0,0)!important;
    
    opened by ArgonAlex 7
  • Add a node type

    Add a node type "None"

    I was looking for an easy way to remove nodes that match a certain blacklist, for example media queries that use the color feature.

    To do this, I traverse the tree and when I find a MediaFeature that matches a blacklisted one, I want to remove the atrule from the context.

    csstree.walk(ast, function(node) {
      if (node.type === 'MediaFeature' && node.name === 'color') {
        this.atrule.type = 'Raw';
        this.atrule.value = '';
      }
    });
    

    It occurred to me this is quite a hacky way to do things. I was wondering if we could perhaps add a special node type None that'd simply mean "ignore this node and all of its properties / children"?

    opened by fstanis 7
  • Using "unitless zero" leads to mistakenly reporting the value as invalid

    Using "unitless zero" leads to mistakenly reporting the value as invalid

    I'm using the latest (1.5.0) stylelint-csstree-validator and this line:

    background: url('image.png') 0 30px no-repeat;
    

    produces an error:

    Invalid value for `background`   csstree/validator
    

    Changing it to

    background: url('image.png') 0px 30px no-repeat;
    

    fixes the csstree/validator issue, but produces a new one:

    Unexpected unit                  length-zero-no-unit
    

    So, the latest stylelint-csstree-validator is in conflict with the length-zero-no-unit rule.

    bug lexer 
    opened by limonte 7
  • How to use with PostCSS?

    How to use with PostCSS?

    I've been unable to find out how to use csstree as the parser in PostCSS. PostCSS has a parser option, but passing 'css-tree' to that doesn't work, and spits out this error:

    TypeError: Cannot read property 'length' of undefined
        at MapGenerator.clearAnnotation (/Users/joelmoss/dev/rails_esbuild_play/node_modules/postcss/lib/map-generator.js:68:34)
        at MapGenerator.generate (/Users/joelmoss/dev/rails_esbuild_play/node_modules/postcss/lib/map-generator.js:273:10)
        at LazyResult.stringify (/Users/joelmoss/dev/rails_esbuild_play/node_modules/postcss/lib/lazy-result.js:228:20)
        at LazyResult.runAsync (/Users/joelmoss/dev/rails_esbuild_play/node_modules/postcss/lib/lazy-result.js:370:17)
        at LazyResult.async (/Users/joelmoss/dev/rails_esbuild_play/node_modules/postcss/lib/lazy-result.js:178:30)
        at LazyResult.then (/Users/joelmoss/dev/rails_esbuild_play/node_modules/postcss/lib/lazy-result.js:163:17)
        at /Users/joelmoss/dev/rails_esbuild_play/node_modules/postcss-cli/index.js:220:10
    

    Can anyone help please?

    wontfix 
    opened by joelmoss 6
  • improve performance of ident tokens

    improve performance of ident tokens

    Unless I overlooked them it appears that there aren't any benchmarks in CSSTree. So I was unable to measure this change.

    1. lower case characters are much more common in CSS source code. Checking these before upper case will result in a faster tokenizer.
    2. -- is a common pattern in custom properties and a cheap check. -webkit-... is also common but a slightly more expensive check. This order should be slightly faster.
    opened by romainmenke 0
  • Expose `/lexer/error`, `/lexer/match`, and `/lexer/prepare-tokens` for use in the package

    Expose `/lexer/error`, `/lexer/match`, and `/lexer/prepare-tokens` for use in the package

    Hi, I'm Yusuke, who develops Markuplint, an HTML and SVG linter.

    Thank you for the fantastic module. I use CSSTree to validate attribute values, mainly for SVG.

    Right now I use version 1.x. I use part of the functions by importing internal files, because that gives me access to detailed token processing.

    https://github.com/markuplint/markuplint/blob/bf2c9d8a87986e32b567a898ab20ede62ba6c70b/packages/%40markuplint/types/src/css-syntax.ts#L1-L14

    However, I can't import those files since v2.x. So I would like you to expose the functions I need.

    ( Or could you create an API? I want a function that I can pass a value and a type identifier to, and that returns a message with the AST and location. )

    Please consider.

    opened by YusukeHirao 0
  • Patch entries scroll-timeline-axis and scroll-timeline-name are properties, not types

    Patch entries scroll-timeline-axis and scroll-timeline-name are properties, not types

    Maybe I'm not fully understanding the purpose of these patch entries, but scroll-timeline-axis and scroll-timeline-name are properties, not types. I don't see mdn/data referencing either of these, so I'm not sure why they are included in the patch. If you're just adding entries that are missing, you might as well include scroll-timeline too. https://github.com/csstree/csstree/blob/593bf37cedfbc052ad0890a6ee851510034ad437/data/patch.json#L654-L664

    opened by pyoor 3
  • [Question] Is it possible to get the full selector for a nested rule?

    [Question] Is it possible to get the full selector for a nested rule?

    With the recently-added support for CSS Nesting, is there a good way to get the full selector value for a given nested rule -- that is, including any ancestor rule preludes? For example, given the following CSS:

    .one {
      color: red;
    
      &.two {
        color: green;
      }
    }
    

    From the context of the color: green declaration, ideally I want to get the full selector for that declaration (.one.two). rule.prelude returns &.two -- is there a convenient way to find ancestor/parent selectors, or better yet the full "un-nested" selector/list raw value?

    feature request utils 
    opened by jgerigmeyer 3
  • Color functions `lch()` (and probably `lab()` too) are missing whitespace in space-separated values

    Color functions `lch()` (and probably `lab()` too) are missing whitespace in space-separated values

    https://caniuse.com/css-lch-lab

    div { color: lch(67.5345% 42.5 258.2); }
    

    Returns:

    div { color: lch(67.5345%42.5 258.2); }
    

    Notably, comma separated values for these functions are invalid.

    generator 
    opened by zachleat 2
Releases (v2.3.1)
  • v2.3.1(Dec 14, 2022)

    • Added :host, :host() and :host-context() pseudo class support (#216)
    • Fixed generator, parse and parse-selector entry points by adding the missed NestingSelector node type
    • Removed npm > 7 version requirement (#218)
  • v2.3.0(Nov 30, 2022)

    • Added CSS Nesting support:
      • Added NestingSelector node type for & (a nesting selector) in selectors
      • Added @nest at-rule
      • Changed behaviour for @media inside a Rule to parse its block content as a Declaration first
      • Changed DeclarationList behaviour to follow the rules for Rule's block
    • Added the dimension units introspection & customisation:
      • Added Lexer#units dictionary to provide unit groups (length, angle, etc.) used for matching
      • Changed Lexer's constructor to take into consideration config.units to override default units
      • Extended lexer's dump to contain a units dictionary
    • Bumped mdn-data to 2.0.30
  • v2.2.1(Aug 14, 2022)

  • v2.2.0(Aug 10, 2022)

    • Bumped mdn-data to 2.0.28
    • Added support for CSS wide keywords revert and revert-layer
    • Dropped support for expression() the same way as CSS wide keywords
    • Patched background-clip property definition to match Backgrounds and Borders 4 (#190)
    • Patched content property definition to allow attr() (#201)
    • Fixed definition syntax matching when a comma is expected before a <delim-token>
    • Fixed at-rule validation fail when no prelude is specified and its syntax allows an empty prelude, that's the case for @page at-rule (#191)
    • Added new units according to current state of CSS Values and Units 4: rex, cap, rcap, rch, ic, ric, lh, rlh, vi, vb, sv*, lv*, dv*
    • Added container relative length units from CSS Containment 3: cqw, cqh, cqi, cqb, cqmin, cqmax
    • Removed the vm unit (supposedly, some old IE versions supported this unit instead of vmax)
    • Value definition syntax:
      • Added support for stacked multipliers +# and #? according to spec (#199)
      • Added parsing of a dimension in range definition notations, however, a validation for such ranges is not supported yet (#192)
      • Changed parsing of range definition notation to not omit [-∞,∞] ranges
  • v2.1.0(Feb 27, 2022)

    • Bumped mdn-data to 2.0.27
    • Added module field to package.json
    • Fixed minor issues in CommonJS version
    • Fixed css-tree/utils export (#181)
    • Added css-tree/convertor export
    • Added css-tree/selector-parser export (~27kb when bundled, #183)
    • Reduced bundle size:
      • css-tree/parser 50kb -> 41kb
      • css-tree/generator 46kb -> 23kb
    • Renamed syntaxes into types in css-tree/definition-syntax-data-patch
    • Added parsing support for :is(), :-moz-any(), :-webkit-any() and :where() (#182, #184)
  • v2.0.4(Dec 17, 2021)

    • Extended Node.js support to include ^10
    • Fixed generate() in safe mode to add a whitespace between <dimension-token> and <hash-token>, otherwise some values are broken in IE11, e.g. border properties (#173)
    • Removed allowance for : for an attribute name on AttributeSelector parsing as it does not meet the CSS specs (details)
  • v2.0.3(Dec 15, 2021)

    • Fixed unintended whitespace on generate() in safe mode between a type-selector and an id-selector (e.g. a#id). A regression was introduced in 2.0.2, since IE11 fails on values where a <hash-token> goes after an <ident-token> without a whitespace in the middle, e.g. 1px solid#000. Thus, in one case a space between the <ident-token> and the <hash-token> is required, and in the other, vice versa. Until a better solution is found, a workaround is used on id-selector generation by producing a <delim-token> instead of a <hash-token>.
  • v2.0.2(Dec 10, 2021)

    • Updated width, min-width and max-width syntax definitions
    • Patched counter related syntaxes to match specs until updated in mdn-data
    • Replaced source-map with source-map-js, which reduces install size by ~700KB
    • Fixed calc() function consumption on definition syntax matching
    • Fixed generate() auto-emitting a whitespace in edge cases when the next token starts with a dash (minus)
    • Fixed generate() safe mode to cover more cases for IE11
    • Fixed CommonJS bundling by adding browser files dist/data.cjs and dist/version.cjs
    • Added exports:
      • css-tree/definition-syntax-data
      • css-tree/definition-syntax-data-patch
  • v2.0.1(Dec 4, 2021)

  • v2.0.0(Dec 3, 2021)

    ES2020 syntax and support for ESM

    The source code was refactored to use ES2020 syntax and ES modules by default. However, the CommonJS version of the library is still supported, so the package became a dual module. Using ESM allowed reducing the bundle size from 167Kb down to 164Kb, despite mdn-data growing in size by 11Kb.

    In case only a part of CSSTree's functionality is used (for instance, only parsing), it's now possible to use specific exports like css-tree/parser (see the full list of exports) to reduce the bundle size. As new package mechanics are used, the minimal supported Node.js version was changed to 14.16+.

    No white space nodes on AST anymore

    Previously, white space was preserved on CSS parsing as WhiteSpace nodes with a single space. This was mostly needed to avoid combining CSS tokens into one when generating back into a CSS string. This is no longer necessary, as the generator has been reworked to independently determine when to use spaces in the output. This simplifies analysis and construction of the AST, and also improves performance and memory consumption a bit.

    Some changes have been made to AST construction during parsing. First of all, white space in selectors now produces { type: 'Combinator', name: ' ' } nodes when appropriate. Second, white space surrounding the operators - and + is now replaced with a single space and appended to the operators to preserve the behaviour of expressions in calc() functions. Finally, the only case in which a WhiteSpace node is still created is a custom property declaration with a single whitespace token as the value, since --var: ; and --var:; have different behaviour in terms of CSS.

    Improved generator

    The CSS Syntax Module defines rules for CSS serialization that must "round-trip" with parsing. Starting with this release, CSSTree's generator follows these rules and determines itself when to output a space to avoid unintended combining of CSS tokens.

    The spec rules allow omitting whitespace in most cases. However, some older browsers fail to parse the resulting CSS because they don't follow the spec. For this reason, the generator supports two modes:

    • safe (by default) which adds an extra space in some edge cases;
    • spec which completely follows the spec.
    import { parse, generate } from 'css-tree';
    
    const ast = parse('a { border: calc(1px) solid #ff0000 }');
    
    // safe mode is by default
    // the same as console.log(generate(ast, { mode: 'safe' }));
    console.log(generate(ast));
    // a{border:calc(1px) solid#ff0000}
    
    // spec mode
    console.log(generate(ast, { mode: 'spec' }));
    // a{border:calc(1px)solid#ff0000}
    

    These changes to the generator bring it closer to the pretty print output that will be implemented in future releases.

    Auto encoding and decoding of values

    For string and url values, auto encoding and decoding was implemented. This means you no longer need to do any preprocessing on string or url values before analyzing or transforming them. The most noticeable simplification is with Url nodes:

    // CSSTree 1.x
    csstree.walk(ast, function(node) {
        if (node.type === 'Url') {
            if (node.value.type === 'String') {
                urls.push(node.value.value.substring(1, node.value.value.length - 1));
            } else {
                urls.push(node.value.value);
            }
        }
    });
    
    // CSSTree 2.0
    csstree.walk(ast, function(node) {
        if (node.type === 'Url') {
            urls.push(node.value);
        }
    });
    

    It is worth noting that although in many cases the example above gives the same results for both versions, the CSSTree 1.x solution still lacks decoding of escape sequences, i.e. additional processing of the values is needed.

    Additionally, encode and decode functions for string, url and ident values are available as utils:

    import { string, url, ident } from 'css-tree';
    
    string.decode('"hello\\9  \\"world\\""') // hello\t "world"
    string.decode('\'hello\\9  "world"\'')   // hello\t "world"
    string.encode('hello\t "world"')         // "hello\9  \"world\""
    string.encode('hello\t "world"', true)   // 'hello\9  "world"'
    
    url.decode('url(file\ \(1\).ext)')  // file (1).ext
    url.encode('file (1).ext')          // url(file\ \(1\).ext)
    
    ident.decode('hello\\9 \\ world')   // hello\t world
    ident.encode('hello\t world')       // hello\9 \ world
    

    Changes

    • Package
      • Dropped support for Node.js prior 14.16 (following patch versions changed it to ^10 || ^12.20.0 || ^14.13.0 || >=15.0.0)
      • Converted to ES modules. However, CommonJS is supported as well (dual module)
      • Added exports for standalone parts instead of internal paths usage (use as import * as parser from "css-tree/parser" or require("css-tree/parser")):
        • css-tree/tokenizer
        • css-tree/parser
        • css-tree/walker
        • css-tree/generator
        • css-tree/lexer
        • css-tree/definition-syntax
        • css-tree/utils
      • Changed bundle set to provide dist/csstree.js (an IIFE version with csstree as a global name) and dist/csstree.esm.js (as ES module). Both are minified
      • Bumped mdn-data to 2.0.23
    • Tokenizer
      • Changed tokenize() to take a function as its second argument, which will be called for every token. No stream instance is created when the second argument is omitted.
      • Changed TokenStream#getRawLength() to take as its second parameter a function (rule) that checks a char code to stop scanning
      • Added TokenStream#forEachToken(fn) method
      • Removed TokenStream#skipWS() method
      • Removed TokenStream#getTokenLength() method
    • Parser
      • Moved SyntaxError (custom parser's error class) from root of public API to parser via parse.SyntaxError
      • Removed parseError field in parser's SyntaxError
      • Changed selector parsing to produce { type: 'Combinator', name: ' ' } node instead of WhiteSpace node
      • Removed production of WhiteSpace nodes, with the single exception of a custom property declaration with a single white space token as its value
      • The parser adds a whitespace to + and - operators when a whitespace is present before and/or after the operator
      • Exposed parser's inner configuration as parse.config
      • Added consumeUntilBalanceEnd(), consumeUntilLeftCurlyBracket(), consumeUntilLeftCurlyBracketOrSemicolon(), consumeUntilExclamationMarkOrSemicolon() and consumeUntilSemicolonIncluded() methods to parser's inner API to use with Raw instead of Raw.mode
      • Changed Nth to always consume the of clause when present, so it became more general, moving validation to the lexer
      • Changed String node type to store decoded string value, i.e. with no quotes and escape sequences
      • Changed Url node type to store decoded url value as a string instead of String or Raw node, i.e. with no quotes, escape sequences and url() wrapper
    • Generator
      • The generator now determines itself when a white space is required between emitted tokens
      • Changed chunk() handler to token() (output a single token) and tokenize() (split a string into tokens and output each of them)
      • Added mode option for generate() to specify a mode of token separation: spec or safe (by default)
      • Added emit(token, type, auto) handler as implementation specific token processor
      • Changed Nth to serialize +n as n
      • Added auto-encoding for a string and url tokens on serialization
    • Lexer
      • Removed Lexer#matchDeclaration() method
    • Utils
      • Added ident, string and url helpers to decode/encode corresponding values, e.g. url.decode('url("image.jpg")') === 'image.jpg'
      • List
        • Changed List to be iterable (iterates data)
        • Changed List#first, List#last and List#isEmpty to getters
        • Changed List#getSize() method to List#size getter
        • Removed List#each() and List#eachRight() methods, List#forEach() and List#forEachRight() should be used instead
  • v1.1.3(Mar 31, 2021)

    • Fixed matching on CSS wide keywords for at-rule's prelude and descriptors
    • Added fit-content to the width property patch, as browsers support it as a keyword (nonstandard) while the spec defines it as a function
    • Fixed parsing of a value containing parentheses or brackets when the parseValue option is set to false; in that case !important was included into the value but must not be (#155)
  • v1.1.2(Nov 26, 2020)

  • v1.1.1(Nov 18, 2020)

  • v1.1.0(Nov 18, 2020)

    • Bumped mdn-data to 2.0.14
    • Extended the fork() method to allow appending syntax instead of overriding for types, properties and atrules, e.g. csstree.fork({ types: { color: '| foo | bar' } })
    • Extended lexer API for validation
      • Added Lexer#checkAtruleName(atruleName), Lexer#checkAtrulePrelude(atruleName, prelude), Lexer#checkAtruleDescriptorName(atruleName, descriptorName) and Lexer#checkPropertyName(propertyName)
      • Added Lexer#getAtrule(atruleName, fallbackBasename) method
      • Extended Lexer#getAtrulePrelude() and Lexer#getProperty() methods to take fallbackBasename parameter
      • Improved SyntaxMatchError location details
      • Changed error messages
  • v1.0.1(Nov 18, 2020)

  • v1.0.0(Oct 27, 2020)

    • Added onComment option to parser config
    • Added support for break and skip values in walk() to control traversal
    • Added List#reduce() and List#reduceRight() methods
    • Bumped mdn-data to 2.0.12
    • Exposed version of the lib (i.e. import { version } from 'css-tree')
    • Renamed HexColor node type into Hash
    • Removed element() specific parsing rules
    • Removed dist/default-syntax.json from package
    • Fixed Lexer#dump() to dump atrules syntaxes as well
    • Fixed matching comma separated <urange> list (#135)
  • v1.0.0-alpha.39(Dec 5, 2019)

  • v1.0.0-alpha.38(Nov 25, 2019)

    • Bumped mdn-data to 2.0.6
    • Added initial implementation for at-rule matching via Lexer#matchAtrulePrelude() and Lexer#matchAtruleDescriptor() methods
    • Added -moz-control-character-visibility, -ms-grid-columns, -ms-grid-rows and -ms-hyphenate-limit-last properties to patch (#111, thanks to @life777)
    • Added flow, flow-root and table-caption values to patched display (#112, thanks to @silverwind)
  • v1.0.0-alpha.37(Oct 22, 2019)

  • v1.0.0-alpha.36(Oct 13, 2019)

    • Dropped support for Node < 8
    • Updated dev deps (fixed npm audit issues)
    • Reworked build pipeline
      • Package provides dist/csstree.js and dist/csstree.min.js now (instead of single dist/csstree.js that was a min version)
      • Bundle size (min version) reduced from 191Kb to 158Kb due to some optimisations
    • Definition syntax
      • Renamed grammar into definitionSyntax (named per spec)
      • Added compact option to generate() method to avoid formatting (spaces) when possible
    • Lexer
      • Changed dump() method to produce syntaxes in compact form by default
  • v1.0.0-alpha.35(Oct 7, 2019)

    • Walker
      • Changed implementation to avoid runtime compilation due to CSP issues (see #91, #109)
      • Added find(), findLast() and findAll() methods (e.g. csstree.find(ast, node => node.type === 'ClassSelector'))
  • v1.0.0-alpha.34(Jul 26, 2019)

    • Tokenizer
      • Added isBOM() function
      • Added charCodeCategory() function
      • Removed firstCharOffset() function (use isBOM() instead)
      • Removed CHARCODE dictionary
      • Removed INPUT_STREAM_CODE* dictionaries
    • Lexer
      • Allowed comments in a matching value (they are ignored, like whitespace)
      • Increased iteration count in value matching from 10k up to 15k
      • Fixed missed debugger (#104)
  • v1.0.0-alpha.33(Jul 11, 2019)

  • v1.0.0-alpha.32(Jul 11, 2019)

  • v1.0.0-alpha.31(Jul 31, 2019)

    This release improves syntax matching with new features and some fixes.

    Bracketed range notation

    A couple of months ago, bracketed range notation was added to the Values and Units spec. The notation restricts numeric values to a given range. For example, <integer [0,∞]> denotes non-negative integers, and <number [0,1]> can be used for an alpha value.

    Since the notation is new, it isn't used in the specs yet. However, there is a PR (https://github.com/w3c/csswg-drafts/pull/3894) that will bring it to some of them, and CSSTree is ready for it.

    For now, the notation made it possible to remove <number-zero-one>, <number-one-or-greater> and <positive-integer> from the generic types and to define them with regular grammar instead.

    Low priority type matching

    There are at least two productions that have low priority in matching: they let other productions claim a token first, and claim it themselves only if no other production does. This release introduces a solution for such productions. It's hardcoded at the moment, but can be exposed later if needed (i.e. if more such productions appear).

    The first production is <custom-ident>. The Values and Units spec states:

    When parsing positionally-ambiguous keywords in a property value, a <custom-ident> production can only claim the keyword if no other unfulfilled production can claim it.

    This rule applies to properties like <'animation'>, <'transition'> and <'list-style'>. Previously it was handled in different ways:

    • <'animation'> – not an issue, since <custom-ident> goes last; however, the order of terms could change in the future
    • <'transition'> – there was a patch for <single-transition> that changed the order of terms
    • <'list-style'> – had no fixes and just didn't work in some cases (see #101)

    Now all of these, and the rest of the syntaxes, work as expected.

    The second production is a bit trickier: the "unitless zero" for the <length> production. The spec states:

    ... if a 0 could be parsed as either a <number> or a <length> in a property (such as line-height), it must parse as a <number>.

    This rule applies to properties like <'line-height'> and <'flex'>, and now it works per spec too.

    Changes

    • Bumped mdn/data to 2.0.4 (#99)
    • Lexer
      • Added bracketed range notation support and related refactoring
      • Removed <number-zero-one>, <number-one-or-greater> and <positive-integer> from the generic types. In fact, these types were moved to the patch, since they can now be expressed in regular grammar thanks to the bracketed range notation
      • Added support for multiple token string matching
      • Improved <custom-ident> production matching to claim the keyword only if no other unfulfilled production can claim it (#101)
      • Improved <length> production matching to claim "unitless zero" only if no other unfulfilled production can claim it
      • Changed the lexer's constructor to prevent generic types from being overridden when used
      • Fixed large ||- and &&-group matching; matching continues from the beginning of the group on a term match (#85)
      • Fixed the check for var() occurrences when a value is a string (such values currently can't be matched against a syntax and fail with a specific error that validation tools can use to skip them)
      • Fixed <declaration-value> and <any-value> matching when a value contains a function, parentheses or braces
  • v1.0.0-alpha.30(Jul 3, 2019)

    This release took a long time to arrive, but it was worth the wait: it unlocks new possibilities and paths for further improvement.

    Reworked tokenizer

    CSSTree aims to be as close to the specifications as is reasonable for a source processing tool. It deviates from the specs only where they target user agents (browsers) rather than tools like CSSTree.

    Previously, CSSTree's tokenizer used its own set of token types, selected for performance and for convenience in building an AST. However, this restricted further improvement of the parser, lexer and even generator, since tokens are the basis of CSS. That's not obvious at first glance, but if you dig into the specs you'll find that CSS syntax is described in terms of tokens and their productions, serialization relies on tokens, even var() substitution happens at the token level, and so on. Using its own token type set meant that many rules described in the CSS specs couldn't be implemented as designed; in that sense, the previous tokenizer was too far from the specs.

    In this release, the tokenizer was reworked to use the token type set defined by CSS Syntax Module Level 3. The algorithms described by the spec were adopted in the tokenizer implementation, and the code is annotated with excerpts from the specification. This brought the tokenizer very close to the spec and helped to fix numerous edge cases.

    Current deviations from the CSS Syntax Module Level 3:

    • No input preprocessing currently. This isn't really a problem, since CSS processing tools usually don't preprocess their input. Preprocessing can be added later via an additional option on the tokenizer and parser.
    • No comment removal. According to the spec, the tokenizer should not produce tokens for comments or otherwise preserve them. But comments are useful for source processing tools, so it seems reasonable to keep them as comment tokens. This may change in the future.

    Influence on parser

    Changing the token type set led to significant changes in the parser implementation. The most dramatic are in the AnPlusB and UnicodeRange implementations, since those two microsyntaxes are genuinely hard. Nevertheless, most things became simpler overall. The parser also continues to relax at the parse stage, delegating more syntax checking to the lexer. As a result, some parsing errors no longer occur, so tools using CSSTree have a chance to use the AST even for partially invalid CSS.

    This release doesn't change the AST format. However, the format will certainly change in upcoming releases to be closer to the token type set, which will further reduce parse errors and expand what tools can do.

    Lexer

    The lexer was slightly refactored. The most significant change: syntax matching now relies on real CSS tokens produced by the tokenizer rather than tokens generated from the AST. In other words, the AST is translated to a string and then split into tokens by the tokenizer. The consequences:

    • Since the AST is no longer used directly to produce tokens for syntax matching, it became completely optional.
    • A string can be used as the value for matching (e.g. lexer.matchProperty('border', 'red 1px dotted')). Parsing into an AST is no longer required, which is good news for tools that use CSSTree for validation but have a different AST format, or no AST at all.
    • Types that use tokens in their syntax can now be used for matching. Such syntaxes were recently omitted from mdn/data by CSSTree's patch; fortunately, that is no longer needed (difference with mdn/data).

    Work on the lexer isn't complete yet. This version removes some restrictions and is ready for further improvements such as at-rule and selector matching, better support for mathematical expressions (calc() and friends), attr()/toggle()/var() fallback checking, multiple errors, suggestions, better matching performance and so on.

    Change log (commits)

    • Bumped mdn/data to ~2.0.3
      • Removed the type removals from mdn/data that were needed due to a lack of some generic types and specific lexer restrictions (the lexer was reworked, see below)
      • Reduced and updated patches
    • Tokenizer
      • Reworked the tokenizer itself to comply with CSS Syntax Module Level 3
      • Split the Tokenizer class into several abstractions:
        • Added TokenStream class
        • Added OffsetToLocation class
        • Added tokenize() function that creates TokenStream instance for given string or updates a TokenStream instance passed as second parameter
        • Removed Tokenizer class
      • Removed Raw token type
      • Renamed Identifier token type to Ident
      • Added token types: Hash, BadString, BadUrl, Delim, Percentage, Dimension, Colon, Semicolon, Comma, LeftSquareBracket, RightSquareBracket, LeftParenthesis, RightParenthesis, LeftCurlyBracket, RightCurlyBracket
      • Replaced the Punctuator token type with Delim, which excludes characters that have their own token type such as Colon, Semicolon etc.
      • Removed the findCommentEnd, findStringEnd, findDecimalNumberEnd, findNumberEnd, findEscapeEnd, findIdentifierEnd and findUrlRawEnd helper functions
      • Removed SYMBOL_TYPE, PUNCTUATION and STOP_URL_RAW dictionaries
      • Added isDigit, isHexDigit, isUppercaseLetter, isLowercaseLetter, isLetter, isNonAscii, isNameStart, isName, isNonPrintable, isNewline, isWhiteSpace, isValidEscape, isIdentifierStart, isNumberStart, consumeEscaped, consumeName, consumeNumber and consumeBadUrlRemnants helper functions
    • Parser
      • Changed parsing algorithms to work with new token type set
      • Changed HexColor consumption to relax value checking, i.e. a value is now a sequence of one or more name characters
      • Added & as a property hack
      • Relaxed var() parsing to only check that the first argument is an identifier (not a custom property name as before)
    • Lexer
      • Reworked syntax matching to rely on the token set only (an AST is now optional)
      • Extended the Lexer#match(), Lexer#matchType() and Lexer#matchProperty() methods to take a string as a value, in addition to an AST
      • Extended the Lexer#match() method to take a string as a syntax, in addition to a syntax descriptor
      • Reworked generic types:
        • Removed <attr()>, <url> (moved to patch) and <progid> types
        • Added types:
          • Related to token types: <ident-token>, <function-token>, <at-keyword-token>, <hash-token>, <string-token>, <bad-string-token>, <url-token>, <bad-url-token>, <delim-token>, <number-token>, <percentage-token>, <dimension-token>, <whitespace-token>, <CDO-token>, <CDC-token>, <colon-token>, <semicolon-token>, <comma-token>, <[-token>, <]-token>, <(-token>, <)-token>, <{-token> and <}-token>
          • Complex types: <an-plus-b>, <urange>, <custom-property-name>, <declaration-value>, <any-value> and <zero>
        • Renamed <unicode-range> to <urange> as per spec
        • Renamed <expression> (IE legacy extension) to <-ms-legacy-expression>; it may be removed in future releases
  • v1.0.0-alpha.29(May 30, 2018)

    A brand new syntax matching

    This release brings a brand new syntax matching approach. Syntax matching is an important feature that allows CSSTree to assign a meaning to each component in a declaration value, e.g. which component is a color, which is a length, and so on. You can see an example of a matching result on CSSTree's syntax reference page:

    Example of syntax matching result

    Syntax matching is now based on CSS tokens and uses a state machine approach, which fixes all the problems it had before (see https://github.com/csstree/csstree/issues/67 for the list of issues).

    Token-based matching

    Previously, syntax matching was based on AST nodes. While it's possible to do syntax matching that way, it has several disadvantages:

    • Synchronising traversal of the CSS parsing result (AST) and the syntax description tree is quite complicated:
      • The two trees represent different things: one node type set for the CSS parsing result and another for the syntax description tree
      • Some AST nodes consist of several tokens and contain children
    • Some AST nodes don't contain characters that appear in the output when the AST is translated to a string. For instance, a Function node contains a function name and a list of children, but it also produces parentheses that aren't stored in the AST. This required many hacks and workarounds, and even then the approach didn't work for nodes like Brackets. It also forced the matching algorithm to know a lot about node types and their features.

    Starting with this release, the AST (CSS parse result) is converted to a token stream before matching (using CSSTree's generator with a special decorator function). The syntax description tree is also converted into a so-called match graph (see details below). These transformations align both trees to work in the same terms: CSS tokens.

    This change makes the matching algorithm much simpler. It now knows nothing about the AST structure, and the hacks and workarounds were removed. Moreover, syntaxes like <line-names> (which contains brackets) and <calc()> (which contains operators in nested syntaxes) can now be matched; previously, syntax matching failed for them.

    Updated syntax AST format

    Since syntax matching moved from AST nodes to CSS tokens, the syntax description tree format also changed. For instance, functions are now represented as a token sequence. This allows handling syntaxes that contain a group with several function tokens inside, like this one:

    <color-adjuster> =
        [red( | green( | blue( | alpha( | a(] ['+' | '-']? [<number> | <percentage>] ) |
        [red( | green( | blue( | alpha( | a(] '*' <percentage> ) |
        ...
    

    Although the <color-mod()> syntax was recently removed from CSS Color Module Level 4, such syntaxes can appear in the future, since they are valid (even if they look odd).

    As a result of the format changes, all syntaxes in mdn/data can now be parsed, even those invalid from the standpoint of the CSS Values and Units Module Level 3 spec. Along the way, some errors in syntaxes were found and fixed (https://github.com/mdn/data/pull/221, https://github.com/mdn/data/pull/226), and some suggestions on syntax optimisation were made (https://github.com/mdn/data/pull/223, https://github.com/mdn/data/issues/230).

    Introducing Match graph

    As mentioned above, a syntax tree is now transformed into a match graph. This happens on the first match for a syntax, and the graph is then reused. A match graph is a graph of simple actions (states) and the transitions between them. Complicated things, like multipliers, are translated into a set of nodes and edges. You can explore the match graph built for any syntax on CSSTree's syntax reference page, e.g. the match graph for <'animation-name'>:


    There were some challenges during implementation, the most notable of which were:

    • &&- and ||- groups. This was a technical blocker that held up the move to a match graph. Finally, a solution was found: split a group into smaller ones by removing one term at a time. For example, a && b && c can be represented as follows (pseudocode):
    if match a
      then [b && c]
      else if match b
        then [a && c]
        else if match c
          then [a && b]
          else MISMATCH
    

    The group size is thus reduced by one at each step, and the smaller groups are processed the same way until a group consists of a single term.

    a && b
    =
    if match a
      then if match b
        then MATCH
        else MISMATCH
      else if match b
        then if match a
          then MATCH
          else MISMATCH
        else MISMATCH
    

    This works fine, but only for small groups, since it produces at least N! (factorial) nodes, where N is the number of terms in a group. Fortunately, not many syntaxes contain an &&- or ||- group with a large number of terms. However, the font-variant syntax contains a group of 20 terms, which means at least 2,432,902,008,176,640,000 nodes in the graph: far too many objects to create within memory limits. So an alternative solution was introduced for groups with more than 5 terms; it uses a special buffer and iterates over the terms in a loop. The solution isn't ideal, but there are just 9 such groups (with 6 or more terms) across all syntaxes, so it should be OK for now.

    • A comma. This turned out to be a tough nut to crack because of its specific rules. For example, given a syntax like this:
    a?, b?, c?
    

    It can match a, b, c, a, c, b, b, c and so on, but input like , b, c, a, , c or a, is not allowed. In other words, a comma must not hang and must not be followed by another comma. Furthermore, when a comma is matched against the input, it should report a positive match even when there is no comma token in the input. This was a blocker that could have cancelled the whole approach.

    Nevertheless, the problem was solved in an elegant way, by checking adjacent tokens against several patterns. It's the most non-trivial part of the new syntax matching: a few lines of code that work only in concert with the rest of the implementation, so it may look like magic.

    Using state machine

    Another improvement in syntax matching is the replacement of a recursion-based algorithm with a state machine approach, which makes it possible to check all alternatives during matching. Previously, if nothing matched along a chosen path, the algorithm simply exited with a mismatch result. The new algorithm returns to a branching point and chooses an alternative path when possible. This fixes the following:

    <bg-position> =
        [ left | center | right | top | bottom | <length-percentage> ] |
        [ left | center | right | <length-percentage> ] [ top | center | bottom | <length-percentage> ] |
        [ center | [ left | right ] <length-percentage>? ] && [ center | [ top | bottom ] <length-percentage>? ]
    

    This syntax didn't work before, since it defines the shortest form first and matching fell into that path with no chance to try an alternative; only reversing the order of the groups made it work with the old algorithm.

    Another example is a new syntax for <rgb()>:

    rgb() = rgb( <percentage>{3} [ / <alpha-value> ]? ) |
            rgb( <number>{3} [ / <alpha-value> ]? ) |
            rgb( <percentage>#{3} , <alpha-value>? ) |
            rgb( <number>#{3} , <alpha-value>? )
    

    The old algorithm didn't exit from a function's content once it had matched a function, and couldn't handle such syntaxes. Making matching work for syntaxes like this one required an adaptation (a patch as a workaround). Patches are no longer required.

    The new approach also handles syntaxes that are incompatible with greedy algorithms. For instance, the syntax of composes (CSS Modules) is defined as <custom-ident>+ from <string>, and the old matching algorithm failed on it because from is a valid value for <custom-ident> and gets captured by <custom-ident>+ with no alternatives. The new algorithm is not greedy: on the first try it takes the minimum number of tokens the syntax allows, and increases that count where possible on each return to a branching point. Syntaxes like composes can now be matched as well.

    The state machine approach brings other benefits, such as precise error locations. Previously, the location of a problem could be confusing:

    SyntaxMatchError: Mismatch
      syntax: ...
       value: rgb(1,2)
      ------------^
    

    And now it's more helpful:

    SyntaxMatchError: Mismatch
      syntax: ...
       value: rgb(1,2)
      ---------------^
    

    Further work on syntax matching can improve error handling and perhaps provide some sort of suggestions.

    Performance

    The new syntax matching approach requires more memory and time, because of the AST-to-token-stream transformation and the checking of all possible alternatives. However, the new approach is more effective in itself and has room for further optimisation. It usually takes the same time or up to ~50% more (depending on the syntax and the value being matched) compared with the previous algorithm, so that's not a big deal.

    The main goal of this release was to make it all work, so not every possible optimisation was implemented; more will come in future releases.

    Other changes

    • Lexer
      • Syntax matching was completely reworked. It's now token-based and uses a state machine. The public API has not changed, but some internal data structures have. The most significant change is in the syntax match result tree structure, which became token-based instead of node-based.
      • Grammar
        • Changed grammar tree format:
          • Added Token node type to represent a single code point (<delim-token>)
          • Added Multiplier that wraps a single node (term property)
          • Added AtKeyword to represent <at-keyword-token>
          • Removed the Slash and Percent node types; they are replaced by nodes of the Token type
          • Changed Function to represent <function-token> with no children
          • Removed multiplier property from Group
        • Changed generate() method:
          • The method now takes options as the second argument (generate(node, forceBraces, decorator) -> generate(node, options)). Two options are supported: forceBraces and decorator
          • When the second parameter is a function, it's treated as the decorator option value, i.e. generate(node, fn) -> generate(node, { decorate: fn })
          • The decorate function is invoked with an additional parameter, a reference to the node
    • Tokenizer
      • Renamed Atrule const to AtKeyword
  • v1.0.0-alpha.28(Feb 19, 2018)

    • Renamed lexer.grammar.translate() method into generate()
    • Fixed <'-webkit-font-smoothing'> and <'-moz-osx-font-smoothing'> syntaxes (#75)
    • Added vendor keywords for <'overflow'> property syntax (#76)
    • Pinned mdn-data to ~1.1.0 and fixed issues with some updated property syntaxes
  • v1.0.0-alpha.27(Jan 14, 2018)

    Most of the changes in this release relate to the rework of the generator and walker. Instead of a multitude of methods, there is now a single method for each: generate() for the generator and walk() for the walker. Both take two arguments, ast and options (optional for the generator). This makes the API much simpler (see API details in Translate AST to string and AST traversal):


    The List class API was also extended, and some utility methods such as keyword() and property() were changed to be more useful.

    Generator

    • Changed the invocation of nodes' generate() methods; each now takes a node as its single argument plus a context (i.e. this) with the methods chunk(), node() and children()
    • Renamed translate() to generate() and changed it to take an options argument
    • Removed translateMarkup(ast, enter, leave) method, use generate(ast, { decorator: (handlers) => { ... }}) instead
    • Removed translateWithSourceMap(ast), use generate(ast, { sourceMap: true }) instead
    • Changed the generator to support children given as an array

    Walker

    • Changed walk() to take an options argument instead of a handler, with enter, leave, visit and reverse options (walk(ast, fn) still works and is equivalent to walk(ast, { enter: fn }))
    • Removed walkUp(ast, fn), use walk(ast, { leave: fn }) instead
    • Removed walkRules(ast, fn), use walk(ast, { visit: 'Rule', enter: fn }) instead
    • Removed walkRulesRight(ast, fn), use walk(ast, { visit: 'Rule', reverse: true, enter: fn }) instead
    • Removed walkDeclarations(ast, fn), use walk(ast, { visit: 'Declaration', enter: fn }) instead
    • Changed the walker to support children given as an array in most cases (reverse: true fails on arrays, since they have no forEachRight() method)

    Misc

    • List
      • Added List#forEach() method
      • Added List#forEachRight() method
      • Added List#filter() method
      • Changed List#map() method to return a List instance instead of Array
      • Added List#push() method, similar to List#appendData() but returns nothing
      • Added List#pop() method
      • Added List#unshift() method, similar to List#prependData() but returns nothing
      • Added List#shift() method
      • Added List#prependList() method
      • Changed the List#insert(), List#insertData(), List#appendList() and List#insertList() methods to return the list the operation was performed on
    • Changed keyword() method
      • Changed the name field to include the vendor prefix
      • Added a basename field containing the name without the vendor prefix
      • Added a custom field that is true when the keyword is a custom property reference
    • Changed property() method
      • Changed the name field to include the vendor prefix
      • Added a basename field containing the name without any prefixes, i.e. without a hack or vendor prefix
    • Added vendorPrefix() method
    • Added isCustomProperty() method
  • v1.0.0-alpha.26(Nov 10, 2017)

    This journey started a couple of months ago with 1.0.0-alpha20, which added tolerant parsing mode as an experimental feature behind the tolerant option. Over 5 releases the feature was tested on various data, and numerous errors and edge cases were fixed. The last necessary changes were made in this release, making the feature ready for use. So, I'm proud to say, the CSSTree parser is now tolerant to errors by default.

    That's a significant change, and it complies with CSS Syntax Module Level 3, which says:

    When errors occur in CSS, the parser attempts to recover gracefully, throwing away only the minimum amount of content before returning to parsing as normal. This is because errors aren’t always mistakes - new syntax looks like an error to an old parser, and it’s useful to be able to add new syntax to the language without worrying about stylesheets that include it being completely broken in older UAs.

    In other words, a spec-compliant CSS parser should be able to parse any text as CSS with no errors. CSSTree is now such a parser! 🎉

    The only point where the CSSTree parser departs from the specification is that it doesn't throw away bad content, but wraps it in Raw nodes, which allows processing it later. This discrepancy exists because the specification is written for user agents, which extract meaning from CSS: incomprehensible parts simply carry no meaning for them and can be ignored. CSSTree has a wider range of tasks, most of them related to processing source code, such as locating errors, error correction, preprocessing and so on.

    Tolerant mode means you don't need to wrap csstree.parse() in a try/catch. To collect parse errors, set an onParseError handler in the parse options:

    var csstree = require('css-tree');
    
    csstree.parse('I must! be tolerant to errors', {
        onParseError: function(e) {
            console.error(e.formattedMessage);
        }
    });
    // Parse error: Unexpected input
    //     1 |I must! be tolerant to errors
    // -------------^
    // Parse error: LeftCurlyBracket is expected
    //     1 |I must! be tolerant to errors
    // ------------------------------------^
    

    If you need the old parser behaviour, just throw an exception inside the onParseError handler; that immediately stops parsing:

    try {
        csstree.parse('I must! be tolerant to errors', {
            onParseError: function(e) {
                throw e;
            }
        });
    } catch(e) {
        console.error(e.formattedMessage);
    }
    // Parse error: Unexpected input
    //     1 |I must! be tolerant to errors
    // -------------^
    

    Changes

    • Tokenizer
      • Added Tokenizer#isBalanceEdge() method
      • Removed Tokenizer.endsWith() method
    • Parser
      • Made the parser tolerant to errors by default
      • Removed tolerant parser option (no parsing modes anymore)
      • Removed property parser option (a value parsing does not depend on property name anymore)
      • Canceled the error for a hanging semicolon in a block
      • Canceled error for unclosed Brackets, Function and Parentheses when EOF is reached
      • Fixed error when prelude ends with a comment for at-rules with custom prelude consumer
      • Relaxed at-rule parsing:
        • Canceled error when EOF is reached after a prelude
        • Canceled error for an at-rule with custom block consumer when at-rule has no block (just don't apply consumer in that case)
        • Canceled error on at-rule parsing when it occurs outside prelude or block (at-rule is converting to Raw node)
        • Allowed any at-rule to have a prelude and a block, even if that's invalid per the at-rule's syntax (responsibility for this check moved to the lexer, since it's possible to construct an AST with such errors)
      • Made a declaration value a safe parsing point (i.e. error on value parsing lead to a value is turning into Raw node, not a declaration as before)
      • Excluded surrounding white spaces and comments from a Raw node that represents a declaration value
      • Changed Value parse handler to return a node only with type Value (previously it returned a Raw node in some cases)
      • Fixed an issue where onParseError() was not invoked for parse errors in selectors and declaration values
      • Changed onParseError() usage so that parsing stops if the handler throws an exception
    • Lexer
      • Changed grammar.walk() to invoke the passed handler on entering a node rather than on leaving it
      • Improved grammar.walk() to take a walk handler pair as an object, i.e. walk(node, { enter: fn, leave: fn })
      • Changed Lexer#match*() methods to take a node of any type, but with a children field
      • Added Lexer#match(syntax, node) method
      • Fixed the Lexer#matchType() method to stop returning a positive result for CSS-wide keywords
BootstrapVue provides one of the most comprehensive implementations of Bootstrap v4 for Vue.js. With extensive and automated WAI-ARIA accessibility markup.

With more than 85 components, over 45 available plugins, several directives, and 1000+ icons, BootstrapVue provides one of the most comprehensive impl

BootstrapVue 14.2k Jan 4, 2023
Search for food, recepies, and full detailed information on how to prepare them.

Foodipy | JavaScript Capstone This is a group project being built in our second module of our curriculum at microverse. its a web application for list

Alexander Oguzie-Ibeh 10 Mar 24, 2022
⛰ "core" is the core component package of vodyani, providing easy-to-use methods and AOP implementations.

Vodyani core ⛰ "core" is the core component package of vodyani, providing easy-to-use methods and AOP implementations. Installation npm install @vodya

Vodyani 25 Oct 18, 2022
This repository contains the Solidity smart contract of Enso, a detailed list of features and deployment instructions.

Enso NFT Smart Contract This repository contains the Solidity smart contract of Enso, a detailed list of features and deployment instructions. We stro

enso NFT 3 Apr 24, 2022