This commit is contained in:
2025-05-09 05:30:08 +02:00
parent 7bb10e7df4
commit 73367bad9e
5322 changed files with 1266973 additions and 313 deletions

View File

@@ -0,0 +1,282 @@
# JSON5 – JSON for Humans
[![Build Status](https://app.travis-ci.com/json5/json5.svg?branch=main)][Build Status]
[![Coverage Status](https://coveralls.io/repos/github/json5/json5/badge.svg)][Coverage Status]
JSON5 is an extension to the popular [JSON] file format that aims to be
easier to **write and maintain _by hand_ (e.g. for config files)**.
It is _not intended_ to be used for machine-to-machine communication.
(Keep using JSON or other file formats for that. 🙂)
JSON5 was started in 2012, and as of 2022, now gets **[>65M downloads/week](https://www.npmjs.com/package/json5)**,
ranks in the **[top 0.1%](https://gist.github.com/anvaka/8e8fa57c7ee1350e3491)** of the most depended-upon packages on npm,
and has been adopted by major projects like
**[Chromium](https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/platform/runtime_enabled_features.json5;drc=5de823b36e68fd99009a29281b17bc3a1d6b329c),
[Next.js](https://github.com/vercel/next.js/blob/b88f20c90bf4659b8ad5cb2a27956005eac2c7e8/packages/next/lib/find-config.ts#L43-L46),
[Babel](https://babeljs.io/docs/en/config-files#supported-file-extensions),
[Retool](https://community.retool.com/t/i-am-attempting-to-append-several-text-fields-to-a-google-sheet-but-receiving-a-json5-invalid-character-error/7626),
[WebStorm](https://www.jetbrains.com/help/webstorm/json.html),
and [more](https://github.com/json5/json5/wiki/In-the-Wild)**.
It's also natively supported on **[Apple platforms](https://developer.apple.com/documentation/foundation/jsondecoder/3766916-allowsjson5)**
like **macOS** and **iOS**.
Formally, the **[JSON5 Data Interchange Format](https://spec.json5.org/)** is a superset of JSON
(so valid JSON files will always be valid JSON5 files)
that expands its syntax to include some productions from [ECMAScript 5.1] (ES5).
It's also a strict _subset_ of ES5, so valid JSON5 files will always be valid ES5.
This JavaScript library is a reference implementation for JSON5 parsing and serialization,
and is directly used in many of the popular projects mentioned above
(where e.g. extreme performance isn't necessary),
but others have created [many other libraries](https://github.com/json5/json5/wiki/In-the-Wild)
across many other platforms.
[Build Status]: https://app.travis-ci.com/json5/json5
[Coverage Status]: https://coveralls.io/github/json5/json5
[JSON]: https://tools.ietf.org/html/rfc7159
[ECMAScript 5.1]: https://www.ecma-international.org/ecma-262/5.1/
## Summary of Features
The following ECMAScript 5.1 features, which are not supported in JSON, have
been extended to JSON5.
### Objects
- Object keys may be an ECMAScript 5.1 _[IdentifierName]_.
- Objects may have a single trailing comma.
### Arrays
- Arrays may have a single trailing comma.
### Strings
- Strings may be single quoted.
- Strings may span multiple lines by escaping new line characters.
- Strings may include character escapes.
### Numbers
- Numbers may be hexadecimal.
- Numbers may have a leading or trailing decimal point.
- Numbers may be [IEEE 754] positive infinity, negative infinity, and NaN.
- Numbers may begin with an explicit plus sign.
### Comments
- Single and multi-line comments are allowed.
### White Space
- Additional white space characters are allowed.
[IdentifierName]: https://www.ecma-international.org/ecma-262/5.1/#sec-7.6
[IEEE 754]: http://ieeexplore.ieee.org/servlet/opac?punumber=4610933
## Example
Kitchen-sink example:
```js
{
  // comments
  unquoted: 'and you can quote me on that',
  singleQuotes: 'I can use "double quotes" here',
  lineBreaks: "Look, Mom! \
No \\n's!",
  hexadecimal: 0xdecaf,
  leadingDecimalPoint: .8675309, andTrailing: 8675309.,
  positiveSign: +1,
  trailingComma: 'in objects', andIn: ['arrays',],
  "backwardsCompatible": "with JSON",
}
```
A more real-world example is [this config file](https://github.com/chromium/chromium/blob/feb3c9f670515edf9a88f185301cbd7794ee3e52/third_party/blink/renderer/platform/runtime_enabled_features.json5)
from the Chromium/Blink project.
## Specification
For a detailed explanation of the JSON5 format, please read the [official
specification](https://json5.github.io/json5-spec/).
## Installation and Usage
### Node.js
```sh
npm install json5
```
#### CommonJS
```js
const JSON5 = require('json5')
```
#### Modules
```js
import JSON5 from 'json5'
```
### Browsers
#### UMD
```html
<!-- This will create a global `JSON5` variable. -->
<script src="https://unpkg.com/json5@2/dist/index.min.js"></script>
```
#### Modules
```html
<script type="module">
import JSON5 from 'https://unpkg.com/json5@2/dist/index.min.mjs'
</script>
```
## API
The JSON5 API is compatible with the [JSON API].
[JSON API]:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON
### JSON5.parse()
Parses a JSON5 string, constructing the JavaScript value or object described by
the string. An optional reviver function can be provided to perform a
transformation on the resulting object before it is returned.
#### Syntax
    JSON5.parse(text[, reviver])
#### Parameters
- `text`: The string to parse as JSON5.
- `reviver`: If a function, this prescribes how the value originally produced by
parsing is transformed, before being returned.
#### Return value
The object corresponding to the given JSON5 text.
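For instance, a minimal sketch of both arguments (the config text and key names below are made up for illustration):

```js
const JSON5 = require('json5')

// Any valid JSON5 works here: unquoted keys, comments, trailing commas, leading decimal points.
const text = "{ retries: 3, /* inline comment */ timeout: .5, }"

// As with JSON.parse, the optional reviver visits every key/value pair.
const config = JSON5.parse(text, (key, value) =>
  key === 'timeout' ? value * 1000 : value
)

console.log(config) // { retries: 3, timeout: 500 }
```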
### JSON5.stringify()
Converts a JavaScript value to a JSON5 string, optionally replacing values if a
replacer function is specified, or optionally including only the specified
properties if a replacer array is specified.
#### Syntax
    JSON5.stringify(value[, replacer[, space]])
    JSON5.stringify(value[, options])
#### Parameters
- `value`: The value to convert to a JSON5 string.
- `replacer`: A function that alters the behavior of the stringification
process, or an array of String and Number objects that serve as a whitelist
for selecting/filtering the properties of the value object to be included in
the JSON5 string. If this value is null or not provided, all properties of the
object are included in the resulting JSON5 string.
- `space`: A String or Number object that's used to insert white space into the
output JSON5 string for readability purposes. If this is a Number, it
indicates the number of space characters to use as white space; this number is
capped at 10 (if it is greater, the value is just 10). Values less than 1
indicate that no space should be used. If this is a String, the string (or the
first 10 characters of the string, if it's longer than that) is used as white
space. If this parameter is not provided (or is null), no white space is used.
If white space is used, trailing commas will be used in objects and arrays.
- `options`: An object with the following properties:
- `replacer`: Same as the `replacer` parameter.
- `space`: Same as the `space` parameter.
- `quote`: A String representing the quote character to use when serializing
strings.
#### Return value
A JSON5 string representing the value.
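As a small sketch of both calling forms (the object contents are made up; the exact output follows the rules above):

```js
const JSON5 = require('json5')

const data = { name: "O'Brien", tags: ['a', 'b'], debug: undefined }

// Classic JSON-style arguments: no replacer, two-space indentation.
// Keys that are valid identifiers are emitted unquoted, and `undefined`
// properties are dropped, as with JSON.stringify.
console.log(JSON5.stringify(data, null, 2))

// Options-object form: same indentation, but serialize strings with double quotes.
console.log(JSON5.stringify(data, { space: 2, quote: '"' }))
```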
### Node.js `require()` JSON5 files
When using Node.js, you can `require()` JSON5 files by adding the following
statement.
```js
require('json5/lib/register')
```
Then you can load a JSON5 file with a Node.js `require()` statement. For
example:
```js
const config = require('./config.json5')
```
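For reference, `config.json5` itself might look like this (the settings are purely illustrative):

```json5
{
  // Connection settings
  host: 'localhost',
  port: 5432,
  useTLS: false,
  retries: 3,  // the trailing comma is allowed in JSON5
}
```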
## CLI
Since JSON is more widely used than JSON5, this package includes a CLI for
converting JSON5 to JSON and for validating the syntax of JSON5 documents.
### Installation
```sh
npm install --global json5
```
### Usage
```sh
json5 [options] <file>
```
If `<file>` is not provided, then STDIN is used.
#### Options:
- `-s`, `--space`: The number of spaces to indent or `t` for tabs
- `-o`, `--out-file [file]`: Output to the specified file, otherwise STDOUT
- `-v`, `--validate`: Validate JSON5 but do not output JSON
- `-V`, `--version`: Output the version number
- `-h`, `--help`: Output usage information
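Putting these together, for example (the file names are placeholders):

```sh
# Convert a JSON5 config to JSON with 2-space indentation.
json5 --space 2 --out-file config.json config.json5

# Only validate the syntax; no JSON is written.
json5 --validate config.json5
```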
## Contributing
### Development
```sh
git clone https://github.com/json5/json5
cd json5
npm install
```
When contributing code, please write relevant tests and run `npm test` and `npm
run lint` before submitting pull requests. Please use an editor that supports
[EditorConfig](http://editorconfig.org/).
### Issues
To report bugs or request features regarding the JSON5 **data format**,
please submit an issue to the official
**[_specification_ repository](https://github.com/json5/json5-spec)**.
Note that we will never add any features that make JSON5 incompatible with ES5;
that compatibility is a fundamental premise of JSON5.
To report bugs or request features regarding this **JavaScript implementation**
of JSON5, please submit an issue to **_this_ repository**.
### Security Vulnerabilities and Disclosures
To report a security vulnerability, please follow the guidelines
described in our [security policy](./SECURITY.md).
## License
MIT. See [LICENSE.md](./LICENSE.md) for details.
## Credits
[Aseem Kishore](https://github.com/aseemk) founded this project.
He wrote a [blog post](https://aseemk.substack.com/p/ignore-the-f-ing-haters-json5)
about the journey and lessons learned 10 years in.
[Michael Bolin](http://bolinfest.com/) independently arrived at and published
some of these same ideas with awesome explanations and detail. Recommended
reading: [Suggested Improvements to JSON](http://bolinfest.com/essays/json.html).
[Douglas Crockford](http://www.crockford.com/) of course designed and built
JSON, but his state machine diagrams on the [JSON website](http://json.org/), as
cheesy as it may sound, gave us motivation and confidence that building a new
parser to implement these ideas was within reach! The original
implementation of JSON5 was also modeled directly off of Doug's open-source
[json_parse.js] parser. We're grateful for that clean and well-documented
code.
[json_parse.js]:
https://github.com/douglascrockford/JSON-js/blob/03157639c7a7cddd2e9f032537f346f1a87c0f6d/json_parse.js
[Max Nanasy](https://github.com/MaxNanasy) has been an early and prolific
supporter, contributing multiple patches and ideas.
[Andrew Eisenberg](https://github.com/aeisenberg) contributed the original
`stringify` method.
[Jordan Tucker](https://github.com/jordanbtucker) has aligned JSON5 more closely
with ES5, wrote the official JSON5 specification, completely rewrote the
codebase from the ground up, and is actively maintaining this project.

View File

@@ -0,0 +1 @@
{"version":3,"names":["_caching","require","fs","_fs2","data","makeStaticFileCache","fn","makeStrongCache","filepath","cache","cached","invalidate","fileMtime","readFile","nodeFs","existsSync","statSync","mtime","e","code"],"sources":["../../../src/config/files/utils.ts"],"sourcesContent":["import type { Handler } from \"gensync\";\n\nimport { makeStrongCache } from \"../caching.ts\";\nimport type { CacheConfigurator } from \"../caching.ts\";\nimport * as fs from \"../../gensync-utils/fs.ts\";\nimport nodeFs from \"fs\";\n\nexport function makeStaticFileCache<T>(\n fn: (filepath: string, contents: string) => T,\n) {\n return makeStrongCache(function* (\n filepath: string,\n cache: CacheConfigurator<void>,\n ): Handler<null | T> {\n const cached = cache.invalidate(() => fileMtime(filepath));\n\n if (cached === null) {\n return null;\n }\n\n return fn(filepath, yield* fs.readFile(filepath, \"utf8\"));\n });\n}\n\nfunction fileMtime(filepath: string): number | null {\n if (!nodeFs.existsSync(filepath)) return null;\n\n try {\n return +nodeFs.statSync(filepath).mtime;\n } catch (e) {\n if (e.code !== \"ENOENT\" && e.code !== \"ENOTDIR\") throw e;\n }\n\n return null;\n}\n"],"mappings":";;;;;;AAEA,IAAAA,QAAA,GAAAC,OAAA;AAEA,IAAAC,EAAA,GAAAD,OAAA;AACA,SAAAE,KAAA;EAAA,MAAAC,IAAA,GAAAH,OAAA;EAAAE,IAAA,YAAAA,CAAA;IAAA,OAAAC,IAAA;EAAA;EAAA,OAAAA,IAAA;AAAA;AAEO,SAASC,mBAAmBA,CACjCC,EAA6C,EAC7C;EACA,OAAO,IAAAC,wBAAe,EAAC,WACrBC,QAAgB,EAChBC,KAA8B,EACX;IACnB,MAAMC,MAAM,GAAGD,KAAK,CAACE,UAAU,CAAC,MAAMC,SAAS,CAACJ,QAAQ,CAAC,CAAC;IAE1D,IAAIE,MAAM,KAAK,IAAI,EAAE;MACnB,OAAO,IAAI;IACb;IAEA,OAAOJ,EAAE,CAACE,QAAQ,EAAE,OAAON,EAAE,CAACW,QAAQ,CAACL,QAAQ,EAAE,MAAM,CAAC,CAAC;EAC3D,CAAC,CAAC;AACJ;AAEA,SAASI,SAASA,CAACJ,QAAgB,EAAiB;EAClD,IAAI,CAACM,KAAKA,CAAC,CAACC,UAAU,CAACP,QAAQ,CAAC,EAAE,OAAO,IAAI;EAE7C,IAAI;IACF,OAAO,CAACM,KAAKA,CAAC,CAACE,QAAQ,CAACR,QAAQ,CAAC,CAACS,KAAK;EACzC,CAAC,CAAC,OAAOC,CAAC,EAAE;IACV,IAAIA,CAAC,CAACC,IAAI,KAAK,QAAQ,IAAID,CAAC,CAACC,IAAI,KAAK,SAAS,EAAE,MAAMD,CAAC;EAC1D;EAEA,OAAO,IAAI;AACb;AAAC","ignoreList":[]}

View File

@@ -0,0 +1,4 @@
#! /usr/bin/env node
var rc = require('./index')
console.log(JSON.stringify(rc(process.argv[2]), false, 2))

View File

@@ -0,0 +1,19 @@
# @babel/types
> Babel Types is a Lodash-esque utility library for AST nodes
See our website [@babel/types](https://babeljs.io/docs/babel-types) for more information or the [issues](https://github.com/babel/babel/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22pkg%3A%20types%22+is%3Aopen) associated with this package.
## Install
Using npm:
```sh
npm install --save-dev @babel/types
```
or using yarn:
```sh
yarn add @babel/types --dev
```
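Once installed, usage typically looks like the sketch below; the builders and predicates shown (`identifier`, `numericLiteral`, `isIdentifier`, …) are standard `@babel/types` exports, and the snippet is illustrative rather than part of this README:

```js
import * as t from "@babel/types";

// Build the AST for `const answer = 42;` from node builders,
// then check the resulting nodes with the matching predicates.
const decl = t.variableDeclaration("const", [
  t.variableDeclarator(t.identifier("answer"), t.numericLiteral(42)),
]);

console.log(t.isVariableDeclaration(decl)); // true
console.log(t.isIdentifier(decl.declarations[0].id, { name: "answer" })); // true
```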

View File

@@ -0,0 +1,16 @@
"use strict";
module.exports = function (it) {
const { configName, importerName } = it;
return `
"${configName}" is invalid syntax for a config specifier.
* If your intention is to extend from a configuration exported from the plugin, add the configuration name after a slash: e.g. "${configName}/myConfig".
* If this is the name of a shareable config instead of a plugin, remove the "plugin:" prefix: i.e. "${configName.slice("plugin:".length)}".
"${configName}" was referenced from the config file in "${importerName}".
If you still can't figure out the problem, please see https://eslint.org/docs/latest/use/troubleshooting.
`.trimStart();
};

View File

@@ -0,0 +1,119 @@
/**
* @fileoverview Rule to flag unsafe statements in finally block
* @author Onur Temizkan
*/
"use strict";
//------------------------------------------------------------------------------
// Helpers
//------------------------------------------------------------------------------
const SENTINEL_NODE_TYPE_RETURN_THROW =
/^(?:Program|(?:Function|Class)(?:Declaration|Expression)|ArrowFunctionExpression)$/u;
const SENTINEL_NODE_TYPE_BREAK =
/^(?:Program|(?:Function|Class)(?:Declaration|Expression)|ArrowFunctionExpression|DoWhileStatement|WhileStatement|ForOfStatement|ForInStatement|ForStatement|SwitchStatement)$/u;
const SENTINEL_NODE_TYPE_CONTINUE =
/^(?:Program|(?:Function|Class)(?:Declaration|Expression)|ArrowFunctionExpression|DoWhileStatement|WhileStatement|ForOfStatement|ForInStatement|ForStatement)$/u;
//------------------------------------------------------------------------------
// Rule Definition
//------------------------------------------------------------------------------
/** @type {import('../shared/types').Rule} */
module.exports = {
meta: {
type: "problem",
docs: {
description: "Disallow control flow statements in `finally` blocks",
recommended: true,
url: "https://eslint.org/docs/latest/rules/no-unsafe-finally",
},
schema: [],
messages: {
unsafeUsage: "Unsafe usage of {{nodeType}}.",
},
},
create(context) {
/**
* Checks if the node is the finalizer of a TryStatement
* @param {ASTNode} node node to check.
* @returns {boolean} - true if the node is the finalizer of a TryStatement
*/
function isFinallyBlock(node) {
return (
node.parent.type === "TryStatement" &&
node.parent.finalizer === node
);
}
/**
* Climbs up the tree if the node is not a sentinel node
* @param {ASTNode} node node to check.
* @param {string} label label of the break or continue statement
* @returns {boolean} - return whether the node is a finally block or a sentinel node
*/
function isInFinallyBlock(node, label) {
let labelInside = false;
let sentinelNodeType;
if (node.type === "BreakStatement" && !node.label) {
sentinelNodeType = SENTINEL_NODE_TYPE_BREAK;
} else if (node.type === "ContinueStatement") {
sentinelNodeType = SENTINEL_NODE_TYPE_CONTINUE;
} else {
sentinelNodeType = SENTINEL_NODE_TYPE_RETURN_THROW;
}
for (
let currentNode = node;
currentNode && !sentinelNodeType.test(currentNode.type);
currentNode = currentNode.parent
) {
if (
currentNode.parent.label &&
label &&
currentNode.parent.label.name === label.name
) {
labelInside = true;
}
if (isFinallyBlock(currentNode)) {
if (label && labelInside) {
return false;
}
return true;
}
}
return false;
}
/**
* Checks whether the possibly-unsafe statement is inside a finally block.
* @param {ASTNode} node node to check.
* @returns {void}
*/
function check(node) {
if (isInFinallyBlock(node, node.label)) {
context.report({
messageId: "unsafeUsage",
data: {
nodeType: node.type,
},
node,
line: node.loc.line,
column: node.loc.column,
});
}
}
return {
ReturnStatement: check,
ThrowStatement: check,
BreakStatement: check,
ContinueStatement: check,
};
},
};

View File

@@ -0,0 +1,138 @@
"use strict";
Object.defineProperty(exports, Symbol.toStringTag, { value: "Module" });
const React = require("react");
const Asset = require("./Asset.cjs");
const useRouter = require("./useRouter.cjs");
const useRouterState = require("./useRouterState.cjs");
function _interopNamespaceDefault(e) {
const n = Object.create(null, { [Symbol.toStringTag]: { value: "Module" } });
if (e) {
for (const k in e) {
if (k !== "default") {
const d = Object.getOwnPropertyDescriptor(e, k);
Object.defineProperty(n, k, d.get ? d : {
enumerable: true,
get: () => e[k]
});
}
}
}
n.default = e;
return Object.freeze(n);
}
const React__namespace = /* @__PURE__ */ _interopNamespaceDefault(React);
const useTags = () => {
const router = useRouter.useRouter();
const routeMeta = useRouterState.useRouterState({
select: (state) => {
return state.matches.map((match) => match.meta).filter(Boolean);
}
});
const meta = React__namespace.useMemo(() => {
const resultMeta = [];
const metaByAttribute = {};
let title;
[...routeMeta].reverse().forEach((metas) => {
[...metas].reverse().forEach((m) => {
if (!m) return;
if (m.title) {
if (!title) {
title = {
tag: "title",
children: m.title
};
}
} else {
const attribute = m.name ?? m.property;
if (attribute) {
if (metaByAttribute[attribute]) {
return;
} else {
metaByAttribute[attribute] = true;
}
}
resultMeta.push({
tag: "meta",
attrs: {
...m
}
});
}
});
});
if (title) {
resultMeta.push(title);
}
resultMeta.reverse();
return resultMeta;
}, [routeMeta]);
const links = useRouterState.useRouterState({
select: (state) => state.matches.map((match) => match.links).filter(Boolean).flat(1).map((link) => ({
tag: "link",
attrs: {
...link
}
})),
structuralSharing: true
});
const preloadMeta = useRouterState.useRouterState({
select: (state) => {
const preloadMeta2 = [];
state.matches.map((match) => router.looseRoutesById[match.routeId]).forEach(
(route) => {
var _a, _b, _c, _d;
return (_d = (_c = (_b = (_a = router.ssr) == null ? void 0 : _a.manifest) == null ? void 0 : _b.routes[route.id]) == null ? void 0 : _c.preloads) == null ? void 0 : _d.filter(Boolean).forEach((preload) => {
preloadMeta2.push({
tag: "link",
attrs: {
rel: "modulepreload",
href: preload
}
});
});
}
);
return preloadMeta2;
},
structuralSharing: true
});
const headScripts = useRouterState.useRouterState({
select: (state) => state.matches.map((match) => match.headScripts).flat(1).filter(Boolean).map(({ children, ...script }) => ({
tag: "script",
attrs: {
...script
},
children
})),
structuralSharing: true
});
return uniqBy(
[
...meta,
...preloadMeta,
...links,
...headScripts
],
(d) => {
return JSON.stringify(d);
}
);
};
function HeadContent() {
const tags = useTags();
return tags.map((tag) => /* @__PURE__ */ React.createElement(Asset.Asset, { ...tag, key: `tsr-meta-${JSON.stringify(tag)}` }));
}
function uniqBy(arr, fn) {
const seen = /* @__PURE__ */ new Set();
return arr.filter((item) => {
const key = fn(item);
if (seen.has(key)) {
return false;
}
seen.add(key);
return true;
});
}
exports.HeadContent = HeadContent;
exports.useTags = useTags;
//# sourceMappingURL=HeadContent.cjs.map

View File

@@ -0,0 +1,22 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.default = _extends;
function _extends() {
exports.default = _extends = Object.assign ? Object.assign.bind() : function (target) {
for (var i = 1; i < arguments.length; i++) {
var source = arguments[i];
for (var key in source) {
if (Object.prototype.hasOwnProperty.call(source, key)) {
target[key] = source[key];
}
}
}
return target;
};
return _extends.apply(null, arguments);
}
//# sourceMappingURL=extends.js.map

View File

@@ -0,0 +1,30 @@
// Copyright (c) 2010 LearnBoost <tj@learnboost.com>
#pragma once
#include <stdint.h> // node < 7 uses libstdc++ on macOS which lacks complete c++11
#include <cstdlib>
/*
* RGBA struct.
*/
typedef struct {
double r, g, b, a;
} rgba_t;
/*
* Prototypes.
*/
rgba_t
rgba_create(uint32_t rgba);
int32_t
rgba_from_string(const char *str, short *ok);
void
rgba_to_string(rgba_t rgba, char *buf, size_t len);
void
rgba_inspect(int32_t rgba);

View File

@@ -0,0 +1,10 @@
declare namespace clsx {
type ClassValue = ClassArray | ClassDictionary | string | number | bigint | null | boolean | undefined;
type ClassDictionary = Record<string, any>;
type ClassArray = ClassValue[];
function clsx(...inputs: ClassValue[]): string;
}
declare function clsx(...inputs: clsx.ClassValue[]): string;
export = clsx;

View File

@@ -0,0 +1 @@
module.exports={A:{A:{"1":"F A B","2":"K D E mC"},B:{"1":"0 9 C L M G N O P Q H R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z AB BB CB DB EB FB GB HB IB JB KB LB MB NB OB I"},C:{"1":"0 9 YB ZB aB bB cB dB eB fB gB hB iB jB kB lB mB nB oB pB qB rB sB tB uB vB MC wB NC xB yB zB 0B 1B 2B 3B 4B 5B 6B 7B 8B 9B AC BC CC DC Q H R OC S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z AB BB CB DB EB FB GB HB IB JB KB LB MB NB OB I PC EC QC RC oC pC","2":"1 nC LC J PB K D E F A B C L M G N O P QB qC rC","4":"2 3 4 5 6 7 8 RB SB TB UB VB WB XB"},D:{"1":"0 1 2 3 4 5 6 7 8 9 J PB K D E F A B C L M G N O P QB RB SB TB UB VB WB XB YB ZB aB bB cB dB eB fB gB hB iB jB kB lB mB nB oB pB qB rB sB tB uB vB MC wB NC xB yB zB 0B 1B 2B 3B 4B 5B 6B 7B 8B 9B AC BC CC DC Q H R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z AB BB CB DB EB FB GB HB IB JB KB LB MB NB OB I PC EC QC RC"},E:{"1":"J PB K D E F A B C L M G SC tC uC vC wC TC FC GC xC yC zC UC VC HC 0C IC WC XC YC ZC aC 1C JC bC cC dC eC fC 2C KC gC hC iC jC 3C","2":"sC"},F:{"1":"0 6 7 8 RB SB TB UB VB WB XB YB ZB aB bB cB dB eB fB gB hB iB jB kB lB mB nB oB pB qB rB sB tB uB vB wB xB yB zB 0B 1B 2B 3B 4B 5B 6B 7B 8B 9B AC BC CC DC Q H R OC S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z","2":"1 2 3 4 5 F B C G N O P QB 4C 5C 6C 7C FC kC 8C GC"},G:{"1":"E SC 9C lC AD BD CD DD ED FD GD HD ID JD KD LD MD ND OD PD QD RD SD UC VC HC TD IC WC XC YC ZC aC UD JC bC cC dC eC fC VD KC gC hC iC jC"},H:{"2":"WD"},I:{"1":"I bD cD","4":"LC J XD YD aD lC","132":"ZD"},J:{"1":"D A"},K:{"1":"B C H FC kC GC","2":"A"},L:{"1":"I"},M:{"1":"EC"},N:{"1":"A B"},O:{"1":"HC"},P:{"1":"1 2 3 4 5 6 7 8 J dD eD fD gD hD TC iD jD kD lD mD IC JC KC nD"},Q:{"1":"oD"},R:{"1":"pD"},S:{"1":"qD rD"}},B:6,C:"MPEG-4/H.264 video format",D:true};

View File

@@ -0,0 +1,439 @@
import { last } from './utils'
import type { MatchLocation } from './RouterProvider'
import type { AnyPathParams } from './route'
export interface Segment {
type: 'pathname' | 'param' | 'wildcard'
value: string
}
export function joinPaths(paths: Array<string | undefined>) {
return cleanPath(
paths
.filter((val) => {
return val !== undefined
})
.join('/'),
)
}
export function cleanPath(path: string) {
// remove double slashes
return path.replace(/\/{2,}/g, '/')
}
export function trimPathLeft(path: string) {
return path === '/' ? path : path.replace(/^\/{1,}/, '')
}
export function trimPathRight(path: string) {
return path === '/' ? path : path.replace(/\/{1,}$/, '')
}
export function trimPath(path: string) {
return trimPathRight(trimPathLeft(path))
}
export function removeTrailingSlash(value: string, basepath: string): string {
if (value?.endsWith('/') && value !== '/' && value !== `${basepath}/`) {
return value.slice(0, -1)
}
return value
}
// intended to only compare path name
// see the usage in the isActive under useLinkProps
// /sample/path1 = /sample/path1/
// /sample/path1/some <> /sample/path1
export function exactPathTest(
pathName1: string,
pathName2: string,
basepath: string,
): boolean {
return (
removeTrailingSlash(pathName1, basepath) ===
removeTrailingSlash(pathName2, basepath)
)
}
// When resolving relative paths, we treat all paths as if they are trailing slash
// documents. All trailing slashes are removed after the path is resolved.
// Here are a few examples:
//
// /a/b/c + ./d = /a/b/c/d
// /a/b/c + ../d = /a/b/d
// /a/b/c + ./d/ = /a/b/c/d
// /a/b/c + ../d/ = /a/b/d
// /a/b/c + ./ = /a/b/c
//
// Absolute paths that start with `/` short circuit the resolution process to the root
// path.
//
// Here are some examples:
//
// /a/b/c + /d = /d
// /a/b/c + /d/ = /d
// /a/b/c + / = /
//
// Non-.-prefixed paths are still treated as relative paths, resolved like `./`
//
// Here are some examples:
//
// /a/b/c + d = /a/b/c/d
// /a/b/c + d/ = /a/b/c/d
// /a/b/c + d/e = /a/b/c/d/e
interface ResolvePathOptions {
basepath: string
base: string
to: string
trailingSlash?: 'always' | 'never' | 'preserve'
caseSensitive?: boolean
}
export function resolvePath({
basepath,
base,
to,
trailingSlash = 'never',
caseSensitive,
}: ResolvePathOptions) {
base = removeBasepath(basepath, base, caseSensitive)
to = removeBasepath(basepath, to, caseSensitive)
let baseSegments = parsePathname(base)
const toSegments = parsePathname(to)
if (baseSegments.length > 1 && last(baseSegments)?.value === '/') {
baseSegments.pop()
}
toSegments.forEach((toSegment, index) => {
if (toSegment.value === '/') {
if (!index) {
// Leading slash
baseSegments = [toSegment]
} else if (index === toSegments.length - 1) {
// Trailing Slash
baseSegments.push(toSegment)
} else {
// ignore inter-slashes
}
} else if (toSegment.value === '..') {
baseSegments.pop()
} else if (toSegment.value === '.') {
// ignore
} else {
baseSegments.push(toSegment)
}
})
if (baseSegments.length > 1) {
if (last(baseSegments)?.value === '/') {
if (trailingSlash === 'never') {
baseSegments.pop()
}
} else if (trailingSlash === 'always') {
baseSegments.push({ type: 'pathname', value: '/' })
}
}
const joined = joinPaths([basepath, ...baseSegments.map((d) => d.value)])
return cleanPath(joined)
}
export function parsePathname(pathname?: string): Array<Segment> {
if (!pathname) {
return []
}
pathname = cleanPath(pathname)
const segments: Array<Segment> = []
if (pathname.slice(0, 1) === '/') {
pathname = pathname.substring(1)
segments.push({
type: 'pathname',
value: '/',
})
}
if (!pathname) {
return segments
}
// Remove empty segments and '.' segments
const split = pathname.split('/').filter(Boolean)
segments.push(
...split.map((part): Segment => {
if (part === '$' || part === '*') {
return {
type: 'wildcard',
value: part,
}
}
if (part.charAt(0) === '$') {
return {
type: 'param',
value: part,
}
}
return {
type: 'pathname',
value: part.includes('%25')
? part
.split('%25')
.map((segment) => decodeURI(segment))
.join('%25')
: decodeURI(part),
}
}),
)
if (pathname.slice(-1) === '/') {
pathname = pathname.substring(1)
segments.push({
type: 'pathname',
value: '/',
})
}
return segments
}
interface InterpolatePathOptions {
path?: string
params: Record<string, unknown>
leaveWildcards?: boolean
leaveParams?: boolean
// Map of encoded chars to decoded chars (e.g. '%40' -> '@') that should remain decoded in path params
decodeCharMap?: Map<string, string>
}
type InterPolatePathResult = {
interpolatedPath: string
usedParams: Record<string, unknown>
}
export function interpolatePath({
path,
params,
leaveWildcards,
leaveParams,
decodeCharMap,
}: InterpolatePathOptions): InterPolatePathResult {
const interpolatedPathSegments = parsePathname(path)
function encodeParam(key: string): any {
const value = params[key]
const isValueString = typeof value === 'string'
if (['*', '_splat'].includes(key)) {
// the splat/catch-all routes shouldn't have the '/' encoded out
return isValueString ? encodeURI(value) : value
} else {
return isValueString ? encodePathParam(value, decodeCharMap) : value
}
}
const usedParams: Record<string, unknown> = {}
const interpolatedPath = joinPaths(
interpolatedPathSegments.map((segment) => {
if (segment.type === 'wildcard') {
usedParams._splat = params._splat
const value = encodeParam('_splat')
if (leaveWildcards) return `${segment.value}${value ?? ''}`
return value
}
if (segment.type === 'param') {
const key = segment.value.substring(1)
usedParams[key] = params[key]
if (leaveParams) {
const value = encodeParam(segment.value)
return `${segment.value}${value ?? ''}`
}
return encodeParam(key) ?? 'undefined'
}
return segment.value
}),
)
return { usedParams, interpolatedPath }
}
function encodePathParam(value: string, decodeCharMap?: Map<string, string>) {
let encoded = encodeURIComponent(value)
if (decodeCharMap) {
for (const [encodedChar, char] of decodeCharMap) {
encoded = encoded.replaceAll(encodedChar, char)
}
}
return encoded
}
export function matchPathname(
basepath: string,
currentPathname: string,
matchLocation: Pick<MatchLocation, 'to' | 'fuzzy' | 'caseSensitive'>,
): AnyPathParams | undefined {
const pathParams = matchByPath(basepath, currentPathname, matchLocation)
// const searchMatched = matchBySearch(location.search, matchLocation)
if (matchLocation.to && !pathParams) {
return
}
return pathParams ?? {}
}
export function removeBasepath(
basepath: string,
pathname: string,
caseSensitive: boolean = false,
) {
// normalize basepath and pathname for case-insensitive comparison if needed
const normalizedBasepath = caseSensitive ? basepath : basepath.toLowerCase()
const normalizedPathname = caseSensitive ? pathname : pathname.toLowerCase()
switch (true) {
// default behaviour is to serve app from the root - pathname
// left untouched
case normalizedBasepath === '/':
return pathname
// shortcut for removing the basepath if it matches the pathname
case normalizedPathname === normalizedBasepath:
return ''
// in case pathname is shorter than basepath - there is
// nothing to remove
case pathname.length < basepath.length:
return pathname
// avoid matching partial segments - strict equality handled
// earlier, otherwise, basepath separated from pathname with
// separator, therefore lack of separator means partial
// segment match (`/app` should not match `/application`)
case normalizedPathname[normalizedBasepath.length] !== '/':
return pathname
// remove the basepath from the pathname if it starts with it
case normalizedPathname.startsWith(normalizedBasepath):
return pathname.slice(basepath.length)
// otherwise, return the pathname as is
default:
return pathname
}
}
export function matchByPath(
basepath: string,
from: string,
matchLocation: Pick<MatchLocation, 'to' | 'caseSensitive' | 'fuzzy'>,
): Record<string, string> | undefined {
// check basepath first
if (basepath !== '/' && !from.startsWith(basepath)) {
return undefined
}
// Remove the base path from the pathname
from = removeBasepath(basepath, from, matchLocation.caseSensitive)
// Default to to $ (wildcard)
const to = removeBasepath(
basepath,
`${matchLocation.to ?? '$'}`,
matchLocation.caseSensitive,
)
// Parse the from and to
const baseSegments = parsePathname(from)
const routeSegments = parsePathname(to)
if (!from.startsWith('/')) {
baseSegments.unshift({
type: 'pathname',
value: '/',
})
}
if (!to.startsWith('/')) {
routeSegments.unshift({
type: 'pathname',
value: '/',
})
}
const params: Record<string, string> = {}
const isMatch = (() => {
for (
let i = 0;
i < Math.max(baseSegments.length, routeSegments.length);
i++
) {
const baseSegment = baseSegments[i]
const routeSegment = routeSegments[i]
const isLastBaseSegment = i >= baseSegments.length - 1
const isLastRouteSegment = i >= routeSegments.length - 1
if (routeSegment) {
if (routeSegment.type === 'wildcard') {
const _splat = decodeURI(
joinPaths(baseSegments.slice(i).map((d) => d.value)),
)
// TODO: Deprecate *
params['*'] = _splat
params['_splat'] = _splat
return true
}
if (routeSegment.type === 'pathname') {
if (routeSegment.value === '/' && !baseSegment?.value) {
return true
}
if (baseSegment) {
if (matchLocation.caseSensitive) {
if (routeSegment.value !== baseSegment.value) {
return false
}
} else if (
routeSegment.value.toLowerCase() !==
baseSegment.value.toLowerCase()
) {
return false
}
}
}
if (!baseSegment) {
return false
}
if (routeSegment.type === 'param') {
if (baseSegment.value === '/') {
return false
}
if (baseSegment.value.charAt(0) !== '$') {
params[routeSegment.value.substring(1)] = decodeURIComponent(
baseSegment.value,
)
}
}
}
if (!isLastBaseSegment && isLastRouteSegment) {
params['**'] = joinPaths(baseSegments.slice(i + 1).map((d) => d.value))
return !!matchLocation.fuzzy && routeSegment?.value !== '/'
}
}
return true
})()
return isMatch ? params : undefined
}

View File

@@ -0,0 +1 @@
module.exports={A:{D:{"1":"0 9 uB vB MC wB NC xB yB zB 0B 1B 2B 3B 4B 5B 6B 7B 8B 9B AC BC CC DC Q H R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z AB BB CB DB EB FB GB HB IB JB KB LB MB NB OB I PC EC QC RC","2":"1 2 3 4 5 6 7 8 J PB K D E F A B C L M G N O P QB RB SB TB UB VB WB XB YB ZB aB bB cB dB eB fB gB hB iB jB kB lB mB nB oB pB qB rB sB tB"},L:{"1":"I"},B:{"1":"0 9 Q H R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z AB BB CB DB EB FB GB HB IB JB KB LB MB NB OB I","2":"C L M G N O P"},C:{"1":"0 9 ZB aB bB cB dB eB fB gB hB iB jB kB lB mB nB oB pB qB rB sB tB uB vB MC wB NC xB yB zB 0B 1B 2B 3B 4B 5B 6B 7B 8B 9B AC BC CC DC Q H R OC S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z AB BB CB DB EB FB GB HB IB JB KB LB MB NB OB I PC EC QC RC oC pC","2":"nC LC J PB qC rC","33":"1 2 3 4 5 6 7 8 K D E F A B C L M G N O P QB RB SB TB UB VB WB XB YB"},M:{"1":"EC"},A:{"2":"K D E F A B mC"},F:{"1":"0 hB iB jB kB lB mB nB oB pB qB rB sB tB uB vB wB xB yB zB 0B 1B 2B 3B 4B 5B 6B 7B 8B 9B AC BC CC DC Q H R OC S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z","2":"1 2 3 4 5 6 7 8 F B C G N O P QB RB SB TB UB VB WB XB YB ZB aB bB cB dB eB fB gB 4C 5C 6C 7C FC kC 8C GC"},K:{"1":"H","2":"A B C FC kC GC"},E:{"1":"L M G GC xC yC zC UC VC HC 0C IC WC XC YC ZC aC 1C JC bC cC dC eC fC 2C KC gC hC iC jC","2":"J PB K D sC SC tC uC vC 3C","33":"E F A B C wC TC FC"},G:{"1":"LD MD ND OD PD QD RD SD UC VC HC TD IC WC XC YC ZC aC UD JC bC cC dC eC fC VD KC gC hC iC jC","2":"SC 9C lC AD BD CD","33":"E DD ED FD GD HD ID JD KD"},P:{"1":"1 2 3 4 5 6 7 8 fD gD hD TC iD jD kD lD mD IC JC KC nD","2":"J dD eD"},I:{"1":"I","2":"LC J XD YD ZD aD lC bD cD"}},B:6,C:"text-decoration-style property",D:undefined};

View File

@@ -0,0 +1,14 @@
"use strict";
Object.defineProperty(exports, "__esModule", {
value: true
});
exports.default = toBindingIdentifierName;
var _toIdentifier = require("./toIdentifier.js");
function toBindingIdentifierName(name) {
name = (0, _toIdentifier.default)(name);
if (name === "eval" || name === "arguments") name = "_" + name;
return name;
}
//# sourceMappingURL=toBindingIdentifierName.js.map

View File

@@ -0,0 +1,262 @@
/**
* @fileoverview The CodePathSegment class.
* @author Toru Nagashima
*/
"use strict";
//------------------------------------------------------------------------------
// Requirements
//------------------------------------------------------------------------------
const debug = require("./debug-helpers");
//------------------------------------------------------------------------------
// Helpers
//------------------------------------------------------------------------------
/**
* Checks whether or not a given segment is reachable.
* @param {CodePathSegment} segment A segment to check.
* @returns {boolean} `true` if the segment is reachable.
*/
function isReachable(segment) {
return segment.reachable;
}
//------------------------------------------------------------------------------
// Public Interface
//------------------------------------------------------------------------------
/**
* A code path segment.
*
* Each segment is arranged in a series of linked lists (implemented by arrays)
* that keep track of the previous and next segments in a code path. In this way,
* you can navigate between all segments in any code path so long as you have a
* reference to any segment in that code path.
*
* When first created, the segment is in a detached state, meaning that it knows the
* segments that came before it but those segments don't know that this new segment
* follows it. Only when `CodePathSegment#markUsed()` is called on a segment does it
* officially become part of the code path by updating the previous segments to know
* that this new segment follows.
*/
class CodePathSegment {
/**
* Creates a new instance.
* @param {string} id An identifier.
* @param {CodePathSegment[]} allPrevSegments An array of the previous segments.
* This array includes unreachable segments.
* @param {boolean} reachable A flag which shows this is reachable.
*/
constructor(id, allPrevSegments, reachable) {
/**
* The identifier of this code path.
* Rules use it to store additional information of each rule.
* @type {string}
*/
this.id = id;
/**
* An array of the next reachable segments.
* @type {CodePathSegment[]}
*/
this.nextSegments = [];
/**
* An array of the previous reachable segments.
* @type {CodePathSegment[]}
*/
this.prevSegments = allPrevSegments.filter(isReachable);
/**
* An array of all next segments including reachable and unreachable.
* @type {CodePathSegment[]}
*/
this.allNextSegments = [];
/**
* An array of all previous segments including reachable and unreachable.
* @type {CodePathSegment[]}
*/
this.allPrevSegments = allPrevSegments;
/**
* A flag which shows this is reachable.
* @type {boolean}
*/
this.reachable = reachable;
// Internal data.
Object.defineProperty(this, "internal", {
value: {
// determines if the segment has been attached to the code path
used: false,
// array of previous segments coming from the end of a loop
loopedPrevSegments: [],
},
});
/* c8 ignore start */
if (debug.enabled) {
this.internal.nodes = [];
} /* c8 ignore stop */
}
/**
* Checks a given previous segment is coming from the end of a loop.
* @param {CodePathSegment} segment A previous segment to check.
* @returns {boolean} `true` if the segment is coming from the end of a loop.
*/
isLoopedPrevSegment(segment) {
return this.internal.loopedPrevSegments.includes(segment);
}
/**
* Creates the root segment.
* @param {string} id An identifier.
* @returns {CodePathSegment} The created segment.
*/
static newRoot(id) {
return new CodePathSegment(id, [], true);
}
/**
* Creates a new segment and appends it after the given segments.
* @param {string} id An identifier.
* @param {CodePathSegment[]} allPrevSegments An array of the previous segments
* to append to.
* @returns {CodePathSegment} The created segment.
*/
static newNext(id, allPrevSegments) {
return new CodePathSegment(
id,
CodePathSegment.flattenUnusedSegments(allPrevSegments),
allPrevSegments.some(isReachable),
);
}
/**
* Creates an unreachable segment and appends it after the given segments.
* @param {string} id An identifier.
* @param {CodePathSegment[]} allPrevSegments An array of the previous segments.
* @returns {CodePathSegment} The created segment.
*/
static newUnreachable(id, allPrevSegments) {
const segment = new CodePathSegment(
id,
CodePathSegment.flattenUnusedSegments(allPrevSegments),
false,
);
/*
* In `if (a) return a; foo();` case, the unreachable segment preceded by
* the return statement is not used but must not be removed.
*/
CodePathSegment.markUsed(segment);
return segment;
}
/**
* Creates a segment that follows given segments.
* This factory method does not connect with `allPrevSegments`.
* But this inherits `reachable` flag.
* @param {string} id An identifier.
* @param {CodePathSegment[]} allPrevSegments An array of the previous segments.
* @returns {CodePathSegment} The created segment.
*/
static newDisconnected(id, allPrevSegments) {
return new CodePathSegment(id, [], allPrevSegments.some(isReachable));
}
/**
* Marks a given segment as used.
*
* And this function registers the segment into the previous segments as a next.
* @param {CodePathSegment} segment A segment to mark.
* @returns {void}
*/
static markUsed(segment) {
if (segment.internal.used) {
return;
}
segment.internal.used = true;
let i;
if (segment.reachable) {
/*
* If the segment is reachable, then it's officially part of the
* code path. This loops through all previous segments to update
* their list of next segments. Because the segment is reachable,
* it's added to both `nextSegments` and `allNextSegments`.
*/
for (i = 0; i < segment.allPrevSegments.length; ++i) {
const prevSegment = segment.allPrevSegments[i];
prevSegment.allNextSegments.push(segment);
prevSegment.nextSegments.push(segment);
}
} else {
/*
* If the segment is not reachable, then it's not officially part of the
* code path. This loops through all previous segments to update
* their list of next segments. Because the segment is not reachable,
* it's added only to `allNextSegments`.
*/
for (i = 0; i < segment.allPrevSegments.length; ++i) {
segment.allPrevSegments[i].allNextSegments.push(segment);
}
}
}
/**
* Marks a previous segment as looped.
* @param {CodePathSegment} segment A segment.
* @param {CodePathSegment} prevSegment A previous segment to mark.
* @returns {void}
*/
static markPrevSegmentAsLooped(segment, prevSegment) {
segment.internal.loopedPrevSegments.push(prevSegment);
}
/**
* Creates a new array based on an array of segments. If any segment in the
* array is unused, then it is replaced by all of its previous segments.
* All used segments are returned as-is without replacement.
* @param {CodePathSegment[]} segments The array of segments to flatten.
* @returns {CodePathSegment[]} The flattened array.
*/
static flattenUnusedSegments(segments) {
const done = new Set();
for (let i = 0; i < segments.length; ++i) {
const segment = segments[i];
// Ignores duplicated.
if (done.has(segment)) {
continue;
}
// Use previous segments if unused.
if (!segment.internal.used) {
for (let j = 0; j < segment.allPrevSegments.length; ++j) {
const prevSegment = segment.allPrevSegments[j];
if (!done.has(prevSegment)) {
done.add(prevSegment);
}
}
} else {
done.add(segment);
}
}
return [...done];
}
}
module.exports = CodePathSegment;

View File

@@ -0,0 +1,27 @@
{
"name": "acorn-jsx",
"description": "Modern, fast React.js JSX parser",
"homepage": "https://github.com/acornjs/acorn-jsx",
"version": "5.3.2",
"maintainers": [
{
"name": "Ingvar Stepanyan",
"email": "me@rreverser.com",
"web": "http://rreverser.com/"
}
],
"repository": {
"type": "git",
"url": "https://github.com/acornjs/acorn-jsx"
},
"license": "MIT",
"scripts": {
"test": "node test/run.js"
},
"peerDependencies": {
"acorn": "^6.0.0 || ^7.0.0 || ^8.0.0"
},
"devDependencies": {
"acorn": "^8.0.1"
}
}

View File

@@ -0,0 +1,5 @@
const preloadWarning = "Error preloading route! ☝️";
export {
preloadWarning
};
//# sourceMappingURL=link.js.map

View File

@@ -0,0 +1,115 @@
# file-entry-cache
> Super simple cache for file metadata, useful for processes that work on a given series of files
> and that only need to repeat the job on the ones changed since the previous run of the process
[![NPM Version](https://img.shields.io/npm/v/file-entry-cache.svg?style=flat)](https://npmjs.org/package/file-entry-cache)
[![tests](https://github.com/jaredwray/file-entry-cache/actions/workflows/tests.yaml/badge.svg?branch=master)](https://github.com/jaredwray/file-entry-cache/actions/workflows/tests.yaml)
[![codecov](https://codecov.io/github/jaredwray/file-entry-cache/graph/badge.svg?token=37tZMQE0Sy)](https://codecov.io/github/jaredwray/file-entry-cache)
[![npm](https://img.shields.io/npm/dm/file-entry-cache)](https://npmjs.com/package/file-entry-cache)
## install
```bash
npm i --save file-entry-cache
```
## Usage
The module exposes two functions `create` and `createFromFile`.
## `create(cacheName, [directory, useCheckSum])`
- **cacheName**: the name of the cache to be created
- **directory**: Optional. The directory to load the cache from
- **useCheckSum**: Whether to use an md5 checksum to verify whether a file changed. If false, the default is to use the mtime and size of the file.
## `createFromFile(pathToCache, [useCheckSum])`
- **pathToCache**: the path to the cache file (this combines the cache name and directory)
- **useCheckSum**: Whether to use an md5 checksum to verify whether a file changed. If false, the default is to use the mtime and size of the file.
```js
// Loads the cache; if one does not exist for the given
// id, a new one will be prepared to be created
var fileEntryCache = require('file-entry-cache');
var cache = fileEntryCache.create('testCache');
var files = expand('../fixtures/*.txt');
// the first time this method is called, will return all the files
var oFiles = cache.getUpdatedFiles(files);
// this will persist the cache to disk, checking each file's stats and
// updating the meta attributes `size` and `mtime`.
// custom fields could also be added to the meta object and will be persisted
// in order to retrieve them later
cache.reconcile();
// use this if you want the non-visited file entries to be kept in the cache
// for more than one execution
//
// cache.reconcile( true /* noPrune */)
// on a second run
var cache2 = fileEntryCache.create('testCache');
// will now return only the files that were modified, or none
// if no files were modified since the previous run
var oFiles = cache2.getUpdatedFiles(files);
// if you want to prevent a file from being considered unmodified,
// which is useful if a file failed some sort of validation,
// you can remove its entry from the cache:
cache.removeEntry('path/to/file'); // the path should be the same one the file had when passed to `getUpdatedFiles`
// that will effectively make the file appear as modified again until the validation passes. In that
// case you should not remove it from the cache
// if you need all the files, so you can determine what to do with the changed ones
// you can call
var oFiles = cache.normalizeEntries(files);
// oFiles will be an array of objects like the following
entry = {
  key: 'some/name/file',  // the path to the file
  changed: true,          // whether the file was changed since the previous run
  meta: {
    size: 3242,           // the size of the file
    mtime: 231231231,     // the modification time of the file
    data: {}              // extra fields stored for this file (useful to save the result of a transformation on the file)
  }
}
```
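A short sketch of the checksum-based variant, assuming a cache file path of your choosing:

```js
var fileEntryCache = require('file-entry-cache');

// Store the cache at an explicit path and compare files by md5 checksum
// instead of mtime/size, so content changes are caught even when the stats are unchanged.
var cache = fileEntryCache.createFromFile('./.cache/my-task-cache', true);

var changedFiles = cache.getUpdatedFiles(['src/a.js', 'src/b.js']);

// ...do the work on `changedFiles`, then persist the new state:
cache.reconcile();
```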
## Motivation for this module
I needed a super simple and dumb **in-memory cache** with optional disk persistence (write-back cache) so that
a script that beautifies files with `esformatter` would execute only on the files that were changed since the last run.
By doing so, the process of beautifying files was reduced from several seconds to a small fraction of a second.
This module uses [flat-cache](https://www.npmjs.com/package/flat-cache), a super simple `key/value` cache storage with
optional file persistence.
The main idea is to read the files when the task begins, apply the required transforms, and, if the process succeeds,
store the new state of the files. The next time `getChangedFiles` is requested, it will return only
the files that were modified, making the process finish faster.
This module could also be used by processes that modify the files by applying a transform; in that case the result of the
transform could be stored in the `meta` field of the entries. Anything added to the meta field will be persisted.
Those processes won't need to call `getChangedFiles`; they will instead call `normalizeEntries`, which will return the
entries with a `changed` field that can be used to determine whether the file was changed. If it was not changed,
the stored transformed data could be used instead of actually applying the transformation, saving time when only
a few files changed.
In the worst case scenario all the files will be processed. In the best case scenario only a few of them will be processed.
## Important notes
- The values set on the `meta` attribute of the entries should be `stringify`-able if possible; flat-cache uses `circular-json` to try to persist circular structures, but this should be considered experimental. The best results are always obtained with non-circular values.
- All the changes to the cache state are done to memory first and only persisted after reconcile.
## License
MIT