Built and signed on GitHub Actions – publishing npm packages with provenance statements
https://tsmx.net/npmjs-built-and-signed-on-github-actions/ | Fri, 17 Nov 2023

A quick guide on how to set up a GitHub Actions workflow to publish an npm package including the provenance badge and section on npmjs.com.

So far, you may have published your npm packages by simply invoking npm publish. Using GitHub actions provides a more elegant way and also enables you to get this nice green checkmark badge behind the version number…

[Image: npm-version-provenance]

…as well as the provenance section for your packages at npmjs.com…

[Image: npm-package-provenance]

This adds an extra level of security by providing evidence of your package’s origin, leveraging sigstore. In this article we’ll show how to set up a simple GitHub Actions workflow to publish your package including the signed provenance details.

Prerequisites

To follow along with this guide, you should have:

  • An existing package published at npmjs.com
  • The package’s source code in a GitHub repository
  • The GitHub CLI installed and working

Generating a token for publishing packages on npmjs.com

First, you will need an access token for npmjs.com. For that, log in at npmjs.com and head over to the Access Tokens section of your account. Then create a Classic Token of type Automation with any name, e.g. gh-actions-publish. On creation, the token value is shown once and never again, so make sure you copy it and save it in a secure place. After that, you should see the newly created token in your account’s token list, like so.

[Image: npm-automation-token]

Using this token will enable your GitHub actions workflow to publish new package versions including bypassing 2FA.

Storing the npm token on GitHub

Next, store the generated npm token value as a secret in your GitHub repository. For that, head over to Settings -> Secrets and variables and press New repository secret. Enter a name and the token value.

[Image: github-npm-token-secret-v2]

Here the created secret has the name NPM_TOKEN. Having that, it can be referenced in GitHub Actions workflow definitions via ${{ secrets.NPM_TOKEN }}.

Setting up a GitHub action workflow for publishing on npmjs.com

Next step is to add a GitHub Actions workflow definition for publishing your npm package at npmjs.com to the repository. For that, add the following YAML as a new file in .github/workflows, e.g. .github/workflows/npm-publish.yml, in the repository.

name: npm-publish
on:
  workflow_dispatch: 
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write # required for creating the signed provenance statement
    steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-node@v3
      with:
        node-version: '18.x'
        registry-url: 'https://registry.npmjs.org'
    - run: npm ci
    - run: npm publish --provenance --access public
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }} # npm token stored as a repository secret

This workflow, named npm-publish, will publish a new version of your package to the npm registry with provenance statements. If your package is private, you can omit the --access public option in the publishing step.

Running the workflow

With workflow_dispatch: in the on section, the provided GitHub Actions workflow has no automatic trigger and will only run when started manually. This might be preferred to keep full control over when a new package version is published. If you’d rather have the publish workflow run automatically on certain events in your repo, check out the possible triggers for the on section.
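If you go for an automatic trigger, a minimal sketch could look like this – for example, running the publish job whenever a GitHub release is published (replacing the workflow_dispatch entry accordingly):

# run the publish job whenever a GitHub release is published
on:
  release:
    types: [published]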

To start the workflow manually and publish the package, simply run…

$ gh workflow run npm-publish

…in the project directory. Make sure the version number of your package has been updated and all changes are committed & pushed before invoking the publishing action.
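If you haven’t bumped the version yet, npm version is a convenient way to do so – a quick sketch, using a patch bump as an example:

$ npm version patch   # bumps the version in package.json, commits the change and creates a git tag
$ git push && git push --tags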

After the workflow has completed successfully, you should see the new version published on npmjs.com including the checkmark badge behind the version and the provenance details section.

In the GitHub actions logs for the npm publish step of the executed workflow you can also see that the provenance statement was signed and transferred to sigstore, like so…

[Image: github-actions-npm-publish-log]

That’s it.

Happy coding 🙂

npm major package upgrades with backward patch support
https://tsmx.net/npm-major-version/ | Sun, 03 Oct 2021

A short checklist for publishing new major versions of your packages in the npm registry including support for patching older versions.

As your NodeJS packages evolve, there will come a point when you have to make non-backward-compatible changes. According to semantic versioning you should then increase the major version number, e.g. from 1.x.x to 2.x.x.

Simple package publishing without major changes

So far, most probably all you did to release a new version of your package was to increase the minor (second) or patch (third) version number in your package.json and simply run npm publish (including --access public for scoped packages) to push the new version to the npm registry. Your “versions” tab in the registry will look quite similar to this with one tag called latest.

[Image: npm-versions]

Now, for advancing to the next major version 2.0.0, there is a bit more to do and think of – especially if you want to continue patch support for the 1.x version. You should consider this good style, because your package is actively used in other projects and not everybody is able to adopt the breaking changes immediately.

Checklist for publishing a new major package version

Here are the tasks you should consider when publishing a new major version of a package…

Branching the old version source code

Typically you will be developing your new major version in a separate branch, let’s say version-2. Meanwhile master is holding the code of the currently published package version.

$ git branch
  master
* version-2

To be able to provide further patch support for version 1.x, you need to retain a copy of the code. For that, create a branch version-1 out of master and push it to the repository. Ideally, you should do this before merging the new version 2.x code to master.

$ git checkout master
$ git checkout -b version-1
$ git push -u origin version-1
$ git branch
* master
  version-1
  version-2

Now you have the 1.x sources in a separate branch for patching the old version if needed.

Creating a dist-tag for the old package version

In addition to the package’s version number, npm has so-called tags associated with a package. By default, there’s only one tag: latest. Simply call npm dist-tag to list all current tags.

$ npm dist-tag
latest: 1.0.7

Whenever you do an npm publish without specifying any tag, the new version will be published under the latest tag. Every time a user does an npm install or visits the npmjs.com site of your package, they’ll get the last version published under the tag latest.

Now what does this mean? When you need to patch an older version, let’s say v1.0.7 to v1.0.8, after you have already published a new major version v2.0.0, you cannot simply do an npm publish for the v1.x patch. Doing so would set v1.0.8 as latest which is not intended!

Luckily, the solution is quite simple. With npm dist-tag you can easily create a new tag, let’s say stable-v1, for the v1.x branch having the current version 1.0.7. This new tag is then used when you need to publish a patch for the older v1.0.7 version.

$ npm dist-tag add my-package@1.0.7 stable-v1
$ npm dist-tag
latest: 1.0.7
stable-v1: 1.0.7

Having that setup in place, you are ready for potential updates of the old package version without affecting the latest version. See updating an older package version for more details.

Copying and updating the external documentation

As with any new version of a package, you should update the documentation in README.md according to the changes you made. Additionally, you will probably have your own website linked via the homepage property in your package.json.

I recommend not creating a completely new site for the new major version of the package and pointing the homepage property to it. The reasons are mainly related to search engines…

  • You may want to retain your current rankings/reputation for the new major version
  • You may want search engines to only index exactly one website for the package, which should contain the documentation of the latest version

This is achieved by…

  • Creating a copy of the existing package’s website which then has a new URL
  • Optionally excluding this new URL from being indexed by search engines, e.g. via disallow in robots.txt or a noindex meta-tag (see the sketch after this list)
  • Updating the original website to the new major package version
  • Optionally creating a link to the old documentation on the original website, so that users of the old package version have a chance to find it
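As a minimal sketch – assuming the copied v1 documentation lives under a hypothetical path /my-package-v1/ – the robots.txt exclusion could look like this; alternatively, a <meta name="robots" content="noindex"> tag in the page head achieves the same:

# robots.txt – keep search engines away from the archived v1 documentation
User-agent: *
Disallow: /my-package-v1/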

Merging the new version source code

After saving the old version’s code in a separate branch, it’s time to merge the new major version to master.

$ git checkout master
$ git merge version-2

Publishing the new major version to the npm registry

Make sure that you are in the master branch and the version number in package.json is correctly set to the new major version. Then push the new version to the npm registry.

$ npm publish

Or for scoped packages…

$ npm publish --access public

Heading back to the “versions” tab of your package’s site you should see version 2.0.0 released under the latest tag and the recently created tag for the older 1.x version, like so…

[Image: npm-versions-with-tags]

Great! You have successfully released the new major version of your package ensuring maintenance for the previous version.

Optional final actions

Although you’re done, I recommend the following optional actions for a consistent and high-quality development process.

Creating a tag & release in Git

It’s a good practice to tag published versions in your Git repo and create a release for them. To create the tag, do the following in the master branch.

$ git tag -a v2.0.0 -m "Release 2.0.0"
$ git push origin --tags

For creating a release in your Git repo you could either use a command line tool like the GitHub CLI or simply switch over to your repo’s GitHub site. There, it takes only a few clicks:

Releases (right pane) >> Draft a new release >> Select v2.0.0 tag from branch master and enter a title & description >> Publish release.

[Image: github-create-release-3]

After that you’ll see the latest release version 2.0.0 in the right pane of your repo’s site.

[Image: github-release-latest]

Deleting the new version’s development branch

After merging the version-2 development branch back to master, you can get rid of it – both, local and remote.

$ git push -d origin version-2
$ git branch -d version-2

Updating an older package version

Let’s assume you need to patch v1.0.7 to v1.0.8 but v2.0.0 is already released. If you followed the checklist steps mentioned before, your package version & tag situation will look like this.

$ npm dist-tag
latest: 2.0.0
stable-v1: 1.0.7

To update the old version v1.0.7, simply do the needed changes in the formerly created branch version-1. Then increase the version number there to v1.0.8 and publish the new 1.x version directly from this branch using the tag stable-v1.

$ git checkout version-1

# Implement changes & test, set version to 1.0.8, commit to branch

$ npm publish --tag stable-v1

This release will not affect the latest tag, meaning that v2.0.0 is still considered the most current version of your package when somebody installs it or visits the package’s site. Don’t forget to add --access public for scoped packages.

$ npm dist-tag
latest: 2.0.0
stable-v1: 1.0.8

Happy coding 🙂

Use CommonJS npm packages in the browser with Browserify
https://tsmx.net/npm-packages-browser/ | Wed, 07 Oct 2020

A convenient way to use CommonJS npm packages in client-side JavaScript using Browserify.

Sometimes you might need to use functionality already implemented in a “good old” CommonJS npm package in client-side JavaScript. Unfortunately there’s no require to use it there directly, so another solution is needed.

I recently needed my own human-readable package in a client-side script. Here is one convenient way to accomplish this using Browserify.

Initial situation

Let’s assume you have the following client-side JavaScript that dynamically adds rows to a table of uploaded files and their size given in bytes.

// File: js/upload-utils.js

function createUploadTableEntry(filename, size) {
    var uploadTable = document.getElementById('upload-table');
    var row = uploadTable.insertRow();
    var cellName = row.insertCell();
    cellName.textContent = filename;
    var cellSize = row.insertCell();
    cellSize.textContent = size;
};

Resulting in a table looking like that:

Now it would be great to change the size column to a more readable expression using our recently created package human-readable.

Preparing the code

To do so using Browserify, we first have to add the package we want to use to our project.

npm i @tsmx/human-readable --save

Note: Since the package dependency is only needed for creating the browserified JavaScript at development time, --save-dev should be sufficient in most cases.

Now in our NodeJS project we add a small wrapper script which imports the package and exports exactly the functionality we need in our existing client-side code for building the table. In our case it is a simple function creating a readable string out of an amount of bytes.

// File: browser-utils/human-readable.js

const hr = require('@tsmx/human-readable');

module.exports.getReadableSize = function (bytes) {
    return hr.fromBytes(bytes);
};

This file should be created in a folder that is committed to your code repo but is neither deployed to production nor served statically as an asset by the server.

In my case the folder is browser-utils, which I added to my .gcloudignore (because I’m using gcloud AppEngine… ) to exclude it from production deployment and which is not served statically by express – in contrast to the js folder where all client-side scripts reside.

Browserifying it

Once that is done we’re ready to generate a client-side JavaScript by using Browserify. For that you first need to install Browserify.

npm i -g browserify

In our browser-utils folder we then call Browserify to pack our wrapper class and create the client-side script.

browserify human-readable.js --standalone hr > ../js/human-readable-utils.js

This creates a new script human-readable-utils.js in the statically served folder js for client-side usage. Note the standalone option: with this, Browserify creates a “named” UMD module that can be used by other modules/scripts – in this case the name is hr. You can omit this option if your client-side script is not used by any other script. Here it is needed because I want to use a function from the generated script human-readable-utils.js in the already existing script upload-utils.js. For more details see the Browserify options docs.

The final file/folder structure looks like that:

path-to-your-app/
├── js/                  # served static e.g. via express.static 
│   ├── ...
│   ├── human-readable-utils.js
│   └── upload-utils.js
├── browser-utils/       # not served, not deployed in production 
│   ├── ...
│   └── human-readable.js
├── app.js
└── package.json

Having the browserified client-side script we can change the existing script and use our exported function via the hr name prefix.

// File: js/upload-utils.js

function createUploadTableEntry(filename, size) {
    var uploadTable = document.getElementById('upload-table');
    var row = uploadTable.insertRow();
    var cellName = row.insertCell();
    cellName.textContent = filename;
    var cellSize = row.insertCell();
    cellSize.textContent = hr.getReadableSize(size); // use browserified function
};

The last thing to do is to make sure the browserified bundle is loaded before the existing code, so that it is known and can be used.

<script src="/js/human-readable-utils.js"></script>
<script src="/js/upload-utils.js"></script>

That’s it – we’re done. After reloading the page, the npm package is now used to create human-readable strings for the file sizes.

Improving the development process with watchify

So far so good. We’re now able to use our npm package in client-side JavaScript in the browser. One drawback is that we always have to browserify the script again whenever we change something in the underlying wrapper script. This can easily be forgotten…

To mitigate this issue we can use watchify during development. It’s like a watchdog for changed files that automatically triggers the browserify process. It takes the same arguments as the browserify command, except that -o is mandatory. First install it as a global package.

$ npm i -g watchify

Then run it in the main folder of the project, e.g. in a VS Code terminal.

$ watchify ./browser-utils/human-readable.js --standalone hr -v -o ./js/human-readable-utils.js
5583 bytes written to ./js/human-readable-utils.js (0.03 seconds) at 8:49:12

Watchify will now create a new browserified script each time the source is saved. To integrate it a bit more into your development and deployment process I suggest creating script entries in package.json for both: one-time browserify and continuous watchify.

...
"scripts": {
    "start": "node app.js",
    "build-utils": "browserify ./browser-utils/human-readable.js --standalone hr > ./js/human-readable-utils.js",
    "build-utils-dev": "watchify ./browser-utils/human-readable.js --standalone hr -v -o ./js/human-readable-utils.js"
  },
...

This allows you to access these commands via npm run build-utils and npm run build-utils-dev where needed or also via the VS Code task runner features (F1 –> task –> …).

Additionally you could set up a task in VS Code to run the watchify process every time you open a specific folder.

human-readable – create user-friendly strings for byte sizes in NodeJS
https://tsmx.net/human-readable/ | Sun, 27 Sep 2020

Lightweight npm package for creating human-readable strings from bytes (e.g. 17238 → 17.24 kB) or other sizes.

Many functions dealing with sizes of files, buffers, objects etc. return an amount of bytes. That’s good for further checks, calculations etc. but not for presenting these sizes to users. A classic example is the file size.

const fs = require('fs');

var stats = fs.statSync('/tmp/large.txt');
console.log('File Size: ' + stats.size);

// File Size: 10485760

To deal with this very common situation we provide the handy package human-readable. The example above then looks like this.

const fs = require('fs');
const hr = require('@tsmx/human-readable');

var stats = fs.statSync('/tmp/large.txt');
console.log('File Size: ' + hr.fromBytes(stats.size));

// File Size: 10.49 MB

One may now say: “Calculating the new number and adding ‘MB’ is quite easy… why should I import a package dependency for that?” At first sight, yes – but on closer examination there are some good reasons to outsource this functionality to a package, because it helps you to…

  • Not bloat your projects with self-implemented boilerplate code and keep them DRY.
  • Not waste your time on byte conversion details like decimal and binary units – focus on your business case.
  • Rely on an out-of-the-box, well-tested solution instead of having to take care of quality assurance for this task yourself.

YABCP – yet another byte conversion package?

There are quite a lot of packages dealing with size conversions on npm, and some of them are very good. So why this YABCP?

Investigating the available packages and the pros and cons of each, none seemed to fit 100% of the needs of our use cases, which are:

  • Converting sizes supporting both, decimal (SI) and binary (IEC) units.
  • Supporting manual conversion of sizes in both directions – from small to large and from large to small.
  • Ease of use for directly displaying the result to the user.

If your intentions and use-cases are similar, this package might be right for you… so go ahead.

Usage

Install the package as a dependency.

npm i @tsmx/human-readable --save

Then import and use the provided functions to create human-readable strings.

const hr = require('@tsmx/human-readable');
 
hr.fromBytes(17238);
// '17.24 kB'
 
hr.fromBytes(17238, { mode: 'IEC' });
// '16.83 KiB'
 
hr.fromBytes(17238, { numberOnly: true });
// '17.24'
 
hr.fromBytes(17238, { fixedPrecision: 1 });
// '17.2 kB'
 
hr.fromBytes(17238, { fullPrecision: true });
// '17.238 kB'
 
hr.fromTo(17, 'GBYTE', 'KBYTE');
// '17000000 kB'
 
hr.fromTo(17, 'GBYTE', 'KBYTE', { mode: 'IEC' });
// '17825792 KiB'
 
hr.availableSizes();
// [ 'BYTE', 'KBYTE', 'MBYTE', 'GBYTE', 'TBYTE', 'PBYTE' ]

Note: If you need to use the package in client-side JavaScript in the browser, refer to this guide.

Functionality and API

To cover the mentioned use-cases two functions are exported: fromBytes and fromTo. With fromBytes you can “automatically” create a readable string out of a given amount of bytes. The function takes care of choosing the appropriate target unit etc. If you want an explicit conversion from a size unit to another, use fromTo instead.

fromBytes(bytes, options)

Automatically converts the amount of bytes to a readable string and returns it. The switch to the next size is done when the value of the greater unit is at least 1. So fromBytes(999) results in 999 B whereas fromBytes(1000) returns 1 kB.

Parameter | Description
bytes | number of bytes to convert
options | optional JSON object containing all custom options for the conversion, can have the following properties:

mode
Type: String 
Default: none (use decimal mode)

Can be set to IEC to use binary conversion (factor 1,024) and units (KiB, MiB, …). If not set or set to any other value, decimal conversion (factor 1,000) and units (kB, MB, …) are used.

numberOnly
Type: Boolean 
Default: false

If set to true, conversion only returns the number and omits the unit. Overrides noWhitespace.

fixedPrecision
Type: Number

If set the returned number string is formatted to the given fixed decimal places. If not set, the default behaviour of the conversion is to use a dynamic number of decimal places from zero up to two.

fullPrecision
Type: Boolean 
Default: false

If set to true, the returned number value will be presented with full available decimal places. Overrides fixedPrecision.

noWhitespace
Type: Boolean 
Default: false

If set to true, the whitespace between the number and unit string is omitted. E.g. 10MB instead of 10 MB.

Without specifying custom options, the function has the following behaviour:

  • Use of decimal units (kB, MB, GB, …) with a conversion factor of 1,000 between each size.
  • Returns a string containing the resulting value followed by a whitespace and the corresponding decimal unit.
  • Uses a dynamic number of decimal places from 0 to 2 according to the calculated result.

Examples:

const hr = require('@tsmx/human-readable');

console.log(hr.fromBytes(17522071));
// '17.52 MB'

console.log(hr.fromBytes(17522071, { fixedPrecision: 0 }));
// '18 MB'

console.log(hr.fromBytes(17522071, { mode: 'IEC', fixedPrecision: 3, noWhitespace: true }));
// '16.710MiB'

console.log(hr.fromBytes(17522071, { mode: 'IEC', fullPrecision: true }));
// '16.710349082946777 MiB'

fromTo(value, fromSize, toSize, options)

Converts the value, assuming it is given in fromSize, to toSize. Conversion can be done from a smaller to a greater size (e.g. kB to MB) or from a greater to a smaller one (e.g. GB to MB). Default behaviour (unit string, precision) and available options are the same as for fromBytes.

Parameter | Description
value | value to convert
fromSize | the size the value has, must be one out of availableSizes()
toSize | the size the value should be converted to, must be one out of availableSizes()
options | optional object containing all custom options for the conversion, see fromBytes

Examples:

const hr = require('@tsmx/human-readable');

console.log(hr.fromTo(3, 'MBYTE', 'KBYTE', { mode: 'IEC' }));
// '3072 KiB'

console.log(hr.fromTo(793, 'MBYTE', 'GBYTE', { numberOnly: true }));
// '0.79'

availableSizes()

Returns an array of strings representing the available sizes for manual conversion with fromTo.

Array Entry | decimal unit (SI) | binary unit (IEC)
BYTE | Byte (B) | Byte (B)
KBYTE | Kilobyte (kB) | Kibibyte (KiB)
MBYTE | Megabyte (MB) | Mebibyte (MiB)
GBYTE | Gigabyte (GB) | Gibibyte (GiB)
TBYTE | Terabyte (TB) | Tebibyte (TiB)
PBYTE | Petabyte (PB) | Pebibyte (PiB)

Testing

The package contains a set of unit tests with a good overall coverage of the code. To run them, install or clone the package and run the tests via npm:

npm run test

To output the code coverage run:

npm run test-coverage

Also check out the current coverage stats at Coveralls.

secure-config – easy and secure NodeJS configuration management
https://tsmx.net/secure-config/ | Thu, 27 Aug 2020

A convenient npm package to handle multi-environment NodeJS configurations with encrypted secrets and HMAC validation. Works with CommonJS and ESM/ECMAScript.

With secure-config we provide a lean and secure configuration management for NodeJS.

  • Lean: following the KISS principle – minimalistic and focused feature set, easy to use
  • Secure: state-of-the-art encryption and data integrity for sensitive configuration data

Just follow some basic guidelines like a common naming convention and you are in the game!

Note: This documentation refers to the latest version of secure-config, for docs on older versions please look here.

Benefits

  • Easy integration with just a few lines of code. Works with CommonJS as well as ESM/ECMAScript.
  • Strong AES-256-CBC encryption. Secured configuration is still a valid, legible JSON.
  • HMAC validation of your configuration to ensure data integrity.
  • Configuration management for different environments (dev, test, prod…)
  • No need to use 3rd party secret stores, no vendor or platform lock-in.
  • Works perfectly on-premise, with Docker, cloud platforms like Google AppEngine etc.

Encryption and HMAC validation of your configurations are totally optional. You can also use secure-config just to manage plain, unencrypted JSON configurations for different environments. Also you can combine different security features for the different environments, e.g. encryption-only for TEST and encryption plus HMAC validation for PROD.

Usage

Suppose you have the following JSON configuration file config.json with secret information about your database connection…

{ 
  "database": {
    "host": "127.0.0.1", 
    "user": "MySecretDbUser",
    "pass": "MySecretDbPass" 
  } 
} 

Step 1 – Install secure-config-tool [optional]

[tsmx@localhost ]$ npm i -g @tsmx/secure-config-tool

Note: In this explanation I will use the provided tool secure-config-tool. You could do all of this without the tool as it only uses standard NodeJS crypto, but the tool makes it a lot easier, so…

Step 2 – Get a secret key and export it.

[tsmx@localhost ]$ secure-config-tool genkey
df9ed9002b...
[tsmx@localhost ]$ export CONFIG_ENCRYPTION_KEY=df9ed9002b...

Step 3 – Encrypt your configuration JSON values and generate a new, secure configuration file.

[tsmx@localhost ]$ secure-config-tool create config.json > conf/config.json
[tsmx@localhost ]$ cat conf/config.json
{ 
  "database": {
    "host": "127.0.0.1", 
    "user": "ENCRYPTED|50ceed2f97223100fbdf842ecbd4541f|df9ed9002bfc956eb14b1d2f8d960a11",
    "pass": "ENCRYPTED|8fbf6ded36bcb15bd4734b3dc78f2890|7463b2ea8ed2c8d71272ac2e41761a35" 
  },
  "__hmac": "3023eb8cf76894c0d5c7f893819916d876f98f781f8944b77e87257ef77c1adf"
}

The generated file should be in the conf/ subfolder of your app. For details see naming conventions.

Step 4 – Use your configuration in the code.

// CommonJS
const conf = require('@tsmx/secure-config')();

// ESM
import secureConfig from '@tsmx/secure-config';
const conf = secureConfig();

function MyFunc() {
  let dbHost = conf.database.host; // = '127.0.0.1'
  let dbUser = conf.database.user; // = 'MySecretDbUser'
  let dbPass = conf.database.pass; // = 'MySecretDbPass'
  //...
}

To change the default behaviour or make use of more advanced features like HMAC validation you can pass custom options to the function returned by require/import.

Step 5 – Run your app using the new encrypted configuration.

[tsmx@localhost ]$ export CONFIG_ENCRYPTION_KEY=df9ed9002b...
[tsmx@localhost ]$ node app.js

File name and directory conventions

You can have multiple configuration files for different environments or stages. They are distinguished by the environment variable NODE_ENV. The basic configuration file name is config.json if this environment variable is not present. If it is present, a configuration file with the name config-[NODE_ENV].json is used. An exception will be thrown if no configuration file is found.

To change the default configuration file name or loading multiple configuration files you can pass the prefix option.

By default, all configuration files are expected to be located in a conf/ directory of the current running app, meaning a direct subdirectory of the current working directory (CWD/conf/). To overwrite this behaviour, you can pass the directory option.

Example project structure

Let’s assume you are working on a typical project where you are developing locally, using automated tests and finally will deploy to a production environment. From a configuration perspective, you’ll have these three stages…

Development stage

  • NODE_ENV: not set
  • Configuration file: conf/config.json

Production stage

  • NODE_ENV: production
  • Configuration file: conf/config-production.json

Test stage, e.g. Jest

  • NODE_ENV: test
  • Configuration file: conf/config-test.json

The final project structure would look like that:

path-to-your-app/
├── conf/
│       ├── config.json
│       ├── config-production.json
│       └── config-test.json
├── app.js
└── package.json

Note: it is a very common practice that NODE_ENV is set to test when running automated tests. Some test libraries like Jest will do that automatically.

Custom options

To retrieve a configuration using all default values and without advanced features, you simply invoke a function after the require/import statement without any argument (empty set of parenthesis after require or simple method call after import).

// CommonJS
const conf = require('@tsmx/secure-config')();

// ESM
import secureConfig from '@tsmx/secure-config';
const conf = secureConfig();

To make use of the more advanced features and customize default values, you can pass an options object to this function call.

const confOptions = {
  keyVariable: 'CUSTOM_CONFIG_KEY',
  hmacValidation: true, 
  hmacProperty: '_signature',
  prefix: 'myconf'
}

// CommonJS
const conf = require('@tsmx/secure-config')(confOptions);

// ESM
import secureConfig from '@tsmx/secure-config';
const conf = secureConfig(confOptions);

The following options are available.

keyVariable

Type: String

Default: CONFIG_ENCRYPTION_KEY

The name of the environment variable containing the key for decrypting configuration values and validating the HMAC. See also options on how to pass the key.

hmacValidation

Type: Boolean 

Default: false

Specifies if the loaded configuration should be validated against a given HMAC. If set to true, secure-config will validate the HMAC of the decrypted configuration content against a given HMAC using the current key. If the validation fails, an exception will be thrown. If it succeeds, the decrypted configuration will be returned.

The given HMAC is retrieved from a configuration file property with the name of hmacProperty, e.g.:

{
  "database": {
    "host": "127.0.0.1",
    "user": "ENCRYPTED|50ceed2f97223100fbdf842ecbd4541f|df9ed9002bfc956eb14b1d2f8d960a11",
    "pass": "ENCRYPTED|8fbf6ded36bcb15bd4734b3dc78f2890|7463b2ea8ed2c8d71272ac2e41761a35"
  },
  "__hmac": "3023eb8cf76894c0d5c7f893819916d876f98f781f8944b77e87257ef77c1adf"
}

Enabling this option is recommended for production environments as it adds more security to your configuration management ensuring the loaded configuration is safe against tampering. Unwanted modifications of any – even unencrypted – entries in your configuration would cause the HMAC validation to fail and throw the error HMAC validation failed.

Please ensure that your stored configuration files have an appropriate HMAC property before enabling this option. Otherwise loading the configuration would always fail. secure-config-tool adds the HMAC by default when creating secured configuration files.

To get more information on how the HMAC creation & validation works internally, please refer to the package object-hmac which is used for that. The HMAC value is created out of the entire configuration data before encryption / after decryption. Also take a look at the short description of how secure-config works under the hood.
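To illustrate the general idea, here is a simplified sketch using plain NodeJS crypto – an assumption for illustration only, not the exact serialization the object-hmac package uses:

const crypto = require('crypto');

// simplified sketch: HMAC-SHA256 over the serialized configuration (without the HMAC property itself),
// keyed with the configuration key – illustration only, see object-hmac for the actual implementation
function createConfigHmac(configWithoutHmac, key) {
  return crypto.createHmac('sha256', key)
    .update(JSON.stringify(configWithoutHmac))
    .digest('hex');
}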

hmacProperty

Type: String 

Default: __hmac

The name of the HMAC property in a configuration file to be validated against. Only used when hmacValidation is set to true.

Example configuration file using a custom HMAC property name:

{
  "database": {
    "host": "127.0.0.1",
    "user": "ENCRYPTED|50ceed2f97223100fbdf842ecbd4541f|df9ed9002bfc956eb14b1d2f8d960a11",
    "pass": "ENCRYPTED|8fbf6ded36bcb15bd4734b3dc78f2890|7463b2ea8ed2c8d71272ac2e41761a35"
  },
  "_signature": "3023eb8cf76894c0d5c7f893819916d876f98f781f8944b77e87257ef77c1adf"
}

Loading the configuration with HMAC validation enabled:

const confOptions = {
    hmacValidation: true, 
    hmacProperty: '_signature'
}

const conf = require('@tsmx/secure-config')(confOptions);

directory

Type: String

Default: ./conf/

Use this parameter to change the directory where the configuration files should be loaded from.

E.g. if the files are located under /var/myapp/configurations:

const confOptions = {
    directory: '/var/myapp/configurations'
}
const conf = require('@tsmx/secure-config')(confOptions);

This option can be combined with the prefix option to control the configuration filenames within the directory. Naming conventions according to NODE_ENV are applied as normal.

Hint: Setting a relative path within the current running app or a unit test can easily be achieved by using path.join with process.cwd, e.g. if the files are located in ./test/configurations.

const confOptions = {
    directory: path.join(process.cwd(), 'test/configurations')
}

prefix

Type: String 

Default: config

Use this parameter to change the default file name pattern from config-[NODE_ENV].json to [prefix]-[NODE_ENV].json for loading files with deviating names or additional ones. The value of NODE_ENV will be evaluated as described in the naming conventions.

To load multiple configurations, use the following pattern in your code.

const secureConf = require('@tsmx/secure-config');
const config = secureConf();
const myconf = secureConf({ prefix: 'myconf', keyVariable: 'MYCONF_KEY' });

This example will load the default config.json using the key from the environment variable CONFIG_ENCRYPTION_KEY as well as the additional myconf.json using the key from MYCONF_KEY. Note that different configurations should use different encryption keys.

Depending on the value of NODE_ENV the following configuration files will be loaded in this example.

Value of NODE_ENV | Variable | Filename
not set | config | conf/config.json
not set | myconf | conf/myconf.json
production | config | conf/config-production.json
production | myconf | conf/myconf-production.json
test | config | conf/config-test.json
test | myconf | conf/myconf-test.json

Injecting the decryption key

The key for decrypting the encrypted values is derived from an environment variable named CONFIG_ENCRYPTION_KEY. You can use the keyVariable option to change the name if you need to.

You can set the environment variable whatever way is most suitable. Here are some examples for common use-cases.

Set/export directly in the command line.

export CONFIG_ENCRYPTION_KEY=0123456789qwertzuiopasdfghjklyxc

Set the key in your launch.json configuration for developing/debugging.

... 
    "env": { "CONFIG_ENCRYPTION_KEY": "0123456789qwertzuiopasdfghjklyxc" },
...

The best and most secure way of passing the decryption key to a cloud function is using GCP Secret Manager. Create a new secret for the key (e.g. CONFIG_KEY) there and pass it as an environment variable using the --set-secrets option.

gcloud functions deploy my-cloud-function \
--source=. \
--entry-point=myFunction \
--trigger-http \
--set-secrets=CONFIG_ENCRYPTION_KEY=projects/100374066341/secrets/CONFIG_KEY:latest

Replace the number in the key reference path with your GCP project number. For more detailed instructions take a look at the article about secure configuration management for a cloud function.

Set the key in an environment block in app.yml.

... 
env_variables:
  CONFIG_ENCRYPTION_KEY: "0123456789qwertzuiopasdfghjklyxc"
...

Note: Make sure not to include the production app.yml deployment descriptor in your code repository so you don’t expose the production key. In general, only your deployment managers should have access to this file as it also contains other sensitive and cost-relevant information like instance sizes and scaling options.

Pass the key to the docker run command.

docker run --env CONFIG_ENCRYPTION_KEY=0123456789qwertzuiopasdfghjklyxc MYIMAGE

There’s a complete docker test example available on GitHub.

For testing with Jest I recommend creating a test key and setting it globally for all tests in jest.config.js.

process.env['CONFIG_ENCRYPTION_KEY'] = '0123456789qwertzuiopasdfghjklyxc';
module.exports = { testEnvironment: 'node' };

For more details also have a look at how to configure Jest.

If your NodeJS app using secure-config should run as a systemd service, set the key in the [Service] section of your service file.

...
[Service]

Environment=CONFIG_ENCRYPTION_KEY=0123456789qwertzuiopasdfghjklyxc
WorkingDirectory=/path/to/your/app
...

Note: You should also set the WorkingDirectory in the service configuration so that secure-config can find the configuration files in the conf/ subfolder correctly.

The key length must be 32 bytes! The value set in CONFIG_ENCRYPTION_KEY has to be:

  • a string of 32 characters length, or
  • a hexadecimal value of 64 characters length (= 32 bytes)

Otherwise an error will be thrown.

Examples of valid key strings:

  • 32 byte string: MySecretConfigurationKey-123$%&/
  • 32 byte hex value: 9af7d400be4705147dc724db25bfd2513aa11d6013d7bf7bdb2bfe050593bd0f

Different keys for each configuration environment are strongly recommended.
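The package derives the key internally, but to illustrate the two accepted forms, here is a small sketch of how such a value could be turned into a 32-byte key buffer – illustration only, not the package’s actual code:

// sketch: derive a 32-byte key buffer from CONFIG_ENCRYPTION_KEY (illustration, not the package's internals)
const key = process.env.CONFIG_ENCRYPTION_KEY;
const keyBuffer = key.length === 64
  ? Buffer.from(key, 'hex')   // 64 hexadecimal characters = 32 bytes
  : Buffer.from(key, 'utf8'); // plain string of 32 characters
if (keyBuffer.length !== 32) throw new Error('CONFIG_ENCRYPTION_KEY must resolve to 32 bytes');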

Working with secure-config-tool

For an easy handling of secure configuration files it is recommended to use the accompanying secure-config-tool. It is an easy command line tool covering all necessary workflows.

To install the tool, simply run the following command with sufficient rights:

[tsmx@localhost ]$ npm i -g @tsmx/secure-config-tool

Please note that using secure-config-tool is not a must. It is absolutely possible to do everything on your own – for details refer to the implementation available in the GitHub repo. All functions used in the tool are either NodeJS standard or open source.

Obtaining a secret key

First thing you have to do when securing your configuration files is to get a key for encryption and/or HMAC generation. For that, secure-config-tool offers the genkey option.

[tsmx@localhost ]$ secure-config-tool genkey
a0e9a2d6447adf308cabeb64a075f6f3a45041be956c3f7a94de8a28c0df695e

This function uses the NodeJS standard function randomBytes to generate a cryptographically strong 32-byte key, printed as a 64-character hex string.
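If you prefer not to use the tool, the equivalent with plain NodeJS looks like this:

// generate a cryptographically strong 32-byte key as a 64-character hex string
const crypto = require('crypto');
console.log(crypto.randomBytes(32).toString('hex'));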

Keep the generated key secret. Separate keys for different environments are strongly recommended.

For every operation dealing with encryption/decryption or HMAC’s, the key must be present in an environment variable CONFIG_ENCRYPTION_KEY.

[tsmx@localhost ]$ export CONFIG_ENCRYPTION_KEY=a0e9a2d6447adf308cabeb64a075f6f3a45041be956c3f7a94de8a28c0df695e

As a best practice you should save the generated key into a file and export it directly from there.

[tsmx@localhost ]$ secure-config-tool genkey > prod-key.txt
[tsmx@localhost ]$ export CONFIG_ENCRYPTION_KEY=$(cat prod-key.txt)

Initial creation of a secure configuration file

To easily create a secured configuration out of an existing JSON file, secure-config-tool provides the create option. Using the default settings, this will take your file, encrypt the values of properties matching the terms (user, pass, token) and add an HMAC.

[tsmx@localhost ]$ cat config-plain.json
{
  "database": {
    "host": "localhost",
    "username": "secret-db-user",
    "password": "Test123",
    "port": 1521
  },
  "settings": {
    "https": true,
    "attempts": 5
  }
}
[tsmx@localhost ]$ secure-config-tool create config-plain.json > config-production.json
[tsmx@localhost ]$ cat config-production.json
{
  "database": {
    "host": "localhost",
    "username": "ENCRYPTED|143456e2d2e401bde5d63224a88d44bb|312d01644cc65161f77ce2ed47458a60",
    "password": "ENCRYPTED|2bca8b3b444e9061e57f4e288ef8aecd|99453a3e55cf621f492c474c8a1c0c81",
    "port": 1521
  },
  "settings": {
    "https": true,
    "attempts": 5
  },
  "__hmac": "9d68c2082f2e5fc61268a8171872ae1c15bf15b6a6ab0f82e27e40f19cc156bd"
}

Now you’re ready to use the secured configuration file. For more details on how to customize the file generation, please refer to the docs for create.

Adding new encrypted properties

As you work on your project, you will have to add further sensitive properties to your configuration file. Let’s assume you want to add a property schema with a value of secret-db-schema to the database section in the example above.

For that, use secure-config-tool’s encrypt function for generating the encrypted value and put it into the config file.

[tsmx@localhost ]$ secure-config-tool encrypt "secret-db-schema"
ENCRYPTED|56c1cd4685e1405607b28a84fa083881|bf052def52a7544cfe9b50badd9c907c2b7b399fade2d86b3b9715baa79e6efb
[tsmx@localhost ]$ vi config-production.json # copy and paste the new property value
[tsmx@localhost ]$ cat config-production.json
{
  "database": {
    "host": "localhost",
    "username": "ENCRYPTED|143456e2d2e401bde5d63224a88d44bb|312d01644cc65161f77ce2ed47458a60",
    "password": "ENCRYPTED|2bca8b3b444e9061e57f4e288ef8aecd|99453a3e55cf621f492c474c8a1c0c81",
    "schema": "ENCRYPTED|56c1cd4685e1405607b28a84fa083881|bf052def52a7544cfe9b50badd9c907c2b7b399fade2d86b3b9715baa79e6efb",
    "port": 1521
  },
  "settings": {
    "https": true,
    "attempts": 5
  },
  "__hmac": "9d68c2082f2e5fc61268a8171872ae1c15bf15b6a6ab0f82e27e40f19cc156bd"
}

The new encrypted property is now ready to use.

Please note that after any change to a configuration file, an already present HMAC property – like in our example above – will be invalid until it is re-calculated. To do so, just read on…

Updating the HMAC

If you make use of the HMAC validation, be aware that any change to the content of a configuration file will change its HMAC value and therefore invalidate an existing HMAC property.

For re-calculation of the HMAC to make it valid again, simply use the update-hmac function.

[tsmx@localhost ]$ secure-config-tool update-hmac -o config-production.json

This command will take your configuration file and re-calculate the HMAC property. Any other value, especially encrypted values, will stay untouched.

Please also refer to the update-hmac docs and the description of how the HMAC validation is working.

Test an existing configuration file

With the test function, secure-config-tool offers an easy way to test if a configuration file is valid with regards to:

  • decrypting the content
  • validating the HMAC

To test a file, simply call…

[tsmx@localhost ]$ secure-config-tool test config-production.json
Decryption: PASSED
HMAC:       PASSED

If you are using a wrong key or encrypted entries are malformed, you’ll get an error.

[tsmx@localhost ]$ secure-config-tool test config-production.json
Decryption failed. Please check that the right key is used and the encrypted secret is valid and has the form "ENCRYPTED|IV|DATA"
See the docs under: https://github.com/tsmx/secure-config
Decryption: FAILED
HMAC:       not tested

If you get this error, please check if you are using the right key and if all encrypted entries are valid according to the specification. If necessary, you can re-create them with the create or encrypt function.

For more details, please refer to the test docs.

To check a single encrypted entry, just copy it and use the decrypt function. It will print out the decrypted value.

[tsmx@localhost ]$ secure-config-tool decrypt "ENCRYPTED|143456e2d2e401bde5d63224a88d44bb|312d01644cc65161f77ce2ed47458a60"
secret-db-user

Encrypted configuration entries

Specification

Encrypted configuration entries must always have the form: ENCRYPTED | IV | DATA.

Part | Description
ENCRYPTED | The prefix ENCRYPTED used to identify configuration values that must be decrypted.
IV | The cipher's initialization vector (IV) that was used for encryption. Hexadecimal value.
DATA | The AES-256-CBC encrypted value. Hexadecimal value.

Example: "ENCRYPTED|aebc07dd97af3f857cb585b4c956661b|ea18ce1feaa5b8cf4ecb471b9b4401da"

Generation

In general, it is recommended to use secure-config-tool to create and maintain your configuration files. Have a look here for some typical workflows.

Anyway, it’s totally fine to use the standard crypto functions from NodeJS. With the following snippet you can create valid encrypted entries for your configuration files:

const crypto = require('crypto');
const algorithm = 'aes-256-cbc';

function encrypt(value) { 
  let iv = crypto.randomBytes(16); 
  let key = Buffer.from('YOUR_KEY_HERE'); // must resolve to 32 bytes – use Buffer.from(hexKey, 'hex') for a hex key
  let cipher = crypto.createCipheriv(algorithm, key, iv); 
  let encrypted = cipher.update(value); 
  encrypted = Buffer.concat([encrypted, cipher.final()]); 
  return 'ENCRYPTED|' + iv.toString('hex') + '|' + encrypted.toString('hex');
}
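For completeness, here is a matching decryption sketch for the ENCRYPTED|IV|DATA format. It roughly mirrors what secure-config does internally when loading a configuration; the helper name decrypt is ours:

// decrypt an entry of the form 'ENCRYPTED|<iv hex>|<data hex>' – sketch, not the package's actual source
function decrypt(entry) {
  const [, ivHex, dataHex] = entry.split('|');
  const key = Buffer.from('YOUR_KEY_HERE'); // the same 32-byte key used for encryption
  const decipher = crypto.createDecipheriv(algorithm, key, Buffer.from(ivHex, 'hex'));
  const decrypted = Buffer.concat([decipher.update(Buffer.from(dataHex, 'hex')), decipher.final()]);
  return decrypted.toString();
}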

Under the hood

This is how secure-config works: when importing and invoking the package via require('@tsmx/secure-config')() it…

  1. Retrieves the decryption key out of the environment variable CONFIG_ENCRYPTION_KEY
  2. Loads the applicable JSON configuration file.
  3. Recursively iterates the loaded JSON and decrypts all encrypted entries that were found (see the sketch after this list).
  4. Optionally, checks the HMAC of the decrypted configuration object.
  5. Returns a simple JSON object with all secrets decrypted.
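Step 3 could be sketched like this – again just an illustration and not the actual package source, reusing the decrypt helper sketched in the section on encrypted entries above:

// simplified sketch of step 3: walk the configuration object and decrypt all 'ENCRYPTED|...' values
function decryptDeep(node) {
  for (const [key, value] of Object.entries(node)) {
    if (typeof value === 'string' && value.startsWith('ENCRYPTED|')) {
      node[key] = decrypt(value); // decrypt() as sketched above
    } else if (value && typeof value === 'object') {
      decryptDeep(value); // recurse into sub-objects (and arrays of objects)
    }
  }
  return node;
}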

Important notes / good to know:

  • Decryption is only done in-memory. Decrypted values are never persisted anywhere.
  • The CONFIG_ENCRYPTION_KEY environment variable is always required, even if the config file doesn’t contain encrypted secrets.
  • Recursion depth is not limited, encrypted entries can be used at every object level in the JSON config file.
  • HMAC validation is optional and can be used without any encryption, e.g. if you don’t have sensitive data but want to ensure data integrity anyways.
  • At the time of writing only decryption of simple values is supported (no arrays or complete sub-objects, but properties of sub-objects work). This covers most use-cases as sensitive data like passwords are normally plain, unstructured values.

Sample projects

GitHub

To get familiar with the use of secure-config I provide a secure-config-test project on GitHub.

Docker Hub

For trying it out with Docker and Kubernetes, a public Docker image is also available on Docker Hub.

Unit tests

The package contains a comprehensive set of unit tests with a very high coverage of the code. To run them, install or clone the package and run the tests via npm:

npm run test

To output the code coverage run:

npm run test-coverage

Also check out the current coverage stats at Coveralls.

Changelog

Version 2.1.0 (13.12.2023)

  • Support for encrypted properties of objects in arrays added, e.g.
    { configArray: [ { key: 'ENCRYPTED|...' }, { key: 'ENCRYPTED|... ' } ] }

Version 2.2.0 (29.03.2024)

  • Support for loading multiple configurations with new option prefix added.

Version 2.3.0 (05.09.2024)

  • Support for custom configuration file path with new option directory added.

Older versions

This article refers to the current version of secure-config. Documentations for older versions can be found here.
