NodeJS – tsmx – https://tsmx.net – pragmatic IT – Tue, 20 Aug 2024

Migrating eslintrc.json to eslint.config.js in a CommonJS project
https://tsmx.net/migrating-eslintrc-to-flat-config-in-commonjs/ – Mon, 19 Aug 2024

A practical end-to-end guide for migrating an existing .eslintrc.json config (ESLint v8 and earlier) to the new flat file config eslint.config.js (ESLint v9 and later) in a CommonJS Node.js project, including linting of unit tests with Jest.

This article comes along with a public GitHub example repository enabling you to comprehend everything and easily switch between before/after migration state.

Starting point: existing eslintrc configuration

In this migration guide we’ll use a fairly standard ESLint configuration that should cover basic linting for the vast majority of your projects.

  • Configure ESLint for use with Node.js and CommonJS using a specified ECMA version
  • Ensure a proper linting of Jest tests
  • Use a predefined set of linting rules as a starting point
  • Ensure correct linting of basics like indentation, unused variables, use of globals, semicolons and quote style

So far, the usual way to configure ESLint in Node.js was to place an .eslintrc.json file in the root folder of the project. The below .eslintrc.json covers all the mentioned points and serves as the basis for this guide.

{
    "env": {
        "node": true,
        "commonjs": true,
        "es6": true,
        "jest": true
    },
    "extends": "eslint:recommended",
    "globals": {
        "Atomics": "readonly",
        "SharedArrayBuffer": "readonly"
    },
    "parserOptions": {
        "ecmaVersion": 2018
    },
    "rules": {
        "indent": [
            "error",
            4,
            {
                "SwitchCase": 1
            }
        ],
        "quotes": [
            "error",
            "single"
        ],
        "semi": [
            "error",
            "always"
        ],
        "no-unused-vars": [
            2,
            {
                "args": "after-used",
                "argsIgnorePattern": "^_"
            }
        ]
    }
}

Additionally, you might have a .eslintignore file placed in the root folder of the project to exclude files and paths from linting, e.g. to exclude the two directories conf and coverage – like so:

conf/
coverage/

Errors after upgrading ESLint to v9

Having this configuration in place you’ll notice that your environment, in this case VSCode, comes up with an error after upgrading to ESLint v9. The highlighting of linting errors and warnings also no longer works.

eslint-config-error-vscode

Having a look in the ESLint output quickly gives you the reason why.

[Info  - 20:51:44] ESLint server is starting.
[Info  - 20:51:44] ESLint server running in node v20.14.0
[Info  - 20:51:44] ESLint server is running.
[Info  - 20:51:46] ESLint library loaded from: /home/tsmx/projects/weather-tools/node_modules/eslint/lib/api.js
(node:4117) ESLintIgnoreWarning: The ".eslintignore" file is no longer supported. Switch to using the "ignores" property in "eslint.config.js": https://eslint.org/docs/latest/use/configure/migration-guide#ignoring-files
(Use `code --trace-warnings ...` to show where the warning was created)
[Error - 20:51:46] Calculating config file for file:///home/tsmx/projects/weather-tools/weather-tools.js) failed.
Error: Could not find config file.

Starting with ESLint v9.0.0, the default configuration format was changed to the flat config, and .eslintrc.json as well as .eslintignore became deprecated. Although it’s still possible to continue using .eslintrc.json, it’s recommended to switch to the new file format to be future-proof.

Migrating to the new flat file configuration

For a CommonJS project, the new flat file configuration is a normal JavaScript file called eslint.config.js which is placed in the root folder and simply exports an array of ESLint configuration objects via module.exports.

Installing needed dev dependencies

The flat file config doesn’t contain an env section anymore that lets you specify that ESLint is running in Node.js and enable Jest features for correct linting of unit test files. Also, the recommended ruleset has been moved to its own module.

To include all these features in the new ESLint v9 configuration, you’ll need to install the following dependencies in your project.

  • @eslint/js for using the recommended ruleset as a basis
  • eslint-plugin-jest to enable proper linting of Jest test files
  • globals to make ESLint aware of common global variables for Node.js and Jest so they are not marked as undefined

As these dependencies are only used for ESLint, you should install them – like ESLint itself – as dev dependencies in your Node.js project.

# npm install @eslint/js eslint-plugin-jest globals --save-dev

Creating eslint.config.js

Next, in the root folder of your Node.js project, create an eslint.config.js file with the following contents. This will lead to an almost identical, yet more customizable, linting behaviour as the old .eslintrc.json did.

const { configs } = require('@eslint/js');
const jest = require('eslint-plugin-jest');
const globals = require('globals');

module.exports = [
    // global ignores - must be in a config object of its own to apply globally
    {
        ignores: ['conf/', 'coverage/']
    },
    configs.recommended,
    {
        languageOptions: {
            ecmaVersion: 2018,
            sourceType: 'commonjs',
            globals: { 
                ...globals.node, 
                ...globals.jest, 
                Atomics: 'readonly', 
                SharedArrayBuffer: 'readonly' 
            }
        },
        rules: {
            semi: 'error',
            quotes: ['error', 'single'],
            indent: ['error', 4, { 'SwitchCase': 1 }],
            'no-unused-vars': [
                'warn',
                {
                    'varsIgnorePattern': '^_',
                    'args': 'after-used',
                    'argsIgnorePattern': '^_'
                }
            ]
        }
    },
    {
        languageOptions: {
            globals: { ...globals.jest }
        },
        files: ['test/*.test.js'],
        ...jest.configs['flat/recommended'],
        rules: {
            ...jest.configs['flat/recommended'].rules
        }
    }
];

That’s already it. Linting should now work again as expected and you can safely delete the old .eslintrc.json as well as .eslintignore from your project.
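Optionally (not required for the migration itself), you can add a lint script to your package.json so the new flat config can be run from the command line with npm run lint:

```json
{
  "scripts": {
    "lint": "eslint ."
  }
}
```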

Breakdown of the new flat file configuration

As noted before, the flat file configuration is simply an exported array of ESLint configuration objects. Based on our eslintrc.json we want to migrate, the configuration breaks down into three parts.

Part 1: Importing recommended ESLint ruleset

First element of the configuration array should be the recommended ruleset that is delivered by the @eslint/js package. This line is the replacement for the "extends": "eslint:recommended" entry in the old eslintrc.

configs.recommended

Part 2: Custom rules for normal JavaScript code files and files to be ignored

The next entries in the configuration array hold the patterns of all files/folders that should be ignored by ESLint as well as all our own custom rules and properties for normal JavaScript code files. Note that the ignores patterns get a config object of their own; only then do they act as global ignores replacing .eslintignore.

{
    ignores: ['conf/', 'coverage/']
},
{
    languageOptions: {
        ecmaVersion: 2018,
        sourceType: 'commonjs',
        globals: { 
            ...globals.node, 
            ...globals.jest, 
            Atomics: 'readonly', 
            SharedArrayBuffer: 'readonly' 
        }
    },
    rules: {
        semi: 'error',
        quotes: ['error', 'single'],
        indent: ['error', 4, { 'SwitchCase': 1 }],
        'no-unused-vars': [
            'warn',
            {
                'varsIgnorePattern': '^_',
                'args': 'after-used',
                'argsIgnorePattern': '^_'
            }
        ]
    }
}

This section is quite self-explanatory when compared to the old eslintrc configuration. The key differences are:

  • There is no env section anymore, most of that configuration is now located under languageOptions.
  • Note that in the globals object all node and jest globals were added explicitly by using the corresponding sets provided by the globals package. This ensures that common Node.js globals like process and Jest globals like expect are not treated as undefined variables. The latter makes sense if you create some kind of test-utils file which uses Jest commands but is not a unit test file itself. See the example GitHub repository for such an example (/tests/test-utils.js).
  • There is now an ignores property that takes an array of file/folder patterns to be ignored by ESLint. The pattern syntax is the same as in the now obsolete .eslintignore. Note that to act as a full replacement for .eslintignore, the ignores patterns must be placed in a config object of their own; combined with other keys, they would only exclude files from that single config object. For more details see ignoring files.

The linting rules themselves are largely unchanged in the new configuration style.

Part 3: Rules for linting Jest test files

The last configuration object needed is for correct linting of Jest tests. The very simple "jest": true option doesn’t exist anymore. Instead, we’ll need to import the eslint-plugin-jest package and use the recommended rules it provides. In the example, all Jest test files are located in the project’s test/ folder and have the common extension .test.js.

The resulting configuration object for our eslint.config.js is:

{
    languageOptions: {
        globals: { ...globals.jest }
    },
    files: ['test/*.test.js'],
    ...jest.configs['flat/recommended'],
    rules: {
        ...jest.configs['flat/recommended'].rules
    }
}

This ensures a proper linting of all Jest tests located under test/. If you have test files located in other/additional locations, simply add them to the files property.

Note: If you use Node.js globals like process in your Jest tests, you should add ...globals.node to the globals property. This prevents ESLint from reporting those globals as undefined variables.
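Applying that note, the test file config object would look like the following sketch. The explicit languageOptions is placed after the plugin spread here, since the spread may itself contain a languageOptions key that would otherwise overwrite it:

```js
{
    files: ['test/*.test.js'],
    ...jest.configs['flat/recommended'],
    languageOptions: {
        globals: { ...globals.jest, ...globals.node }
    },
    rules: {
        ...jest.configs['flat/recommended'].rules
    }
}
```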

Example project on GitHub

To see a practical working example of a migration before and after, clone the eslintrc-to-flatfile GitHub repository and browse through the branches eslint-v8 and eslint-v9. The code files contain several example errors in formatting, quoting etc. that should be highlighted as linting errors or warnings. For details see the code comments.

# clone the example project
git clone https://github.com/tsmx/eslintrc-to-flatfile.git

# check out/switch to the original ESLint v8 branch using eslintrc.json
git checkout eslint-v8
npm install

# check out/switch to the migrated ESLint v9 branch using new eslint.config.js
git checkout eslint-v9
npm install

Built and signed on GitHub Actions – publishing npm packages with provenance statements
https://tsmx.net/npmjs-built-and-signed-on-github-actions/ – Fri, 17 Nov 2023

A quick guide on how to set up a GitHub Actions workflow to publish an npm package including the provenance badge and section on npmjs.com.

So far, you may have published your npm packages by simply invoking npm publish. Using GitHub Actions provides a more elegant way and also gets you this nice green checkmark badge behind the version number…

npm-version-provenance

…as well as the provenance section for your packages at npmjs.com…

npm-package-provenance

This provides an extra level of security by giving evidence of your package’s origin via sigstore. In this article we’ll show how to set up a simple GitHub Actions workflow to publish your package including the signed provenance details.

Prerequisites

To follow along with this guide, you should have:

  • An existing package published at npmjs.com
  • The package’s source code in a GitHub repository
  • The GitHub CLI installed and working

Generating a token for publishing packages on npmjs.com

First, you will need an access token for npmjs.com. For that, log in at npmjs.com and head over to the Access Tokens section of your account. Then create a Classic Token of type Automation with any name, e.g. gh-actions-publish. The token value is shown once on creation and never again, so make sure you grab it and save it in a secure place. Afterwards, you should see the newly created token in your account’s token list, like so.

npm-automation-token

Using this token will enable your GitHub actions workflow to publish new package versions including bypassing 2FA.

Storing the npm token on GitHub

Next, store the generated npm token value as a secret in your GitHub repository. For that, head over to Settings -> Secrets and variables and press New repository secret. Enter a name and the token value.

github-npm-token-secret-v2

Here the created secret has the name NPM_TOKEN. Having that, it can be referenced in GitHub actions workflow definitions by ${{ secrets.NPM_TOKEN }}.

Setting up a GitHub action workflow for publishing on npmjs.com

Next step is to add a GitHub Actions workflow definition for publishing your npm package at npmjs.com to the repository. For that, add the following YAML as a new file under .github/workflows, e.g. .github/workflows/npm-publish.yml, in the repository.

name: npm-publish
on:
  workflow_dispatch: 
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-node@v3
      with:
        node-version: '18.x'
        registry-url: 'https://registry.npmjs.org'
    - run: npm ci
    - run: npm publish --provenance --access public
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

This action workflow named npm-publish will deploy a new version of your package at the npm registry with provenance statements. If your package is private, you can omit the --access public option in the publishing step.

Running the workflow

With workflow_dispatch: in the on section, the provided GitHub action does not have any automatic trigger and will only run when started manually. This might be preferred to have full control of when a new package version will be published. If you rather want the publish workflow to run automatically on certain events in your repo, check out the possible triggers for the on section.
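For example, to trigger the publish automatically whenever a GitHub release is published, the on section could look like this (shown for illustration; not part of the original setup):

```yaml
on:
  release:
    types: [published]
```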

To start the workflow manually and publish the package, simply run…

$ gh workflow run npm-publish

…in the project directory. Please make sure the version number of your package was updated and all changes are committed & pushed before invoking the publishing action.

After the workflow has completed successfully, you should see the new version published on npmjs.com including the checkmark badge behind the version and the provenance details section.

In the GitHub actions logs for the npm publish step of the executed workflow you can also see that the provenance statement was signed and transferred to sigstore, like so…

github-actions-npm-publish-log

That’s it.

Happy coding 🙂

Integrating GCP Secret Manager with App Engine environment variables
https://tsmx.net/integrating-gcp-secret-manager-with-app-engine-environment-variables/ – Sun, 15 Oct 2023

A convenient way to use Secret Manager to securely pass sensitive data as environment values to Google App Engine (GAE) services running Node.js.

Unfortunately, App Engine doesn’t deliver an out-of-the-box solution for passing env vars from Secret Manager like the one available in Cloud Functions via the --set-secrets option of gcloud functions deploy.

This article shows a convenient way to achieve this using a simple npm package. The goals are:

  • Direct use of Secret Manager secret references in the standard App Engine deployment descriptor.
  • Minimal impact on the code.
  • No vendor and platform lock-in, no hard dependency on App Engine. The solution should still run in any other environment.
  • Should work with CommonJS as well as ESM/ECMAScript.

Let’s get to it…

Integrating Secret Manager with App Engine

Setting-up a secret in GCP Secret Manager

First, create one or more secrets in Secret Manager of your GCP project. Here, the secret is named MY_SECRET and has a reference path of projects/100374066341/secrets/MY_SECRET.

gcp-secret-manager-my-secret

For a more detailed guide on how to enable Secret Manager and create secrets, please refer to this section about secure configuration management in Cloud Functions.

Granting Secret Manager rights to the GAE service account

In order to resolve secrets from Secret Manager, the service account principal running your App Engine service – by default PROJECT_ID@appspot.gserviceaccount.com – must have at least the Secret Manager Secret Accessor role. For more details refer to the Secret Manager access control documentation.

To do so, go to IAM in the console and edit the App Engine principal. There, click “Add another role” and search for Secret Manager Secret Accessor and save, like so.

gcp-iam-access-scecret-role

Referencing a secret in app.yaml

In the standard app.yaml deployment descriptor of your App Engine service, create an appropriate environment variable in the env_variables section containing the secrets reference path. Like so…

service: my-service
runtime: nodejs20

env_variables:
  MY_SECRET: "projects/100374066341/secrets/MY_SECRET/versions/latest"

Note that you have to append /versions/latest to reference the latest version of the secret or /versions/x to reference the version with number x, e.g. /versions/2. For details see referencing secrets.

Add the gae-env-secrets package to your project

Next, add the gae-env-secrets package as a dependency in your project. This will provide the functionality to retrieve Secret Manager values for environment variables used in App Engine.

npm i gae-env-secrets --save

Use the Secret Manager value in your code

Import the gae-env-secrets package in your code and call its async getEnvSecrets function. Once it has completed, you’ll be able to access the values stored in GCP Secret Manager by simply reading the env vars used in the deployment descriptor. This works with CommonJS as well as ESM.

CommonJS

const { getEnvSecrets } = require('gae-env-secrets');

getEnvSecrets().then(() => {
  const secret = process.env['MY_SECRET']; // value of MY_SECRET from Secret Manager
});

ESM

import { getEnvSecrets } from 'gae-env-secrets';

await getEnvSecrets();
const secret = process.env['MY_SECRET']; // value of MY_SECRET from Secret Manager

That’s it. You can now seamlessly use Secret Manager secret values in your GAE App Engine services by referencing env vars.

To learn more on how the gae-env-secrets package is working and how its usage can be customized, read on.

Under the hood

Referencing secrets in the deployment descriptor

To reference secrets in the app.yaml deployment descriptor, you’ll need to pass the versioned reference of the secret from Secret Manager. This has the form of…

projects/[Project-Number]/secrets/[Secret-Name]/versions/[Version-Number|latest]

To retrieve the reference path of a secrets version in Secret Manager simply click “Copy resource name” on the three dots behind a version. Specifying latest as the version instead of a number will always supply the highest active version of a secret.

gcp-secret-manager-my-secret-name

Then pass the secrets reference to the desired variable in the env_variables block of the deployment descriptor, like so…

env_variables:
  SECRET_ENV_VAR: "projects/100374066341/secrets/MY_SECRET/versions/1"

For more details, refer to the app.yaml reference.
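For illustration, such a versioned reference can be checked and parsed with a simple pattern. This is a hypothetical sketch, not code taken from the gae-env-secrets package:

```javascript
// Parses a Secret Manager reference like
// projects/100374066341/secrets/MY_SECRET/versions/latest
// Returns its parts, or null if the value is no valid reference.
const SECRET_REF = /^projects\/([^/]+)\/secrets\/([^/]+)\/versions\/(latest|\d+)$/;

function parseSecretReference(value) {
    const match = SECRET_REF.exec(value);
    if (!match) return null;
    return { project: match[1], secret: match[2], version: match[3] };
}

console.log(parseSecretReference('projects/100374066341/secrets/MY_SECRET/versions/latest'));
// → { project: '100374066341', secret: 'MY_SECRET', version: 'latest' }
console.log(parseSecretReference('some-plain-value')); // → null
```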

Determining the runtime-environment

gae-env-secrets evaluates environment variables to detect if it is running directly in App Engine. If the following env vars are both present, the library assumes it’s running in GAE and substitutes relevant env vars with their respective secret values from Secret Manager:

  • GAE_SERVICE
  • GAE_RUNTIME

If these two env vars are not present, the library won’t do anything. So it is safe to call it unconditionally in your code without interfering with local development, testing etc.

To simulate running under GAE, simply set those two env vars to anything.
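For example, in a local shell or test setup. The detection check below is an illustrative sketch of the described behaviour, not the package’s actual implementation:

```javascript
// Simulate running under GAE by setting the two detection env vars
process.env.GAE_SERVICE = 'my-service';
process.env.GAE_RUNTIME = 'nodejs20';

// Illustrative sketch of the detection described above (not the real code)
function runningInGae() {
    return Boolean(process.env.GAE_SERVICE && process.env.GAE_RUNTIME);
}

console.log(runningInGae()); // true
```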

Substituting env vars from Secret Manager

If running under GAE is detected, calling getEnvSecrets will iterate through all env vars and substitute the value with the corresponding secret retrieved from Secret Manager if one of the following conditions is true:

  • The name of the env var ends with _SECRET (default suffix) or another deviating suffix passed via the options
  • Auto-detection is enabled via options and the value of the env var matches a Secret Manager secret reference

For accessing the Secret Manager, the library uses the package @google-cloud/secret-manager.

Error handling

By default and for security reasons, the library will throw an error if substituting an env var’s value from Secret Manager fails for any reason…

  • secret reference is invalid
  • secret is inactive or not present
  • invalid version number
  • missing permissions to access Secret Manager
  • other errors…

So make sure to use appropriate error handling with try/catch or .catch().

To change this behaviour, use the strict property available in the options.

Passing options to getEnvSecrets

You can pass an options object when calling getEnvSecrets to customize the behaviour. The following options are available.

suffix

Type: String Default: _SECRET

All env vars whose name is ending with the suffix will be substituted with secrets from Secret Manager.

Pass another value to change the env vars of your choice.

// will substitute all env vars ending with '_KEY'
getEnvSecrets({ suffix: '_KEY' });

strict

Type: Boolean Default: true

By default strict is true which means that if a secret cannot be resolved an error will be thrown.

Setting strict to false changes this behaviour so that the error is only written to console.error. The value of the env var(s) where the error occurred will remain unchanged.

// error will only be logged and respective env vars remain unchanged
getEnvSecrets({ strict: false });

autoDetect

Type: Boolean Default: false

The autoDetect feature enables automatic detection of env var values that contain a Secret Manager secret reference for substitution regardless of the suffix and env vars name.

This feature is additional to the provided suffix, meaning that all env vars ending with the suffix AND all automatically detected will be substituted.

To turn on this feature, pass true in the options object.

// turn on autoDetect
getEnvSecrets({ autoDetect: true });

Example: With this feature enabled, the following env var would be substituted with version 2 of the secret MY_SECRET regardless of the suffix, because it contains a Secret Manager reference as its value.

env_variables:
  VAR_WITH_ANY_NAME: "projects/00112233/secrets/MY_SECRET/versions/2"

Considerations & limitations when using gae-env-secrets

Please keep in mind the following points when using this solution.

  • Since the getEnvSecrets function is async, you’ll need to await the result or chain using .then to be able to work with the secret values. CommonJS does not support top-level await.
  • As the env var secrets are resolved at runtime of your code, any top-level code of other modules that is executed upon require/import cannot make use of the secret values and instead would see the secret references as values of the env vars.
  • Resolving the secrets from Secret Manager using the underlying Google library will usually take 1-2 seconds.
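To illustrate the first point, a common CommonJS pattern is to wrap the startup code in an async main function. getEnvSecrets is replaced by a stand-in here to keep the sketch self-contained and runnable:

```javascript
// Stand-in for require('gae-env-secrets').getEnvSecrets - in a real project
// it would resolve the secret references from Secret Manager.
const getEnvSecrets = async () => { process.env.MY_SECRET = 'resolved-value'; };

async function main() {
    await getEnvSecrets();
    // only from here on the env var holds the resolved secret value
    return process.env.MY_SECRET;
}

main().then((secret) => console.log(secret)); // prints 'resolved-value'
```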

Summary

This article showed how to easily integrate Secret Manager with App Engine by using one simple package and a few lines of code. No vendor or platform lock-in is created.

However, once Google supplies an out-of-the-box feature that makes the integration work like in Cloud Functions, switching to it should be considered, as it may overcome the limitations of this solution, e.g. secret resolution at runtime.

Happy coding 🙂

Secure configuration management for a GCP cloud function in Node.js
https://tsmx.net/secure-configuration-management-for-a-gcp-cloud-function-in-node-js/ – Sat, 02 Sep 2023

Creating a convenient and production-grade configuration management for a GCP cloud function in Node.js using Secret Manager and the secure-config package. Includes a complete example project on GitHub.

Goals and general setup of the cloud function configuration

Like in a traditional app, it’s very common that you’ll need sensitive configuration data in a GCP cloud function, e.g. a DB username and password. This article shows a proper way of achieving this by leveraging managed cloud services and an additional Node.js package. The goals of this setup are…

  • Industry-standard AES encryption and non-exposure of any needed configuration value
  • Full JSON flexibility for the configuration like nested values, arrays etc.
  • Use of managed GCP services without losing the capability to run on other platforms, e.g. local testing, on a traditional server, Docker, Kubernetes or else – no vendor lock-in

To achieve this, we’ll be using two components for the cloud functions configuration setup:

  1. The secure-config package to securely store the complete configuration as an encrypted JSON file. Uses strong AES encryption and standard JSON, works with nearly any runtime environment.
  2. GCP Secret Manager for secure storage and passing of the secure-config master key to the cloud function by using an environment variable.

If you wonder whether using Secret Manager alone, without any additional package, might be sufficient, take a look at the further thoughts below.

Steps to implement the configuration management in your cloud function

Install secure-config and create the config files

Install the secure-config package by running:

npm install @tsmx/secure-config --save

Having this, create a conf subfolder in your project for the configuration files. In this tutorial we’ll create two files: one for local testing purposes without any encryption and a production version with an encrypted secret which will be used in GCP.

The unencrypted config file will be conf/config.json with the following simple content:

{
  "secret": "secret-config-value"
}

To create the encrypted production version I recommend using the secure-config-tool. If you don’t want to install this tool, refer to the secure-config documentation on how to generate encrypted entries without it.

For simplicity I assume you have secure-config-tool installed and we will use 00000000000000000000000000000000 (32x 0) as the encryption key. Having this, create the encrypted configuration for production of the cloud function as follows…

cd conf/ 
export CONFIG_ENCRYPTION_KEY=00000000000000000000000000000000
secure-config-tool create -nh -p "secret" ./config.json > ./config-production.json

This will create config-production.json in the conf directory with an encrypted secret, like so:

{ 
  "secret": "ENCRYPTED|a2890c023f1eb8c3d66ee816304e4c30|bd8051d2def1721588f469c348ab052269bd1f332809d6e6401abc3c5636299d"
}

Note: By default, GCP will set NODE_ENV=production when you run a cloud function. That’s why the secure-config package will look for conf/config-production.json if you don’t specify something else. For all available options of the secure-config package, refer to the documentation.
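For illustration, the file name resolution can be sketched roughly like this (a simplification; refer to the secure-config documentation for the actual rules):

```javascript
// Simplified sketch: which config file secure-config would look for,
// depending on NODE_ENV (not the package's actual implementation)
function configFileName(env) {
    return env ? `config-${env}.json` : 'config.json';
}

console.log(configFileName('production')); // config-production.json
console.log(configFileName(undefined));    // config.json
```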

To prevent unwanted exposure of sensitive data, use a .gcloudignore file in the root folder of your project to only upload the encrypted production configuration to GCP when deploying. The following lines will tell gcloud to ignore all files in the conf/ folder but the config-production.json.

# don't upload non-production configurations
conf/*
!conf/config-production.json

Make sure to also check uploads to any public code repo in the same way using .gitignore or something similar.

Use the configuration in your code

Use the configuration values in your cloud function code. Here as an ES module in the main function ./index.js:

import secureConfig from '@tsmx/secure-config';
const config = secureConfig();

export const helloGCP = (req, res) => {
    res.json({
        info: 'Hello from GCP cloud functions!',
        secret: config.secret
    });
};

Of course this also works with CommonJS using require. From a code perspective that’s all, next step is to set-up GCP for passing the configurations key to the function.

Store the configuration key in Secret Manager

In your GCP console, search for “secret manager” and enable the API if not already done.

gcp-secret-manager
gcp-secret-manager-enable

After secret manager is enabled, click on “CREATE SECRET” on the top and create a new secret with name CONFIG_KEY and a secret value of 00000000000000000000000000000000. After creating the secret you can see it in the list and click on it to view the details.

gcp-secret-manager-config-key

Directly below the friendly name you can find the secret’s reference which in this case is projects/100374066341/secrets/CONFIG_KEY. This reference will be used later to securely pass the secret as an environment variable to the cloud function.

To verify the value of a secret, click on the three dots behind a version and go to “View secret value”:

gcp-secret-manager-config-key-value

Last step is to grant the service account used for function execution the Secret Manager Secret Accessor role so that the secret can be accessed. By default, GCP uses the following accounts to execute cloud functions depending on the generation:

  • Gen 1: PROJECT_ID@appspot.gserviceaccount.com
  • Gen 2: PROJECT_NUMBER-compute@developer.gserviceaccount.com

For more details on the used IAM account refer to the function identity documentation. Depending on the generation you’ll deploy the function, select the appropriate account under IAM in the console and make sure it has the Secret Manager Secret Accessor role. Add it if necessary by clicking “Edit principal” and then “Add another role”.

gcp-iam-access-secret

That’s it for the secret manager part. The master key needed for decryption of the configuration is now securely stored and ready to use.

Deploy and run the cloud function

The cloud function is now ready to deploy. To do so, we use gcloud functions deploy.

gcloud functions deploy secure-config-function \
--gen2 \
--runtime=nodejs18 \
--region=europe-west3 \
--source=. \
--entry-point=helloGCP \
--set-secrets=CONFIG_ENCRYPTION_KEY=projects/100374066341/secrets/CONFIG_KEY:latest \
--trigger-http \
--allow-unauthenticated

The option --set-secrets=CONFIG_ENCRYPTION_KEY=projects/100374066341/secrets/CONFIG_KEY:latest tells GCP to supply the cloud function with an env var named CONFIG_ENCRYPTION_KEY expected by secure-config with a value of the latest version of the secret projects/100374066341/secrets/CONFIG_KEY. Make sure to replace the secret’s reference with your specific value.

For a complete description of the options refer to the documentation of gcloud functions deploy.

On completion, gcloud will tell you the URL of the successfully deployed function.

...
updateTime: '2023-09-01T20:44:08.493437974Z'
url: https://europe-west3-tsmx-gcp.cloudfunctions.net/secure-config-function

Call this URL to verify the function is working.

curl https://europe-west3-tsmx-gcp.cloudfunctions.net/secure-config-function
{"info":"Hello from GCP cloud functions!","secret":"secure-config-value"}

You should also see the function with a green check mark in your GCP console.

gcp-cloud-functions-overview

Perfect! The cloud function is deployed and works using a secure configuration management.

Example project at GitHub

A complete example project is available on GitHub.

git clone https://github.com/tsmx/secure-config-cloud-function.git

For easy deployment of the function, a deploy script is provided in package.json. Simply invoke this with npm run. Make sure gcloud is configured properly and you are in the right project.

npm run deploy

Further thoughts

If – and only if – your function uses just a few simple configuration values and nesting, structuring, arrays and managing multiple environments in the configuration are of no interest, I would suggest sticking with Secret Manager alone and leaving out the secure-config package.

Normally, at least some of these options are relevant in your project and using the package absolutely makes sense. For a full view of the features you'll get out of the package, refer to the documentation.

Happy coding 🙂

Useful links

]]>
Default Node.js process.env variables in GCP cloud functions and app engine https://tsmx.net/nodejs-env-vars-in-gcp-cloud-functions-and-app-engine/ Mon, 21 Aug 2023 20:37:41 +0000 https://tsmx.net/?p=2247 Read more]]> Discovering the default process.env variables provided in cloud functions and app engine services on Google Cloud Platform covering Node.js 16, 18 and 20 as well as Gen1 and Gen2 functions. Including a simple project for retrieving the values.

For cloud functions or app engine services using Node.js runtimes, GCP provides a default set of environment variables accessible through process.env. In this article we will explore what these env vars look like for different versions of Node.js in these GCP services.

A very simple project for deploying the needed functions and app engine services to discover these env vars in your own GCP account is also provided.

Cloud functions environment variables

See below for the default process.env variables provided by GCP cloud functions with different Node.js runtimes. Click the link to call a provided function and retrieve the most current result.

{
  LANGUAGE: "en_US:en",
  NODE_OPTIONS: "--max-old-space-size=192",
  K_REVISION: "node20-gen2-get-env-00003-sip",
  PWD: "/workspace",
  FUNCTION_SIGNATURE_TYPE: "http",
  PORT: "8080",
  CNB_STACK_ID: "google.gae.22",
  NODE_ENV: "production",
  CNB_GROUP_ID: "33",
  NO_UPDATE_NOTIFIER: "true",
  HOME: "/root",
  LANG: "en_US.UTF-8",
  K_SERVICE: "node20-gen2-get-env",
  GAE_RUNTIME: "nodejs20",
  SHLVL: "0",
  CNB_USER_ID: "33",
  LC_ALL: "en_US.UTF-8",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  FUNCTION_TARGET: "getEnv",
  K_CONFIGURATION: "node20-gen2-get-env",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 20 Gen2 env vars for cloud functions

{
  LANGUAGE: "en_US:en",
  NODE_OPTIONS: "--max-old-space-size=192",
  K_REVISION: "3",
  PWD: "/workspace",
  FUNCTION_SIGNATURE_TYPE: "http",
  PORT: "8080",
  CNB_STACK_ID: "google.gae.22",
  NODE_ENV: "production",
  CNB_GROUP_ID: "33",
  NO_UPDATE_NOTIFIER: "true",
  HOME: "/root",
  LANG: "en_US.UTF-8",
  GCF_BLOCK_RUNTIME_go112: "410",
  K_SERVICE: "node20-get-env",
  GAE_RUNTIME: "nodejs20",
  SHLVL: "0",
  CNB_USER_ID: "33",
  LC_ALL: "en_US.UTF-8",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  GCF_BLOCK_RUNTIME_nodejs6: "410",
  FUNCTION_TARGET: "getEnv",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 20 env vars for cloud functions

{
  LANGUAGE: "en_US:en",
  NODE_OPTIONS: "--max-old-space-size=192",
  K_REVISION: "node18-gen2-get-env-00003-gic",
  PWD: "/workspace",
  FUNCTION_SIGNATURE_TYPE: "http",
  PORT: "8080",
  CNB_STACK_ID: "google.gae.22",
  NODE_ENV: "production",
  CNB_GROUP_ID: "33",
  NO_UPDATE_NOTIFIER: "true",
  HOME: "/root",
  LANG: "en_US.UTF-8",
  K_SERVICE: "node18-gen2-get-env",
  GAE_RUNTIME: "nodejs18",
  SHLVL: "0",
  CNB_USER_ID: "33",
  LC_ALL: "en_US.UTF-8",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  FUNCTION_TARGET: "getEnv",
  K_CONFIGURATION: "node18-gen2-get-env",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 18 Gen2 env vars for cloud functions

{
  LANGUAGE: "en_US:en",
  NODE_OPTIONS: "--max-old-space-size=192",
  K_REVISION: "3",
  PWD: "/workspace",
  FUNCTION_SIGNATURE_TYPE: "http",
  PORT: "8080",
  CNB_STACK_ID: "google.gae.22",
  NODE_ENV: "production",
  CNB_GROUP_ID: "33",
  NO_UPDATE_NOTIFIER: "true",
  HOME: "/root",
  LANG: "en_US.UTF-8",
  GCF_BLOCK_RUNTIME_go112: "410",
  K_SERVICE: "node18-get-env",
  GAE_RUNTIME: "nodejs18",
  SHLVL: "0",
  CNB_USER_ID: "33",
  LC_ALL: "en_US.UTF-8",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  GCF_BLOCK_RUNTIME_nodejs6: "410",
  FUNCTION_TARGET: "getEnv",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 18 env vars for cloud functions

{
  NO_UPDATE_NOTIFIER: "true",
  FUNCTION_TARGET: "getEnv",
  NODE_OPTIONS: "--max-old-space-size=192",
  NODE_ENV: "production",
  PWD: "/workspace",
  HOME: "/root",
  DEBIAN_FRONTEND: "noninteractive",
  PORT: "8080",
  K_REVISION: "node16-gen2-get-env-00003-rok",
  K_SERVICE: "node16-gen2-get-env",
  SHLVL: "1",
  GAE_RUNTIME: "nodejs16",
  FUNCTION_SIGNATURE_TYPE: "http",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  K_CONFIGURATION: "node16-gen2-get-env",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 16 Gen2 env vars for cloud functions

{
  GCF_BLOCK_RUNTIME_nodejs6: "410",
  NO_UPDATE_NOTIFIER: "true",
  FUNCTION_TARGET: "getEnv",
  GCF_BLOCK_RUNTIME_go112: "410",
  NODE_OPTIONS: "--max-old-space-size=192",
  NODE_ENV: "production",
  PWD: "/workspace",
  HOME: "/root",
  DEBIAN_FRONTEND: "noninteractive",
  PORT: "8080",
  K_REVISION: "3",
  K_SERVICE: "node16-get-env",
  SHLVL: "1",
  GAE_RUNTIME: "nodejs16",
  FUNCTION_SIGNATURE_TYPE: "http",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 16 env vars for cloud functions

App engine environment variables

See below for the default process.env variables provided by GCP app engine with different Node.js runtimes. Click the link to call a provided service and retrieve the most current result.

{
  S2A_ACCESS_TOKEN: "xxxx",
  GAE_MEMORY_MB: "384",
  NO_UPDATE_NOTIFIER: "true",
  LANGUAGE: "en_US:en",
  GAE_INSTANCE: "00c61b117c1d6452581b06dcb5f23b1f1bc9a6c6aaebd47203070aa80a23270580b3af586743534bbd4012aa004c1437350e8cda1dc423b4be",
  HOME: "/root",
  PORT: "8081",
  NODE_OPTIONS: "--max-old-space-size=300 ",
  GAE_SERVICE: "node20-get-env",
  PATH: "/srv/node_modules/.bin/:/workspace/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  CNB_GROUP_ID: "33",
  CNB_USER_ID: "33",
  GAE_DEPLOYMENT_ID: "454320553896815573",
  LANG: "en_US.UTF-8",
  GOOGLE_CLOUD_PROJECT: "tsmx-gcp",
  GAE_ENV: "standard",
  PWD: "/workspace",
  GAE_APPLICATION: "h~tsmx-gcp",
  LC_ALL: "en_US.UTF-8",
  GAE_RUNTIME: "nodejs20",
  GAE_VERSION: "20230819t221144",
  NODE_ENV: "production",
  CNB_STACK_ID: "google.gae.22"
}

Link: Get Node.js 20 env vars for app engine

{
  S2A_ACCESS_TOKEN: "xxxx",
  GAE_MEMORY_MB: "384",
  LANGUAGE: "en_US:en",
  NO_UPDATE_NOTIFIER: "true",
  GAE_INSTANCE: "00c61b117cf7b9059c648a310d860a5f66a16dc1761710ac82df9a38392a9624cbdebd451f3978bb6cfc0640f5680590beee2d4afe3b214a879c",
  HOME: "/root",
  PORT: "8081",
  NODE_OPTIONS: "--max-old-space-size=300 ",
  GAE_SERVICE: "node18-get-env",
  PATH: "/srv/node_modules/.bin/:/workspace/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  CNB_GROUP_ID: "33",
  CNB_USER_ID: "33",
  GAE_DEPLOYMENT_ID: "454320535647197464",
  LANG: "en_US.UTF-8",
  GOOGLE_CLOUD_PROJECT: "tsmx-gcp",
  GAE_ENV: "standard",
  GAE_APPLICATION: "h~tsmx-gcp",
  LC_ALL: "en_US.UTF-8",
  PWD: "/workspace",
  GAE_RUNTIME: "nodejs18",
  GAE_VERSION: "20230819t221044",
  NODE_ENV: "production",
  CNB_STACK_ID: "google.gae.22"
}

Link: Get Node.js 18 env vars for app engine

{
  S2A_ACCESS_TOKEN: "xxxx",
  NO_UPDATE_NOTIFIER: "true",
  GAE_MEMORY_MB: "384",
  GAE_INSTANCE: "00c61b117c641ca0a31d2baf0347dc63fc3e870aee7b8707eccebd1f31d5b5372af69bd5178f08349fb3f3ee5ac460efeae28ed842e96fe861",
  HOME: "/root",
  PORT: "8081",
  NODE_OPTIONS: "--max-old-space-size=300 ",
  GAE_SERVICE: "node16-get-env",
  PATH: "/srv/node_modules/.bin/:/workspace/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  GAE_DEPLOYMENT_ID: "454320521333239759",
  DEBIAN_FRONTEND: "noninteractive",
  GOOGLE_CLOUD_PROJECT: "tsmx-gcp",
  GAE_ENV: "standard",
  GAE_APPLICATION: "h~tsmx-gcp",
  PWD: "/workspace",
  GAE_RUNTIME: "nodejs16",
  GAE_VERSION: "20230819t220949",
  NODE_ENV: "production"
}

Link: Get Node.js 16 env vars for app engine

Project for discovering process.env in different services and runtimes

Alongside this article, the gcp-get-env project is provided on GitHub. This simple Node.js solution ships a function and an Express application that can be deployed either as a GCP cloud function or an app engine service. Simply use the provided scripts in package.json to deploy it as a function or service with different runtimes.
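The core of such an env-dumping solution is tiny. A minimal sketch of what the handler might look like (illustrative only; the actual code in gcp-get-env may differ):

```javascript
// Return all process.env variables as a JSON response. The same handler
// shape works for an HTTP cloud function and as an Express route handler.
function getEnv(req, res) {
  res.status(200).json(process.env);
}

module.exports = { getEnv };
```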

For that you’ll need an active GCP account and a configured and ready-to-go gcloud CLI on your machine. For details on the installation & configuration see here.

The following deployment scripts are provided in package.json. Simply call npm run [scriptname] to deploy.

| scriptname | deploys | function/service name |
|------------|---------|-----------------------|
| deploy-node16-func | cloud function with runtime Node.js 16 Gen 1 | node16-get-env |
| deploy-node16-func-gen2 | cloud function with runtime Node.js 16 Gen 2 | node16-gen2-get-env |
| deploy-node18-func | cloud function with runtime Node.js 18 Gen 1 | node18-get-env |
| deploy-node18-func-gen2 | cloud function with runtime Node.js 18 Gen 2 | node18-gen2-get-env |
| deploy-node20-func | cloud function with runtime Node.js 20 Gen 1 | node20-get-env |
| deploy-node20-func-gen2 | cloud function with runtime Node.js 20 Gen 2 | node20-gen2-get-env |
| deploy-node16-gae | app engine service with runtime Node.js 16 | node16-get-env |
| deploy-node18-gae | app engine service with runtime Node.js 18 | node18-get-env |
| deploy-node20-gae | app engine service with runtime Node.js 20 | node20-get-env |

Please note that the deployed functions and services will be publicly accessible and may cause charges to your GCP account.

Happy coding 🙂

Useful links

]]>
CommonJS vs. ESM/ECMAScript cheat-sheet https://tsmx.net/commonjs-vs-esm-ecmascript-cheat-sheet/ Wed, 16 Aug 2023 20:40:19 +0000 https://tsmx.net/?p=2152 Read more]]> Short comparison of the most common statements in CommonJS vs. ESM/ECMAScript for importing, exporting and in your package.json.

See the table below for a brief comparison of the most used statements, which should cover the vast majority of use-cases.

| Use-Case | CommonJS | ESM / ECMAScript |
|----------|----------|------------------|
| **Importing** | | |
| Default import (NPM module) | `const imp = require('module');` | `import imp from 'module';` |
| Default import (own module) | `const imp = require('./myModule');` (path is mandatory, file extension is optional) | `import imp from './myModule.js';` (path and file extension are mandatory) |
| Named import | `const { namedImp } = require('module');` | `import { namedImp } from 'module';` |
| Import with function invocation | `const funcImp = require('module')(myParam);` | `import imp from 'module'; const funcImp = imp(myParam);` (ESM doesn't support invocations on importing, so two lines of code are needed) |
| **Exporting** | | |
| Default export (unnamed) | `module.exports = function() {/* */}` | `export default function() {/* */}` |
| Named export (e.g. a function; works also with objects, classes etc.) | `module.exports.myFunction = function() {/* */}` | `export function myFunction() {/* */}` |
| Exporting an arrow function | `module.exports.myFunction = () => {/* */}` | `export const myFunction = () => {/* */}` (the `const` keyword is needed here) |
| **package.json entries** | | |
| Module type | nothing, or `"type": "commonjs"` (since CommonJS is the default, normally no type entry is present in package.json) | `"type": "module"` (tells Node.js to treat all `.js` files as ES modules without the need to use the `.mjs` extension, which is often preferred) |
| Entry point | `"main": "index.js"` | `"exports": "./index.js"` (path and file extension are mandatory) |

Please note that this cheat-sheet is just an excerpt of all possible module.exports/require and import/export constellations as well as all available package.json options. For more details, refer to the very comprehensive documentation sites:

If you are about to migrate from CommonJS, also check out the article on converting an existing Node.js project to ESM.

Happy coding 😉

]]>
Destructuring nested object properties in JavaScript https://tsmx.net/destructuring-of-nested-object-properties-in-javascript/ Mon, 24 Jul 2023 20:25:49 +0000 https://tsmx.net/?p=2114 Read more]]> Quick tutorial on how to use destructuring for accessing nested JavaScript object properties.

This is a really short article on an interesting JavaScript feature I wasn't aware of for a long time: destructuring of nested object properties. It's really helpful when you need to deal with only some properties of a heavily sub-structured object that gets passed around, for example parts of a Redux state object in a React app.

Normal JavaScript object destructuring

First things first. Let’s assume you have the following JavaScript object.

const person = {
  name: 'John',
  address: {
    country: 'USA',
    city: 'New York',
    details: {
      street: 'Broadway',
      number: 3155
    }
  }
}

To only access the name property of the object when it's passed over, you can use standard destructuring like so…

const writeName = ({ name }) => {
  console.log(name);
}

writeName(person);
// John

Naive approach to access nested properties using destructuring

But what if you need to access the city property of the object, and only that? With standard destructuring the approach would look something like this.

const writeCity = ({ address }) => {
  console.log(address.city);
}

writeCity(person);
// New York

That definitely works, but there is an even more elegant approach to directly access the desired property instead of still having to step down one level using address.city.

Accessing nested object properties using deep destructuring

With using the following syntax for deep object destructuring, a direct extraction of the needed nested property is possible.

const writeCity = ({ address: { city } }) => {
  console.log(city);
}

writeCity(person);
// New York

That's it! A really elegant way to access nested object properties. This also works for deeper nested properties; just repeat the destructuring syntax using colons. The innermost destructured properties will be available as local variables. And of course you can destructure a list of properties, like so…

const writeAddressDetails = ({ address: { details: { street, number } } }) => {
  console.log(street);
  console.log(number);
}

writeAddressDetails(person);
// Broadway
// 3155

BTW: this is an official JavaScript feature also included in the JavaScript documentation for object destructuring. But it’s a bit hidden 😉
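One caveat worth knowing: if an intermediate property like address is missing on the passed object, deep destructuring throws a TypeError. Default values guard against that, as this sketch shows (the = {} fallbacks are the addition):

```javascript
const person = { name: 'John', address: { country: 'USA', city: 'New York' } };

// Fall back to an empty object when 'address' (or the whole argument) is absent
const getCity = ({ address: { city } = {} } = {}) => city;

console.log(getCity(person)); // New York
console.log(getCity({}));     // undefined instead of a TypeError
```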

Putting it all together

const person = {
  name: 'John',
  address: {
    country: 'USA',
    city: 'New York',
    details: {
      street: 'Broadway',
      number: 3155
    }
  }
}

const writeName = ({ name }) => {
  console.log(name);
}

const writeCityLong = ({ address }) => {
  console.log(address.city);
}

const writeCity = ({ address: { city } }) => {
  console.log(city);
}

const writeAddressDetails = ({ address: { details: { street, number } } }) => {
  console.log(street);
  console.log(number);
}

writeName(person);
// John

writeCityLong(person);
// New York

writeCity(person);
// New York

writeAddressDetails(person);
// Broadway
// 3155

Happy coding 🙂

Useful links

]]>
Convert an existing NodeJS project from CommonJS to ESM / ECMAScript https://tsmx.net/convert-existing-nodejs-project-from-commonjs-to-esm/ Thu, 25 May 2023 20:57:25 +0000 https://tsmx.net/?p=1967 Read more]]> A quick guide for converting your existing NodeJS projects from CommonJS to ESM/ECMAScript. Including Express, Jest, Supertest and ESLint.

ECMAScript or ESM modules are the official way of developing JavaScript software today and many projects and libraries are moving towards this format to leverage its advantages. Support for CommonJS, which was the de facto standard for NodeJS projects so far, is even being dropped by some famous libs. The question is: how can your existing NodeJS projects be migrated to the new level?

Fortunately, for the vast majority of projects this should be a no-brainer. In this guide we’ll migrate a minimal project using popular libs and tools like Express, Jest, Supertest and ESLint from CommonJS to an ESM project.

tl;dr – fast-lane conversion

For the impatient, here's the straightforward, uncommented list of steps to convert an existing project using Jest and ESLint from CommonJS to ESM. For more details on each step and other side notes see further below.

  1. In package.json: add "type": "module" and "exports": "./start.js", remove the "main": "start.js" entry.
  2. Replace all module.exports statements with export and all require statements with import in your source files.
  3. Set or add "sourceType": "module" and "ecmaVersion": "latest" in the parserOptions section of your ESLint configuration.
  4. Replace the "jest" start command in package.json with "NODE_OPTIONS=--experimental-vm-modules npx jest".

That’s already it. You should now be able to run the project and all tests using ESM. Keep on reading for an example and more details on the conversion steps.

Example project on GitHub

This post is accompanied by the node-commonjs-to-esm example project on GitHub which uses Express, Jest, Supertest and ESLint. The project comes with two branches commonjs and esm showing the original state using CommonJS and the migrated one using ESM.

# clone the example project
git clone https://github.com/tsmx/node-commonjs-to-esm.git

# install needed dependencies
cd node-commonjs-to-esm
npm install

# check out the original CommonJS project
git checkout commonjs

# check out the migrated ESM project
git checkout esm

To start the project or test suite – regardless of the branch you are in – run the following commands.

# run the project to start a simple server on localhost:3000 
# with GET routes for '/', '/route1' and '/route2'
npm run start

# run the Jest tests
npm run test

Note: When switching between the ESM and the CommonJS branch, a new npm install is not necessary as the conversion doesn’t affect the dependencies.

Required NodeJS version

To make full use of ECMAScript/ESM and convert your projects accordingly, a NodeJS version of at least v12.20.0 or v14.13.0 is needed.

Please note that if you are on an older version of NodeJS, it's highly recommended to update anyway since these versions are quite old and even support for v14 had already ended at the time of writing this article. For the example, NodeJS v18 LTS was used.

CommonJS to ESM conversion steps in detail

ESM modifications to package.json

To switch from CommonJS to ECMAScript/ESM we’ll first make two slight changes to our package.json:

  • Replacing "main": "start.js" with "exports": "./start.js"
    Please note the leading "./" as with ESM every reference to own files/modules has to be a full pathname including the directory and also file extension.
  • Adding "type": "module"
    This tells NodeJS we are on ESM now and saves you from having to rename all *.js files in your project to *.mjs as proposed in some guides. Also, with ESM we can keep the *.js file extension as it is still normal JavaScript.
# in package.json

# before
"main": "start.js",

# change to
"exports": "./start.js",
"type": "module",

For more details on exporting the entry point and the module type declaration refer to the official NodeJS documentation for package.json fields.

Replacing require/module.exports with import/export

Now the biggest change has to be done: in all of your source code files you have to replace the statements for exporting and importing, as module.exports and require are no longer supported with ESM.

For unnamed exports do the following changes…

// before: CommonJS - unnamed export

module.exports = app;

// after: ESM - unnamed export

export default app;

And for named exports of functions…

// before: CommonJS - named exports

module.exports.route1 = function (req, res) { ... }
module.exports.route2 = (req, res) => { ... };


// after: ESM - named exports

export function route1 (req, res) { ... }
export const route2 = (req, res) => { ... };

Please note the const keyword instead of function when exporting an arrow function. For a complete list of available export statements for other types like arrays, classes, literals and so on please refer to the export statement documentation.

After changing all the exports, let’s move on with replacing require by import…

// before: CommonJS requiring in dependencies

const express = require('express'); // standard module
const app = require('./app'); // own module    
const routes = require('./handlers/routes'); // own module with named exports routes.route1 and routes.route2

// after: ESM importing dependencies

import express from 'express'; // without trailing .js
import app from './app.js'; // with trailing .js
import * as routes from './handlers/routes.js'; // '*' to import all named exports, with trailing .js

Note that own modules must always be imported by providing the path and file extension whereas standard modules installed via npm (like express) don’t.

Importing named exports can also be done selectively if you need only some imports from a module by using destructuring in curly braces like so…

import { route1, route2 } from './handlers/routes.js';

For a full list of available import declarations please refer to the import statement documentation.
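One migration detail worth mentioning: require could be called anywhere, even conditionally inside a function, which a static import statement can't replace. For those cases ESM provides dynamic import(), which returns a promise. A small sketch using a built-in module:

```javascript
// Lazy/conditional loading with dynamic import() instead of require().
// import() resolves to the module namespace object.
async function loadPlatform() {
  const os = await import('node:os');
  return os.platform();
}

loadPlatform().then((platform) => console.log(platform));
```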

You should now already be able to start the migrated ESM project by running npm run start.

Updating the ESLint configuration

Although everything is running, you should notice some ESLint errors in your code…

eslint-module-export-error

This is because ESLint isn't aware that you've switched from CommonJS to ESM. To fix that, simply add the following entries at the top level of your ESLint configuration – in case of the sample project, the .eslintrc.json file.

"parserOptions": {
  "ecmaVersion": "latest",
  "sourceType": "module"
},

Now ESLint knows to validate against ECMAScript/ESM syntax with the latest features and no errors should show up any more. For a complete description of the available options have a look at the ESLint parsing docs.

Making Jest working again

The last thing to fix up is running the unit tests with Jest. Invoking npm run test would give you an error like this…

jest-esm-error

To fix this, we change the start command for Jest in our package.json as suggested in the Jest documentation for ECMAScript modules.

# before: CommonJS Jest start script under 'scripts'

"test": "jest"

# after ESM Jest start script

"test": "NODE_OPTIONS=--experimental-vm-modules npx jest"

Although this feature is still considered experimental, the tests are now running again.

jest-esm-success

That’s it! The project is now completely converted to ESM.

You may also check out the CommonJs vs. ESM cheat-sheet for further reading.

Happy coding 🙂

Useful links

]]>
Express: passing dates in an URL using route parameters including regex validation https://tsmx.net/express-pass-dates-in-url-with-regex-validation/ Mon, 06 Jun 2022 21:21:43 +0000 https://tsmx.net/?p=1717 Read more]]> Showing an elegant way on passing dates to your REST API’s and webservices using Express route parameters and regex validation standard features.

When implementing RESTful APIs or other webservices in NodeJS with Express, you will sooner or later come to the point where you need to pass dates as parameters. Something like YYYY-MM-DD. This article shows a way to do that, getting the most out of Express standard features.

Query strings and route parameters quick intro

For this showcase let’s assume you want to write an API with a GET route /valueofday on localhost:3000. This route needs to receive a specific day as an input parameter to serve the values for this day.

To pass a date value to your Express application you have two possibilities in general:

  • Passing it as a query string parameter:
    http://localhost:3000/valueofday?day=20220210
  • Passing it as a URL route parameter:
    http://localhost:3000/valueofday/20220210

If you pass the required date as a query string like that, it will be available through the variable req.query.day in the request handler.

Although this is an easy way of grabbing the value ‘20220210’, you should keep in mind that it is a completely unchecked string value that is returned. So all post-processing needed to create a date, like splitting the string, validating that numbers were passed, casting to numbers etc., is completely up to you.
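To illustrate, that post-processing could look like the following hypothetical helper (simplified range checks only):

```javascript
// Manually validate and parse a ?day=YYYYMMDD query string value -
// exactly the kind of boilerplate the route parameter approach avoids.
function parseDay(day) {
  if (typeof day !== 'string' || !/^\d{8}$/.test(day)) return null;
  const year = +day.slice(0, 4);
  const month = +day.slice(4, 6);
  const date = +day.slice(6, 8);
  if (month < 1 || month > 12 || date < 1 || date > 31) return null;
  return new Date(year, month - 1, date);
}
```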

This would bloat your APIs with a lot of boilerplate code that has no business value. In this article you will see how to use out-of-the-box Express features to…

  • Separate the passed values for year, month and day
  • Ensure that only valid numbers are passed

…without writing any code of your own. This helps keep your REST APIs and webservices clean and robust. So let's go for it.

Exploring Express route parameters features

Recognizing that query strings are a sub-optimal solution, let's go ahead with route parameters in Express. To pass a day value to our route, we would naively specify it like so:

app.get("/valueofday/:day", (req, res) => {
    // logic goes here...
});

With that, Express will make the last URL portion available as req.params.day. Anyway, this is not any better than using a query string as it still returns an unchecked arbitrary string that we expect to be in the form YYYY-MM-DD.

Fortunately, Express has some nice out-of-the box features for route parameters we can use to optimize that.

Route parameter separators

First, we'll use the Express parameter separation feature. To construct a JavaScript Date object, we'll need the values for YYYY, MM and DD separately. Here, Express can do the work for us by using the allowed '-' or '.' route parameter separators, like so:

app.get("/valueofday/:year-:month-:day", (req, res) => {
  // logic goes here...
});

This will give us three variables req.params.year, req.params.month and req.params.day instead of one. E.g. when calling http://localhost:3000/valueofday/2022-04-11 the params would be year=”2022″, month=”04″ and day=”11″. So far, so good!

But the returned variables are still of arbitrary value. So how to ensure that only numbers are passed?

Regex validation for route parameters

To check the passed URL route parameters, we will make use of the regex parameter validation Express offers. For every route parameter you can add a regular expression (in brackets) against which it will be validated. For our showcase, let's use trivial regular expressions to ensure that only numbers of correct length can be passed, like so:

app.get("/valueofday/:year(\\d{4})-:month(\\d{2})-:day(\\d{2})", (req, res) => {
  // logic goes here...
});

This makes Express ensure that req.params.year is a four-digit number and req.params.month and req.params.day are two-digit numbers. Of course you can use much more sophisticated regexes in your solution for tighter checks. This is just to give you an idea of how this feature works.

Putting it all together: passing a date to an Express API

Having our YYYY-MM-DD values separated in req.params and having ensured we'll only get passed numbers, we can now easily create our JavaScript Date object in the request handler for further processing. The easiest way is to use the unary + operator to convert everything to a number.

app.get("/valueofday/:year(\\d{4})-:month(\\d{2})-:day(\\d{2})", (req, res) => {
  let queryDate = new Date(
    +req.params.year,
    +req.params.month - 1,
    +req.params.day
  );
  // logic using queryDate goes here...
});

Please note the “-1” here for the month value, as the JavaScript Date constructor treats it as an index value with a range of 0..11.

That’s it. We are now able to pass a date value to an Express API without the need of any boilerplate code.

Testing the solution

For testing purposes we’ll put together a minimal working Express solution with a catch-all route indicating any miss of our date-passing route.

var express = require("express");
var app = express();

app.get("/valueofday/:year(\\d{4})-:month(\\d{2})-:day(\\d{2})", (req, res) => {
  res.status(200).json({
      queryDate: new Date(
          +req.params.year,
          +req.params.month - 1,
          +req.params.day
      )
  });
});

app.get('*', (req, res) => {
  res.status(404).send('Non-existing route');
});

app.listen(3000);

Now let’s use curl to fire some requests and check the result…

$ curl -w "\n" localhost:3000/valueofday/2022-01-11
{"queryDate":"2022-01-11T00:00:00.000Z"}

$ curl -w "\n" localhost:3000/valueofday/2022-01-1
Non-existing route

$ curl -w "\n" localhost:3000/valueofday/1999-03-11
{"queryDate":"1999-03-11T00:00:00.000Z"}

$ curl -w "\n" localhost:3000/valueofday/1999-03-111
Non-existing route

$ curl -w "\n" localhost:3000/valueofday/1999-03.11
Non-existing route

$ curl -w "\n" localhost:3000/valueofday/1999-03-10
{"queryDate":"1999-03-10T00:00:00.000Z"}

$ curl -w "\n" localhost:3000/valueofday/abcd
Non-existing route

$ curl -w "\n" localhost:3000/valueofday/
Non-existing route

Happy coding 🙂

Useful links

]]>
Full code-coverage with Jest https://tsmx.net/jest-full-code-coverage/ Mon, 03 Jan 2022 22:03:13 +0000 https://tsmx.net/?p=1372 Read more]]> A guided tour on how to achieve the most complete code-coverage with Jest in your NodeJS projects and some thoughts on why this is not necessarily the primary target.

In this article I assume you have a basic understanding of what Jest is and how it is used. So let’s directly jump to it getting our code covered.

Example function to test

In this tour we’ll use the following function as the test object.

// test-functions.js

module.exports.calculateValue = (valueA, valueB, addOffset = false, addWarning = true) => {
  let result = {
    value: (addOffset ? valueA + valueB + 10 : valueA + valueB),
    message: null
  }
  if (result.value > 100) {
    let messageParts = []
    if (addWarning) messageParts.push('Warning:');
    messageParts.push('result is greater then 100');
    result.message = messageParts.join(' ');
  }
  return result;
}

To follow the next steps save this function in test-functions.js in the main folder of your NodeJS project.

Covering your code with Jest

Initial setup and test function touch

To get Jest up & running, install it as a dev dependency in your project.

npm install jest --save-dev

For ease of use, also in your CI/CD pipelines, I recommend adding script entries for running Jest with and without collecting coverage statistics to your package.json, like so…

...
"scripts": {
  "test": "jest",
  "test-coverage": "jest --coverage"
}
...

Having this in place, you can run the tests in your project by simply calling npm run test or npm run test-coverage (including gathering coverage stats).

Now let's start writing the test suite for our sample function. We start with an empty suite saved as test/test-functions.test.js which simply touches the source file…

// test/test-functions.test.js

const { calculateValue } = require('../test-functions');

describe('test-coverage test suite', () => {

  it('tests something', () => {
  });

});

Now, a first call to npm run test-coverage will show that our source file is touched, although it has almost no coverage yet.

[Screenshot jest-coverage-no-test: coverage stats after the empty test suite]

You may ask why we write this empty test suite before even looking at the coverage stats? The answer is that Jest will, by default, only collect coverage stats for touched source files. Without this empty suite referencing test-functions.js, Jest would report everything green because not a single source file is affected, which is not what we want here. Please read this great article on how to change that behaviour if you wish or need to in your project.
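As an alternative to the empty touching suite, Jest can be told to include every source file in the stats regardless of whether a test requires it. A minimal sketch of such a jest.config.js follows; the glob patterns are assumptions for a typical flat project layout, not taken from the article.

```javascript
// jest.config.js -- sketch, assuming sources live in the project root
module.exports = {
  // include every .js source file in the coverage stats,
  // even if no test suite ever requires it
  collectCoverageFrom: [
    '**/*.js',
    '!**/node_modules/**',
    '!**/coverage/**',
    '!jest.config.js'
  ]
};
```

With this in place, untouched files show up with 0% coverage instead of being silently omitted.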

First straight-forward test

Now let’s write the first simple test in our suite. We’ll call the function under test with the two required parameters valueA and valueB.

it('calculateValue test 1', () => {
  expect(calculateValue(10, 20).value).toBe(30);
});

Having this test in place, the stats are showing we’ve already covered quite some amount of our code…

[Screenshot jest-coverage-test-1: coverage stats after the first test]

Great, so far. We covered the main part of our function, but the test we wrote is rather poor: our function returns an object with two properties, yet the test only checks one of them. Such tests are prone to missing side effects of later changes, even though they increase and polish the coverage…

So let’s refactor it into a better test.

it('calculateValue test 1', () => {
  const result = calculateValue(10, 20);
  expect(result.value).toBe(30);
  expect(result.message).toBe(null);
});

Although this refactoring didn’t change anything regarding the code-coverage, I think you will agree that the quality of the test increased a lot. So always remember: code-coverage is not necessarily a synonym for high quality!

Testing uncovered lines

After running our first test, the output of npm run test-coverage shows uncovered lines 9-12. They are marked red, because they are completely untouched by any test.

These lines are within an if-statement that is only entered when the passed two values together will be greater than 100. So let’s go ahead and write another simple test satisfying this condition.

it('calculateValue test 2', () => {
  const result = calculateValue(100, 20);
  expect(result.value).toBe(120);
  expect(result.message).toBe('Warning: result is greater then 100');
});

Having this second test in place, the stats are now showing a 100% coverage of all lines and statements.

[Screenshot jest-coverage-test-2: 100% statement and line coverage, branches still incomplete]

However, there are still uncovered lines (5 and 10) left marked yellow. So let’s go on and cover these as well.

Testing uncovered branches

The uncovered lines marked in yellow are part of code branches (if-else statements, ternary operators and the like). Correspondingly, you can see that our branch coverage is at 75% and therefore not complete yet.

So let’s start by inspecting the first of them, line 5:

value: (addOffset ? valueA + valueB + 10 : valueA + valueB),

Can you see what is not covered here by any test? In real-world code, even more than in our small example, it can be tricky to figure out what exactly is not covered. Fortunately, Jest provides a great tool for further coverage inspection: an HTML report.

When running Jest with the --coverage option (like in our npm run test-coverage), it generates a report under coverage/lcov-report/index.html. Simply open this page in your browser and navigate to our source file’s sub-page.

[Screenshot jest-coverage-html-report: HTML coverage report for test-functions.js]

Here, again marked yellow, you can very easily see which part of line 5 is not covered by any test: the first return path of the ternary operator, which would be executed if addOffset is true. Hovering the mouse over it shows ‘branch not covered’.

So let’s write another test covering this code branch.

it('calculateValue test 3', () => {
  const result = calculateValue(100, 20, true);
  expect(result.value).toBe(130);
  expect(result.message).toBe('Warning: result is greater then 100');
});

Great! Line 5 is now fully covered.

[Screenshot jest-coverage-test-3: line 5 fully covered]

Nevertheless, line 10 is still marked as uncovered, although the generated HTML report doesn’t show anything there?!

Testing non-existent else-paths

To solve this mystery, let’s investigate line 10:

if (addWarning) messageParts.push('Warning:');

Obviously, the condition of the if-statement is true for all of our existing tests, as addWarning defaults to true in the test function and we never overrode it. Therefore, this line is already covered, BUT: there is an implicit else-branch, not written explicitly in the code, which is never taken. This is the case when addWarning is false: the code in the if-statement is then not executed, which may affect the logic of our function, and is therefore subject to testing for Jest by default.

So let’s write another test covering the implicit else-branch of our test function.

it('calculateValue test 4', () => {
  const result = calculateValue(100, 20, false, false);
  expect(result.value).toBe(120);
  expect(result.message).toBe('result is greater then 100');
});

Awesome! We have now covered our test function to 100% in all aspects: statements, lines and also branches.

Excluding untestable code-sections from the coverage stats

Despite all attempts, you might end up not finding a feasible or “affordable” test case for every line of code. To keep your coverage stats clean and meaningful anyway, Jest provides a way to explicitly exclude code sections from the coverage stats. For that, please refer to the article on ignoring code for coverage.

Your primary goal should always be to cover a line/branch with an appropriate test. But not at any price.

Summary and further steps

Putting it all together, our final test suite for full code coverage looks like this. Note that the empty initial test case from the setup is removed, as it was only needed for the initial run of the Jest suite (a suite must contain at least one test).

// test/test-functions.test.js

const { calculateValue } = require('../test-functions');

describe('test-coverage test suite', () => {

  it('calculateValue test 1', () => {
    const result = calculateValue(10, 20);
    expect(result.value).toBe(30);
    expect(result.message).toBe(null);
  });

  it('calculateValue test 2', () => {
    const result = calculateValue(100, 20);
    expect(result.value).toBe(120);
    expect(result.message).toBe('Warning: result is greater then 100');
  });

  it('calculateValue test 3', () => {
    const result = calculateValue(100, 20, true);
    expect(result.value).toBe(130);
    expect(result.message).toBe('Warning: result is greater then 100');
  });

  it('calculateValue test 4', () => {
    const result = calculateValue(100, 20, false, false);
    expect(result.value).toBe(120);
    expect(result.message).toBe('result is greater then 100');
  });

});

Where to go now? As a next step, it is highly recommended to automate running the tests in your development process.

One very easy and convenient way to do so is GitHub Actions. See my article on setting up a simple CI/CD pipeline with GitHub Actions, where it is also explained how to include Coveralls for monitoring coverage stats.

100% code-coverage – worth the effort?

You may now ask yourself whether reaching 100% coverage is always the ultimate target? The answer is not easy and depends on many parameters of your project, as you may have guessed 😉

In general, it is a good approach to try to cover everything by following a pragmatic and affordable plan: write straightforward test cases, then target the remaining uncovered and implicit branches. Normally, you should achieve quite high coverage of 80-90% or even more with that in a reasonable time.

Before you start investing another big amount of work for covering the rest up to 100%, keep in mind that…

  • As always in projects, the “Last Ten Percent” rule will kick in. It is very likely that you’ll spend more time covering the last 10% than you did for the first 90%.
  • There are limitations in the test framework itself: although Jest is a very popular and great framework, it is not fully free of technical errors. These might prevent you from testing properly or even produce false negatives. At the time of writing this article, Jest 27.4.5 was the latest version, with roughly 1,500 issues and 200 pull requests logged at npmjs.com.
  • You might have technically complicated edge-cases where you will spend more time finding out how to implement the test itself than caring about the test logic. One example could be testing a process exit in a CLI project.
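To illustrate the process-exit example from the list above, here is a minimal sketch, runnable in plain Node, of how an exit call can be captured instead of terminating the run. In a real Jest suite you would typically use jest.spyOn(process, 'exit') with a mock implementation instead; the fatal() helper below is hypothetical.

```javascript
// Hypothetical CLI helper that terminates the process on a fatal error.
function fatal(message) {
  console.error(message);
  process.exit(1);
}

// Temporarily replace process.exit with a recorder so the call can be asserted.
const originalExit = process.exit;
let capturedCode = null;
process.exit = (code) => { capturedCode = code; };

fatal('something went wrong');

// Restore the real process.exit afterwards.
process.exit = originalExit;

console.log('captured exit code:', capturedCode); // captured exit code: 1
```

The same pattern (swap, exercise, restore) is what Jest’s spy/mock utilities automate for you.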

When reaching such points, you should always ask yourself: Is it beneficial for my project to spend more time here?

As a rule of thumb, I would recommend trying to reach an acceptable level (e.g. >= 80%) of code coverage by executing a pragmatic and straightforward test implementation plan. While doing so, focus on complete, high-quality test cases where all pre- and post-conditions are checked to avoid side effects, especially with regard to later changes. That’s more important than full coverage. Then stop and don’t spend more time implementing tests just to polish the coverage stats. Unless your employer or customer demands more, or time & budget are unlimited in your project 😉
