Migrating eslintrc.json to eslint.config.js in a CommonJS project

A practical end-to-end guide for migrating an existing .eslintrc.json config (ESLint v8 and before) to the new flat config file eslint.config.js (ESLint v9 and above) in a CommonJS Node.js project, including linting of unit tests with Jest.

This article comes with a public GitHub example repository that lets you follow along and easily switch between the before/after migration states.

Starting point: existing eslintrc configuration

In this migration guide we’ll use a very standard ESLint configuration set which should cover basic linting for the vast majority of your projects.

  • Configure ESLint for use with Node.js and CommonJS using a specified ECMA version
  • Ensure proper linting of Jest tests
  • Use a predefined set of linting rules as a starting point
  • Ensure correct linting of basics like indentation, unused variables, use of globals, semicolons and quote style

So far, the usual way to configure ESLint in Node.js was to place an .eslintrc.json file in the root folder of the project. The below .eslintrc.json covers all the mentioned points and serves as the basis for this guide.

{
    "env": {
        "node": true,
        "commonjs": true,
        "es6": true,
        "jest": true
    },
    "extends": "eslint:recommended",
    "globals": {
        "Atomics": "readonly",
        "SharedArrayBuffer": "readonly"
    },
    "parserOptions": {
        "ecmaVersion": 2018
    },
    "rules": {
        "indent": [
            "error",
            4,
            {
                "SwitchCase": 1
            }
        ],
        "quotes": [
            "error",
            "single"
        ],
        "semi": [
            "error",
            "always"
        ],
        "no-unused-vars": [
            2,
            {
                "args": "after-used",
                "argsIgnorePattern": "^_"
            }
        ]
    }
}

Additionally, you might have a .eslintignore file placed in the root folder of the project to exclude files and paths from linting, e.g. to exclude the two directories conf and coverage – like so:

conf/
coverage/

Errors after upgrading ESLint to v9

With this configuration in place you’ll notice that your environment, in this case VSCode, comes up with an error after upgrading to ESLint v9. The highlighting of linting errors and warnings also no longer works.

[Screenshot: ESLint config error shown in VSCode]

Having a look in the ESLint output quickly gives you the reason why.

[Info  - 20:51:44] ESLint server is starting.
[Info  - 20:51:44] ESLint server running in node v20.14.0
[Info  - 20:51:44] ESLint server is running.
[Info  - 20:51:46] ESLint library loaded from: /home/tsmx/projects/weather-tools/node_modules/eslint/lib/api.js
(node:4117) ESLintIgnoreWarning: The ".eslintignore" file is no longer supported. Switch to using the "ignores" property in "eslint.config.js": https://eslint.org/docs/latest/use/configure/migration-guide#ignoring-files
(Use `code --trace-warnings ...` to show where the warning was created)
[Error - 20:51:46] Calculating config file for file:///home/tsmx/projects/weather-tools/weather-tools.js) failed.
Error: Could not find config file.

Starting with ESLint v9.0.0, the default configuration format was changed to the flat config, and .eslintrc.json as well as .eslintignore became deprecated. Although it’s still possible to continue using .eslintrc.json, it’s recommended to switch to the new file format to be future-proof.

Migrating to the new flat file configuration

For a CommonJS project, the new flat file configuration is a normal JavaScript file called eslint.config.js which is placed in the root folder and simply exports an array of ESLint configuration objects via module.exports.
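
In its most basic shape this is just a skeleton like the following sketch; the concrete configuration objects will be filled in over the next steps.

// eslint.config.js - minimal skeleton of a flat config in a CommonJS project
module.exports = [
    // one or more ESLint configuration objects
    { rules: { semi: 'error' } }
];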

Installing needed dev dependencies

The flat config doesn’t contain an env section anymore that lets you specify that ESLint is running in Node.js or enable Jest features for correct linting of unit-test files. Also, the recommended ruleset has been moved out to its own module.

To include all these features in the new ESLint v9 configuration, you’ll need to install the following dependencies in your project.

  • @eslint/js for using the recommended ruleset as a basis
  • eslint-plugin-jest to enable proper linting of Jest test files
  • globals to make ESLint aware of common global variables for Node.js and Jest so they are not marked as undefined

As these dependencies are only used for ESLint, you should install them – like ESLint itself – as dev dependencies in your Node.js project.

# npm install @eslint/js eslint-plugin-jest globals --save-dev

Creating eslint.config.js

Next, in the root folder of your Node.js project, create an eslint.config.js file with the following contents. This will lead to an almost identical, yet more customizable, linting behaviour compared to the old .eslintrc.json.

const { configs } = require('@eslint/js');
const jest = require('eslint-plugin-jest');
const globals = require('globals');

module.exports = [
    configs.recommended,
    {
        languageOptions: {
            ecmaVersion: 2018,
            sourceType: 'commonjs',
            globals: { 
                ...globals.node, 
                ...globals.jest, 
                Atomics: 'readonly', 
                SharedArrayBuffer: 'readonly' 
            }
        },
        rules: {
            semi: 'error',
            quotes: ['error', 'single'],
            indent: ['error', 4, { 'SwitchCase': 1 }],
            'no-unused-vars':
                [
                    'warn',
                    {
                        'varsIgnorePattern': '^_',
                        'args': 'after-used',
                        'argsIgnorePattern': '^_'
                    }
                ]
        },
        ignores: ['conf/', 'coverage/']
    },
    {
        languageOptions: {
            globals: { ...globals.jest }
        },
        files: ['test/*.test.js'],
        ...jest.configs['flat/recommended'],
        rules: {
            ...jest.configs['flat/recommended'].rules
        }
    }
];

That’s already it. Linting should now work again as expected and you can safely delete the old .eslintrc.json as well as .eslintignore in your project.

Breakdown of the new flat file configuration

As noted before, the flat file configuration is simply an exported array of ESLint configuration objects. Based on our eslintrc.json we want to migrate, this array will have three entries.

Part 1: Importing recommended ESLint ruleset

The first element of the configuration array should be the recommended ruleset delivered by the @eslint/js package. This line is the replacement for the "extends": "eslint:recommended" entry in the old eslintrc.

configs.recommended

Part 2: Custom rules for normal JavaScript code files and files to be ignored

The next object in the configuration array holds all our own custom rules and properties for normal JavaScript code files as well as the patterns of all files/folders that should be ignored by ESLint.

{
    languageOptions: {
        ecmaVersion: 2018,
        sourceType: 'commonjs',
        globals: { 
            ...globals.node, 
            ...globals.jest, 
            Atomics: 'readonly', 
            SharedArrayBuffer: 'readonly' 
         }
    },
    rules: {
        semi: 'error',
        quotes: ['error', 'single'],
        indent: ['error', 4, { 'SwitchCase': 1 }],
        'no-unused-vars':
            [
                'warn',
                {
                    'varsIgnorePattern': '^_',
                    'args': 'after-used',
                    'argsIgnorePattern': '^_'
                }
            ]
    },
    ignores: ['conf/', 'coverage/']
}

This section is quite self-explanatory when compared to the old eslintrc configuration. The key differences are:

  • There is no env section anymore, most of that configuration is now located under languageOptions.
  • Note that in the globals object all Node.js and Jest globals were added explicitly by using the corresponding arrays provided in the globals package. This ensures that all common Node.js globals like process and Jest globals like expect are not treated as undefined variables. The latter makes sense if you create some kind of test-utils files which use Jest commands but are not unit-test files themselves. See the example GitHub repository for such an example (/tests/test-utils.js).
  • There is now an ignores property that takes an array of files/folders to be ignored by ESLint. The syntax of the entries is the same as it was in .eslintignore, which is now obsolete. For more details see ignoring files.

The linting rules themselves are largely unchanged in the new configuration style.

Part 3: Rules for linting Jest test files

The last needed configuration object is for correct linting of Jest tests. The very simple "jest": true option is gone. Instead, we’ll need to import the eslint-plugin-jest package and use the recommended rules from it. In the example, all Jest test files are located in the project’s folder test/ and have the common extension .test.js.

The resulting configuration object for our eslint.config.js is:

{
    languageOptions: {
        globals: { ...globals.jest }
    },
    files: ['test/*.test.js'],
    ...jest.configs['flat/recommended'],
    rules: {
        ...jest.configs['flat/recommended'].rules
    }
}

This ensures a proper linting of all Jest tests located under test/. If you have test files located in other/additional locations, simply add them to the files property.

Note: If you use Node.js globals like process in your Jest tests, you should add ...globals.node to the globals property. This prevents ESLint from reporting those globals as undefined variables.
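
The adjusted configuration object for the test files could then look like this:

{
    languageOptions: {
        globals: { ...globals.jest, ...globals.node }
    },
    files: ['test/*.test.js'],
    ...jest.configs['flat/recommended'],
    rules: {
        ...jest.configs['flat/recommended'].rules
    }
}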

Example project on GitHub

To see a practical working example of a migration before and after, clone the eslintrc-to-flatfile GitHub repository and browse through the branches eslint-v8 and eslint-v9. The code files contain several example errors in formatting, quoting etc. that should be highlighted as linting errors or warnings. For details see the code comments.

# clone the example project
git clone https://github.com/tsmx/eslintrc-to-flatfile.git

# check out/switch to the original ESLint v8 branch using eslintrc.json
git checkout eslint-v8
npm install

# check out/switch to the migrated ESLint v9 branch using new eslint.config.js
git checkout eslint-v9
npm install

Using SQL Developer with PostgreSQL

Quick guide on how to connect to a PostgreSQL database using Oracle SQL Developer.

Although its primary use is for Oracle DB, SQL Developer is also a good tool for managing other databases like PostgreSQL, and you can benefit from the familiar UX/UI. I assume you already have a local installation of SQL Developer and a PostgreSQL up & running at localhost. If you don’t have a local PostgreSQL DB, have a look at this article on how to achieve this in minutes using Docker.

Preparing SQL Developer to connect to PostgreSQL

To enable SQL Developer to connect to a Postgres DB, get the official PostgreSQL JDBC driver first. Save the downloaded jar file in an appropriate folder like /opt/postgres and set the permissions so that SQL Developer is able to read the file.

In SQL Developer navigate to Tools --> Preferences and there to Database --> Third Party JDBC Drivers. Click on Add Entry and search for the Postgres JDBC jar.

[Screenshot: Adding the PostgreSQL JDBC driver under Third Party JDBC Drivers]

After restarting SQL Developer you have PostgreSQL available in the database type dropdown for a new connection.

[Screenshot: PostgreSQL available as database type for a new connection]

Create a connection to the PostgreSQL database

Having that, create a new PostgreSQL connection to localhost port 5432 with user postgres and the default password postgres. If you have started PostgreSQL using Docker, provide the password set in the POSTGRES_PASSWORD variable of the Docker run command.

[Screenshot: New PostgreSQL connection settings]

After saving and connecting you are ready to use your PostgreSQL DB in SQL Developer.

[Screenshot: Connected PostgreSQL database in SQL Developer]

Connecting when username does not equal the database name

By default, the connection in SQL Developer is made to a PostgreSQL database named exactly like the user.

Suppose you have a user testuser (without an own database of the same name) and want to connect to a database called testdb. After entering the credentials for that user, neither the Choose Database dropdown was populated nor did the button itself work for me.
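
If you want to reproduce this scenario, such a user and database can be created beforehand in psql, for example (the password is just a placeholder):

CREATE USER testuser WITH PASSWORD 'Test123$';
CREATE DATABASE testdb OWNER testuser;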

The trick – found in this article on StackOverflow – when connecting to a database with a different name is to add the database name after a slash to the hostname and finally add a question mark at the end, like so: localhost/testdb?.

[Screenshot: Connection settings with the database name appended to the hostname]

That’s it. Have fun using SQL Developer with your PostgreSQL DB šŸ™‚

COPY data between localhost and PostgreSQL running with Docker

A short article showing how to easily use the COPY command to move data between localhost and a PostgreSQL database running with Docker.

With the COPY command PostgreSQL provides an easy way to move data between the database and the local file system. In this article we show how to use this command when PostgreSQL is running with Docker. This is especially useful if you want to move large amounts of data, e.g. populating a database with mass-data.

Mounting a data transfer volume to the PostgreSQL container

In this article I assume you are familiar with how to run PostgreSQL with Docker using a separate user to manage permissions for locally mounted volumes. In our example the user is named postgres and has ID 1002.

To be able to transfer data between PostgreSQL in Docker and the host file system, we’ll need a local directory mounted as a volume to the database container. In this article we’ll use /tmp/postgres for that. Any other directory would be good too as long as the PostgreSQL Docker container can access it.

First, we’ll create the local directory with appropriate permissions. This may require root privileges on your system.

$ cd /tmp
$ mkdir postgres
$ chown postgres:postgres postgres/

Having that, let’s mount the new directory as a volume to the PostgreSQL container. For that we’ll add the option -v /tmp/postgres:/tmp/postgres:Z to the Docker run command – for details refer to the guide on running PostgreSQL with Docker. This maps /tmp/postgres from the Docker container to our locally created /tmp/postgres directory.

Note we make use of the ā€œ:Zā€ option for the mounted volume here to overcome potential issues with SELinux. You might not need that depending on your SELinux configuration. For more details also refer to the corresponding section in our MongoDB docker guide.

The final command to launch PostgreSQL with local storage (recommended) and the mounted data transfer directory for COPY’ing data is:

$ docker run -d \
  --name mypostgres \
  --user 1002 \
  -e POSTGRES_PASSWORD=Test123$ \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v /var/db/postgres:/var/lib/postgresql/data:Z \
  -v /tmp/postgres:/tmp/postgres:Z \
  -p 5432:5432 \
  postgres:16.2

That’s it – you now have a PostgreSQL database running in Docker with directory /tmp/postgres mounted to localhost, ready for transferring data.

Extract data to localhost with COPY TO

First, we’ll use COPY to transfer data from a PostgreSQL table to localhost. For that we connect to the database running with Docker and create a simple table tasks having two columns. Then we insert some rows and use COPY to extract the data to a local CSV file.

$ psql -h localhost -p 5432 -U postgres          
Password for user postgres: 

postgres=# CREATE TABLE tasks (id bigint PRIMARY KEY, description varchar(100));
CREATE TABLE
postgres=# INSERT INTO tasks VALUES (1, 'My first Task');
INSERT 0 1
postgres=# INSERT INTO tasks VALUES (2, 'Task Nr. 2');
INSERT 0 1
postgres=# COPY tasks TO '/tmp/postgres/export.csv' DELIMITER ',' CSV HEADER;
COPY 2
postgres=#

Back on localhost after logging out from PostgreSQL we can verify that the data has been written to export.csv in the directory /tmp/postgres of the local file system.

postgres=# quit
$ cat /tmp/postgres/export.csv 
id,description
1,My first Task
2,Task Nr. 2

Brilliant, that’s working as expected.
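
As a side note, COPY also accepts a query instead of a table name, which is handy if only a subset of the data should be exported – a small sketch (the file name is just an example):

postgres=# COPY (SELECT * FROM tasks WHERE id > 1) TO '/tmp/postgres/export-subset.csv' DELIMITER ',' CSV HEADER;
COPY 1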

Insert data from localhost with COPY FROM

Next, we’ll go the other way round and insert data from a local CSV file into a table. For that we place a file import.csv in the transfer directory on localhost.

$ cat /tmp/postgres/import.csv 
id,description
100,Task-100
200,Task-200
300,Task-300

Having that, we connect back again to our PostgreSQL database, truncate the tasks table and populate it with the data from the import file using the COPY command.

$ psql -h localhost -p 5432 -U postgres
Password for user postgres: 

postgres=# TRUNCATE TABLE tasks;
TRUNCATE TABLE
postgres=# COPY tasks FROM '/tmp/postgres/import.csv' DELIMITER ',' CSV HEADER;
COPY 3
postgres=# SELECT * FROM tasks;
 id  | description 
-----+-------------
 100 | Task-100
 200 | Task-200
 300 | Task-300
(3 rows)

postgres=# 

That’s already it. We’ve successfully moved data between the local file system and our dockerized PostgreSQL back and forth using the COPY command.

Running PostgreSQL with Docker on Linux using local persistent data storage

A quick guide demonstrating how to get PostgreSQL up & running in minutes under Linux using Docker. We’ll create local persistent data storage using a Docker volume and connect from localhost with psql.

Preparing local data storage

By default, running PostgreSQL with Docker would store all the data within the container, meaning it would not survive a rebuild of the container. To overcome that, let’s create a local directory that will be mounted as a Docker volume where PostgreSQL can save all of the data to be persistent regardless of container rebuilds etc.

As a best practice, let’s first create a new user called postgres to manage the permissions for all local directories mounted as a volume by the PostgreSQL Docker container. To do so, run the following commands with root privileges.

$ useradd -M postgres
$ usermod -L postgres

This creates a new user postgres having no home directory and a locked password preventing unwanted use. Next, we create a directory (I prefer something under /var/db) for the PostgreSQL data volume and assign it to the newly created user.

$ cd /var/db
$ mkdir postgres
$ chown -R postgres:postgres /var/db/postgres

That’s it for the local preparation. Now let’s move on to start PostgreSQL with Docker. All following steps can be done without having root privileges.

Running PostgreSQL with Docker

To run PostgreSQL with Docker, we’ll first download the official Postgres Docker image.

$ docker pull postgres:16.2

Before starting the database, we need to figure out the user ID of the postgres user for passing it to Docker. This will ensure the running container has sufficient rights to access the local data directory mounted as a volume.

$ id -u postgres                                
1002

Now we are ready to run Postgres with Docker using the following command:

$ docker run -d \
  --name mypostgres \
  --user 1002 \
  -e POSTGRES_PASSWORD=Test123$ \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v /var/db/postgres:/var/lib/postgresql/data:Z \
  -p 5432:5432 \
  postgres:16.2

Let’s break down the options of the docker run command in detail.

  • -d – Run the container in background mode.
  • --name mypostgres – [optional] Name the container mypostgres.
  • --user 1002 – Set the user ID the container is running with to ensure appropriate rights for accessing the mounted volumes.
  • -e POSTGRES_PASSWORD=Test123$ – Set the environment variable for the Postgres admin password.
  • -e PGDATA=/var/lib/postgresql/data/pgdata – Set the environment variable for the Postgres data directory.
  • -v /var/db/postgres:/var/lib/postgresql/data:Z – Mount local directory /var/db/postgres as a volume to container directory /var/lib/postgresql/data for persistent data storage on your localhost.
  • -p 5432:5432 – Map the Postgres container port 5432 to local port 5432.
  • postgres:16.2 – The container image to run.

Note here that the PGDATA variable must be set to something different from /var/lib/postgresql to support mounting to a local persistent directory. For details refer to the corresponding Postgres Docker image documentation.

Also you may need the “:Z” option for the volume mounts depending on your SELinux configuration. For an in-depth explanation refer to the Docker MongoDB guide.

Now let’s check if the container is up & running.

$ docker ps | grep postgres
b020097a3a6f   postgres:16.2         "docker-entrypoint.sā€¦"   2 days ago      Up 9 seconds    0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   mypostgres

Looks good – we’re now ready to connect and use the PostgreSQL Docker instance.

Connecting from localhost with psql

Having the container running, let’s connect with the standard Postgres CLI tool psql. If you haven’t installed it already, you should opt for a local installation. This is done by installing the postgresql package using dnf with admin rights. If you are on a Linux distribution other than Fedora, use your respective package manager.

$ dnf install postgresql

After psql is installed, run the following command as a normal user to connect to the Postgres container from localhost.

$ psql -h localhost -p 5432 -U postgres
Password for user postgres:  # enter the password specified in the docker run command

postgres=#

That’s already it – you’re connected to PostgreSQL running with Docker šŸ™‚
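
To convince yourself that the data really is persistent, you can remove the container and start it again with the same docker run command from above – any databases and tables created before will still be there, since they live in /var/db/postgres on the host:

$ docker stop mypostgres
$ docker rm mypostgres
$ # ...repeat the docker run command from above...
$ psql -h localhost -p 5432 -U postgres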

If you want to use other database systems with Docker under Linux, also check out the guides for MongoDB and Oracle.

Built and signed on GitHub Actions – publishing npm packages with provenance statements

A quick guide on how to set up a GitHub Actions workflow to publish an npm package including the provenance badge and section on npmjs.com.

So far, you may have published your npm packages by simply invoking npm publish. Using GitHub actions provides a more elegant way and also enables you to get this nice green checkmark badge behind the version number…

[Screenshot: npm version number with the provenance checkmark badge]

…as well as the provenance section for your packages at npmjs.com…

[Screenshot: Provenance section of a package on npmjs.com]

This provides an extra level of security by giving evidence of your package’s origin using sigstore. In this article we’ll show how to set up a simple GitHub Actions workflow to publish your package including the signed provenance details.

Prerequisites

To follow along this guide, you should have:

  • An existing package published at npmjs.com
  • The package’s source code in a GitHub repository
  • The GitHub CLI installed and working

Generating a token for publishing packages on npmjs.com

First, you will need an access token for npmjs.com. For that, log in at npmjs.com and head over to the Access Tokens section of your account. Then create a Classic Token of type Automation with any name, e.g. gh-actions-publish. On creation, the token value will be shown once and never again, so make sure you get the value and save it in a secure place. After all is done, you should see the newly created token in your account’s token list, like so.

[Screenshot: Automation token in the npmjs.com token list]

Using this token will enable your GitHub actions workflow to publish new package versions including bypassing 2FA.

Storing the npm token on GitHub

Next, store the generated npm token value as a secret in your GitHub repository. For that, head over to Settings -> Secrets and variables and press New repository secret. Enter a name and the token value.

[Screenshot: Creating the NPM_TOKEN repository secret on GitHub]

Here the created secret has the name NPM_TOKEN. Having that, it can be referenced in GitHub actions workflow definitions by ${{ secrets.NPM_TOKEN }}.

Setting up a GitHub action workflow for publishing on npmjs.com

Next step is to add a GitHub actions workflow definition for publishing your npm package at npmjs.com to the repository. For that, add the following YAML as a new file in .github/workflows, e.g. .github/workflows/npm-publish.yml, in the repository.

name: npm-publish
on:
  workflow_dispatch: 
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-node@v3
      with:
        node-version: '18.x'
        registry-url: 'https://registry.npmjs.org'
    - run: npm ci
    - run: npm publish --provenance --access public
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

This action workflow named npm-publish will deploy a new version of your package at the npm registry with provenance statements. If your package is private, you can omit the --access public option in the publishing step.

Running the workflow

With workflow_dispatch: in the on section, the provided GitHub action does not have any automatic trigger and will only run when started manually. This might be preferred to have full control of when a new package version will be published. If you rather want the publish workflow to run automatically on certain events in your repo, check out the possible triggers for the on section.
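
For example, to publish automatically whenever a release is published in the repository, the on section could be changed like this (a sketch – pick whatever trigger fits your release process):

on:
  release:
    types: [published]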

To start the workflow manually and publish the package, simply run…

$ gh workflow run npm-publish

…in the project directory. Please make sure the version number of your package was updated and all changes are committed & pushed before invoking the publishing action.
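
A typical sequence before triggering the workflow could look like this (just a sketch, adapt it to your own release process):

# bump the version, commit and push including the new tag
npm version patch
git push && git push --tags

# trigger the publishing workflow
gh workflow run npm-publish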

After the workflow has completed successfully, you should see the new version published on npmjs.com including the checkmark badge behind the version and the provenance details section.

In the GitHub actions logs for the npm publish step of the executed workflow you can also see that the provenance statement was signed and transferred to sigstore, like so…

[Screenshot: GitHub Actions log of the npm publish step]

That’s it.

Happy coding šŸ™‚

Running Oracle DB in Docker using the official container image – in minutes, free edition available

Want to run an Oracle database for developing or experimenting for free with minimal setup effort? This article shows how you can get an Oracle DB up & running using Docker in a few minutes.

Searching around for running an Oracle DB on Docker you’ll likely be navigated to the Oracle Database on Docker site on GitHub. This provides comprehensive guides and Dockerfiles to build your own images… interesting, but unfortunately not a quick and simple solution to pull & run a database. So let’s explore an easier way…

Pulling the official Oracle DB container image

Oracle also provides pre-built Docker images in their own registry located at container-registry.oracle.com.

Navigating to the Database section, you can find images for various versions of the Oracle database like Enterprise, Express or Free. In this article we’ll use the Free version.

By choosing a repository you’ll get additional information on how to pull the image and what parameters are offered for customizing.

[Screenshot: Database section of the Oracle container registry]

To use those Docker images, you’ll need an Oracle account to log on to the container registry. If you don’t have one, go ahead and create one for free.

Then log on to the Oracle container registry and pull the database image, e.g. the latest free edition.

$ docker login container-registry.oracle.com

$ docker pull container-registry.oracle.com/database/free:latest

$ docker image ls | grep oracle                                                                                                                
container-registry.oracle.com/database/free    latest    39cabc8e6db0    2 months ago    9.16GB

Having this, you are ready to start the database.

Starting the Oracle database with Docker

Before we start a new container running the database, it’s recommended to create a local directory where all the data files can persist outside the container. By doing so, the data persistence is decoupled from the container’s lifecycle. In our example the local directory /opt/oracle/oradata was created with write permissions for any user.
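
Creating that directory could look like this, run with root privileges (world-writable is the simplest choice for a local playground; tighten the permissions if needed):

$ mkdir -p /opt/oracle/oradata
$ chmod 777 /opt/oracle/oradata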

Now you can start up the Oracle DB like so:

$ docker run --name oracle \                                                                                                                   
-p 1521:1521 \
-e ORACLE_PWD=Test123 \
-v /opt/oracle/oradata:/opt/oracle/oradata \
container-registry.oracle.com/database/free:latest

This will start a container named oracle binding it to port 1521 and setting the SYS password of the database to Test123. The local directory /opt/oracle/oradata will be mounted directly in the container ensuring that database files are persistent.

Note 1: If you encounter any permission problems with the data directory under Linux, SELinux settings are likely the cause. To overcome this, a quick solution is to pass the :Z option to the volume parameter:
-v /opt/oracle/oradata:/opt/oracle/oradata:Z
For more details on that, just have a look at the article about moving MongoDB to Docker.

Note 2: On first startup when the mounted oradata directory is empty, Oracle will create a completely new database in there which might take some minutes.

After the startup is completed, you should see DATABASE IS READY TO USE in the output. The output also gives you the SID of the started DB, which is FREE in this case.

Awesome, now let’s connect and use the database…

Connecting to the database

After startup is completed you can connect to the database at localhost:1521 as SYS using the supplied password Test123.

Using SQL Developer…

…or using sqlplus

$ sqlplus sys/Test123@localhost as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Fri Nov 10 21:08:06 2023
Version 21.12.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.


Connected to:
Oracle Database 23c Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.3.0.23.09

SQL> 

…or any other DB tool you want.

That’s already it. Have fun playing around with Oracle DB šŸ™‚

Integrating GCP Secret Manager with App Engine environment variables

Showing a convenient way to use Secret Manager to securely pass sensitive data as environment variables to Google App Engine (GAE) services running with Node.js.

Unfortunately, App Engine doesn’t deliver an out-of-the-box solution for passing env vars from Secret Manager like it’s available in Cloud Functions by using the --set-secrets option of gcloud functions deploy.

In this article, a convenient way to achieve this using a simple npm package is shown. The goals are:

  • Direct use of Secret Manager secret references in the standard App Engine deployment descriptor.
  • Minimal impact on the code.
  • No vendor and platform lock-in, no hard dependency to App Engine. The solution should still run in any other environment.
  • Should work with CommonJS as well as ESM/ECMAScript.

Let’s go to it…

Integrating Secret Manager with App Engine

Setting-up a secret in GCP Secret Manager

First, create one or more secrets in Secret Manager of your GCP project. Here, the secret is named MY_SECRET and has a reference path of projects/100374066341/secrets/MY_SECRET.

[Screenshot: Secret MY_SECRET in GCP Secret Manager]

For a more detailed guide on how to enable Secret Manager and creating secrets please refer to this section about secure configuration management in Cloud Functions.

Granting Secret Manager rights to the GAE service account

In order to resolve secrets from Secret Manager, the service account principal running your App Engine service – by default PROJECT_ID@appspot.gserviceaccount.com – must have at least the Secret Manager Secret Accessor role. For more details refer to the Secret Manager access control documentation.

To do so, go to IAM in the console and edit the App Engine principal. There, click “Add another role” and search for Secret Manager Secret Accessor and save, like so.

[Screenshot: Adding the Secret Manager Secret Accessor role to the App Engine principal in IAM]

Referencing a secret in app.yaml

In the standard app.yaml deployment descriptor of your App Engine service, create an appropriate environment variable in the env_variables section containing the secret’s reference path. Like so…

service: my-service
runtime: nodejs20

env_variables:
  MY_SECRET: "projects/100374066341/secrets/MY_SECRET/versions/latest"

Note that you have to add /versions/latest to reference the latest version of the secret or /versions/x to reference the version with number x, e.g. /versions/2. For details see referencing secrets.

Add the gae-env-secrets package to your project

Next, add the gae-env-secrets package as a dependency in your project. This will provide the functionality to retrieve Secret Manager values for environment variables used in App Engine.

npm i gae-env-secrets --save

Use the Secret Manager value in your code

Import the gae-env-secrets package in your code and call the async getEnvSecrets function out of it. Once completed, you’ll be able to access the values stored in GCP Secret Manager by simply accessing the env vars used in the deployment descriptor. Works with CommonJS as well as ESM.

CommonJS

const { getEnvSecrets } = require('gae-env-secrets');

getEnvSecrets().then(() => {
  const secret = process.env['MY_SECRET']; // value of MY_SECRET from Secret Manager
});

ESM

import { getEnvSecrets } from 'gae-env-secrets';

await getEnvSecrets();
const secret = process.env['MY_SECRET']; // value of MY_SECRET from Secret Manager

That’s it. You can now seamlessly use Secret Manager secret values in your GAE App Engine services by referencing env vars.

To learn more on how the gae-env-secrets package is working and how its usage can be customized, read on.

Under the hood

Referencing secrets in the deployment descriptor

To reference secrets in the app.yaml deployment descriptor, you’ll need to pass the versioned reference of the secret from Secret Manager. This has the form of…

projects/[Project-Number]/secrets/[Secret-Name]/versions/[Version-Number|latest]

To retrieve the reference path of a secret’s version in Secret Manager simply click "Copy resource name" on the three dots behind a version. Specifying latest as the version instead of a number will always supply the highest active version of a secret.

[Screenshot: Copying the resource name of a secret version]

Then pass the secret’s reference to the desired variable in the env_variables block of the deployment descriptor, like so…

env_variables:
  SECRET_ENV_VAR: "projects/100374066341/secrets/MY_SECRET/versions/1"

For more details, refer to the app.yaml reference.

Determining the runtime-environment

gae-env-secrets will evaluate environment variables to detect if it is running directly in App Engine. If the following env vars are both present, the library assumes it’s running in GAE and substitutes relevant env vars with their respective secret values from Secret Manager:

  • GAE_SERVICE
  • GAE_RUNTIME

If these two env vars are not present, the library won’t do anything. So it should be safe to call it unconditionally in your code without interfering with local development, testing etc.

To simulate running under GAE, simply set those two env vars to anything.
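
For example, in a local shell session (the concrete values don’t matter, only the presence of both variables does):

export GAE_SERVICE=local-test
export GAE_RUNTIME=nodejs20
node index.js # or however you start your service locally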

Substituting env vars from Secret Manager

If running under GAE is detected, calling getEnvSecrets will iterate through all env vars and substitute the value with the corresponding secret derived from Secret Manager if one of the following conditions is true:

  • The name of the env var ends with _SECRET (default suffix) or another deviating suffix passed via the options
  • Auto-detection is enabled via options and the value of the env var matches a Secret Manager secret reference

For accessing the Secret Manager, the library uses the package @google-cloud/secret-manager.

Error handling

By default and for security reasons, the library will throw an error if substituting an env var’s value from Secret Manager fails for any reason…

  • secret reference is invalid
  • secret is inactive or not present
  • invalid version number
  • missing permissions to access Secret Manager
  • or else…

So make sure to use an appropriate error handling with try/catch or .catch().
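
A minimal sketch of such error handling (ESM, assuming a top-level await context):

import { getEnvSecrets } from 'gae-env-secrets';

try {
  await getEnvSecrets();
} catch (err) {
  // secret resolution failed - better to stop than to run with unresolved secret references
  console.error('Could not resolve secrets from Secret Manager', err);
  process.exit(1);
}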

To change this behaviour, use the strict property available in the options.

Passing options to getEnvSecrets

You can pass an options object when calling getEnvSecrets to customize the behaviour. The following options are available.

suffix

Type: String Default: _SECRET

All env vars whose name ends with the suffix will be substituted with secrets from Secret Manager.

Pass another value to use a suffix of your choice.

// will substitue all env vars ending with '_KEY'
getEnvSecrets({ suffix: '_KEY' });

strict

Type: Boolean Default: true

By default strict is true which means that if a secret cannot be resolved an error will be thrown.

Setting strict to false will change this behaviour so that the error is only written to console.error. The value of the env var(s) where the error occurred will remain unchanged.

// error will only be logged and respective env vars remain unchanged
getEnvSecrets({ strict: false });

autoDetect

Type: Boolean Default: false

The autoDetect feature enables automatic detection of env var values that contain a Secret Manager secret reference for substitution, regardless of the suffix and the env var’s name.

This feature is additional to the provided suffix, meaning that all env vars ending with the suffix AND all automatically detected ones will be substituted.

To turn on this feature, pass true in the options object.

// turn on autoDetect
getEnvSecrets({ autoDetect: true });

Example: Having this feature enabled, the following env var would be substituted with version 2 of the secret MY_SECRET regardless of the suffix because it contains a Secret Manager reference as its value.

env_variables:
  VAR_WITH_ANY_NAME: "projects/00112233/secrets/MY_SECRET/versions/2"

Considerations & limitations when using gae-env-secrets

Please keep in mind the following points when using this solution.

  • Since the getEnvSecrets function is async you’ll need to await the result or chain on using .then to be able to work with the secret values. CommonJS does not support top-level await.
  • As the env var secrets are resolved at runtime of your code, any top-level code of other modules that is executed upon require/import cannot make use of the secret values and instead would see the secret references as values of the env vars.
  • Resolving the secrets from Secret Manager using the underlying Google library will usually take 1-2 seconds.

Summary

This article shows how to integrate Secret Manager easily with App Engine by using one simple package and a few lines of code. No vendor or platform lock-in is created.

However, once Google supplies an out-of-the-box feature to make the integration work like in Cloud Functions, switching to it should be considered to maybe overcome the limitations of this solution, e.g. secret resolution at runtime.

Happy coding šŸ™‚

Secure configuration management for a GCP cloud function in Node.js

Creating a convenient and production-grade configuration management for a GCP cloud function in Node.js using Secret Manager and the secure-config package. Includes a complete example project on GitHub.

Goals and general setup of the cloud function configuration

Like in a traditional app, it’s very common that you’ll need sensitive configuration data in a GCP cloud function, e.g. a DB username and password. This article shows a proper way of achieving this by leveraging managed cloud services and an additional Node.js package. The goals of this setup are…

  • Industry-standard AES encryption and non-exposure of any needed configuration value
  • Full JSON flexibility for the configuration like nested values, arrays etc.
  • Use of managed GCP services without losing the capability to run on other platforms, e.g. local testing, on a traditional server, Docker, Kubernetes or else – no vendor lock-in

To achieve this, we’ll be using two components for the cloud functions configuration setup:

  1. The secure-config package to securely store the complete configuration as an encrypted JSON file. Uses strong AES encryption and standard JSON, works with nearly any runtime environment.
  2. GCP Secret Manager for secure storage and passing of the secure-config master key to the cloud function by using an environment variable.

If you wonder whether using Secret Manager itself without any additional package may be sufficient, take a look at the further thoughts below.

Steps to implement the configuration management in your cloud function

Install secure-config and create the config files

Install the secure-config package by running:

npm install @tsmx/secure-config --save

Having this, create a conf subfolder in your project with the configuration files. In this tutorial we’ll create two files, one for local testing purposes without any encryption and a production version which will be used in GCP with an encrypted secret.

The unencrypted config file will be conf/config.json with the following simple content:

{
  "secret": "secret-config-value"
}

To create the encrypted production version I recommend using the secure-config-tool. If you don’t want to install this tool, refer to the secure-config documentation on how to generate encrypted entries without it.

For simplicity I assume you have secure-config-tool installed and we will use 00000000000000000000000000000000 (32x 0) as the encryption key. Having this, create the encrypted configuration for production of the cloud function as follows…

cd conf/ 
export CONFIG_ENCRYPTION_KEY=00000000000000000000000000000000
secure-config-tool create -nh -p "secret" ./config.json > ./config-production.json

This will create config-production.json in the conf directory with an encrypted secret, like so:

{ 
  "secret": "ENCRYPTED|a2890c023f1eb8c3d66ee816304e4c30|bd8051d2def1721588f469c348ab052269bd1f332809d6e6401abc3c5636299d
}

Note: By default, GCP will set NODE_ENV=production when you run a cloud function. That’s why the secure-config package will look for conf/config-production.json if you don’t specify something else. For all available options of the secure-config package, refer to the documentation.

To prevent unwanted exposure of sensitive data, use a .gcloudignore file in the root folder of your project to only upload the encrypted production configuration to GCP when deploying. The following lines will tell gcloud to ignore all files in the conf/ folder but the config-production.json.

# don't upload non-production configurations
conf/*
!conf/config-production.json

Make sure to also check uploads to any public code repo in the same way using .gitignore or something similar.
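
A .gitignore with the same pattern could look like this (whether you want to commit the encrypted production file at all is up to you):

# keep plain-text configurations out of the repository
conf/*
!conf/config-production.json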

Use the configuration in your code

Use the configuration values in your cloud function code. Here as an ES module in the main file ./index.js:

import secureConfig from '@tsmx/secure-config';
const config = secureConfig();

export const helloGCP = (req, res) => {
 res.json({
    info: 'Hello from GCP cloud functions!',
    secret: config.secret
  });
}

Of course this also works with CommonJS using require. From a code perspective that’s all, the next step is to set up GCP for passing the configuration key to the function.
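
A CommonJS sketch of the same function could look like this (assuming the default export of secure-config is called as a function, as in the ESM example above):

const secureConfig = require('@tsmx/secure-config');
const config = secureConfig();

exports.helloGCP = (req, res) => {
  res.json({
    info: 'Hello from GCP cloud functions!',
    secret: config.secret
  });
};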

Store the configuration key in Secret Manager

In your GCP console search for "secret manager" and enable the API if not already done.

[Screenshots: Searching for Secret Manager in the GCP console and enabling the API]

After secret manager is enabled, click on “CREATE SECRET” on the top and create a new secret with name CONFIG_KEY and a secret value of 00000000000000000000000000000000. After creating the secret you can see it in the list and click on it to view the details.

[Screenshot: The CONFIG_KEY secret in Secret Manager]

Directly below the friendly name you can find the secret’s reference which in this case is projects/100374066341/secrets/CONFIG_KEY. This reference will be used later to securely pass the secret as an environment variable to the cloud function.

To verify the value of a secret, click on the three dots behind a version and go to “View secret value”:

[Screenshot: Viewing the secret value of CONFIG_KEY]

Last step is to grant the service account used for function execution the Secret Manager Secret Accessor role so that the secret can be accessed. By default, GCP uses the following accounts to execute cloud functions depending on the generation:

  • Gen 1: PROJECT_ID@appspot.gserviceaccount.com
  • Gen 2: PROJECT_NUMBER-compute@developer.gserviceaccount.com

For more details on the used IAM account refer to the function identity documentation. Depending on the generation you’ll deploy the function, select the appropriate account under IAM in the console and make sure it has the Secret Manager Secret Accessor role. Add it if necessary by clicking “Edit principal” and then “Add another role”.

[Screenshot: Granting the Secret Manager Secret Accessor role to the service account]

That’s it for the secret manager part. The master key needed for decryption of the configuration is now securely stored and ready to use.

Deploy and run the cloud function

The cloud function is now ready to deploy. To do so, we use gcloud functions deploy.

gcloud functions deploy secure-config-function \
--gen2 \
--runtime=nodejs18 \
--region=europe-west3 \
--source=. \
--entry-point=helloGCP \
--set-secrets=CONFIG_ENCRYPTION_KEY=projects/100374066341/secrets/CONFIG_KEY:latest \
--trigger-http \
--allow-unauthenticated

The option --set-secrets=CONFIG_ENCRYPTION_KEY=projects/100374066341/secrets/CONFIG_KEY:latest tells GCP to supply the cloud function with an env var named CONFIG_ENCRYPTION_KEY expected by secure-config with a value of the latest version of the secret projects/100374066341/secrets/CONFIG_KEY. Make sure to replace the secret’s reference with your specific value.

For a complete description of the options refer to the documentation of gcloud functions deploy.

On completion, gcloud will tell you the URL of the successfully deployed function.

...
updateTime: '2023-09-01T20:44:08.493437974Z'
url: https://europe-west3-tsmx-gcp.cloudfunctions.net/secure-config-function

Call this URL to verify the function is working.

curl https://europe-west3-tsmx-gcp.cloudfunctions.net/secure-config-function
{"info":"Hello from GCP cloud functions!","secret":"secure-config-value"}

You should also see the function with a green check mark in your GCP console.

[Screenshot: The deployed function in the GCP Cloud Functions overview]

Perfect! The cloud function is deployed and works using a secure configuration management.

Example project at GitHub

A complete example project is available on GitHub.

git clone https://github.com/tsmx/secure-config-cloud-function.git

For easy deployment of the function, a deploy script is provided in package.json. Simply invoke this with npm run. Make sure gcloud is configured properly and you are in the right project.

npm run deploy

Further thoughts

If – and only if – your function uses very few simple configuration values, and nesting, structuring, using arrays and managing multiple environments in the configuration are not of interest, I would suggest sticking with Secret Manager only and leaving out the secure-config package.

Normally, at least some of these options are of interest in your project and the use of the package absolutely makes sense. For a full view of the features you’ll get out of that package refer to the documentation.

Happy coding šŸ™‚

Default Node.js process.env variables in GCP cloud functions and app engine

Discovering the default process.env variables provided in cloud functions and app engine services on Google Cloud Platform covering Node.js 16, 18 and 20 as well as Gen1 and Gen2 functions. Including a simple project for retrieving the values.

For cloud functions or app engine services using Node.js runtimes, GCP will provide a default set of environment variables accessible through process.env. In this article we will explore how these env vars look for different versions of Node.js in these GCP services.

A very simple project for deploying the needed functions and app engine services to discover these env vars in your own GCP account is also provided.

Cloud functions environment variables

See below for the default process.env variables provided by GCP cloud functions with different Node.js runtimes. Click the link to call a provided function and retrieve the most current result.

{
  LANGUAGE: "en_US:en",
  NODE_OPTIONS: "--max-old-space-size=192",
  K_REVISION: "node20-gen2-get-env-00003-sip",
  PWD: "/workspace",
  FUNCTION_SIGNATURE_TYPE: "http",
  PORT: "8080",
  CNB_STACK_ID: "google.gae.22",
  NODE_ENV: "production",
  CNB_GROUP_ID: "33",
  NO_UPDATE_NOTIFIER: "true",
  HOME: "/root",
  LANG: "en_US.UTF-8",
  K_SERVICE: "node20-gen2-get-env",
  GAE_RUNTIME: "nodejs20",
  SHLVL: "0",
  CNB_USER_ID: "33",
  LC_ALL: "en_US.UTF-8",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  FUNCTION_TARGET: "getEnv",
  K_CONFIGURATION: "node20-gen2-get-env",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 20 Gen2 env vars for cloud functions

{
  LANGUAGE: "en_US:en",
  NODE_OPTIONS: "--max-old-space-size=192",
  K_REVISION: "3",
  PWD: "/workspace",
  FUNCTION_SIGNATURE_TYPE: "http",
  PORT: "8080",
  CNB_STACK_ID: "google.gae.22",
  NODE_ENV: "production",
  CNB_GROUP_ID: "33",
  NO_UPDATE_NOTIFIER: "true",
  HOME: "/root",
  LANG: "en_US.UTF-8",
  GCF_BLOCK_RUNTIME_go112: "410",
  K_SERVICE: "node20-get-env",
  GAE_RUNTIME: "nodejs20",
  SHLVL: "0",
  CNB_USER_ID: "33",
  LC_ALL: "en_US.UTF-8",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  GCF_BLOCK_RUNTIME_nodejs6: "410",
  FUNCTION_TARGET: "getEnv",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 20 env vars for cloud functions

{
  LANGUAGE: "en_US:en",
  NODE_OPTIONS: "--max-old-space-size=192",
  K_REVISION: "node18-gen2-get-env-00003-gic",
  PWD: "/workspace",
  FUNCTION_SIGNATURE_TYPE: "http",
  PORT: "8080",
  CNB_STACK_ID: "google.gae.22",
  NODE_ENV: "production",
  CNB_GROUP_ID: "33",
  NO_UPDATE_NOTIFIER: "true",
  HOME: "/root",
  LANG: "en_US.UTF-8",
  K_SERVICE: "node18-gen2-get-env",
  GAE_RUNTIME: "nodejs18",
  SHLVL: "0",
  CNB_USER_ID: "33",
  LC_ALL: "en_US.UTF-8",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  FUNCTION_TARGET: "getEnv",
  K_CONFIGURATION: "node18-gen2-get-env",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 18 Gen2 env vars for cloud functions

{
  LANGUAGE: "en_US:en",
  NODE_OPTIONS: "--max-old-space-size=192",
  K_REVISION: "3",
  PWD: "/workspace",
  FUNCTION_SIGNATURE_TYPE: "http",
  PORT: "8080",
  CNB_STACK_ID: "google.gae.22",
  NODE_ENV: "production",
  CNB_GROUP_ID: "33",
  NO_UPDATE_NOTIFIER: "true",
  HOME: "/root",
  LANG: "en_US.UTF-8",
  GCF_BLOCK_RUNTIME_go112: "410",
  K_SERVICE: "node18-get-env",
  GAE_RUNTIME: "nodejs18",
  SHLVL: "0",
  CNB_USER_ID: "33",
  LC_ALL: "en_US.UTF-8",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  GCF_BLOCK_RUNTIME_nodejs6: "410",
  FUNCTION_TARGET: "getEnv",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 18 env vars for cloud functions

{
  NO_UPDATE_NOTIFIER: "true",
  FUNCTION_TARGET: "getEnv",
  NODE_OPTIONS: "--max-old-space-size=192",
  NODE_ENV: "production",
  PWD: "/workspace",
  HOME: "/root",
  DEBIAN_FRONTEND: "noninteractive",
  PORT: "8080",
  K_REVISION: "node16-gen2-get-env-00003-rok",
  K_SERVICE: "node16-gen2-get-env",
  SHLVL: "1",
  GAE_RUNTIME: "nodejs16",
  FUNCTION_SIGNATURE_TYPE: "http",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  K_CONFIGURATION: "node16-gen2-get-env",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 16 Gen2 env vars for cloud functions

{
  GCF_BLOCK_RUNTIME_nodejs6: "410",
  NO_UPDATE_NOTIFIER: "true",
  FUNCTION_TARGET: "getEnv",
  GCF_BLOCK_RUNTIME_go112: "410",
  NODE_OPTIONS: "--max-old-space-size=192",
  NODE_ENV: "production",
  PWD: "/workspace",
  HOME: "/root",
  DEBIAN_FRONTEND: "noninteractive",
  PORT: "8080",
  K_REVISION: "3",
  K_SERVICE: "node16-get-env",
  SHLVL: "1",
  GAE_RUNTIME: "nodejs16",
  FUNCTION_SIGNATURE_TYPE: "http",
  PATH: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  _: "/layers/google.nodejs.functions-framework/functions-framework/node_modules/.bin/functions-framework"
}

Link: Get Node.js 16 env vars for cloud functions

App engine environment variables

See below for the default process.env variables provided by GCP app engine with different Node.js runtimes. Click the link to call a provided service and retrieve the most current result.

{
  S2A_ACCESS_TOKEN: "xxxx",
  GAE_MEMORY_MB: "384",
  NO_UPDATE_NOTIFIER: "true",
  LANGUAGE: "en_US:en",
  GAE_INSTANCE: "00c61b117c1d6452581b06dcb5f23b1f1bc9a6c6aaebd47203070aa80a23270580b3af586743534bbd4012aa004c1437350e8cda1dc423b4be",
  HOME: "/root",
  PORT: "8081",
  NODE_OPTIONS: "--max-old-space-size=300 ",
  GAE_SERVICE: "node20-get-env",
  PATH: "/srv/node_modules/.bin/:/workspace/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  CNB_GROUP_ID: "33",
  CNB_USER_ID: "33",
  GAE_DEPLOYMENT_ID: "454320553896815573",
  LANG: "en_US.UTF-8",
  GOOGLE_CLOUD_PROJECT: "tsmx-gcp",
  GAE_ENV: "standard",
  PWD: "/workspace",
  GAE_APPLICATION: "h~tsmx-gcp",
  LC_ALL: "en_US.UTF-8",
  GAE_RUNTIME: "nodejs20",
  GAE_VERSION: "20230819t221144",
  NODE_ENV: "production",
  CNB_STACK_ID: "google.gae.22"
}

Link: Get Node.js 20 env vars for app engine

{
  S2A_ACCESS_TOKEN: "xxxx",
  GAE_MEMORY_MB: "384",
  LANGUAGE: "en_US:en",
  NO_UPDATE_NOTIFIER: "true",
  GAE_INSTANCE: "00c61b117cf7b9059c648a310d860a5f66a16dc1761710ac82df9a38392a9624cbdebd451f3978bb6cfc0640f5680590beee2d4afe3b214a879c",
  HOME: "/root",
  PORT: "8081",
  NODE_OPTIONS: "--max-old-space-size=300 ",
  GAE_SERVICE: "node18-get-env",
  PATH: "/srv/node_modules/.bin/:/workspace/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  CNB_GROUP_ID: "33",
  CNB_USER_ID: "33",
  GAE_DEPLOYMENT_ID: "454320535647197464",
  LANG: "en_US.UTF-8",
  GOOGLE_CLOUD_PROJECT: "tsmx-gcp",
  GAE_ENV: "standard",
  GAE_APPLICATION: "h~tsmx-gcp",
  LC_ALL: "en_US.UTF-8",
  PWD: "/workspace",
  GAE_RUNTIME: "nodejs18",
  GAE_VERSION: "20230819t221044",
  NODE_ENV: "production",
  CNB_STACK_ID: "google.gae.22"
}

Link: Get Node.js 18 env vars for app engine

{
  S2A_ACCESS_TOKEN: "xxxx",
  NO_UPDATE_NOTIFIER: "true",
  GAE_MEMORY_MB: "384",
  GAE_INSTANCE: "00c61b117c641ca0a31d2baf0347dc63fc3e870aee7b8707eccebd1f31d5b5372af69bd5178f08349fb3f3ee5ac460efeae28ed842e96fe861",
  HOME: "/root",
  PORT: "8081",
  NODE_OPTIONS: "--max-old-space-size=300 ",
  GAE_SERVICE: "node16-get-env",
  PATH: "/srv/node_modules/.bin/:/workspace/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  GAE_DEPLOYMENT_ID: "454320521333239759",
  DEBIAN_FRONTEND: "noninteractive",
  GOOGLE_CLOUD_PROJECT: "tsmx-gcp",
  GAE_ENV: "standard",
  GAE_APPLICATION: "h~tsmx-gcp",
  PWD: "/workspace",
  GAE_RUNTIME: "nodejs16",
  GAE_VERSION: "20230819t220949",
  NODE_ENV: "production"
}

Link: Get Node.js 16 env vars for app engine

Project for discovering process.env in different services and runtimes

Alongside this article the gcp-get-env project is provided on GitHub. This simple Node.js solution ships a function and an express application that can be deployed either as a GCP cloud function or an app engine service. Simply use the provided scripts in package.json to deploy as a function or service with different runtimes.
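
The core of such a function is tiny – a sketch of what the HTTP handler may look like (the actual repository code may differ in detail):

// returns all process.env variables of the runtime as JSON
export const getEnv = (req, res) => {
  res.json(process.env);
};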

For that you’ll need an active GCP account and a configured and ready-to-go gcloud CLI on your machine. For details on the installation & configuration see here.

The following deployment scripts are provided in package.json. Simply call npm run [scriptname] to deploy.

  • deploy-node16-func – cloud function with runtime Node.js 16 Gen 1 (function name: node16-get-env)
  • deploy-node16-func-gen2 – cloud function with runtime Node.js 16 Gen 2 (function name: node16-gen2-get-env)
  • deploy-node18-func – cloud function with runtime Node.js 18 Gen 1 (function name: node18-get-env)
  • deploy-node18-func-gen2 – cloud function with runtime Node.js 18 Gen 2 (function name: node18-gen2-get-env)
  • deploy-node20-func – cloud function with runtime Node.js 20 Gen 1 (function name: node20-get-env)
  • deploy-node20-func-gen2 – cloud function with runtime Node.js 20 Gen 2 (function name: node20-gen2-get-env)
  • deploy-node16-gae – app engine service with runtime Node.js 16 (service name: node16-get-env)
  • deploy-node18-gae – app engine service with runtime Node.js 18 (service name: node18-get-env)
  • deploy-node20-gae – app engine service with runtime Node.js 20 (service name: node20-get-env)

Please note that the deployed functions and services would be publicly accessible and may cause charges to your GCP account.

Happy coding šŸ™‚

CommonJS vs. ESM/ECMAScript cheat-sheet

Short comparison of the most common statements in CommonJS vs. ESM/ECMAScript for importing, exporting and in your package.json.

See the overview below for a brief comparison of the most used statements that should cover the vast majority of use-cases.

Importing

  • Default import (NPM module)
    CommonJS: const imp = require('module');
    ESM: import imp from 'module';

  • Default import (own module)
    CommonJS: const imp = require('./myModule'); (path is mandatory, file extension is optional)
    ESM: import imp from './myModule.js'; (path and file extension are mandatory)

  • Named import
    CommonJS: const { namedImp } = require('module');
    ESM: import { namedImp } from 'module';

  • Import with function invocation
    CommonJS: const funcImp = require('module')(myParam);
    ESM: import imp from 'module'; const funcImp = imp(myParam); (ESM doesn’t support invocations on importing, so two lines of code are needed)

Exporting

  • Default export (unnamed)
    CommonJS: module.exports = function() {/* */}
    ESM: export default function() {/* */}

  • Named export (e.g. a function, works also with objects, classes etc.)
    CommonJS: module.exports.myFunction = function() {/* */}
    ESM: export function myFunction() {/* */}

  • Exporting an arrow function
    CommonJS: module.exports.myFunction = () => {/* */}
    ESM: export const MyFunction = () => {/* */} (the const keyword is needed here)

package.json entries

  • Module type
    CommonJS: nothing or "type": "commonjs" (since CommonJS is the default, normally no type entry is present in package.json)
    ESM: "type": "module" (this tells Node.js to treat all .js files as ES modules without the need to use the .mjs extension, which is often preferred)

  • Entry point
    CommonJS: "main": "index.js"
    ESM: "exports": "./index.js" (path and file extension are mandatory)
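
To see both flavours in context, here is a minimal module together with its consumer in each style (file names are just for illustration):

// CommonJS
// math.js
module.exports.add = (a, b) => a + b;
// index.js
const { add } = require('./math');
console.log(add(1, 2)); // 3

// ESM (package.json contains "type": "module")
// math.js
export const add = (a, b) => a + b;
// index.js
import { add } from './math.js';
console.log(add(1, 2)); // 3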

Please note that this cheat-sheet is just an excerpt of all possible module.exports/require and import/export constellations as well as all available package.json options. For more details, refer to the very comprehensive official documentation.

If you are about to migrate from CommonJS, also check out the article on converting an existing Node.js project to ESM.

Happy coding šŸ˜‰
