Moving MongoDB from local installation to Docker – Fedora migration guide
https://tsmx.net/local-mongodb-to-docker-fedora-migration-guide/ – Wed, 23 Nov 2022

Complete guide for moving an existing local MongoDB installation to newer versions of Fedora using Docker. Also useful if you are looking to get a MongoDB container up & running on Linux with locally hosted data.

Problems with a traditional MongoDB installation on Fedora

Trying to install a MongoDB community server locally on a newer version of Fedora (in my case MongoDB 5 on Fedora 36) can be a mess. In the good old days, using a repo with DNF was the best option…

$ cat /etc/yum.repos.d/mongodb-org-5.0.repo
[mongodb-org-5.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/8/mongodb-org/5.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-5.0.asc
$ dnf install mongodb-org
Last metadata expiration check: 0:10:51 ago on Tue Nov 22 21:14:41 2022.
Error: 
 Problem: conflicting requests
  ...
  - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-5.0.0-1.el8.x86_64
  (try to add '--skip-broken' to skip uninstallable packages)

Since required dependencies have been dropped in newer Fedora releases (at least platform-python), this no longer works out of the box. Falling back to a manual rpm install doesn’t work either, since MongoDB relies on older crypto libraries that are not shipped with current Fedora releases…

$ rpm -ihv mongodb-org-server-5.0.14-1.el8.x86_64.rpm 
warning: mongodb-org-server-5.0.14-1.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID e2c63c11: NOKEY
error: Failed dependencies:
        libcrypto.so.1.1()(64bit) is needed by mongodb-org-server-5.0.14-1.el8.x86_64
        libcrypto.so.1.1(OPENSSL_1_1_0)(64bit) is needed by mongodb-org-server-5.0.14-1.el8.x86_64
        libssl.so.1.1()(64bit) is needed by mongodb-org-server-5.0.14-1.el8.x86_64
        libssl.so.1.1(OPENSSL_1_1_0)(64bit) is needed by mongodb-org-server-5.0.14-1.el8.x86_64
        libssl.so.1.1(OPENSSL_1_1_1)(64bit) is needed by mongodb-org-server-5.0.14-1.el8.x86_64 

You could now invest hours in solving these problems manually, but at the end of the day keep in mind that MongoDB clearly states that it doesn’t provide package support for Fedora. So this is a good point to think about a more stable & standard alternative: running MongoDB as a Docker container…

Migration goals

Assuming you already have a locally running MongoDB on an older version of Fedora (or any other Linux distribution), this guide focuses on how to:

  • Run MongoDB community server as a Docker container on Fedora or another Linux system
  • Migrate all data and permissions (users/roles) to the Docker version without dumping or other heavy lifting
  • Keep MongoDB’s data, metadata and configuration outside the Docker container to gain maximum flexibility

Moving from local MongoDB installation to dockerized version

Step 1: Backup existing MongoDB data and configuration

First we’ll need to locate and save all of the current data.

  • Database data: Locate the path where MongoDB stores its data. On a Fedora system this is normally /var/lib/mongo, or another dbPath provided at startup via the --dbpath option or in the storage section of the configuration file (usually /etc/mongod.conf).
    Once you have located the path, back it up entirely including all subfolders (e.g. copy it to a USB stick, your NAS or wherever).
  • Configuration data: If your current MongoDB uses a configuration file – normally located at /etc/mongod.conf – or another file provided via the --config parameter, save this file as well.

Step 2: Prepare the new environment

To run MongoDB using Docker with data and configuration hosted outside the container for flexibility, we first create an appropriate place for them. Switch to root, create a new user mongod, and create a directory mongo with the two subdirectories data and conf under /var/db to keep everything separate from existing users and home directories.

$ useradd -M mongod
$ usermod -L mongod
$ cd /var/db
$ mkdir mongo
$ mkdir mongo/data
$ mkdir mongo/conf

The created user mongod has no home directory and is not permitted to log in with a password, as neither is needed in our scenario. With this in place, copy the saved MongoDB data from step 1 to the new directories. Finally, change the ownership of all files to the mongod user.

Assuming you have stored the data on a USB stick mounted as /run/myuser/usbstick, like so:

$ cp -R /run/myuser/usbstick/mongo/data/. /var/db/mongo/data
$ cp -R /run/myuser/usbstick/mongo/conf/. /var/db/mongo/conf
$ chown -R mongod:mongod /var/db/mongo

Note: For executing the next steps, you can switch back from root to your normal user.

Step 3: Run MongoDB as a Docker container

First, we should get the appropriate official image from Docker Hub. To avoid version incompatibilities I recommend starting with the same MongoDB version you had in your local installation before. In my case it was 5.0, so I chose the image tagged 5.0.13.

$ docker pull mongo:5.0.13

After downloading, the last thing to do before starting it up is to figure out the user id of the mongod user, which owns all the data in the filesystem.

$ id -u mongod
1005

Now we can start up the MongoDB container with docker run like so:

$ docker run \
-p 27017:27017 \
--user 1005 \
-v /var/db/mongo/data:/data/db \
-v /var/db/mongo/conf:/etc/mongo \
mongo:5.0.13 \
--config /etc/mongo/mongod.conf 
...
{
  "t":{"$date":"2022-11-23T20:02:25.878+00:00"},
  "s":"I",
  "c":"NETWORK",
  "id":23016,   
  "ctx":"listener",
  "msg":"Waiting for connections",
  "attr":{"port":27017,"ssl":"off"}
}

Awesome! MongoDB is now running and waiting for connections on port 27017. All databases, collections, users & roles are there like they were before in the local installation.

Note that this runs MongoDB directly in the current shell. To stop it, simply press CTRL-C. If you’d rather have it run in the background, add the -d option to the docker run command.

For a better understanding, let’s break down the command issued above.

  • docker run – Tells Docker to start a new container. See the docker run docs for all details.
  • -p 27017:27017 – Exposes MongoDB’s standard port 27017 from the container to your local system, so you can connect to the MongoDB container via localhost:27017 just like with a local installation.
  • --user 1005 – Tells Docker to run the container as the user with id 1005 (= mongod). This is essential since the MongoDB image expects the owner of the local data/config files to match; otherwise it will try to chown them, which would fail.
  • -v /var/db/mongo/data:/data/db – Maps the local directory /var/db/mongo/data to /data/db in the container. This is the MongoDB image’s default place for database data inside the container; it can be changed with the dbPath parameter in the storage section of the config file.
  • -v /var/db/mongo/conf:/etc/mongo – Maps the local directory /var/db/mongo/conf to /etc/mongo in the container. This is the MongoDB image’s default place for the configuration file inside the container; it can be changed with the --config parameter – see below.
  • mongo:5.0.13 – Tells Docker to use the image mongo with tag 5.0.13 when running the container.
  • --config /etc/mongo/mongod.conf – Tells MongoDB to use the specified configuration file within the container.

Troubleshooting

I cannot connect to the MongoDB container although it started up without any error

If everything seems to be running fine without any errors but you are still not able to connect to MongoDB via localhost:27017 it is most likely that your configuration prohibits it.

In that case, check the configuration in mongod.conf (or the file you use) under /var/db/mongo/conf, particularly the IPs MongoDB is allowed to bind to. Binding to 0.0.0.0 should solve it and is sufficient in the Docker scenario.

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

The container won’t start because of denied permissions when accessing the local filesystem

If you try to start up the MongoDB container as described above and get an error saying that filesystem permissions are denied, like so…

dpkg: warning: failed to open configuration file '/data/db/.dpkg.cfg' for reading: Permission denied
{
  "t":{"$date":"2022-11-23T21:09:43.425Z"},
  "s":"F",  
  "c":"CONTROL",  
  "id":6384300, 
  "ctx":"-",
  "msg":"Writing fatal message",
  "attr":{"message":"terminate() called. An exception is active; attempting to gather more information\n"}
}
{
  "t":{"$date":"2022-11-23T21:09:43.425Z"},
  "s":"F",  
  "c":"CONTROL",  
  "id":6384300, 
  "ctx":"-",
  "msg":"Writing fatal message",
  "attr":{"message":"std::exception::what(): boost::filesystem::status: Permission denied: \"/etc/mongo/mongod.conf\"\nActual exception type: boost::filesystem::filesystem_error\n\n"}
}

Then you should first ensure that the ownership of the local filesystem is correctly set to the mongod user. For that, execute as root:

$ chown -R mongod:mongod /var/db/mongo

If that doesn’t solve the problem, it is very likely that you have SELinux enabled with an enforcing policy which prohibits docker from accessing the files. You can check the status with sestatus.

$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

In simple words, Current mode: enforcing tells you that SELinux is running in a “strict mode” where you explicitly have to allow Docker to access the local filesystem.

There are three options to solve this:

1. Set SELinux to permissive mode by executing setenforce 0 as root. Note that this change is temporary and has to be re-executed after a reboot.

$ setenforce 0
$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

2. Modify the docker run command and add the :Z option (note the capital Z) to the volume mappings, like so.

$ docker run \
-p 27017:27017 \
--user 1005 \
-v /var/db/mongo/data:/data/db:Z \
-v /var/db/mongo/conf:/etc/mongo:Z \
mongo:5.0.13 \
--config /etc/mongo/mongod.conf 

3. Relabel the needed directories and files to set the right SELinux context for Docker as described in this very good article.

Option 3, relabeling, would be the cleanest solution, but since it is a rather complex process I personally went for the :Z addition in the volume mappings and didn’t investigate relabeling further. The Docker docs advise against using this option on /home and /usr directories, which in fact we are not doing here.

Have a great time with your dockerized MongoDB 🙂

Query min/max values with additional non-grouped fields in MongoDB
https://tsmx.net/querying-min-max-values-with-additional-non-grouped-fields-in-mongodb/ – Fri, 27 May 2022

Practical example demonstrating how to query for min/max values of grouped documents in MongoDB, including additional non-grouped field values, using the $push, $map and $filter aggregation operators.

Query task: initial data situation & first trivial approach

Let’s assume you have the following documents with hourly measured temperatures in a collection called temperatures in your MongoDB…

[
  {
    "date" : ISODate("2022-05-10T09:00:00.000+02:00"),
    "temp" : 11.3
  },
  {
    "date" : ISODate("2022-05-10T10:00:00.000+02:00"),
    "temp" : 11.7
  },
  {
    "date" : ISODate("2022-05-10T11:00:00.000+02:00"),
    "temp" : 11.5
  },
  {
    "date" : ISODate("2022-05-11T10:00:00.000+02:00"),
    "temp" : 14.3
  },
  {
    "date" : ISODate("2022-05-11T11:00:00.000+02:00"),
    "temp" : 14.7
  },
  {
    "date" : ISODate("2022-05-11T12:00:00.000+02:00"),
    "temp" : 14.9
  }
]

The task now is to query for the maximum temperature of every day together with the time(s) at which this temperature occurred that day. So the desired output is:

  • On 2022-05-10 the max. temperature was 11.7 degrees, at 10:00:00
  • On 2022-05-11 the max. temperature was 14.9 degrees, at 12:00:00

Selecting only the max temperature per day can be easily achieved with a trivial $group aggregation pipeline stage, like this:

db.temperatures.aggregate(
[
  {
    $group: { 
      _id: { $dateToString: { format: '%Y-%m-%d', date: '$date' } }, 
      tempMax: { $max: '$temp' }
    }
  }
]);

The result would be:

[
  {
    "_id" : "2022-05-10",
    "tempMax" : 11.7
  },
  {
    "_id" : "2022-05-11",
    "tempMax" : 14.9
  }
]

The question now is: how can additional fields like the time(s) be added for every day? Obviously, they cannot be retrieved with aggregation operators like $min/$max/$avg in the grouping stage because these are non-aggregated values. In traditional SQL this could be achieved using joins or sub-selects – but how to do it in MongoDB?

Solution: projecting non-aggregated fields to a group using $push, $map and $filter

To achieve the desired output, we will use some more MongoDB aggregation operators and extend the first query by the following steps:

  • Collect all documents per day using $push and $$CURRENT in a helper field items in the grouping stage.
  • Add a projection stage where a new field tempMaxDates is created and filled with all the dates out of the group’s collected items where temp equals the tempMax of the grouping stage. To do so, the $map and $filter aggregation operators are used.

The final query is:

db.temperatures.aggregate(
[
  { $group: { 
    _id: { $dateToString: { format: '%Y-%m-%d', date: '$date' } }, 
    tempMax: { $max: '$temp' }, 
    items: { $push: '$$CURRENT' } } },
  { $project: {  
    tempMax: 1, 
    tempMaxDates: { 
      $map: { 
        input: { 
          $filter: { 
            input: '$items', as: 'i', 
            cond: { $eq: [ '$$i.temp', '$tempMax' ] } 
          } 
        }, 
        as: 'maxOccur', 
        in: '$$maxOccur.date' } 
      } 
     } 
   }
]);

With that we get the desired result including the date(s) for the max temperatures per day:

[
  {
    "_id" : "2022-05-10",
    "tempMax" : 11.7,
    "tempMaxDates" : [ 
      ISODate("2022-05-10T10:00:00.000+02:00")
    ]
  },
  {
    "_id" : "2022-05-11",
    "tempMax" : 14.9,
    "tempMaxDates" : [ 
      ISODate("2022-05-11T12:00:00.000+02:00")
    ]
  }
]

Note: Since tempMaxDates is an array into which all timestamps of the max temperature are pushed, this solution also fits perfectly when the maximum temperature occurs more than once per day (or group).

Please keep in mind that aggregations are operations directly executed on the MongoDB server. Depending on your concrete scenario (number of docs, resulting groups, parallel queries etc.) this query may be slow or cause trouble on your server.

If you encounter such problems, an additional $match stage to narrow down the processed docs could be a possible mitigation, like so:

{ $match: { date: { $gte: dateStart, $lte: dateEnd } } }

The $match stage should be the first element of the aggregation pipeline, placed before $group and $project.

In-depth: explanation of the solution

Let’s have a closer look on the crucial parts of the solution.

Collecting raw data for each group

items: { $push: '$$CURRENT' }

This line in the grouping stage will cause MongoDB to create a field items which contains all original documents that are grouped together – in our case all documents of a day. In other words, items will give us access to the ‘raw data’ of each group in the following stages.

$push adds elements to an array and $$CURRENT references the currently processed/grouped document.

Projecting non-aggregated fields from collected raw data

tempMaxDates: { 
  $map: { 
    input: { 
      $filter: { 
        input: '$items', as: 'i', 
        cond: { $eq: [ '$$i.temp', '$tempMax' ] } 
      } 
    }, 
    as: 'maxOccur', 
    in: '$$maxOccur.date' } 
  } 
}

This part of the query constructs a new field called tempMaxDates in the projection stage.

For that, the items field from the grouping stage is first filtered to get only the documents of the group where the temperature equals the maximum. Note that $ is used to reference root document fields whereas $$ is used to reference variables.

  • $$i.temp refers to the temperature field of each element in the items array, which has the variable name i here
  • $tempMax refers to the maximum temperature field determined in the previous grouping stage

From the filtered docs in variable maxOccur, only the date field is mapped to the resulting array using $$maxOccur.date as the mapping expression.
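To make the mechanics of this pipeline concrete, here is a plain JavaScript sketch of the same grouping, filtering and mapping logic applied to the sample data. This runs outside MongoDB and only mirrors what the server does; dates are plain ISO strings for brevity:

```javascript
// Sample documents, matching the collection above.
const docs = [
  { date: '2022-05-10T09:00:00', temp: 11.3 },
  { date: '2022-05-10T10:00:00', temp: 11.7 },
  { date: '2022-05-10T11:00:00', temp: 11.5 },
  { date: '2022-05-11T10:00:00', temp: 14.3 },
  { date: '2022-05-11T11:00:00', temp: 14.7 },
  { date: '2022-05-11T12:00:00', temp: 14.9 },
];

// Grouping stage: like $group, collect all docs per day in `items`
// ($push with $$CURRENT) and track the day's maximum in `tempMax`.
const groups = {};
for (const d of docs) {
  const day = d.date.slice(0, 10); // like $dateToString with '%Y-%m-%d'
  groups[day] = groups[day] || { tempMax: -Infinity, items: [] };
  groups[day].tempMax = Math.max(groups[day].tempMax, d.temp);
  groups[day].items.push(d); // mirrors items: { $push: '$$CURRENT' }
}

// Projection stage: like $filter + $map, keep only the docs hitting
// the maximum and map them to their dates.
const result = Object.entries(groups).map(([day, g]) => ({
  _id: day,
  tempMax: g.tempMax,
  tempMaxDates: g.items
    .filter((i) => i.temp === g.tempMax) // $filter with cond: $eq
    .map((i) => i.date),                 // $map with in: '$$maxOccur.date'
}));
```

If a day had two readings at the maximum temperature, both dates would end up in tempMaxDates, exactly as in the aggregation.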

Bonus: SQL solution based on PostgreSQL

If you have an equivalent data-set in a traditional SQL database like PostgreSQL…

test=> select * from temperatures order by date;
        date         | temp  
---------------------+-------
 2022-05-10 09:00:00 | 11.30
 2022-05-10 10:00:00 | 11.70
 2022-05-10 11:00:00 | 11.50
 2022-05-11 10:00:00 | 14.30
 2022-05-11 11:00:00 | 14.70
 2022-05-11 12:00:00 | 14.90

…a possible solution is to use a WITH clause (a Common Table Expression, CTE) to extract the maximum temperature per day and then join back to the original data to get all the timestamps, like so:

test=> with ttt as 
(
  select date_trunc('day', t.date) as date, max(t.temp) as temp 
  from temperatures t 
  group by date_trunc('day', t.date)
)
select t.date, t.temp 
from temperatures t, ttt 
where ttt.date = date_trunc('day', t.date) and t.temp = ttt.temp;

        date         | temp  
---------------------+-------
 2022-05-10 10:00:00 | 11.70
 2022-05-11 12:00:00 | 14.90

Happy querying 🙂

MongoDB: add/update fields referencing other existing fields using aggregation pipelines
https://tsmx.net/mongodb-add-update-fields-using-aggregation-pipelines/ – Sat, 09 Apr 2022

Short explanation on how to add or update fields in documents by referencing the values of other already existing fields, leveraging the aggregation pipeline framework.

Initial document situation

Let’s assume you have the following documents with hourly temperature values in a collection called temperatures

[
  {
    _id: ObjectId("62508bd0742bfb98b29dbe71"),
    date: ISODate("2022-04-08T08:00:00.000Z"),
    tempC: 7.3
  },
  {
    _id: ObjectId("62508bf0742bfb98b29dbe8c"),
    date: ISODate("2022-04-08T09:00:00.000Z"),
    tempC: 7.8
  },
  {
    _id: ObjectId("62508c02742bfb98b29dbe93"),
    date: ISODate("2022-04-08T10:00:00.000Z"),
    tempC: 8.5
  }
]

The given temperature in field tempC is in degrees Celsius, but you may also need the temperature in degrees Fahrenheit. For various reasons it may make sense to have the Fahrenheit values persisted in MongoDB instead of always calculating them on the fly in your application.

So you want to add a field tempF to every document which holds the temperature in Fahrenheit. The calculation formula for that would be easy: tempF = tempC * 1.8 + 32. But how to achieve that in MongoDB?
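As a quick sanity check of the conversion itself, here is the formula in plain JavaScript applied to the sample values above. The helper name toFahrenheit is just for illustration; results are rounded to two decimals to avoid floating point noise:

```javascript
// tempF = tempC * 1.8 + 32, rounded to two decimals because
// e.g. 7.3 * 1.8 is not exactly representable in binary floats.
function toFahrenheit(tempC) {
  return Math.round((tempC * 1.8 + 32) * 100) / 100;
}

console.log(toFahrenheit(7.3)); // 45.14
console.log(toFahrenheit(7.8)); // 46.04
console.log(toFahrenheit(8.5)); // 47.3
```

These are exactly the tempF values we expect MongoDB to persist below.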

Caveats with common update operators

Trying to solve this simple-looking task with basic MongoDB functions, you will quickly face the following problems:

  • The $set operator used to add new fields or update existing ones cannot be used together with expressions. It only takes plain values, so it is not possible to reference other fields.
  • The traditional MongoDB update operators like $mul and $inc, which would be needed here to calculate tempF, are not sufficient. This is because they are only made to change an existing field in place with a fixed value, e.g. “add 10 to a field” or “multiply a field by 2”.

So what is the way to go in MongoDB to add or update a field when the resulting value references another existing field?

Updating documents using aggregation pipelines

Starting with MongoDB 4.2, it is possible to use the powerful aggregation pipeline with updates. This enables the usage of aggregation pipeline operators in normal update statements. These operators are more flexible than the traditional ones and allow expressions referencing other fields by using the ‘$…’ field path notation.

With this in mind, we can now add the needed field using a $set aggregation pipeline stage with $add and $multiply as follows…

db.temperatures.updateMany(
  {},
  [{ $set: { tempF: { $add: [ { $multiply: ['$tempC', 1.8] }, 32] } } }]
);

The new field tempF is now added to every document based on the already existing tempC field…

[
  {
    _id: ObjectId("62508bd0742bfb98b29dbe71"),
    date: ISODate("2022-04-08T08:00:00.000Z"),
    tempC: 7.3,
    tempF: 45.14
  },
  {
    _id: ObjectId("62508bf0742bfb98b29dbe8c"),
    date: ISODate("2022-04-08T09:00:00.000Z"),
    tempC: 7.8,
    tempF: 46.04
  },
  {
    _id: ObjectId("62508c02742bfb98b29dbe93"),
    date: ISODate("2022-04-08T10:00:00.000Z"),
    tempC: 8.5,
    tempF: 47.3
  }
]

Note: As the $set aggregation operator would overwrite the value of the specified field if it already exists in the document, this approach also works perfectly for updating fields.

At the end of the day, that was easy – right? 😉

Connecting from a Docker container to a local MongoDB
https://tsmx.net/docker-local-mongodb/ – Fri, 14 Jan 2022

A quick guide demonstrating how to connect to a local MongoDB instance from a Docker container.

Docker containers run in an isolated environment including separate virtual networks by default. Although you can run your MongoDB in a container connected to such a virtual network, e.g. by using docker-compose, there may be situations where you want to use a local MongoDB instance and connect to it from a container.

The Docker bridge network

To establish networking connections between the host and containers, a Docker bridge network can be used. A standard one called bridge is always present and generated by default. You can inspect it using the docker network inspect bridge command.

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "73cb0c87fb64f19c76e3367e80b2da2d104a67603882ca539ce2cf70bdf97c4f",
        "Created": "2022-01-14T20:53:38.577610832+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Notice that this network has a Gateway with the IP 172.17.0.1. We will use this to connect from the container to our MongoDB running on the host.

According to the Docker docs on networking the default bridge network should not be used for production scenarios but is fine for any dev/test environment. In this article we will continue using the default bridge.

Connecting to your local MongoDB from Docker

To connect to your local MongoDB instance from a container, you must first allow MongoDB to accept connections from the Docker bridge gateway. To do so, simply add the respective gateway IP to bindIp in the network interfaces section of the MongoDB config file /etc/mongod.conf.

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,172.17.0.1

Note that binding to 0.0.0.0 would also do the trick but is not recommended for security reasons.

Second, you should choose a hostname to access the database. Let’s use mongoservice for that. In your containerized app, use this hostname in the connection string, like so…

mongodb://mongoservice:27017/mydb

To introduce the mongoservice hostname in the container’s virtual network and bind it to the local host where MongoDB is running, simply use the --add-host option of docker run with the bridge network’s gateway IP we discovered earlier.

docker run --add-host=mongoservice:172.17.0.1 repository/image-name

With this in place, your Docker container can now communicate with the host using the alias mongoservice, and you are good to go connecting to the local MongoDB 🙂

Tip: using the identical hostname alias for MongoDB locally

To avoid inconsistencies and configuration changes between running your app in the local dev environment and the Docker container, simply add an entry for the used hostname mongoservice to your /etc/hosts.

127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
127.0.0.1   mongoservice

Now you can use the exact same connection string in your local dev environment as well.

Troubleshooting: MongoDB isn’t starting up any more

After adding the Docker bridge gateway IP to MongoDB’s configuration as described above, you may face an issue when trying to start MongoDB, although everything was fine before.

$ systemctl start mongod
Job for mongod.service failed because the control process exited with error code.
See "systemctl status mongod.service" and "journalctl -xeu mongod.service" for details.
$ systemctl status mongod
× mongod.service - MongoDB Database Server
     Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Thu 2022-03-17 21:08:18 CET; 36s ago
       Docs: https://docs.mongodb.org/manual
    Process: 16629 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
    Process: 16630 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
    Process: 16631 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
    Process: 16632 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=48)

Mar 17 21:08:18 fedora systemd[1]: Starting MongoDB Database Server...
Mar 17 21:08:18 fedora mongod[16632]: about to fork child process, waiting until server is ready for connections.
Mar 17 21:08:18 fedora mongod[16634]: forked process: 16634
Mar 17 21:08:18 fedora mongod[16632]: ERROR: child process failed, exited with 48
Mar 17 21:08:18 fedora mongod[16632]: To see additional information in this output, start without the "--fork" option.
Mar 17 21:08:18 fedora systemd[1]: mongod.service: Control process exited, code=exited, status=48/n/a
Mar 17 21:08:18 fedora systemd[1]: mongod.service: Failed with result 'exit-code'.
Mar 17 21:08:18 fedora systemd[1]: Failed to start MongoDB Database Server.

If this is the case, it is very likely that Docker is not running! Make sure to start Docker before MongoDB. Otherwise MongoDB will fail to start because the Docker bridge gateway isn’t available and it cannot bind to it.

$ systemctl start docker
$ systemctl start mongod

If you want to start MongoDB without having Docker running, then you’ll need to remove or comment out the gateway’s IP in /etc/mongod.conf.
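If you want this start-up ordering enforced automatically, a systemd drop-in for the mongod unit could declare the dependency. This is a sketch assuming the standard unit names mongod.service and docker.service; it is not something the MongoDB packages ship:

```ini
# /etc/systemd/system/mongod.service.d/docker.conf (drop-in, path is an example)
[Unit]
# Make sure Docker (and with it the docker0 bridge providing 172.17.0.1)
# is up before mongod tries to bind to that address.
After=docker.service
Wants=docker.service
```

After creating the file, run systemctl daemon-reload once so systemd picks up the drop-in.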

Happy coding 🙂
