nerdctl + IPFS = Awesome-sauce

Salim Haniff
Dec 2, 2021


Intro

In my last post, we went over the fun and joys of swapping out Docker for nerdctl on the dev server. One issue we will eventually encounter is distributing our images for others to use. So far, we have had the luxury of using public container registries. However, with the popularity of container technologies, public registries are now changing their agreements to cover the cost of resources. Another issue with public registries is that they become a possible single point of failure if they experience an outage.

Web3 technologies are moving towards decentralized models. Decentralization helps offset the dependency on a single organization providing a service. One specific technology we will be using in this post is the InterPlanetary File System (IPFS). Readers interested in background information on IPFS can refer to a great talk by Juan Benet (https://www.youtube.com/watch?v=zE_WSLbqqvo). Essentially, IPFS allows you to share files on a distributed network built on P2P technology. IPFS is free to use and provides cryptographic hashing to ensure the integrity of the files stored on the network. IPFS can help replace the traditional container registry.

The latest version of nerdctl (0.14.0) supports IPFS, which we will take for a spin in this post. We will be using the RPi4 dev rig from the previous post, which already has nerdctl installed.

A long write-up on how I failed (for those strapped for time, scroll down to the next section)

The RPi dev rig already contains an IPFS installation. Since port 8080 is already tied up by GitLab, we need to alter the IPFS config so the IPFS gateway binds to port 8888. We will also change the listening addresses to bind to 0.0.0.0, since we want to connect to this IPFS node remotely from our internal network. IPFS was initialized as the pi user, so edit /home/pi/.ipfs/config. Change the API address to

"API": "/ip4/0.0.0.0/tcp/5001",

and the gateway to

"Gateway": "/ip4/0.0.0.0/tcp/8888",
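
If you would rather not hand-edit the JSON, the same change can be made with the ipfs CLI (a sketch; run it as the pi user that owns the repo):

# set the API and gateway listen addresses in the IPFS config
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8888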

When I ran:

ipfs daemon

and went to http://192.168.8.134:5001/webui/, I hit some error about not finding the API server. Running the following command resolved the issue.

ipfs config profile apply server

Make sure to kill the IPFS daemon and restart it to pick up the new changes. Now go back to the web UI URL, and the problem should be gone.
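
For reference, restarting amounts to something like this (a sketch; killing the daemon process by hand works just as well):

# stop the running daemon, then start it again so the new Addresses settings take effect
ipfs shutdown
ipfs daemon &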

With some excitement, I quickly entered a push command to see if I could push the image to IPFS and…

… ugh, too good to be true. Time to see if there are any hints on getting this to work; off to read the manual, starting with the help output for nerdctl’s IPFS push…

It looks like we need to check something with the registry…

Let’s try to bring the registry up…

Getting close. Let’s try to copy the api file from the pi user’s IPFS config into the root user’s IPFS directory.
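
Roughly along these lines (a sketch; it assumes nerdctl, running as root under sudo, looks for the IPFS API address in root’s default repo path):

# make the pi user's IPFS API address visible to root-run nerdctl
sudo mkdir -p /root/.ipfs
sudo cp /home/pi/.ipfs/api /root/.ipfs/api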

OK. Looking good. Now let’s try to push using the following command

sudo nerdctl --debug --address unix:///run/k3s/containerd/containerd.sock push ipfs://local/alpine:0.1

Alright, the image should be in IPFS now. The last line of the output (bafkreigjsqvrghp5afnhdhmng3j2defwwtey2oo2dm5ilxef3qcv7vdmfi) is the CID that references the image on IPFS.

We can verify if the container image is in IPFS by going to the IPFS dashboard and inserting the CID in the explore window.
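
If you prefer the CLI over the dashboard, the same check can be done from the node itself (a sketch; ipfs block stat simply confirms the block is present and reports its size):

ipfs block stat bafkreigjsqvrghp5afnhdhmng3j2defwwtey2oo2dm5ilxef3qcv7vdmfi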

For a test, let’s delete the image from our containerd image store.
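
The exact command is not reproduced here, but it would be something like this (a sketch; local/alpine:0.1 is the tag assumed from the push above):

sudo nerdctl --address unix:///run/k3s/containerd/containerd.sock rmi local/alpine:0.1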

Now let’s pull in the image from IPFS.
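
With nerdctl v0.14.0 the pull should look roughly like this (a sketch, reusing the CID printed by the push):

sudo nerdctl --address unix:///run/k3s/containerd/containerd.sock pull ipfs://bafkreigjsqvrghp5afnhdhmng3j2defwwtey2oo2dm5ilxef3qcv7vdmfi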

If we check the images in our containerd image store, we should now see the pulled image.
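
For example (a sketch):

sudo nerdctl --address unix:///run/k3s/containerd/containerd.sock images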

Even though the image shows up in the image list, an error appeared when trying to run a container from it.

From this point on, I kept hitting this issue when trying other standard images as well. Finally, flipping back to nerdctl v0.13.0 resolved the problem, but that meant losing the IPFS support for now.

If you want to fast-track, start reading from here on down

Even though what I am doing bends the usual workflow, we can still leverage IPFS. Our next attempt will use v0.13.0 of nerdctl. We will build the image again from scratch and use the native IPFS tools to distribute our container image. In the end, all we are trying to do is distribute our container using P2P methods.

So let’s retry everything all over again. Yes, I hear your frustration as I am typing this out…

The commands below will be punched into our shell

$ sudo nerdctl --address unix:///run/k3s/containerd/containerd.sock build -t my-alpine:0.1 .
$ sudo nerdctl --address unix:///run/k3s/containerd/containerd.sock save my-alpine:0.1 > my-alpine.tar
$ ipfs add my-alpine.tar
added QmShBrn7rB1y58BxW2B6jZLQciqwrpWbWNPvay6JricGaj my-alpine.tar
 2.60 MiB / 2.60 MiB [===============================================================================] 100.00%

Now that our image is in IPFS, we will erase all the images from our local instance.
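
Something along these lines should do it (a sketch; the exact tags on your rig may differ):

# remove the freshly built image from containerd's store
sudo nerdctl --address unix:///run/k3s/containerd/containerd.sock rmi -f my-alpine:0.1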

Now let’s download the image into our containerd instance.

$ ipfs get QmShBrn7rB1y58BxW2B6jZLQciqwrpWbWNPvay6JricGaj
 2.60 MiB / 2.60 MiB [===============================================================================] 100.00% 0s
$ cat QmShBrn7rB1y58BxW2B6jZLQciqwrpWbWNPvay6JricGaj | sudo nerdctl --address unix:///run/k3s/containerd/containerd.sock load
unpacking docker.io/library/my-alpine:0.1 (sha256:3bda385977798e5d15674803839bd9c0b4417341a405957250e0d33cfb7f76c2)...done

Now for the moment of truth, will it run…

sudo nerdctl --address unix:///run/k3s/containerd/containerd.sock run -it --rm my-alpine:0.1

Why yes, it certainly does.

The addition of IPFS to nerdctl is undoubtedly a benefit to the community wishing for an easier way to distribute containers. Very exciting times are ahead as the move to decentralization opens up more opportunities for collaboration and innovative technologies. Even though my RPi setup had a glitch, we managed to figure out a workaround in this post. I will keep digging into what mistake led to needing the workaround, but at least we can now leverage IPFS.

Stay tuned, in the next post (I promise this time), we will figure out how to get the containers from IPFS into K3s or some fudging workflow alternative…

Written by Salim Haniff

Founder of Factory 127, an Industry 4.0 company. Specializing in cloud, coding and circuits.