Improving your Nextcloud upgrade experience

Nextcloud, a truly wonderful open source project, allows you to create your own collaboration hub. You can choose to host it yourself, or get a Nextcloud instance at one of the many service providers, such as WebbPlatsen i Sverige AB. Nextcloud can replace many not-so-GDPR-safe alternatives and other costly solutions. If you, by some small miracle, still do not know what Nextcloud is, go check it out at nextcloud.com.

For the past few years, Nextcloud has kept an almost alarmingly fast pace when it comes to updates. Quite often, a major version is followed by a few minor updates, and then another major version is released. Typically, a lot of new functionality has been thrown in for the major updates. Naturally, we want more functionality and toys 🙂

Updating Nextcloud is usually not a big issue, unless the Nextcloud downloads are slow. And they are, quite frequently, very slow. So slow in fact that the Nextcloud updater exits with an error when it comes to the step where it downloads the new files, even if you increase your PHP maximum execution time to 10 minutes (!)

There are ways to improve your Nextcloud upgrade experience, some of which are mentioned here. The remainder of this post assumes you have some sort of file access to your Nextcloud server.

Nextcloud updater times out when downloading

If you go to the data directory for Nextcloud during an update, there’s typically a directory called updater-nnn, where nnn is an arbitrary string. This is where Nextcloud keeps its files for the upgrader, including progression counters and downloads.

From the Nextcloud web interface, under Administration > Overview, there’s a Download now button that is actually just a link. You can copy the link and download the file manually. Once the download has completed, you can place it in the downloads sub-directory of the updater-nnn directory. Once done, re-start the upgrade process.
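
For example, once you have copied the Download now link, something along these lines should do it. The data directory path, the updater-nnn name, and the release file name below are placeholders, yours will differ:

cd /path/to/nextcloud/data/updater-nnn/downloads
wget "https://download.nextcloud.com/server/releases/nextcloud-XX.Y.Z.zip"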

Nextcloud updater gets stuck during the update process

In the same updater-nnn directory, there is a “step counter” so that Nextcloud can keep track of where in the update process it is currently positioned. If Nextcloud keeps getting stuck at the download step, or you are presented with “Step 4 is currently in process. Please reload this page later.”, you may try to remove the file .step and re-start the upgrade process.
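
In practice, that is a single file inside the updater directory; assuming the same placeholder paths as above, removing it looks like this:

rm /path/to/nextcloud/data/updater-nnn/.step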

Manually upgrading Nextcloud from CLI

If you have shell access to your Nextcloud instance, you may also attempt a manual upgrade, which may have different timeout settings (or no timeout settings). The short version is listed below; the long version can be found in the official Nextcloud documentation. Please make sure you read it before attempting a manual upgrade!

sudo -u www-data php /var/www/nextcloud/occ upgrade
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off

Replace www-data with the user your PHP and/or web server is running as, and correct the paths as needed.
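
Nextcloud also ships a command-line version of the updater itself (updater.phar), which downloads and extracts the new release from the shell and is subject to your CLI PHP limits rather than the web ones. Run it as the same user, and adjust the path to match your installation:

sudo -u www-data php /var/www/nextcloud/updater/updater.phar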


Ubuntu 20.04, Nginx, Redmine, Ruby, and Passenger-Phusion confusion

Redmine, Ruby, Passenger-Phusion, and Nginx make for an extremely confusing situation with dependencies, installation “instructions”, and “mismatching” package versions.

Redmine wants Ruby x, your Linux distribution has Ruby y, Passenger-Phusion only works with Ruby z, and you quite often end up in a loop somewhere. This article will not do anything to help that confusion, I’m afraid.

On Passenger-Phusion’s website, there’s an interesting explanation as to why you need to let Passenger-Phusion build Nginx for you, unless you can actually use everything pre-packaged, which you cannot if you want to use Ubuntu 20.04.LTS and Redmine 4.1.

Before you begin, you should know that installing Passenger in its Nginx integration mode involves extending Nginx with code from Passenger. However, Nginx does not support loadable modules. This means that in order to install Passenger’s Nginx integration mode, it is necessary to recompile Nginx from source. And that is exactly what we will do in this installation guide.

Now, if you head on over to the Nginx blog, you can read:

NGINX Open Source 1.11.5 and NGINX Plus Release R11 introduced binary compatibility for dynamic modules. This article explains how to compile third‑party modules for use with NGINX Open Source and NGINX Plus in a development environment.

That statement was made for Nginx 1.11. They are, at the time of this writing, at 1.19.

So basically, if you want to run Redmine 4.1 on Ubuntu 20.04.LTS, which ships with Ruby 2.7 and a Passenger-Phusion module that requires Ruby 2.7, you’re on your own, since Redmine 4.1 does not support Ruby 2.7. If you use RVM to install Ruby 2.6.x, you need to manually handle Passenger-Phusion, which eventually requires re-compiling Nginx, regardless of it supporting “dynamic modules”.
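
If you do go down that road anyway, the rough shape of it looks something like the sketch below. The exact Ruby version and the use of RVM are assumptions, and passenger-install-nginx-module is the step that ends up compiling Nginx for you:

rvm install 2.6.6
rvm use 2.6.6 --default
gem install passenger
passenger-install-nginx-module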

There is one small glimmer of hope:
Installing Passenger as a normal or dynamic Nginx module

On Ubuntu 18.04.LTS, the included Passenger, Nginx, and Ruby all work with Redmine 4.1.x, so that may be a smoother path to take until this is all “fixed”, if it ever gets fixed.

Checking LSI RAID storage status in Linux

Flying blind is no fun and many hardware manufacturers are notoriously bad at providing info tools for their hardware on Linux. If you have an LSI-chipped RAID controller, this may come in handy. The `storcli` utility replaces the MegaCLI utility and is provided by Broadcom.

You can get `storcli` here:
docs.broadcom.com/docs/007.1506.0000.0000_Unified_StorCLI-PUL.zip

(For Debian/Ubuntu distributions, you may need to install `alien` to convert the .rpm to .deb.)

The following script is what I use and it works for me:

#!/bin/bash
#
# Check LSI RAID storage status
#
# This will trigger on the output NOT being "Optl" (Optimal). This works for me,
# your mileage may vary. One problem in using more "verbose output" from the
# storcli64 binary is that it always (?) outputs some sort of legend with all
# the various status types explained, so it always contains the text "Degraded"
# for example. This minor script will simply search for what we want, which is
# a RAID in optimal state, and if NOT found, perform a full status scan and send
# it as an e-mail message to wherever it needs to be sent.
#
# Joaquim Homrighausen <joho@webbplatsen.se>
# 2020-12-15
#

STATUS=`/usr/local/bin/storcli64 /c0 /v0 show nolog |egrep "RAID6 Optl"`;
EGREP=$?

if [ "$EGREP" -ne 0 ]
then
    TO=my.email@address.ext
    STATUS=`/usr/local/bin/storcli64 /c0 show`;
    /usr/sbin/sendmail -i $TO <<MAIL_END
From: root@my.monitored.server
To: $TO
Subject: RAID WARNING @ `hostname`

$STATUS
MAIL_END
fi
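
For the script to be useful, it has to run on a schedule; a crontab entry along these lines works (the script path and the hourly interval are placeholders, adjust to taste):

# Check LSI RAID status once per hour
0 * * * * /usr/local/sbin/check-raid-status.sh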

Where did my Emacs color-theme go in Ubuntu 20.04.LTS?!

Having recently upgraded a small VPS from Ubuntu 18.04.LTS to Ubuntu 20.04.LTS, I ran into a little snag with Emacs and its color-theme (from the emacs-goodies-el package).

After some digging, it seems this is now done somewhat differently in Ubuntu 20.04.LTS.

This is what my .emacs file used to contain:

(require 'color-theme) 
(color-theme-initialize) 
(color-theme-charcoal-black)

This is what I changed it to:

(add-to-list 'custom-theme-load-path 
(file-name-as-directory "/usr/share/emacs/site-lisp/elpa/color-theme-modern-0.0.2"))
(load-theme 'charcoal-black t t) 
(enable-theme 'charcoal-black)

In addition, I had to install the elpa-color-theme-modern package (from the Ubuntu 20.04.LTS distribution).
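
If you need it, that package can be installed from the standard repositories:

sudo apt install elpa-color-theme-modern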


Checksum of file using Windows certutil

After getting files from a remote source, it is often a good idea to get some sort of fingerprint or checksum of the file, and verify it against a known value published in a place or on a website you trust.

For Windows, this can be accomplished with:

certutil -hashfile filename.ext sha256

sha256 can be any of MD2, MD4, MD5, SHA1, SHA256, SHA384, or SHA512.


pdftk and php-pdftk on Ubuntu 18.04 without using snap

During a product launch I recently came across an “interesting” issue involving pdftk (and php-pdftk). Some of the developers had made assumptions (ever heard that one before?) about the operating environment and how things were/are configured.

These assumptions were based on a development environment that in no way reflected the final production environment (ever heard that one before?). In this particular case, they were expecting the great PDF toolkit (pdftk) to be available and working just like it did in their development environment.

To summarize the issue: pdftk has been removed from Ubuntu 18.04 due to dependency issues. The “recommended” solution is to install pdftk using snap. This, in itself, is not a bad recommendation. But in a web server environment, it may put you in a place you don’t want to be in.

So after a few hours of toying with ideas and testing various things, I figured there must be some Debian-like package that would actually work when installed on Ubuntu 18.04 and that is not a snap package.

There is.

Later versions (or packages) of pdftk now exist as pdftk-java, and they do work with php-pdftk as well.

In my case, I located pdftk-java_3.1.1-1_all.deb and installed it. Or tried to rather. It has a number of dependencies that you will see for yourself. You’ll need to decide if their “weight” makes it worthwhile for you to go down this path. But it was one, reasonably good, way for us to solve the problem.

The developers you ask? They have been sent to /dev/codersgulag/cobol and will spend a number of solar iterations there.

(The file I ended up using was http://ftp.debian.org/debian/pool/main/p/pdftk-java/pdftk-java_3.1.1-1_all.deb, and it does work on Ubuntu 18.04.LTS)
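
For reference, fetching that package and letting apt sort out the dependencies looks roughly like this (adjust the file name if you pick a newer version):

wget http://ftp.debian.org/debian/pool/main/p/pdftk-java/pdftk-java_3.1.1-1_all.deb
sudo apt install ./pdftk-java_3.1.1-1_all.deb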


Webmin, Virtualmin and APT for Ubuntu and Debian

I often use Webmin and Virtualmin to manage basic stuff on Linux servers, mostly because others sometimes need to change minor settings on these servers, and they may or may not be very familiar with doing things from the CLI.

You can, of course, update Webmin and Virtualmin manually, from within Webmin. But if you’re using APT, there is an automated, better way of keeping these lovely software packages up to date.

Webmin

Create a file in /etc/apt/sources.list.d/ like webmin.list

Add the following line to that file:

deb https://download.webmin.com/download/repository sarge contrib

Add Jamie Cameron’s GPG key for the repository like so:

cd /root
wget https://download.webmin.com/jcameron-key.asc
apt-key add jcameron-key.asc

Finalize everything with

apt-get install apt-transport-https
apt-get update

You may now install/update Webmin via APT (apt-get, aptitude, etc).
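
For example, installing (or upgrading) it then becomes:

apt-get install webmin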

Virtualmin

Create a file in /etc/apt/sources.list.d/ like virtualmin.list

For Ubuntu 16.04.LTS (“Xenial”), add the following to that file:

deb http://software.virtualmin.com/vm/6/gpl/apt virtualmin-xenial main
deb http://software.virtualmin.com/vm/6/gpl/apt virtualmin-universal main

There are, of course, sources available for other distributions too. Simply replace xenial above with the codename of the distribution you’re running. You can find a list of the Debian-based distributions here: software.virtualmin.com/vm/6/gpl/apt/dists/

Add the Virtualmin GPG key for the repository like so:

cd /root
wget http://software.virtualmin.com/lib/RPM-GPG-KEY-virtualmin-6
apt-key add RPM-GPG-KEY-virtualmin-6

Finalize everything with

apt-get update

You may now install/update Virtualmin via APT (apt-get, aptitude, etc). You can find some more information about this in relation to Virtualmin on the Virtualmin forum.

Forcing apt-get to use IPv4

When or if you run into trouble with apt-get and IPv6 connections timing out or not resolving properly at all, it may be a good idea to simply prevent apt-get from using IPv6.

Use

-o Acquire::ForceIPv4=true

when running apt-get, or create /etc/apt/apt.conf.d/99force-ipv4 and put

Acquire::ForceIPv4 "true";

in it.
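
As a one-off, the whole command would look something like:

apt-get -o Acquire::ForceIPv4=true update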

If this does not work for you, you may want to have a look at /etc/gai.conf (this will, however, affect your system on a deeper level for IPv4 vs IPv6 connectivity). If you’re not interested in IPv6, it should cause no problems.


URL re-writing with nginx, PHP, and WordPress

There are many posts about nginx, re-directs, PHP, and WordPress. There are somewhat fewer posts that talk about (internal) re-writes, where the request from the web browser is rewritten internally so that it is served by a different resource than the one that was requested.

For example, I may want a request for https://mysite.foo/cool/penguin to actually be served by https://mysite.foo/coolstuff.php?id=penguin, or simply set up an alias such as https://mysite.foo/cool/penguin to be served by https://mysite.foo/cool/linux, but preserve the URL in the browser address bar.

With PHP-FPM and nginx, you run into an additional problem, which is the fastcgi_param variables that are passed from nginx to PHP-FPM. So even if you have really fancy URL re-writing configured (and working), the end result may not be passed on to PHP-FPM from nginx.

To solve this, you should look into this construct, which is present in many nginx configurations as a default setup:

fastcgi_param REQUEST_URI $request_uri;

Since your needs probably differ from mine, I won’t make this post any longer than it has to be, but the fastcgi_param line above may be a good starting point if you’re experiencing problems with nginx, PHP-FPM, and URL re-writing.
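
To make that a bit more concrete, here is a minimal sketch of an internal re-write combined with a PHP-FPM handler. The /cool/ location, the coolstuff.php script, and the PHP-FPM socket path are illustrative assumptions, not something your setup will match as-is:

location /cool/ {
    # Internally rewrite /cool/penguin to /coolstuff.php?id=penguin.
    # "last" restarts location matching, so the request is handled by the
    # PHP block below while the browser keeps showing /cool/penguin.
    rewrite ^/cool/(.+)$ /coolstuff.php?id=$1 last;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Make sure the original request URI is what gets passed to PHP-FPM.
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}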

Good luck!