Planet BitFolk

BitFolk Wiki: User:Strugglers/Maintenance/2021-04-Re-racking

draft

New page

This is a draft. The dates aren't agreed yet and the details might change.

<syntaxhighlight lang="text">
Hello all,

== TL;DR:

We need to relocate some servers to a different rack within Telehouse.

On Tuesday 20 April 2021 at some point in the 2 hour window starting at 20:00Z (21:00 BST) all customers on the following server will have their VMs either powered off or suspended to storage:

* hen.bitfolk.com

We expect to have it powered back on within 30 minutes.

On Tuesday 27 April 2021 at some point in the 4 hour window starting at 22:00Z (23:00 BST) all customers on the following servers will have their VMs either powered off or suspended to storage:

* clockwork.bitfolk.com
* hobgoblin.bitfolk.com
* jack.bitfolk.com
* leffe.bitfolk.com
* macallan.bitfolk.com
* paradox.bitfolk.com

We expect the work on each server to take less than 30 minutes.

If you can't tolerate a ~30 minute outage at these times then please contact support as soon as possible to ask for your VM to be moved to a server that won't be part of this maintenance.

== Maintenance Background

Our colo provider needs to rebuild one of their racks that houses 7 of our servers. This is required because the infrastructure in the rack (PDUs, switches etc) is of ten-year-old vintage and all needs replacing. To facilitate this, all customer hardware in that rack will need to be moved to a different rack or sit outside of the rack while it is rebuilt. We are going to have to move our 7 servers to a different rack.

This is a significant piece of work which is going to affect several hundred of our customers, more than 70% of the customer base. Unfortunately it is unavoidable.

== Networking upgrade

We will also take the opportunity to install 10 gigabit NICs in the servers which are moved. The main benefit of this will be faster inter-server data transfer for when we want to move customer services about. The current 1GE NICs limit this to about 90MiB/sec.

== Suspend & Restore

If you opt in to suspend & restore then instead of shutting your VM down we will suspend it to storage and then when the server boots again it will be restored. That means that you should not experience a reboot, just a period of paused execution. You may find this less disruptive than a reboot, but it is not without risk. Read more here:

https://tools.bitfolk.com/wiki/Suspend_and_restore

== Avoiding the Maintenance

If you cannot tolerate a ~30 minute outage during the maintenance windows listed above then please contact support to agree a time when we can move your VM to a server that won't be part of the maintenance.

Doing so will typically take just a few seconds plus the time it takes your VM to shut down and boot again and nothing will change about your VM.

If you have opted in to suspend & restore then we'll use this to do a "semi-live" migration. This will appear to be a minute or two of paused execution.

Moving your VM is extra work for us which is why we're not doing it by default for all customers, but if you prefer that to experiencing the outage then we're happy to do it at a time convenient to you, as long as we have time to do it and available spare capacity to move you to. If you need this then please ask as soon as possible to avoid disappointment.

It won't be possible to change the date/time of the planned work on an individual customer basis. This work involves 7 of our servers, will affect several hundred of our customers, and also has needed to be scheduled with our colo provider and some of their other customers. The only per-customer thing we may be able to do is move your service ahead of time at a time convenient to you.

== Rolling Upgrades Confusion

We're currently in a cycle of rolling software upgrades to our servers. Many of you have already received individual support tickets to schedule that. It involves us moving your VM from one of our servers to another and full details are given in the support ticket.

This has nothing to do with the maintenance that's under discussion here and we realise that it's unfortunately very confusing to have both things happening at the same time. We did not know that moving our servers would be necessary when we started the rolling upgrades.

We believe we can avoid moving any customer from a server that is not part of this maintenance onto one that will be part of this maintenance. We cannot avoid moving customers between servers that are both going to be affected by this maintenance. For example, at the time of writing, customer services are being moved off of jack.bitfolk.com and most of them will end up on hobgoblin.bitfolk.com.

== Further Notifications

Every customer is supposed to be subscribed to this announcement mailing list, but no doubt some aren't. The movement of customer services between our servers may also be confusing for people, so we will send a direct email notification to the main contact of affected customers a week before the work is due to take place.

So, on Tuesday 13 April we'll send a direct email about this to customers that are hosted on hen.bitfolk.com, and then on Tuesday 20 April we'll send a similar email to customers on all the rest of the affected servers.

== 20 April Will Be a Test Run

We are only planning to move one server on 20 April. The reasons for this are:

* We want to check our assumptions about how long this work will take, per server.
* We're changing the hardware configuration of the server by adding 10GE NICs, and we want to make sure that configuration is stable.

The timings for the maintenance on 27 April may need to be altered if the work on 20 April shows our guesses to be wildly wrong.

== Frequently Asked Questions

=== How do I know if I will be affected?

If your VM is hosted on one of the servers that will be moved then you are going to be affected. There are a few different ways that you can tell which server you are on (see the example after this list):

1. It's listed on https://panel.bitfolk.com/
2. It's in DNS when you resolve <youraccountname>.console.bitfolk.com
3. It's on your invoices and data transfer email summaries
4. You can see it on a `traceroute` or `mtr` to or from your VPS.
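
For example, for method 2 above you can do the lookup from a shell ("myaccount" below is just a placeholder for your own account name):

  $ host myaccount.console.bitfolk.com

If the answer doesn't point at one of the servers listed in this email then you won't be affected by this maintenance.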

=== If you can "semi-live" migrate VMs, why don't you just do that?

* This maintenance will involve some 70% of our customer base, so we don't actually have enough spare hardware to move customers to.
* Moving the data takes significant time at 1GE network speeds.

For these reasons we think that it will be easier for most customers to just accept a ~30 minute outage. Those who can't tolerate such a disruption will be able to have their VMs moved to servers that aren't going to be part of the maintenance.
</syntaxhighlight>

Alan Pope: Actually Upgrading Ubuntu Server

Yesterday I wrote about my attempt to upgrade one of my HP Microservers, running Ubuntu 18.04 LTS, to Ubuntu 20.04 LTS. Well, today I had another go. Here’s what happened.

I followed the recommendation from yesterday, to compress the initrd.img using xz compression rather than the previous default gzip. Previously the upgrade failed because it needed 140M disk space in /boot. With the change to the compression scheme, I now have 154M, which should be enough to start the upgrade.
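
For the record, the change itself is only a couple of commands - a sketch assuming the stock initramfs-tools paths, so check what's actually in the file before blindly sed-ing it:

# switch the initramfs compression from the gzip default to xz
sudo sed -i 's/^COMPRESS=.*/COMPRESS=xz/' /etc/initramfs-tools/initramfs.conf
# rebuild the initrd images in /boot for all installed kernels
sudo update-initramfs -u -k all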

alan@robby:~$ df -h /boot
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       226M   57M  154M  27% /boot

Well, I started the upgrade with sudo do-release-upgrade and once again had to reboot, then started the upgrade again.

Just like last time I needed to okay the change to the sources.list and accept that third party repositories were disabled.

Updating repository information

No valid mirror found 

While scanning your repository information, no mirror entry for the 
upgrade was found. This can happen if you run an internal mirror or 
if the mirror information is out-of-date. 

Do you want to rewrite your 'sources.list' file anyway? If you choose 
'Yes' here, it will update all 'bionic' to 'focal' entries. 
If you select 'No', the upgrade will cancel. 

Continue [yN] y

Third party sources disabled 

Some third party entries in your sources.list were disabled. You can 
re-enable them after the upgrade with the 'software-properties' tool 
or your package manager. 

To continue please press [ENTER]


Okay, so we got past the size check of /boot. That’s good. As usual we get a summary of what’s going to happen. Some things removed, some upgraded and some new things to come. It’s estimating the speed of download based on the fact I have a local mirror.


Do you want to start the upgrade? 


15 installed packages are no longer supported by Canonical. You can 
still get support from the community. 

18 packages are going to be removed. 252 new packages are going to be 
installed. 960 packages are going to be upgraded. 

You have to download a total of 729 M. This download should take 
about 1 minute with your connection. 

Installing the upgrade can take several hours. Once the download has 
finished, the process cannot be cancelled. 

 Continue [yN]  Details [d]

Let’s hit “d” for fun. Normally I just hit “y” and let it rip. We have some packages which are no longer supported:

No longer supported: bzr dh-python dnsutils geoip-database ifenslave 
  ifupdown libdumbnet1 libegl1-mesa libncurses5 libncursesw5 
  libtinfo5 nmap python-dateutil uidmap vlan 

Some which will be removed…

Remove: dstat libapt-inst2.0 libapt-pkg5.0 libcupscgi1 libcupsmime1 
  libcupsppdc1 libpolkit-backend-1-0 libsnmp30 
Remove (was auto installed) libldb1 libmailutils5 libpython-stdlib 
  libsensors4 python python-keyrings.alt python-ldb python-minimal 
  python-samba python-tdb 

… and a bunch which will be installed …

Install: alsa-topology-conf alsa-ucm-conf amd64-microcode 
  bind9-dnsutils bind9-libs bolt brz chafa cpp-9 cryptsetup-initramfs 
  cryptsetup-run finalrd fonts-urw-base35 fwupd fwupd-signed g++-9 
  gcc-10-base gcc-9 gcc-9-base gir1.2-packagekitglib-1.0 
  golang-1.13-go golang-1.13-race-detector-runtime golang-1.13-src 
  guile-2.2-libs intel-microcode iucode-tool kpartx libaio1 
  libappstream4 libapt-pkg6.0 libarchive13 libargon2-1 
  libarray-intspan-perl libasan5 libasync-mergepoint-perl 
  libboost-filesystem1.71.0 libboost-iostreams1.71.0 
  libboost-thread1.71.0 libbrotli1 libcanberra0 libcapnp-0.7.0 
  libcapture-tiny-perl libcbor0.6 libchafa0 libcommon-sense-perl 
  libconst-fast-perl libcontextual-return-perl libcpanel-json-xs-perl 
  libcrypt-dev libcrypt1 libctf-nobfd0 libctf0 libdbus-glib-1-2 
  libdevel-size-perl libdigest-bubblebabble-perl libdns-export1109 
  libefiboot1 libefivar1 libevent-2.1-7 libffi7 libfido2-1 
  libfile-find-rule-perl libfl2 libfont-ttf-perl libfuture-perl 
  libfwupd2 libfwupdplugin1 libgcab-1.0-0 libgcc-9-dev libgcc-s1 
  libgdbm6 libgfortran5 libgitlab-api-v4-perl libglib2.0-bin 
  libgpg-error-l10n libgstreamer1.0-0 libgutenprint-common 
  libgutenprint9 libhash-fieldhash-perl libhogweed5 
  libhttp-tiny-multipart-perl libicu66 libilmbase24 libimagequant0 
  libio-async-loop-epoll-perl libio-async-perl libio-prompter-perl 
  libip4tc2 libip6tc2 libisc-export1105 libisl22 libjs-sphinxdoc 
  libjs-underscore libjson-c4 libjson-maybexs-perl libjson-perl 
  libjson-xs-perl libldb2 liblinear4 liblinux-epoll-perl 
  liblist-someutils-perl liblist-someutils-xs-perl libllvm11 liblmdb0 
  liblog-any-adapter-screen-perl liblog-any-perl liblouis20 
  liblouisutdml9 liblvm2cmd2.03 libmagickcore-6.q16-6 
  libmagickcore-6.q16-6-extra libmagickwand-6.q16-6 libmailutils6 
  libmoox-aliases-perl libmoox-struct-perl libmysqlclient21 
  libncurses6 libncursesw6 libnet-dns-sec-perl libnettle7 libnftnl11 
  libntfs-3g883 libobject-id-perl libonig5 libopenexr24 libopenjp2-7 
  libpackagekit-glib2-18 libpcre2-8-0 libperl4-corelibs-perl 
  libperl5.30 libplymouth5 libpoppler-cpp0v5 libpoppler97 libprocps8 
  libprotobuf-lite17 libprotobuf17 libpython2-stdlib libpython3.8 
  libpython3.8-minimal libpython3.8-stdlib libqpdf26 librdmacm1 
  libre-engine-re2-perl libre2-5 libreadline8 libreadonly-perl 
  libref-util-perl libref-util-xs-perl libregexp-pattern-perl libsane 
  libsensors-config libsensors5 libsereal-decoder-perl 
  libsereal-encoder-perl libsereal-perl libsgutils2-2 libsmbios-c2 
  libsnmp35 libstdc++-9-dev libstemmer0d libstring-shellquote-perl 
  libstruct-dumb-perl libterm-readkey-perl libtest-fatal-perl 
  libtest-refcount-perl libtinfo6 libtorrent21 libtss2-esys0 
  libtype-tiny-perl libtype-tiny-xs-perl libtypes-serialiser-perl 
  libubsan1 libuchardet0 libunbound8 liburcu6 libuv1 libvorbisfile3 
  libvulkan1 libwant-perl libxcb-randr0 libxkbfile1 
  libxml-writer-perl libxmlb1 linux-generic linux-headers-5.4.0-67 
  linux-headers-5.4.0-67-generic linux-image-5.4.0-67-generic 
  linux-image-generic linux-modules-5.4.0-67-generic 
  linux-modules-extra-5.4.0-67-generic logsave lua-lpeg 
  lxd-agent-loader lz4 mesa-vulkan-drivers multipath-tools 
  nmap-common node-normalize.css packagekit packagekit-tools pci.ids 
  perl-modules-5.30 python-configparser python-entrypoints 
  python-is-python2 python2 python2-minimal python3-blinker 
  python3-breezy python3-crypto python3-deprecated python3-distro 
  python3-dnspython python3-dulwich python3-entrypoints 
  python3-fastimport python3-future python3-github python3-gitlab 
  python3-hamcrest python3-jwt python3-keyring python3-kiwisolver 
  python3-launchpadlib python3-lazr.restfulclient python3-lazr.uri 
  python3-ldb python3-markdown python3-oauthlib python3-packaging 
  python3-pexpect python3-ptyprocess python3-pygments python3-samba 
  python3-secretstorage python3-simplejson python3-talloc python3-tdb 
  python3-wadllib python3-wrapt python3.8 python3.8-minimal 
  sbsigntool secureboot-db sg3-utils sg3-utils-udev 
  sound-theme-freedesktop systemd-timesyncd thermald 
  thin-provisioning-tools tpm-udev usb.ids 

With a load being upgraded too…

Upgrade: accountsservice acl acpid adduser adwaita-icon-theme apache2 
  apache2-bin apache2-data apache2-utils apparmor apport 
  apport-symptoms apt apt-transport-https apt-utils at at-spi2-core 
  attr avahi-daemon base-files base-passwd bash bash-completion bc 
  bcache-tools bind9-host binutils binutils-common 
  binutils-x86-64-linux-gnu bsdmainutils bsdutils btrfs-progs 
  build-essential busybox-initramfs busybox-static byobu bzip2 bzr 
  ca-certificates ca-certificates-java cloud-guest-utils 
  cloud-initramfs-copymods cloud-initramfs-dyn-netconf cockpit-bridge 
  cockpit-pcp colord colord-data command-not-found console-setup 
  console-setup-linux coreutils cpio cpp cpp-7 cron cryptsetup 
  cryptsetup-bin cups cups-browsed cups-client cups-common 
  cups-core-drivers cups-daemon cups-filters 
  cups-filters-core-drivers cups-ipp-utils cups-ppdc 
  cups-server-common curl dash dbus dbus-user-session dbus-x11 
  dconf-gsettings-backend dconf-service dctrl-tools debconf 
  debconf-i18n debianutils debmirror devscripts dh-python diffstat 
  diffutils dirmngr distro-info-data dmeventd dmidecode dmsetup 
  dns-root-data dnsmasq-base dnsutils dos2unix dosfstools dpkg 
  dpkg-dev dput e2fslibs e2fsprogs e2fsprogs-l10n ed eject ethtool 
  fakeroot fdisk file findutils fontconfig fontconfig-config 
  fonts-lyx fonts-noto-mono fonts-ubuntu-console 
  fonts-ubuntu-font-family-console friendly-recovery ftp fuse g++ 
  g++-7 gawk gcc gcc-7 gcc-7-base gcc-8-base gcr gddrescue gdisk 
  geoip-database gettext gettext-base ghostscript gir1.2-glib-2.0 git 
  git-man glances glib-networking glib-networking-common 
  glib-networking-services gnupg gnupg-l10n gnupg-utils 
  golang-docker-credential-helpers golang-go 
  golang-race-detector-runtime golang-src 
  google-cloud-print-connector gpg gpg-agent gpg-wks-client 
  gpg-wks-server gpgconf gpgsm gpgv grep groff-base grub-common 
  grub-legacy-ec2 grub-pc grub-pc-bin grub2-common 
  gsettings-desktop-schemas gtk-update-icon-cache guile-2.0-libs gzip 
  hdparm hostname htop ibverbs-providers iftop ifupdown imagemagick 
  imagemagick-6-common imagemagick-6.q16 info init 
  init-system-helpers initramfs-tools initramfs-tools-bin 
  initramfs-tools-core install-info intltool-debian iotop iperf 
  iproute2 iptables iputils-arping iputils-ping iputils-tracepath 
  irqbalance isc-dhcp-client isc-dhcp-common iso-codes iw java-common 
  jq kbd keyboard-configuration klibc-utils kmod krb5-locales 
  landscape-common language-pack-en language-pack-en-base 
  language-selector-common less libaccountsservice0 libacl1 
  libalgorithm-diff-perl libalgorithm-diff-xs-perl libapparmor-perl 
  libapparmor1 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 
  libaprutil1-ldap libapt-pkg-perl libarchive-zip-perl libargon2-0 
  libasan4 libasn1-8-heimdal libasound2 libasound2-data libassuan0 
  libatk-bridge2.0-0 libatk-wrapper-java libatk-wrapper-java-jni 
  libatk1.0-0 libatk1.0-data libatm1 libatomic1 libatspi2.0-0 
  libattr1 libaudit-common libaudit1 libavahi-client3 
  libavahi-common-data libavahi-common3 libavahi-core7 libavahi-glib1 
  libb-hooks-endofscope-perl libb-hooks-op-check-perl libbinutils 
  libblas3 libblkid1 libbluetooth3 libbsd0 libbz2-1.0 libc-bin 
  libc-dev-bin libc6 libc6-dev libcairo-gobject2 libcairo2 libcap-ng0 
  libcap2 libcap2-bin libcc1-0 libcephfs2 libcgi-fast-perl 
  libcgi-pm-perl libcilkrts5 libclass-method-modifiers-perl 
  libclass-xsaccessor-perl libclone-perl libcolord2 libcolorhug2 
  libcom-err2 libcomerr2 libcroco3 libcrypto++-dev libcrypto++6 
  libcryptsetup12 libcups2 libcupsfilters1 libcupsimage2 
  libcurl3-gnutls libcurl4 libdaemon0 libdatrie1 libdb5.3 libdbus-1-3 
  libdconf1 libdebconfclient0 libdevel-callchecker-perl 
  libdevmapper-event1.02.1 libdevmapper1.02.1 libdigest-hmac-perl 
  libdistro-info-perl libdjvulibre-text libdjvulibre21 libdpkg-perl 
  libdrm-amdgpu1 libdrm-common libdrm-intel1 libdrm-nouveau2 
  libdrm-radeon1 libdrm2 libdumbnet1 libedit2 libegl-mesa0 libegl1 
  libegl1-mesa libelf1 libepoxy0 liberror-perl libexif12 libexpat1 
  libexporter-tiny-perl libext2fs2 libfakeroot libfcgi-perl libfdisk1 
  libfftw3-double3 libfile-basedir-perl libfile-copy-recursive-perl 
  libfile-fcntllock-perl libfile-homedir-perl libfile-which-perl 
  libflac8 libfontconfig1 libfontembed1 libfontenc1 libfreetype6 
  libfribidi0 libfuse2 libgbm1 libgc1c2 libgcc-7-dev libgcc1 
  libgck-1-0 libgcr-base-3-1 libgcr-ui-3-1 libgcrypt20 libgd3 
  libgdbm-compat4 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-bin 
  libgdk-pixbuf2.0-common libgeoip1 libgetopt-long-descriptive-perl 
  libgfortran4 libgif7 libgirepository-1.0-1 libgit-wrapper-perl 
  libgl1 libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libglib2.0-0 
  libglib2.0-data libglvnd0 libglx-mesa0 libglx0 libgmp10 
  libgnutls-openssl27 libgnutls30 libgomp1 libgpg-error0 libgpgme11 
  libgphoto2-6 libgphoto2-l10n libgphoto2-port12 libgraphite2-3 
  libgs9 libgs9-common libgsasl7 libgssapi-krb5-2 libgssapi3-heimdal 
  libgtk-3-0 libgtk-3-bin libgtk-3-common libgtk2.0-0 
  libgtk2.0-common libgudev-1.0-0 libgusb2 libharfbuzz0b 
  libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal 
  libhtml-form-perl libhtml-parser-perl libhtml-tagset-perl 
  libhtml-tree-perl libhttp-cookies-perl libhttp-daemon-perl 
  libhttp-date-perl libhttp-message-perl libhttp-negotiate-perl 
  libhx509-5-heimdal libibverbs1 libice-dev libice6 libidn11 
  libidn2-0 libieee1284-3 libijs-0.35 libio-pty-perl 
  libio-socket-ssl-perl libio-stringy-perl libipc-run-perl 
  libipc-system-simple-perl libiptc0 libisns0 libitm1 libiw30 
  libjansson4 libjbig2dec0 libjpeg-turbo8 libjq1 libjs-jquery 
  libjson-glib-1.0-0 libjson-glib-1.0-common libk5crypto3 
  libkeyutils1 libklibc libkmod2 libkrb5-26-heimdal libkrb5-3 
  libkrb5support0 libkyotocabinet16v5 liblapack3 liblcms2-2 
  libldap-2.4-2 libldap-common liblist-moreutils-perl libllvm10 
  liblocale-gettext-perl liblog-agent-perl liblouis-data 
  liblouisutdml-bin liblouisutdml-data liblsan0 libltdl7 liblua5.2-0 
  liblua5.3-0 liblwp-mediatypes-perl liblwp-protocol-https-perl 
  liblz4-1 liblzma5 liblzo2-2 libmagic-mgc libmagic1 
  libmailtools-perl libmaxminddb0 libmbim-glib4 libmbim-proxy 
  libmirclient9 libmircommon7 libmircore1 libmirprotobuf3 libmm-glib0 
  libmoo-perl libmount1 libmpdec2 libmpfr6 libmpx2 libmspack0 
  libncurses5 libncursesw5 libndp0 libnet-dns-perl libnet-http-perl 
  libnet-ip-perl libnet-libidn-perl libnet-ssleay-perl 
  libnetfilter-conntrack3 libnetplan0 libnewt0.52 libnfnetlink0 
  libnghttp2-14 libnl-3-200 libnl-genl-3-200 libnl-route-3-200 libnm0 
  libnpth0 libnspr4 libnss-mdns libnss-systemd libnss3 libntlm0 
  libnuma1 libogg0 libp11-kit0 libpackage-stash-perl 
  libpackage-stash-xs-perl libpam-modules libpam-modules-bin 
  libpam-runtime libpam-systemd libpam0g libpango-1.0-0 
  libpangocairo-1.0-0 libpangoft2-1.0-0 libpaper-utils libpaper1 
  libparams-classify-perl libparams-util-perl libparams-validate-perl 
  libparse-debianchangelog-perl libparted2 libpath-iterator-rule-perl 
  libpath-tiny-perl libpcap0.8 libpci3 libpciaccess0 libpcp-gui2 
  libpcp-import1 libpcp-mmv1 libpcp-pmda-perl libpcp-pmda3 
  libpcp-trace2 libpcp-web1 libpcp3 libpcre3 libpcsclite1 
  libperlio-gzip-perl libpfm4 libpipeline1 libpixman-1-0 libpng16-16 
  libpolkit-agent-1-0 libpolkit-gobject-1-0 libpopt0 libproxy1v5 
  libpsl5 libpthread-stubs0-dev libpulse0 libpython2.7 
  libpython2.7-minimal libpython2.7-stdlib libpython3-stdlib 
  libqmi-glib5 libqmi-proxy libquadmath0 librados2 libreadline5 
  libregexp-pattern-license-perl librest-0.7-0 libroken18-heimdal 
  librole-tiny-perl librsvg2-2 librsvg2-common librtmp1 
  libsane-common libsane1 libsasl2-2 libsasl2-modules 
  libsasl2-modules-db libseccomp2 libsecret-1-0 libsecret-common 
  libselinux1 libsemanage-common libsemanage1 libsepol1 libsigsegv2 
  libslang2 libsm-dev libsm6 libsmartcols1 libsndfile1 libsnmp-base 
  libsocket6-perl libsort-key-perl libsoup-gnome2.4-1 libsoup2.4-1 
  libsqlite3-0 libss2 libssh-4 libssl-dev libssl-doc libssl1.1 
  libstdc++-7-dev libstdc++6 libstrictures-perl 
  libstring-copyright-perl libsub-identify-perl libsub-name-perl 
  libsub-quote-perl libsystemd0 libtalloc2 libtasn1-6 libtcl8.6 
  libtdb1 libteamdctl0 libtevent0 libtext-charwidth-perl 
  libtext-iconv-perl libtext-wrapi18n-perl libthai-data libthai0 
  libtiff5 libtimedate-perl libtinfo5 libtk8.6 libtokyocabinet9 
  libtsan0 libubsan0 libudev1 libunbound-dev libunicode-utf8-perl 
  libunistring2 libunwind8 liburi-perl libusb-0.1-4 libusb-1.0-0 
  libutempter0 libuuid1 libvariable-magic-perl libvorbis0a 
  libvorbisenc2 libwayland-client0 libwayland-cursor0 libwayland-egl1 
  libwayland-egl1-mesa libwayland-server0 libwbclient0 
  libwind0-heimdal libwmf0.2-7 libwrap0 libwww-perl 
  libwww-robotrules-perl libx11-6 libx11-data libx11-dev libx11-doc 
  libx11-xcb1 libx86-1 libxau-dev libxau6 libxcb-dri2-0 libxcb-dri3-0 
  libxcb-glx0 libxcb-present0 libxcb-render0 libxcb-shape0 
  libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxcb1 libxcb1-dev 
  libxcomposite1 libxcursor1 libxdamage1 libxdmcp-dev libxdmcp6 
  libxext6 libxfixes3 libxft2 libxi6 libxinerama1 libxkbcommon0
  libxml-libxml-perl libxml-parser-perl libxml-sax-expat-perl 
  libxml-sax-perl libxml-simple-perl libxml2 libxmlrpc-core-c3 
  libxmlsec1 libxmlsec1-openssl libxmu6 libxmuu1 libxrandr2 
  libxslt1.1 libxss1 libxtables12 libxxf86dga1 libxxf86vm1 
  libyaml-0-2 libyaml-libyaml-perl libzstd1 licensecheck lintian 
  linux-base linux-firmware linux-headers-generic linux-libc-dev 
  lm-sensors locales login logrotate lsb-base lsb-release lshw lsof 
  ltrace lvm2 lynx lynx-common mailutils mailutils-common make 
  makedev man-db manpages manpages-dev mawk mdadm mime-support 
  mlocate modemmanager mosh motd-news-config mount mtr-tiny mutt 
  mysql-common nano ncdu ncurses-base ncurses-bin ncurses-term 
  neofetch net-tools netbase netcat-openbsd netdiscover nethogs 
  netplan.io network-manager network-manager-pptp networkd-dispatcher 
  nmap nmon ntfs-3g open-iscsi open-vm-tools openjdk-8-jdk 
  openjdk-8-jdk-headless openjdk-8-jre openjdk-8-jre-headless 
  openssh-client openssh-server openssh-sftp-server openssl os-prober 
  overlayroot p0f parted passwd pastebinit patch pciutils pcp 
  pcp-conf perl perl-base perl-openssl-defaults pinentry-gnome3 
  pkg-config plymouth plymouth-theme-ubuntu-text pm-utils pngquant 
  policykit-1 pollinate poppler-data poppler-utils popularity-contest 
  postfix powermgmt-base ppp pptp-linux printer-driver-gutenprint 
  procps psmisc publicsuffix python-apt-common python-asn1crypto 
  python-cffi-backend python-crypto python-cryptography 
  python-dateutil python-dbus python-dnspython python-enum34 
  python-gi python-httplib2 python-idna python-ipaddress 
  python-keyring python-matplotlib-data python-oauth 
  python-pkg-resources python-simplejson python-six 
  python-zope.interface python2.7 python2.7-minimal python3 
  python3-apport python3-apt python3-asn1crypto python3-attr 
  python3-automat python3-bottle python3-certifi python3-cffi-backend 
  python3-chardet python3-click python3-colorama 
  python3-commandnotfound python3-configobj python3-constantly 
  python3-cryptography python3-cycler python3-dateutil python3-dbus 
  python3-debconf python3-debian python3-distro-info 
  python3-distupgrade python3-distutils python3-docker 
  python3-dockerpycreds python3-gdbm python3-gi python3-gpg 
  python3-httplib2 python3-hyperlink python3-idna python3-incremental 
  python3-influxdb python3-lib2to3 python3-magic python3-matplotlib 
  python3-minimal python3-netifaces python3-newt python3-numpy 
  python3-olefile python3-openssl python3-pam python3-pcp python3-pil 
  python3-pkg-resources python3-ply python3-problem-report 
  python3-psutil python3-pyasn1 python3-pyasn1-modules 
  python3-pycryptodome python3-pycurl python3-pyparsing python3-pysmi 
  python3-pysnmp4 python3-pystache python3-requests 
  python3-requests-unixsocket python3-serial python3-service-identity 
  python3-six python3-software-properties python3-systemd python3-tk 
  python3-twisted python3-twisted-bin python3-tz python3-unidiff 
  python3-update-manager python3-urllib3 python3-websocket 
  python3-xdg python3-yaml python3-zope.interface qpdf 
  readline-common rename resolvconf rsync rsyslog rtorrent samba 
  samba-common samba-common-bin samba-dsdb-modules samba-libs 
  samba-vfs-modules sane-utils screen sed sensible-utils sgml-base 
  shared-mime-info smartmontools snap-confine snapd 
  software-properties-common sosreport squashfs-tools ssh-import-id 
  strace sudo systemd systemd-sysv sysvinit-utils t1utils tar tasksel 
  tasksel-data tcpd tcpdump tdb-tools telnet tmux tree tzdata 
  ubuntu-advantage-tools ubuntu-cloudimage-keyring 
  ubuntu-core-launcher ubuntu-keyring ubuntu-minimal ubuntu-mono 
  ubuntu-release-upgrader-core ubuntu-server ubuntu-standard ucf udev 
  ufw uidmap unattended-upgrades unrar unzip update-inetd 
  update-manager-core update-notifier-common uptimed usb-modeswitch 
  usb-modeswitch-data usbutils util-linux uuid-runtime vim vim-common 
  vim-runtime vim-tiny vlan vnstat w3m wamerican wdiff wget whiptail 
  wireless-regdb wireless-tools wpasupplicant x11-common x11-utils 
  x11proto-core-dev x11proto-dev xauth xdg-user-dirs xfsprogs 
  xkb-data xml-core xtrans-dev xxd xz-utils zerofree zlib1g 

After this I hit “y” to start the upgrade.

We’re off! Hundreds of packages downloading at ~10-14MB/s. Wheeee!

Get:540 http://192.168.1.8/ubuntu focal/main amd64 python3-requests all 2.22.0-2ubuntu1 [47.1 kB]
Get:541 http://192.168.1.8/ubuntu focal-updates/main amd64 python3-urllib3 all 1.25.8-2ubuntu0.1 [88.3 kB]                             
Get:542 http://192.168.1.8/ubuntu focal/main amd64 python3-requests-unixsocket all 0.2.0-2 [7,272 B]                                   
Get:543 http://192.168.1.8/ubuntu focal-updates/main amd64 python3-apport all 2.20.11-0ubuntu27.16 [84.9 kB]                           
Get:544 http://192.168.1.8/ubuntu focal-updates/main amd64 apport all 2.20.11-0ubuntu27.16 [129 kB]                                    
Get:545 http://192.168.1.8/ubuntu focal/main amd64 libfl2 amd64 2.6.4-6.2 [11.5 kB]                                                    
Get:546 http://192.168.1.8/ubuntu focal/main amd64 at amd64 3.1.23-1ubuntu1 [38.7 kB]                                                  
Get:547 http://192.168.1.8/ubuntu focal/main amd64 gawk amd64 1:5.0.1+dfsg-1 [418 kB]                                                  
Get:548 http://192.168.1.8/ubuntu focal-updates/main amd64 bcache-tools amd64 1.0.8-3ubuntu0.1 [19.5 kB]                               
28% [Waiting for headers]                                                      14 MB/s 20s

The upgrade chugged along for a while, and once complete I had to reboot again, into the new release.

System upgrade is complete.

Restart required 

To complete the upgrade, a system restart is required. 
If you select 'y' the system will be restarted. 

Continue [yN] 

Boom! It worked!

alan@robby:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal

Even better, the python script I had trouble with, which made me need to upgrade in the first place, works!

Alan Pope: Upgrading Ubuntu Server

I have a few old and crusty HP MicroServers in the loft at home. I started out with one when HP did a cashback offer, making them very affordable. Over time I’ve acquired a couple more. One, named colossus is running rsnapshot to provide backups of my other machines. Another, called shirka is a Plex Media Server and the last, robby is a general purpose box running various jobs and reports. All run Ubuntu Server as the OS.

They’re getting a bit long in the tooth now. I should probably consider replacing them before they start failing. Ideally I’d replace all three with something a bit beefier and put everything in containers. I’ve considered that for a while, but haven’t got round to it while they still work fine. But like I say, they’re old; here are the specs:

  • AMD Turion™ II Neo N40L Dual-Core Processor
  • 8GiB - 2x 4GiB DIMM Synchronous 1333 MHz (0.8 ns)
  • 2x500GB - 2x Hitachi HDP725050GLA360 - Ubuntu 18.04 Server
  • 2x2TB - 2x SAMSUNG HD204UI - Storage

What triggered the release upgrade today was that robby has been running a few python scripts for a long while now, and one needed updating. Unfortunately the upstream script needs a newer version of Python than Ubuntu 18.04 ships with. I could have monkeyed around getting the newer python and all the modules, but I figured it’s easier to just update Ubuntu. So I thought “No problem, let’s upgrade!”.

Narrator: It was not “No problem”, as Alan thought.

The upgrade process for Ubuntu Server is basically to run do-release-upgrade and follow the prompts. So that’s what I did. Initially it told me I hadn’t rebooted since the last package update - which is true, as I’ve written before, I’m reboot-averse. So I rebooted, and crossed my fingers that it would come back okay. It’s in the loft, and I didn’t fancy going up there doing remote-hands on a ladder.

please come back, please come back

From 192.168.1.71 icmp_seq=182 Destination Host Unreachable
From 192.168.1.71 icmp_seq=183 Destination Host Unreachable
From 192.168.1.71 icmp_seq=184 Destination Host Unreachable
From 192.168.1.71 icmp_seq=185 Destination Host Unreachable

It came back though, thankfully.

64 bytes from 192.168.1.8: icmp_seq=186 ttl=64 time=2256 ms
64 bytes from 192.168.1.8: icmp_seq=187 ttl=64 time=1240 ms

Phew!

I then re-ran the upgrade tool. The first question I get is more informational. As I’ve mentioned before I run an Ubuntu mirror, actually on this very host, serving other Ubuntu machines on the LAN.

Updating repository information

No valid mirror found 

While scanning your repository information, no mirror entry for the 
upgrade was found. This can happen if you run an internal mirror or 
if the mirror information is out-of-date. 

Do you want to rewrite your 'sources.list' file anyway? If you choose 
'Yes' here, it will update all 'bionic' to 'focal' entries. 
If you select 'No', the upgrade will cancel. 

Continue [yN] 

The /etc/apt/sources.list just points to the local IP address of this machine. Here’s what my sources.list looks like.

alan@robby:~/tmp$ cat /etc/apt/sources.list
deb http://192.168.1.8/ubuntu/ bionic main restricted universe multiverse
deb-src http://192.168.1.8/ubuntu/ bionic main restricted universe multiverse

deb http://192.168.1.8/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://192.168.1.8/ubuntu/ bionic-updates main restricted universe multiverse

deb http://192.168.1.8/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://192.168.1.8/ubuntu/ bionic-backports main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu bionic-security main restricted
deb-src http://security.ubuntu.com/ubuntu bionic-security main restricted
deb http://security.ubuntu.com/ubuntu bionic-security universe
deb-src http://security.ubuntu.com/ubuntu bionic-security universe
deb http://security.ubuntu.com/ubuntu bionic-security multiverse
deb-src http://security.ubuntu.com/ubuntu bionic-security multiverse

When running do-release-upgrade it checks the sources.list and updates it from the old release codename to the new one. It has figured out that I’m using a “non-standard” mirror, but offers the option to just ninja the codename, which it does fine.
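
If you ever wanted to make the same change by hand (not needed here, since the upgrader offers to do it for you), it's essentially just a global codename swap, something like:

sudo sed -i 's/bionic/focal/g' /etc/apt/sources.list

That catches the bionic-updates and bionic-backports entries too, which is what you want.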

Next warning is that the upgrade will disable some additional repositories which I’d enabled. I think I only really used the Syncthing repo which I can easily re-enable later.

This whole disabling of third party sources is a good thing anyway, as Ubuntu Server (and desktop) upgrades are never tested with them enabled, so the outcome can be unpredictable.


Third party sources disabled 

Some third party entries in your sources.list were disabled. You can 
re-enable them after the upgrade with the 'software-properties' tool 
or your package manager. 

To continue please press [ENTER]

Then it failed. As part of the pre-upgrade checks, the tool found out that I don’t have much space in /boot.


Not enough free disk space 

The upgrade has aborted. The upgrade needs a total of 140 M free 
space on disk '/boot'. Please free at least an additional 2,986 k of 
disk space on '/boot'. You can remove old kernels using 'sudo apt 
autoremove' and you could also set COMPRESS=xz in 
/etc/initramfs-tools/initramfs.conf to reduce the size of your 
initramfs. 

This is likely my fault. When I first installed Ubuntu on this box in August 2017, I used the Ubuntu Server 16.04.2 LTS ISO image on a USB key. I manually configured two 500GB disks in an mdraid mirror /dev/md0 for /dev/sdb1 and /dev/sda2 for the root partition, and /dev/sda1 unmirrored for /boot. There was likely some logic to this in my head at the time.

Unfortunately I only made the /dev/sda1 partition as ~226MB, and the rest for /dev/md0 RAID 1 array. The system has been running fine for nearly four years, and has been upgraded from 16.04 to 18.04 in the meantime.

alan@robby:~$ df -h /boot
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       226M   79M  132M  38% /boot

But sadly there’s not quite enough space to upgrade to 20.04. Maybe the suggestion to apt autoremove will get rid of some cruft.

alan@robby:~$ sudo apt autoremove
Reading package lists... Done
Building dependency tree       
Reading state information... Done
0 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.

Nope. The other suggestion to enable xz compression in /etc/initramfs-tools/initramfs.conf seems neat. Not sure it would actually be beneficial enough though. So let’s take the current initrd and do a quick test. I took the initrd image, un-gzipped it then xz’ed it. It was 58MB when gzipped, 165MB uncompressed and only 35MB when xz compressed.
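
Something along these lines, working on a copy under /tmp rather than touching the real initrd (the exact filename depends on the running kernel, and if your initrd has an uncompressed early-microcode cpio prepended then zcat will complain and this naive test won't work as-is):

cp /boot/initrd.img-$(uname -r) /tmp/initrd.gz
cd /tmp
# decompress the gzipped image, then recompress the same data with xz
zcat initrd.gz > initrd.plain
xz -9 --keep initrd.plain
ls -lh initrd.gz initrd.plain initrd.plain.xz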

That would give me an extra 23MB of space. I had 132MB and the installer suggested I need 140MB free, so I think that could work! However, it’s late in the day, and I’m not futzing around with that at 11pm, I need my beauty sleep.

So I’ll quit the upgrade, which resets everything back to how it was before I started.

Restoring original system state

Aborting
Reading package lists... Done    
Building dependency tree          
Reading state information... Done
=== Command detached from window (Tue Mar 16 22:24:31 2021) ===
=== Command terminated with exit status 1 (Tue Mar 16 22:24:41 2021) ===

I’ll revisit this another day! But as I said at the start, I really should replace these machines with one big one and containerise all the things. Not sure what I would replace it with though. Thoughts on that most welcome. No, I don’t want a 43U rack in my loft.

Alan Pope: Ubuntu Wiki Reboot

It’s time to replace the Ubuntu Wiki. In fact it was probably time to replace it a few years ago, but we are where we are. It should be a reliable and useful resource for the Ubuntu community. It’s failing at that. We have failed here.

Aside: There are actually multiple wikis in use in the Ubuntu project. The primary one is wiki.ubuntu.com, which has been in use since forever (in Ubuntu terms). It’s the main topic of this post, but the others are certainly in need of some love too.

Most pages are meeting records, specifications, design & technical documents or team and personal pages. A lot of the pages are valuable to someone. I don’t have access to data on how often pages are visited, but the RecentChanges page shows how often they’re edited. The wiki contains somewhere around eighty-six thousand pages, and some of those get edited on most days.

Over the years a few people (including myself) have looked at what it might take to update the wiki. However, time and motivation was lacking, so everything stayed the same. The Wiki is running MoinMoin 1.9.8 (last I checked) with some tweaks.

Problems

There’s a bunch of problems, but I’ll highlight just a few here.

Performance

The wiki is tremendously slow. Logging in, editing pages, previewing, even viewing existing pages can be sluggish and thus frustrating. If a very simple edit, preview and commit takes many minutes (which it can), new contributors are not incentivised to continue or come back.

In the modern world, the GitHub generation is familiar with logging in, clicking an icon and starting to edit. The Ubuntu wiki performance is making that kind of workflow almost impossible.

Cruft

The wiki contains a wealth of historical knowledge about the project. Much of it is no longer of interest to most people. Perhaps some of that could be removed. Who wants to stumble on a wireless debugging page from 2006 mentioning processes we don’t even use anymore? Do we really need to keep every blueprint document for projects that never took off?

Cleaning up pages on the wiki is a painful process thanks to the performance problems.

Spam

There are no effective anti-spam measures. As a result we’ve had people and bots create accounts and ruin pages, adding spammy links to dodgy sites. To mitigate this we added an ACL to only allow page edits from members of selected groups.

Some developers, all Ubuntu Members and Canonical employees & a few other select groups are granted access. In addition an “Ubuntu Wiki Editors” group was added, which anyone not in the above groups must join in order to gain access.

This leads to the next problem.

Access

A human review is required for anyone joining the Ubuntu Wiki Editors group on Launchpad, which is opaque, time consuming, and acts as a gatekeeper for genuine community members or new contributors who want to do a ‘drive-by’ edit.

The time consuming part is for someone to evaluate whether a new account is actually a human with good intent, a person with bad intentions, or a bot. For a brand new account requesting access, it’s somewhat hard to determine whether the person is a good or bad actor.

It’s not even obvious how you get access to edit the wiki. It’s almost like the instructions are on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.

Effect

The wiki doesn’t behave like a real wiki, but more like a read-only website, which requires significant effort to update. It’s not welcoming.

People stopped editing the wiki. It was too hard and too slow. Documentation became outdated. Meeting records stopped being kept. People stopped contributing.

I’m not saying the problems with the wiki single handedly caused the downfall of civilisation, but I’m also not saying it didn’t contribute to it!

Considerations

Some questions we should consider.

  • Should a new wiki be made? Or should we update the platform of the one we have? Throw hardware at it?
  • Should it be scorched earth “Start from scratch”, or should it contain the exact same content on a new, better performing platform?
  • If content is kept, how much? Should only the N “most visited” and M “most edited” pages be retained?
  • Should the existing wiki be frozen in aspic, uneditable, with a brand new start made on a fresh wiki?

Given Canonical would likely end up hosting whatever comes next, they should be heavily involved in the process of selecting and provisioning the new system. This would need to be done alongside the other work the IS team already do, and would clearly be prioritised accordingly.

My thoughts

We need to do something about this. Inaction is hurting us.

The current Wiki uses Ubuntu SSO for authentication, and whatever is used next should ideally be capable of also using this.

I’m in two minds on the best way forward here.

Part of me thinks we should deploy a brand new, fresh instance of MediaWiki (for example) and freeze the existing wiki. New pages would be created on the new wiki. Perhaps an easy tool to migrate existing pages one-by-one could be made. This might, however, lead to fragmentation, with some evergreen pages left on the ‘old’ wiki and little motivation to re-create them on the new one.

The other part of me thinks we should import everything from the old wiki to a new one, preserving structure, edit histories and attribution. All pages should redirect to their correct counterpart URLs. After confirming everything moved, shut the old wiki down. Then, on a new, performant platform, users could be confident in removing pages which are no longer important, because the wiki would perform well enough to even be able to delete pages.

Then part of me thinks a hybrid approach is needed. Start a fresh new wiki, but only import pages of importance. How “importance” is measured is up for discussion, of course. But grabbing design documents, personal pages, team pages and actively edited and visited pages might be a good start. Perhaps a tarball of the old site could be made available, with personally identifying information removed, so anyone could cherry pick old pages and revive them.

Then there’s the debate about what to replace MoinMoin with, or indeed whether to replace it at all. My gut feel is that the world standardized, for better or worse, on MediaWiki. It’s familiar enough from multiple Linux distributions, and the odd other wiki out there (hello Wikipedia!), that people know the syntax, page layout and tools.

Then the defeatist in me says that maybe we should just eject the wiki into the sun and use wiki pages in Discourse instead? I don’t know.

But maybe there’s something better, another option that I’m not aware of yet?

Whatever happens, this feels like quite the project. I’d love for us to bite the bullet and get this done though. The current wiki is, frankly, embarrassing.

Chris Wallace: Tree canopy and Google historical imagery

Historic imagery on Google Earth now covers a period of about 20 years, a useful period over which...

Alan Pope: Linux Application Summit: Call For Papers

The last event I went to before The Event was Linux Application Summit (LAS) in Barcelona, Spain back in November 2019! Time flies.

LAS is a community organised event, sponsored and supported by the GNOME and KDE projects. The conference is “designed to accelerate the growth of the Linux application ecosystem by bringing together everyone involved in creating a great Linux application user experience.”.

In November 2020, the LAS team organised a virtual version of the event. The recorded sessions from both events can be found over on the Linux Application Summit YouTube channel.

The next LAS will be held online from May 13th to 15th, 2021!

LAS

This year LAS is a 3-day single-track virtual event. The event includes talks, panels and Q&As on a range of topics appropriate to Linux users, developers and other contributors. The LAS team are looking for talks from application developers, designers, product managers, user experience specialists, community leaders, academics, and anyone who is interested in the state of Linux application design and development!

There’s a Call for Papers open which closes a week from now, on March 22nd 2021. The team are looking for regular talks up to 40 minutes and lightning talks around 10 minutes in duration. The topics are Ecosystem Growth, Platform Diversity, Innovation, People & Communications and Legal & Licensing.

I wanted to raise awareness of the event in general, and the CfP specifically, especially given the impending deadline. If you’re in the Linux app development ecosystem, and feel you have something interesting to contribute, please do submit an abstract!

Alan Pope: Book Review: We Are Legion (We Are Bob)

I recently reviewed Split Second (Split Second Book 1) (affiliate link) by Douglas E. Richards. I’d not read any Douglas E. Richards books before, so it was very helpful when readers of my humble blog recommended that title, along with further books from other authors.

We Are Legion (We Are Bob) (affiliate link) by Dennis E. Taylor was highly recommended, and delivered. We Are Legion (We Are Bob) was published back in 2016, so as usual I’m very late to the party. Again, I grabbed this via Audible and listened to the nearly 10-hour futuristic space-based novel on walks and while cooking.

The premise is initially straightforward. A rich tech mogul, Bob Johansson, has paid for his head to be preserved in the event of his demise while planning his next life steps. He’s subsequently hit while crossing the street and wakes up over a century later.

Much like Murphy in Robocop, he has to come to terms with his surprising second life. Unlike Murphy, he’s not back in a human-shaped body, but running as software on a computer system, somewhere else in the building.

After a short while getting used to this new existence and his capabilities, we’re thrust into space as Bob is in control of one of a number of competing ships looking for a new home for human life. The rest of the book deals with Bob’s main mission, side projects and his own new-found skills as a space-adventuring, high-speed, irreverent computer, capable of 3D printing almost anything.

I really enjoyed We Are Legion (We Are Bob). It was sometimes challenging to keep up as the story darted between incarnations of “Bob” scattered across galaxies. But I put that down to not listening in one contiguous lump of time. The characterisation of the members of the “Bobiverse” was well done by Ray Porter though, which helped the listener switch context throughout.

The book certainly kept my interest, and made me keen to pick up further titles in the “Bobiverse” series. Dennis E. Taylor does well to tell an engaging story littered with plausible future tech, and believable characters and scenarios.

I think I’m becoming a bit of a fan of Taylor’s work.

We Are Legion (We Are Bob) by Dennis E. Taylor - ★★★★★ 5/5 stars

Jon Spriggs: Might Amateur Radio be a hobby for you?

If you’re already interested in electronics or computer hardware, you might also be interested in Amateur (or “Ham”) Radio 📻.

Amateur Radio operators can transmit on radio frequencies across short distances (in your local village, town or city), medium distances (from one city to the next), long distances (from one country to another) or even very long distances (to the International Space Station, or bouncing signals off the moon, meteors or asteroids). You can send Morse Code, computer data, “normal” voice signals, or even TV style signals.

There are competitions, to see who can make the most contacts on various bands, or in specific operating conditions. You can exchange contact (QSO) cards with operators in other countries, or you may train for, and later support authorities during emergency situations. There are also radio clubs, rallies and there’s lobby groups for operators to preserve and extend transmission rights for operators. Oh, and you might even talk to someone in your town when you can’t speak to them face-to-face 😀!

How do you become an Amateur Radio Operator?

To be an Amateur Radio operator you are issued a license from your local licensing body (OFCOM in the UK, FCC in the US, etc) which includes an internationally recognised callsign. You don’t have to know Morse Code (although, you may wish to – it’s a particularly effective mode for long-distance communications) – you don’t even have to talk (there are computer encoded transmissions), but the range and breadth of options for operation are quite large and well worth a poke at, particularly if you have an outgoing personality, or are just interested in how radios work!

What do you have to do to get issued a license?

To become a licensed Radio Amateur in the UK you must first pass the “Foundation” exam to prove you have knowledge of basic technologies and conditions of your license. You can then buy and operate radio equipment. If you want to create your own equipment, you need to at least pass the “Intermediate” exam (with a practical assessment, where you must build a circuit, and another paper exam on your knowledge of technologies and conditions). Once you have passed the “Full” exam, you can then operate on a wider range of bands, modes and power levels. You may wish to take several levels of these exams at once; for example, if you’re particularly confident and competent with electronics already, you may sit both the Foundation and Intermediate exams at the same time.

Other countries have similar exam levels and stages, but usually with different titles, rules and restrictions.

Want to know more?

73 (that means “Best Wishes”), Jon G7VRI

Featured image is “Radios” by “Matt Gibson” on Flickr and is released under a CC-BY license.

Jon Spriggs: Calling CQ /P without much luck

For the first time in quite a while, I went out for a walk around my town of Glossop (IO93ak) by myself, with just my little Baofeng UV-5R on Saturday night. I was walking for over 1h, at various elevations and was calling CQ on 2m and 70cm, as well as trying to open my local repeaters (GB3WP, GB3PZ, GB3MN, GB3MR) with no luck (of course, the fact that GB3WP was offline, and I didn’t know it, didn’t help!)

I’ve resigned myself to the fact that the radio is probably dead (which, at ~8 years old, isn’t going too badly!). I’m considering replacing it with an RD-5R, which supports DMR, although there’s only one DMR repeater near here, so I don’t know whether it’s worth getting that, or instead just getting a new UV-5RTP (affiliate)?

Amateur Radio doesn’t have a high Partner Acceptance Factor in my house (she just can’t understand why I’m interested in it), so I can’t really put up an antenna outdoors, nor can I run a mobile rig on the bench in my shared office… I’m half tempted to ditch actual RF operations, and just try using Echolink (or DroidStar) or similar from my phone when I’m out and about.

Andy Smith: grub-install: error: embedding is not possible, but this is required for RAID and LVM install

The Initial Problem

The recent security update of the GRUB bootloader did not want to install on my fileserver at home:

$ sudo apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  grub-common grub-pc grub-pc-bin grub2-common
4 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,067 kB of archives.
After this operation, 72.7 kB of additional disk space will be used.
Do you want to continue? [Y/n]
…
Setting up grub-pc (2.02+dfsg1-20+deb10u4) ...
Installing for i386-pc platform.
grub-install: warning: your core.img is unusually large.  It won't fit in the embedding area.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.
Installing for i386-pc platform.
grub-install: warning: your core.img is unusually large.  It won't fit in the embedding area.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.
Installing for i386-pc platform.
grub-install: warning: your core.img is unusually large.  It won't fit in the embedding area.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.
Installing for i386-pc platform.
grub-install: warning: your core.img is unusually large.  It won't fit in the embedding area.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.

Four identical error messages, because this server has four drives upon which the operating system is installed, and I’d decided to do a four way RAID-1 of a small first partition to make up /boot. This error is coming from grub-install.

Ancient History

This system came to life in 2006, so it’s 15 years old. It’s always been Debian stable, so right now it runs Debian buster and during those 15 years it’s been transplanted into several different iterations of hardware.

Choices were made in 2006 that were reasonable for 2006, but it’s not 2006 now. Some of these choices are now causing problems.

Aside: four way RAID-1 might seem excessive, but we’re only talking about the small /boot partition. Back in 2006 I chose a ~256M one so if I did the minimal thing of only having a RAID-1 pair I’d have 2x 256M spare on the two other drives, which isn’t very useful. I’d honestly rather have all four system drives with the same partition table and there’s hardly ever writes to /boot anyway.

Here’s what the identical partition tables of the drives /dev/sd[abcd] look like:

$ sudo fdisk -u -l /dev/sda
Disk /dev/sda: 298.1 GiB, 320069031424 bytes, 625134827 sectors
Disk model: ST3320620AS     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot   Start       End   Sectors  Size Id Type
/dev/sda1  *         63    514079    514017  251M fd Linux raid autodetect
/dev/sda2        514080   6393869   5879790  2.8G fd Linux raid autodetect
/dev/sda3       6393870 625121279 618727410  295G fd Linux raid autodetect

Note that the first partition starts at sector 63, 32,256 bytes into the disk. Modern partition tools tend to start partitions at sector 2,048 (1,024KiB in), but this was acceptable in 2006 for me and worked up until a few days ago.

Those four partitions /dev/sd[abcd]1 make up an mdadm RAID-1 with metadata version 0.90. This was purposefully chosen because at the time of install GRUB did not have RAID support. This metadata version lives at the end of the member device so anything that just reads the device can pretend it’s an ext2 filesystem. That’s what people did many years ago to boot off of software RAID.
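
You can check which superblock format an existing member device is using with mdadm itself:

$ sudo mdadm --examine /dev/sda1 | grep -i version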

What’s Gone Wrong?

The last successful update of grub-pc seems to have been done on 7 February 2021:

$ ls -la /boot/grub/i386-pc/core.img
-rw-r--r-- 1 root root 31082 Feb  7 17:19 /boot/grub/i386-pc/core.img

I’ve got 62 sectors available for the core.img so that’s 31,744 bytes – just 662 bytes more than is required.
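
The arithmetic, if you want to check the same thing elsewhere (sector 0 holds the MBR, so the embedding gap is everything from sector 1 up to the start of the first partition):

$ echo $(( (63 - 1) * 512 ))                 # embedding gap in bytes
31744
$ stat -c %s /boot/grub/i386-pc/core.img     # current core.img size in bytes
31082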

The update of grub-pc appears to be detecting that my /boot partition is on a software RAID and is now including MD RAID support even though I don’t strictly require it. This makes the core.img larger than the space I have available for it.

I don’t think it is great that such a major change has been introduced as a security update, and it doesn’t seem like there is any easy way to tell it not to include the MD RAID support, but I’m sure everyone is doing their best here and it’s more important to get the security update out.

Possible Fixes

So, how to fix? It seems to me the choices are:

  1. Ignore the problem and stay on the older grub-pc
  2. Create a core.img with only the modules I need
  3. Rebuild my /boot partition

Option #1 is okay short term, especially if you don’t use Secure Boot as that’s what the security update was about.

Option #2 doesn’t seem that feasible as I can’t find a way to influence how Debian’s upgrade process calls grub-install. I don’t want that to become a manual process.

Option #3 seems like the easiest thing to do, as shaving ~1MiB off the size of my /boot isn’t going to cause me any issues.

Rebuilding My /boot

Take a backup

/boot is only relatively small so it seemed easiest just to tar it up ready to put it back later.

$ sudo tar -C /boot -cvf ~/boot.tar .

I then sent that tar file off to another machine as well, just in case the worst should happen.

Unmount /boot and stop the RAID array that it’s on

I’ve already checked in /etc/fstab that /boot is on /dev/md0.

$ sudo umount /boot
$ sudo mdadm --stop md0         
mdadm: stopped md0

At this point I would also recommend doing a wipefs -a on each of the partitions in order to remove the MD superblocks. I didn’t and it caused me a slight problem later as we shall see.
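
For the record that would have been something like the following, run after stopping the array (I didn’t do it at the time, which is what bit me later):

$ sudo wipefs -a /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1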

Delete and recreate first partition on each drive

I chose to use parted, but it should be doable with fdisk or sfdisk or whatever you prefer.

I know from the fdisk output way above that the new partition needs to start at sector 2048 and end at sector 514,079.

$ sudo parted /dev/sda                                                             
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) rm 1
(parted) mkpart primary ext4 2048 514079s
(parted) set 1 raid on
(parted) set 1 boot on
(parted) p
Model: ATA ST3320620AS (scsi)
Disk /dev/sda: 625134827s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start     End         Size        Type     File system  Flags
 1      2048s     514079s     512032s     primary  ext4         boot, raid, lba
 2      514080s   6393869s    5879790s    primary               raid
 3      6393870s  625121279s  618727410s  primary               raid

(parted) q
Information: You may need to update /etc/fstab.

Do that for each drive in turn. When I got to /dev/sdd, this happened:

Error: Partition(s) 1 on /dev/sdd have been written, but we have been unable to
inform the kernel of the change, probably because it/they are in use.  As a result,
the old partition(s) will remain in use.  You should reboot now before making further changes.
Ignore/Cancel?

The reason for this seems to be that something has decided that there is still a RAID signature on /dev/sdd1 and so it will try to incrementally assemble the RAID-1 automatically in the background. This is why I recommend a wipefs of each member device.

To get out of this situation without rebooting I needed to repeat my mdadm --stop /dev/md0 command and then do a wipefs -a /dev/sdd1. I was then able to partition it with parted.

Create md0 array again

I’m going to stick with metadata format 0.90 for this one even though it may not be strictly necessary.

$ sudo mdadm --create /dev/md0 \
             --metadata 0.9 \
             --level=1 \
             --raid-devices=4 \
             /dev/sd[abcd]1
mdadm: array /dev/md0 started.

Again, if you did not do a wipefs earlier then mdadm will complain that these devices already have a RAID array on them and ask for confirmation.

Get the Array UUID

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 0.90
     Creation Time : Sat Mar  6 03:20:10 2021
        Raid Level : raid1
        Array Size : 255936 (249.94 MiB 262.08 MB)
     Used Dev Size : 255936 (249.94 MiB 262.08 MB)
      Raid Devices : 4
     Total Devices : 4
   Preferred Minor : 0
       Persistence : Superblock is persistent

       Update Time : Sat Mar  6 03:20:16 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              UUID : e05aa2fc:91023169:da7eb873:22131b12 (local to host specialbrew.localnet)
            Events : 0.18
                                                                    
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1

Change your /etc/mdadm/mdadm.conf for the updated UUID of md0:

$ grep md0 /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=e05aa2fc:91023169:da7eb873:22131b12
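
If you’d rather not edit the UUID in by hand, mdadm can print a suitable ARRAY line for the new array, although its output format differs slightly from my existing entry:

$ sudo mdadm --detail --scan | grep md0
ARRAY /dev/md0 metadata=0.90 UUID=e05aa2fc:91023169:da7eb873:22131b12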

Make a new filesystem on /dev/md0

$ sudo mkfs.ext4 -m0 -L boot /dev/md0
mke2fs 1.44.5 (15-Dec-2018)
Creating filesystem with 255936 1k blocks and 64000 inodes
Filesystem UUID: fdc611f2-e82a-4877-91d3-0f5f8a5dd31d
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

My /etc/fstab didn’t need a change because it mounted by device name, i.e. /dev/md0, but if yours uses UUID or label then you’ll need to update that now, too.
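
If yours does mount by UUID or label then blkid will show the new values (output roughly like this; the UUID matches the one mkfs.ext4 printed above):

$ sudo blkid /dev/md0
/dev/md0: LABEL="boot" UUID="fdc611f2-e82a-4877-91d3-0f5f8a5dd31d" TYPE="ext4"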

Mount it and put your files back

$ sudo mount /boot
$ sudo tar -C /boot -xvf ~/boot.tar

Reinstall grub-pc

$ sudo apt reinstall grub-pc
…
Setting up grub-pc (2.02+dfsg1-20+deb10u4) ...
Installing for i386-pc platform.
Installation finished. No error reported.
Installing for i386-pc platform.
Installation finished. No error reported.
Installing for i386-pc platform.
Installation finished. No error reported.
Installing for i386-pc platform.
Installation finished. No error reported.

Reboot

You probably should reboot now to make sure it all works when you have time to fix any problems, as opposed to risking issues when you least expect it.

$ uprecords 
     #               Uptime | System                                     Boot up
----------------------------+---------------------------------------------------
     1   392 days, 16:45:55 | Linux 4.7.0               Thu Jun 14 16:13:52 2018
     2   325 days, 03:20:18 | Linux 3.16.0-0.bpo.4-amd  Wed Apr  1 14:43:32 2015
->   3   287 days, 16:03:12 | Linux 4.19.0-9-amd64      Fri May 22 12:33:27 2020
     4   257 days, 07:31:42 | Linux 4.19.0-6-amd64      Sun Sep  8 05:00:38 2019
     5   246 days, 14:45:10 | Linux 4.7.0               Sat Aug  6 06:27:52 2016
     6   165 days, 01:24:22 | Linux 4.5.0-rc4-specialb  Sat Feb 20 18:18:47 2016
     7   131 days, 18:27:51 | Linux 3.16.0              Tue Sep 16 08:01:05 2014
     8    89 days, 16:01:40 | Linux 4.7.0               Fri May 26 18:28:40 2017
     9    85 days, 17:33:51 | Linux 4.7.0               Mon Feb 19 17:17:39 2018
    10    63 days, 18:57:12 | Linux 3.16.0-0.bpo.4-amd  Mon Jan 26 02:33:47 2015
----------------------------+---------------------------------------------------
1up in    37 days, 11:17:07 | at                        Mon Apr 12 15:53:46 2021
no1 in   105 days, 00:42:44 | at                        Sat Jun 19 05:19:23 2021
    up  2362 days, 06:33:25 | since                     Tue Sep 16 08:01:05 2014
  down     0 days, 14:02:09 | since                     Tue Sep 16 08:01:05 2014
   %up               99.975 | since                     Tue Sep 16 08:01:05 2014

My Kingdom For 7 Bytes

My new core.img is 7 bytes too big to fit before my original /boot:

$ ls -la /boot/grub/i386-pc/core.img
-rw-r--r-- 1 root root 31751 Mar  6 03:24 /boot/grub/i386-pc/core.img

Jon Fautley: Using the Grafana Cloud Agent with Amazon Managed Prometheus, across multiple AWS accounts

Observability is all the rage these days, and the process of collecting metrics is getting easier. Now, the big(ger) players are getting in on the action, with Amazon releasing a Managed Prometheus offering and Grafana now providing a simplified “all-in-one” monitoring agent. This is a quick guide to show how you can couple these two together, on individual hosts, and incorporating cross-account access control. The Grafana Cloud Agent Grafana Labs have taken (some of) the best bits of the Prometheus monitoring stack and created a unified deployment that wraps the individual moving parts up into a single binary.

Jon Fautley: Posts

Chris Wallace: Slicing up polyhedra with Hamilton

My explorations of Hamiltonian circuits on the face of a cube and other polyhedra continues.A...

Jon Spriggs: Adding MITM (or “Trusted Certificate Authorities”) proxy certificates for Linux and Linux-like Environments

In some work environments, you may find that a “Man In The Middle” (also known as MITM) proxy may have been configured to inspect HTTPS traffic. If you work in a predominantly Windows based environment, you may have had some TLS certificates deployed to your computer when you logged in, or by group policy.

I’ve previously mentioned that if you’re using Firefox on your work machines where you’ve had these certificates pushed to your machine, then you’ll need to enable a configuration flag to make those work under Firefox (“security.enterprise_roots.enabled“), but this is talking about Linux (like Ubuntu, Fedora, CentOS, etc.) and Linux-like environments (like WSL, MSYS2)

Start with Windows

From your web browser of choice, visit any HTTPS web page that you know will be inspected by your proxy.

If you’re using Mozilla Firefox

In Firefox, click on this part of the address bar and click on the right arrow next to “Connection secure”:

Clicking on the Padlock and then clicking on the Right arrow will take you to the “Connection Security” screen.
Certification Root obscured, but this is where we prove we have a MITM certificate.

Click on “More Information” to take you to the “Page info” screen

More obscured details, but click on “View Certificate”

In recent versions of Firefox, clicking on “View Certificate” takes you to a new page which looks like this:

Mammoth amounts of obscuring here! The chain runs from left to right, with the right-most blob being the Root Certificate

Click on the right-most tab of this screen, and navigate down to where it says “Miscellaneous”. Click on the link to download the “PEM (cert)”.

The details on the Certificate Authority (highly obscured!), but here is where we get our “Root” Certificate for this proxy.

Save this certificate somewhere sensible, we’ll need it in a bit!

Note that if you’ve got multiple proxies (perhaps for different network paths, or perhaps for a cloud proxy and an on-premises proxy) you might need to force yourself into several situations to get these.

If you’re using Google Chrome / Microsoft Edge

In Chrome or Edge, click on the same area, and select “Certificate”:

This will take you to a screen listing the “Certification Path”. This is the chain of trust from the “Root” certificate for the proxy down to the certificate they issue so I can visit my website:

This screen shows the chain of trust from the top of the chain (the “Root” certificate) to the bottom (the certificate they issued so I could visit this website)

Click on the topmost line of the list, and then click “View Certificate” to see the root certificate. Click on “Details”:

The (obscured) details for the root CA.

Click on “Copy to File” to open the “Certificate Export Wizard”:

In the Certificate Export Wizard, click “Next”
Select “Base-64 encoded X.509 (.CER)” and click “Next”
Click on the “Browse…” button to select a path.
Name the file something sensible, and put the file somewhere you’ll find it shortly. Click “Save”, then click “Next”.

Once you’ve saved this file, rename it to have the extension .pem. You may need to do this from a command line!

Ubuntu or Debian based systems as an OS, or as a WSL environment

As root, copy the proxy’s root key into /usr/local/share/ca-certificates/<your_proxy_name>.crt (for example, /usr/local/share/ca-certificates/proxy.my.corp.crt) and then run update-ca-certificates to update the system-wide certificate store.
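
For example, assuming the certificate you saved earlier is proxy.my.corp.pem in the current directory:

sudo cp proxy.my.corp.pem /usr/local/share/ca-certificates/proxy.my.corp.crt
sudo update-ca-certificates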

RHEL/CentOS as an OS, or as a WSL environment

As root, copy the proxy’s root key into /etc/pki/ca-trust/source/anchors/<your_proxy_name>.pem (for example, /etc/pki/ca-trust/source/anchors/proxy.my.corp.pem) and then run update-ca-trust to update the system-wide certificate store.
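
Again assuming the saved certificate is proxy.my.corp.pem in the current directory:

sudo cp proxy.my.corp.pem /etc/pki/ca-trust/source/anchors/proxy.my.corp.pem
sudo update-ca-trust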

MSYS2 or the Ruby Installer

Open the path to your MSYS2 environment (e.g. C:\Ruby30-x64\msys64) using your file manager (Explorer) and run msys2.exe. Then paste the proxy’s root key into the etc/pki/ca-trust/source/anchors subdirectory, naming it <your_proxy_name>.pem. In the MSYS2 window, run update-ca-trust to update the environment-wide certificate store.
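
From inside the MSYS2 shell that might look something like this (the Downloads path is just an example):

cp /c/Users/you/Downloads/proxy.my.corp.pem /etc/pki/ca-trust/source/anchors/proxy.my.corp.pem
update-ca-trust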

Ruby Installer

If you’ve obtained the Ruby Installer from https://rubyinstaller.org/ and installed it from there, assuming you accepted the default path of C:\Ruby<VERSION>-x64 (e.g. C:\Ruby30-x64) you need to perform the above step (running update-ca-trust) and then copy the file from C:\Ruby30-x64\msys64\etc\pki\ca-trust\extracted\pem\tls-ca-bundle.pem to C:\Ruby30-x64\ssl\cert.pem

Sources:

Featured image is “Honey pots” by “Nicholas” on Flickr and is released under a CC-BY license.

Andy Smith: Just had my COVID-19 first vaccination (Pfizer/BioNTech)

Just got back from having my first COVID-19 vaccination. Started queueing at 10:40, pre-screening questions at 10:50, all done by 10:53 then I poked at my phone for 15 minutes while waiting to check I wouldn’t keel over from anaphylactic shock (I didn’t).

I was first notified that I should book an appointment in the form of a text message from sender “GPSurgery” on Monday 22nd February 2021:

Dear MR SMITH,

You have been invited to book your COVID-19 vaccinations.

Please click on the link to book: https://accurx.thirdparty.nhs.uk/…
[Name of My GP Surgery]

The web site presented me with a wide variety of dates and times, the earliest being today, 3 days later, so I chose that. My booking was then confirmed by another text message, and another reminder message was sent yesterday. I assume these text messages were sent by some central service on behalf of my GP whose role was probably just submitting my details.

A very smooth process, a 15 minute walk from my home, and I’m hearing the same about the rest of the country too.

Watching social media mentions from others saying they’ve had their vaccination and also looking at the demographics in the queue and waiting room with me, I’ve been struck by how many people have—like me—been called up for their vaccinations quite early unrelated to their age. I was probably in the bottom third age group in the queue and waiting area: I’m 45 and although most seemed older than me, there were plenty of people around my age and younger there.

It just goes to show how many people in the UK are relying on the NHS for the management of chronic health conditions that may not be obviously apparent to those around them. Which is why we must not let this thing that so many of us rely upon be taken away. I suspect that almost everyone reading either is in a position of relying upon the NHS or has nearest and dearest who do.

The NHS gets a lot of criticism for being a bottomless pit of expenditure that is inefficient and slow to embrace change. Yes, healthcare costs a lot of money especially with our ageing population, but per head we spend a lot less than many other countries: half what the US spends per capita or as a proportion of GDP; our care is universal and our life expectancy is slightly longer. In 2017 the Commonwealth Fund rated the NHS #1 in a comparison of 11 countries.

So the narrative that the NHS is poor value for money is not correct. We are getting a good financial deal. We don’t necessarily need to make it perform better, financially, although there will always be room for improvement. The NHS has a funding crisis because the government wants it to have a funding crisis. It is being deliberately starved of funding so that it fails.

The consequences of selling off the NHS will be that many people are excluded from care they need to stay alive or to maintain a tolerable standard of living. As we see with almost every private sector takeover of what were formerly public services, they strip the assets, run below-par services that just about scrape along, and then when there is any kind of downturn or unexpected event they fold and either beg for bailout or just leave the mess in the hands of the government. Either way, taxpayers pay more for less and make a small group of wealthy people even more wealthy.

We are such mugs here in the UK that even other countries have realised that they can bid to take over our public services, provide a low standard of service at a low cost to run, charge a lot to the customer and make a hefty profit. Most of our train operating companies are owned by foreign governments.

The NHS as it is only runs as well as it does because the staff are driven to breaking point with an obscene amount of unpaid overtime and workplace stress.

If you’d like to learn some more about the state of the NHS in the form of an engaging read then I recommend Adam Kay’s book This is Going to Hurt: Secret Diaries of a Junior Doctor. It will make you laugh, it will make you cry and if you’ve a soul it will make you angry. Also it may indelibly sear the phrase “penis degloving injury” into your mind.

Do not accept the premise that the NHS is too expensive.

If the NHS does a poor job (and it sometimes does), understand that underfunding plays a big part.

Privatising any of it will not improve matters in any way, except for a very small number of already wealthy people.

Please think about this when you vote.

Chris Wallace: Paths on Polyhedra

Bob Bosch posted a picture of a rather lovely wooden object.  A groove had been machined around a...

Andy Smith: Intel may need me to sign an NDA before I can know the capacity of one of their SSDs

Apologies for the slightly clickbaity title! I could not resist. While an Intel employee did tell me this, they are obviously wrong.

Still, I found out some interesting things that I was previously unaware of.

I was thinking of purchasing some “3.84TB” Intel D3-S4610 SSDs for work. I already have some “3.84TB” Samsung SM883s so it would be good if the actual byte capacity of the Intel SSDs were at least as much as the Samsung ones, so that they could be used to replace a failed Samsung SSD.

To someone with little tech experience, it would seem that two things which are described as X TB in capacity would be:

  1. Actually X TB in size, where 1TB = 1,000 x 1,000 x 1,000 x 1,000 bytes, using powers of ten SI prefixes. Or;
  2. Actually X TiB in size, where 1TiB = 1,024 x 1,024 x 1,024 x 1,024 bytes, using binary prefixes.

…and there was a period of time where this was mostly correct, in that manufacturers would prefer something like the former case, as it results in larger headline numbers.

The thing is, years ago, manufacturers used to pick a capacity that was at least what was advertised (in powers of 10 figures) but it wasn’t standardised.

If you used those drives in a RAID array then it was possible that a replacement—even from the same manufacturer—could be very slightly smaller. That would give you a bad day as you generally need devices that are all the same size. Larger is okay (you’ll waste some), but smaller won’t work.

So for those of us who like me are old, this is something we’re accustomed to checking, and I still thought it was the case. I wanted to find out the exact byte capacity of this Intel SSD. So I tried to ask Intel, in a live support chat.

Edgar (22/02/2021, 13:50:59): Hello. My name is Edgar and I’ll be helping you today.

Me (22/02/2021, 13:51:36): Hi Edgar, I have a simple request. Please could you tell me the exact byte capacity of a SSD-SSDSC2KG038T801 that is a 3.84TB Intel D3-S4610 SSD

Me (22/02/2021, 13:51:47): I need this information for matching capacities in a RAID set

Edgar (22/02/2021, 13:52:07): Hello, thank you for contacting Intel Technical Support. It is going to be my pleasure to help you.

Edgar (22/02/2021, 13:53:05): Allow me a moment to create a ticket for you.

Edgar (22/02/2021, 13:57:26): We have a calculation to get the decimal drive sectors of an SSD because the information you are asking for most probably is going to need a Non-Disclousre Agreement (NDA)

Yeah, an Intel employee told me that I might need to sign an NDA to know the usable capacity of an SSD. This is obviously nonsense. I don’t know whether they misunderstood and thought I was asking about the raw capacity of the flash chips or what.

Me (22/02/2021, 13:58:15): That seems a bit strange. If I buy this drive I can just plug it in and see the capacity in bytes. But if it’s too small then that is a wasted purchase which would be RMA’d

Edgar (22/02/2021, 14:02:48): It is 7,500,000,000

Edgar (22/02/2021, 14:03:17): Because you take the size of the SSD that is 3.84 in TB, in Byte is 3840000000000

Edgar (22/02/2021, 14:03:47): So we divide 3840000000000 / 512 which is the sector size for a total of 7,500,000,000 Bytes

Me (22/02/2021, 14:05:50): you must mean 7,500,000,000 sectors of 512byte, right?

Edgar (22/02/2021, 14:07:45): That is the total sector size, 512 byte

Edgar (22/02/2021, 14:08:12): So the total sector size of the SSD is 7,500,000,000

Me (22/02/2021, 14:08:26): 7,500,000,000 sectors is only 3,750GB so this seems rather unlikely

The reason why this seemed unlikely to me is that I have never seen an Intel or Samsung SSD that was advertised as X.Y TB capacity that did not have a usable capacity of at least X,Y00,000,000,000 bytes. So I would expect a “3.84TB” device to have at least 3,840,000,000,000 bytes of usable capacity.

Edgar was unable to help me further so the support chat was ended. I decided to ask around online to see if anyone actually had one of these devices running and could tell me the capacity.

Peter Corlett responded to me with:

As per IDEMA LBA1-03, the capacity is 1,000,194,048 bytes per marketing gigabyte plus 10,838,016 bytes. A marketing terabyte is 1000 marketing gigabytes.

3840 * 1000194048 + 10838016 = 3840755982336. Presumably your Samsung disk has that capacity, as should that Intel one you’re eyeing up.

My Samsung ones do! And every other SSD I’ve checked obeys this formula, which explains why things have seemed a lot more standard recently. I think this might have been standardised some time around 2014 / 2015. I can’t tell right now because the IDEMA web site is down!
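
If you want to check a drive yourself, the formula is trivial to wrap up in a throwaway shell function that takes the marketing capacity in GB:

$ idema_bytes() { echo $(( $1 * 1000194048 + 10838016 )); }
$ idema_bytes 3840
3840755982336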

So the interesting and previously unknown to me thing is that storage device sizes are indeed standardised now, albeit not to any sane definition of the units that they use.

What a relief.

Also, sadly, Intel live support can’t be relied upon to know basic facts about Intel products.

Chris Wallace: More Quadrilateral Spiral Tiling

Robert Fathauer tweeted about another form of quadrilateral spiral  tiling.I labelled the...

Chris Wallace: Tree moisture meters

I had an interesting time last week attending the 4th Trees, People and the Built Environment...

BitFolk Wiki: IRC/planet

Created page with "The '''planet''' IRC bot posts notifications about new blog articles into the IRC channel. ==Owner== grifferz is responsible for this one. ==Example== <s..."

New page

The '''planet''' IRC bot posts notifications about new blog articles into the IRC channel.

==Owner==
[[User:Strugglers|grifferz]] is responsible for this one.

==Example==
<syntaxhighlight lang="irc">
23:52:24 -planet:#bitfolk- [popey] Messaging Overload - https://popey.com/blog/2021/02/messaging-overload/
</syntaxhighlight>

==Commands==
None. You can't make it do anything. It just does what it does.

BitFolk Wiki: Planet BitFolk

Other uses: planet IRC bot

← Older revision Revision as of 14:44, 16 February 2021

The ==Other uses== section was updated; the new text reads:

Aside from the static web site being built from the list of feeds in [https://github.com/bitfolk/planet-bitfolk/blob/main/config.ini config.ini] the [[IRC/planet|planet]] IRC bot also gets its feed list from here.

Jon Spriggs: Debian on Docker using Vagrant

I want to use Vagrant-Docker to try standing up some environments. There’s no reasonable justification, it’s just a thing I wanted to do. Normally, I’d go into this long and rambling story about why… but on this occasion, the reason was “Because it’s possible”…

TL;DR?: Get the code from the repo and enjoy.

Installing Docker

On Ubuntu you can install Docker following the instructions on the Docker Install Page, which includes a convenience script (that runs all the commands you need), if you want to use it. Similar instructions for Debian, CentOS and Fedora exist.

On Windows or Mac there are downloads you can get from the Docker Hub. The Windows Version requires WSL2. I don’t have a Mac, so I don’t know what the requirements are there! Installing WSL2 has a whole host of extra steps that I can’t really do justice to. See this Microsoft article for details.

Installing Vagrant

On Debian and Ubuntu you can add the HashiCorp Apt Repo and then install Vagrant, using these commands:

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt install vagrant

There are similar instructions for RHEL, CentOS and Fedora users there too.

Windows and Mac users will have to get the application from the download page.

Creating your Dockerfile

A Dockerfile is a simple text file which has a series of line prefixes which instruct the Docker image processor to add certain instructions to the Docker Image. I found two pages which helped me with what to add for this; “Ansible. Docker. Vagrant. Bringing together” and the git repo “AkihiroSuda/containerized-systemd“.

You see, while a Dockerfile is great at starting single binary files or scripts, it’s not very good at running SystemD… and I needed SystemD to be able to run the SSH service that Vagrant requires, and to also run the scripts and commands I needed for the image I wanted to build…

Sooooo…. here’s the Dockerfile I created:

# Based on https://vtorosyan.github.io/ansible-docker-vagrant/
# and https://github.com/AkihiroSuda/containerized-systemd/

FROM debian:buster AS debian_with_systemd

# This stuff enables SystemD on Debian based systems
STOPSIGNAL SIGRTMIN+3
RUN DEBIAN_FRONTEND=noninteractive apt update && DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends systemd systemd-sysv dbus dbus-user-session
COPY docker-entrypoint.sh /
RUN chmod 755 /docker-entrypoint.sh
ENTRYPOINT [ "/docker-entrypoint.sh" ]
CMD [ "/bin/bash" ]

# This part installs an SSH Server (required for Vagrant)
RUN DEBIAN_FRONTEND=noninteractive apt install -y sudo openssh-server
RUN mkdir /var/run/sshd
#    We enable SSH here, but don't start it with "now" as the build stage doesn't run anything long-lived.
RUN systemctl enable ssh
EXPOSE 22

# This part creates the vagrant user, sets the password to "vagrant", adds the insecure key and sets up password-less sudo.
RUN useradd -G sudo -m -U -s /bin/bash vagrant
#    chpasswd takes a colon delimited list of username/password pairs.
RUN echo 'vagrant:vagrant' | chpasswd
RUN mkdir -m 700 /home/vagrant/.ssh
# This key from https://github.com/hashicorp/vagrant/tree/main/keys. It will be replaced on first run.
RUN echo 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key' > /home/vagrant/.ssh/authorized_keys
RUN chmod 600 /home/vagrant/.ssh/authorized_keys
RUN chown -R vagrant:vagrant /home/vagrant
RUN echo 'vagrant ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers

This Dockerfile calls out to a separate script, called docker-entrypoint.sh, taken verbatim from AkihiroSuda’s repo, so here’s that file:

#!/bin/bash
set -ex
container=docker
export container

if [ $# -eq 0 ]; then
	echo >&2 'ERROR: No command specified. You probably want to run `journalctl -f`, or maybe `bash`?'
	exit 1
fi

if [ ! -t 0 ]; then
	echo >&2 'ERROR: TTY needs to be enabled (`docker run -t ...`).'
	exit 1
fi

env >/etc/docker-entrypoint-env

cat >/etc/systemd/system/docker-entrypoint.target <<EOF
[Unit]
Description=the target for docker-entrypoint.service
Requires=docker-entrypoint.service systemd-logind.service systemd-user-sessions.service
EOF
cat /etc/systemd/system/docker-entrypoint.target

quoted_args="$(printf " %q" "${@}")"
echo "${quoted_args}" >/etc/docker-entrypoint-cmd
cat /etc/docker-entrypoint-cmd

cat >/etc/systemd/system/docker-entrypoint.service <<EOF
[Unit]
Description=docker-entrypoint.service

[Service]
ExecStart=/bin/bash -exc "source /etc/docker-entrypoint-cmd"
# EXIT_STATUS is either an exit code integer or a signal name string, see systemd.exec(5)
ExecStopPost=/bin/bash -ec "if echo \${EXIT_STATUS} | grep [A-Z] > /dev/null; then echo >&2 \"got signal \${EXIT_STATUS}\"; systemctl exit \$(( 128 + \$( kill -l \${EXIT_STATUS} ) )); else systemctl exit \${EXIT_STATUS}; fi"
StandardInput=tty-force
StandardOutput=inherit
StandardError=inherit
WorkingDirectory=$(pwd)
EnvironmentFile=/etc/docker-entrypoint-env

[Install]
WantedBy=multi-user.target
EOF
cat /etc/systemd/system/docker-entrypoint.service

systemctl mask systemd-firstboot.service systemd-udevd.service
systemctl unmask systemd-logind
systemctl enable docker-entrypoint.service

systemd=
if [ -x /lib/systemd/systemd ]; then
	systemd=/lib/systemd/systemd
elif [ -x /usr/lib/systemd/systemd ]; then
	systemd=/usr/lib/systemd/systemd
elif [ -x /sbin/init ]; then
	systemd=/sbin/init
else
	echo >&2 'ERROR: systemd is not installed'
	exit 1
fi
systemd_args="--show-status=false --unit=multi-user.target"
echo "$0: starting $systemd $systemd_args"
exec $systemd $systemd_args

Now, if you were to run this straight in Docker, it will fail, because you must pass certain flags to Docker to get this to run. These flags are:

  • -t : pass a “TTY” to the shell
  • --tmpfs /tmp : Create a temporary filesystem in /tmp
  • --tmpfs /run : Create another temporary filesystem in /run
  • --tmpfs /run/lock : Apparently having a tmpfs in /run isn’t enough – you ALSO need one in /run/lock
  • -v /sys/fs/cgroup:/sys/fs/cgroup:ro : Mount the CGroup kernel configuration values into the container

(I found these flags via a RedHat blog post, and a Podman issue on Github.)

So, how would this look, if you were to try and run it?

docker run -t --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:ro YourImage

Blimey, what a long set of text! Perhaps we could hide that behind something a bit more legible? Enter Vagrant.

Creating your Vagrantfile

Vagrant is an abstraction tool, designed to hide complicated virtualisation scripts into a simple command. In this case, we’re hiding a containerisation script into a simple command.

Like with the Dockerfile, I made extensive use of the two pages I mentioned before, as well as the two pages to get the flags to run this.

# Based on https://vtorosyan.github.io/ansible-docker-vagrant/
# and https://github.com/AkihiroSuda/containerized-systemd/
# and https://developers.redhat.com/blog/2016/09/13/running-systemd-in-a-non-privileged-container/
# with tweaks indicated by https://github.com/containers/podman/issues/3295
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir       = "."
    d.has_ssh         = true
    d.remains_running = false
    d.create_args     = ['--tmpfs', '/tmp', '--tmpfs', '/run', '--tmpfs', '/run/lock', '-v', '/sys/fs/cgroup:/sys/fs/cgroup:ro', '-t']
  end
end

If you create that file, and run vagrant up you’ll get a working Vagrant boot… But if you try and execute any shell scripts, they’ll fail to run, as they aren’t passed in with execute permissions… so I want to use Ansible to execute things, as these don’t require execute permissions on the /vagrant directory (also, the thing I’m building in there requires Ansible… so it’s helpful either way)

Executing Ansible scripts

Ansible still expects to find Python at /usr/bin/python, but current systems don’t create that symlink pointing at /usr/bin/python3, because /usr/bin/python was traditionally a symlink to /usr/bin/python2… I also wanted to add the Ansible PPA to the sources, which is what the Ansible team recommend in their documentation. I’ve done this as part of the Dockerfile as, again, I can’t run scripts from Vagrant. So, here’s the addition I made to the Dockerfile.

FROM debian_with_systemd AS debian_with_systemd_and_ansible
RUN apt install -y gnupg2 lsb-release software-properties-common
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
RUN add-apt-repository "deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main"
RUN apt install -y ansible
# Yes, I know. Trusty? On Debian Buster?? But, that's what the Ansible Docs say!

In the Vagrantfile, I’ve added this block:

config.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "test.yml"
end

And I created a test.yml, which looks like this:

---
- hosts: all
  tasks:
  - debug:
      msg: "Hello from Docker"

Running it

So how does this look on Windows when I run it?

PS C:\Dev\VagrantDockerBuster> vagrant up
==> default: Creating and configuring docker networks...
==> default: Building the container from a Dockerfile...
<SNIP A LOAD OF DOCKER STUFF>
    default: #20 DONE 0.1s
    default:
    default: Image: 190ffdeaeed0b7ed206097e6c1d4b5cc796a428700c9bd3e27eedacce47fb63b
==> default: Creating the container...
    default:   Name: 2021-02-13DockerBusterWithSSH_default_1613469604
    default:  Image: 190ffdeaeed0b7ed206097e6c1d4b5cc796a428700c9bd3e27eedacce47fb63b
    default: Volume: C:/Users/SPRIGGSJ/OneDrive - FUJITSU/Documents/95 My Projects/2021-02-13 Docker Buster With SSH:/vagrant
    default:   Port: 127.0.0.1:2222:22
    default:
    default: Container created: b64ed264d8949b12
==> default: Enabling network interfaces...
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
==> default: Machine booted and ready!
==> default: Running provisioner: ansible_local...
    default: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
[WARNING]: Platform linux on host default is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change this. See https://docs.ansible.com/ansible/2.9/referen
ce_appendices/interpreter_discovery.html for more information.
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "msg": "Hello from Docker"
}

PLAY RECAP *********************************************************************
default                    : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

PS C:\Dev\VagrantDockerBuster>

And on Linux?

Bringing machine 'default' up with 'docker' provider...
==> default: Creating and configuring docker networks...
==> default: Building the container from a Dockerfile...
<SNIP A LOAD OF DOCKER STUFF>
    default: Removing intermediate container e56bed4f7be9
    default:  ---> cef749c205bf
    default: Successfully built cef749c205bf
    default:
    default: Image: cef749c205bf
==> default: Creating the container...
    default:   Name: 2021-02-13DockerBusterWithSSH_default_1613470091
    default:  Image: cef749c205bf
    default: Volume: /home/spriggsj/Projects/2021-02-13 Docker Buster With SSH:/vagrant
    default:   Port: 127.0.0.1:2222:22
    default:
    default: Container created: 3fe46b02d7ad10ab
==> default: Enabling network interfaces...
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Running provisioner: ansible_local...
    default: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
[WARNING]: Platform linux on host default is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change this. See https://docs.ansible.com/ansible/2.9/referen
ce_appendices/interpreter_discovery.html for more information.
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "msg": "Hello from Docker"
}

PLAY RECAP *********************************************************************
default                    : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

So, if you’re crazy and want to do Vagrant using Docker with Debian Buster and Ansible, this is how to do it. I don’t know how much I’m likely to be using this in the future, but if you use it, let me know what you’re doing with it! 😀

Featured image is “Family” by “Ivan” on Flickr and is released under a CC-BY license.

Jon Spriggs: When starting WSL2, you get “The attempted operation is not supported for the type of object referenced.”

Hello, welcome to my personal knowledgebase article.

I think you only get this if you have some tool or service which hooks WinSock to perform content inspection, but if you do, you need to tell WinSock to reject attempts to hook WSL2.

According to this post on the Github WSL Issues list, you need to add a key into your registry, in the path HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WinSock2\Parameters\AppId_Catalog and they mention that the vendor of “proxifier” have released a tool which creates this key. The screen shot in the very next post shows this registry key having been created.

A screenshot of a screenshot of the registry path needed to prevent WinSock from being hooked.

I don’t know if the hex ID of the “AppId_Catalog” path created is relevant, but it was what was in the screenshot, so I copied it, and created this registry export file. Feel free to create your own version of this file, and run it to fix your own issue.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WinSock2\Parameters\AppId_Catalog\0408F7A3]
"AppFullPath"="C:\\Windows\\System32\\wsl.exe"
"PermittedLspCategories"=dword:80000000

As soon as I’d included this registry entry, I was able to access WSL OK again.

Featured image is “Prickily Hooks” by “Derek Gavey” on Flickr and is released under a CC-BY license.

Paul Rayner: Valentines Gift - a Tidy Computer Cupboard

Today, my lovely wife (who is far more practical than me) gave me this as a valentines present (along with a nice new pair of Nano X1 trainers).

This is my nice new home server rack. It’s constructed from the finest pallet wood and repurposed chipboard, and has 8 caster wheels (cheaper than the apple ones) on the bottom.

After three and a half years living in our house, the cupboard under the stairs was a mess of jumbled cables and computer bits. It all worked, but with things balanced on other things, held up by their cables, and three years of dust everywhere it really needed an overhaul. We’ve recently had a new fibre connection go in (yay - 1Gbps at home!), so yet another cable, and yet another box to balance on top of other boxes.

This was the sorry mess that I called a home network this morning:

And thanks to my lovely gift, and some time to rewire everything (make new cables), it now looks like this:

and a close up:

In there I have my server, UPS, NAS, phone system, lighting system, FTTC broadband, Fibre broadband, router, main switch, and a cable going out and round the house into my office. Lovely and neat, and because it’s on wheels, I can pull it out to get round the back :-)

I am very happy with my new setup.

BitFolk Wiki: IRC/Bots

+AgainstHumanity

← Older revision Revision as of 09:24, 14 February 2021
(One intermediate revision by the same user not shown)

Two rows were added to the table of bots (columns: bot, purpose, who's responsible):

|-
| [[IRC/AgainstHumanity|AgainstHumanity]]
| A game of [https://github.com/grifferz/pah-irc Perpetually Against Humanity]
| grifferz
|-
| [[IRC/sluggle|sluggle]]
| Gives metadata about links that are posted
| chrisjrob

BitFolk Wiki: Planet Bitfolk

Strugglers moved page Planet Bitfolk to Planet BitFolk: case silliness

New page

'''Planet BitFolk''' is a blog / feed aggregator that syndicates the blogs and updates of BitFolk, its customers and hangers-on.

== Web site ==
The web site lives at https://planet.bitfolk.com/.

==Adding feeds==
The configuration is [https://github.com/bitfolk/planet-bitfolk managed in GitHub] so please see that repository for information about how to add and remove feeds, etc. Anyone even tenuously related to BitFolk is generally welcome to add their feed.

You can of course also remove your feed if you don't like it being there.

==Other uses==
It only builds the static site at the moment but it might be used for other things later. In particular, there is an [[IRC]] bot that announces new articles and it would be good to have the feed information in only one place.

==Experimental==
This effort is an experiment. Many people say that blogging is dead and hipsters only post their stuff on [[Wikipedia:Information_silo|silos]] that have no aggregation features. If that turns out to be true then this experiment will be abandoned.

==Frequently Asked Questions==
===How often does the site update?===
At the moment, once per hour.

===Why hasn't my post appeared even though it's many hours after posting?===
Assuming your feed does actually appear in [https://github.com/bitfolk/planet-bitfolk/blob/main/config.ini the configuration] and it's really been more than an hour, there could be some problem with your feed. [https://github.com/bitfolk/planet-bitfolk/issues Submit an issue] or ask [https://github.com/grifferz grifferz] or something.

Things that often break feeds:
* Having an article with a bizarre date like 1 January 0001 (Hugo is known to do this for non-blog articles).

===Why is the site giving me security warnings?===
The site's TLS certificate should be valid but we can't stop people embedding non-secure content—like images with an http link—into their own articles. That will give a mixed security warning.

If you see any other sort of security warning please let someone know – a subscribed feed could have been compromised.

===Why is your web design so feeble?===
Because grifferz is a feeble web designer. The [https://github.com/bitfolk/planet-bitfolk/blob/main/theme/asf-bf/default.css default.css] is in the GitHub repository so if you think you can do better please have a go and submit a pull request.

BitFolk Issue Tracker: Xen Shell - Feature #192 (Closed): Add brief exit instructions to footer bar

Done.

Paul Rudkin: Spring Festival Extra Bandwidth

Due to local restrictions on mass gatherings, this year’s Spring Festival Ceremony of my employer will be held online for all employees (> 2000).

To support the peak in bandwidth demand, some of the mobile phone providers have added additional cells in the grounds of our company. They have been testing for the last few days, so in a few hours we will see how they will perform!!

Alun Jones: Midday and Midnight

If you’ve been following, I’ve become obsessed with the timekeeping on my solar panel monitoring system. It doesn’t have a real-time-clock and tries to estimate the time of day based on when the panels are generating.

around 544 words

Paul Rudkin

The roads of China have just got a little more dangerous. My wife has passed her China driving test today!

People of China, save yourself while you can!

BitFolk Issue Tracker: Xen Shell - Feature #192 (Closed): Add brief exit instructions to footer bar

People are often confused about how to exit the Xen Shell console. Investigate adding brief "ctrl-] to exit" instructions to the footer.

Andy Smith: Booting the CentOS/RHEL installer under Xen PVH mode

CentOS/RHEL and Xen

As of the release of CentOS 8 / RHEL8, Red Hat disabled kernel support for running as a Xen PV or PVH guest, even though such support is enabled by default in the upstream Linux kernel.

As a result—unlike with all previous versions of CentOS/RHEL—you cannot boot the installer in Xen PV or PVH mode. You can still boot it in Xen HVM mode, or under KVM, but that is not very helpful if you don’t want to run HVM or KVM.

At BitFolk ever since the release of CentOS 8 we’ve had to tell customers to use the Rescue VM (a kind of live system) to unpack CentOS into a chroot.

Fortunately there is now a better way.

Credit

This method was worked out by Jon Fautley. Jon emailed me instructions and I was able to replicate them. Several people have since asked me how it was done and Jon was happy for me to write it up, but this was all worked out by Jon, not me.

Overview

The basic idea here is to:

  1. take the installer initrd.img
  2. unpack it
  3. shove the modules from a Debian kernel into it
  4. repack it
  5. use a Debian kernel and this new frankeninitrd as the installer kernel and initrd
  6. switch the installed OS to kernel-ml package from ELRepo so it has a working kernel when it boots

Detailed process

I’ll go into enough detail that you should be able to exactly replicate what I did to end up with something that works. This is quite a lot but it only needs to be done each time the real installer initrd.img changes, which isn’t that often. The resulting kernel and initrd.img can be used to install many guests.

Throughout the rest of this article I’ll refer to CentOS, but Jon initially made this work for RHEL 8. I’ve replicated it for CentOS 8 and will soon do so for RHEL 8 as well.

Extract the CentOS initrd.img

You will find this in the install ISO or on mirrors as images/pxeboot/initrd.img.

$ mkdir /var/tmp/frankeninitrd/initrd
$ cd /var/tmp/frankeninitrd/initrd
$ xz -dc /path/to/initrd.img > ../initrd.cpio
$ # root needed because this will do some mknod/mkdev.
$ sudo cpio -idv < ../initrd.cpio

Copy modules from a working Xen guest

I'm going to use the Xen guest that I'm doing this on, which at the time of writing is a Debian buster system running kernel 4.19.0-13. Even a system that is not currently running as a Xen guest will probably work, as they usually have modules available for everything.

At the time of writing the kernel version in the installer is 4.18.0-240.

If you've got different, adjust filenames accordingly.

$ sudo cp -r /lib/modules/4.19.0-13-amd64 lib/modules/
$ # You're not going to use the original modules
$ # so may as well delete them to save space.
$ sudo rm -vr lib/modules/4.18*

Add dracut hook to copy fs modules

$ cat > usr/lib/dracut/hooks/pre-pivot/99-move-modules.sh <<__EOF__
#!/bin/sh

mkdir -p /sysroot/lib/modules/$(uname -r)/kernel/fs
rm -r /sysroot/lib/modules/4.18*
cp -r /lib/modules/$(uname -r)/kernel/fs/* /sysroot/lib/modules/$(uname -r)/kernel/fs
cp /lib/modules/$(uname -r)/modules.builtin /sysroot/lib/modules/$(uname -r)/
depmod -a -b /sysroot

exit 0
__EOF__
$ chmod +x usr/lib/dracut/hooks/pre-pivot/99-move-modules.sh

Repack initrd

This will take a really long time because xz -9 is sloooooow.

$ sudo find . 2>/dev/null | \
  sudo cpio -o -H newc -R root:root | \
  xz -9 --format=lzma > ../centos8-initrd.img

Use the Debian kernel

Put the matching kernel next to your initrd.

$ cp /boot/vmlinuz-4.19.0-13-amd64 ../centos8-vmlinuz
$ ls -lah ../centos*
-rw-r--r-- 1 andy andy  81M Feb  1 04:43 ../centos8-initrd.img
-rw-r--r-- 1 andy andy 5.1M Feb  1 04:04 ../centos8-vmlinuz

Boot this kernel/initrd as a Xen guest

Copy the kernel and initrd to somewhere on your dom0 and create a guest config file that looks a bit like this:

name       = "centostest"
# CentOS 8 installer requires at least 2.5G RAM.
# OS will run with a lot less though.
memory     = 2560
vif        = [ "mac=00:16:5e:00:02:39, ip=192.168.82.225, vifname=v-centostest" ]
type       = "pvh"
kernel     = "/var/tmp/frankeninitrd/centos8-vmlinuz"
ramdisk    = "/var/tmp/frankeninitrd/centos8-initrd.img"
extra      = "console=hvc0 ip=192.168.82.225::192.168.82.1:255.255.255.0:centostest:eth0:none nameserver=8.8.8.8 inst.stage2=http://www.mirrorservice.org/sites/mirror.centos.org/8/BaseOS/x86_64/os/ inst.ks=http://example.com/yourkickstart.ks"
disk       = [ "phy:/dev/vg/centostest_xvda,xvda,w",
               "phy:/dev/vg/centostest_xvdb,xvdb,w" ]

Assumptions in the above:

  • vif and disk settings will be however you usually do that.
  • "extra" is for the kernel command line and here gives the installer static networking with the ip=IP address::default gateway:netmask:hostname:interface name:auto configuration type option.
  • inst.stage2 here goes to a public mirror but could be an unpacked installer iso file instead.
  • inst.ks points to a minimal kickstart file you'll have to create (see below).

Minimal kickstart file

This kickstart file will:

  • Automatically wipe disks and partition. I use xvda for the OS and xvdb for swap. Adjust accordingly.
  • Install only minimal package set.
  • Switch the installed system over to kernel-ml from ELRepo.
  • Force an SELinux autorelabel at first boot.

The only thing it doesn't do is create any users. The installer will wait for you to do that. If you want an entirely automated install just add the user creation stuff to your kickstart file.

url --url="http://www.mirrorservice.org/sites/mirror.centos.org/8/BaseOS/x86_64/os"
text

# Clear all the disks.
clearpart --all --initlabel
zerombr

# A root filesystem that takes up all of xvda.
part /    --ondisk=xvda --fstype=xfs --size=1 --grow

# A swap partition that takes up all of xvdb.
part swap --ondisk=xvdb --size=1 --grow

bootloader --location=mbr --driveorder=xvda --append="console=hvc0"
firstboot --disabled
timezone --utc Etc/UTC --ntpservers="0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org,3.pool.ntp.org"
keyboard --vckeymap=gb --xlayouts='gb'
lang en_GB.UTF-8
skipx
firewall --enabled --ssh
halt

%packages
@^Minimal install
%end 

%post --interpreter=/usr/bin/bash --log=/root/ks-post.log --erroronfail

# Switch to kernel-ml from ELRepo. Necessary for Xen PV/PVH boot support.
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel -y install kernel-ml
yum -y remove kernel-tools kernel-core kernel-modules

sed -i -e 's/DEFAULTKERNEL=.*/DEFAULTKERNEL=kernel-ml/' /etc/sysconfig/kernel
grub2-mkconfig -o /boot/grub2/grub.cfg

# Force SELinux autorelabel on first boot.
touch /.autorelabel
%end

Launch the guest

$ sudo xl create -c /etc/xen/centostest.conf

Obviously this guest config can only boot the installer. Once it's actually installed and halts you'll want to make a guest config suitable for normal booting. The kernel-ml does work in PVH mode so at BitFolk we use pvhgrub to boot these.

A better way?

The actual modifications needed to the stock installer kernel are quite small: just enable CONFIG_XEN_PVH kernel option and build. I don't know the process to build a CentOS or RHEL installer kernel though, so that wasn't an option for me.
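
As an aside, you can at least check whether a kernel you already have was built with that option by grepping its config. For the Debian kernel used above:

$ grep CONFIG_XEN_PVH /boot/config-4.19.0-13-amd64
CONFIG_XEN_PVH=y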

If you do know how to do it please do send me any information you have.

BitFolk Issue Tracker: Misc infrastructure - Feature #131 (Closed): IPv6 glue for bitfolk.com and bitfolk.co.uk

BitFolk Issue Tracker: Misc infrastructure - Bug #159 (Closed): Lurker list archives show "No matching frontend" errors

BitFolk Issue Tracker: Misc infrastructure - Feature #191 (Closed): Add gentoo-portage mirror

Paul Rudkin

Need coffee.

Paul Rudkin: Shanghai reports 6 local COVID-19 cases, first outbreak since November - Global Times

Shanghai found six local confirmed COVID-19 cases in its most populous downtown Huangpu district on Thursday, two months since the last local case was reported in the city, local health authority said on Friday.

Source:Shanghai reports 6 local COVID-19 cases, first outbreak since November - Global Times

Paul Rudkin

According to some reports on social media, two hospitals in Shanghai have been put on lock down and classed as Medium Risk areas. I believe the hospitals to be Renji Hospital and the Fudan Tumour Hospital. Worrying and challenging times ahead.

Andy Smith: If you’re running Ubuntu and/or using snaps, look into CVE-2020-27348

I was reading an article about CVE-2020-27348 earlier, which is quite a nasty bug affecting a lot of snap packages.

My desktop runs Ubuntu 18.04 at the moment, and so does my partner’s laptop. I also have a Debian buster laptop but I’ve never installed snapd there. So it’s just my desktop and my partner’s laptop I’m concerned about.

If you run Ubuntu 20.04 or later I think there’s probably more concern, as I understand the software centre offers snap versions of things by default.

Anyway, I couldn’t recall ever installing a snap on purpose on my desktop except for a short while ago when I intentionally installed signal-desktop. But in fact I have quite a few snaps installed.

$ snap list
Name                  Version                     Rev    Tracking         Publisher     Notes
core                  16-2.48.2                   1058   latest/stable    canonical✓    core
core18                20201210                    1944   latest/stable    canonical✓    base 
gnome-3-26-1604       3.26.0.20200529             100    latest/stable/…  canonical✓    -
gnome-3-28-1804       3.28.0-19-g98f9e67.98f9e67  145    latest/stable    canonical✓    -
gnome-3-34-1804       0+git.3556cb3               66     latest/stable    canonical✓    -
gnome-calculator      3.38.0+git7.c840c69c        826    latest/stable/…  canonical✓    -
gnome-characters      v3.34.0+git9.eeab5f2        570    latest/stable/…  canonical✓    -
gnome-logs            3.34.0                      100    latest/stable/…  canonical✓    -
gnome-system-monitor  3.36.0-12-g35f88a56d7       148    latest/stable/…  canonical✓    -
gtk-common-themes     0.1-50-gf7627e4             1514   latest/stable/…  canonical✓    -
signal-desktop        1.39.5                      345    latest/stable    snapcrafters  -

I don’t know why gnome-calculator is there. It doesn’t appear to be the binary that’s run when I start the calculator.

So are any of them a security risk? Well…

$ grep -l \$LD_LIBRARY_PATH /snap/*/current/snap/snapcraft.yaml
/snap/gnome-calculator/current/snap/snapcraft.yaml
/snap/gnome-characters/current/snap/snapcraft.yaml
/snap/gnome-logs/current/snap/snapcraft.yaml
/snap/gnome-system-monitor/current/snap/snapcraft.yaml

Those are all the snaps on my system which include the value of the (empty) environment variable LD_LIBRARY_PATH, so are likely vulnerable to this.

But does this really end up with an empty item in the LD_LIBRARY_PATH list?

$ which gnome-system-monitor 
/snap/bin/gnome-system-monitor
$ gnome-system-monitor &
$ pgrep -f gnome-system-monitor
8259
$ tr '\0' '\n' < /proc/8259/environ | grep ^LD_LIBR | grep -q :: && echo "oh dear"
oh dear

Yes it really does.

(The tr is necessary above because the /proc/*/environ file is a NUL-separated string, so that modifies it to be one variable per line, then looks for the LD_LIBRARY_PATH line, and checks if it has an empty entry ::)
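
If you want a rough scan of everything currently running for the same empty entry, something like this works (you’ll need root to see other users’ processes); for me it includes the gnome-system-monitor process from above:

$ for e in /proc/[0-9]*/environ; do tr '\0' '\n' 2>/dev/null < "$e" | grep ^LD_LIBRARY_PATH | grep -q :: && echo "${e%/environ}"; done
/proc/8259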

So yeah, my gnome-system-monitor is a local code execution vector.

As are my gnome-characters, gnome-logs and that gnome-calculator if I ever uninstall the non-snap version.

That CVE seems to have been published on 3 December 2020. I hope that the affected snaps will be fixed soon.

I don't like that the CVE says the impact is:

If a user were tricked into installing a malicious snap or downloading a malicious library, under certain circumstances an attacker could exploit this to affect strict mode snaps that have access to the library and were launched from the directory containing the library.

My first thought upon reading that was, “I’m safe, I haven’t been tricked into downloading any malicious snaps!” But I do have snaps that aren’t malicious; they are just insecure. The hardest part of the exploit is indeed getting a malicious file (a library) into my filesystem, in a directory from which I will run a snap.

Alun Jones: 21st century sun dial revisited

I have some solar panels on the garage roof, which run some LED lighting and a water feature in the garden. The charge controller has a serial port and I’ve got a PIC microcontroller monitoring the data and controlling the lights/fountain.

around 1885 words

Paul Rayner: Helper functions for prototyping with Rocket

Over the holidays I have enjoyed playing a little with Rocket. Here are a couple of things I’ve written which might be useful to others when prototyping a new site using Rocket.

Firstly, the examples show you how to create an instance of a struct from either a Query or a Form, but when using a template (I am using a Rust implementation of Handlebars) it can be useful to just pass all of the fields through as a map. Here are two simple implementations (one for Form, another for Query) which populate a HashMap with the incoming data.

// Imports needed for the snippets below (Rocket 0.4 paths).
use std::collections::HashMap;
use rocket::request::{FormItems, FromForm, FromQuery, Query};

struct RequestMap {
    map: HashMap<String, String>
}

impl<'f> FromForm<'f> for RequestMap {
    type Error = ();

    fn from_form(items: &mut FormItems<'f>, _strict: bool) -> Result<RequestMap, ()> {
        let mut map = HashMap::new();

        for item in items {
            let k = item.key.url_decode().map_err(|_| ())?;
            let v = item.value.url_decode().map_err(|_| ())?;
            map.insert(k, v);
        }

        Ok(RequestMap { map })
    }
}

impl<'q> FromQuery<'q> for RequestMap {
    type Error = ();

    fn from_query(items: Query<'q>) -> Result<RequestMap, ()> {
        let mut map = HashMap::new();
        for item in items {
            let k = item.key.url_decode().map_err(|_| ())?;
            let v = item.value.url_decode().map_err(|_| ())?;
            map.insert(k, v);
        }

        Ok(RequestMap { map })
    }
}

We create a struct RequestMap which contains just a HashMap, then implement the FromForm and FromQuery traits for it.

Now these maps can be used in routes as follows:

#[get("/example_get/<name>?<params..>")]
fn example_get_route(name: String, params: RequestMap) -> Template {
    Template::render(name, &params.map)
}

#[post("/example_post/<name>", data="<params>")]
fn example_post_route(name: String, params: Form<RequestMap>) -> Template {
    Template::render(name, &params.into_inner().map)
}

In these examples I have also set up a name parameter which maps to the template name, so you can copy and paste templates around and try them out with different parameters easily.

The second thing I have found useful in prototyping with Rocket is to set up a Handlebars helper which prints out information from the provided context. You can have it render as a comment in your template so that you can easily see what context is being provided to your template.

Here is the helper definition:

#[derive(Clone,Copy)]
struct DebugInfo;

impl HelperDef for DebugInfo {
    fn call<'reg:'rc,'rc> (&self, h: &Helper, _: &Handlebars, ctxt: &Context, rc: &mut RenderContext, out: &mut dyn Output) -> HelperResult {
        out.write(&format!("Context:{:?}",ctxt.data()))?;
        out.write(&format!("Current Template:{:?}", rc.get_current_template_name()))?;
        out.write(&format!("Root Template:{:?}", rc.get_root_template_name()))?;
        for (key, value) in h.hash() {
            out.write(&format!("HashKey:{:?}, HashValue:{:?}",key, value))?;
        }
        Ok(())
    }
}

and you set it up like this in the Rocket initialisation:

rocket::ignite().mount("/", routes![index,example_get_route,example_post_route])
        .attach(Template::custom(|engines|{
            engines.handlebars.register_helper("debug", Box::new(DebugInfo));
        }))
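
For completeness, here’s roughly how that fragment sits inside a full entry point. This is only a sketch, assuming Rocket 0.4 (with the usual nightly crate attributes and #[macro_use] extern crate rocket;) and rocket_contrib built with the "handlebars_templates" feature; index stands for whatever other routes you have:

// Sketch only: assumes Rocket 0.4 with #![feature(proc_macro_hygiene, decl_macro)]
// and #[macro_use] extern crate rocket; at the crate root, plus rocket_contrib
// with the "handlebars_templates" feature enabled.
use rocket_contrib::templates::Template;

fn main() {
    rocket::ignite()
        .mount("/", routes![index, example_get_route, example_post_route])
        .attach(Template::custom(|engines| {
            engines.handlebars.register_helper("debug", Box::new(DebugInfo));
        }))
        .launch();
}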

To use this as a helper, put something like this inside your Handlebars template:

<!--{{debug nameparam=username}}-->

The output should look something like this:

<!--Context:Object({"username": String("aa")})Current Template:Some("testroute")Root Template:Some("testroute")HashKey:"nameparam", HashValue:PathAndJson { relative_path: Some("username"), value: Context(String("paul"), ["username"]) }-->

The above is for a get request where the URL was http:///testroute?username=paul

Thanks for reading. I hope the above proves useful - I am still experimenting with Rocket (also with writing Handlebars helpers and combining all this with htmx), so there may be simpler or better ways of achieving the above. If you know of one, or if you have any questions or suggestions, or want to point out any mistakes, please contact me at the email address below. I’d love to hear from you.

Alun Jones: Weather station - Forecasting

Last night, I happened to stumble across an article about the Zambretti Forecaster, a device produced in 1915 for weather forecasting.

around 101 words

Alun Jones: Weather station - more fixes

In July, I replaced the failed BME280 pressure/temperature/humidity sensor with a DS18B20 temperature sensor and put the weather station back on the roof.

around 933 words

Alun Jones: Weather station - the latest disaster

The BME280 temperature/humidity/pressure sensor seems to have developed a fault. It’s been reading 100% humidity for days and the pressure output goes to over 1100hPa on a regular basis. I’ve decided to replace it with a single DS18B20 temperature sensor. This can easily be made weatherproof. I’ll build a separate unit for measuring air pressure, based around a BMP280. This can be sited indoors, away from the weather. As for humidity, I’m just going to stop recording that. I don’t find it a particularly useful measure in any case, apart from in the calculation of dew point and “feels like” temperatures. I can live with that inconvenience, I think.

around 108 words

Paul Rayner: What is an Err?

In Rust we don’t throw exceptions; we return Results. When coming from a language which throws exceptions, it can be easy to think of the Err type in a Result as being just like an exception, but sometimes a little more thought can provide something more elegant.

In this example, I am looking at a simple function

fn add_user(username:&str, email:&str, password:&str) -> ??? 

For this function, we want to add a user to a system. The return type for this could take a number of forms.

First Attempt: Return a struct representing the user if we added the user, and an error if we didn’t

Result<User,Err>

This would be the go-to function signature for me when starting out with Rust. In my head (coming from Java) it maps nicely to a function signature along the lines of

public User add_user(String username, String email, String password)
	 throws SomeException, SomeOtherException {
...
}

So let’s think about what those exceptional circumstances would be.

  • You might be trying to write to a database, and the connection fails. That’s an exceptional circumstance, but one we need to handle gracefully: you probably don’t want your real-world app to crash when your database is briefly unavailable for maintenance. If this were a public API you could internalise this in the function, so the return value might say something like “Something went wrong. This is probably temporary, so please try again in a bit”, whereas in a function internal to a service you might want to handle database errors explicitly (email the DBA, log something, etc.).

  • The data you got might be no good. The username or email might already exist, the password might be blank, etc. Here you want a return value which informs the caller of what happened explicitly so that they can take corrective action within normal program flow.

Mixing these together in an Err return is inelegant for the caller to handle. They have to handle the Ok case and the other program flow cases, but will probably want to pass the exceptional cases back up the stack.
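
To make that concrete, here’s a sketch of what the caller ends up writing when validation outcomes and genuine failures are lumped into one error type (the AddUserError enum below is hypothetical, just for illustration):

// Hypothetical error enum mixing normal-flow outcomes with genuine failures.
enum AddUserError {
    UsernameExists,
    EmailExists,
    Database(String),
}

// The caller has to fish the normal-flow cases back out of the error:
match add_user(name, email, pass) {
    Ok(user) => { /* carry on with the new user */ }
    Err(AddUserError::UsernameExists) => { /* ask for a different username */ }
    Err(AddUserError::EmailExists) => { /* offer a password reset instead */ }
    // The only genuinely exceptional case, which we really just want to pass upwards:
    Err(AddUserError::Database(e)) => return Err(e.into()),
}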

Better: Return an enum representing all events in normal program flow, and an Error for exceptions

Here we add an enum that looks something like this:

type Reason = String;

enum AddUser {
    Added(User),
    UsernameExists,
    EmailExists,
    UsernameInvalid(Reason),
    EmailInvalid(Reason),
    PasswordInvalid(Reason)
}

The function definition would now look like this:

fn add_user(username:&str, email:&str, password:&str) -> Result<AddUser,Err>

This looks much better. Err is reserved for genuine errors which we would likely propagate upwards, and our normal program flow is captured in one match statement. The calling function now looks something like this:

...

match add_user(name, email, pass)? {
    Added(u) => { ... },
    UsernameExists => { ... },
    ...
}

...

The Err part can now be propagated unchanged up to a level where we can handle genuinely exceptional cases consistently.
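
For completeness, here’s a sketch of what the body of add_user might look like with this signature. The DbError type and the db_* helper functions are made up for illustration; DbError stands in for whatever concrete error type sits in the Err position:

fn add_user(username: &str, email: &str, password: &str) -> Result<AddUser, DbError> {
    // Validation problems are normal program flow, so they come back as Ok values.
    if password.is_empty() {
        return Ok(AddUser::PasswordInvalid("password must not be empty".into()));
    }
    if db_username_taken(username)? {   // a genuine DB failure propagates as Err here
        return Ok(AddUser::UsernameExists);
    }
    if db_email_taken(email)? {
        return Ok(AddUser::EmailExists);
    }
    // Only infrastructure failures ever become the Err variant.
    let user = db_insert_user(username, email, password)?;
    Ok(AddUser::Added(user))
}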

Thanks for reading. If you have any questions or suggestions, or want to point out any mistakes, please contact me at the email address below. I’d love to hear from you.

Josh Holland: Setting up a dev environment for PostgreSQL with nixos-container

I’ve been using NixOS for about a month now, and one of my favourite aspects is using lorri and direnv to avoid cluttering up my user or system environments with packages I only need for one specific project. However, they don’t work quite as well when you need access to a service like PostgreSQL, since all they can do is install packages to an isolated environment, not run a whole RDBMS in the background.

For that, I have found using nixos-container works very well. It’s documented in the NixOS manual. We’ll be using it in ‘imperative mode’, since editing the system configuration is the exact thing we don’t want to do. You will need sudo/root access to start containers, and I’ll assume you have lorri and direnv set up (e.g. via services.lorri.enable = true in your home-manager config).

We’ll make a directory to work in, and get started in the standard lorri way:

$ mkdir foo && cd foo
$ lorri init
Jul 11 21:23:48.117 INFO wrote file, path: ./shell.nix
Jul 11 21:23:48.117 INFO wrote file, path: ./.envrc
Jul 11 21:23:48.117 INFO done
direnv: error /home/josh/c/foo/.envrc is blocked. Run `direnv allow` to approve its content
$ direnv allow .
Jul 11 21:24:10.826 INFO lorri has not completed an evaluation for this project yet, expr: /home/josh/c/foo/shell.nix
direnv: export +IN_NIX_SHELL

Now we can edit our shell.nix to install PostgreSQL so that we can access it as a client:

# shell.nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.postgresql
  ];
}

Save that and lorri will start installing it in the background.

Now, we can define our container, by providing its configuration in a file. I have called it container.nix, but I don’t think there’s a standard name for it like there is for shell.nix. Here it is:

# container.nix
{ pkgs, ... }:

{
  system.stateVersion = "20.09";

  networking.firewall.allowedTCPPorts = [ 5432 ];

  services.postgresql = {
    enable = true;
    enableTCPIP = true;
    extraPlugins = with pkgs.postgresql.pkgs; [ postgis ];
    authentication = "host all all 10.233.0.0/16 trust";

    ensureDatabases = [ "foo" ];
    ensureUsers = [{
      name = "foo";
      ensurePermissions."DATABASE foo" = "ALL PRIVILEGES";
    }];
  };
}

It’s important to make sure the firewall opens the port so that we can actually access PostgreSQL, and I’ve also installed the postgis extension for geospatial tools. The authentication line means that any host on the 10.233.0.0/16 container network can authenticate as any PostgreSQL user with no checking: fine for development purposes, but obviously be careful not to expose this to the internet! Finally, we set up a user and a database to do our work in.

Now, we can actually create and start the container using the nixos-container tool itself. This is the only step that requires admin rights.

$ sudo nixos-container create foo --config-file container.nix
$ sudo nixos-container start foo

By now, lorri should have finished installing PostgreSQL into your local environment, so once nixos-container has finished running, you should be able to access the new database inside the container:

$ psql -U foo -h $(nixos-container show-ip foo) foo
psql (11.8)
Type "help" for help.

foo=> \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 foo       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
           |          |          |             |             | postgres=CTc/postgres+
           |          |          |             |             | foo=CTc/postgres
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(4 rows)

And there we go! We have a container that we can access from the command line, or from an app, and we didn’t need to install PostgreSQL globally. We can even have multiple containers like this for different projects, and they’ll all use the same Nix store for binaries but have completely isolated working environments.

The nixos-container tool itself is a fairly thin wrapper around systemd (the containers themselves work via systemd-nspawn). The containers won’t auto-start, and you have to use systemctl to make that happen:

$ sudo systemctl enable container@foo.service

As a final flourish, we can save having to type in the user, host IP and database with very little effort, since we’re already using direnv and most tools can take their PostgreSQL configuration from some standard environment variables. We just have to add them to our .envrc file, and then re-allow it.

$ cat .envrc
PGHOST=$(nixos-container show-ip foo)
PGUSER=foo
PGDATABASE=foo
export PGHOST PGUSER PGDATABASE

eval "$(lorri direnv)"
$ direnv allow .
$ psql
psql (11.8)
Type "help" for help.

foo=>

Josh Holland: Use a scratch branch in git

git is the de facto standard for version control in 2020, and has held that position for some time. However, it has a reputation for being difficult to use, and its toolkit-based design leaves a huge amount of room for different workflows. Here’s one that I’ve found useful recently:

git checkout -b scratch

That’s it! Well, to begin with at least. You’ll probably have to rebase it whenever you pull. But what is this branch for? It’s a general staging area, like a souped up git stash. Stashes are easy to create and apply, but they are also easy to lose and hard to share between computers. With a scratch branch, you can create as many staging commits as you want and push them to other computers if you need to. And once something’s been committed into git, it’s very very hard to lose it unless you are trying to.

What’s this useful for? I had to apply a few tweaks to a work project’s build system to get it to build on NixOS, and I’m not sure if they are more generally applicable. But I want to keep them around for my personal use, so into my scratch branch they go. If later on I want to share them with my co-workers, I can just git rebase -i to tidy them up a bit, merge them into the main branch and push. Or I can just leave them there forever. It doesn’t matter!

Stuff that’s only relevant to your own idiosyncratic setup shouldn’t be left sitting there uncommitted, at risk of causing merge conflicts or accidentally being squashed by a checkout: put it safely into your scratch branch and you can keep track of it with the full power of git as you need to. You can even have multiple scratch branches! Lightweight branches1 are one of git’s most revolutionary features, and it’s easy to forget that you can use them in such a way.


  1. really lightweight: a branch is just a file in $GIT_DIR/refs/heads/<name> containing a commit reference which all the git commands know how to manipulate. Try running cat .git/refs/heads/master.

Alex Hudson: The Trials of Contact Tracing

The UK NHSX “contact tracing” app is being deployed today, in one small place, to test whether or not this approach might help get us out of lockdown. Unfortunately, the launch is beset with published argument one way and the other about whether or not this app is technically good, meets privacy expectations, or simply whether it will work.

Josh Holland: Hakyll + sourcehut = ☺

This blog is now automatically deployed by pushing to its source repository at sourcehut! Here are the steps I took to convert it from a manual rsync to a job run by builds.sr.ht after each push to its git repository.

The first step is to create an SSH key so that the build service can access the filesystem on the VPS hosting my blog. Obviously we can’t use a passphrase since we need it to connect without any human intervention. Easy enough with the usual command:

desktop $ ssh-keygen -t ed25519 -f srht

Then to the secrets page on sourcehut and upload the content of the srht file (NOT srht.pub) as a new secret with the type ‘SSH Key’ and a reasonable description. We’ll need to make a note of the UUID that got assigned; for me, it was 2959238e-ea6f-4276-9441-cdc71b933f73.

The next step is to set up the user that we will connect as from the sourcehut builds service. We’ll also need to make sure the ownership and permissions on the webroot are properly set up.

server # WEBROOT=/srv/inv.alid.pw
server # adduser --system srht
server # chsh -s /bin/sh srht
server # chown -R srht:adm "$WEBROOT"
server # chmod -R ug=rwX,o=rX "$WEBROOT"
server # find "$WEBROOT" -type d -exec chmod g+s {} +

I like to have files like this writable by an admin group so that I can delete stuff without having to be root myself, and the last command sets the setgid bit on all the directories so that newly-created files continue to belong to that group. That’s personal preference though: in the end, as long as the srht user can write there (and nowhere else) then everything should work just fine.

We’ll also need to authorise the SSH key we generated to log in as the srht user, and we can restrict it to only run rsync at the same time. Take the content of the srht.pub file we generated earlier, and add it to ~srht/.ssh/authorized_keys, along with the options to lock it down to rsync:

restrict,command="rrsync /srv/inv.alid.pw" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH4E9kTv92l1NY1DgqXTnkHJWglVW+Laz6mQELviXzGI srht deployment

rrsync is not a typo: it’s a script that comes with the rsync package (in Ubuntu, it’s installed to /usr/share/doc/rsync/scripts/rrsync.gz, so you’ll have to decompress it and copy it somewhere that shows up in the srht user’s PATH) that means logging in with the given SSH key can only run rsync in the specified directory.

And that’s it for the server-side configuration! Now, we just have to fill in the .build.yml file to tell the sourcehut builders how to do all the deployment. Here it is, all in one go:

image: archlinux
packages:
  - stack
  - rsync
sources:
  - https://git.sr.ht/~jshholland/inv.alid.pw
environment:
  user: srht
  webhost: procyon.chrys.alid.pw
  hostkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKemAh9xHg0lQBhwzVp9UemuSgaDqVC5Mwa+CmnXijP8
secrets:
  - 2959238e-ea6f-4276-9441-cdc71b933f73
tasks:
  - build: |
      cd inv.alid.pw
      stack install
      $HOME/.local/bin/site build
  - deploy: |
      mkdir -p $HOME/.ssh
      echo "$webhost $hostkey" > $HOME/.ssh/known_hosts
      cd inv.alid.pw
      rsync -rlptzz --delete _site/ ${user}@${webhost}:

This ties everything together in two steps: first we compile the site binary that generates the website structure in _site, then, in the deploy task, we set up SSH and rsync the files into place. That’s it! We just have to save this into our repo and everything will just happen automagically when we push new commits.

One issue I have at the moment (and probably the reason I’m only finishing this now, when I first started trying to set this up back in November) is that it takes a long time for the build server to download and compile all the Haskell dependencies, particularly Pandoc. In the long term, a half-hour delay to publishing isn’t that bad, but it’s a little annoying. Next on my list is to try building this with NixOS (instead of Stack) to see if the binary caching provided there speeds things up a bit.

Paul Rayner: The rust type system - elegant and simple

Rust has an elegant type system. To declare a type is as easy as this:

type MyType = u32;

Now that we have MyType, we can use it as an argument to a function, a return type, or anywhere else that you would use a primitive or struct.

The big advantage this confers is safety. I’d like to illustrate this with an aside into programming history, based on a very worthwhile read.

Apps Hungarian was a useful way to help prevent common programming errors. Programmers would include a short description of what a variable’s type should be in the name, so an integer representing a relative coordinate in the x-axis might have a name like BoxMinSz_RelX. On seeing the statement

Dim relx_MaxWidth as Integer
set relx_MaxWidth = max(relx_BoxMinSz, absx_LineMinSz) 

It is clear that you are comparing a relative and absolute coordinate, and that this is likely an error.

Hungarian notation got a terrible reputation because the useful technique above is not what people learned as Hungarian notation. They learned to write this:

Dim iMaxWidth as Integer
iMaxWidth = max(iBoxMinSz, iLineMinSz) 

Which doesn’t add any new and useful semantic information, just bloat.

Defining a distinct type in Rust enforces the first case. Note that a bare type alias like the one above is just another name for u32, so for the compiler to actually enforce the distinction you want the newtype pattern, a one-field tuple struct such as struct MyType(u32);. Then you can’t pass a u32 where a MyType is expected. This eliminates a whole class of bugs, and in Rust it is a zero cost abstraction, so it’s not the wasteful object wrapping you would get in something like Java, where one would typically write

class MyType {
	int inner;
}

which turns a primitive into an object, adding references, indirection and increasing storage size.
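
As a concrete sketch, here’s the Hungarian-notation example from above done with newtypes (the RelX and AbsX names are mine, not from the article):

// One-field tuple structs ("newtypes"): still just an i32 at runtime,
// but distinct types as far as the compiler is concerned.
struct RelX(i32);
struct AbsX(i32);

fn max_rel_width(a: RelX, b: RelX) -> RelX {
    RelX(a.0.max(b.0))
}

fn main() {
    let box_min = RelX(10);
    let line_min = AbsX(25);
    let _w = max_rel_width(box_min, RelX(12));      // fine
    // let _e = max_rel_width(RelX(10), line_min);  // compile error: expected RelX, found AbsX
}

The structs compile down to a bare i32, so there is no runtime cost, which is exactly the zero cost abstraction being described.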

Thanks for reading. If you have any questions or suggestions, or want to point out any mistakes, please contact me at the email address below. I’d love to hear from you.

Alex Hudson: A No Drama Action Plan for Covid 19

It’s all over the news, and this is Yet Another Hot Take. Don’t completely despair: I’m not going to tell you what the virus is, or cover why you should (or shouldn’t) be worried. What I am going to tell you is that your business should be prepared. Even if you have a decent business continuity plan in place, there are reasons to review it now.

Jon Fautley: Setting up Traefik on AWS using a Network Load Balancer

I’ve recently been looking at various Kubernetes ingress controllers, and have taken a bit of a shine to Traefik. This is a quick guide to installing the Traefik controller on an existing Kubernetes cluster running inside AWS, and using the AWS Network Load Balancer to terminate SSL. Installing Traefik: We’re going to use the Helm chart to install Traefik on our existing K8s cluster. In my case, the cluster has been provisioned using kops, and is additionally using the SpotInst controller to ensure the entire cluster is running on AWS Spot instances to reduce the cost.

Jon Fautley: Using RFXmngr to 'remotely' control/configure an RFXtrx433

I recently needed to program a new device on my RFXtrx device, which can only be done using the Windows(ish)-only RFXmngr program. However, my RFXtrx is buried under the stairs, connected to my Home Assistant installation.

Thankfully, it turns out it’s possible to access the device over the network with reasonably minimal effort; this guide shows you how.

Phil Spencer: My Commodore 64 nostalgia journey

Are you keeping up with the Commodore? No? Maybe you should, it’s had a bit of a public resurgence in the last 2 years and once you dive into the meat of it you’ll find out it never really died to begin with.

C64 Mini

So in 2018 I caught wind of something called the C64 mini, this was like all the other ones, a micro version of a beloved device from yesteryear with a pretty carousel and some preloaded games. It of course did look like the C64 but had no working keyboard and allowed you to connect a Joystick, keyboard or USB stick with additional games to it. The company claimed this was a stop gap before getting to their main target. A full sized working keyboard version.

So I bought a mini cause to me it seemed like a reasonable investment to ensure one day I could get the big one and not have the company go out of business from lack of support.

I played with it a little bit; multi-disk games were not impossible but sort of a pain, and not having a keyboard was a deal breaker for any good games as the joystick keyboard was annoying. It joined the other minis in a bag and was only taken out periodically.

TheC64

A year later, end of 2019 into early 2020, the full sized version started shipping! NO NORTH AMERICAN RELEASE. Every day I go on Twitter and see some lucky EU jerk enjoying the hell out of their TheC64! (sorry… jealous) It was very annoying to me, and the best answer I could get from RetroGames was “2020 someday”. This was not acceptable: my C64 obsession had grown in anticipation, and seeing so many favourable reviews of this thing clinched it. I needed a C64.

I bought two

So I spent a couple of weeks on eBay, getting outbid left and right, and the prices were getting pretty high on some of these units. It was obvious there was still demand for these machines 37 years later. I lucked out and won an auction eventually and had one heading my way; however, I forgot about another one I bid on and ended up with 2… $$$. Now I needed a way to hook it up to my TV. I had lots of RCA composite cables laying around, so I bought a Commodore video cable adapter. I also bought an SD2IEC device, an SD card reader that plugs into the Commodore serial port. The last device I had ordered was a TOM, which is a Commodore joystick port to USB converter. I had all the devices I needed on the way; it was expensive, but I was going to be back in the game!

Busted!

So the first Commodore arrived, then the day after the video adapter arrived (yay). I didn’t have any software, but I could get the device to turn on and show a blue screen. I entered a small BASIC program that printed some text on the screen and asked for my name. This was awesome! The second Commodore arrived, however this one did not have as happy a debut: it powered on but there was no keyboard input. It came with a cartridge, “Pitstop II”, so I tried that in the first C64 and some of the GFX were missing, which of course was concerning.

Then my SD2IEC arrived. I put together an SD card of games I liked, added the FB64 utility and went to try it out. The utility loaded perfectly and allowed me to browse the SD card on my C64, however any program I tried to load crashed out as soon as the GFX tried to initialize….. Great, I have two 64s, both broken!

Fixed one killed one


Initially I thought this must be the VIC-II chip, so I tried swapping the dead keyboard’s VIC into the seemingly bad video C64… no luck, same problem…. Swapped back. Oh no! One of the VIC-II pins broke off… shit…. At this point I figured it must be the RAM or CPU on the newer ’84 C64, but I found out during my research that the older ’82 C64 might just need the CIA chip replaced. I pulled the IC out of the bad RAM|CPU|broken-VIC C64 and put it in the ’82 and voila, the keyboard works. Better yet! It loads games!! No sound?!!!! FFS!!!

Took the SID out of the busted up ’84 C64 and put it into the ’82 and now sound works. So it was a long and perilous journey (some perils I created), but I now had a working C64.


Commodore in 2020

I picked up some Atari joysticks at a local store and my TOM eventually arrived, so I now had a functioning C64 in 2020. It was amazing. I joined a FB group for Commodore and lo and behold I found out there are still tens of thousands of people active in this community. In 2019 there were 60 new games released. I bought a game called Planet X2, made in 2017; it’s an 8-bit RTS-style game. It’s great! The European demo scene is still alive and kicking (NTSC demos seem to have tapered off… so I can only run a few on my machine). There is new hardware, new workalikes, replacement motherboards, SID chips; 37 years later and this 8-bit computer never said die.

Initially I was only stopping by for the nostalgia but I stayed for the experience, new and old.

 

 

Alex Hudson: Debugging generic errors: yarn could not resolve host registry.yarnpkg.com

I rarely blog about purely technical errors, but this specific message from yarn is something I’ve seen a number of people struggling with. I’m going to explain a bit more about why it comes about, and how I solved it in my situation. This will not work for everyone, but it may give you a hint.

Alex Hudson: The 'No Code' Delusion

Increasingly popular in the last couple of years, I think 2020 is going to be the year of “no code”: the movement that says you can write business logic and even entire applications without having the training of a software developer. I empathise with people doing this, and I think some of the “no code” tools are great. But I also think it’s wrong at heart.

Alex Hudson: On Becoming a CTO in 2020

A good friend recently wrote to me to ask what it takes to become a CTO in this day and age. Unfortunately, he DM’d me over Twitter: try as I might, there was nothing of note I could squeeze into that format (usual adage of “if I’d had the time, it would have been briefer”). So, I wrote this largely for him, but I think it’s generally useful.

Stuart Swindells: Wiki

After years1 of running a Mediawiki instance effectively read-only2, I’ve ported the content across to a static Jekyll site.
Although I didn’t bother measuring it, performance feels much better 3.

There wasn’t a lot of content, so it didn’t take long to copy and paste and turn into Markdown. The plus side of being Markdown-based is that if I decided to use another static site generator4 it wouldn’t be difficult to drop the content in.

The permalinks for each page are set to match the path for the old Mediawiki pages, so any old links should still work. Google Analytics is also no more.

However, the content is very outdated (some was incomplete to begin with; I also started using Ansible to do config management so some is just done differently now) so it might be time to do away with the whole lot. I’ve been playing with the idea of replacing/supplementing this blog with a Jekyll one, so it wasn’t a total waste of time anyway.

  1. Google Analytics stats go back to 2013, AWStats has data going back to 2005!
  2. with user registration disabled and a registered user required to edit content
  3. makes sense: each page should be a bit smaller and there’s no database involved in generating the pages
  4. I briefly played with using Hugo – neither Jekyll nor Hugo really lends itself very well to non-blog content

Phil Spencer: Starships on Stream

I have had this idea for a bit but just haven’t committed to it yet, so maybe if I write out my thoughts on it I’ll actually get it started. Let me know what you think in the comments.

Basic Idea

Run racing events on Vendetta Online using the already existing race tracks in Sedina, and integrate them with a Twitch stream for external interaction.

Planned features

Host plugin

  • Announce race to 100 periodically (5 or 10 min)
  • In system count down race automatically
  • Once the race has started, track racers using Lua events on entry, death, and finish of the race track
  • Track time of racers for statistics separate of VO’s already existing race stats
  • Award points based on 1st, 2nd,3rd finish
  • Process “bets” from twitch users outside the game viewing the stream
  • Create/Show leaderboard in between races either in game or via external web portal
    • Actually a web-based one would be better; it can then be overlaid onto the Twitch stream
  • Plugin will track the win loss/race k/d ratio of racers to relay out to Twitch before betting opens

Twitch Bot

  • Relay VO chat out to Twitch channel (relaying in will be disabled)
  • Twitch users have channel points, these can be used to bet on players in game
  • Twitch users can bet on a racer they believe will win eg !bet CrazySpence
  • Twitch Bot will announce race starts, player deaths and who killed them and winners in chat
  • Twitch Bot will track the win/loss statistics of the players betting, plus keep a point scoreboard of bettors

 

Laura Hobbs: Long Summer

This summer was a pile of poop. Life was really busy kicking my ass and the blog didn’t even enter my mind, and anything I had to say wouldn’t have been for public consumption even if I had. But I finished 2 hats and a pair of socks. For the socks, the 2nd one was …

Footnotes