December 31, 2014

Laura Denson (laura)

2014 Wrap Up

Hi! It’s the end of another year where I haven’t managed to post at least once a month, and when I realize that I haven’t posted here since May I feel a bit ashamed and depressed about that. I’ve spent more time in the past year updating my site than I have posting. I’d talk […]

by laura at December 31, 2014 05:56 PM

December 30, 2014

James Beckett (jmb)

Huh? What?

Wow. Not been over this way in a good while.
They've decorated a bit.

December 30, 2014 04:24 PM

December 22, 2014

Tim Waugh (cyberelk)

December 18, 2014

Phil Spencer (CrazySpence)

Thrifty Raspberry Pi A+ case

My Pi A+ continues to impress me with its form factor and abilities, but as I begin to settle it into a more permanent role its lack of a case has become a continuing problem.

I would like to eventually get the Pibow Royale, but it being the Christmas season I can’t justify buying myself something right now, so maybe in the new year I’ll pick one up!

Reduce, Reuse, Recycle

If you’ve read any of my arcade Pi posts you’ve probably noticed that both my original and current arcades reuse a box. For the first version, where I followed the Adafruit tutorial, I used the box Adafruit shipped the parts in. For the second I used my VMware 5-year service award box (there was a gift in the box; they didn’t just give me an empty black box for a service award).

This time I am going to use the box the Pi came in. It has pretty Raspberry Pi and element14 branding on it, it’s mostly the right size, and it’s easy!

Cutting edge

So for any box-to-case conversion your main (really, your only) tool is an exacto knife. It will let you easily carve whatever shapes you need into the box of your choosing. For the Raspberry Pi A+ you need to pick a corner for the Pi that will accommodate the single USB port and the power/HDMI/sound ports on the side. I cut a single hole for the USB port and a long slot on the side for the 3 other ports. If you plan to use GPIO as well you can cut a hole in the top of the box for that too, but in my case it was not required.

Keep it in place

Now, this box is bigger than the Pi itself and the Pi can move around freely inside it, so what I did was roll up the static bag the Pi came in and place it behind the Pi to keep it from moving side to side. Then I folded the part of the roll sticking out the end lengthwise into the box to keep the Pi from moving back and forth. Keep in mind that if you plan to plug and unplug things a lot, this does not prevent all movement; you may want a sturdier case for that. In my situation this Pi will sit in a mostly static setup with minimal cables coming and going, so it will work.

If you are worried about heat you can cut holes in the top of the box, but the A+ is pretty decent in that respect and the ventilation provided by the USB and HDMI slots I made appears to be sufficient. So, as you can see, I have a functional case for my Pi at zero cost other than 5 minutes of my time. It isn’t perfect, but it works.



by KingPhil at December 18, 2014 10:46 PM

December 10, 2014

Phil Spencer (CrazySpence)

Raspbmc: Remember the cool red Confluence?

When I first started using Raspbmc almost 2 years ago it had the standard blue XBMC Confluence skin, and since I was new to the whole XBMC thing in general it was still all shiny and fine in its default glory.

Then!

A month later my Pi did its little updates on one reboot and all of a sudden I had this totally awesome Raspberry Pi-inspired Confluence, and my excitement was over 9000!!!!!!!!


THEN!!!

A few months later my SD card was corrupted and I had to reinstall, and my famous Red Connie (what I will call the Raspbmc Confluence from now on) was gone and mentions of it were stricken from the universe!!

Whyyy ohh why

A year passed and suddenly I wanted Red Connie back, and I found a forum thread where they did exactly that: patched it and brought it back. If you are running Gotham (13.x) it is as simple as SSHing to your Pi and pasting the following into the shell:

cd /home/pi
# grab the old Raspbmc Confluence assets
wget http://mirrors.arizona.edu/raspbmc/downloads/bin/xbmc/skin.confluence.raspbmc.zip
unzip skin.confluence.raspbmc.zip
# copy the stock Confluence skin into your addons folder and rename it
cp -R /home/pi/.xbmc-current/xbmc-bin/share/xbmc/addons/skin.confluence /home/pi/.xbmc/addons
mv /home/pi/.xbmc/addons/skin.confluence /home/pi/.xbmc/addons/skin.confluence.raspbmc
# give the copy its own addon ID and name so it shows up as a separate skin
cd /home/pi/.xbmc/addons/skin.confluence.raspbmc
sed -i 's/id="skin.confluence"/id="skin.confluence.raspbmc"/' addon.xml
sed -i 's/name="Confluence"/name="Confluence Raspbmc"/' addon.xml
# drop the Raspbmc backgrounds, colours and textures over the top
cp -R /home/pi/skin.confluence.raspbmc/backgrounds /home/pi/.xbmc/addons/skin.confluence.raspbmc
cp /home/pi/skin.confluence.raspbmc/colors/defaults.xml /home/pi/.xbmc/addons/skin.confluence.raspbmc/colors/
cp /home/pi/skin.confluence.raspbmc/media/Textures.xbt /home/pi/.xbmc/addons/skin.confluence.raspbmc/media/
# clean up the extracted files and the zip
sudo rm -R /home/pi/skin.confluence.raspbmc
rm /home/pi/skin.confluence.raspbmc.zip
  • Reboot
  • Go into Settings > Appearance > Skin > Skin and select the new Confluence Raspbmc that appears.

Apparently on Kodi (XBMC 14.x and beyond) it doesn’t work so hot, but the thread I followed has instructions for that too if you so desire.

Link to original thread here


by KingPhil at December 10, 2014 09:26 PM

December 06, 2014

Graham Bleach (gdb)

Quickly monitor availability of web services with curl

Some of my recent work was to reduce the interruption to users when we deploy changes.

We have comprehensive scenarios that we run against our complete test environments with a performance testing tool to get a thorough answer. It takes quite a while to get feedback from these tests, so to get quicker feedback I needed a fast way to measure continued availability of a web service in isolation, usually on a vagrant box.

This is the kind of thing that I unfortunately forget, so I am documenting it here mostly for my own benefit.

Useful curl options

The basic curl command I ended up using:

curl -sS --max-time 5 -w "%{http_code}\n" -o /dev/null http://www.example.com/

Why I picked these options:

-sS 

Be silent unless errors occur. The default output from curl is too verbose and not particularly amenable to parsing into structured data. But if errors happen, knowing what they were will help debug problems.

--max-time 5

Timeout after 5 seconds. The default timeout in curl is much more lenient than either a user or a client calling an API will be.

-w "%{http_code}\n"

Prints a newline-delimited list of HTTP response codes, with ‘000’ used if it encounters a problem that means it didn’t see an HTTP response. This makes it easy to count error responses.

-o /dev/null

As long as the thing being tested sets its HTTP response codes correctly, there is no need to see the content of the response.

Repeated tests and other embellishments

This runs requests in a loop as fast as possible. If anything goes wrong, output will be printed to the console. I’ve used example.com, as it should work if you have Internet access. Please cancel this after a short time, to avoid needless requests to example.com:

while true; do curl -sS --max-time 5 -w "%{http_code}\n" -o /dev/null http://www.example.com/ | egrep -v '^200$'; done

Some examples that should fail:

while true; do curl -sS --max-time 5 -w "%{http_code}\n" -o /dev/null http://www.example/ | egrep -v '^200$'; done
while true; do curl -sS --max-time 5 -w "%{http_code}\n" -o /dev/null http://blackhole.webpagetest.org/ | egrep -v '^200$'; done

A much friendlier 100 requests, pausing for half a second between requests:

for i in {1..100}; do curl -sS --max-time 5 -w "%{http_code}\n" -o /dev/null http://www.example/ | egrep -v '^200$'; sleep 0.5; done

Add the date and pipe that and the status code to a comma-delimited file for further analysis:

rm -f output.csv; for i in {1..100}; do curl -sS --max-time 5 -w "$(date --rfc-3339=seconds),%{http_code}\n" -o /dev/null http://www.example/ >>output.csv; sleep 0.5; done
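
To summarise the results afterwards, a couple of one-liners like these (a quick sketch based on the output.csv produced above) count the responses per status code and pull out the timestamps of any failures:

# count of responses per status code
cut -d, -f2 output.csv | sort | uniq -c
# timestamps of any non-200 responses
awk -F, '$2 != "200"' output.csv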

December 06, 2014 12:00 AM

November 27, 2014

Phil Spencer (CrazySpence)

Raspberry Pi A+: Testing the limits

So now I am on day two of the A+ and I wanted to see where that 256MB of RAM would leave me. I have a stock Raspbian Wheezy image installed, plus a powered USB hub with a keyboard, mouse and wifi adapter connected for these tests.

MAME
I do a lot of arcade gaming with my B’s so it was necessary to see what the A+ would do. On reddit all sorts of people said the RAM would prevent it, which made little sense to me since these games use little RAM and the emulation is all CPU.

I put mame4all on the Pi; it is no longer in the Pi Store so I downloaded it from Google Code.

If you decide to compile this on your own for some reason you’ll need to add -lasound and -lrt to the LIBS= line, as they’re missing, but really, just save yourself the hassle: there’s a pre-compiled binary with folder layouts in mame4all_pi.zip.
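
If you do go the compile route, a one-liner along these lines takes care of it (a sketch; I’m assuming the makefile in the mame4all source tree is simply called Makefile, so adjust the name if yours differs):

# append the missing libraries to the existing LIBS= line
sed -i '/^LIBS/ s/$/ -lasound -lrt/' Makefile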

I got Ms Pacman and Street Fighter 2 ROMs onto the Pi. The key to the test is whether the sound emulation is perfect, as that is the first sign of failure.

Ms Pacman of course passed with no issue; the real test was SF2. Let me just say SF2 performs as flawlessly as it did on the B: no blips, no slowdowns. Perfect!

So this opens up the possibility of a cheap no solder 2 player Pi Arcade system *adds to project list*

OpenParsec

This one pushes the B to its limit, but is it a GPU or CPU limit, or RAM? Based on a quick glance at top on the B it might be CPU/GPU, but that cache line leaves it open to guesswork, so I’d better check it out.

This required me to build SDL2 and SDL2_mixer, which gave me a chance to test the Pi’s heat and stability at the 950MHz rating. I am happy to say it is just as stable as the B is at this frequency and it seems to actually generate less heat under load. Once I had the dependencies installed I was able to compile OpenParsec. This, for anyone unfamiliar, was a space combat game released as a LAN game in 1999-2002 and then open sourced in 2003. The latest source tree allows it to work with the Pi’s hardware-accelerated GLES driver.

The game runs and plays music just as well as it did on the B; the RAM was left at around 15MB free with 20MB in cache, so it was close. So far the A+ is just as capable as the higher-RAM B models for anything I have done in the past. I had to increase the GPU memory to 96MB because at 64MB the frames were bleeding together.
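
For reference, that GPU memory split lives in /boot/config.txt (a minimal sketch; raspi-config’s memory split option writes the same setting):

# /boot/config.txt - give the GPU 96MB of the 256MB total
gpu_mem=96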

So flying through space, collecting items, everything was fine. I added 6 bots from my PC to the server and that reduced the framerate quite a bit. The limit for the Pi seems to be around 3-4 players max, A or B models. Still, for a $20 board I have MAME and space combat so far. I think we’re doing OK.

Epiphany

I had yet to try Epiphany; my one Pi B is XBMC and the other a dedicated console arcade, so it was time to see if this lived up to the hype. We all know the previous browsers were junk and slow, so **ANY** improvement would be great.

So, for general browsing this seems to work decently. You can see the CPU load window fill up pretty quickly, but it is significantly faster than Midori and the others I had used on the B. Even with the reduced RAM this was a significant improvement.

I decided to be a jerk and try to crush the Pi, so I went to YouTube. I was quite happily defeated. I played the Jurassic World trailer on it with NO ISSUES AT ALL. This must be that fancy hardware-decoded video playback I heard about. So even on an A+ you can enjoy YouTube.

I clicked the fullscreen button and things sorta fell apart there… 256MB of RAM, I’ll let that slide. I couldn’t watch any YouTube at all on the old browsers, even on the 512MB B model.

Conclusion

The only drawbacks to the A model are the lack of USB ports and ethernet, which you can work around with a USB hub. The RAM doesn’t have too much impact for single-purpose projects, and its smaller form factor is better for embedding. A great addition to the Pi family and I am glad I got one.


by KingPhil at November 27, 2014 09:15 PM

Tolien

RIAT 2014

The 2014 Royal International Air Tattoo was last July; I intended to post this rather less than 4 months after the fact [1] but it got stuck in my procrastination pile.

I had a three day ticket (Friday to Monday) which included, on the Friday, access to the pit area where the display teams [2] parked their aircraft when they weren’t on display.

The photos I’ve gotten [3] around to processing and uploading to Flickr:

The weather was very good, which is always a plus when you’re wandering around a field all day. Saturday was scorching hot and Sunday was cloudy with a little rain. However on Friday, it was cloudy but still more than sunny enough for me to get quite sunburned [4].

By the time I got home I was peeling lots.

  1. I might even have settled for doing it before tickets for 2015 went on sale
  2. Frecce Tricolori, Red Arrows, Breitling Jet Team, Patrouille de France and Patrouille Suisse
  3. Out of around 4500 images — every time I dig through the pile I find another few worth bothering with
  4. Protip: sun cream needs to be used to be effective…

by tolien at November 27, 2014 12:54 AM

November 25, 2014

Phil Spencer (CrazySpence)

Raspberry Pi A+: First run!

The Pi A+ has had my attention since it first launched and I have been eagerly awaiting mine. Today that day finally came and I saw the package waiting to be opened on my desk.

I ordered my Pi from Adafruit along with a pre-installed SD/microSD combo card with Raspbian. I’ve already played the OS install and self-setup game for many years, and for this project I just wanted to get up and go as soon as it arrived.

The Pi comes in an updated box; the original was just a plain-looking white package, while this one sports the RPi Foundation logo, element14 branding and product details on the back.

The new form factor of the A+ is really what had my attention and I just had to see it for myself. It is a third shorter than the B models I have. The eventual goal for this Pi is a track-view camera for the train club, powered by a power bank.

First boot

Now, the A+ only has 1 USB port and no ethernet, so for initial setup it is a good idea to have a USB hub around to allow you to connect more peripherals. In my case I already had a hub that I bought when I got my first B model almost 2 years ago. I plugged in my keyboard and mouse, inserted the microSD card (which has a nice push-spring release too) and connected the HDMI and power.

The initial setup screen popped up; I used my whole SD card, set my overclock to 950 (standard among my Pi’s) and rebooted again.
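
For reference, the headline setting behind that overclock is arm_freq in /boot/config.txt (a rough sketch; the raspi-config presets also adjust core/SDRAM clocks and overvoltage, so it’s easiest to let raspi-config write the values for you):

# /boot/config.txt
arm_freq=950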

I logged into the OS and ran startx. There was a Minecraft Pi icon on the desktop, which seemed like a good first-run test.

I don’t actually play Minecraft, but the application ran very smoothly and I chopped away at the mountainside with a sword for a bit.

The first boot test was over and the A+ gets an A+.

Projects

First I am going to test the limits of this Pi by having it run in my arcade controller. We will see if it can match or come close to the same performance as the B with the reduced RAM. I believe it will, and this will open up the possibility of a 2-player arcade Pi down the pipes. The A+’s extra GPIO pins make a full 2-player deck possible. If it doesn’t hold up there is always the B+ model.

Secondly, the actual intention for this Pi is to convert one of my HO scale flatbed train cars into a camera car for the GNMRE club layout. The reduced power footprint of the A+ should make it more co-operative with the small power bank I would like to use, whereas the B’s used too much power for that to be possible.

So keep an eye out; you’ll be hearing more from me in the coming weeks!


by KingPhil at November 25, 2014 09:30 PM

November 22, 2014

Phil Spencer (CrazySpence)

Time machine backups to the Raspberry Pi

Time Machine is Apple’s computer backup solution and it works great. I have been using it for 6 years and even supported it for a couple of years in AppleCare way back when. The nice part about it is you plug in a new hard drive, it prompts you asking whether to use it for backups, and bam, off to the races in 1 click.

Under the hood

Just like all of Apple’s fancy magic GUI-based tools, Time Machine is actually backed by command-line goodness that lets the power user make it go above and beyond the basic “magic” way of getting things done. In Time Machine’s case you can use the command line to make any mountable volume into a backup disk. Now, this might not sound like that big of a deal, but what it means is that you can use any hard drive, formatted in any manner and shared over a variety of protocols, to back up your Mac if that’s what you desire. In my case I have a Raspberry Pi in the living room with a 3TB exFAT hard drive that is only barely used, and I decided it would make a good network backup server as well.

On the Pi

The easiest way to do this with the Pi is Samba. Samba is an open source project that allows you to share volumes out as Windows shares.

In some cases your distribution may already have a share for mounted USB devices, but in this case I’ll assume it doesn’t.

Make sure your external drive/USB stick/flash card/whatever is mounted to your Pi somewhere. Sometimes these are mounted for you automatically by the distribution; in Raspbmc’s case, for example, all USB drives get automatically mounted under /media/.

If that is not your case, mount the disk to a folder of your choosing (usually under /mnt or /media), then proceed.
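
For example, something along these lines would do it (a hypothetical device name and mount point; check dmesg or lsblk to find yours):

sudo mkdir -p /media/BackupVolumeName
# an exFAT drive like mine needs the exfat-fuse and exfat-utils packages installed first
sudo mount /dev/sda1 /media/BackupVolumeName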

Install Samba

sudo apt-get install samba samba-common-bin


Once installed use the editor of your preference to edit the configuration file.

sudo vim /etc/samba/smb.conf


Here is a basic configuration that will allow you to mount and read/write the drive from the Mac:

[BackupShareName]
browsable = yes
read only = no
valid users = pi
path = /media/BackupVolumeName
force user = root
force group = root
create mask = 0660
directory mask = 0771


This uses the already-existing pi user; like I said, I was keeping it basic. It would be sensible to make a new user/password for this purpose.
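
One extra step worth mentioning: depending on your Samba setup you may also need to give that user a Samba password (it is separate from the system password) before the Mac can authenticate. A quick sketch:

sudo smbpasswd -a pi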

Restart/start Samba

sudo /etc/init.d/samba restart


You should be able to mount the drive from the Mac now and it should appear in the sidebar of Finder. You will have to use “Connect As” and change the credentials. If it does not appear in the sidebar you can use the Go menu in Finder and “Connect to Server” with smb://your.pi.ip.address.

On the Mac

The Mac should now have a mounted volume in Finder that you can write to. What we need to do now is create a sparse bundle disk image on the mounted volume. Go into Applications -> Utilities and open Disk Utility. (If you prefer the command line, see the hdiutil sketch after the list below.)

Don’t select any volumes yet

  • Click New image
  • Name the image
  • Set the appropriate size for your backup volume
  • Format Mac OS Extended Journaled (if you formatted your OS as case sensitive you’ll need to use that as well)
  • Encryption: None (unless you want to I suppose)
  • Partitions: Single Partition GUID
  • Image Format: Sparse bundle disk image
  • Make sure you have expanded the advanced caret (disclosure triangle) on the save location and save it to the mounted remote volume
  • Click Create
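
The same sparse bundle can be created from Terminal with hdiutil. A sketch with an assumed size, name and share path, so adjust to taste (and use a case-sensitive filesystem variant if your Mac is case sensitive):

hdiutil create -size 500g -type SPARSEBUNDLE -fs 'HFS+J' -volname 'TimeMachine' /Volumes/BackupShareName/TimeMachine.sparsebundle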

Now you have a sparse bundle on your mounted volume. Time to make the Time Machine magic happen.

First open the sparse bundle and it will mount in the sidebar of Finder. What this also does is put it into /Volumes on the filesystem, which is the key to pulling this all together. Essentially, anything you can get to mount can be used for Time Machine if it is in the proper Mac filesystem format.

Second, use the tmutil command from the command line to set the backup destination:

sudo tmutil setdestination /Volumes/MountedSparseBundleName


Open Time Machine from System Preferences and you should see that it now shows a backup volume as active.
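
You can also confirm it from Terminal; tmutil’s destinationinfo verb lists the configured backup destination:

tmutil destinationinfo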

If you do have a case-sensitive system and you didn’t make a case-sensitive bundle you will get an error. Go back and re-create the bundle to match your computer’s filesystem.

Now, when you wish to do a backup, all you need to do is mount the remote volume from your Pi and the sparse bundle on it, then let it run.


by KingPhil at November 22, 2014 02:08 PM

November 19, 2014

Alan Pope (popey)

Scopes Contest Mid-way Roundup

I recently blogged about my Ubuntu Scopes Contest Wishlist after we kicked off the Scopes Development Competition where Ubuntu Phone Scope developers can be in with a chance of winning cool devices and swag. See the above links for more details.

As a judge on that contest I’ve been keeping an eye out for interesting scopes that are under development for the competition. As we’re at the halfway point in the contest I thought I’d mention a few. Of course, me mentioning them here doesn’t mean they’re favourites or winners; I’m just raising awareness of the competition and hopefully helping to inspire more people to get involved.

Developers have until 3rd December to complete their entry to be in with a chance of winning a laptop, tablet and other cool stuff. We’ll accept new scopes in the Ubuntu Click Store at any time though :)

Robert Schroll is working on a GMail scope giving fast access to email.


Bogdan Cuza is developing a Mixcloud scope making it easy to search for cool songs and remixes.


Sam Segers has a Google Places scope making it easy to find local businesses.


Michael Weimann has been working on a Nearby Scope and has been blogging about his progress.


Dan has also been blogging about the Cinema Scope.


Finally Riccardo Padovani has been posting screenshots of his Duck Duck Go Scope which is already in the click store.


I’m sure there are other scopes I’ve missed. Feel free to link to them in the comments. It’s incredibly exciting for me to see early adopter developers embracing our fast-moving platform to realise their ideas.

Good luck to everyone entering the contest.

by popey at November 19, 2014 12:03 PM

November 18, 2014

Andy Smith (grifferz)

Paranoid, Init

Having marvelled at the er… unique nature of MikeeUSA’s Systemd Blues: Took our thing (Wooo) blues homage to the perils of using systemd, I decided what the world actually needs is something from the metal genre.

So, here’s the lyrics to Paranoid, Init.

Default soon on Debian
This doesn’t help me with my mind
People think I’m insane
Because I am trolling all the time

All day long I fight Red Hat
And uphold UNIX philosophy
Think I’ll lose my mind
If I can’t use sysvinit on jessie

Can you help me
Terrorise pid 1?
Oh yeah!

Tried to show the committee
That things were wrong with this design
They can’t see Poettering’s plan in this
They must be blind

Some sick joke I could just cry
GNOME needs logind API
QR codes gave me a feel
Then binary logs just broke the deal

And so as you hear these words
Telling you now of my state
Can’t log off and enjoy life
I’ve another sock puppet to create

by Andy at November 18, 2014 11:00 AM

November 03, 2014

Alan Pope (popey)

Ubuntu Scopes Contest Wishlist

We’re running a Scope Development Competition with prizes including a laptop, tablets, and a bunch of cool Ubuntu swag. Check the above link for details.

I’m one of the judges, so I’m not allowed to enter, which is both good and bad news. Good because then you won’t see my terrible coding quality, but bad because I would really love one of these sweet Dell XPS laptops! :)

I do have things I’d like to see made as scopes, and some ideas for making ones that I might do in the future when time permits, and I thought I’d share them. As a judge I’m not saying “make this scope and I’ll vote for your entry” of course, I simply figured I can give people some ideas, if they’re stuck. We do have a set of criteria (see link above) for rating the scopes that are submitted, and those will be used for judging. None of those criteria are “whether it was on the list on popey’s blog”. These are just ideas to get people thinking about what might be possible / useful with a scope.

Surfacing my data

One of the goals of scopes is to enable users to easily and quickly get access to their data. That could be local data on the device or remote data in a silo somewhere online. Typically on other platforms you’d need a dedicated app to get at that data. To access your Spotify playlist you need the Spotify app, to access your LinkedIn data you need the LinkedIn app and so on. Many of the sites and services where my data is held are accessible via an API of some kind. I’d love to see scopes created to surface that data directly to my face when I want it.

Manage Spotify Playlist

I use and love Spotify. One problem I have is that I don’t often add new music to my playlists. I don’t use or value the search function in the app, or the social connected features (I don’t have my Spotify hooked up to Facebook, and don’t have any friends on Facebook anyway). I tend to add new music when I’m having a real life verbal conversation with people, or when listening to the radio.

So what I would like is some quick and easy way to add tracks to my playlist, which I can subsequently play later when I’m not in the pub / driving / listening to the radio during breakfast. This could possibly sign in to Spotify using my credentials, allow me to search for tracks and then use the API to add tracks to a playlist.

Amazon Wishlist

My family tell me I’m really hard to buy presents for, especially at this time of year. I disagree as I have an Amazon wishlist containing over a hundred items at all price points :) When I visit family they may ask what’s on my wishlist to find out what I’m most interested in.

I’d like to be able to pull out my phone, and with a couple of swipes show them my wishlist. It would also be useful if it had the ability to ‘share’ the wishlist URL over some method (email is one, SMS might be another) so they get their own copy to peruse later.

I’d also like to be able to add things to the wishlist easily. Often when I’m out I think “That’s cool, would love one of those” and that could be achieved with a simple search function, then add to my wishlist.

Location Specific

Satellites Overhead

I (and my kids) like to watch the International Space Station go over. Perhaps I enjoy it more than the kids who are made to stand outside in the cold, but whatever. When I travel it would be nice to have a scope which I can turn to during twilight hours to see when the ISS (or indeed other satellites) are passing overhead. This information appears to be publicly available via well documented APIs.

Upcoming TV Programmes

I frequently forget that my favourite TV programmes are on, or available to stream. It would be awesome to pull together data from somewhere like Trakt and show me which of my most loved programmes are going to be broadcast soon, on what local TV channel.

Events Nearby

When I travel I like to know if there’s any music, social or tech events on locally that I might be interested in going to. There’s quite a few sites where people post their events including Songkick, Meetup and Eventbrite (among many others I’m sure) which have a local event look-up API. One of the cool things about scopes is you can aggregate content from multiple scopes together. So there could be a scope for each of the above mentioned sites, plus a general “Local Events” scope which pulls data from all of those together. Going to one scope and refreshing when I arrive in a new location would be a great quick way to find out what’s on locally.

Some of the above may be impractical or not possible due to API limitations or other technical issues; they’re just some ideas I had when thinking about what I would like to see on my phone. I’m sure others can come up with great ideas too! Let your imagination run wild! :)

Good luck to all those entering the contest!

by popey at November 03, 2014 09:44 AM

November 02, 2014

BitFolk Issue Tracker

BitFolk - Feature #40: Show historical transfer data

Total amounts transferred were added to Cacti so that no multiplication of data rates is required.

If you look at https://tools.bitfolk.com/cacti/graph.php?action=view&rra_id=all&local_graph_id=2673 you'll see that you transferred 70 / 64 GB in the last month. You can click the magnifying glass at top right to specify any time period you like.

by admin at November 02, 2014 05:14 PM

BitFolk - Feature #40: Show historical transfer data

Even if it's just a quick reference of "this is how much you used last month" to see if the current month is on track or not.

by jane at November 02, 2014 01:17 PM

October 31, 2014

Phil Spencer (CrazySpence)

Greater Niagara Model Railroad Engineers open house!

The GNMRE is hosting open houses on 2 weekends in November. The first is on November 8th and 9th and the second on the 22nd and 23rd. A year ago I first visited the GNMRE at an open house, so it is a great time to come see the layout and maybe even stick around afterwards and join!

Below is the flyer with the dates, times and directions; hope to see you out there!


(flyer image)


by KingPhil at October 31, 2014 06:24 PM

October 25, 2014

BitFolk Issue Tracker

Auth. DNS - Feature #59: DNSSEC should be implemented for bitfolk.com

Any progress on getting the bitfolk.com zone signed?

From what I have understood the DNSSEC support in PowerDNS has improved a lot the last couple of years.

by halleck at October 25, 2014 09:11 AM

October 24, 2014

Alan Pope (popey)

Sprinting in DC

For the last week I’ve been working with 230 other Ubuntu people in Washington, DC. We have sprints like this pretty frequently now and they are a great way to collaborate and Get Things Done™ at high velocity.

This is the second sprint where we’ve invited some of the developers who are blazing a trail with our Core Apps project. Not everyone could make it to the sprint, and those who didn’t were certainly missed. These are people who give their own time to work on some of the featured and default apps on the Ubuntu Phone, and perhaps in the future on the converged desktop.

It’s been a busy week with discussion & planning punctuating intense hacking sessions. Once again I’m proud of the patience, professionalism and hard work done by these guys working on bringing up our core apps project on a phone that hasn’t even shipped a single device yet!

We’ve spent much of the week discussing and resolving design issues, fixing performance bugs, crashers and platform integration issues, as well as the odd game of ‘Cards Against Humanity’ & ‘We Didn’t Playtest This At All’ in the bar afterwards.

Having 10 community developers in the same place as 200+ Canonical people accelerates things tremendously. Being able to go and sit with the SDK team allowed Robert Schroll to express his issues with the tools when developing Beru, the ebook reader. When Filippo Scognamiglio needed help with mouse and touch input, we could grab Florian Boucault and Daniel d’Andrada to provide tips. Having Renato Filho nearby to fix problems in Evolution Data Server allowed Kunal Parmar and Mihir Soni to resolve calendar issues. The list goes on.

All week we’ve been collaborating towards a common goal of high quality, beautiful, performant and stable applications for the phone today, and desktop of the future. It’s been an incredibly fun and productive week, and I’m a little sad to be heading home today. But I’m happy that we’ve had this time together to improve the free software we all care deeply about.

The relationships built up during these sprints will of course endure. We all exchange email addresses and IRC nicknames, so we can continue the conversation once the sprint is over. Development and meetings will continue beyond the sprint, in the virtual world of IRC, hangouts and mailing lists.

by popey at October 24, 2014 05:17 PM

October 16, 2014

Phil Spencer (CrazySpence)

SD40-2 Chessie System restoration

I picked up a new engine a few weeks ago, and new is a relative term, as in it means I just recently came to own it. The engine had some rough points to it: the back truck cover was missing, the railings were broken, missing or bent, and there was no horn and no couplers, but the body itself was in good shape.


The engine is an old blue box era Athearn, so it had metal railings, and when you first get the engine you have to put a lot of the parts on yourself. Personally I liked this era as you could get decent engines at fairly reasonable prices. Nowadays that isn’t the case: everything is RTR and costs twice as much. I picked this up for 30 bucks, which really was a bit much given its state, but to me the shell alone was worth 20-25.

I made a stop at Just Train Crazy on Friday and lo and behold they had an undecorated Athearn blue box with the parts unassembled, a powered unit plus a dummy, for another 30. I picked it up and took it home to see what could be applied to the Chessie.


First I put the new trucks on. A train isn’t complete without trucks. This wasn’t hard; they mainly snap back into place. They sat off a wee bit so I used some plastic adhesive to hold them in place.

Next I added a Kadee coupler box. This unfortunately sat too low and there didn’t appear to be another Athearn style in the box. The dummy SD40-2 sacrificed its coupler box.

After some running tests of the undecorated vs the Chessie it was obvious the undecorated had never been run before and was in perfect condition so the shell was transplanted to the new/unused motor and trucks.

I used some adhesive to get the railings to stay in their upright position; many of the rails had the hook part broken at some point. The glue did a good enough job of holding them in place. Some of the broken clips now stuck up over the railings and I eventually clipped these down to match where the railing wire sat.

Finally I glued a new horn in place and Voila! A restored Chessie SD40-2 with just enough natural aging on the colouring to give it character!



by KingPhil at October 16, 2014 08:48 PM

October 12, 2014

Andy Smith (grifferz)

Currently not possible

On Thursday 9th, after weeks of low-level frustration at having to press “close” on every login, I sent a complaint to Barclays asking them to stop asking me on every single login to switch to paperless statements with a dialog box that has only two options:

Switch to paperless statements

This morning they replied:

Please be advised that it is currently not possible for us to remove the switch to paperless statements advert.

So, uh, I suppose if you’re a web developer who thinks that it’s acceptable to ask a question on every login and not supply any means for the user to say, “stop asking me this question”, there is still a job for you in the banking industry. No one there will at any point tell you that this is awful user experience. They will probably just tell you, “good job”, from their jacuzzi full of cash that they got from charging people £5.80 a month to have a bank account, of which £0.30 is for posting a bank statement.

Meanwhile, on another part of their site, I attempt to tell them to send me letters by email not post, but the web site does not allow me to because it thinks I do not have an email address set. Even though the same screen shows my set email address which has been set for years.

Go home Barclays, you're drunk

After light mocking on Twitter they asked me to try using a different browser, before completely misunderstanding what I was talking about, at which point I gave up.

by Andy at October 12, 2014 10:18 AM

October 10, 2014

Andy Smith (grifferz)

Diversity at OggCamp comment

There’s an interesting post about diversity at a tech conference. It is itself a response to a number of tweets by an attendee, so you should read both those things, and probably all of the other comments first.

I’ve now tried twice to add a comment to this article, but each time my comment disappears into the ether. Mark tells me that he is not seeing the comments, i.e. they are not being held for moderation, so I just assume some bit of tech somewhere is failing. Yes, I do get the captcha challenge thing and do complete it successfully. Blog comment systems are awful, aren’t they?

So anyway, here’s the most recent version of the comment I tried to add:

I originally wrote this comment on the evening of the 6th, but the blog appears to have eaten it, and I no longer have a copy of it so I’ll have to try to re-type it from memory. Also since then I note a number of other comments which are highly opposed to what I wrote, so you’ll have to take my word for it that this is genuine comment and not an attempt to cause strife.

I do not believe that OggCamp specifically has a problem and I agree with much of what Mark has written, particularly that the unconference format is not in fact used to excuse lack of diversity (though it can be, and doubtless will be, by someone). I do believe that OggCamp has tried quite hard to be welcoming to all, and in many ways has succeeded. There seems to be a slightly larger percentage of female attendees at OggCamp compared to other tech conferences I have been to. I feel strongly that there is a larger percentage of female speakers at OggCamp.

I do however believe the widespread observation that tech conferences and tech in general do have a problem with attracting people who aren’t white males. I do believe that any group organising a conference are obligated to try to fix this, which means that the organisers of OggCamp are.

Stating that there is no such problem and that everyone is welcome is not going to fix it. Clearly there is a problem here, there’s people reporting that there’s a problem and they don’t think you’re doing all that you could do to be welcoming. There’s a word for telling people who say they’re subject to an unwelcoming environment that they in fact are wrong about how they feel, and I’d really like for this not to go there.

However I do not think that many of the things that Mark has proposed will actually make any difference, as well-intentioned as they are. To help improve matters I think that OggCamp should do some things that Mark (and many others in these comments, apparently) will not like.

I am in favour of positive reinforcement / affirmative action / speaker quotas / whatever you want to refer to it as, as part of a diversity statement. Like, aspirational. To be regarded as a sort of “could do better” if it wasn’t achieved. I believe it has been shown to be effective.

My first suggestion is to have some sort of diversity goal, perhaps one like, “ideally at least one largest-stage slot per day will be taken by a person who is not a white male”. If we assume one largest stage, two slots each on morning and afternoon, that’s four per day so that’s aiming for 25% main stage representation of speakers who aren’t white males. I believe the gender split alone (before we consider race or other marginalised attributes) in the tech industry is something like 80/20 so this doesn’t sound outrageous.

My second suggestion—and I feel this is possibly more important than the first—is to get more diversity in the group of people selecting the invited speakers. I think a bunch of white males (like myself) sitting about pontificating about diversity isn’t very much better than not doing anything at all. Put those decisions into the hands of the demographic we are trying to encourage.

So, I suggest asking zenaynay to speak at the next OggCamp, and I suggest asking zenaynay if they know any other people who aren’t white males who would like to speak at a future OggCamp.

I do not think that merely marketing OggCamp in more places will fix much. People that aren’t white males tend to be put off from speaking at events like OggCamp and the only way to change their minds is to directly contact them. More diverse speakers will lead to more diverse attendees.

In the same vein, there’s the code of conduct issue. We tend to believe that we are all really nice guys doing the best we can; we would never offend or upset anyone, we would never exclude anyone. The thing is, people who aren’t like us have a very different experience of the world. So just saying that we’re not like that isn’t really enough. Codes of conduct for conferences are a good idea for this reason. Many people who are not white males will not attend a conference that doesn’t have one, because they feel like there is no commitment there and they’re not welcome (or in many cases, safe).

Ashe Dryden compiled a useful page of tips for increasing diversity at tech conferences. If there is genuine desire to do this then I think you have to come up with a great counter-argument as to why it isn’t worth trying the things that Ashe Dryden has said have worked for others. Codes of conduct and diversity goals are in there. As is personally inviting speakers.

“We don’t have time to run a full CFP process” seems like one of the stronger counter-arguments to all of this, to which I think there are two answers:

  1. Don’t bother then; nothing changes.
  2. Try to find volunteers to do it for you; something may change.

Shanley Kane wrote a great collection of essays called Your Startup is Broken. Of course this is about startups (and with a US-centric slant, too) not conferences, but it is a great read nonetheless and touches upon all the sorts of issues that are relevant here. I really recommend it. It’s only $10.

Finally, I feel that many of the commentors are being a little too defensive. Try to take it as an indictment of the tech sector, not an indictment of OggCamp, and try to use it as feedback to improve things.

by Andy at October 10, 2014 09:57 PM

October 07, 2014

Phil Spencer (CrazySpence)

Pi shutdown button: Details


It’s rare to get someone who actually goes through the comment process on my blog, but since they took the time to not only read my post but also to provide feedback, I try to oblige. In this case Matt would like some of the finer details of the shutdown button I put on my media Raspberry Pi, so here we go!

The button

The button itself is a flat Japanese-style arcade button in translucent red. It was originally used in my first arcade project and acquired from Adafruit. The feature of this particular style of button is that it is dead simple for beginners, which at the time I was exactly that. There are 2 small prongs on the button, and the basic gist of getting this whole thing to work is that one prong goes to a live GPIO pin and the other goes to a ground pin.

The cable

Now, Adafruit, being as thorough as they are on their tutorials, also provide a simple all-in-one cable with the appropriate female connectors on one end and, on the other end, a double jumper so you can place it over a ground pin and a GPIO pin in one step.


GPIO

As I mentioned in another post shortly after my shutdown button post, if you connect a button to pin 5 (better known as GPIO 3 on the GPIO diagram), a Pi in standby mode will wake up/boot when it is pressed, so if you want a dual-function shutdown and power-on button you should connect your button to GPIO 3. For the purposes of the equipment I use this also has the extreme convenience of being right across from a ground pin.

GPIO Rev2

Retrogame

Once again, hats off to Adafruit for creating this little gem; I have since turned it into my Swiss army knife of GPIO button interaction. My version can be found here on my GitHub account. If you just want duplicate functionality then retrogame-xbmc.c is all you need; it is already set up in that source file and you just need to compile it to a binary and go. However, details being what they are…

In the definition of io[] (line 89) in the C file I made sure the last entry in the table pointed to GPIO 3. Technically the table could be just that one line, as I don’t actually use any of the other pins on this particular machine. The key it presses doesn’t matter, but it is a good idea to make sure it isn’t a key that whatever application your Pi is running acts on.

{ 3, KEY_LEFTALT }

On line 131 I changed the mask to use the same value twice; this allows it to work as 1 button instead of the 2 the original developer had set it up for.

const unsigned long vulcanMask = (1L << 5) | (1L << 5);

On line 394 I changed the #if condition to always be true; this was blocked off by the original developer as they moved away from this style of shutdown method. I suppose I could just take the block out, but I was just looking for functionality, not cleanliness. I don’t believe this code even exists anymore in the main retrogame tree, but with me it will live forever!

		} else if(timeout != -1) { // Vulcan timeout occurred
#if 1 // Old behavior did a shutdown
			(void)system("shutdown -h now");


However if you don’t care about any of that an easy way to get started is:

git clone https://github.com/CrazySpence/Adafruit-Retrogame.git
cd Adafruit-Retrogame
cp retrogame-xbmc.c retrogame.c
make

It’ll complain about gamera being missing or something, but don’t worry about that; retrogame will be compiled. If it complains about anything else, make sure you have GCC, make and the uinput kernel module installed on your OS.
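
On Raspbian that usually amounts to something like the following (a rough sketch; package names may differ on other distributions):

sudo apt-get install build-essential
# load the uinput module now, and have it load automatically at boot
sudo modprobe uinput
echo uinput | sudo tee -a /etc/modules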

Run on boot

Make sure to add “/pathto/yourbinary/retrogame &” to the /etc/rc.local file and then it will start on boot and stay in the background.
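
So the tail end of /etc/rc.local would look something like this (a sketch; the path is whatever you used above, and the script needs to keep its final exit 0):

# start the GPIO shutdown button listener in the background
/pathto/yourbinary/retrogame &

exit 0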


by KingPhil at October 07, 2014 07:45 PM

October 06, 2014

Adam Sweet

Diet and Exercise for Geeks

I gave a talk on diet and exercise for geeks this weekend at Oggcamp 2014. As promised, my slides are here.

by Adam at October 06, 2014 11:54 AM

October 01, 2014

Alan Pope (popey)

XDA Developer Conference 2014

The XDA Developer community had its second conference last weekend, this time in Manchester, UK. We were asked to sponsor the event and were happy to do so. I went along with Daniel Holbach from the Community Team and Ondrej Kubik from the Phone Delivery Team at Canonical.

This was my first non-Ubuntu conference for a while, so it was interesting for me to meet people from so many different projects. As well as us representing Ubuntu Phone, there were guys from the Jolla project showing off SailfishOS and their handset and ports. Asa Dotzler was also there to represent Mozilla & FirefoxOS.

Daniel did a small Ubuntu app development workshop which enabled us to learn a lot from our materials and process around App Dev Schools, which we’ll feed back into later sessions. Ondrej gave a talk to a packed room about hardware bring-up and porting Ubuntu to other devices. It was well received and explained the platform nicely. I talked about the history of Ubuntu phone and what the future might hold.

There were other sponsor booths including big names like nVidia showing off the Shield tablet and Sony demonstrating their rather bizarre Smart EyeGlass technology. Oppo and OnePlus had plenty of devices to lust after too, including giant phones with beautiful displays. I enjoyed a bunch of the talks, including MediaTek making a big announcement and demonstrating their new LinkIt ONE platform.

The ~200 attendees were mostly pretty geeky guys whose ages ranged from 15 to 50. There were Android developers, ROM maintainers, hardware hackers and tech enthusiasts who all seemed very friendly and open to discuss all kinds of tech subjects at every opportunity.

One thing I’d not seen at other conferences which was big at XDA:DevCon was the hardware give-aways. The organisers had obtained a lot of tech from the sponsors to give away. This ranged from phone covers through bluetooth speakers, mobile printers and hardware hacking kits to phones, smart watches & tablets, including an Oppo Find 7, a Pebble watch and an nVidia Shield & controller. These were often handed out as a ‘reward’ for attendees asking good questions, or as (free) raffle prizes. It certainly kept everyone on their toes and happy! I was delighted to see an Ubuntu community member get the Oppo Find 7 :) I was rewarded with an Anker MP141 Portable Bluetooth Speaker during one talk for some reason :)

On the whole I found the conference to be an incredibly friendly, well organised event. There was plenty of food and drink at break times and coffee and snacks in between with relaxing beers in the evening. A great conference which I’d certainly go to again.

by popey at October 01, 2014 10:09 AM

September 05, 2014

Phil Spencer (CrazySpence)

Arcade stand up setup @ the trailer

As anyone who has read my blog must have noticed by now I post a lot of Raspberry Pi articles and a lot of Arcade emulation articles. My current arcade console has provided me with hours of education and entertainment in both areas. I usually play sitting down in front of the TV at home or at my trailer but at the trailer I have the issue where the TV is mounted up high. At the wrong angles it can be difficult to see what is going on.

(Photos: trailer TV, arcade at the trailer, Ms Pacman on the Pi, arcade on the desk)

So last weekend I was playing and I had my chair out as I usually do and somehow I suddenly got this idea of putting the Arcade controller where the DVD’s are and pushing back the TV.

I moved the Playstation 2, DVD’s and games out of the way and sure enough the controller fit easily in the spot. I hooked up the HDMI and power and gave it a whirl.

Perfect

Now while camping I have a poor man’s stand-up cabinet!


by KingPhil at September 05, 2014 09:22 PM

September 04, 2014

Taras Young (taras)

August 14, 2014

Phil Spencer (CrazySpence)

Model railroading updates

I haven’t posted a train update on the blog in a bit, so I figured I would give a quick summary.

N Scale

I sort of stalled here for a bit, but I have returned to it. I finished laying roadbed on the board and I have the track held in place with nails currently, until I glue it in place and ballast it.

I took a video of a test run here

The equipment is old and only has one set of powered wheels so it stalls on the turnouts sometimes.

HO Scale

Still no home layout, and it will probably continue that way as I have the club in Fenwick to get my train fix every week. I took a pair of new videos, one featuring CP Rail, and of course it wouldn’t be me if I didn’t also do a Chessie System one!

CP Rail video here

Chessie video here

I have also taken a lot of photos of the trains on the layout which I will show off at the end of this post. The best way to follow my train escapades now is probably YouTube or Twitter so follow me there!


by KingPhil at August 14, 2014 09:17 PM

August 08, 2014

Andy Smith (grifferz)

What’s my btrfs doing? And how do I recover from it?

I’ve been experimenting with btrfs on my home file server for a while. Yes, I know it’s not particularly bleeding edge or anything any more but I’m quite conservative even for just my household’s stuff as restoring from backup would be quite tedious.

Briefly, the btrfs volume was initially across four 2TB disks in RAID10 for data and metadata. At a later date I also added a 500G disk but had never rebalanced so that had no data on it.

$ sudo btrfs filesystem show
Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
        Total devices 5 FS bytes used 1.08TB
        devid    1 size 1.82TB used 555.03GB path /dev/sdh
        devid    3 size 1.82TB used 555.03GB path /dev/sdi
        devid    4 size 1.82TB used 555.03GB path /dev/sdj
        devid    5 size 465.76GB used 0.00 path /dev/sdk
        devid    2 size 1.82TB used 555.03GB path /dev/sdg
 
Btrfs v0.20-rc1-358-g194aa4a
$ sudo btrfs filesystem df /srv/tank
Data, RAID10: total=1.08TB, used=1.08TB
System, RAID10: total=64.00MB, used=128.00KB
System: total=4.00MB, used=0.00
Metadata, RAID10: total=2.52GB, used=1.34GB

Yesterday, one of the disks started misbehaving:

Aug  7 12:17:32 specialbrew kernel: [5392685.363089] ata5.00: failed to read SCR 1 (Emask=0x40)
Aug  7 12:17:32 specialbrew kernel: [5392685.369272] ata5.01: failed to read SCR 1 (Emask=0x40)
Aug  7 12:17:32 specialbrew kernel: [5392685.375651] ata5.02: failed to read SCR 1 (Emask=0x40)
Aug  7 12:17:32 specialbrew kernel: [5392685.381796] ata5.03: failed to read SCR 1 (Emask=0x40)
Aug  7 12:17:32 specialbrew kernel: [5392685.388082] ata5.04: failed to read SCR 1 (Emask=0x40)
Aug  7 12:17:32 specialbrew kernel: [5392685.394213] ata5.05: failed to read SCR 1 (Emask=0x40)
Aug  7 12:17:32 specialbrew kernel: [5392685.400213] ata5.15: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Aug  7 12:17:32 specialbrew kernel: [5392685.406556] ata5.15: irq_stat 0x00060002, PMP DMA CS errata
Aug  7 12:17:32 specialbrew kernel: [5392685.412787] ata5.00: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Aug  7 12:17:32 specialbrew kernel: [5392685.419286] ata5.00: failed command: WRITE DMA
Aug  7 12:17:32 specialbrew kernel: [5392685.425504] ata5.00: cmd ca/00:08:56:06:a1/00:00:00:00:00/e0 tag 1 dma 4096 out
Aug  7 12:17:32 specialbrew kernel: [5392685.425504]          res 9a/d7:00:00:00:00/00:00:00:10:9a/00 Emask 0x2 (HSM violation)
Aug  7 12:17:32 specialbrew kernel: [5392685.438350] ata5.00: status: { Busy }
Aug  7 12:17:32 specialbrew kernel: [5392685.444592] ata5.00: error: { ICRC UNC IDNF ABRT }
Aug  7 12:17:32 specialbrew kernel: [5392685.451016] ata5.01: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Aug  7 12:17:32 specialbrew kernel: [5392685.457334] ata5.01: failed command: WRITE DMA
Aug  7 12:17:32 specialbrew kernel: [5392685.463784] ata5.01: cmd ca/00:18:de:67:9c/00:00:00:00:00/e0 tag 0 dma 12288 out
Aug  7 12:17:32 specialbrew kernel: [5392685.463784]          res 9a/d7:00:00:00:00/00:00:00:00:9a/00 Emask 0x2 (HSM violation)
.
.
(lots more of that)
.
.
Aug  7 12:17:53 specialbrew kernel: [5392706.325072] btrfs: bdev /dev/sdh errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
Aug  7 12:17:53 specialbrew kernel: [5392706.325228] btrfs: bdev /dev/sdh errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
Aug  7 12:17:53 specialbrew kernel: [5392706.339976] sd 4:3:0:0: [sdh] Stopping disk
Aug  7 12:17:53 specialbrew kernel: [5392706.346436] sd 4:3:0:0: [sdh] START_STOP FAILED
Aug  7 12:17:53 specialbrew kernel: [5392706.352944] sd 4:3:0:0: [sdh]  
Aug  7 12:17:53 specialbrew kernel: [5392706.356489] end_request: I/O error, dev sdh, sector 0
Aug  7 12:17:53 specialbrew kernel: [5392706.365413] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Aug  7 12:17:53 specialbrew kernel: [5392706.475838] lost page write due to I/O error on /dev/sdh
Aug  7 12:17:53 specialbrew kernel: [5392706.482266] lost page write due to I/O error on /dev/sdh
Aug  7 12:17:53 specialbrew kernel: [5392706.488496] lost page write due to I/O error on /dev/sdh

After that point, /dev/sdh no longer existed on the system.

Okay, so then I told btrfs to forget about that device:

$ sudo btrfs device delete missing /srv/tank
$ sudo btrfs filesystem show
Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
        Total devices 5 FS bytes used 1.08TB
        devid    3 size 1.82TB used 555.03GB path /dev/sdi
        devid    4 size 1.82TB used 555.03GB path /dev/sdj
        devid    5 size 465.76GB used 0.00 path /dev/sdk
        devid    2 size 1.82TB used 555.03GB path /dev/sdg
        *** Some devices missing
 
Btrfs v0.20-rc1-358-g194aa4a

Apart from the obvious fact that a device was then missing, things seemed happier at this point. I decided to pull the disk and re-insert it to see if it still gave errors (it’s in a hot swap chassis). After plugging the disk back in it pops up as /dev/sdl and rejoins the volume:

$ sudo btrfs filesystem show
Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
        Total devices 5 FS bytes used 1.08TB
        devid    1 size 1.82TB used 555.04GB path /dev/sdl
        devid    3 size 1.82TB used 555.03GB path /dev/sdi
        devid    4 size 1.82TB used 555.03GB path /dev/sdj
        devid    5 size 465.76GB used 0.00 path /dev/sdk
        devid    2 size 1.82TB used 555.03GB path /dev/sdg
 
Btrfs v0.20-rc1-358-g194aa4a

…but the disk is still very unhappy:

Aug  7 17:46:46 specialbrew kernel: [5412439.946138] sd 4:3:0:0: [sdl] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Aug  7 17:46:46 specialbrew kernel: [5412439.946142] sd 4:3:0:0: [sdl] 4096-byte physical blocks
Aug  7 17:46:46 specialbrew kernel: [5412439.946247] sd 4:3:0:0: [sdl] Write Protect is off
Aug  7 17:46:46 specialbrew kernel: [5412439.946252] sd 4:3:0:0: [sdl] Mode Sense: 00 3a 00 00
Aug  7 17:46:46 specialbrew kernel: [5412439.946294] sd 4:3:0:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug  7 17:46:46 specialbrew kernel: [5412439.952286]  sdl: unknown partition table
Aug  7 17:46:46 specialbrew kernel: [5412439.990436] sd 4:3:0:0: [sdl] Attached SCSI disk
Aug  7 17:46:47 specialbrew kernel: [5412440.471412] btrfs: device label tank devid 1 transid 504721 /dev/sdl
Aug  7 17:47:17 specialbrew kernel: [5412470.408079] btrfs: bdev /dev/sdl errs: wr 7464, rd 0, flush 332, corrupt 0, gen 0
Aug  7 17:47:17 specialbrew kernel: [5412470.415931] lost page write due to I/O error on /dev/sdl

Okay. So by then I was prepared to accept that this disk was toast and I just wanted it gone. How to achieve this?

Given that data was still being read off this disk okay (confirmed by dd, iostat), I thought maybe the clever thing to do would be to tell btrfs to delete this disk while it was still part of the volume.

According to the documentation this would rebalance data off of the device to the other devices (still plenty of capacity available for two copies of everything even with one disk missing). That way the period of time where there was a risk of double disk failure leading to data loss would be avoided.

$ sudo btrfs device delete /dev/sdl /srv/tank

*twiddle thumbs*

Nope, still going.

Hmm, what is it doing?

$ sudo btrfs filesystem show
Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
        Total devices 5 FS bytes used 1.08TB
        devid    1 size 1.82TB used 555.04GB path /dev/sdl
        devid    3 size 1.82TB used 556.03GB path /dev/sdi
        devid    4 size 1.82TB used 556.03GB path /dev/sdj
        devid    5 size 465.76GB used 26.00GB path /dev/sdk
        devid    2 size 1.82TB used 556.03GB path /dev/sdg

Seems that it’s written 26GB of data to sdk (previously unused), and a little to some of the others. I’ll guess that it’s using sdk to rebalance onto, and doing so at a rate of about 1GB per minute. So in around 555 minutes this should finish and sdl will be removed, and I can eject the disk and later insert a good one?

Well, it’s now quite a few hours later and sdk is now full, but the btrfs device delete still hasn’t finished, and in fact iostat believes that writes are still taking place to all disks in the volume apart from sdl:

$ sudo iostat -x -d 5 sd{g,i,j,k,l}
Linux 3.13-0.bpo.1-amd64 (specialbrew.localnet)         08/08/14        _x86_64_        (2 CPU)
 
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdg               6.50     0.89    2.49    1.60    54.30   136.42    93.31     0.43  105.19   73.77  154.12   1.63   0.67
sdk               0.00     0.79    0.00    0.89     0.02    97.93   218.89     0.08   91.43    5.69   91.79   5.70   0.51
sdj               2.26     1.10    0.79    1.38    65.45   136.39   185.57     0.19   86.94   46.38  110.20   5.17   1.12
sdi               8.27     1.34    3.39    1.21    88.11   136.39    97.55     0.60  130.79   46.89  365.87   2.72   1.25
sdl               0.24     0.00    0.01    0.00     1.00     0.00   255.37     0.00    1.40    1.40    0.00   1.08   0.00
 
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdg               0.00     0.00    0.00   87.20     0.00  4202.40    96.39     0.64    7.39    0.00    7.39   4.43  38.64
sdk               0.00     0.20    0.00  102.40     0.00  3701.60    72.30     2.40   23.38    0.00   23.38   8.63  88.40
sdj               0.00     0.00    0.00   87.20     0.00  4202.40    96.39     0.98   11.28    0.00   11.28   5.20  45.36
sdi               0.00     0.20    0.00  118.00     0.00  4200.80    71.20     1.21   10.24    0.00   10.24   4.45  52.56
sdl               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
$ sudo btrfs filesystem show
Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
        Total devices 5 FS bytes used 1.08TB
        devid    1 size 1.82TB used 555.04GB path /dev/sdl
        devid    3 size 1.82TB used 555.29GB path /dev/sdi
        devid    4 size 1.82TB used 555.29GB path /dev/sdj
        devid    5 size 465.76GB used 465.76GB path /dev/sdk
        devid    2 size 1.82TB used 555.29GB path /dev/sdg
 
Btrfs v0.20-rc1-358-g194aa4a

Worse still, btrfs thinks it’s out of space:

$ touch foo
touch: cannot touch `foo': No space left on device
$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
-               7.8T  2.2T  5.2T  30% /srv/tank/backups

So, that’s a bit alarming.

I don’t think that btrfs device delete is ever going to finish. I think what I probably should have done is just forcibly yanked sdl and then done btrfs device delete missing, and put up with the window of possible double disk failure.

But what’s done is done and now I need to recover from this.

Should I ctrl-c the btrfs device delete? If I do that and the machine is still responsive, should I then yank sdl?

I have one spare disk slot into which I could place the new disk when it arrives, without rebooting or interrupting anything. I assume that will then register as sdm and I could add it to the btrfs volume. Would the rebalancing then start using that and complete, thus allowing me to yank sdl?
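Concretely, I think that plan amounts to nothing more than this (assuming the new disk really does show up as /dev/sdm):

$ sudo btrfs device add /dev/sdm /srv/tank

…and then hoping the in-progress device delete finally has somewhere to move the data to.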

Some input from anyone who’s actually been through this would be appreciated!

Update 2014-08-12

It’s all okay again now. Here’s a quick summary for those who just want to know what I did:

  • Asked for some advice from Hugo, who knows a lot more about btrfs than me!
  • Found I could not ctrl-c the device delete and had to reboot.
  • Discovered I could mount the volume with -oro,degraded,recovery, i.e. read-only. It couldn’t be mounted read-write at this stage.
  • Took a complete local backup of the 1.08TiB of data via the read-only mount onto one of the new 3TB disks that had arrived on the Friday.
  • Made a bug report against the Linux kernel for the fact that mount -odegraded,recovery would go into deadlock.
  • Compiled the latest mainline kernel from source using the instructions in the Debian Linux Kernel Handbook. After booting into it mount -odegraded,recovery worked and I had a read-write volume again.
  • Compiled a new btrfs-tools.
  • Inserted one of the new 3TB disks and did a btrfs replace start /dev/sdj /dev/sdl /srv/tank in order to replace the smallest 500GB device (sdj) with the new 3TB device (sdl).
  • Once that was complete, did btrfs filesystem resize 5:max /srv/tank in order to let btrfs know to use the entirety of the device with id 5 (sdl, the new 3TB disk).
  • Did a btrfs balance start -v -dconvert=raid1,soft -mconvert=raid1,soft /srv/tank to convert everything from RAID-10 to RAID-1 so as to be more flexible in future with different-sized devices.
  • Finally btrfs device delete missing /srv/tank to return the volume to a non-degraded state, leaving:
$ sudo btrfs filesystem show
Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
        Total devices 4 FS bytes used 1.09TiB
        devid    2 size 1.82TiB used 372.03GiB path /dev/sdg
        devid    3 size 1.82TiB used 373.00GiB path /dev/sdh
        devid    4 size 1.82TiB used 372.00GiB path /dev/sdi
        devid    5 size 2.73TiB used 1.09TiB path /dev/sdl
 
Btrfs v3.14.2

A more detailed account of the escapade follows, with some chat logs between Hugo and me thrown in to help people’s web searching.

A plan is hatched

<grifferz> according to iostat it's writing quite a lot to all four
           disks, and doing no reading at all
<grifferz> but it is also constantly saying
<grifferz> Aug  8 06:48:28 specialbrew kernel: [5459343.262187] btrfs:
           bdev /dev/sdl errs: wr 122021062, rd 0, flush 74622, corrupt
           0, gen 0
<darkling> OK, reading further, I don't think you'll be able to ^C the
           dev delete.
<darkling> So at this point, it's probably a forcible reboot (as polite
           as you can make it, but...)
<darkling> Take the dead disk out before the OS has a chance to see it.
<grifferz> if I waited and did nothing at all until the new disk
           arrives, if I insert it and add it to the volume do you think
           it will recover?
<darkling> This is then the point at which you run into the other
           problem, which is that you've got a small disk in there with
           4 devices on a RAID-10.
<grifferz> if adding the new disk would allow the dev delete to
           complete, presumably I could then do another dev delete for
           the 500G disk
<darkling> No, dev delete is going to fall over on the corrupt sections
           of the device.
<darkling> I wouldn't recommend using it in this case (unless it's dev
           delete missing)
<grifferz> so you would suggest to reboot, yank sdl, hopefully get up
           and running with a missing device, do dev delete missing,
           insert replacement disk, rebalance?
<darkling> It's kind of a known problem. We probably need a "device
           shoot-in-the-head" for cases where the data can't be
           recovered from a device.
<darkling> Yes.
<darkling> With the small device in the array, it might pay to do the
           dev delete missing *after* adding the new disk.
<grifferz> what problems is the 500G disk going to cause me?
<grifferz> apart from this one that I am having now I suppose :)
<darkling> Well, RAID-10 requires four devices, and will write to all
           four equally.
<darkling> So the array fills up when the smallest device is full.
<darkling> (If you have 4 devices)
<darkling> Have a play with http://carfax.org.uk/btrfs-usage/ to see
           the effects.
<grifferz> is that why it now thinks it is full because I had four 2T
           disks and a 500G one and I tried to delete one of the 2T
           ones?
<darkling> Yes.
<grifferz> ah, it's a shame it couldn't warn me of that, and also a
           shame that if I added a new 2T one (which I can probably do
           today) it won't fix itself
<darkling> I generally recommend using RAID-1 rather than RAID-10 if you
           have unequal-sized disks. It behaves rather better for space
           usage.
<grifferz> I bet I can't convert RAID-10 to RAID-1 can I? :)
<darkling> Of course you can. :)
<darkling> btrfs balance start -dconvert=raid1,soft
           -mconvert=raid1,soft /
<grifferz> oh, that's handy. I saw balance had dconvert and mconvert to
           raid1 but I thought that would only be from no redundancy
<darkling> No, it's free conversion between any RAID level.
<grifferz> nice
<grifferz> well, thanks for that, at least I have some sort of plan now.
           I may be in touch again if reboot ends up with a volume that
           won't mount! :)

Disaster!

In which it doesn’t mount, and then it only mounts read-only.

fuuuuuuuuuuuuuuuuuuuuuu

<grifferz> oh dear, I hit a problem! after boot it won't mount:
<grifferz> # mount /srv/tank
<grifferz> Aug  8 19:05:37 specialbrew kernel: [  426.358894] BTRFS:
           device label tank devid 5 transid 798058 /dev/sdj
<grifferz> Aug  8 19:05:37 specialbrew kernel: [  426.372031] BTRFS
           info (device sdj): disk space caching is enabled
<grifferz> Aug  8 19:05:37 specialbrew kernel: [  426.379825] BTRFS:
           failed to read the system array on sdj
<grifferz> Aug  8 19:05:37 specialbrew kernel: [  426.403095] BTRFS:
           open_ctree failed
<grifferz> mount: wrong fs type, bad option, bad superblock on
           /dev/sdj,
<grifferz> googling around but it seems like quite a generic message
<darkling> Was sdj the device that failed earlier?
<grifferz> no it was sdl (which used to be sdh)
<darkling> OK.
<grifferz> # btrfs fi sh
<grifferz> Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
<grifferz>  Total devices 5 FS bytes used 1.08TB
<grifferz>  devid    5 size 465.76GB used 465.76GB path /dev/sdj
<grifferz>  devid    3 size 1.82TB used 555.29GB path /dev/sdh
<grifferz>  devid    4 size 1.82TB used 555.29GB path /dev/sdi
<grifferz>  devid    2 size 1.82TB used 555.29GB path /dev/sdg
<grifferz>  *** Some devices missing
<grifferz> (now)
<grifferz> perhaps ask it to do it via one of the other disks as sdj
           is now the small one?
<darkling> Yeah.
<darkling> Just what I was going to suggest. :)
<grifferz> even when specifying another disk it still says "failed to
           read the system array on sdj"
<darkling> But, with that error, it's not looking very happy. :(
<darkling> What kernel was this on?
<grifferz> it was on 3.13-0 from debian wheezy backports but since I
           rebooted it booted into 3.14-0.bpo.2-amd64
<grifferz> I can try going back to 3.13-0
<darkling> 3.14's probably better to stay with.
<darkling> Just checking it wasn't something antique.
<grifferz> I could also plug that failing disk back in and remove sdj.
           it probably still has enough life to be read from
<darkling> Well, first, what does btrfs check say about the FS?
<darkling> Also try each drive, with -s1 or -s2
<grifferz> check running on sdj, hasn't immediately aborted…
<darkling> Ooh, OK, that's good.
# btrfs check /dev/sdj
Aug  8 19:13:15 specialbrew kernel: [  884.840987] BTRFS: device label tank devid 2 transid 798058 /dev/sdg
Aug  8 19:13:15 specialbrew kernel: [  885.058896] BTRFS: device label tank devid 4 transid 798058 /dev/sdi
Aug  8 19:13:15 specialbrew kernel: [  885.091042] BTRFS: device label tank devid 3 transid 798058 /dev/sdh
Aug  8 19:13:15 specialbrew kernel: [  885.097790] BTRFS: device label tank devid 5 transid 798058 /dev/sdj
Aug  8 19:13:15 specialbrew kernel: [  885.129491] BTRFS: device label tank devid 2 transid 798058 /dev/sdg
Aug  8 19:13:15 specialbrew kernel: [  885.137456] BTRFS: device label tank devid 4 transid 798058 /dev/sdi
Aug  8 19:13:15 specialbrew kernel: [  885.145731] BTRFS: device label tank devid 3 transid 798058 /dev/sdh
Aug  8 19:13:16 specialbrew kernel: [  885.151907] BTRFS: device label tank devid 5 transid 798058 /dev/sdj
warning, device 1 is missing
warning, device 1 is missing
warning devid 1 not found already
Checking filesystem on /dev/sdj
UUID: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
found 49947638987 bytes used err is 0
total csum bytes: 1160389912
total tree bytes: 1439944704
total fs tree bytes: 150958080
total extent tree bytes: 55762944
btree space waste bytes: 69500665
file data blocks allocated: 1570420359168
 referenced 1568123219968
Btrfs v0.20-rc1-358-g194aa4a
<grifferz> it doesn't seem to have complained. shall I give mounting
           another try, or fsck again from another disk?
<darkling> Hmm. Odd that it's complaining about the system array, then.
<darkling> That check you just did is read-only, so it won't have
           changed anything.
<grifferz> doing the fsck with another device gives identical output
<grifferz> and no, I still can't mount it
<darkling> Oooh, hang on.
<darkling> Try with -odegraded
<grifferz> # mount -odegraded /srv/tank
<grifferz> Aug  8 19:20:58 specialbrew kernel: [ 1347.388182] BTRFS:
           device label tank devid 5 transid 798058 /dev/sdj
<grifferz> Aug  8 19:20:58 specialbrew kernel: [ 1347.628728] BTRFS
           info (device sdj): allowing degraded mounts
<grifferz> Aug  8 19:20:58 specialbrew kernel: [ 1347.633978] BTRFS
           info (device sdj): disk space caching is enabled
<grifferz> Aug  8 19:20:58 specialbrew kernel: [ 1347.725065] BTRFS:
           bdev (null) errs: wr 122025014, rd 0, flush 293476, corrupt
           0, gen 0
<grifferz> Aug  8 19:20:58 specialbrew kernel: [ 1347.730473] BTRFS:
           bdev /dev/sdg errs: wr 3, rd 8, flush 0, corrupt 0, gen 0
<grifferz> prompt not returned yet
<darkling> OK, that's probably good.
<grifferz> bit worrying it says it has an error on another disk though!
<darkling> Those are cumulative over the lifetime of the FS.
<darkling> Wouldn't worry about it too much.
<grifferz> right okay, some of those happened the other day when the
           whole bus was resetting
<grifferz> prompt still not returned :(
<darkling> Bugger...
<grifferz> yeah iostat's not showing any disk activity although the
           rest of the system still works
<darkling> Anything in syslog?
<grifferz> no that was the extent of the syslog messages, except for a
           hung task warning just now but that is for the mount and
           for btrs-transactiblah
<darkling> How about -oro,recovery,degraded?
<darkling> You'll probably have to reboot first, though.
<grifferz> I can't ctrl-c that mount so should I try that in another
           window or reboot and try it?
<grifferz> probably best to reboot I suppose
<grifferz> I suspect the problem's here though:
<grifferz> Aug  8 19:26:33 specialbrew kernel: [ 1682.538282]
           [<ffffffffa02f1610>] ? open_ctree+0x20a0/0x20a0 [btrfs]
<darkling> Yeah, open_ctree is a horrible giant 1000-line function.
<darkling> Almost every mount problem shows up in there, because
           that's where it's used.
<grifferz> hey that appears to have worked!
<darkling> Cool.
<grifferz> but it doesn't say anything useful in the syslog
<grifferz> so I worry that trying it normally will still fail
<darkling> Now unmount and try the same thing without the ro option.
<darkling> Once that works, you'll have to use -odegraded to mount the
           degraded FS until the new disk arrives,
<darkling> or simply balance to RAID-1 immediately, and then balance
           again when you get the new disk.
<grifferz> that mount command hasn't returned :(
<darkling> That's -odegraded,recovery ?
<grifferz> I think I will put the new disk in and take a copy of all my
           data from the read-only mount
<grifferz> and yes that is correct
<darkling> OK, might be worth doing one or both of upgrading to 3.16
           and reporting to bugzilla.kernel.org
<darkling> You could also take a btrfs-image -c9 -t4 of the filesystem
           (while not mounted), just in case someone (josef) wants to
           look at it.

A bug report was duly filed.

A new kernel, and some success.

It’s been a long time since I bothered to compile a kernel. I remember it as being quite tedious. Happily the procedure is now really quite easy. It basically amounted to:

$ wget -qO - https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.16.tar.xz | xzcat | tar xvf -
$ cd linux-3.16
$ cp /boot/config-3.14-0.bpo.2-amd64 .config
$ make oldconfig
(press Return a lot)
$ make deb-pkg
# dpkg -i ../linux-image-3.16.0_3.16.0-1_amd64.deb

That procedure is documented in the Debian Linux Kernel Handbook.

I wasn’t expecting this to make any difference, but it did! When booted into this kernel I was able to do:

# mount -odegraded,recovery /srv/tank
# umount /srv/tank
# mount -odegraded /srv/tank

and end up with a read-write, working volume.

There were no interesting syslog messages.

Thankfully from this point on the volume was fully read-write, so even though a fair bit of work was still needed I could put it back in service and no further reboots were required.

<grifferz> oh, that's interesting. after a short delay, mounting
           -orecovery,degraded on 3.16 does actually work. it appears!
<darkling> \o/
<grifferz> do I need to unmount it and remount it with just -odegraded 
           now?
<darkling> Yes, that should work.
<grifferz> and then I can put the new disk in, add it to the volume,
           rebalance it, remove the small 500G disk, convert to raid-1?
<darkling> A bit faster to use btrfs dev replace to switch out the
           small device for the new one.
<darkling> Then btrfs dev resize n:max /mountpoint for the n that's
           the new device.
<darkling> Then restripe to RAID-1.
<grifferz> right, great, it's mounted with just -odegraded
<grifferz> so: 1) insert new disk, 2) "dev replace" the 500G one for
           this new device?
<darkling> Yes.
<darkling> That will leave the new device with an FS size of 500G, so
           you need to resize it.
<darkling> (Same idea as resizing the partition but not the ext* FS on
           it)
<darkling> The resize should take a few ms. :)
<grifferz> I don't seem to have a "btrfs device replace" command. do I
           need to build a new btrfs-progs?
<darkling> What --version is it?
<darkling> (Probably build a new one, yes)
<grifferz> Btrfs v0.20-rc1-358-g194aa4a
<darkling> Yeah, that's old enough you're mising some features.
<grifferz> ah, it's not "btrfs device replace" it's just "btrfs
           replace …" I've built v3.14.2 now

So that was:

$ sudo btrfs replace start /dev/sdj /dev/sdl /srv/tank

after carefully confirming that /dev/sdj really was the 500G disk and /dev/sdl really was the new 3TB disk I just inserted (several of the device names change throughout this post as disks are ejected and inserted!).

<darkling> Oh, OK. Seems like a slightly odd place to put it. :(
<darkling> The userspace tools are a bit of a mess, from a UI point of
           view.
<darkling> I'm currently struggling with several problems with btrfs
           sub list, for example.
<grifferz> heh: $ sudo btrfs replace status /srv/tank
<grifferz> 0.4% done, 0 write errs, 0 uncorr. read errs
<darkling> Look on the bright side: it's way faster than two balances.
<grifferz> won't this still leave me with a volume that it thinks has a
           device missing though?
<darkling> Yes, but if you're going to remove the small device, this is
           still probably the fastest approach.
<grifferz> after it's finished with the replace and I've done the
           resize, will a "device delete" of the small one leave it
           satisfied?
<darkling> Once the replace has finished, the small device should no
           longer be a part of the FS at all.
<grifferz> oh yeah
<grifferz> surely it should be happy at that point then, with 4 equal
           sized devices?
<darkling> You might want to run wipefs or a shitload of /dev/zero
           with dd over it, just to make sure. (Bonus points for doing
           it from orbit. ;) )
<darkling> The replace is a byte-for-byte replacement of the device.
<darkling> So if you were degraded before that, you're degraded after
           it.
<grifferz> right but after the replace and resize then?
<darkling> The resize just tells the FS that there's more space it can
           use -- it's a trivial O(1) operation.
<grifferz> what will I need to do to make it happy that there aren't
           any missing devices then?
<darkling> An ordinary balance. (Or a balance with -dconvert=raid1 if
           you want to go that way)
<grifferz> I do ultimately. In which case do you think there is any
           reason to do the balances separately?
<darkling> No reason at all.
<grifferz> righto :)

The replace finishes:

Started on 11.Aug 20:52:05, finished on 11.Aug 22:29:54, 0 write errs, 0 uncorr. read errs

It turns out wipefs wasn’t necessary; I did it with -n anyway just to see if it would find anything, but it didn’t.
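In other words, just a no-act run of wipefs against the old disk, something like this (the device name depends on where the old 500GB disk ends up, so check first):

$ sudo wipefs -n /dev/sdj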

Time to do the balance/convert.

<grifferz> $ sudo btrfs balance start -v -dconvert=raid1,soft
           -mconvert=raid1,soft /srv/tank
<grifferz> Dumping filters: flags 0x7, state 0x0, force is off
<grifferz>   DATA (flags 0x300): converting, target=16, soft is on
<grifferz>   METADATA (flags 0x300): converting, target=16, soft is on
<grifferz>   SYSTEM (flags 0x300): converting, target=16, soft is on
<grifferz> fingers crossed :)
<grifferz> I am a bit concerned that syslog is mentioning sdj which is
           no longer part of the volume (it was the smallest disk)
<grifferz> Aug 11 22:45:23 specialbrew kernel: [10551.595830] BTRFS
           info (device sdj): found 18 extents
<grifferz> for example
<grifferz> and btrfs fi sh confirms that sdj is not there any more
<grifferz> well I think it is just confused because iostat does not
           think it's touching sdj any more
<grifferz> hah, balance/convert complete, but:
<grifferz> $ sudo btrfs fi sh
<grifferz> Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
<grifferz>         Total devices 5 FS bytes used 1.09TiB
<grifferz>         devid    2 size 1.82TiB used 372.03GiB path /dev/sdg
<grifferz>         devid    3 size 1.82TiB used 373.00GiB path /dev/sdh
<grifferz>         devid    4 size 1.82TiB used 372.00GiB path /dev/sdi
<grifferz>         devid    5 size 2.73TiB used 1.09TiB path /dev/sdl
<grifferz>         *** Some devices missing
<grifferz> Btrfs v3.14.2
<grifferz> so now half my data is on sdl, the rest is split between
           three, and it still thinks something is missing!
<darkling> btrfs dev del missing /mountpoint
<grifferz> aha!
<darkling> And the way that the allocator works is to keep the amount
           of free space as even as possible -- that maximises the
           usage of the FS.
<grifferz> that was it :)
$ sudo btrfs filesystem show
Label: 'tank'  uuid: 472ee2b3-4dc3-4fc1-80bc-5ba967069ceb
        Total devices 4 FS bytes used 1.09TiB
        devid    2 size 1.82TiB used 372.03GiB path /dev/sdg
        devid    3 size 1.82TiB used 373.00GiB path /dev/sdh
        devid    4 size 1.82TiB used 372.00GiB path /dev/sdi
        devid    5 size 2.73TiB used 1.09TiB path /dev/sdl
 
Btrfs v3.14.2

Phew!

everything went better than expected

by Andy at August 08, 2014 06:29 AM

August 05, 2014

Phil Spencer (CrazySpence)

Raspberry Pi gaming information.

The Raspberry Pi makes a great choice for those who are interested in classic gaming, such as emulation or even some older FPS action like Quake 1, 2 or 3.

However, there are a few tweaks and caveats one should be aware of with the Pi before diving in.

SDL 1.2 vs SDL2

SDL 1.2:
SDL 1.2 is tried and true and has thousands of applications written for it. It also has a specially modified release for the Pi that works on the console, letting you save resources by staying out of that cumbersome X Window environment, but there are some limits.

Every SDL 1.2 application I’ve used on the Pi seems to be capped at 640×480 on the console, regardless of your video connection. It can go lower (if you’re using SDTV output) but apparently not higher; if that’s incorrect, I have yet to find a way around it. The output is scaled to the size of your display, though, so the impact is minimal, and the lower resolution is actually better for your Pi’s performance.

SDL2.0

SDL 2 has Linux console usage built in, so in SDL 2’s natural state from source you can achieve X-less running. That being said, unlike 1.2, SDL 2.0 will use your display’s native resolution for output: anything smaller (forced by the application) becomes a smaller screen inside that native resolution and does not scale (yet; I hope this gets fixed). The problem with that is that if you try to run a game at 1920×1080 there’s a good chance it could perform poorly. SDL2 also only reports that one resolution to the application, so you can not change it from within the game (once again, I hope they fix that).

However, you can force the issue with your /boot/config.txt

If you set an HDMI or DVI mode of a lower resolution (one that your display supports), SDL2 will use that for your game instead. I don’t mean changing the framebuffer size, that will not help in this case; you have to actually set the HDMI mode itself.

For example 720p instead of 1080p:

hdmi_drive=2
hdmi_mode=85 #720p

If you just wanted a smaller X session and Linux console, for example, you could just do:

framebuffer_width=720
framebuffer_height=480

You will get an apparent resolution change, but in reality your Pi is still operating at the display’s full resolution and only the framebuffer is scaled into it. Since SDL2 actually utilizes the Pi’s video driver, it will ignore this and initialize the program you start at the full resolution, so if you want to control SDL2’s resolution, use the hdmi_* options in config.txt.
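Note that hdmi_mode is interpreted within an hdmi_group (1 for CEA/TV modes, 2 for DMT/monitor modes), so if your chosen mode doesn’t seem to take effect, check that hdmi_group is set appropriately too. You can confirm which mode the Pi actually picked with the tvservice tool that ships with Raspbian, roughly like so:

tvservice -s       # show the current HDMI state and resolution
tvservice -m CEA   # list supported CEA (TV) modes
tvservice -m DMT   # list supported DMT (monitor) modes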

GPIO uinput gaming devices:

SDL2 breaks them at first. The issue was explained here. If you make a udev rule, or alter the file I talk about there, you can re-enable those devices for SDL2 applications without affecting SDL 1.2 usage.
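The udev rule route is just a one-liner that marks the uinput device as a keyboard so SDL2 will accept it. Something along these lines should do it, assuming your uinput device reports its name as "retrogame" (check with evtest) and using a file name of my own choosing:

# /etc/udev/rules.d/10-retrogame.rules
SUBSYSTEM=="input", ATTRS{name}=="retrogame", ENV{ID_INPUT_KEYBOARD}="1"

Then reload the rules and re-trigger:

sudo udevadm control --reload-rules
sudo udevadm trigger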

Raspberry Pi CPU

The default clock is 700MHz. If you plan to play games you have to overclock. I personally use the High setting in raspi-config, which sets it to 950MHz. I have tried 1GHz but I found I would get a hard system lock from time to time. Results may vary.
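If you’d rather set it by hand, the High preset just writes a few values into /boot/config.txt. From memory they are roughly the following, but check what raspi-config actually puts in yours rather than taking my word for it:

arm_freq=950
core_freq=250
sdram_freq=450
over_voltage=6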

Go have fun

It’s easy to turn the Pi into a retro gaming powerhouse and there are endless community resources at your disposal. Just keep in mind the limitations above and you will be up and running in no time.

 

(Screenshots: Super Mario War, QuakeForge, OpenParsec, Wing Commander 2, Ms Pacman and Street Fighter 2 running on the Pi.)

by KingPhil at August 05, 2014 04:25 PM

July 30, 2014

Tim Waugh (cyberelk)

Fewer auth dialogs for Print Settings

The latest version of system-config-printer adds a new button to the main screen: Unlock. This is a GtkLockButton attached to the “all-edit” polkit permission for cups-pk-helper.

The idea is to make it work a bit more like the Printing screen in the GNOME Settings application. To make any changes you need to click Unlock first, and this fetches all the permissions you need in order to make any changes.


This is a change from the previous way of working. Before this, all user interface elements that made changes were available immediately, and permission was sought for each change required (adding a printer, fetching a list of devices, etc).


Hopefully it will now be a little easier to use, as only one authentication dialog will generally be needed rather than several as before.

Exceptions are:

  • the dialog for reading/changing the firewall (when there are no network devices discovered but the user is trying to add a network queue), and
  • the dialog for adjusting server settings such as whether job data is preserved to allow for reprinting

These are covered by separate polkit permissions and I think a LockButton can only be responsible for one permission.

by Tim at July 30, 2014 10:31 AM

July 18, 2014

Phil Spencer (CrazySpence)

SDL2 and retrogame

It doesn’t work!

Or does it…

Anyways, as usual in my quest to use my arcade controls for just about everything, I ran into a snag with OpenParsec yesterday… turns out it wasn’t accepting my arcade controls.

In a convenient sort of way the main retrogame git had a user pop in complaining of the same problem. Well this just won’t do at all I thought and I jumped into the problem!

I spent a lot of the day in the udev and evdev docs, ran evtest and udevadm tests, and EVERYTHING said retrogame works with evdev, but it wasn’t working with SDL2.

After my brains felt melted I took the day off to think about it and went back into it about an hour ago. This time combing through SDL2’s code as my tests earlier proved retrogame was working with evdev as far as Linux itself was concerned.

SDL_udev.c

val = _this->udev_device_get_property_value(dev, "ID_INPUT_KEYBOARD");
if (val != NULL && SDL_strcmp(val, "1") == 0 ) {
   devclass |= SDL_UDEV_DEVICE_KEYBOARD;
}

I was reading through this file and came across the block where it identifies devices. It bases this on whether udev has set the property ID_INPUT_JOYSTICK, ID_INPUT_MOUSE or ID_INPUT_KEYBOARD. Boom, there it was: retrogame only identifies as ID_INPUT_KEY, and it didn’t make sense to me why SDL2 would be so specific when some devices identify as ID_INPUT_KEY or ID_INPUT_REL/ABS instead of the more obvious KEYBOARD or MOUSE. So anyways, I posted my changed file on github and I am going to ask SDL whether it was an oversight or had a proper purpose.

If you use retrogame, just rebuild your SDL2 with the file I provided and your arcade controls should be working again. It might also fix python and other GPIO input scripts with SDL2, let me know!
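The rebuild itself is just the usual autotools routine from the SDL2 source tree, roughly as below, assuming you’ve dropped the modified SDL_udev.c into src/core/linux/ first:

./configure
make
sudo make install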


by KingPhil at July 18, 2014 08:12 PM

July 16, 2014

Phil Spencer (CrazySpence)

OpenParsec on Raspberry Pi

I know there hasn’t been an OpenParsec update in some time so I decided to make it a good one. Over the last couple months I have made it my business to see how far the Pi can be stretched as far as gaming goes.

But… I wanted some space combat to go with this. As my Tag cloud shows, Parsec has been talked about a lot on my blog, as I and several others brought it back to life and completed some of the Internet client/server parts of the game the original developers abandoned.

Currently for our next release we are getting the last few pieces to work, teleporters, proper afterburner and invulnerability FX and networking fixes.

However, wouldn’t it also be cool if it worked on the Pi…. Yes, yes it would

Firstly, it will compile without any problems as it was. Once you run it, that is where the problems start. Parsec is a GL game, and I use the term GL loosely. It was a 3DFX game that changed to GL at the turn of the century, so it did all sorts of whacky things that drove our GL guy (slime) bonkers, and being GL also means it isn’t hardware accelerated on the Pi.

Changing it to GLES would be a challenge. slime slowly adopted the challenge and I would follow along with tests and complaints, as slime didn’t actually have a Pi to test with; he did most of his GLES testing with an iOS simulator on his mac. At one point he stalled and I did a whole bunch of crazy hacks to prove it could be done, and to be honest I got pretty far, so far in fact that I got to the point where a “precaching” screen would come up and then lock up the Pi.

There was also a point before that where it went to menu but no textures or fonts were loaded:

(Photos: the menu loading with no textures or fonts.)

That of course made slime need to prove he could best an amateur who was only flying on blind luck, so he did a proper GLES port and stated it worked in the iOS simulator. Still, for me it froze at precaching.

GDB proved useless as it would not actually load the GLES context from console (bug with SDL console on rpi  maybe?)

I remember slime saying he had disabled the compression routines for texturing, so I went through all the .con files for loading objects. I found that in _telep, f6_2, _f7_2 and _f8_2 there was “flags compress” tagged onto the texture loading, so on a long-shot hunch I removed the flags.
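If you want to track those lines down yourself, a quick grep over the extracted data files will find them; something like:

grep -rn 'flags compress' --include='*.con' .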

 

Boom goes the dynamite.

I was staring at a Parsec menu screen with the teleporter animations on my Pi. I went through the Spacecraft menu and noticed some of the objects had no textures. It was different objects each time depending on which one was first when I opened the menu.


Maybe it was RAM related, I thought. I went into the options, dropped the textures to 16-bit and low quality, and all but 2 of the ships rendered now. The first ship I had left “flags” in the texture loading line; the second stated the texture files didn’t exist.

As it turns out, because I was working with extracted packs, the textures for that one ship were in UPPERCASE; renaming them to lowercase solved it. The pack manager must not care about case.
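If you hit the same thing, a small shell loop in the texture directory takes care of it; a rough sketch (run it in the right directory, and it skips names that are already lowercase):

for f in *; do
    lc=$(echo "$f" | tr 'A-Z' 'a-z')
    [ "$f" != "$lc" ] && mv -n "$f" "$lc"
done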

So there we have it, OpenParsec runs on the Pi, and pretty decently too. The only problem I have is the arcade controls do not work for some reason….I will have to look into that


My Rpi branch of the OpenParsec source is available here: https://github.com/OpenParsec/openparsec/tree/crazyspence

Modified Data files which do not cause crash here: https://github.com/CrazySpence/openparsec-assets-extracted

Game footage if you’ve never heard of Parsec before:


by KingPhil at July 16, 2014 09:53 PM

July 03, 2014

Alan Pope (popey)

Creating Time-lapse Videos on Ubuntu

I’ve previously blogged about how I sometimes set up a webcam to take pictures and turn them into videos. I thought I’d update that here with something new I’ve done, fully automated time lapse videos on Ubuntu. Here’s what I came up with:-

(apologies for the terrible music, I added that from a pre-defined set of options on YouTube)

(I quite like the cloud that pops into existence at ~27 seconds in) :)

Over the next few weeks there’s an Air Show where I live and the skies fill with all manner of strange aircraft. I’m usually working so I don’t always see them as they fly over, but usually hear them! I wanted a way to capture the skies above my house and make it easily available for me to view later.

So my requirements were basically this:-

  • Take pictures at fairly high frequency – one per second – the planes are sometimes quick!
  • Turn all the pictures into a time lapse video – possibly at one hour intervals
  • Upload the videos somewhere online (YouTube) so I can view them from anywhere later
  • Delete all the pictures so I don’t run out of disk space
  • Automate it all

Taking the picture

I’ve already covered this really, but for this job I have tweaked the .webcamrc file to take a picture every second, only save images locally & not to upload them. Here’s the basics of my .webcamrc:-

[ftp]
dir = /home/alan/Pictures/webcam/current
file = webcam.jpg
tmp = uploading.jpeg
debug = 1
local = 1

[grab]
device = /dev/video0
text = popeycam %Y-%m-%d %H:%M:%S
fg_red = 255
fg_green = 0
fg_blue = 0
width = 1280
height = 720
delay = 1
brightness = 50
rotate = 0
top = 0
left = 0
bottom = -1
right = -1
quality = 100
once = 0
archive = /home/alan/Pictures/webcam/archive/%Y/%m/%d/%H/snap%Y-%m-%d-%H-%M-%S.jpg

Key things to note: "delay = 1" gives us an image every second. The archive directory is where the images will be stored, in sub-folders for easy management and later deletion. That’s it, put that in the home directory of the user taking pictures and then run webcam. Watch your disk space get eaten up.

Making the video

This is pretty straightforward and can be done in various ways. I chose to do two-pass x264 encoding with mencoder. In this snippet we take the images from one hour – in this case midnight to 1AM on 2nd July 2014 – from /home/alan/Pictures/webcam/archive/2014/07/02/00 and make a video in /home/alan/Pictures/webcam/2014070200.avi and a final output in /home/alan/Videos/webcam/2014070200.avi which is the one I upload.

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=1:turbo:bitrate=9600:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup
mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=2:bitrate=9600:frameref=5:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup -o /home/alan/Videos/webcam/2014070200.avi

Upload videos to YouTube

The project youtube-upload came in handy here. It’s pretty simple with a bunch of command line parameters – most of which should be pretty obvious – to upload to youtube from the command line. Here’s a snippet with some credentials redacted.

python youtube_upload/youtube_upload.py --email=########## --password=########## --private --title="2014070200" --description="Time lapse of Farnborough sky at 00 on 02 07 2014" --category="Entertainment" --keywords="timelapse" /home/alan/Videos/webcam/2014070200.avi

I have set the videos all to be private for now, because I don’t want to spam any subscriber with a boring video of clouds every hour. If I find an interesting one I can make it public. I did consider making a second channel, but the youtube-upload script (or rather the YouTube API) doesn’t seem to support specifying a different channel from the default one. So I’d have to switch to a different default channel to work around this, and then make them all public by default, maybe.

In addition YouTube sends me a patronising “Well done Alan” email whenever a video is uploaded, so I know when it breaks: I stop getting those mails.


Delete the pictures

This is easy: I just rm the /home/alan/Pictures/webcam/archive/2014/07/02/00 directory once the upload is done. I don’t bother to check if the video uploaded okay first, because if it fails to upload I still want to delete the pictures, or my disk will fill up. I already have the videos archived, so I can upload those later if the script breaks.

Automate it all

webcam is running constantly in a ‘screen’ window, so that part is easy. I could detect when it dies and re-spawn it maybe; it has been known to crash now and then. I’ll get to that when it happens ;)
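If I ever bother, the lazy fix would probably just be to run it in a loop inside the screen session, something like:

while true; do webcam; sleep 5; done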

I created a cron job which runs at 10 mins past the hour, and collects all the images from the previous hour.

10 * * * * /home/alan/bin/encode_upload.sh

I learned the useful “1 hour ago” option to the GNU date command. This lets me pick up the images from the previous hour and deals with all the silly calculation to figure out what the previous hour was.
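For example, building the path of the previous hour’s archive directory (which is also the directory that gets deleted afterwards) boils down to something like this; a sketch of the idea rather than the exact lines from my script:

HOUR_DIR="/home/alan/Pictures/webcam/archive/$(date --date='1 hour ago' '+%Y/%m/%d/%H')"
rm -rf "$HOUR_DIR"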

Here (on github) is the final script. Don’t laugh.

by popey at July 03, 2014 03:17 PM

June 23, 2014

Tony Whitmore (tonytiger)

Tom Baker at 80

Back in March I photographed the legendary Tom Baker at the Big Finish studios in Kent. The occasion was the recording of a special extended interview with Tom, to mark his 80th birthday. The interview was conducted by Nicholas Briggs, and the recording is being released on CD and download by Big Finish.

I got to listen in to the end of the recording session and it was full of Tom’s own unique form of inventive story-telling, as well as moments of reflection. I got to photograph Tom on his own using a portable studio setup, as well as with Nick and some other special guests. All in about 7 minutes! The cover has been released now and it looks pretty good I think.

Tom Baker at 80

The CD is available for pre-order from the Big Finish website now. Pre-orders will be signed by Tom, so buy now!


by Tony at June 23, 2014 05:31 PM

Andy Smith (grifferz)

How to work around lack of array support in puppetlabs-firewall?

After a couple of irritating firewalling oversights I decided to have a go at replacing my hacked-together firewall management scripts with the puppetlabs-firewall module.

It’s going okay, but one thing I’ve found quite irritating is the lack of support for arrays of things such as source IPs or ICMP types.

For example, let’s say I have a sequence of shell commands like this:

#!/bin/bash
 
readonly IPT=/sbin/iptables
 
for icmptype in redirect router-advertisement router-solicitation \
                address-mask-request address-mask-reply; do
    $IPT -A INPUT -p icmp --icmp-type ${icmptype} -j DROP
done

You’d think that with puppetlabs-firewall you could do this:

class bffirewall::prev4 {
    Firewall { require => undef, }
 
    firewall { '00002 Disallow possibly harmful ICMP':
        proto    => 'icmp',
        icmp     => [ 'redirect', 'router-advertisement',
                      'router-solicitation', 'address-mask-request',
                      'address-mask-reply' ],
        action   => 'drop',
        provider => 'iptables',
    }
}

Well it is correct syntax which installs fine on the client, but taking a closer look it hasn’t worked. It’s just applied the first firewall rule out of the array, i.e.:

iptables -A INPUT -p icmp --icmp-type redirect -j DROP

There’s already a bug in Puppet’s JIRA about this.

Similarly, what if you need to add a similar rule for each of a set of source hosts? For example:

readonly MONITORS="192.168.0.244 192.168.0.238 192.168.4.71"
readonly CACTI="192.168.0.246"
readonly ENTROPY="192.168.0.215"
 
# Allow access from:
# - monitoring hosts
# - cacti
# - the entropy VIP
for host in ${MONITORS} ${CACTI} ${ENTROPY}; do
    $IPT -A INPUT -p tcp --dport 8888 -s ${host} -j ACCEPT
done

Again, your assumption about what would work…

    firewall { '08888 Allow egd connections':
        proto    => 'tcp',
        dport    => '8888',
        source   => [ '192.168.0.244', '192.168.0.238', '192.168.4.71',
                      '192.168.0.246', '192.168.0.215' ],
        action   => 'accept',
        provider => 'iptables',
    }

…just results in the inclusion of a rule for only the first source host, with the rest being silently discarded.

This one seems to have an existing bug too; though it has a status of closed/fixed it certainly isn’t working in the most recent release. Maybe I need to be using the head of the repository for that one.

So, what to do?

Duplicating the firewall {} blocks is one option that’s always going to work as a last resort.

Puppet’s DSL doesn’t support any kind of iteration as far as I’m aware, though it will in future — no surprise, as iteration and looping is kind of a glaring omission.

Until then, does anyone know any tricks to cut down on the repetition here?

by Andy at June 23, 2014 03:28 AM

June 18, 2014

Matthew Walster (dotwaffle)

dotwaffle

This has been one of the hardest decisions I’ve had to make, but ultimately I think it’s the right one. There’ll be more news of a lesser kind when I get stuff confirmed with work — but basically I have to make a decision between three great opportunities and I’ve chosen to do the one that means I’ll be working abroad for a little while. A few weeks in Hong Kong, a month or so in Tokyo.

I realise it sounds spoilt and selfish to say that I’m “settling” for that. It’s a great opportunity and I should be thankful for it.

I’m going to regret doing it. I haven’t even done it yet, but I already regret choosing this option over the two other choices which would have meant radical new changes to my life. I’ll do these things to the best of my ability, and I’ll come out a stronger leader and a better person for having made this decision as a result. I just have this nagging fear that perhaps this was my one chance to go take a trip of a lifetime, and I’m letting it pass because once again my chosen career is coming first. Thankfully I work with some great people, and they’ve been very supportive throughout this process.

9 days. 9 days I was ecstatic, I smiled and felt better than I had done since I can remember. All I can feel now is grey. A feeling of loss when I should be feeling excitement at the new challenges ahead. Excitement that I get to go live and work in the Far East for what potentially could be a total of 2 or 3 months (on and off). I’ll get to go immerse myself in a culture that I’ve admired for some time. Who knows, perhaps I’ll still go and do the Zero G flight in Moscow over a week off later this year. Maybe I’ll even finally get around to deciding what I’m going to do with my thirties rather than living in the day.

For now, though, I’ve made the decision and that’s what I’ll be passing along in the morning.

Thank you to all who read the big change post (now password protected as it’s a bit sensitive) and gave their honest opinions. Thank you to those who let me know it touched them, to those who cried (yes, there’s more than one of you!), to those who expressed concern before telling me that they’d been feeling the same way and have given them new drive.

And of course, thank you to those who kept (and keep) the secret of what I was planning. I’m not an emotional person, I don’t find it easy sharing this kind of stuff. The fact that so many of you have kept it quiet has given me new found faith in my friends.

Thank you all. For now, I’m a bit bummed out. I believe you need to buy me a drink. =)


by Matthew Walster at June 18, 2014 09:02 PM

June 17, 2014

Phil Spencer (CrazySpence)

Quakeforge on RPi

On my never-ending quest for as much gaming as possible on my little Raspberry Pi, I was remembering a time when the laws of physics were mostly irrelevant and the frags were plentiful.

Quake

The original quake, no bloody 2, no bloody 3 and no bloody 4! Just pure fast paced carnage. So I pulled up the latest Quakeforge repository and decided to see where it would get me.

The long quest for Quake

The first problem I ran into was fairly minor: it seemed Raspbian defaulted to bison 2.5 and QF required 2.6 for some bloody reason. This was no big deal, I just downloaded the source, compiled it and away it went. The process griped about a few more libraries along the way but they were all available from apt-get from that point on.
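Building bison was the usual grab-a-tarball-and-compile affair, roughly as follows, substituting whichever 2.6.x release is current on the GNU mirrors:

wget http://ftp.gnu.org/gnu/bison/bison-2.6.tar.gz
tar xf bison-2.6.tar.gz
cd bison-2.6
./configure && make && sudo make install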

Once I had all the dependencies and ./configure ran its course I had a nice status menu of what it was building when I did the make.

The make process takes a long time: walk away, go to the store, do something. This is a low-powered single-processor machine and it takes a while. Make sure you have everything configured the way you want, as any change makes it start all over again.

So after growing a beard and starting a small garden in the back yard, my first attempt at QF was compiled. Excited, I typed nq-sdl, only to be presented with an error: “libGL.so not found”. QF defaults to the GL version, it seems.

nq-sdl +set vid_render "sw"

Tada! We have Quake! The screen was a small window in the center of my console; to correct this I had to revise my startup command:

nq-sdl +set vid_render "sw" +set vid_width "640" +set vid_height "480"

Now we have full screen Quake! Almost immediately though I noticed something was wrong. First the sound was popping. Eventually the client would become unbearably slow and fill the console with errors stating it was out of channels.

Secondly any time the player hits an item or is damaged the screen flickers for about a second. Very distracting, very annoying.

Sound problem

I reconfigured and disabled the ALSA and OSS drivers with --disable-alsa and --disable-oss, as I was trying for an SDL-only build. This fixed the sound issue 100%: no popping, no eventual slowdown and no channel problems.
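In other words the configure step became something like this before the long make:

./configure --disable-alsa --disable-oss
make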

Flickering problem

This issue took me a day or so of on-and-off investigating to isolate, as I had never dealt with the Quake source directly before.

Eventually I came across a file called cl_view.c which had, plain as day, two functions for handling damage and bonus collections with a palette-change “flash”. Just to see if I had the right idea, I blocked out the cshift variable change with an #ifdef block and recompiled.

Sure enough, no more flicker. I’m not sure why it’s broken in software rendering, but I filed a ticket with the project and hopefully someone properly fixes it someday. For now it works for me; time to move on!

Arcade controls

Come on, you knew that was coming… heh. Yes, I got Quake working with my arcade controls. At first I tried handling it through the Quake menu system under controls -> bindings, but this wasn’t actually setting the functions I wanted to the buttons I wanted.

I eventually had to edit ~/.quakeforge/id1/config.cfg by hand and set the bindings to map the keys my retrogame was sending onto the actions I wanted. It was a bit tedious and manual, BUT I got the results I wanted.

Retrogame changes

With the existing setup I had, I could pull up the menu with the KEY_ESC vulcan nerve pinch that the code already had, but to select items I needed a KEY_ENTER. What I did was add multiple key combinations to the code so I can send KEY_ESC, KEY_ENTER and a specific system-halt combination. This allows me to navigate MAME, Super Mario War and now Quake with ease. I have uploaded all my retrogame changes to github; you can view them here.

Not done yet

Now what about multiplayer? Don’t worry, I am going to do something special with qw-client-sdl in this regard. I’m sure there will be a post in the future about it.


by KingPhil at June 17, 2014 09:21 PM

June 15, 2014

Phil Spencer (CrazySpence)

Super Mario War on RPi

Ever play Super Mario War? If you haven’t it is essentially Super Mario death match in the setting of SMB1,2,3 and Mario World graphics.

It’s you and 3 other marios trying to stomp, hammer, shell, b-bomb each other to death until you have no lives left. There are other game modes like chicken, ctf and king of the hill to keep it interesting but to put it plainly it is a great multiplayer game.

I decided to see if I could get this running on my Pi and not only just running but also working with my arcade controls.

I downloaded the source yesterday and at first ran into a few basic issues: installing libSDL, SDL_mixer, SDL_image and the rest of the usual packages. Long story short, I got it working, but the backgrounds were garbled and maybe 1 or 2 maps were playable. Refreshing the maps list caused a crash after about 6 maps as well.
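For reference, those dependencies were just the SDL 1.2 development libraries; on Raspbian that came down to roughly the following, though package names may have changed since:

sudo apt-get install libsdl1.2-dev libsdl-mixer1.2-dev libsdl-image1.2-dev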

Still, a success: it was working in some manner at least, I thought.

Today I was investigating how to correct that issue when I came across a Raspberry Pi forum post outlining a better and simpler method to get it running, and it had a patch, one which I hoped would resolve my garbled maps issue.

Script:

wget --no-check-certificate https://aur.archlinux.org/packages/sm/smw-svn/smw-svn.tar.gz
tar -xf smw-svn.tar.gz
cd smw-svn
svn co http://supermariowar.googlecode.com/svn/trunk smw
cd smw
dos2unix configure
chmod +x configure
patch -Np0 < ../gcc.patch
./configure
make

I modified the script slightly to add --no-check-certificate to the wget (as mine failed by default without it) and it was off to the races. After about 20 minutes the script had done its job: the SVN was downloaded, patched, configured and compiled. Once you have run sudo make install you should be able to execute the game with “smw”.

Problems

Unfortunately the background issue and the map crash were still present. However, I noticed some maps now appeared to work, so I started going through the list, pruning the unplayable maps and the ones that caused the game to fault.

It’s also occasionally crashy (it was a beta after all) and despite my efforts I can’t seem to find a copy of the older, more stable 1.7 source. Despite the problems it is well worth the effort to install.

Controls

The controls for SMW are 100% customizable for game and menu input. This means anything SDL can detect as an input you can set up. You may need a keyboard at first for menu enter/cancel until you set up a primary controller for that but once that’s covered keyboards can be put away!

The first controller I obviously set up is my arcade controls. Works 100%

Second was a semi-busted PS3 controller I have that doesn’t work over Bluetooth anymore but still works over USB. I tried plugging the PS3 controller in through my keyboard’s hub first, but the bus power was too low, so I plugged it directly into the Pi. After that I started SMW and it was detected automatically.

You can have up to 4 players, so with a good powered hub or 4 Bluetooth gamepads you could have a lot of fun with this game.


by KingPhil at June 15, 2014 09:41 PM

June 11, 2014

Matthew Walster (dotwaffle)

dotwaffle

This post is password protected. You must visit the website and enter the password to continue reading.


by Matthew Walster at June 11, 2014 11:11 PM

Alan Pope (popey)

Ubuntu Online Summit ‘Ask Rick and Olli’ Session

One of the things we’re keen to continue to push with Ubuntu is a spirit of openness and inclusivity. Over the last couple of years with the reduction in ‘in person’ Ubuntu Developer Summits it’s been said Canonical developers are harder to reach, and that we’re not communicating effectively our plans and designs for the future direction of Ubuntu. We’ve been trying to address this via increased blogging, regular email status updates and video updates from all areas of the community.

As always we’re also keen to hear feedback, we welcome email discussion on our lists, bug reports, design mock-ups and of course well tested patches. We also want to ensure people at every level are available for Q&A sessions on a regular basis. Jono Bacon had a series of Q&A sessions which the Community Team will continue, but with additional domain experts and leaders during those sessions.

One of the biggest visible areas of change for Ubuntu is the transition from Unity 7 on Compiz (used in 14.04 and below) to Unity 8 and Mir (to be used in future releases). So today, as part of this week’s Ubuntu Online Summit, we’ve arranged a couple of sessions which we invite participation in.

At 14:00 UTC today Rick Spencer (VP of Engineering) and Oliver (Olli) Ries (Director of Unity & Display Server) will hold an Ask Rick & Olli session. Bring along your questions about Unity, Mir, convergence, future desktop direction and more.


An hour later at 15:00 UTC we have a Convergence Progress Report session where you can get an update on where we stand with our converged vision, and of course participate on the hangout or via IRC.

Click the time links above to find out when these are happening in your timezone today, and the other links to join in the sessions at that time. If you miss it you can watch the sessions later using the same links.

by popey at June 11, 2014 11:31 AM

May 25, 2014

BitFolk Issue Tracker

Misc infrastructure - Feature #124: entropy.lon.bitfolk.com over IPv6

Yepp, getting the entropy from 2001:ba8:1f1:f377::2 works for me.

Thanks!

by halleck at May 25, 2014 04:42 PM

Misc infrastructure - Feature #124: entropy.lon.bitfolk.com over IPv6

The current version of haproxy may be good enough actually. Please give:

2001:ba8:1f1:f377::2

a try as the entropy key server IP.

by admin at May 25, 2014 04:14 PM

Misc infrastructure - Feature #124: entropy.lon.bitfolk.com over IPv6

I will need to upgrade the version of haproxy that is fronting this service, and I think I would like to take the time to provision new dedicated VMs for haproxy load balancing at the same time. Other than that there is no reason this cannot be done.

by admin at May 25, 2014 03:42 PM

May 23, 2014

Phil Spencer (CrazySpence)

Raspberry Pi: On/OFF

Last Saturday I made a shutdown button for my XBMC Pi, which has worked great and made my SD cards live in a happier place.

I decided to look into a power-on button, because it is still sort of silly to unplug and replug the Pi just to turn it back on. Well, as it turns out, if you have a button on pin 5 (GPIO 3) it will reset the Pi out of the halt state when pressed.

I verified this first with my arcade box, as it already had a shutdown implemented and an on button would be great there too. I probably already had a button on that pin anyway, as I only had 2 free GPIO pins to begin with, and sure enough my yellow right-hand button was on it.

I shut down the arcade box using the coin+1p nerve pinch and once the Pi was in a halted state I pressed the yellow button.

Success!

I immediately applied this change to my XBMC Pi just by moving my button connection from 7/GND to 3/GND and updating the retrogame program to reflect the change.

So now both my Pis have graceful shutdown buttons and power-on buttons. Life is good.


by KingPhil at May 23, 2014 03:19 PM

May 22, 2014

Phil Spencer (CrazySpence)

Raspberry Pi View Client

A couple of weeks back my main desktop gave up the ghost after 6 years of usage. This leaves me without a desktop at home, and if something arose that kept me at home I would have no way to work from home.

So I was thinking…

What about the VMware View Open Client? That was a source code project on Google Code that let extra platforms build the RDP-based View client. Well, it turns out that project was shut down suddenly over a year ago, but while searching for it I found that someone had managed to get a client working on the Pi with a Debian ARM binary package that still seemed to exist.

The Raspberry Pi Thin Client project seemed to have what I was looking for out of the box: a View client that works with current View deployments, plus half a dozen other thin client packages for other platforms.

Problems problems problems

View client:

My Pi was on Ethernet to start. I opened up View, put in my connection server and hit connect. It popped up the MOTD and I continued, entered my credentials, and then the desktops available to me appeared. Great so far! I clicked the desktop pool, the wheel spun, and POOF, the window went away.

- I made sure RDP was selected

- I set the window to small in case the poor Pi couldn’t handle it

Poof! The window went away again!

CPU speed:

The CPU is set to the default 700MHz. This isn’t really a problem from a technical standpoint (the Pi is a 700MHz ARM by default), but all the applications crawl at this speed, including the View client.

Wifi:

Wifi did not work out of the box: it did not detect my USB wifi adapter, and this adapter has worked perfectly with all the other Pi projects I’ve done in the past.

Boot Splash video:

It ran longer than the actual boot process; on a 4:3 screen you can see the booted desktop in the background while you’re still stuck watching the movie.

Fixes Fixes Fixes

So I downloaded this for the sole purpose of using it for View, so when that didn’t work I was a little disheartened, but it was still the best bet I had and I wasn’t about to give up on it.

View fixes:

So as I said earlier, it worked up until showing me my available desktop pools and then closed when I selected one. I tried multiple combinations of options (smallest window size, RDP, full screen, windowed, etc.) and got nowhere. I checked Google and wasn’t getting far there either, other than a couple of people on the RPiTC forums with the same problem.

It wasn’t looking good.

So for the hell of it I decided to run the app in LXTerm to see if it said anything useful on stdout, and it did. First it told me where it saved its logs (/tmp/vmware-view-some#-tmp.log), and secondly, when it closed it gave me a useful error.

“vmware-view-usb cannot be started: file does not exist”. Once again I hit Google and got nothing but KB articles and other unrelated stuff, and once again it didn’t look good.

Then for some reason I decided to just touch vmware-view-usb in /usr/bin and guess what the hell happened? It worked.
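For anyone hitting the same error, the whole workaround boils down to this (assuming the client binary is vmware-view and lives in /usr/bin next to where vmware-view-usb is expected; sudo may or may not be needed depending on the user you run as):

# run the client from a terminal to see stdout and the log location
vmware-view
# create the file it is checking for
sudo touch /usr/bin/vmware-view-usb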

View problem solved! It works great, even faster than some of the proper full desktop versions.

CPU Speed:

700MHz just wasn’t cutting it: everything dragged and pushed the CPU to max. The nice folks had an overclock config already set in /boot/config.txt, just commented out. Easy solution. Solved.
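I’m not claiming these are the exact values the RPiTC image ships with, but once uncommented, an overclock block in /boot/config.txt looks roughly like this (these happen to be the common “medium” preset values):

# /boot/config.txt overclock settings (example values, adjust to taste)
arm_freq=900
core_freq=250
sdram_freq=450
over_voltage=2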

I also cleaned out the autostart applications for LXDE: I commented out all the startup applications and added @vmware-view --fullscreen.

The reason for this is so the system boots straight into the View client like a proper View thin client. Removing the desktop stats application (which showed CPU/RAM on the desktop) and the other startup apps greatly reduced CPU overhead.
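The exact path of the LXDE autostart file varies between images (on stock Raspbian it is usually /etc/xdg/lxsession/LXDE/autostart or ~/.config/lxsession/LXDE/autostart), and the commented entries below are only typical examples rather than exactly what RPiTC ships, but the end result looks something like:

#@lxpanel --profile LXDE
#@pcmanfm --desktop --profile LXDE
@vmware-view --fullscreen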

Wifi:

So the system worked great on Ethernet, but at home I mainly use wifi, so the fact that it wasn’t detecting my wifi at all was a serious problem, even a deal breaker. The system did appear to see the wifi card when I ran ifconfig wlan0, but it wasn’t connecting to anything and wpa_gui always reported that it could not get status from wpa_supplicant.

The issue seems to be that no wpa_supplicant.conf file existed and there was no proper definition of wlan0 in /etc/network/interfaces. I followed a simple guide here and the wifi issue was resolved. I can see why wifi may have been overlooked (a wireless connection does impact thin client performance), but the View client performance is still more than acceptable. Problem solved.
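The guide essentially walks you through creating those two files. A minimal Debian-style sketch (the SSID and passphrase here are placeholders, not anything from my setup) looks like this:

# /etc/network/interfaces -- define wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

# /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="MyNetwork"
    psk="MyPassphrase"
}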

Boot splash video:

It is a neat-looking video, and it looked great at home on the 40-inch HD TV, but on my 4:3 monitor you can quite easily tell the system has finished booting well before the video is over, so it was also kind of annoying. The videos are located in /opt; all you need to do to quickly get rid of them is rename them. If you want your own fancy boot video, they are in m4v format, so just replace the files with something of your own.
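I won’t pretend to remember the exact filename, so the one below is purely hypothetical; the idea is just to move the .m4v out of the way (or swap in your own):

# check what is actually sitting in /opt first; 'bootvideo.m4v' is made up
sudo mv /opt/bootvideo.m4v /opt/bootvideo.m4v.disabled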

Solved.

Project summary

So it took some tweaking and some unusual fixes to get this to do exactly what I needed, but after all is said and done I am quite pleased with the results. Now I have an SD card I can pop into any of my Pis and immediately bring up a View client for work. Small investment, big gain for me.


by KingPhil at May 22, 2014 04:50 PM

May 21, 2014

Andy Smith (grifferz)

The HIPPOBAG experience

Sacks of soil and stones in our front garden
We’ve been doing some work in our front garden recently. Part of that involved me digging up the top 5cm or so of the stony horrible existing soil.

All in all, around a tonne of stones and soil got bagged up and had been sitting in our front garden for a couple of months. We’ve kept four sacks for use in the back garden which left 21 sacks and around 800kg to dispose of.

I enquired at our local tip (reuse and recycling centre), Space Waye, who informed me that

“waste from home improvements/renovations is classed as industrial waste and is liable for charging.”

Apparently soil from your own garden is also classed as the result of an improvement and is therefore industrial waste, the disposal of which is charged at £195 per tonne. If I were to dispose of the soil and stones at the council facility I would need to transport it there and pay in the region of £156.

Looking for alternatives, I came across HIPPOBAG. The business model is quite simple:

  1. You pick which size of bag you require.
  2. They post it to you (or you can buy it at a few different DIY stores).
  3. You fill it.
  4. You book a collection.
  5. They come and take it away, which they aim to do within 5 working days.
  6. They recycle over 90% of the waste they collect.

Their “MEGABAG” at 180cm long × 90cm wide × 70cm tall and with a maximum weight of 1.5 tonnes seemed the most appropriate, and cost £94.99 — a saving of £61 over the council’s offering, and no need to transport it anywhere ourselves.

The bag turned up in the post the next day, at which point I discovered a discrepancy in the instructions.

The filling instructions on the documentation attached to the bag stated that it should only be filled two thirds with soil, and levelled out. Neither the Frequently Asked Questions page nor the How To Use A Hippobag page say anything about this, and all pictures on the site show the bags filled right up, so I was completely unaware of any such restriction.

Now, I “only” had 800kg of soil but that was some 21 sacks which when placed in the bag did fill it past the top level. I don’t see how you could use the maximum capacity of 1.5t and only fill it two thirds. I was really worried that they weren’t going to carry out the collection.

With the awkward shape of the rubble sacks they weren’t packing that well into the bag. There was a lot of wasted space between them. In the interests of packing down the soil more level we decided to split open many of the sacks so the soil and stones would spread out more evenly.

I had some misgivings about this because if Hippobag decided there was too much to collect and if it were all still in sacks, at least I might have had the option of removing some of the sacks and not entirely waste my money. On the other hand it did look like it would pack down a lot further.

What we were left with was a bag about half to two thirds full of soil and stones with three or four more sacks of it plonked in the middle, no higher than the lip of the bag.

On the evening of Monday 12th I booked a collection. I was expecting to be able to choose a preferred day, but it seems the only option is “as soon as possible”, and

we aim to collect your HIPPOBAG at any time within 5 working days of your booking

So, by Monday 19th then?

I wrote in the “comments to the driver” section that I would definitely be in so they should ring the bell (they don’t need you to be at home to do a collection). I wanted to check everything was okay and ask them about the filling instructions.

The afternoon of Monday 19th came and still no collection. I filled in the contact form on their web site to enquire when it might take place.

At some point on Tuesday 20th May I looked out of our front window and the bag had gone. I hadn’t heard them make the collection and they didn’t ring the bell; it must have happened when I was out in the back garden. They shoved a collection note through our door. My comments to the driver were printed on the bottom, so they must have seen them. I still haven’t received a response to my enquiry. (Update: they did actually reply to my enquiry on the afternoon of Tuesday 20th; I’d missed that at the time this was written.)

Not a big deal since they did perform the collection without issue and only a day later than expected.

Really I still think that council refuse sites should be more open to taking waste like this at no charge, or a lot cheaper than ~£156, if you can prove it is your own domestic waste.

I understand that the council has a limited budget and everyone in the borough would be paying for services that not everyone uses, but I also think there would be far fewer incidents of fly tipping — which the council have to clean up at huge expense to the tax payer.

Compared to having to transport 21 sacks of soil to Space Waye and then pay £156 to have them accept it though, using HIPPOBAG was a lot more convenient and £61 cheaper. It’s a shame about the unclear instructions and slow (so far no) response to enquiries, but we would most likely use them again.

by Andy at May 21, 2014 01:37 AM

May 19, 2014

Tony Whitmore (tonytiger)

Mad Malawi Mountain Mission

This autumn I’m going to Malawi to climb Mount Mulanje. You might not have heard of it, but it’s 3,000m high and the tallest mountain in southern Africa. I will be walking 15 miles a day uphill, carrying a heavy backpack. I will be bitten by mosquitoes and other flying buzzy things. It’ll be hard work, is what I’m saying.

I’m doing this to raise money for AMECA. They’ve built a hospital in Malawi that is completely sustainable and not reliant on charity to keep operating. Adults pay for their treatments and children are treated for free. But AMECA also support nurses from the UK to go and work in the hospital. The people of Malawi get better healthcare and the nurses get valuable experience to bring back to the UK.

And that’s what the money I’m raising will go towards. There are just 15 surgeons in Malawi for 15 million people so the extra support is so valuable.

There have been lots of amazing, generous donors already. My family, friends, colleagues, members of the Ubuntu and FLOSS community, Doctor Who fans, random people off the Internet have all donated. Thank you, everyone. I have been touched by the response. But there’s still a way to go. I have just one month to raise £190. So much has been raised already, but I would love it if you could help push me over my target. Or, if you don’t like me and want to see me suffer, help me reach my target and I’ll be sure to post lots of photos of the injuries I sustain. Either way…

Please donate here.


by Tony at May 19, 2014 04:59 PM

May 18, 2014

Phil Spencer (CrazySpence)

Shutdown button for my Pi

It has been around a year now since my first Pi arrived and that first Pi has always been used as a media centre for my trailer and at home.

The problem being…

I use the remote app on my phone and iPad as the primary control; there is no keyboard attached to the Pi for input. On the rare occasion there is a problem, where either XBMC hangs or the Pi drops off wifi, I have no way to power it off other than unplugging it.

At home, when it is just the wifi problem, no biggie: the physical remote works over HDMI. But at my trailer I have an older TV and this doesn’t work.

Anyone who has used a Pi knows that if you do an unclean shutdown enough times you have to re-image the OS, because the file system damage is not always repairable. With lower quality cards this sometimes even leads to the SD card becoming permanently unbootable, though I have only had that happen with one particular brand.

The solution

So I have some buttons left over from my original arcade project, and I decided to solve the problem by adding a shutdown button to my Pi. This way, even if XBMC locks up, the physical button can still execute a shutdown. It would take a full hardware lock of the Pi to force a power pull again, and the only time I have seen that occur is when overclocking to 1GHz, so this should solve the problem in almost all cases!

Hardware

In this particular case I am using a red Sanwa-style arcade button for the shutdown (red seemed appropriate; I also had blue and white available). I got this button from Adafruit last year, along with the easy connector cable that will be used to connect to the GPIO on the Pi.

I connected the cable to the last pair of pins on the GPIO header, as that was the easiest place to put the jumper and route the cable out through the GPIO slot on my Pibow case.

I tried to keep as much of the cable within the case as possible; this is meant to be functional, not pretty. Maybe I’ll get a smaller button in the future, but this is what I had on hand.

Software

I’m using retrogame to handle the GPIO input detection and also the shutdown activation. Recent versions of retrogame seem to have taken this out but I forked an earlier version that has what I want for my projects.

For the Vulcan nerve pinch code to work with just a single button, I simply modified the bitmask to reference the same button twice instead of two different ones: (1L << 5) | (1L << 5), which reduces to just (1L << 5), so holding that one button for the pinch time triggers the shutdown.
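If you don’t fancy building retrogame just for this, a cruder alternative (not what I’m running, just a sketch) is a shell loop watching the pin through the sysfs GPIO interface. Purely for illustration this assumes a button between GPIO 3 (pin 5) and ground, since that pin has an on-board pull-up, and that the script runs as root:

#!/bin/sh
# Poll a button wired between GPIO 3 and ground. GPIO 3 has a hardware
# pull-up, so the pin reads 1 until the button pulls it to ground.
echo 3 > /sys/class/gpio/export 2>/dev/null
echo in > /sys/class/gpio/gpio3/direction
while true; do
    if [ "$(cat /sys/class/gpio/gpio3/value)" = "0" ]; then
        # require the button to still be held two seconds later
        sleep 2
        if [ "$(cat /sys/class/gpio/gpio3/value)" = "0" ]; then
            shutdown -h now
        fi
    fi
    sleep 1
done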

Results

It works! Hold the button for about 2 seconds and the Pi shuts down. This should hopefully extend the life of my OS and the SD cards.


by KingPhil at May 18, 2014 10:29 PM

May 12, 2014

Tony Whitmore (tonytiger)

Paul Spragg

I was very sorry to hear on Friday that Paul Spragg had passed away suddenly. Paul was an essential part of Big Finish, working tirelessly behind the scenes to make everything keep ticking over. I had the pleasure of meeting him on a number of occasions. I first met him at the recording for Dark Eyes 2. It was my first engagement for Big Finish and I was unsure of what to expect and generally feeling a little nervous. Paul was friendly right from the start and helped me get set up and ready. He even acted as my test subject as I was setting up my dramatic side lights, which is where the photo below comes from. It’s just a snap really, but it’s Paul.

He was always friendly and approachable, and we had a few chats when I was in the studio at other recording sessions. We played tag on the spare room at the studios, which is where interviews are done as well as being a makeshift photography studio. It was always great to bump into him at other events too.

Thanks to his presence on the Big Finish podcast Paul’s voice will be familiar to thousands. His west country accent and catchphrases like “fo-ward” made him popular with podcast listeners, to the extent that there were demands that he travel to conventions to meet them!

My thoughts and condolences go to his family, friends and everyone at Big Finish.

Paul Spragg from Big Finish


by Tony at May 12, 2014 05:30 PM

May 05, 2014

Phil Spencer (CrazySpence)

retrogame.c goodness

Thanks to Adafruit’s continued trek into arcade goodness, they have made some retrogame.c updates over the last few months. When they mentioned that on the Cupcade you could use a 1P+coin combo to send ESC to games (sound familiar?) but also shut down with this combo, I went and checked out the retrogame project on GitHub.

// A "Vulcan nerve pinch" (holding down a specific button combination
// for a few seconds) issues an 'esc' keypress to MAME (which brings up
// an exit menu or quits the current game).  The button combo is
// configured with a bitmask corresponding to elements in the above io[]
// array.  The default value here uses elements 6 and 7 (credit and start
// in the Cupcade pinout).  If you change this, make certain it's a combo
// that's not likely to occur during actual gameplay (i.e. avoid using
// joystick directions or hold-for-rapid-fire buttons).
const unsigned long vulcanMask = (1L << 6) | (1L << 7);
const int           vulcanKey  = KEY_ESC, // Keycode to send
                    vulcanTime = 1500;    // Pinch time in milliseconds

Sure enough this is now in the code. I connected my Arcade console to the network and updated my local repository with the new updates.
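For reference, “updated my local repository” is just the usual pull-and-rebuild dance; the checkout path and make invocation below are assumptions, so adjust for your own setup:

cd ~/Adafruit-Retrogame    # wherever your checkout lives
git pull
make                       # or 'make retrogame', depending on the Makefile
# restart retrogame (or just reboot) so the new build is the one running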

Verdict: Wonderful. My SD cards haven’t been very happy with me the last year but it should be clear sailing from here on! Thanks Adafruit for another useful gem.

 


by KingPhil at May 05, 2014 07:39 PM

Laura Denson (laura)

MSW 2014

Another year, another Maryland Sheep and Wool. I picked up all the fiber below as well as two bobbins for my Wooly Winder. The stuff on the right is heaven. Naturally Chocolate colored Alpaca and Silk (70/30) and it’s heaven to touch and spin. I had some gray two years ago, and last year she […]

by laura at May 05, 2014 12:21 AM

May 03, 2014

Phil Spencer (CrazySpence)

Cupcade

So Adafruit have done it again and got me obsessing over something they are offering: “Cupcade”. Last year it was their basic arcade console tutorial, which spawned my first version here, and then after gaining confidence on that one I made this much more advanced one here.

I do not have this yet; I am in the early phase of obsession where my rational side and my kid side are battling out how to justify it, but you can bet I will within the next few months. There are a few discrepancies in the Cupcade software choices that I have noticed, and resolved myself on my own consoles over the last year, that I would like to explain for others before going on.

The era the Pi can span

In a video blog I was watching this morning regarding Adafruit’s new products here, they claim Dig Dug, Gauntlet, Pac-Man, etc. are the best it can do because the Pi is too slow. This is not true. The Pi can emulate nearly flawlessly up to Street Fighter II; the sound, the speed, everything is 99% perfect with only an occasional pop. You can also do Mortal Kombat 1, and this is where I draw the limit, as MK1’s sound starts to fluctuate a bit more than SFII’s, and when the sound fluctuates in a game you’re pushing your luck, because that is an indication of the emulator skipping to keep up.

In this particular case SFII would not be a good choice for the Cupcade, as it needs six fight buttons, but I am just making a point about the Pi’s capabilities. The best two-button games I have tried are Raiden and Puzzle Bobble, which are also early-90s titles and work perfectly.

The emulator

Now, their preconfigured image comes with AdvanceMAME, and I wouldn’t say this is a bad choice in one way, but it is in another. It is a good choice because it is an up-to-date emulator, and any ROM you run into today should run on it without any problems. It is a bad choice in that it slaughters your CPU cycles, which the Pi cannot afford, and that is most likely why the video blog I mentioned above states the limit is the mid 80s and that the Pi is too slow. I already came to this conclusion last summer, which I blogged about here when I explained my trials and tribulations with the Pi and MAME.

My choice is Mame4All, as it is optimized for mobile platforms including the Pi, and it allows that up-to-Street-Fighter-2 performance I was referring to before. The caveat, of course, is that you need to be able to either convert ROMs back to the version it supports (0.37b5) or know a place to get ROMs of that version (and they do exist).

Mame4All also still supports hiscore.dat, because what’s the point of playing arcade games if you can’t shove your high scores in your friends’ faces?

Back to the goodness

Anyway, I just wanted to clarify those points. The Pi is an excellent emulation platform, and it is a much more cost-effective way of experimenting with arcade builds versus when I was younger, when you still needed to back it with a full-blown PC. The Cupcade looks amazing; I plan to get it and I am sure it will rock. Hurry up with those sticker kits, Adafruit!


by KingPhil at May 03, 2014 03:48 PM

May 01, 2014

Graham Bleach (gdb)

Sorry, Code Club

Over the past few months I have been worrying about a problem that we have in IT; how to find all the people we need to keep the systems we need running.

My experience of trying to hire over the past year is that in London we cannot find enough people with the right skills to look after the kind of modern web systems we are currently building. In contrast, there are more skilled people available who are more tightly focussed on software development, or at least that is my perception, and more attempts to encourage people to program through initiatives like Code Club.

A second, and loosely related, problem that troubles me is that we have the balance too much in favour of re-invention over improvement of existing open source solutions. It was primarily libraries and infrastructure code that I was thinking about; in particular, it is the difficulty of reusing them that tempts people to write local implementations.

This morning I woke up and flippantly tweeted the first thing that came into my head conflating both subjects, and was justifiably and politely admonished by colleagues and friends. There are a number of things I regret about this, but most importantly, including Code Club was a mistake; it is a useful and fun project that gets kids involved in tech. These kids might well go on to be top-notch systems-savvy developers or ops people, rather than programmers.

It was also a mistake to suggest, as I did, that there is somehow a decision to be made between only building new software and only maintaining the existing software, and that we should only do maintenance. There is not; we absolutely need to build new things, and those things also need to be maintained.

I regret the tone of a hectoring authority figure, or “grumpy ops guy” as Michael Brunton-Spall put it later on in our conversation as I tried to explain what I meant.

What I really should have pointed out was the need for something like Tech Ops Club in addition to Code Club, rather than in competition. Sorry, Code Club.

May 01, 2014 11:00 PM

April 28, 2014

Tony Whitmore (tonytiger)

The Worlds of Doctor Who

My friends over at Big Finish are celebrating fifteen years of producing Doctor Who audio drama this year. To mark the occasion, they will be releasing a special box set called “The Worlds of Doctor Who.” The set comprises four stories with a linking story arc, with each story based around one of the Doctor Who spin-off series that Big Finish have been so successful at producing: Jago and Litefoot, Countermeasures, The Vault and Gallifrey.

The Worlds of Doctor Who box set cover

I have been doing some of the photography for the box set and it has been a pleasure attending the recording sessions over the past couple of months. I’m sure I will be writing more about it in the future, but the cover for the box set was released yesterday and the central image is one I took of actor Jamie Glover who plays the evil Mr Rees throughout the four stories. His costume was added by a very clever graphic designer though!

Jamie Glover as Mr Rees for Big Finish Doctor Who

“The Worlds of Doctor Who” is available for pre-order now from the Big Finish website. You might also have seen some of my photographs from Big Finish Day 4 in Vortex #61, with a very nice double spread of my images showing what good fun the day was for guests and attendees.


by Tony at April 28, 2014 05:33 PM

April 15, 2014

Andy Smith (grifferz)

On attempting to become a customer of Metro Bank

On the morning of Saturday 12th April 2014 I visited the Kingston Upon Thames store of Metro Bank in an attempt to open a current account.

The store was open — they are open 7 days a week — but largely empty. There was a single member of staff visible, sat down at a desk with a customer.

I walked up to a deserted front desk and heard footsteps behind me. I turned to be greeted by that same member of staff who had obviously spotted I was looking a bit lost and come to greet me. He apologised that no one had greeted me, introduced himself, asked my name and what he could help me with. After explaining that I wanted to open a current account he said that someone would be with me very soon.

Within a few seconds another member of staff greeted me and asked me to come over to her desk. So far so good.

As she started to take my details I could see she was having problems with her computer. She kept saying it was so slow and made various other inaudible curses under her breath. She took my passport and said she was going to scan it, but from what I could see she merely photocopied it. Having no joy with her computer, she said that she would fill in paper forms, and proceeded to ask me for all of my details, writing them down on the forms. Her writing was probably neater than mine, but this kind of dictation was rather tedious and to be quite honest I’d rather have done it myself.

This process took at least half an hour. I was rather disappointed as all their marketing boasts of same day quick online setup, get your bank details and debit card same day and so on.

Finally she went back to her computer, and then said, “oh dear, it’s come back saying it needs head office approval, so we won’t be able to open this right now. Would you be available to come back later today?”

“No, I’m busy for the rest of the day. To be honest I was expecting all this to be done online as I’m not really into visiting banks even if they are open 7 days a week…”

“Oh that’s alright, once it’s sorted out we should be able to post all the things to you.”

“Right.”

“This hardly ever happens. I don’t know why it’s happened. Even if I knew I wouldn’t be able to tell you. It’s rare but I have to wait for head office to approve the account.”

As she went off to sort something else out I overheard the conversation between the customer and staff member on the next table. He was telling the customer how his savings account couldn’t be opened today because it needed head office approval and it was very rare that this would happen.

I left feeling I had not achieved very much, but hopeful that it might get sorted out soon. It wasn’t a very encouraging start to my relationship with Metro Bank.

It’s now Tuesday 15th April, three days after my application was made or two working days, and I haven’t had any further communication from Metro Bank so I have no idea if my account is ever going to be opened. I don’t really have any motivation to chase them up. If I don’t hear soon then I’ll just go somewhere else.

I suppose in theory a bank branch that is open 7 days a week might be useful for technophobes who don’t use the Internet, but if the bank’s systems don’t work then all you’ve achieved is to have a large high street box full of people employed to tell you that everything is broken. Until 8pm seven days a week.

Update 2014-04-15 15:30: After contact on twitter, the Local Director of the Kingston branch called me to apologise and assure me that he is looking into the matter.

About 15 minutes later he called back to explain, roughly:

The reason the account was not approved on the day is that I’ve only been in my current address for 7 months, so none of the proofs of address would have been accepted. Under normal circumstances it is apparently possible to open an account with just a passport. If not then the head office approval or rejection should happen within 24 hours, but their systems are running a bit slowly. Someone should have called me to let me know this, but this did not happen. Apparently approval did in fact come through today – I am told someone was due to call me today with the news that my account has been opened. I should receive the card and cheque book tomorrow.

I’m glad this was so quickly resolved. I’m looking forward to using my account and hopefully everything will be smoother now.

by Andy at April 15, 2014 11:51 AM

April 12, 2014

Laura Denson (laura)

2014 Parsec Award nominations

So I seem to be eligible for a 2014 Parsec award for my readings for Pseudopod. If you are the nominating type I am eligible in two categories. Best Speculative Fiction Story: Large Cast (Short Form) for Pseudopod 369: Four Views Of The Big Cigar In Winter by Charlie Bookout http://pseudopod.org/2014/01/17/pseudopod-369-four-views-of-the-big-cigar-in-winter/ and Best Speculative Fiction […]

by laura at April 12, 2014 11:06 PM

April 10, 2014

BitFolk Issue Tracker

Panel - Feature #117: Two factor auth for https://panel.bitfolk.com/

(That is, a clear improvement to just having a regular password login.)

by halleck at April 10, 2014 04:00 PM