Setting up 32-bit Google Earth on 64-bit Ubuntu/Debian

Posted in Uncategorized on November 4, 2013 by voline

I remember the Google Earth 6.0 days.  It was rock solid.  Worked as advertised.  Then 7.0 rolled around and all I got was grief.  To be fair, it may have less to do with the version number than with the fact that I’m now trying to run the 64-bit version.

So, why can’t one just run 64-bit Google Earth on 64-bit Ubuntu?  That would be ideal.  However, it is my experience, and has been widely reported, that the 64-bit version is highly unstable.  I can’t get it to run for longer than 10 seconds.  It’s also widely reported that the 32-bit version is fairly stable.


Ok, so installing the 32-bit version should be easy, if it were packaged well.  Namely, it only depends on lsb-core, which is not architecture dependent.  So when one uses dpkg to install the 32-bit deb on 64-bit Debian, everything installs without complaint.  But then Google Earth will refuse to run because it can’t find the 32-bit libs (mostly x11 libs).

Luckily, it’s pretty easy to install the 32-bit version of a package with apt: just append :i386 to the end of the package name when using apt-get.  The next trick is figuring out exactly which packages are needed for the binary to run, which isn’t difficult.
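One way to figure that out is to run ldd against the binary and look for unresolved libraries.  The path below assumes the deb’s usual install location under /opt/google/earth/free; adjust it if your install differs:

```shell
# List the shared libraries the 32-bit binary cannot resolve.
# Each "not found" line points at a 32-bit package you still need.
ldd /opt/google/earth/free/googleearth-bin | grep "not found"
```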


So, finally, here’s the simple solution:

apt-get install libglu1-mesa:i386 libgl1-mesa-dri:i386

After which, Google Earth should run just fine.  Make sure that you’ve installed the 32-bit deb and removed the 64-bit one if necessary.

Encrypted, persistent storage on Ubuntu livecd

Posted in Uncategorized on September 12, 2013 by voline

When my system fails to boot, I have a rescue usb stick with an Ubuntu livecd which can be loopback booted to be used as a rescue system.  Sometimes fixing the issue requires trial-and-error and several boots.  This can cause headaches, as the livecd is not persistent by default.  So if there are several web pages I’m perusing to help solve the problem, they’re gone on the next boot.

Persistent livecd

Good news is that the Ubuntu livecd already supports persistence for a livecd session; the bad news is you have to configure it.  And while it’s a slight pain in the ass, I’ve not found it too difficult.  I like the method whereby a file named casper-rw, formatted as an ext3 filesystem, is written to the FAT filesystem on the usb stick.  This makes it easier to resize the persistent storage should I run out of space (I abhor manipulating partition tables, and wish they would just go away!).  The biggest pain is that the newer livecds don’t have a grub entry which tells the livecd to boot with persistence, so it must be added each time you boot the livecd.  This is old news though, and has been known about and done for a long time.
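For reference, creating the casper-rw file itself might look something like this (the 1 GiB size and the /media/usb mount point are examples; run as root):

```shell
# Create a 1 GiB casper-rw file on the usb stick's FAT filesystem
# and put an ext3 filesystem inside it.
dd if=/dev/zero of=/media/usb/casper-rw bs=1M count=1024
mkfs.ext3 -F /media/usb/casper-rw
```

Should you run out of space later, you can append zeros to the file with dd and then run e2fsck -f and resize2fs on it, which is exactly the advantage over a real partition.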

No Encryption

Now let’s say you’re in the middle of trying to figure out why your computer isn’t booting, and determine you need to buy a new harddrive.  Since you’re in your persistent livecd session you can just go to your favorite online retailer and order one up.  That’s all fine and good, until several days later you realize that your usb stick has been lost, and it has the password to your online account stored in the firefox profile on the persistent storage.  We wouldn’t want that getting into the wrong hands.  It would be nice if the persistent storage were encrypted so that, regardless of whether there was important data on there, it wouldn’t be accessible to the world should it fall into the wrong hands.


There are two obvious methods for encrypting the persistent data:

  1. Use encryption at the filesystem layer, such as ecryptfs, which Ubuntu uses for its “Encrypt Home” feature
  2. Use block-level encryption to encrypt the whole casper-rw block device.

I won’t go much into the first method because I prefer block-layer encryption to filesystem encryption, partly because filesystem encryption has some information leakage (such as the number of files).  However, in this context ecryptfs does have some potential benefits over block-layer encryption.  For one, it requires no additional support in the livecd.  All you need to do is follow one of the many recipes for using this feature in Ubuntu.

The second method involves encrypting the whole block device with a block-level encryption system such as LUKS, which is the linux standard for such things.  Unfortunately, this requires additional support in the livecd for unlocking the device at boot.  Fortunately, the heavy lifting has already been done in this patch to the initrd of the raring desktop iso (ubuntu bug).

The Solution

So until Ubuntu can get this integrated into their iso, here’s how to modify the current iso to add encrypted persistence support.

  1. Download the Ubuntu iso and initrd patch.
  2. Download the edit_iso script and chmod +x it.
  3. $ edit_iso <iso file>
  4. edit the initrd
  5. $ patch -p1 < <initrd patch>
    1. Make sure that the patch applies successfully!
  6. add extra crypto modules if desired (the iso by default only comes with aes)
    1. $ rsync -uavSP {,.}/lib/modules/*/kernel/crypto
    2. $ rsync -uavSP {,.}/lib/modules/*/kernel/arch/x86/crypto
  7. $ exit ### to build the new initrd and resume editing the iso
  8. edit grub config
    1. Add “persistent” to the linux command.
  9. finish the other edit_iso questions, with defaults if desired.
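The grub edit in the steps above amounts to appending persistent to the linux line of the live entry.  Something like the following, though the exact paths vary between iso releases, so treat this as illustrative:

```
menuentry "Try Ubuntu (persistent)" {
    linux /casper/vmlinuz boot=casper persistent quiet splash ---
    initrd /casper/initrd.lz
}
```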

Now you should have another iso that you can loopback boot from as you did the original iso, except that this one will boot with luks-encrypted, persistent storage.

NOTE: The luks-encrypted device must have a password slot.  Currently there is no way to use a keyfile, and storing a keyfile on the USB would effectively nullify the encryption.  Also, the device must be a file named casper-rw.  It cannot be a partition on the usb stick.  This is because there would be no way for the livecd to know which luks-encrypted partition to use (in the case of multiple).  Without encryption, the livecd will search for the persistent storage by looking for a file named casper-rw or a partition with a filesystem labeled “casper-rw”.  LUKS devices do not allow tagging or adding of labels (unless you count some UUID scheme).
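Creating a suitable luks-encrypted casper-rw file might look roughly like this (run as root; the /media/usb mount point and the 1 GiB size are assumptions for the example):

```shell
# Create the container file on the usb stick's FAT filesystem.
dd if=/dev/zero of=/media/usb/casper-rw bs=1M count=1024
# Format it as LUKS; the passphrase you're prompted for fills the password slot.
cryptsetup luksFormat /media/usb/casper-rw
# Open the container and put an ext3 filesystem inside it.
cryptsetup luksOpen /media/usb/casper-rw casper-rw
mkfs.ext3 /dev/mapper/casper-rw
cryptsetup luksClose casper-rw
```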


Connecting to Freenode via Tor using Irssi+SASL

Posted in Uncategorized on July 21, 2013 by voline

Freenode has pretty good instructions on how to connect to their IRC network via Tor with Irssi.  It requires SASL, which seems to imply that you can’t use Tor if you haven’t registered via the clearnet.

Anyhow, the current version of the irssi plugin script linked on that page requires an abandoned and inefficient (not that it matters in this case) perl package, Crypt::DH, if you want to send your password in encrypted form.  If you’re just sending the password as plaintext, then it doesn’t matter.

Ubuntu doesn’t carry the package for Crypt::DH anymore, but it does have the newer Crypt::DH::GMP, which comes with a compatibility module.  The problem is that this compatibility module isn’t 100% backwards compatible.  So I’ve modified the outdated script from Freenode to support using this newer module, but to fall back to trying the old Crypt::DH.

To use it, just follow the same instructions in freenode’s tutorial using the script below.  This is a drop-in replacement.

Encrypted filesystem on MEGA

Posted in Uncategorized on May 18, 2013 by voline

In this article, I will describe a method for creating and sharing, on different systems, an encrypted directory in a MEGA account (henceforth “mega”).


You might be asking why one would want to store encrypted files on a system that provides end-to-end encryption and stores the file data internally encrypted.  The problem boils down to trust.  I’m not ready to trust mega, even though they might be trustworthy.  Their encryption design is relatively new, and this is a reason to be wary, as it hasn’t been time tested.  I’d prefer to use solutions that have been around a while and are used by many.


The idea behind the implementation is to use a FUSE filesystem to access mega and then layer an encrypted filesystem on top of that.  This is basically the technique I use for encrypting files I store on dropbox (see here and here).  There is a key difference between how I currently use dropbox, which gives me 2GB, and how I plan to use mega, 50GB.  I’d like to back up large amounts of data to mega encrypted and be able to access that data from potentially any computer around the world.  However, I don’t want to delete the originals; I want to keep them unencrypted on the local disk where they currently reside.  Since I don’t want to keep two copies of the data locally (an encrypted version and the originals), I want a solution that takes the existing unencrypted directory of originals and gives me an easy way to map it into the cipher text of the encrypted filesystem.

In linux there are two major layerable encryption filesystems: ecryptfs and encfs.  I currently use ecryptfs with dropbox, and it seems like the more mature and efficient solution.  However, it does not provide the reverse (plain text -> cipher text) functionality mentioned in the paragraph above.  This was requested as a feature in 2009, but the author expressed little interest in it and has since closed the request as “WON’T FIX”, despite offering to help a motivated volunteer.  So that discounts ecryptfs.  Luckily, encfs does have reverse functionality with the --reverse option.

The other loose end here is a fuse filesystem for mega.  For this, I will be using the megatools (ppa) utilities.

Putting it all together

Here’s a step-by-step procedure.  I assume a mega account is already set up.

Create the reverse mapping on the computer with the originals

mkdir /tmp/MegaDir.enc
encfs --reverse /path/to/data/to/backup /tmp/MegaDir.enc

Sync the encrypted files to mega

megasync -u  <username> -p <password> \
         --local /tmp/MegaDir.enc --remote /Root/<some subdir>
  • You can sync only a subset of the fs tree rooted at /tmp/MegaDir.enc, and may sync to any directory under the mega /Root directory

Retrieving unencrypted files

Now you want to view these files unencrypted on some other computer.  First install the megatools programs.  Then you may use the megafs program to mount the mega account on the local filesystem, and then layer the encfs filesystem over the encrypted directory to decrypt the files.  You will also need the encfs configuration file that was automatically generated above.

scp <host>:/path/to/data/to/backup/.encfs6.xml \
    $HOME/megafs-encfs6.xml
mkdir /media/megafs /media/megafs.encfs
megafs -u <username> -p <password> \
       --reload /media/megafs
export ENCFS6_CONFIG=$HOME/megafs-encfs6.xml
encfs /media/megafs/path/to/encrypted/directory \
      /media/megafs.encfs
  • Currently megafs does not support read/write on files, so you can only get a directory listing.  Not so useful.  However, there is an ubuntu ppa with packages patched to allow read support for megafs (source).

And presto!  You’ve got decrypted access to your data.  Make sure you store your password in a safe place and backed up!

NOTE: The process is very similar for ecryptfs.

Running Rebtel One-Click application in wine

Posted in Uncategorized on May 11, 2013 by voline

This was tested on wine 1.5.29 with the Oct 12, 2012 snapshot of winetricks.  Download the Rebtel PC application installer, then:

  1. WINEARCH=win32 WINEPREFIX=$HOME/.wine-rebtel winetricks dotnet40
    • The wine install instructions say to install .NET Framework 4.0 in a clean wine prefix using winetricks, and it must be 32-bit (thus the use of WINEARCH).
  2. WINEARCH=win32 WINEPREFIX=$HOME/.wine-rebtel wine <path to setup installer>/Rebtel-PC-setup.exe

Next comes the problem of starting the application again.  The downloaded exe file is stored in some horrendous path, which probably changes each time an updated exe is downloaded.  If you can find the exe, you can give that to wine and it will run the application.  Not so nice though.

There are reports that using wine to run “rundll32.exe dfshim.dll,ShOpenVerbShortcut C:/users/crass/Start Menu/Programs/Rebtel/Rebtel.appref-ms” should open the application.  However, this always gives an error about the url being invalid.  Likewise, if I do “rundll32.exe dfshim.dll,ShOpenVerbApplication <application url found in appref-ms file>”, I get another error about the config being different than the installed config (which probably has something to do with the params in the appref-ms file that I’m not using).

Then I figured out I could first run “explorer” and then double click on the desktop icon created. Still this was more steps than I felt necessary. So I tried passing the appref-ms shortcut file on the desktop to explorer and voila!  Here’s the exact command which can be put into a wrapper script to easily run from the command line (changes needed for your specific installation of course).

WINEARCH=win32 WINEPREFIX=$HOME/.wine-rebtel wine explorer '%USERPROFILE%\Start Menu\Programs\Rebtel\Rebtel.appref-ms'
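For example, a small wrapper script could look like this (the ~/bin/rebtel name and location are my choice; adjust the appref-ms path for your specific installation):

```shell
# Install a launcher at ~/bin/rebtel that starts Rebtel in its own wine prefix.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/rebtel" <<'EOF'
#!/bin/sh
# Launch Rebtel via wine's explorer, using the dedicated 32-bit prefix.
export WINEARCH=win32
export WINEPREFIX=$HOME/.wine-rebtel
exec wine explorer '%USERPROFILE%\Start Menu\Programs\Rebtel\Rebtel.appref-ms'
EOF
chmod +x "$HOME/bin/rebtel"
```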

Now the problem I’m having is that the rebtel app crashes frequently and non-deterministically.  It seems to be related to use of the gui.  But I’ve successfully made an uninterrupted call for a couple minutes without the app crashing.

Another unresolved issue is that the program will not close.  Clicking the close button or selecting the close item in the window menu causes the window to disappear and reappear.  I think it’s trying to minimize to the tray, but I can’t get it to show up there in Unity.

Fix: Session Manager extension missing menu items in Firefox

Posted in Uncategorized on April 5, 2013 by voline

For some time now, most of my Firefox profiles have been missing menu items for the Session Manager extension.  When going to the sub-menu Tools -> Session Manager, the usual menu items like “Load Session”, “Save Session”, “Delete Session”, etc. were not there.  This severely limited my session management ability.

Curiously, this wasn’t happening with my main profile.  I looked into it a bit, but didn’t find a solution and decided to live without it on my other profiles.

Then I ran across this bug while searching the session manager bug site.  I tried turning off the Ubuntu Firefox integration extension to see if that would fix it, but it didn’t.  Then I tried setting the extension property “extensions.{1280606b-2510-4fe0-97ef-9b5a22eafe30}.no_splitmenu” to true.  Lo and behold, that did the trick!

I still can’t explain why my main profile has the menu items, because that setting is false there.  And it’s a PITA to remember to do this on newly created profiles.
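One way to avoid redoing it by hand on each new profile is to drop the pref into a user.js file in the profile directory (this assumes Session Manager keeps the same extension id):

```js
// user.js in the Firefox profile directory
user_pref("extensions.{1280606b-2510-4fe0-97ef-9b5a22eafe30}.no_splitmenu", true);
```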

Where’s the money Lebowski: bypassing the page cache

Posted in Uncategorized on March 2, 2013 by voline

Into the wild

I had a bit of a scare today.  I’ve been rearranging some partitions on my laptop hard drive without a backup (I don’t have the extra storage).  Not for the faint of heart!  I would have much more peace of mind using a partition editor like gparted, where I assume it’s been fairly well tested.  However, gparted won’t move a partition unless it contains one of its supported filesystems, and it won’t move a partition across another partition, both of which I need to do.  These would be great additions to the tool!

Using ddpt

I’ve been using a variant of dd called ddpt to do the low-level copying of data blocks.  Ddpt can speak to the drive via scsi, so I figured I could get better read/write rates (turns out, maybe not that much better).  There were several moves to be completed and partition juggling to be had.  I was on the last move, which was shifting a 500GB+ partition closer to the beginning of the disk.  Before doing the move I verified that the destination block address had the content I was expecting, just so I was sure I was overwriting what I thought I was.  I then set it off and left, to come back in 5 hours when it should have completed.

On overlapping partition moves

Incidentally, this process is a bit scarier because this is an overlapping move (and remember, with dd and friends this can only be a shift to the left, else you corrupt your data).  That is to say, some of the input contents will be overwritten as the copy takes place.  If for some reason the copy is interrupted after some of the input has been overwritten, you’ve got a huge mess on your hands.  If you just restart the copy, you will corrupt your data.  The situation is not hopeless, but it’s potentially very time consuming to fix.  Basically, you need to run a moving window, the size of the distance between the read and write offsets, from the beginning, sector by sector.  If the sectors from the beginning are equal (make sure to check the first several sectors), you can start the move from the beginning as before.  Otherwise, if the sectors at the ends of the window are not equal, then the copy has started overwriting the original partition (and thus your partition is currently in a useless state).  Just continue moving the window down until the sectors start matching.  While the sectors are matching, continue moving the window until they stop matching.  The length of the matching run should be equal to the size of the window (by some coincidence there could be matching sectors at the ends of the window which make the matching run larger than the window, but it doesn’t matter; the size should never be less than the window size).  When you come to the mismatching sectors again, this is the point where you may resume your sector move.
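This is easier to see in code than in prose, so here’s a rough sketch of that window scan as a shell function.  It is slow (one dd per sector), trusts md5sum comparisons, and doesn’t guard against coincidental matches in old garbage data, so treat it as an illustration of the idea rather than a recovery tool:

```shell
# find_resume FILE DST SRC LEN -- scan for the point where an interrupted
# overlapping shift-left copy (sectors SRC..SRC+LEN moving to DST) stopped.
# All arguments are in 512-byte sectors.  Prints the relative sector at
# which the copy may safely be resumed.
find_resume() {
    file=$1; dst=$2; src=$3; len=$4
    i=0; state=prematch
    while [ "$i" -lt "$len" ]; do
        a=$(dd if="$file" bs=512 skip=$((dst + i)) count=1 2>/dev/null | md5sum)
        b=$(dd if="$file" bs=512 skip=$((src + i)) count=1 2>/dev/null | md5sum)
        if [ "$a" = "$b" ]; then
            # inside the matching window: this stretch was already copied
            state=matching
        elif [ "$state" = matching ]; then
            # first mismatch after the matching run: the resume point
            echo "resume copy at relative sector $i"
            return 0
        fi
        i=$((i + 1))
    done
    echo "no overwrite detected; restart from the beginning"
}
```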

Heart attack!

When I came back to the completed move, I immediately ran some sanity tests before assuming everything went well, to avoid potentially making things worse.  The first thing I do is check the first destination block to make sure it has the filesystem header I expect.  This shouldn’t really be necessary… but wait, it hasn’t changed!  Ok, remain calm with your seats in the upright position.  Don’t panic.  Let’s see if the first sector of the source has changed… it hasn’t!  Ok, what’s going on here?  My first thought is that if nothing has changed I can just restart the move from the beginning.  Don’t be too hasty: why didn’t ddpt report any errors and exit?  Instead it ran for the full length and exited normally.  So ddpt thinks everything is good, which means that the drive must not have errored.  Thus the drive must have executed all those writes successfully.


Then it hit me: caching!  When reading the blocks, I was not running ddpt in pt mode (i.e. not using the scsi layer).  So I was getting blocks from the kernel’s page cache, which might have those blocks cached if I’d recently read them (and I had).  When writing, I was using pt mode, which necessarily bypasses the kernel page cache.  Searching through the ddpt documentation, I found the fua and fua_nv bits.  The description wasn’t helpful enough for me to fully understand the implications of using them, but I could tell they might be useful.  Time to dust off the SCSI spec (SBC-3 5.8 table 40) and see what it says.  Since I wasn’t completely sure that the volatile cache on the disk was right either, I set FUA=0 and FUA_NV=1 to get the block from non-volatile cache or directly from the media.  Lo and behold!  The sectors were as they should be, according to the drive!  Ok, but where do we go from here?
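The read that finally told the truth looked something like this.  The flag syntax here is a sketch from memory, so check ddpt’s man page for the exact spelling; the device and sector number are placeholders:

```shell
# Read one sector via the scsi pass-through with the FUA_NV bit set, so the
# data comes from the drive's non-volatile cache or the media itself,
# bypassing the kernel page cache entirely.
ddpt if=/dev/sdX iflag=pt,fua_nv bs=512 skip=123456 count=1 of=/tmp/sector.bin
xxd /tmp/sector.bin | head
```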

Dumping the cache (the cops are on our tail!)

After doing a few quick googles, I found that you can tell linux to drop the pages in its cache (if they aren’t dirty!).  My biggest concern now was: what if it causes the blocks to be written to disk before dropping them?  Then you end up with blood all over your money, which makes it completely unusable.  Now, I wouldn’t expect pages to get written back to disk from the cache unless they were dirty (if the kernel thinks nothing has changed, why would it write the same data to disk that’s already there?).  Some more looking around led me to /proc/sys/vm/dirty_writeback_centisecs, which determines roughly how long a dirty page will stay in the cache before it’s written back to disk.  By default this is 5 seconds.  So by the time I was running these sanity check commands, any dirty block should already have been written to disk.  In fact, since the only thing writing to the disk was not going through the page cache, it should have been a very long time since there was a dirty page destined for the sectors I cared about.

Time to pull the trigger.  Done and done.  After telling linux to drop the pages in the page cache (no need to have it drop inodes or dentries, since there was no filesystem associated with those blocks), the sectors from the disk return what they should when going through the page cache.  Mount readonly and fsck: everything is fine…  Whew!
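For reference, the actual knob (needs root):

```shell
# Drop clean pagecache pages; dirty pages are skipped, not written back.
# echo 2 would instead drop dentries and inodes; echo 3 does both.
echo 1 > /proc/sys/vm/drop_caches
```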

Looking back…

I don’t think the disk’s volatile cache could have been inconsistent with the media.  So I shouldn’t have needed the FUA or FUA_NV bits.  Running in pt mode should have been sufficient.