A friend was looking for a way to list the space usage on a Windows server that only had FTP access. I had written something similar for a project long ago, and polished it up to do the job.

This Python script will walk an FTP directory in a top-down, depth-first pattern. It uses the ftplib library, which I believe is built in to most or all Python distributions. Configure the FTP_* variables near the top to set the server, port, user, password, and the delay between each FTP operation (to avoid hammering the server). The script recursively processes directories, building a dirStruct tuple that contains the following items:

(pwd, subdirList, fileList, sizeInFilesHere, sizeTotal)
    pwd is a string like "/debian/dists/experimental"
    subdirList is a list of tuples just like this one
    fileList is a list of (filename, sizeInBytes) tuples
    sizeInFilesHere is a sum of all the files in this directory
    sizeTotal is a sum of all the files in this directory and all subdirectories

It also writes data to two CSV files:

  • dirStruct_only_folders.csv
    • Contains entries for just the directories.
    • Local size is the total size of files in that folder (does not count subdirs).
    • Total size is the sum of local size and total size of all subdirs.
  • dirStruct_complete.csv
    • Contains entries for both files and folders.
    • Files do not have a total size, only a local size.
#!/usr/bin/env python
#
# A script to recursively walk an FTP server directory structure, recording information
# about the file and directory sizes as it traverses the folders.
#
# Stores output in two CSV files:
#  dirStruct_only_folders.csv
#     Contains entries for just the directories.
#     Local size is the total size of files in that folder (does not count subdirs).
#     Total size is the sum of local size and total size of all subdirs.
#  dirStruct_complete.csv
#     Contains entries for both files and folders.
#     Files do not have a total size, only a local size.
#
# Customize the FTP_* variables below.
#
# Basically does a depth-first search.
#
# Written by Matthew L Beckler, matthew at mbeckler dot org.
# Released into the public domain, do whatever you like with this.
# Email me if you like the script or have suggestions to improve it.

from ftplib import FTP
from time import sleep


FTP_SERVER = "ftp.debian.org"
FTP_PORT = "21" # 21 is the default
FTP_USER = "" # leave empty for anon FTP server
FTP_PASS = ""
FTP_DELAY = 1 # how long to wait between calls to the ftp server

def parseListLine(line):
   # Files look like          "-rw-r--r--    1 1176     1176       176158 Mar 30 01:52 README.mirrors.html"
   # Directories look like    "drwxr-sr-x   15 1176     1176         4096 Feb 15 09:22 dists"
   # Returns (name, isDir, sizeBytes)
   # Split into at most 9 fields so that filenames containing spaces stay intact.
   items = line.split(None, 8)
   return (items[8], items[0][0] == "d", int(items[4]))

# Since the silly ftp library makes us use a callback to handle each line of text from the server,
# we have a global lines buffer. Clear the buffer variable before doing each call.
lines = []
def appendLine(line):
   global lines
   lines.append(line)
def getListingParsed(ftp):
   """ This is a sensible interface to the silly line getting system. Returns a copy of the directory listing, parsed. """
   global lines
   lines = []
   ftp.dir(appendLine)
   myLines = lines[:]
   parsedLines = map(parseListLine, myLines)
   return parsedLines
   
def descendDirectories(ftp):
   # Will return a tuple for the current ftp directory, like this:
   # (pwd, subdirList, fileList, sizeInFilesHere, sizeTotal)
   #     pwd is a string like "/debian/dists/experimental"
   #     subdirList is a list of tuples just like this one
   #     fileList is a list of (filename, sizeInBytes) tuples
   #     sizeInFilesHere is a sum of all the files in this directory
   #     sizeTotal is a sum of all the files in this directory and all subdirectories

   sleep(FTP_DELAY) # be a nice client

   # make our directory structure to return
   pwd = ftp.pwd()
   subdirList = []
   fileList = []
   sizeInFilesHere = 0
   sizeTotal = 0

   print pwd + "/"
   items = getListingParsed(ftp)
   for name, isDir, sizeBytes in items:
      if not isDir:
         fileList.append( (name, sizeBytes) )
         sizeInFilesHere += sizeBytes
      else:
         # is a directory, so recurse
         ftp.cwd(name)
         struct = descendDirectories(ftp)
         ftp.cwd("..")
         subdirList.append(struct)
         sizeTotal += struct[4]

   # add in the size of all files here to sizeTotal
   sizeTotal += sizeInFilesHere
   return (pwd, subdirList, fileList, sizeInFilesHere, sizeTotal)

def pprintBytes(b):
   """ Pretty prints a number of bytes with a proper suffix, like K, M, G, T. """
   suffixes = ["", "K", "M", "G", "T", "?"]
   ix = 0
   while (b > 1024):
      b /= 1024.0
      ix += 1
   s = suffixes[min(len(suffixes) - 1, ix)]
   if int(b) == b:
      return "%d%s" % (b, s)
   else:
      return "%.1f%s" % (b, s)

def pprintDirStruct(dirStruct):
   """ Pretty print the directory structure. RECURSIVE FUNCTION! """
   print "{}/ ({} in {} files here, {} total)".format(dirStruct[0], pprintBytes(dirStruct[3]), len(dirStruct[2]), pprintBytes(dirStruct[4]))
   for ds in dirStruct[1]:
      pprintDirStruct(ds)

def saveDirStructToCSV(dirStruct, fid, includeFiles):
   """ Save the directory structure to a CSV file. RECURSIVE FUNCTION! """
   # Info about this directory itself
   fid.write("\"{}/\",{},{}\n".format(dirStruct[0], dirStruct[3], dirStruct[4]))
   pwd = dirStruct[0]

   # Info about files here
   if includeFiles:
      for name, size in dirStruct[2]:
         fid.write("\"{}\",{},\n".format(pwd + "/" + name, size))

   # Info about dirs here, recurse
   for ds in dirStruct[1]:
      saveDirStructToCSV(ds, fid, includeFiles)

print "Connecting to FTP server '%s' port %s..." % (FTP_SERVER, FTP_PORT)
ftp = FTP()
ftp.connect(FTP_SERVER, FTP_PORT)
if FTP_USER == "":
   ftp.login()
else:
   ftp.login(FTP_USER, FTP_PASS)

print "Walking directory structure..."
dirStruct = descendDirectories(ftp)

print ""
print "Finished descending directories, here is the info:"
pprintDirStruct(dirStruct)
print ""

FILENAME = "dirStruct_complete.csv"
print "Saving complete directory info (files and folders) to a CSV file: '%s'" % FILENAME
with open(FILENAME, "w") as fid:
   fid.write("\"Path\",\"Local size\",\"Total size\"\n")
   saveDirStructToCSV(dirStruct, fid, includeFiles=True)

FILENAME = "dirStruct_only_folders.csv"
print "Saving directory info (only folders) to a CSV file: '%s'" % FILENAME
with open(FILENAME, "w") as fid:
   fid.write("\"Path\",\"Local size\",\"Total size\"\n")
   saveDirStructToCSV(dirStruct, fid, includeFiles=False)

Sample CSV output:

"Path","Local size","Total size"
"/plugins/",5426535,7594527
"/plugins/foo-1.1.jar",7774,
"/plugins/CHANGELOG.txt",45169,

Local size is just the size of the file itself, or the size of all files in a directory. Total size is the total size of the files in a directory plus the total sizes of all subdirectories. Files do not have a total size entry.
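If you want to quickly find the biggest directories, the folders-only CSV is easy to slice with Python’s built-in csv module. Here’s a minimal sketch (the column names come from the header row the script writes):

import csv

# load the folders-only CSV and show the ten largest directories by total size
with open("dirStruct_only_folders.csv") as fid:
   rows = list(csv.DictReader(fid))

rows.sort(key=lambda row: int(row["Total size"]), reverse=True)
for row in rows[:10]:
   print row["Path"], row["Total size"]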

I recently discovered that two of the hard drives in my server had a firmware bug that could lead to silent data loss when used with smartd (and other programs). Fortunately, there is a firmware update. Unfortunately, it doesn’t change the drive’s reported firmware revision, so you can’t tell if the update has been applied already…

To run the update, the instructions say to “Save the .exe files to a bootable media”. They don’t provide any more details than that, but apparently it needs to be a DOS boot disk/USB drive. The link below provides an easy way to take a pre-generated FreeDOS image and dd it to a USB drive. Once you do that, you can mount the drive and copy the update files to the FAT partition.

FreeDOS prebuilt bootable USB flash drive image – http://chtaube.eu/computers/freedos/bootable-usb/
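The rough shape of that process is something like this (just a sketch: the image filename is a placeholder for whatever you download from that page, FIRMWARE.EXE stands in for the drive vendor’s updater, and /dev/sdX is a placeholder for your USB drive, so double-check the device name with lsblk before writing to it):

sudo dd if=freedos-usb-image.img of=/dev/sdX bs=1M
sync
sudo mount /dev/sdX1 /mnt
sudo cp FIRMWARE.EXE /mnt/
sudo umount /mnt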

As part of migrating to a new account at Dreamhost, I had to regenerate my blog from an old SQL dump backup email. Not quite as easy as from an actual WordPress export, but I made it work. However, something got messed up with the RSS feed. Turns out my old blog install had the following in the /blog/.htaccess that didn’t make it into the new install:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>

Adding it to the same place in the new install fixed things, but the feed location changed to http://www.mbeckler.org/blog/?feed=rss2 . Just an FYI.

I’m writing a fun little webapp using Flask, Python, and SQLAlchemy, running on Heroku with a PostgreSQL database. I use a SQLite database file for local testing and PostgreSQL when I deploy, so naturally there are some minor snags when switching between database engines.

Tonight I ran into a tricky issue after adding a ton more foreign-key relationships to my database-backed models. I was getting an error like this when I tried to issue the db.drop_all() command in the Python script that initializes my database tables:

sqlalchemy.exc.InternalError: (InternalError) cannot drop table pages because other objects depend on it
DETAIL:  constraint pagesections_parent_page_id_fkey on table pagesections depends on table pages
HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 '\nDROP TABLE pages' {}

A bunch of searching for solutions suggested that running db.reflect() immediately before db.drop_all() might work, but apparently the reflect function is broken for the current Flask/SQLAlchemy combination. Further searching revealed a mystical “DropEverything” function, and I finally found a copy here. I had to make a few small modifications to get it to work in the context of Flask’s use of SQLAlchemy.

def db_DropEverything(db):
    # From http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DropEverything

    conn = db.engine.connect()

    # the transaction only applies if the DB supports
    # transactional DDL, i.e. Postgresql, MS SQL Server
    trans = conn.begin()

    inspector = reflection.Inspector.from_engine(db.engine)

    # gather all data first before dropping anything.
    # some DBs lock after things have been dropped in 
    # a transaction.
    metadata = MetaData()

    tbs = []
    all_fks = []

    for table_name in inspector.get_table_names():
        fks = []
        for fk in inspector.get_foreign_keys(table_name):
            if not fk['name']:
                continue
            fks.append(
                ForeignKeyConstraint((), (), name=fk['name'])
                )
        t = Table(table_name, metadata, *fks)
        tbs.append(t)
        all_fks.extend(fks)

    for fkc in all_fks:
        conn.execute(DropConstraint(fkc))

    for table in tbs:
        conn.execute(DropTable(table))

    trans.commit()

I had to change the uses of engine to db.engine, since Flask’s SQLAlchemy extension takes care of that for you. You get the db object from the app, like this: “from myapp import db”. Here is how I defined db in myapp:

import os

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__, etc)

# DATABASE_URL is set if we are running on Heroku
if 'DATABASE_URL' in os.environ:
    app.config['HEROKU'] = True
    app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
else:
    app.config['HEROKU'] = False
    app.config['SQLALCHEMY_DATABASE_URI'] = "sqlite:///" + os.path.join(PROJECT_ROOT, "../app.db")

db = SQLAlchemy(app)

And these are the important parts of my db_create.py script:

import os

from sqlalchemy.engine import reflection
from sqlalchemy.schema import (
        MetaData,
        Table,
        DropTable,
        ForeignKeyConstraint,
        DropConstraint,
        )

from cyosa import app, db

if not app.config['HEROKU'] and os.path.exists("app.db"):
    os.remove("app.db")

def db_DropEverything(db):
    # listed above

db_DropEverything(db)
db.create_all()

# add your instances of models here, be sure to db.session.commit()
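For example, seeding a fresh table at the bottom of db_create.py might look like this (a sketch; the Page model and its title column are hypothetical stand-ins for your own models):

from cyosa.models import Page # hypothetical module and model

page = Page(title="Home") # Page and title are placeholders for your own schema
db.session.add(page)
db.session.commit()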

I wanted to play Minecraft on my 64-bit Ubuntu Linux install, but it wasn’t working correctly for me: it would give a black screen after login, and the console reported some errors about xrandr (which might be related to my odd “dual display-port + docking station” setup at home). After some searching, I found a tip to manually install the LWJGL Java libraries into the ~/.minecraft/bin/ folder, to get the latest and greatest version of those libraries.

  1. Download the latest version zip archive of the LWJGL libraries: http://sourceforge.net/projects/java-game-lib/files/latest/download?source=files
  2. Extract the downloaded zip archive
  3. Copy all files in lwjgl-2.9/jar/ to ~/.minecraft/bin/
  4. Copy all files in lwjgl-2.9/native/linux/ to ~/.minecraft/bin/natives/

And then you should be good to go.
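In shell terms, steps 2 through 4 boil down to something like this (a sketch; the exact zip filename and the lwjgl-2.9 folder name depend on the version you download):

unzip lwjgl-2.9.zip
cp lwjgl-2.9/jar/* ~/.minecraft/bin/
mkdir -p ~/.minecraft/bin/natives
cp lwjgl-2.9/native/linux/* ~/.minecraft/bin/natives/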

via https://bbs.archlinux.org/viewtopic.php?pid=876274#p876274

Every stinkin’ time you have to update Java (which seems like every day, since Java has more (security) holes than Swiss cheese), it wants to install the stupid Ask Toolbar and take over your default search engine. Here’s a quick Windows registry fix that will apparently stop the installer from even asking you about the toolbar installation.

Another way, without having to download and rename or create a new .REG file, is to copy and paste the following two lines into an elevated CMD prompt:

reg add HKLM\software\javasoft /v "SPONSORS" /t REG_SZ /d "DISABLE" /f

reg add HKLM\SOFTWARE\Wow6432Node\JavaSoft /v "SPONSORS" /t REG_SZ /d "DISABLE" /f

via Superuser: How can I prevent Ask.com Toolbar from being installed every time Java is updated?

Update: It looks like Transmission sends traffic out the loopback (lo) interface, addressed back to the loopback interface. Seems kind of weird, but it should be harmless. These rules permit traffic from the vpnroute gid to pass out the tun0 and lo interfaces, while everything else is rejected. You can also duplicate the last rule with a LOG target if you want to see what is still being rejected.

sudo iptables -A OUTPUT -m owner --gid-owner vpnroute -o tun0 -j ACCEPT

sudo iptables -A OUTPUT -m owner --gid-owner vpnroute -o lo -j ACCEPT

sudo iptables -A OUTPUT -m owner --gid-owner vpnroute -j REJECT


We recently moved into a new home where we have a shared internet connection with the other occupants of the duplex. I didn’t want to use bittorrent directly, since any nastygrams would end up with the landlord and cause problems, so I signed up for the IPredator VPN service in Sweden. It allows you to make an encrypted and secure connection from your computer to their network, so all of your internet traffic is funneled through the secure connection, making it so that the neighbors, landlord, and internet service provider can’t tell what I’m up to. The VPN was really easy to set up in Ubuntu Linux with the graphical network manager (IPredator provides a visual guide to this process) and the speeds are certainly reasonable.

One downside of this is that if there is a connection hiccup that causes the VPN to drop, the bittorrent software will just fall back to sending data out the regular, unencrypted network interface, potentially exposing your naughty activities to the ISP. I wanted to find a way to effectively say, “only allow bittorrent traffic through the VPN connection” that would step up and protect things if the VPN connection dropped.

On Linux, the standard firewall is called “iptables”, and it can do just what we need in only three commands. But first, a couple of assumptions:

  • I am assuming that you are using the default Ubuntu Linux bittorrent client called “Transmission”, which is executed using the command “transmission-gtk”.
  • When the VPN is connected, it creates a new network interface called “tun0” (“tun” for “tunnel”).

The general plan is to somehow tag the bittorrent traffic so that the iptables firewall can identify the relevant packets, and reject them if they aren’t heading out the secure VPN interface tun0. An easy way is to run your bittorrent program using a different UNIX user or group.

Here, we add a new group called “vpnroute”:

sudo groupadd vpnroute

Then, we add the firewall rule that rejects all traffic from this group that is not heading out tun0:

sudo iptables -A OUTPUT -m owner --gid-owner vpnroute \! -o tun0 -j REJECT

Finally, we start up the bittorrent software with group ownership of vpnroute:

sudo -g vpnroute transmission-gtk

Your torrents should now only run when the VPN is connected. Try it out with some safe torrents, like the Ubuntu ISO files: they should download only while the VPN is connected, and stop right away when you disable it.


If you want to confirm that the firewall rule is actually matching your traffic, you can add a similar rule that uses the LOG target instead of REJECT. You need to ensure that the LOG rule comes first, because after handling a LOG rule the packet keeps traversing the chain, while a REJECT action stops the packet from continuing down the chain of rules. You can remove the existing rule with “sudo iptables -F OUTPUT” (F for Flush; note that this flushes every rule in the OUTPUT chain), and then:

sudo iptables -A OUTPUT -m owner --gid-owner vpnroute \! -o tun0 -j LOG
sudo iptables -A OUTPUT -m owner --gid-owner vpnroute \! -o tun0 -j REJECT

Then you can check the output of “dmesg” to see when packets are logged (and then rejected) by the firewall.
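If the log lines are hard to pick out of the dmesg output, the LOG target also accepts a --log-prefix option that tags each matching entry with a short string you can grep for:

sudo iptables -A OUTPUT -m owner --gid-owner vpnroute \! -o tun0 -j LOG --log-prefix "vpnroute: "

dmesg | grep vpnroute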

Being from the Minnesota/Wisconsin area, but living in Pittsburgh, we tend to drive through Chicago 2-3 times each year. The GPS and online mapping services suggest taking I-90 all the way through Chicago: the Chicago Skyway to the Dan Ryan to the Chicago Circle to the Kennedy Expressway to Rockford. While that route is indeed the shortest highway route in terms of distance, it’s almost guaranteed to be heavily congested, and usually ends up being slower than other routes.

Over the past five years of driving through Chicago a few times per year, we’ve decided that taking I-290 and I-294 is the best way to bypass as much Chicago traffic as possible, without increasing the overall driving distance too much.

chicago_route_overview

Westbound Route Details – Heading from Indiana to Wisconsin, first you need to exit from the Indiana Toll Road (I-80W / I-90W) onto I-80W / I-94W, which is kind of a goofy right-exit-then-overpass-to-the-left sort of thing:

chicago_route_westbound_indiana

I-94W will peel off after a while and head North into Chicago. The transition from I-80W to I-294N is really easy, as I-80W has to exit and cross over to head further West, while you just stay in the center lanes for the curve toward the North.

The transition from I-294N to I-290W is somewhat tricky, and is usually the only place we encounter any traffic-based slowdown. It’s a silly right-exit loop-under that sometimes gets backed up a few hundred feet owing to the slow speeds on the tight loop. Merging with the I-290W traffic isn’t too bad.

chicago_route_westbound_294_to_290

After that, the transition from I-290W to I-90W is a standard cloverleaf loop, and it has some nice protected “feeder” lanes to make your merge really easy.

chicago_route_westbound_290_to_90

Eastbound Route Details – Heading from Wisconsin to Indiana, first you need to exit from I-90E to I-290E, which is a simple right exit, so there’s no picture here.

The transition from I-290E to I-294S is much simpler than the opposite-direction transition loop-around shown above, and is just a simple “keep right” sort of bump around the interchange.

chicago_route_eastbound_290_to_294

The transition from I-294S to I-80E is trivial, as is the interchange where I-94E merges into I-80E. The only tricky bit remaining is near the Indiana state line where you merge from I-80E / I-94E onto I-80E / I-90E, which has a silly cloverleaf bridge thing.

chicago_route_eastbound_indiana

Overall, it’s a pretty easy route that usually has no significant traffic.

There are definitely some tolls along the way, but I think all the Illinois toll plazas have open-road tolling, so get your EZ-Pass and save time and money. If you get your EZ-Pass from Pennsylvania (you don’t have to be a PA resident) and elect for paperless billing and connect your credit card for automatic reloading of funds, there are no monthly or yearly fees!

Last week a friend of mine mentioned that he was going to get started with the world of brewing your own beer, so I wrote up a quick little note to him about my perspective on homebrew. I thought that other people might be interested too, so here it is:

Hey there!

Good luck getting started with homebrewing, it’s a ton of fun! Forgive me if you know some or all of this already, but I thought I’d send you some general info from my perspective after a few years of reading up on and then actually brewing a bunch.

To make beer, you get sugar from barley and add it to some water, boil it for an hour or so, add hops during the boil, then cool it down, add yeast, and let it ferment for a few weeks. Then, either put it in a keg or bottles.

The two main categories of beer, at least from the homebrewing perspective, are ale and lager. Ale yeast ferments best at higher temperatures (65-76 degrees F). Lager yeast ferments best in the 45-55 degree F range, so you need to keep it cooler in order to do lagers. Most homebrewers start by brewing ales, since the temperature requirements are more compatible with most houses. I have a basement storage area that’s pretty constantly 65 degrees that I use for fermenting. I haven’t done a lager yet, but I have had good success brewing an Oktoberfest using ale yeast, and it has turned out pretty nice two years in a row. If you want to brew lager styles of beer, many homebrewers get an old chest freezer and an external temperature controller to do that.

Most people who get into homebrewing start with Extract Brewing, where someone else did the hard work of extracting the fermentable sugars from the malted barley. Extract comes in both powder form (dry malt extract, DME) and thick syrupy form (liquid malt extract, LME), but they are functionally equivalent. More advanced brewers have extra equipment and can start from raw malted barley, and extract the sugars themselves, which can be cheaper, and there is a much wider variety of grains available (dozens) than malt extract (4-6 types). I don’t have room for doing all-grain, so I’m sticking with extract brewing until I have a larger home, and I’m perfectly happy with how my beer turns out.

Hops come in dried pellet form (vacuum packed in nitrogen for freshness) generally in increments of an ounce. Most styles will have 1-2 ounces for a 5 gallon batch, but strongly-hopped styles like IPA will have 4+ ounces of hops. Hops add two characteristics: bitterness and aroma. Hops that are added at the start of the boil have all their aroma boil off so they don’t contribute much aroma, but their alpha acids (which require heat and time to activate) add bitterness. Hops that are added at the end of the boil (or even after the boil) don’t have enough time in the heat to contribute much bitterness, but they contribute lots of aroma.

Yeast is either dried (in a packet, which lasts for years) or liquid (in a test tube or foil pouch, which needs refrigeration since it’s perishable). There are only a couple strains available in dried form, but dozens of strains available in liquid form. I like to order extract ingredient kits by mail order, and then pick up the suggested liquid yeast from the friendly local homebrew shop, but also have a couple packets of dry generic ale yeast on hand in case I forget to pick up liquid yeast. You add the yeast after cooling the beer down to room temperature. When I first started I would fill the kitchen sink with ice (or snow, yay winter) and set the pot of boiling wort in the ice, then stir the wort like crazy to cool it down. Last year for Christmas my parents got me a nice copper coil that I can hook up to the cold water tap, so I can chill my wort down from 212F to 80F in about 5 minutes, which is nice.

Fermentation takes 2 weeks or more, depending on the style, and how strong the beer is (how much sugar is in the water). Strong beers generally take more time to ferment. During the fermentation our yeast buddies convert sugar into CO2 and CH3CH2OH (plus other flavors), but we just vent the CO2 to the atmosphere with a one-way airlock. I usually use food-safe 5 gallon plastic pails, since they are cheap and won’t break, but many people like to use large glass 6.5 gallon carboys, since they don’t retain colors or odors from previous batches, and won’t get scratches in the sides (which can be difficult to clean and harbor mean bacteria), but I haven’t had trouble with plastic pails.

After fermentation is finished, you either keg or bottle the beer. I don’t have room for a kegging setup, so I bottle my beer, but I can’t wait until I have room for kegging. With kegs, you just transfer the fermented beer into a clean keg, then use the CO2 tank to force carbonate the beer, and it’s ready to drink within a day or two. With bottles, you need to wash and sanitize 50-some brown glass bottles, then mix in a controlled amount of sugar (called priming sugar) into the beer (which is consumed by the remaining yeast, but this time the CO2 is captured by the bottle cap and carbonates the liquid). Natural carbonation this way takes a week or two, so you have to wait longer to try your beer. Since it’s not pasteurized, homebrew is still “alive” and generally improves with age, like a fine wine. If your beer tastes weird when it’s fresh, give it a few weeks in the bottles to mellow out.

Altogether, homebrewing is fun and pretty easy. It takes about 3-4 hours on brew day, and 2-3 hours on bottle day. While there are several good books (notably the Palmer book and the Papazian book), you can really get away without knowing much about the process, and just follow the directions in the extract kits. Sanitation is really the only critical thing in homebrewing: you need to ensure that you clean everything well, and use sanitizer (my friends and I like EZ-Clean and StarSan) on anything that will touch non-boiling beer or wort (that is, you don’t need to sanitize your brewing pot or stirring spoon, as they will have the heck boiled out of them by the end of the hour-long boil). Basically, the warm, sweet sugary wort is very tasty for mean bacteria, and we want to make sure our awesome yeasties dominate the culture of the beer, so we need to remove as many other microbes from our tools as possible. I’ve never had a batch “go bad”, but one of my friends in Pittsburgh had a batch get infected, and he had to toss that batch and really scrub all his equipment to get rid of that nasty microbe.

As far as equipment goes, I’d suggest picking up a starter set of equipment. I have this one: http://www.northernbrewer.com/shop/essential-brewing-starter-kit.html but I’ve added a few parts here and there over the years. I have a 5 gallon (20 quart) stainless steel pot, which works OK on my electric range. Gas ranges work best, since they can dump many joules into the pot (it’s surprisingly difficult to get 3+ gallons to a rolling boil). You can also pick up a float hydrometer, which measures the relative density of the wort, and enables you to estimate the alcohol content of your finished beer. Adding all the sugar at the start makes the wort denser than water (“starting gravity”), and you also measure the density after fermentation (“final gravity”). We assume that the difference in density is due to missing sugar that the yeast converted into alcohol at a known rate, so we can estimate the % alcohol with this little expression: (SG - FG) * 131.25
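For example, a batch with a starting gravity of 1.050 and a final gravity of 1.010 works out to (1.050 - 1.010) * 131.25 = 5.25% alcohol by volume.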

I’ve been keeping pretty meticulous records, and in 3.5 years I’ve spent $963 on equipment and ingredients for 21 batches of 5 gallons, which is about 1000 bottles, so I’m below $1/bottle now. My friends and I like to do mail order from Northern Brewer, MoreBeer, and Austin Homebrew (but mostly Northern Brewer, woo Minnesota!). Some places offer flat rate $8-$10 shipping, or we get a bunch of people together to make a big order and split the shipping cost.

I recently needed to keep a Bash script in a centralized location, with many symbolic links to that script sprinkled throughout my repository. I wanted the script to be able to determine both the user’s PWD and the true location of the script itself. Based on all the different suggestions found online, I created a little test script to see how well each suggestion worked:

#!/bin/bash

echo "\$0"
echo $0
echo ""

echo "pwd -P"
pwd -P
echo ""

echo "pwd -L"
pwd -L
echo ""

echo "which \$0"
which $0
echo ""

echo "readlink -e \$0"
readlink -e $0
echo ""

echo "readlink -e \$BASH_SOURCE"
readlink -e $BASH_SOURCE
echo ""

I put this script (test.sh) in ~/ and then created a symlink to it in a different directory. Here are the results.

My friend JT left a comment below to say that using $BASH_SOURCE is probably a better choice than $0, since $0 can be changed, and is only set to the file name by convention.

Directly calling the script from the same directory (/home/matthew/):

matthew@broderick:~$ ./test.sh
$0
./test.sh

pwd -P
/home/matthew

pwd -L
/home/matthew

which $0
./test.sh

readlink -e $0
/home/matthew/test.sh

Directly calling the script from some other directory (/some/other/directory/):

matthew@broderick:~/some/other/directory$ ~/test.sh
$0
/home/matthew/test.sh

pwd -P
/home/matthew/some/other/directory

pwd -L
/home/matthew/some/other/directory

which $0
/home/matthew/test.sh

readlink -e $0
/home/matthew/test.sh

Creating a symlink to ~/test.sh in ~/some/other/directory, and calling it directly (./test.sh):

matthew@broderick:~/some/other/directory$ ln -s ~/test.sh ./test.sh
matthew@broderick:~/some/other/directory$ ./test.sh
$0
./test.sh

pwd -P
/home/matthew/some/other/directory

pwd -L
/home/matthew/some/other/directory

which $0
./test.sh

readlink -e $0
/home/matthew/test.sh

Creating a symlink to ~/test.sh in ~/some/other/directory, and calling it from yet another location:

matthew@broderick:~/some/other/directory$ ln -s ~/test.sh ./test.sh
matthew@broderick:~/some/other/directory$ cd ~/somewhere/else
matthew@broderick:~/somewhere/else$ ~/some/other/directory/test.sh
$0
/home/matthew/some/other/directory/test.sh

pwd -P
/home/matthew/somewhere/else

pwd -L
/home/matthew/somewhere/else

which $0
/home/matthew/some/other/directory/test.sh

readlink -e $0
/home/matthew/test.sh

Conclusion:
So, it looks like “readlink -e $0” will always return the full, non-symlink, “physical” location of the script (regardless of whether or not symlinks are involved), and “pwd” reliably returns the user’s current working directory.
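Putting it all together, here is the boilerplate I’d put at the top of a symlinked script (a sketch using $BASH_SOURCE instead of $0, per JT’s suggestion):

#!/bin/bash
# true, symlink-resolved location of this script
SCRIPT_PATH="$(readlink -e "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
# directory the user invoked the script from
USER_PWD="$(pwd -L)"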