Tuesday, January 31, 2017

Systemd and fstab

Over the weekend, I made a number of changes to my NAS server hardware, and along the way decided to upgrade from Open Media Vault (which is based on Debian Wheezy) to something a bit more modern. This brought me to my first experience with systemd, which was available in Wheezy but not installed by default.

My Setup

In Open Media Vault, each drive is mounted by UUID in the directory /media. On top of this, I pool some of my drives using aufs, which is also mounted in /media using a UUID generated for the aufs pool. Rather than refer to each of the folders on my media drives via their long (and difficult to remember) UUID-based paths, I also create bind mounts under /export, for example /export/Movies or /export/Books. That way, if I shift content around between drives, I can simply re-point the bind mounts. Any software (eg, Plex) just points at the bind mounts.

Under OMV (and Wheezy) this all just “works”. But after installing Debian Jessie, I suddenly found problems. The aufs pool seemed to be incomplete, and the bind mounts were completely empty. It took me a while of googling to find the needle in the haystack that was causing my problems.

Systemd and fstab

In a pre-systemd world, Debian mounts the drives listed in /etc/fstab in the order they appear. If you want to create a pool mount using aufs, you simply make sure it is listed after the drives it is pooling. Similarly, bind mounts generally need to be listed last, since you are binding a folder from another drive, which therefore needs to be mounted first.

The problem is that systemd tries to streamline (and speed up) the boot process by doing as much as possible in parallel. Under systemd, the order in fstab is not honoured, and drives can potentially be mounted in any order. This means systemd might try to mount an aufs pool before the constituent drives are available, which is exactly why my pool was incomplete.

Diagnosing this problem proved to be a pain, because there still isn’t a huge amount of knowledge out there around systemd, so I came across some inconsistent posts. Plus, I suspect it’s been something of a moving target over the years, with a lot of changes and developments meaning the recommended solution from 2 years ago isn’t necessarily the best solution right now.

What’s happening behind the scenes?

At boot, systemd takes the entries in fstab and converts them into ‘unit files’. It then uses the unit files to mount each drive individually. You can find the generated unit files in /run/systemd/generator. As best as I can tell, one approach to resolving this issue is to create customised unit files; at boot time, systemd will then use your existing unit files rather than generating new ones from fstab.

The problem with this approach is that (a) depending on the number of drives you have, this could be quite annoying to setup and maintain, and (b) you are now splitting system configuration between fstab and a myriad of unit files.

The good news is that you can add options into your fstab that systemd will understand and parse. Two possible options to consider;

  1. Automounting – by adding the options noauto,x-systemd.automount to an fstab entry, a drive won’t be mounted at boot. Instead, it is only mounted when first referenced. In the case of a pooled drive, for example, it will only mount when the pool is referenced, which is typically after the system has booted, and consequently after the constituent block devices have been mounted. However, in my case, I have bind mounts that depend on a pool mount, which in turn depends on block devices… so I’m not sure how that chain of dependencies would work.

  2. Specifying dependencies – you can add the option x-systemd.requires= to an fstab entry (followed by the path of the required mount). This ensures that when the systemd unit file is generated, the dependency will be listed and resolved.

Specifying the dependencies will result in a unit file (in /run/systemd/generator) with a Requires= entry (ie, naming which unit needs to be run first).
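
As a concrete sketch (the UUIDs and paths here are invented for illustration), an fstab along these lines should encode the whole dependency chain – block devices first, then the aufs pool, then the bind mount;

UUID=1111-aaaa /media/1111-aaaa ext4 defaults 0 2
UUID=2222-bbbb /media/2222-bbbb ext4 defaults 0 2
none /media/pool aufs br:/media/1111-aaaa=rw:/media/2222-bbbb=rw,x-systemd.requires=/media/1111-aaaa,x-systemd.requires=/media/2222-bbbb 0 0
/media/pool/Movies /export/Movies none bind,x-systemd.requires=/media/pool 0 0

As far as I can tell, x-systemd.requires can be repeated within a single entry to declare multiple dependencies.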

If you want to test this first, before rebooting (and potentially borking your system) – edit your fstab (making a backup first), and then run systemd-fstab-generator, which can be found in /lib/systemd/system-generators (in Debian). This will generate new unit files for mounting your drives. They can be found in /tmp (and hence, won’t actually be used). You can read through them to check that systemd has appropriately parsed your fstab.
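
For example (a sketch rather than gospel: generators are normally invoked with three output directories, and here I’m pointing all three at a throwaway directory under /tmp);

mkdir /tmp/fstab-test
/lib/systemd/system-generators/systemd-fstab-generator /tmp/fstab-test /tmp/fstab-test /tmp/fstab-test
cat /tmp/fstab-test/export-Movies.mount

The generated export-Movies.mount (matching the hypothetical fstab above) should contain the Requires= line discussed earlier.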

Thursday, April 7, 2016

Email Notifications for Kodi / OpenELEC (Part 2)

 

Getting it


The Difference Engine can be found at;

https://github.com/najmead/thediffer

It's written in Python 3, and has been tested on Mac OSX using Pyzo (which comes with Python 3.4) and on Debian 7.9.  It should probably work on any other Linux system, and on Windows, assuming you have Python 3 installed.  It probably won't work on OSX natively, though, since OSX ships with Python 2.7.

Use git to clone the repository.  Alternatively, if you aren't using git and don't want to install it... you can just download the main script (thediffer.py) and the stylesheet template (stylesheet.css).  You'll also need to download the example config file, and modify it for your own settings (more on that later).

Running the Difference Engine


Run 'python3 thediffer.py' from your command line, or however your python environment runs scripts.  The first time it runs, thediffer.py will try to create a new sqlite database and populate it with data from your Kodi / OpenELEC media player.  Depending on the size of your library, this could take a long time.  At the moment, in order to help me with debugging, it spews out quite a lot of information, so you'll know it's working busily.

Once the initial database has been created and populated, subsequent runs should be much faster.  Though, if you turn on the option to query the TVDB this can slow things down.


Configuration


There is an example config file in the github repository.  Download it, rename it to thediffer.conf and fill in the appropriate details.  Some basics around the configuration;

  1. Scan days -- this is the number of days the differ will scan back through, looking for updates.  Generally, this depends on how frequently you schedule it to run.  If you run it daily, set the scan days to 1.  If you run it weekly, set the scan days to 7 (and so on).  The related update option determines whether you want to issue Kodi with an update command before commencing the scan (to make sure the media library is up-to-date).
  2. Email -- Set this option to True if you want to receive an email.  If left at False, it will simply dump out an html file (which you can either email using another app, or present using a webserver).  If set to True, make sure to fill in the details of your email provider.
  3. TVDB -- if set to True, the differ will call the TVDB API to find an IMDB identifier and (where available) use it to create live hyperlinks in the html.  You'll need a TVDB account for this to work.  It's a nice option, but completely unnecessary.
  4. Styles -- modify this if you want to change the look of the emails.
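
To give a feel for it, a filled-in thediffer.conf might look something like this (the section and option names here are purely illustrative -- check the example config in the repository for the real ones);

[general]
scandays = 1
update = True

[email]
email = True
smtpserver = smtp.example.com
smtpport = 587
username = me@example.com
password = secret
recipient = family@example.com

[tvdb]
tvdb = False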

Sunday, April 3, 2016

Email Notifications for Kodi / OpenELEC (Part 1)

To jump straight to the installation and usage instructions, check out Part 2.

For quite some time now, I've been wanting to have an email notification system for new content that appears on my Kodi / OpenELEC based media player.  Most people seem to be happy to use the notification system in either SABnzbd (for when they download a piece of content) or in Sickrage/Sickbeard/Sonarr/Couchpotato, etc... for when the downloaded media gets picked up.

The problem with this approach is twofold;

  • Firstly, shock, horror... but there's a stack of content on my media player that ISN'T downloaded.  I manually rip content from DVD/BluRay and then load it myself.  And while I don't need to be notified that this new content has been added to the media library (after all, I did it myself)... it's nice to be able to notify other family members.
  • Secondly, most of these notification systems send a new notification for every single piece of new content.  If you added 24 new episodes of a TV series that you just ripped from DVD, you can expect to receive 24 new email notifications... one for each episode.
What I really wanted is a sort of "daily digest" that shows all the content being added on a given day... neatly put together and summarised.

The Difference Engine


Alpha1


My first attempt was based around using a bash script to scan the hard drive of my NAS, looking for any new files.  If they looked like media files, I'd dump them into a file and email the results.  The problem, I found, is that timestamps aren't always a reliable indicator of whether a file is actually "new" or not.  So instead, I started maintaining a list of existing content in a SQLite database.  The script would then scan for any media content and check whether it was already in the database.  If not, it was deemed to be "new" and would go into the daily digest.
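
The core idea is simple enough to capture in a few lines of Python 3 (this is just a sketch of the approach, not the actual script -- the media root and extension list are invented);

import os
import sqlite3

conn = sqlite3.connect('media.db')
conn.execute('CREATE TABLE IF NOT EXISTS media (path TEXT PRIMARY KEY)')

new_files = []
for root, dirs, files in os.walk('/export'):  # hypothetical media root
    for name in files:
        if not name.lower().endswith(('.mkv', '.mp4', '.avi')):
            continue  # not a media file, skip it
        path = os.path.join(root, name)
        # anything not already in the database is deemed "new"
        if conn.execute('SELECT 1 FROM media WHERE path=?', (path,)).fetchone() is None:
            new_files.append(path)
            conn.execute('INSERT INTO media (path) VALUES (?)', (path,))

conn.commit()
print('\n'.join(new_files))  # this list becomes the daily digest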

I called the script "The Difference Engine" since it checked each day to see if there were any "differences" in the media content.

Alpha2


Although my first script mostly worked, it lacked any kind of sophistication.  Plus, the script needed to run on the same server that stored all the media content -- which I decided was potentially a Bad Idea (TM).  So for my second attempt, I re-wrote the script from scratch.  This time, it called Kodi's built-in API to find new content.

Again, since Kodi doesn't always reliably report content as being new (eg, if you hose your library and re-install it), I used the same approach of storing data in a SQLite database.  The script, written in bash, would query Kodi (using curl) and then try and interpret the json that was returned (using jq).
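
The query side of that looked conceptually like this (the host address is hypothetical, and VideoLibrary.GetRecentlyAddedEpisodes is just one of several Kodi JSON-RPC methods you could use);

curl -s -H 'Content-Type: application/json' \
     -d '{"jsonrpc": "2.0", "method": "VideoLibrary.GetRecentlyAddedEpisodes", "id": 1}' \
     http://192.168.1.50:8080/jsonrpc | jq '.result.episodes[].label'

One call like this is fine; stitching several together and massaging the results in bash is where things got ugly.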

It mostly worked.  But after spending plenty of time on Stack Exchange asking a myriad of questions, it became pretty apparent that trying to parse json strings in bash was a pretty dumb idea.  Bash is really, really useful for scripting and automating various system admin tasks.... but as a general purpose programming language -- not so great.

Beta1


So after resisting it for a long time, I decided to try and teach myself python.  Until now, all my attempts at learning python had been completely hopeless.  I'm not a programmer, and the little bit of programming knowledge I have seems to be completely at odds with a lot of python paradigms.  But I'm reliably informed that python is awesome and incredibly productive.  So over the last week, with some time off work, and a specific project in mind (ie, an updated version of the differ), I put my mind to my first python application.

A big shout out to Pyzo which is a pretty simple and functional IDE that can be installed on Windows and OSX with minimal fuss.  Not only does it work pretty nicely, but it's pretty self-contained.  So it allowed me to install python3 on my MacBook without messing with any of the existing python libraries.

Now... on to the end result.

Wednesday, June 3, 2015

Hiding Services Behind HAProxy

This is an update to the article here, on hiding services behind Apache.

With the latest version of Open Media Vault, the default web server has been changed from Apache to Nginx.  This means that if you want to follow my guide on hiding services behind Apache, you'll need to install and run a dedicated instance of Apache web server.  IMHO, this seems like overkill.  Alternatively, you could use Nginx to redirect traffic to your various applications.  Do a quick Google search, and you'll find a few guides on how to redirect traffic to Sickbeard, Couchpotato, and probably a few other applications.  But personally, I decided to take the opportunity to re-think my approach to this, and try something different.

HAProxy


HAProxy is a piece of software designed to perform load-balancing and proxying.  It has a couple of neat features.  It's small, lightweight and very efficient.  And unlike Apache (or Nginx), HAProxy is not limited to proxying HTTP traffic; it can handle arbitrary TCP traffic as well.

My needs are generally pretty simple, though.  I just want HAProxy to redirect traffic from port 80, to various ports, based on the URL path.  The extra features in HAProxy are a bonus really, the main thing I want is something lightweight.

Installation and Configuration


On Debian (and probably Debian-based distributions), you can install HAProxy via apt-get.  The installation process should install the HAProxy software, create a configuration directory with a default config file (in /etc/haproxy), and create an init script for starting and stopping the HAProxy service (/etc/init.d/haproxy).  As a starting point, I recommend taking a backup of the default configuration file.
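
That is;

sudo apt-get install haproxy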

Now let's start playing with possible configurations...

There are two parts we are interested in... the frontend, and the backend.
The frontend is where HAProxy is given instructions on what traffic to listen for, and where to direct that traffic.  Here are the frontend instructions in my HAProxy configuration file;


frontend public :80
        mode http
        option forwardfor
        option http-server-close
        option http-pretend-keepalive
        acl is_sab path_beg /sabnzbd
        use_backend sabnzbd if is_sab


This config is basically telling HAProxy to listen for public HTTP traffic, incoming via port 80.  Seems pretty straightforward, but what does it do with that traffic?  Well, the config also includes an acl instruction (short for access control list), called "is_sab", which matches any incoming traffic with a URL path beginning with /sabnzbd.  The second instruction says that if HAProxy finds traffic matching this acl, it should be forwarded to the backend called "sabnzbd".

At this stage, though, we haven't specified any details about this backend, so we need to add a new configuration entry;

backend sabnzbd
        option httpclose
        option forwardfor
        server sab localhost:8080


This config specifies a new backend, called "sabnzbd", which references a single server called "sab", located at localhost:8080.  Note that the backend does not need to be on the localhost, but could be another machine on whatever port.  In our case, though, sabnzbd runs locally and listens on port 8080.  Since the acl only matches requests whose paths begin with /sabnzbd, SABnzbd itself should be configured to expect that path prefix (it has a url base setting for exactly this purpose).

Adding Services


If you are running more services than just SABnzbd, you'll want to add multiple entries to the HAProxy configuration.  Since we have already configured a frontend, we just need to add additional entries for each service.  For example;

frontend public :80
        mode http
        option forwardfor
        option http-server-close
        option http-pretend-keepalive
        acl is_sab path_beg /sabnzbd
        use_backend sabnzbd if is_sab

        acl is_sickbeard path_beg /sickbeard
        use_backend sickbeard if is_sickbeard
        default_backend omv

So this expanded config now includes instructions for listening to port 80, and checking for traffic with a url that begins with /sickbeard.  I've also included a "default_backend" command, which directs traffic through to a backend called "omv" when it doesn't match any of the acl criteria.  Since I'm redirecting traffic to a backend called "sickbeard", I'd better create an entry for it;

backend sickbeard
        option httpclose
        option forwardfor
        server sb localhost:8081


Like the SABnzbd entry, this basically redirects any traffic matching /sickbeard through to the server at localhost:8081.

Furthermore, since I created an entry to redirect traffic through to a backend called "omv", I'll need to create an entry for that backend;

backend omv
        timeout server 30s
        server omv localhost:7000


Note that since HAProxy is now listening on port 80, I've shifted the Open Media Vault web console to listen on port 7000.  Any traffic that doesn't match my list of acl entries will, by default, be directed to the OMV console at localhost:7000.

This is just an example of some configuration options.  HAProxy has got a really comprehensive configuration manual, but it can be a bit overwhelming and there are often multiple ways to do the same tasks.

Kicking Things Off


Once you're happy with your HAProxy configuration, you'll need to restart the service by issuing the command;

service haproxy restart

Any time you make a config change, you'll need to restart the haproxy service in order for it to re-read the new configuration.
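
It's also worth knowing that HAProxy can validate a configuration file without restarting anything, which is a safer first step after editing;

haproxy -c -f /etc/haproxy/haproxy.cfg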

Update!


A couple of things I forgot to add.

Firstly, if you are using these instructions for Open Media Vault, then you'll need to change the settings so that the OMV web interface doesn't listen to port 80 (and therefore, conflict with HAProxy).  This is found under System->General Settings.

Secondly, if you want to access services from outside your network, you'll need to configure your modem/router.  Typically, this involves "port forwarding" any traffic from port 80 to the internal network address of your server -- typically something like 192.168.1.XX or something similar.

Tuesday, February 3, 2015

Installing SABnzbd in Docker on OMV, Part 2

Continued from Part 1.

I'm going to assume you've read (or skimmed) through part 1 of this guide, and you have a Dockerfile prepared.  In order to build an image off this Dockerfile, run the following command;
sudo docker build -t sabnzbd .

This will build a new image from your Dockerfile (the trailing dot tells Docker to use the current directory as the build context), and the image will be called "sabnzbd".  If this is the first image you've built, the process may take a little while.  At the start of the Dockerfile, we specified that our image will be built based on debian:wheezy, so the docker daemon will need to download a copy of the wheezy image first.  Don't worry, this is not a full-blown copy of Debian, just a minimalist image to serve as a starting point.

If you build more images, using the same debian:wheezy image, you won't need to download this image again.  The Docker daemon is smart enough to use the existing image.

Once the image has been built, you can check to see it by running;
sudo docker images
This will give you a list of the images currently on your system.  At the very least, you should see a listing for debian and a listing for your newly created sabnzbd image.

Now that your image has been created, let's fire it up with the following command (this assumes you used the default options from the build in the previous article);
sudo docker run -v /etc/downloaders/sabnzbd:/etc/downloaders/sabnzbd -v /export/Downloads:/export/Downloads -v /etc/localtime:/etc/localtime:ro -p 8080:8080 --name=sabnzbd -d --restart=always sabnzbd
Yep, it's a pretty long command.  Let's examine the components of the command.

sudo docker run
This is pretty straightforward -- we're instructing the docker daemon to run a new container.
-v /etc/downloaders/sabnzbd:/etc/downloaders/sabnzbd
The -v parameter is used for allocating volumes.  During the build, we specified a volume for config files.  During execution, we map the path /etc/downloaders/sabnzbd on the host system (ie, on the left hand side of the colon), to /etc/downloaders/sabnzbd in the container (ie, on the right hand side of the colon).  With this mapping, anything stored inside the container in this directory path will ALSO be available on the host.  And these files will continue to reside on the host, even if the container is stopped or destroyed.

The -v parameter can be called multiple times from the command prompt.  We call it once to map the config directory, and then a second time to map where we store our downloads (ie, /export/Downloads).  I've also chosen to map the file /etc/localtime on the host to /etc/localtime in the container.  This helps keep the clock inside the container synchronised with the clock on the host.  The container doesn't need to change localtime, so it's mapped using the option ro, which basically means it is mapped "read only".
-p 8080:8080
The -p option maps ports on the host system to ports in the container.  When we built our sabnzbd image, we configured it to listen on port 8080 (which is the default port that sabnzbd listens on).  However, traffic on port 8080 won't reach the container unless it is directed there by the Docker daemon.  This option basically tells the Docker daemon to listen on port 8080, and redirect that traffic to port 8080 inside the container (where sabnzbd is listening).  The ports don't necessarily have to match... Docker could listen on port 8000, and redirect to 8080.  But for the sake of simplicity, we'll leave them matching.
--name=sabnzbd
This option basically names our new container "sabnzbd".  If we don't specify a name, the Docker daemon randomly assigns one.
-d
This instructs the Docker daemon to run our new container in the background.
--restart=always
The --restart option determines if and when to restart the container in the event that it shuts down.  By specifying "always", the Docker daemon will start our container on reboot, or if the container crashes for whatever reason.  However, if you deliberately stop the container, it will stay stopped.  Keep in mind that the application running inside the container might stop (or crash) while the container itself continues to run; Docker will only restart things if the container itself dies.  This is something you need to be mindful of -- if you shut down sabnzbd from the GUI, the application won't automatically restart.  You'll need to go into the running container and restart the application, OR stop the container and start it again... at which point, our Start script will run.
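
In other words, the manual recovery path looks something like this;

sudo docker stop sabnzbd
sudo docker start sabnzbd

Both commands refer to the container by the name we assigned with --name.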

After running the sabnzbd container, we should be able to check and see it running by using the following command;
sudo docker ps
This gives you a list of all containers running, and a little bit of information about them.  Hopefully, if you've followed the commands from Part 1 and 2 of this guide, sabnzbd should be running.  You can now point your browser to your server, and it should be waiting for you.



Installing SABnzbd in Docker on OMV

For anyone reading this post, I'm going to assume you know what Docker is, you have it installed and that you are familiar with some of the basic Docker commands.

If you simply want to get SABnzbd up and running in Docker, you can find several images in the Docker repository.  If you want to create your own Dockerfile, with a build customised to your own preferences, then read on.  If you can't be bothered reading through the information, just scroll to the bottom of this post, where I'll summarise with a complete Dockerfile you can cut and paste.  It's been tested to work on Open Media Vault, and should work on any Debian system... in fact, it should probably work on any Linux system with Docker running.

As a starting point, go to the OMV interface and create a new group called "media" and a new user called "sabnzbd", and make the user sabnzbd a member of the group media.  You don't have to use these exact names; whatever suits your purposes is fine.

Then use ssh to connect to your OMV box -- the rest of these instructions will be via command prompt.

We want to find the userid and the groupid of the user and group we've just created.  So run the following commands;

getent group media
getent passwd sabnzbd
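
The output will look something like the following (the IDs shown here are invented; yours will differ);

media:x:1001:sabnzbd
sabnzbd:x:1002:1001::/home/sabnzbd:/bin/sh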

And take note of the groupid (it's the number in the group entry) and the userid (the first of the two numbers in the passwd entry).  Now create a working directory (for simplicity's sake, call it sabnzbd), change into the directory, and create a new file called Dockerfile using your preferred file editor (ie, nano, vi, emacs).  Start entering the following;

FROM debian:wheezy
MAINTAINER Your Name <your@email.com>
Ok, so this tells Docker that your container will be built using a Debian Wheezy template.  Now add the following;
ENV GROUP xxxx
ENV GROUPID xxxx
ENV USER xxxx
ENV USERID xxxx
ENV SERVERPORT xxxx
ENV CONFIGDIR xxxx
ENV DATADIR xxxx

This sets some environment variables that we'll use later in the script. Wherever you see the xxxx, you need to customise this with your information. We already know the GROUP, GROUPID, USER and USERID, so put that information in accordingly. The SERVERPORT is the port that sabnzbd will listen to for incoming connections. By default, this is 8080, but if you want to change it, you'll need to modify this variable. The CONFIGDIR is where you'll store any config information for sabnzbd. This is purely a personal choice, but I like to use /etc/downloaders/sabnzbd. Lastly, the DATADIR is the directory where you plan to store your downloads. On Open Media Vault, this is probably going to be a shared folder somewhere in /media.
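
Using the names from earlier in this guide, a filled-in block might look like this (the IDs are the invented ones from the getent example above, so substitute your own);

ENV GROUP media
ENV GROUPID 1001
ENV USER sabnzbd
ENV USERID 1002
ENV SERVERPORT 8080
ENV CONFIGDIR /etc/downloaders/sabnzbd
ENV DATADIR /export/Downloads

Next, add the following;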

## Prepare dependencies and then cleanup
RUN echo "deb http://http.debian.net/debian wheezy non-free" >> /etc/apt/sources.list.d/debian-nonfree.list

RUN apt-get update && apt-get upgrade -qy
RUN apt-get install python-cheetah par2 python-yenc unzip unrar git python-openssl sudo -qy
RUN apt-get clean &&\
rm -rf /var/lib/apt/lists/* &&\
rm -rf /tmp/*

Ok, this section handles the installation of the basic pre-requisites.  First we add the debian non-free repository to our list of sources.  This is because we want to install the non-free version of unrar.  We then run apt-get update and apt-get upgrade.  The flag "-qy" is necessary to stop apt from prompting you for any confirmation (which would break the Docker build).  After our sources are updated, we install the packages we need, and run a basic cleanup.

RUN git clone https://github.com/sabnzbd/sabnzbd /opt/sabnzbd

Once the basic pre-requisites have been installed, we take a copy of the sabnzbd code and put it in the directory /opt/sabnzbd.  You can put it wherever you like, but since it is running in Docker, it doesn't really matter.

## Add a sabnzbd user and media group
RUN groupadd -g ${GROUPID} ${GROUP} && useradd -u ${USERID} -s /usr/sbin/nologin -g ${GROUP} ${USER}
RUN chown -R ${USER}:${GROUP} /opt/sabnzbd
RUN mkdir -p ${CONFIGDIR} && chown -R ${USER}:${GROUP} ${CONFIGDIR}
RUN chmod u+rw ${CONFIGDIR}

Ok, this is where things get a bit funky.  Most Docker scripts assume that the application running inside the container will run as root.  Normally, running a web application as root has security issues.  But generally, since SABnzbd is being isolated inside a container, this should be less of an issue.

But, the problem is, any downloads created by SABnzbd will be written to the file system as root.  Depending on the permission structures you set up on your file system, this may or may not be an issue.  On my system, I've decided to set up a dedicated sabnzbd user that will exist on both the host system AND within the docker container.  Therefore, anything written by the user sabnzbd inside the container, will also be accessible by the user sabnzbd outside of the container.  In order to make this "work" though, the user ids need to be the same.  This is why we needed to get the groupid and userid for the user and group we created earlier.  Next add the following;

## Create Volumes
VOLUME ${CONFIGDIR}
VOLUME ${DATADIR}
Ok... one of the problems with containers is that they tend to be transient.  That is, they aren't necessarily designed for storing data permanently.  The intention is to spawn a container, based on a standard image, to be run as long as it is required.  When the container is no longer required, you can safely delete it, knowing that you can spawn a new container off the image whenever you want.

Of course, this creates a problem for things like configurations, which you probably want to save to be used again and again.  By specifying volumes, we are telling Docker to reserve these directories for permanently stored information.  Later, when we run the container, we'll map the volumes back to corresponding directories on the host system.  By performing this mapping, the data sits on the host system, but can be accessed inside the container.  If the container gets shutdown, you can still access the data inside volumes on the host.  If you delete the container, the data inside volumes remains untouched.

In the SABnzbd images, we'll specify two volumes.  The first, is a place to store configuration information -- which on my system is /etc/downloaders/sabnzbd.  The second volume is a place to store downloaded data, which on my system is /export/Downloads.  Next step;
## Expose the port sabnzbd will run on
EXPOSE ${SERVERPORT}
Ok, this exposes the particular port that SABnzbd will be running on.  When traffic is directed to your host system, firstly the Docker daemon needs to know to intercept that traffic, and forward it to a particular container.  And secondly, the container that receives the traffic needs to be configured to listen on that particular port.  The EXPOSE command basically configures containers built from our image to listen on a specific port.  In the case of SABnzbd, this is typically 8080, although you can configure it to something different on your system.  Next add the following to your Dockerfile;

RUN echo "#!/bin/bash" >> /opt/sabnzbd/Start.sh
RUN echo "sudo -u ${USER} /usr/bin/python /opt/sabnzbd/SABnzbd.py --config-file=${CONFIGDIR} --server :${SERVERPORT}" >> /opt/sabnzbd/Start.sh
RUN chmod +x /opt/sabnzbd/Start.sh
Ok, so this basically creates a start script inside your container.  The start script is run using bash, and contains a single command to run SABnzbd, using the parameters we've specified at the start of our Dockerfile.  Lastly, we use chmod to make the script executable.  Finally, add the last line to your Dockerfile;
ENTRYPOINT ["/opt/sabnzbd/Start.sh"]
This last step tells your SABnzbd image that the first thing a new container should do, is run the start script we just built in the previous step.  In other words, a new container spawned from our SABnzbd image, will immediately start running our start script, which has been designed to fire up SABnzbd.  Pretty simple really.

To Summarise


If you followed the instructions above, your Dockerfile should look something like the below (obviously, with the xxxx's replaced with your own personal configurations);

FROM debian:wheezy
MAINTAINER Your Name <your@email.com>

ENV GROUP xxxx
ENV GROUPID xxxx
ENV USER xxxx
ENV USERID xxxx
ENV SERVERPORT xxxx
ENV CONFIGDIR xxxx
ENV DATADIR xxxx

## Prepare dependencies and then cleanup
RUN echo "deb http://http.debian.net/debian wheezy non-free" >> /etc/apt/sources.list.d/debian-nonfree.list
RUN apt-get update && apt-get upgrade -qy
RUN apt-get install python-cheetah par2 python-yenc unzip unrar git python-openssl p7zip-full sudo -qy
RUN apt-get clean &&\
        rm -rf /var/lib/apt/lists/* &&\
        rm -rf /tmp/*

## Clone sabnzbd
RUN git clone https://github.com/sabnzbd/sabnzbd /opt/sabnzbd

## Add a sabnzbd user and media group
RUN groupadd -g ${GROUPID} ${GROUP} && useradd -u ${USERID} -s /usr/sbin/nologin -g ${GROUP} ${USER}
RUN chown -R ${USER}:${GROUP} /opt/sabnzbd
RUN mkdir -p ${CONFIGDIR} && chown -R ${USER}:${GROUP} ${CONFIGDIR}
RUN chmod u+rw ${CONFIGDIR}

## Create Volumes
VOLUME ${CONFIGDIR}
VOLUME ${DATADIR}

## Expose the port sabnzbd will run on
EXPOSE ${SERVERPORT}

RUN echo "#!/bin/bash" >> /opt/sabnzbd/Start.sh
RUN echo "sudo -u ${USER} /usr/bin/python /opt/sabnzbd/SABnzbd.py --config-file=${CONFIGDIR} --server :${SERVERPORT}" >> /opt/sabnzbd/Start.sh
RUN chmod +x /opt/sabnzbd/Start.sh

ENTRYPOINT ["/opt/sabnzbd/Start.sh"]
Now that your Dockerfile is complete, let's run this sucker!!  Check out Part 2, where I give you the basics for building your image, and getting it running.

Wednesday, January 21, 2015

Open Media Vault and Docker, Part 2

Continued from Part 1.

So Docker is pretty cool.  And I think it could solve a few problems I have.  I currently run Open Media Vault on an HP N40L microserver as my NAS, as an application server and as a general "do anything" box.  The problem is that any time I monkey around with the box, such as installing experimental software, I run the risk of breaking something.  And while it's not a serious business machine, I really don't want the potential disruptions.

In most business scenarios, you'd typically run TWO machines -- one to serve as a development and/or testing environment, and the other to serve as a production environment.  In a situation with constrained resources (like a small business), you might maintain a single physical server, but use virtualisation to maintain separate development and production environments.  And after looking at a number of different virtualisation options, Docker was the only one I could find that really suited my needs -- that is, open source, easy to use, well documented and supported, minimal overhead and ideally, compatible with tools that I'm already familiar with.


So, how do we get Docker up and running on Open Media Vault?

The Docker website has some basic instructions for running Docker on a Debian Jessie system.  There are also instructions for installing Docker on Debian Wheezy (which is the version upon which OMV Kralizec is based), but they are a little bit more involved -- since you need to install a backports kernel.  I suspect these instructions will probably work for OMV, but generally, installing a backports kernel is handled through the web interface.  So I recommend the following steps;
  • Follow the steps here to install the  OMV-extras plugin
  • Once the OMV-extras plugin is installed and working, you should see a tab for installing the backports kernel.  Once selected, it will take a little while to download and install.  You'll need to reboot afterwards.
  • After the reboot has finished, you'll need to ssh into your machine, and you can continue following the instructions from step 2 -- that is, run  
curl -sSL https://get.docker.com/ | sh
  • If you haven't done so already, you may need to install curl first (sudo apt-get install curl)
At the end of the whole process, you should have a working Docker installation on your OMV machine.  Now it's time to try installing an application.
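
A quick way to confirm everything is working is Docker's stock test image;

sudo docker run hello-world

If that prints its greeting message, the daemon is able to pull and run containers, and you're ready to go.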