$Id: mythtv_setup.html 53 2008-03-16 01:40:20Z isely $
Mike Isely <isely at pobox dot com>
If you have any comments or suggestions, please drop me a message.
MythTV is a killer Linux PVR application. You can find its home page at http://www.mythtv.org. There already exist plenty of other places where you can read about the wonders of this application, so I'm not going to evangelize it here. Rather, what is described below is my particular MythTV setup, assembled in early March 2008.
What makes this setup interesting (to me at least) is that the frontend systems run diskless. They each mount a single shared, read-only NFS-exported root file system from the backend. This has all sorts of good effects. For example, without a disk, there's less (a lot less) noise. And there's less heat - which means fans can spin slower or be eliminated entirely. Since the root file system is read-only, it's perfectly OK to just kill the power on any front end, with negligible ill effects. And also since the root file system is read-only, multiple frontends can share the same system installation. This means that it's possible to add more frontends just by throwing some more configuration data into the nearby DHCP and TFTP servers (and maybe a tiny amount into the file system itself - but really it's nearly all in DHCP).
I'm not kidding about the noise reduction here. For the longest time I had set up a single standalone mythbox (lyta, below). This machine housed a single Seagate 750GB SATA drive for both its operating system and program storage. I had to bolt one of those "flat-plate" fans to the bottom of the drive to help with cooling - HTPC enclosures are not known for good ventilation. The box was very usable, but in a quiet room it was audible, with a very definite single-pitch whine coming from the hard drive. With this new configuration however the drive has been removed, and with it the fan plate went away too. Now the box is so quiet that the only way you can tell it's on is with the power LED on the front!
I've mentioned this diskless aspect to other people and I've had multiple queries now asking for the details that make this a reality. This web page describes the answer. I've only just thrown this together. I'm sure with a lot more thought I can organize this better and make it more penetrable. However all the details are here. I suggest you study the "Vital stats" first and keep that at the back of your mind as you go through the various other sections of the page.
Vital stats:
In addition to the above, my home network includes a dedicated 24x7 server (host name: cnc). This system is the host for http://www.isely.net. It has been operational and online nearly continuously in various forms since roughly 2001. I mention this here because that server also provides the following services to my mythtv setup:
The DHCP service is on this server because it's always been there. I run a DHCP server there for other reasons, and so it was easier to just enhance it rather than set up another DHCP server on valen and then have to deal with keeping the two from stepping on each other. There is, however, no technical reason why all of this couldn't also be run on the mythtv backend machine in a setup without an otherwise dedicated network server.
The rest of this text dives into specific details of the various bits needed to make the above MythTV setup a reality. The general pattern is to take the reader through a logical progression. Most of the actual work is in making the cluster (backend server plus 2 diskless frontend systems) viable as generic machines. Then the remaining problems to solve are fairly stock MythTV issues (e.g. install the MythTV backend on valen, install the MythTV frontend in the NFS exported root, then do the usual configuration steps for a split MythTV setup). Here's a quick summary of the rest of this document:
This system is named "valen".
I tend to name all systems in my LAN after characters from the Babylon 5 television series. The MythTV backend is valen; the front ends are lyta and talia. The main LAN server is cnc (for "Command aNd Control"). Other hosts scattered around the network include sheridan, delenn, londo, vir, sinclair, and gkar.
A MythTV backend does not require a lot of CPU horsepower. The big CPU-sucking activity in MythTV is video rendering, and that happens in the frontend. If you were to use a dumb framebuffer capture card then CPU cycles would still be needed to compress that video deluge into something that can be spooled into a file. But if you instead use digital tuners (or analog tuners with hardware mpeg encoders, like those driven by the ivtv or pvrusb2 drivers), the act of video recording is just an exercise in spooling relatively modest bit rate streams of data to disk. About the only CPU intensive operation that might happen in a backend is commercial skip scanning - and even that doesn't have to happen in real time.
What a good MythTV backend does need is reasonable I/O throughput and as much storage as you can stuff into it. PCI slots are also very important if you are using internal tuner cards (as opposed to USB tuners or a network-centric tuner like the HDHomeRun).
So in consideration of the above, valen is set up here using an ancient 5-PCI slot dual Athlon SMP box - rescued from the scrap heap. It contained a pair of Athlon MP 1400+ processors, with .25GB of functioning PC-2100 RAM and an old broken video card. To that was added another .5GB of RAM, a junk box AGP video card of no consequence, and a 250GB Maxtor PATA hard drive for the operating system and the NFS exported root file system.
Into the PCI slots went a Promise TX-4 SATA controller, and two pcHDTV HD-3000 ATSC tuners. Three Seagate 750GB SATA drives were attached to the TX-4 controller.
A stock Debian system was installed onto valen's PATA drive. I built a custom kernel but the stock kernel should work just as well.
Important additional Debian packages installed included:
Valen's three 750GB hard drives were each set up with a single primary partition of type 0xfd ("Linux raid autodetect"). Then mdadm was employed to tie the 3 drives' primary partitions into a single RAID5 array, yielding a total capacity of 1.5TB. I set this up with a persistent superblock on each drive, thus no configuration files were needed in /etc and the system is able to auto-assemble the RAID array at boot time without any extra help.
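For reference, the creation step boils down to something like the sketch below; the device names are illustrative (they will differ on other hardware), and the single 0xfd primary partition on each drive was created beforehand with fdisk:

# Assemble the three partitions into one RAID5 array (persistent
# superblocks are the default, so no configuration file is needed):
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# Watch the initial resync:
cat /proc/mdstat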
With the RAID raw storage configured, I then used the usual sequence of operations to set up a logical volume spanning the array, and then formatted a JFS file system on top of it. I added this to /etc/fstab in the expected manner (as /mnt/mythtv_storage).
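That "usual sequence" is roughly the sketch below; the volume group and logical volume names here are mine for illustration, not necessarily what was actually used:

pvcreate /dev/md0
vgcreate vg_myth /dev/md0
lvcreate -l 100%FREE -n storage vg_myth
mkfs.jfs /dev/vg_myth/storage
mkdir -p /mnt/mythtv_storage
# And the corresponding /etc/fstab line:
# /dev/vg_myth/storage  /mnt/mythtv_storage  jfs  defaults  0  2
mount /mnt/mythtv_storage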
Why jfs and not ext3? Because jfs can delete large files in constant-time, while ext3's delete overhead grows with file size. This can be a significant effect when dealing with 30+GB recorded video programs. With that said, jfs has one disadvantage over ext3: You can't shrink a jfs file system. So if I ever had to shrink the RAID array, I'd have to backup and restore the entire thing. With ext3 (provided there's enough free space in the file system) this can be done online. However I felt the constant-time delete overhead was more important - and if I am ever faced with shrinking that storage, then I will likely be dealing with bigger problems than just moving bits around.
Various pages about mdadm can be found on the web; the mdadm man page in Debian is also fairly complete.
Information about lvm2 can likewise be found on the web.
While valen has its own operating system installation, it actually must have two installations on it - the second one being what ultimately gets read-only NFS exported to lyta and talia. The following tasks must be done correctly to make this work:
Each of these is solvable, with some thought...
One of the packages installed on valen is an unassuming little thing called "debootstrap". It is the key, providing a tool called debootstrap whose job is to initialize a directory hierarchy as a bare Debian system. Run the tool, specifying a target directory, a repository source, and a choice of which Debian release to install (e.g. etch, lenny, sid). When the tool is done, the specified target directory will be the root of a regular (though minimal) Debian installation. Then you can chroot into it and use your favorite package tool to install whatever additional packages might be desired. None of this affects the host's own Debian installation - but by exporting that target directory out via NFS, then another machine can use it as its own root file system.
For valen, I created the path /opt/nfsroot_mythtv as the new installation root area. Then I ran debootstrap to initialize it. The command was roughly something like this:
debootstrap etch /opt/nfsroot_mythtv http://cnc/debian/debian
The path http://cnc/debian/debian is the root of a local Debian mirror I have. Obviously for others this would be different. Having done the above, then I did this:
chroot /opt/nfsroot_mythtv
apt-get install aptitude
aptitude
exit
If you've never used chroot before, then some explanation is in order. The chroot command opens another shell and sets that shell's root directory to the specified argument. The commands that follow all execute inside that chroot'ed shell. The exit command at the end causes that shell to exit, returning control back to the non-chroot'ed parent shell. So what's the big deal with chroot? Well, once you have changed your notion of the root directory you've effectively moved into a sort of virtual file system environment. The chroot'ed shell simply can't see anything that isn't under the new root. Thus all commands executed in this shell will treat this area as its own system. So, apt-get here doesn't actually install anything on valen, but rather in the new freshly debootstrap'ed file tree. My preferred package tool is aptitude, so the first thing I did here was to use apt-get to install aptitude; then I ran aptitude and used it to interactively install various other packages.
This is a significant point here. Even after installation is done, I can always go back and tweak things further (and have done so multiple times). I need only use chroot to enter the nfsroot_mythtv area and do whatever I need. Want to update to the latest security fixes? No problem: chroot into it and then run apt-get upgrade as usual.
Why even do this? Because this is the way to maintain the front ends. One never actually needs to log into a front end and run the package manager. I couldn't do that anyway, since the root file system there will be read-only. And it doesn't make sense to even try, since both lyta and talia share the same file system. What does it mean to talia if I go mucking about on lyta? So to do maintenance on the front ends (all of them, at once), I just log into valen as root, chroot to /opt/nfsroot_mythtv and do the work once within that shell.
The chroot environment isn't a real virtual environment of course. While I might be working in a virtual file system, the process and network environment is still that of the underlying host, valen. This means, in theory, that if I install a package within the chroot environment that it could start a daemon which makes no sense or otherwise interferes with valen. However it seems that most Debian packages (at least the ones that matter) are smart enough to figure this out. Packages which might do daemon start / stop tend to recognize that they are being installed in a chroot'ed environment and therefore bypass messing with the process or network environment. (Either that or they abort those steps harmlessly).
Once the root file system has been initialized with debootstrap, the debootstrap package should not be needed any longer and can be uninstalled from valen.
The base chroot environment is very minimal. Other things will need to be added before it becomes usable. Other packages that must be installed include:
WARNING: There's one other special step needed here. I spent hours pulling my hair out until I figured it out. Trust me, you don't want to go through what I went through. If you miss this step, then mythtv won't be able to properly fast-forward or rewind any program AT ALL: Run the command "dpkg-reconfigure tzdata" and follow the prompts to set the timezone up correctly within this new installation. This doesn't seem to happen with debootstrap and thus it won't happen at all unless done explicitly. I have learned the hard way (really!) that if the timezone on the front end doesn't match the timezone on the backend, then seeks while watching a program will stutter and hang!
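Concretely, from valen that looks something like this (the cat is just a sanity check; /etc/timezone is where Debian records the configured zone, and it should match what valen itself reports):

chroot /opt/nfsroot_mythtv
dpkg-reconfigure tzdata
cat /etc/timezone
exit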
Remember that all of the above after the debootstrap step must take place while chroot'ed into /opt/nfsroot_mythtv; if that detail is skipped the installation will be on valen itself, which (while not fatal) is obviously not the desired action.
At least one kernel package must also be installed into the chroot'ed area. Otherwise the booted client won't have access to any kernel modules. In my case I built and installed two custom kernels, one compiled for Core 2 Duo and one compiled for Pentium 4.
There are still some other packages to be installed, but I'll get to that later on where the description can be given in the correct context.
And of course, since this is in the end a normal Debian setup, lots of other packages can be installed into the /opt/nfsroot_mythtv area. For example, I pulled in the full xfce environment so I could have a desktop GUI available for testing. I installed various truetype fonts as well.
I use an initrd image to bootstrap the frontends into the NFS-hosted root file systems. The kernel will boot with the initrd image, then the initrd image will execute the steps needed to first mount the exported root file system from valen, then pivot into the new root. It is also possible to build a custom kernel image that knows how to NFS-mount the root file system directly. But the approach I did here was to let the initrd image do all the work - thus making possible the use of a normal stock kernel.
Though the initrd image is actually loaded via tftp (see further), it needs to be set up inside the NFS exported root file system since it is based on the kernel & modules installed there as well. The Debian initramfs-tools package makes this step easy: Based on a few simple bits of configuration data plus the installed kernel files, this package will generate an initrd image appropriate to the hardware and the type of boot-up desired.
First the initramfs-tools package must be configured. Using a chroot'ed shell (as usual), I edited /etc/initramfs-tools/initramfs.conf and set the following options:
BOOT=nfs
DEVICE=eth0
NFSROOT=auto
Those are all important because they configure the to-be-built initrd image for an NFS-mounted root file system. All the legwork to load the appropriate drivers and mount the root file system is handled by the initrd image.
I also set up the MODULES option:
MODULES=most
That option tells initramfs-tools how to determine what modules are to be pulled into the initrd image. While I used "most", there are other choices as well:
- most - This causes nearly any module that might even be suspected of being needed to be included. The result is a large initrd image, but it almost always works.
- dep - This causes initramfs-tools to try to guess the list of modules based on the currently running system. That choice is useless here, since the running system is valen, which is different from the frontend systems.
- netboot - This adds modules appropriate for a network-booted machine. I should probably be using this, but I haven't tested it yet.
- list - This tells initramfs-tools not to guess anything but to just pull in explicitly listed modules and their dependencies.
The "list" option provides the most control over the listed modules and can be used to create the smallest possible initrd image. The list of modules is pulled from /etc/initramfs-tools/modules (whose trivial format is the same as the normal system file /etc/modules). On other systems I usually just use the list option and specify the bare minimum for a working root file system and the ability to use the console (i.e. ext3, sd_mod, whatever libata driver is appropriate for the motherboard, i8042, and atkbd). For talia and lyta things are different - no libata stuff is required to boot. I haven't really investigated this so at the moment I just use "most" and accept the fact for now that I have a 35MB initrd image.
With "/etc/initramfs-tools/initramfs.conf" (and possibly also "/etc/initramfs-tools/modules") configured the update-initramfs command must be executed to generate the new initrd image. With a stock Debian kernel installed there will be an initrd image already present (generated automatically as part of the kernel package installation). With a custom configured Debian kernel however the initrd generation is not automatic and so the image must be created. Again, as with nearly anything done with the exported NFS root file system, this needs to be done from a chroot'ed shell:
For a new initrd image, the correct syntax is:
update-initramfs -c -k <version>
If the initrd image already exists, then the correct syntax is:
update-initramfs -u -k <version>
The difference is important because otherwise update-initramfs complains. Annoying. The "<version>" argument is the exact version of the kernel for which the initrd image is being created / updated. Look in /boot and it's possible to guess the correct version - it's everything following the "vmlinuz-" in the name of the installed kernel image.
In my case talia and lyta are dissimilar frontend systems - one has a Pentium 4 and the other has a Core 2 Duo processor. I built custom kernels for each, so there's two kernels installed in the NFS root file system on valen. So I ran update-initramfs twice, once for each installed kernel.
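In my case that meant roughly the following, run (as always) from a shell chroot'ed into /opt/nfsroot_mythtv, with the version strings matching the two installed kernels:

update-initramfs -c -k 2.6.23.16-p4-isely1
update-initramfs -c -k 2.6.24.2-core2-isely1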
Once initramfs-tools has been set up, it's a simple matter to rerun update-initramfs any time a new kernel is installed; the configuration is applied each time. If a Debian stock kernel is installed, this step happens automatically each time.
In order to make tftp setup easier, I also set up a few symbolic links in /boot:
cd /boot
ln -s vmlinuz-2.6.23.16-p4-isely1 linux.p4.stable
ln -s initrd.img-2.6.23.16-p4-isely1 initrd.p4.stable
ln -s vmlinuz-2.6.24.2-core2-isely1 linux.core2.stable
ln -s initrd.img-2.6.24.2-core2-isely1 initrd.core2.stable
As one might guess, the two custom kernels I'm using are currently versioned as "2.6.23.16-p4-isely1" and "2.6.24.2-core2-isely1" (yes, talia is running a later kernel than lyta). What the above does is set up some simple aliases for the two kernels and initrd images. These links will be referenced in the tftp server setup and pxelinux setup. The idea is that later, when I move to newer kernels, it's a simple matter to adjust these symbolic links in this single spot rather than also having to fix up the tftp and pxelinux configuration every time.
As stated earlier, the root file system is NFS exported from valen to the two frontends, talia and lyta. It is a read-only export - which makes it possible for talia and lyta to actually share the same file system. However, we can't just mark the whole thing read-only and expect the system to work. Various parts of Linux expect to be able to write to various areas, so how do we keep the underlying exported root file system on valen read-only?
There are 3 possible strategies: copy the entire root file system into RAM when the frontend boots, overlay a writable RAM file system on top of the read-only root with unionfs, or keep the root read-only and shunt just the writable areas off to a RAM-based file system.
The first strategy is simple and blunt. But the frontend machine is going to need a lot of RAM to hold the entire image. Also the boot time is going to suffer badly if 300+MB of stuff has to be transferred to the frontend each time it boots. This can be improved by carefully limiting what packages are in the image (reducing its size). It can also be improved by using a cramfs style initrd image - however that makes it read-only again and so one of the other two strategies would have to be employed anyway to allow it to be modified.
The second strategy - overlaying a local writable RAM file system with unionfs - is appealing; after all with this approach one does not need to care for which parts become writable. It's relatively simple to implement. In fact various live CD distributions (e.g. knoppix) use this strategy. But unionfs is not really a prime-time file system type (yet) and requires patching the kernel to use. I'd rather stick to a vanilla kernel if at all possible.
The third strategy sounds like a lot of work, but it turns out to be pretty easy. And in the case of the Debian distribution, part of the work for this has already been done anyway.
A stock Debian install already sets up a tmpfs mount (tmpfs is a dynamically sized RAM based file system, ideal for what is needed here) and it uses this to stuff some read/write items away which otherwise have no reason to ever persist across a reboot. This file system is mounted as /dev/shm. (Actually /dev is also a tmpfs mount used by udev but it's not really relevant to this discussion.)
So here's what I did:
Please note that all of the steps below are relative to /opt/nfsroot_mythtv on valen - chroot there first and the rest will work itself out as mentioned earlier.
Step 1: Create the following template directories:

/shm_template
/shm_template/generic
/shm_template/host-talia
/shm_template/host-lyta
Step 2: Create an init script, /etc/init.d/shm_template (marked executable), containing the following:

#!/bin/sh
HOSTNAME=`hostname`
echo "Host name is $HOSTNAME"
case "$1" in
    'start')
        echo "Setting up temporary storage area contents"
        rsync -aHx /shm_template/generic/ /dev/shm/
        if [ -e /shm_template/host-$HOSTNAME ]; then
            echo "Including $HOSTNAME-specific files in storage area"
            rsync -aHx /shm_template/host-$HOSTNAME/ /dev/shm/
        fi
        ;;
    'stop')
        ;;
    'restart')
        ;;
    'force-reload')
        ;;
    *)
        ;;
esac
Step 3: Arrange for that script to run early during boot:

cd /etc/rcS.d
ln -s ../init.d/shm_template S04shm_template
Step 4: Within the chroot, make /dev/shm a symbolic link pointing at the template tree:

cd /dev
ln -s /shm_template shm
Step 5: Find each area that must remain writable, move it into the template tree, and leave a symbolic link behind. The pattern for each item is:

mkdir -p /dev/shm/foo/bar
cd /foo/bar
mv item /dev/shm/foo/bar/
ln -s /dev/shm/foo/bar/item .

The list of areas to move (i.e. values to substitute for /foo/bar/item) includes:
/etc/hostname
/etc/ntp.conf.dhcp
/root
/tmp
/var/account
/var/lib/initramfs-tools
/var/lib/initscripts
/var/lib/logrotate
/var/lib/misc
/var/lib/nfs
/var/lib/ntp
/var/lib/udhcpc
/var/lib/urandom
/var/lib/xkb
/var/lock
/var/log
/var/run
/var/state
/var/tmp
/var/yp/binding
Also, do the same for /var/lib/alsa except in this case
duplicate the alsa part into the host-$HOSTNAME area (using
corresponding relative paths as with the other items). The file
/etc/ntp.conf.dhcp probably won't exist yet, but the symbolic
link needs to be there because that file will be written later by the
network configuration as each frontend comes up.
Step 6: Replace /etc/mtab with a symbolic link to /proc/mounts:

cd /etc
rm mtab
ln -s /proc/mounts mtab
Again a reminder: All of the above was performed on valen using a shell that was chroot'ed into /opt/nfsroot_mythtv.
Steps 1-3 set up a simple template area which is automatically copied (using rsync) into the tmpfs mount during the boot process. Thus anything that needs to be initialized need only be moved into one of the /shm_template areas. The generic directory is for anything that applies to any host, while the host-$HOSTNAME areas are for things specific to a host. In my case, the /var/lib/alsa area has to contain hardware-specific mixer settings (the two frontends use different types of hardware), so it has to go into the host-specific area. All other areas go into the generic template tree.
Step 4 sets up a symbolic link that has no effect on the frontends but is a useful shortcut for step 5. The /dev area here is not actually used on the client systems because as part of the boot process, a tmpfs mount is laid over /dev/ anyway - the area will be controlled by udev so it can all live in RAM. So, by placing a symbolic link named shm in /dev which points to /shm_template then we can make the future read/write area within the read-only root on valen map to the template tree and thus look the same as it will end up on the booted client. Said another way, without that handy shortcut, then the paths given in step 5 would be a little harder to understand.
Step 5 is the key piece: Here I found all the places that should remain writable and migrated them into the template area. This is something that could be scripted, but I found the process so straight-forward that I didn't bother. Also, it only needs to be done once, so by the time I had figured out the pattern I had most of the moves done anyway. A couple important points here:
mkdir -p /shm_template/host-talia/var/lib
cd /var/lib
mv alsa /shm_template/host-talia/var/lib/
ln -s /dev/shm/var/lib/alsa .
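As noted above, the per-item moves in step 5 could be scripted. I didn't bother, but a rough sketch of such a script might look like this (run from within the chroot'ed shell; the ITEMS list here is abbreviated and illustrative - fill in the full list from step 5):

#!/bin/sh
# Move each listed path behind the /dev/shm indirection and leave a
# symbolic link in its original location.
ITEMS="/tmp /var/lock /var/log /var/run /var/tmp"
for p in $ITEMS; do
    d=`dirname $p`
    mkdir -p /dev/shm$d
    mv $p /dev/shm$d/
    ln -s /dev/shm$p $p
done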
Step 6 is a special case. The file /etc/mtab keeps track of the currently mounted file systems. Normally it is kept up to date by the mount program. However it can't be symbolically-linked like the other cases for two reasons: First the mount program apparently doesn't understand following of symbolic links for this file. (Why? I don't know. But the fact remains.) Second, if this file isn't in the root file system, then the first few updates will get messed up anyway since it can't be modified until the file system in which it lives has been mounted! It turns out that /proc/mounts is the same format and mostly contains the same data - and it is automatically kept up to date by the kernel. So this problem is finessed by pointing /etc/mtab to a kernel-maintained variation of the same content. This isn't perfect; I understand that loop device mounts won't be handled correctly. But for a mythtv frontend that is a non-issue.
Some of those read-write areas probably already contain some stuff that can simply be deleted. For example, I erased any old logs under /var/log (but preserved the subdirectories). Old log data is useless to a running front end and is otherwise just additional junk that would be copied every time a frontend is booted.
Even with the above hacks in place, the result is still a reasonable Debian-maintainable setup. Because of the /dev/shm symbolic link trick, all the shunted read-write directories are still transparently accessible while inside a chroot'ed shell on valen. So Debian package management (when done within that chroot'ed shell) generally still works just fine. The only extra work here that might be needed happens if a package is added or removed which might affect the list of moved directories (listed in step 5). That's something to watch for, but my experience so far is that this generally is not a very big issue.
With the above completed, I had a true read-only root file system. The read-write parts are all shunted off to a tmpfs mount, which is of course thrown away when the system is shut down. For the intended use here, that's really just fine anyway.
There are still more changes to make here however. These changes have to do with ensuring that the booted client's network-related (and host-specific) configuration parameters are correctly set.
Because the root file system is read-only and because I wanted this to be shared by multiple frontends, I really wanted to avoid encoding any host-specific information into the file system. Network configuration falls into this category. Rather than coding this directly, I set things up such that network-related information can be fetched from the network via DHCP.
So the first step is to install and set up a DHCP client. As mentioned earlier, the Debian distribution supplies several choices here. However I found that most were unsuitable to the task. The big hangup was that the clients, when tied in via ifupdown, would "down" the interface first before fetching information from the server. For a normal system, temporarily downing an interface is not really a problem. But if that interface is being used to access the root file system, then downing the interface is death to the host - NFS connectivity is lost and the host will spend forever waiting on NFS in order to retrieve, for example, the ifconfig binary.
I determined that it was simply unsafe to use ifupdown to control the network interface through which the NFS root file system is accessed.
Another problem I found with numerous DHCP clients is that they did either an incomplete or a sloppy job dealing with the incoming configuration data from the DHCP server. This is not really the fault of the clients; Debian's way of handling dynamic configuration has kind of "evolved" over time and tends to be a bit of a free-for-all. I needed a DHCP client that allowed for better control over handling of the configuration data.
With these realizations, I settled on the udhcpc Debian package. This installs a lightweight DHCP client - which was apparently written for embedded systems. Unlike the other DHCP clients, this one does not directly muck with the system. Rather, upon receiving information from the DHCP server (or other state changes), the udhcpc program simply tosses all the data into environment variables and calls a shell script to deal with it. The udhcpc package installs with a default set of scripts, which unfortunately is still not good enough.
To solve the configuration problem, I wrote a new shell script, designed to be called by the udhcpc client daemon. The script itself, along with a hastily constructed Debian package called "udhcpc-support" containing the script and some supporting files & configuration, can be downloaded from my site. A Subversion sandbox (which includes the Debian package setup) can be checked out from this URL: http://www.isely.net/svn/udhcpc_stuff. If you have Subversion installed, you can check out a snapshot with this command (this will create a "udhcpc_stuff" directory in the current working directory where the command is issued):
svn checkout http://www.isely.net/svn/udhcpc_stuff/trunk udhcpc_stuff
With the Subversion sandbox checked out, and with the proper Debian build packages installed (I think "fakeroot", "build-essential", "dpkg-dev", and "devscripts" should do but I don't remember exactly), you should be able to build the Debian package directly with this command:
cd udhcpc_stuff
fakeroot ./debian/rules binary
The newly built package will be in the parent directory of the sandbox area.
So, with the custom Debian package ready, I just installed it into the NFS exported root area (with the usual chroot'ed shell inside of /opt/nfsroot_mythtv area and a "dpkg -i udhcpc-support_1.0_all.deb" command). In addition to installing the magic udhcpc configuration shell script, this package also installs a network configuration script called /etc/init.d/udhcpc_support (which starts udhcpc and configures it to use the new configuration data script) and automatically sets up a symbolic link in /etc/rcS.d/S38udhcpc-support pointing to the configuration startup script.
With the package installed, DHCP-fetched information is taken care of. This includes IP address / netmask / broadcast info, host name, NTP configuration, default route, and DNS resolver configuration (via the resolvconf package that I installed earlier).
I also had to do the following additional bits of configuration in the NFS exported root file system. Again, a chroot'ed shell in /opt/nfsroot_mythtv on valen was used to do this:
I initialized the file /etc/defaultdomain to "isely.net". In reality the udhcpc-support package should handle this (the information is available via DHCP), but I just hadn't gotten to it yet.
The following was placed into /etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
It's important that the lo interface be listed first; if it isn't then things that depend on the loopback interface will be touched before the loopback interface is ready. (Why? I don't know; I haven't tried to find out.) The strangeness with the eth0 interface is equally important. Even though I'm not using ifupdown to manage the ethernet, other parts of the Debian boot process won't run unless it thinks eth0 is ready - like other NFS mounts. So what this does is to set up a dummy eth0 configuration that effectively does nothing. But it's enough to convince the rest of the boot scripts to proceed.
Since I'm using NIS, I also edited /etc/passwd and /etc/group to enable NIS access. To do this, I appended +:::::: to /etc/passwd and I appended +::0: to /etc/group.
Though the root file system is read-only, I also run a central file server which exports /home to other machines. Since the mythtv account itself is hosted this way, I also added the following to /etc/fstab:
cnc.isely.net:/home /nfs/cnc/home nfs defaults,auto 0 0
The path /home is also set up as a symbolic link pointing to /nfs/cnc/home. (And don't get any ideas; NFS access is firewalled away from the outside Internet!)
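For completeness, setting that up inside the chroot amounted to something like this (assuming /home in the freshly debootstrap'ed tree is still an empty directory):

mkdir -p /nfs/cnc/home
rmdir /home
ln -s /nfs/cnc/home /home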
With the NFS root file system ready for use, it still needs to be exported by valen. This step requires editing /etc/exports on valen, followed by a command to tell the kernel to re-read it. The contents of valen's /etc/exports for this exported file system are:
/opt/nfsroot_mythtv *.isely.net(ro,subtree_check,no_root_squash)
My internal LAN domain is isely.net, so the specified pattern allows any machine in the LAN to access this file system. The "ro" attribute is important; it forces this to be a read-only export and ensures that no client can ever modify the exported tree. The "no_root_squash" attribute is also important in that it tells the server to treat the root user from the client as root here, which is fine for this read-only area, and I suspect (haven't tested) that things on the client would go awry otherwise.
This command causes the kernel NFS server to re-read the exports file:
exportfs -a -v
This just needs to be done once to make the file system immediately available. It is also done implicitly each time at initialization as part of the Debian boot process.
Topics described here include details about the frontend hardware and the steps needed to enable NIC-based booting.
A good MythTV frontend suitable for HD viewing needs lots of CPU, a graphics subsystem capable at least of solid hardware accelerated video scaling, decent bandwidth to the GPU's framebuffer, and a reasonable NIC. All that and quiet operation too. In my case, I want these frontends to operate diskless, which entails two more requirements: the machine must be able to boot from its NIC, and it must have enough RAM to run comfortably with no swap space at all.
I built my first MythTV machine as a standalone box in 2004, buying the Shuttle SB61G2BV3 barebones enclosure that became lyta. I tried to configure it well enough to do HD rendering, but I really didn't know what I was doing at the time and mostly got lucky. When I purchased it, I equipped it with 512MB of RAM and a 2.4GHz Pentium 4 processor. It came with an onboard "Intel Extreme Graphics" GPU, which was good enough for NTSC video, but that poor GPU was hopelessly underpowered for rendering any kind of scaled HD stream. I installed a Hauppauge PVR-250 and threw in a 300GB PATA hard drive. That was good enough for a non-HD PVR, and I ran the system like that for a while.
About 2 years later I swapped out the PVR-250 for a pcHDTV HD-3000, installed an XFX 5200 AGP video card (nvidia GPU), and attached a second pvrusb2 analog tuner via USB. I also upgraded the hard drive to a 750GB Seagate SATA drive. This system was able to render HD - when using XvMC - but just barely.
A few months ago I swapped out the XFX 5200 AGP video card for an XFX 6200 (more recent nvidia GPU, still passively cooled). That sped things up further and now the box can do a fair job of rendering HD, even without needing XvMC.
The final change was for this new split configuration. I removed the Seagate drive and reconfigured the BIOS for network booting (see further). My only concern at this point was the RAM. Given the requirements I had listed, lyta was fairly well configured by now, but it still only had 512MB. However so far it seems to work fine with no swap and that (by today's standards) small amount of RAM.
Since building lyta, I had learned a few things. I discovered the wonders of the Intel GMA-950 GPU paired to an Intel Core 2 Duo processor. This lesson was learned about a year ago when I built a new desktop system - and soon after discovered that it simply kicked butt rendering 1080i. Lyta by comparison wasn't even close. Where lyta would chew up 80% to play a MythTV 1080i stream, the desktop machine I had built barely got above 20%. And that's with all open source drivers! I had also learned that the Intel Core 2 Duo just seems to run a lot cooler than the old Pentium 4, while being much faster as well. The stock fan supplied with a retail Core 2 Duo is whisper quiet.
So it was obvious that this new frontend was going to have an Intel Core 2 Duo in it and a GMA-950 or better onboard GPU. The only thing left was to find a barebones system that had this capability, and if it had a DVI connector, so much the better. This I found with the Asus P2-P5945G. As an added bonus the box is actually smaller than the older Shuttle PC and I got it for even less money. That sealed the deal. I added an Intel E6420 Core 2 Duo to this and 2GB of RAM. This became talia.
To be useful as a frontend, there really must be an IR receiver. For now I don't have a very good solution - I put an old PVR-250 into lyta just to use its IR receiver. And for talia, I attached a pvrusb2 I had laying around, again just for its IR receiver. The LIRC Hauppauge I2C driver is used for both. It seems kind of silly and wasteful to install hardware mpeg2-encoding tuner peripherals just to use their IR receivers, but it's what I had laying around unused at the time. I intend to improve this situation at some point, if only to save a few watts and eliminate an external peripheral (in talia's case).
The network boot protocol used by the frontends implements an apparently common standard called "Intel PXE". (I say "standard" because every machine I've encountered with a built-in NIC seems to implement this. My laptop has this capability and even vmware virtual machines can do it too.) Basically PXE means that the NIC will use a DHCP transaction to configure its interface and learn what image file should be downloaded from a TFTP server at a given address.
To support the PXE standard, a nearby DHCP server must be configured to respond to the boot request. The DHCP server will return a block of configuration data to the PXE client. Then the PXE client will use this information to first configure itself, then contact a TFTP server and retrieve a boot image. That image is then dispatched.
The Linux boot process starts with that first downloaded image. There isn't really enough capability to simply start a kernel from the PXE ROM. Rather, the Debian "syslinux" package includes a special binary called "pxelinux.0" which can be booted by a PXE client. This is actually a bootloader, designed to boot from a NIC.
When pxelinux.0 runs, it immediately learns of the PXE-fetched configuration information and recontacts the TFTP server. The pxelinux.0 bootloader then retrieves a syslinux style configuration file, specifying all the usual stuff one needs to boot a kernel, e.g. kernel image name, initrd image name, kernel command line options, etc.
Having fetched its configuration file, the pxelinux.0 bootloader then uses it to again go back to the TFTP server to fetch the kernel image and possibly (likely, for network booting) an initrd image as well. Then the kernel is dispatched in the normal manner.
A normal Linux kernel of course will very soon want to mount its root file system, usually expected to be a local disk. But in my setup that isn't true. Rather, the initrd image that is loaded, previously constructed using "initramfs-tools" (another Debian package), knows that it is doing an NFS root style boot. The boot script in that initrd image then contacts the NFS server (whose name and path were supplied via DHCP) and mounts the root file system. The root mount point is then pivoted over to the NFS mount and booting continues using the init program in the new root file system. After this point, the remaining boot sequence is governed by the scripts in the remote file system.
To summarize, the boot sequence is: (1) the NIC's PXE firmware configures itself via DHCP and fetches pxelinux.0 from the TFTP server, (2) pxelinux.0 fetches its configuration file from the TFTP server, (3) pxelinux.0 fetches the kernel and initrd images and starts the kernel, and (4) the initrd image mounts the NFS root file system and the boot continues from there.
So what needs to be done to support all of that? Step 4 is just the result of all the earlier work to set up the root file system, so there's nothing left to do there. That leaves the other steps, and the details are described below.
So the first step is to get the frontends to boot over the network...
With the hardware set up, the actual local software details on the frontends are pretty trivial: Just set each machine to boot from its NIC. In both cases this enabled "Intel PXE" standard booting.
The exact BIOS settings were a little different in both cases. IIRC for talia all I had to do was put the NIC into the boot device list in the BIOS and everything else worked. For the older lyta frontend, I had to drop into a second NIC BIOS installed on the machine, enable it for booting, then I had to go back to the main BIOS and add the NIC BIOS to the boot list. But either way the result is the same.
For a normal disk-based client using DHCP, the only really important information is the IP address, netmask, and possibly default gateway. But for a diskless client, the DHCP server has to supply additional information. Specifically, the server must also supply the IP address of a TFTP server and the name of an image to boot.
For my setup, the DHCP server is not on valen but rather on cnc, my main LAN server. This is not required, but I did it this way because I already have a DHCP server functioning there so it was easier to add to it rather than set up another server and worry about the two servers getting into a fight.
Here's the relevant block of information for my cluster. This is kept in /etc/dhcp3/dhcpd.conf on the (Debian) host where the DHCP server is running (cnc):
default-lease-time 600;
max-lease-time 7200;
authoritative;

shared-network isely-home {
    option domain-name "isely.net";
    option nis-domain "isely.net";
    option domain-name-servers 192.168.23.2;
    option netbios-name-servers 192.168.23.2;
    option ntp-servers 192.168.23.2;
    subnet 192.168.27.0 netmask 255.255.255.0 {
        range dynamic-bootp 192.168.27.50 192.168.27.69;
        option broadcast-address 192.168.27.255;
        option subnet-mask 255.255.255.0;
        option routers 192.168.27.1;
    }
    subnet 192.168.23.0 netmask 255.255.255.0 {
        option broadcast-address 192.168.23.255;
        option subnet-mask 255.255.255.0;
        option routers 192.168.23.2;
    }
    host talia {
        hardware ethernet 00:1a:92:5e:b0:42;
        fixed-address 192.168.23.16;
        filename "pxelinux.0";
        option root-path "/opt/nfsroot_mythtv";
        option host-name "talia";
        server-name "valen";
        next-server 192.168.23.15;
        option log-servers 192.168.23.15;
    }
    host lyta {
        hardware ethernet 00:30:1b:b5:0b:f4;
        fixed-address 192.168.23.11;
        filename "pxelinux.0";
        option root-path "/opt/nfsroot_mythtv";
        option host-name "lyta";
        server-name "valen";
        next-server 192.168.23.15;
        option log-servers 192.168.23.15;
    }
}
Now, realize that there is a lot of stuff here, both because I tried to keep as much host-specific information out of the root file system (placing it here instead), and also because this server is doing more than just booting up lyta and talia. But I have to show this in context for it to make sense. For someone else's cluster setup this might be considerably simpler.
First, the "shared-network" grouping is present because the server is actually handling two subnets (192.168.23.x and 192.168.27.x) on the same wire. The reasons why are not relevant to MythTV, but without this, then the entire 192.168.27.0 stanza can go away and so can the surrounding "shared-network" grouping (the options at the top of the grouping can then go inside the remaining 192.168.23.0 stanza or just be made global to the file).
The "authoritative" keyword just tells the DHCP server that it is the authority on my network.
The options near the top, outside of the subnet stanzas, provide default configuration data that is specific to my network and generic to anything on it. This information is given to all DHCP clients. So here I'm specifying my domain ("isely.net"), NTP, DNS, and WINS server (all 192.168.23.2 - which is cnc's internal network IP address).
In order to service a particular subnet, it must be defined. Thus I have the 192.168.23.0 subnet - my normal default LAN setup is 192.168.23.0/24. Options inside this stanza provide additional configuration data that will be handed to clients which are given IP addresses inside this subnet. So here I have the broadcast address, subnet mask and default route (pointing through cnc) specified.
The two host stanzas have the interesting bits. There's one stanza each for lyta and talia, with custom information for each. The first field in each stanza is the host's MAC address - the DHCP server uses this to match up against the requesting DHCP client, thus recognizing when it is talking to lyta or talia. These are the MAC addresses printed on the consoles when these hosts try to boot. Obviously in other configurations these MAC addresses are going to be different. The other bits are fairly self-explanatory: I specify fixed IP addresses for each (who says DHCP is only for dynamic addresses?), the boot image name ("pxelinux.0"), the TFTP server IP address (192.168.23.15 - valen's internal network address), the server's name (valen), the frontend's name, and the NFS root file system mount point - I don't even have to hardcode that into the initrd image or the pxelinux.0 configuration!
To boot additional frontend systems it's just a simple matter to create additional host stanzas appropriate to those systems.
For further reading, I strongly recommend looking up the man page for dhcpd.conf.
This DHCP configuration data is the only bit not actually kept on valen; it is also the key by which the frontends establish their own identities.
In my setup, the atftpd Debian package is installed on valen to supply the tftp server, and the service itself is configured via inetd. I have the tftp transfer directory set to /tftpboot (set as an argument in /etc/inetd.conf).
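For reference, the resulting /etc/inetd.conf entry looks roughly like the line below (the exact options will vary, and the Debian package can normally set this entry up for you; the important part for this setup is the trailing /tftpboot argument):

tftp  dgram  udp  wait  nobody  /usr/sbin/in.tftpd  /usr/sbin/in.tftpd  /tftpboot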
Four entities need to be available via TFTP: the pxelinux.0 bootloader image, the kernel images, the initrd images, and the pxelinux configuration files.
The pxelinux.0 bootloader image is supplied by the syslinux package which I had previously installed on valen. Rather than copying the image into /tftpboot, I set up a symbolic link which makes future updates of the syslinux package automatic:
cd /tftpboot
ln -s /usr/lib/syslinux/pxelinux.0 .
To manage the kernel and initrd images, I set up additional symbolic links. Remember that the kernel and initrd images already exist in the NFS exported root file system, so they only need to be sym-linked here to be available to the tftp server:
cd /tftpboot
ln -s /opt/nfsroot_mythtv/boot/linux.*.stable .
ln -s /opt/nfsroot_mythtv/boot/initrd.*.stable .
Note that these links themselves point at symbolic links. This means that the "stable" kernel and initrd image names do not change, so future kernel updates for the frontends only require that the symbolic links inside of /boot within the NFS exported root (i.e. /opt/nfsroot_mythtv in my case) be updated.
The syslinux-formatted configuration file is a little trickier. The pxelinux.0 bootloader uses a set of guesses to find the right configuration file. This is important because it makes possible the use of host-specific configuration files. All files are searched for under the subdirectory pxelinux.cfg. The pxelinux.0 bootloader guesses the file name by applying a transformation to the MAC address of the booting machine's NIC. First I created the area:
cd /tftpboot
mkdir pxelinux.cfg
Then I dropped in two configuration files, one appropriate for booting a Pentium 4 machine, and one appropriate for booting a Core 2 Duo machine (the only difference between the two being the names of the image and initrd images to be loaded):
/tftpboot/pxelinux.cfg/p4
LABEL linux
KERNEL linux.p4.stable
APPEND root=/dev/nfs initrd=initrd.p4.stable
/tftpboot/pxelinux.cfg/core2
LABEL linux
KERNEL linux.core2.stable
APPEND root=/dev/nfs initrd=initrd.core2.stable
Then I created additional symbolic links to map those files to the MAC addresses for lyta and talia:
cd /tftpboot/pxelinux.cfg
ln -s core2 01-00-1a-92-5e-b0-42
ln -s p4 01-00-30-1b-b5-0b-f4
The numeric-octet file names match the transformation generated by one of the guesses made by the pxelinux.0 bootloader. Note that the leading 01- is a constant and not a part of the MAC address. I'm not sure why it is required, but I figured it out after studying logged failures from the tftp service as pxelinux.0 went through its guessing. Obviously the MAC addresses are specific to lyta and talia; for other setups these will be different values.
To boot additional frontend systems it's just a simple matter to create another symbolic link here appropriate to the MAC address of the new system.
Note that the two configuration files here have barely anything at all - just enough to pick the right kernel and initrd image. Information about the NFS mount point, NFS server, etc, all are fed into lyta and talia via the earlier described DHCP configuration.
At this point it is possible to bring up the diskless front end machines and have Linux fully functioning. The hardest work is done. All that's left is mostly a normal split MythTV configuration.
Next I installed the mythtv-backend and mythtvweb packages (from the debian-multimedia repository) onto valen. Dependencies pulled in everything else (though I selected php5 for the web server module rather than the default dependency of php4). The mythtvweb package required zero configuration.
On valen, I set up a master backend. All of the configuration via mythtv-setup is just as one would expect. Remember that for the database server name, one must use the actual server name (i.e. "valen") not the default value of "localhost". In a split configuration the database is accessed from multiple machines, so the name used to reach it must work properly from all involved machines.
The only unusual detail I had to take care of involved mysql. A normal Debian installation of mysql restricts access to the server daemon to just the local machine. That's no good for lyta and talia. To fix this, I edited /etc/mysql/my.cnf and changed the bind-address option from its default of 127.0.0.1 to 0.0.0.0 (obviously this is not something one would want to do on a hostile network).
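For clarity, the relevant fragment of /etc/mysql/my.cnf ends up looking like this:

[mysqld]
bind-address = 0.0.0.0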
For my system, I configured MythTV for the 2 HD-3000 digital tuner cards. The DVB driver from the vanilla 2.6.23.16 kernel works just fine - once I installed the appropriate firmware files. For the recordings directory, I left the backend set to its default and replaced the target directory with a symbolic link pointing into an appropriate area on valen's RAID array.
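The symbolic-link trick looks something like the sketch below; note that both paths are assumptions for illustration - check what mythtv-setup actually reports as the default recordings directory on your system, and pick whatever directory name on the RAID array suits you:

mkdir -p /mnt/mythtv_storage/recordings
chown mythtv:mythtv /mnt/mythtv_storage/recordings
mv /var/lib/mythtv/recordings /var/lib/mythtv/recordings.orig
ln -s /mnt/mythtv_storage/recordings /var/lib/mythtv/recordings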
While once again chroot'ed into the NFS exported root file system on valen, I installed the following additional packages:
With that done, what follows is a description of the final steps...
Since lyta and talia share the same root file system provided by valen, then they also share the same /etc/X11/xorg.conf file. This is an issue because the two machines use different GPUs (Intel vs Nvidia) and completely different display configurations. One way to deal with this is just to use the shm_template host-specific area to keep separate copies of the file per-host and establish a symbolic link for the original location. However I used a different approach that retained a single configuration file: multiple ServerLayout stanzas.
The xorg.conf file wraps everything up into a named ServerLayout stanza. Normally there is just one such stanza and the X server uses it by default. Instead, I created two ServerLayout stanzas, one named "layout-lyta" and the other named "layout-talia". Each points to a separate Screen stanza, "nvidia screen" and "Philips Screen", respectively. Those screen declarations then tie in the proper xorg driver and monitor configuration.
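In outline, the relevant pieces of the shared xorg.conf look something like this (abbreviated; the Screen stanzas named here, along with their Device and Monitor sections, are not shown):

Section "ServerLayout"
    Identifier  "layout-lyta"
    Screen      "nvidia screen"
EndSection

Section "ServerLayout"
    Identifier  "layout-talia"
    Screen      "Philips Screen"
EndSection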
When the X server is started, the desired layout to use is passed as an argument to the X server using the "-layout" X server option (see further).
Another issue with the shared X server configuration should not be an issue at all, but we all have Nvidia to thank for it: OpenGL acceleration. Remember that both the Intel and Nvidia drivers have to be installed in the same file system. While the open source intel xorg driver nicely integrates with DRI (as it should), the Nvidia xorg driver goes its own way. When installing the Nvidia driver, pretty much the entire OpenGL software stack gets replaced. Unfortunately this is an either/or; it seems not possible to have both co-exist. Worse still, the Nvidia driver apparently requires its software stack to be installed to even use the driver. This one issue is the single, solitary problem that I could not properly resolve in the shared root file system without some seriously bad hacking. In a more conventional Linux installation, the root file system is always private to a single host, so while Nvidia's ripping out and replacing the OpenGL software stack is repugnant it does not cause any problem since there's nothing else in that installation that would need to compete with it. But this is a shared root and so the problem arises. I punted. Fortunately, OpenGL is not required to run MythTV and since these are dedicated MythTV frontend machines, I did not dig any deeper. Rather, I just installed the binary Nvidia driver, messing up OpenGL for the Intel driver, and left it as-is. But this would be a serious issue if I were setting these machines up for general purpose use.
As I mentioned earlier, the IR side of this is kind of hacked up right now. There's a PVR-250 installed in lyta and a PVR-USB2 connected to talia. Though they are different hardware devices (one being USB-connected), they share compatible IR receivers. This made it easier to configure since the LIRC setup could therefore also be kept in common.
Because the IR situation was common between the two frontends, I could treat the IR setup as a single-instance in the NFS exported root file system. So I just installed lirc and configured it as one normally would for a single Hauppauge IR receiver.
Sound hardware is considerably different between lyta and talia. While the issue of driver installation is trivially handled in the usual Linux manner (i.e. let udev sort it out), the two devices have completely different mixers. Part of Alsa includes saving / restoring the mixer settings, and this file must therefore be specific to the host. I could not punt this problem by leaving the mixers completely unset, because the default setting disables audio output. So I did the following twice (once each for lyta then talia):
On valen, I created the following directory:

mkdir /opt/nfsroot_mythtv/shm_template/host-$H/var/lib/alsa

where "$H" is the host name of the frontend being set up.

I booted up the frontend, then logged in as root. I ran alsamixer in a terminal window, then tweaked the mixer settings to what I needed. After exiting alsamixer I ran "alsactl store -f /tmp/asound.state". This saves the mixer settings to the named file in /tmp (which is a writable directory).

Finally I sent that file back to valen, to a spot which will get read back at boot:

scp /tmp/asound.state valen:/opt/nfsroot_mythtv/shm_template/host-$H/var/lib/alsa

where "$H" is again the host name of the frontend being set up. The shm_template stuff in the root file system will cause this file to be used as the initial mixer settings when the frontend is next booted.

The MythTV frontend software setup again was more or less what one would expect for a split configuration. And again, a critical detail is to ensure that the database host name in the configuration is set to "valen", not "localhost". The MythTV configuration must be done on the frontend, not inside a chroot shell. This is because the MythTV configuration is held in the database, not the file system (well, except for the database name itself, which is kept in a file in the mythtv user's home directory - which for my setup is a normal read/write file system area exported from my main file server).
In order to establish an acceptable WAF (wife acceptance factor), one cannot be required to attach a keyboard, log in as mythtv, and execute mythfrontend just to watch TV. The frontend must behave like an appliance. Turn it on, hit some buttons on the remote, turn it off again later.
To make this happen, two additional things must happen automatically: the mythtv user must be logged in on the console without any manual intervention, and the X server plus mythfrontend must then be started as part of that login.
The first requirement is met with the installation of the rungetty package (done earlier) and some configuration of /etc/inittab. The rungetty package is a replacement for getty that can be set up to automatically log in as a specific user. I modified /etc/inittab to change run-level 2 such that rungetty gets executed on tty1 with arguments appropriate to logging in the mythtv user. Specifically, while inside a chroot'ed shell inside of /opt/nfsroot_mythtv on valen (as usual), I changed this part of /etc/inittab:
# Note that on most Debian systems tty7 is used by the X Window System,
# so if you want to add more getty's go ahead but skip tty7 if you run X.
#
1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
3:23:respawn:/sbin/getty 38400 tty3
4:23:respawn:/sbin/getty 38400 tty4
5:23:respawn:/sbin/getty 38400 tty5
6:23:respawn:/sbin/getty 38400 tty6
to look like this:
# Note that on most Debian systems tty7 is used by the X Window System,
# so if you want to add more getty's go ahead but skip tty7 if you run X.
#
1m:2:respawn:/sbin/rungetty tty1 --autologin mythtv
1:345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
3:23:respawn:/sbin/getty 38400 tty3
4:23:respawn:/sbin/getty 38400 tty4
5:23:respawn:/sbin/getty 38400 tty5
6:23:respawn:/sbin/getty 38400 tty6
In other words, for run level 2 I replaced the getty on tty1 with a
rungetty instead that logs in as user mythtv.
One nice feature of this approach is that the entire mechanism can be disabled by switching run levels. For example if I log in as root via ssh to talia and issue "telinit 3", then the init process will shut down mythtv completely and restore tty1 to conventional operation - useful for maintenance work.
Solving the second step is accomplished through the mythtv account's .profile, the script that is executed when the user logs in. Rather than just blindly starting X here however, I wanted things to be a little smarter. The only place where something needs to be automatically started is when logging in on tty1 on either talia or lyta. Here's my .profile for the mythtv account:
export HOSTNAME=`hostname`
if [ -e $HOME/autostart-$HOSTNAME ] &&
   [ -z "$DISPLAY" ] &&
   [ $(tty) == /dev/tty1 ]; then
    while [ 1 == 1 ]; do
        . $HOME/autostart-$HOSTNAME
        sleep 10
    done
fi
So what does this do? It looks for a file of the form "autostart-$HOSTNAME" where "$HOSTNAME" is the name of the host where the login is taking place. If the file exists, if we're on tty1, and if we're not already in an X environment, then the file is treated as a script and executed inline. If the script returns, we wait 10 seconds and just do it again (which is useful if mythfrontend crashes).
For my setup, the names autostart-talia and autostart-lyta are both symbolic links that point to mythtv_scripts/start_x_mythtv, which is a script I wrote. And that script is a single line:
startx $HOME/mythtv_scripts/start_apps-$HOSTNAME -- -layout layout-$HOSTNAME
which starts the X server with a custom session file (see further) and a specific X server layout. That layout business was mentioned earlier; remember that I have a single shared xorg.conf that contains configuration data for both frontend setups. It's at this point, with the "-layout" option, where the correct ServerLayout stanza is picked. Since "$HOSTNAME" holds the local host name, each frontend gets the right layout.
In the previous step, a script of the form mythtv_scripts/start_apps-$HOSTNAME was passed to the startx command as the session script. Again, with the $HOSTNAME expansion, host-specific session files can be set up. Here's the one for talia:
cd /home/mythtv
HOSTNAME=`hostname`
xset s off
xset -dpms
date >>mythtv_logs/frontend-$HOSTNAME
mythfrontend 2>>mythtv_logs/frontend-$HOSTNAME
There's actually not a lot there. It boils down to this: disable the X screen saver and DPMS blanking, append a timestamp to a per-host log file, and run mythfrontend with its error output appended to that same log.
The log file is always appended; it will grow over time without end. Someday I might actually zero those files (one for talia, one for lyta), but since these are in the mythtv account's home directory, and there's over 300GB of free space back on cnc, I'm not going to worry about these for a while. :-)
That mythfrontend startup script is very generic, however lyta has a crucial difference that forced me to set up a startup script for it:
cd /home/mythtv
HOSTNAME=`hostname`
xrandr -s 720x440
xset s off
xset -dpms
date >>mythtv_logs/frontend-$HOSTNAME
mythfrontend 2>>mythtv_logs/frontend-$HOSTNAME
The difference is the xrandr command. This is because lyta is coupled to an old Sony WEGA 36" HDTV monitor. It's a 4x3 screen but the electronics will switch the tube to a 16x9 aspect ratio when it gets a 1080i (or 540p) video signal. I have the X server configured for that monitor to support both screen geometries (and I have specific modes set in mythfrontend depending on the resolution of the program being viewed). But I want the initial geometry to be a 4x3 setting so that mythfrontend starts in full screen.
Perhaps in a future revision of this very long note I'll include information about the WEGA's dual mode setup. If you want to see this sooner, ask me.
That's everything. I hope.
Feel free to e-mail me (address at the top of this page) if you have any questions or just want to say hello...
Mike Isely