Saturday, December 30, 2006

Open Source Religion

Last week, Susie and I provided the topic for the small Quaker meeting we've been attending as visitors (neither of us is officially a Quaker). The topic we picked: Open Source.

The results? Spectacular. Almost everyone in the group understood and embraced the philosophy of Open Source. Many went home after the meeting to dabble with Open Source products, or followed up by asking questions. We also provided Ubuntu cookies as refreshments.

While there are some actual Open Source religious movements out there, it is impressive that the Religious Society of Friends seems to take so well to the idea.

Sunday, December 10, 2006

Pericles Lives!

Pericles is now open for business! It was only down for an hour, coming back online at 1:14. Everything I set out to complete was done, but I had to scramble to fix a few loose ends. Namely, I forgot to unpack one of the websites during the prior step because its archive had been placed in the wrong folder.

The system is running much faster. I'm sure there are still mistakes, so I'm going to go through and test each site meticulously to make sure things are smooth.

Hurray for Open Source!

Saturday, December 9, 2006

Transferring the Hosted Content

Well, here I am. I took a break from Friday evening until this evening, then got back to work, and I have just finished the terrible, nasty job of transferring the hosted site content over to Pericles.

Compress, copy, uncompress, change a few paths, add an entry to /etc/hosts to "trick" the server into thinking the DNS already points here when it's really still on the old box, and edit /etc/apache2/pericles.conf (I include this from the main httpd.conf file) to tell it to pull from 192.168.0.1 instead of the real VirtualHosting IP. Run apache2 -k restart, and try it out in Firefox. Wash, rinse, repeat. About 35 times.
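For the curious, one iteration of that cycle looked roughly like this (the site name and archive path are placeholders):

# unpack one site's archive into the web root
cd /var/www
sudo tar xzf /tmp/example-site.tar.gz

# pretend the DNS already points at this box
echo "192.168.0.1 www.example-site.tld" | sudo tee -a /etc/hosts

# after pointing its VirtualHost in /etc/apache2/pericles.conf at 192.168.0.1:
sudo apache2 -k restart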

One of the things I had to do was set permissions on the web files so that my scripts were able to write to the folders they needed to. In Ubuntu, the default Apache2 user seems to be www-data.
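For me, that meant commands along these lines (the upload folder is just an example):

# give the Apache user ownership of the folders my scripts write to
sudo chown -R www-data:www-data /var/www/example-site.tld/uploads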

After testing all the sites, it is now time to set up the necessary FTP accounts with access to their proper folders, change the IP *back* to the real one in pericles.conf, install a fresh grab of the MySQL database, change the IP of the machine itself, and then physically swap boxes and bring the new server online. After that, I expect chaos :-) Just kidding. I think I've checked everything out. I'm also limiting myself to three hours to finish this; otherwise I'll hold off until another night so that this operation doesn't run into the morning.

I'll report back again once more progress has been made.

Ubuntu Cookies

I saw this post and decided that the world needs more Ubuntu cookies.

I actually wrote a very long and detailed post about how I made these, *but* through some freak accident, I selected all my text, opened the context menu, and selected 'insert dummy lorem ipsum'. Firefox+Blogger wouldn't let me undo, so you get the abridged version now.

I started out by mixing up the dough using the recipe provided by Joseph Hall. It turned out the consistency of frosting... I've made sugar cookies before, and that dough was more like soft play-dough. I added 3/4 cup more flour, but it didn't help. Before I chilled the dough, I added the food coloring so the dye could be dispersed evenly.

I'm doing the whole thing with a minimum of special tools: just my hands, parchment paper, flour, and a large cutting board (and something round I still have to find).



Since the dough was so soft, it was a lot easier to roll the logs, then place them on the parchment paper, then flatten them. Otherwise they kept breaking apart as I moved them.

The main thing I didn't care for in Joseph Hall's version of these cookies was the mid-section: it was too big in proportion to the rest of the pieces compared to the actual logo. So I made sure to make mine smaller, although I hope it's not too small.



I started at 8:00pm; it's now 10:20pm, time to roll out the piece-between-the-head-and-arms. After they freeze, it'll be time to assemble the whole thing.



I accidentally tried to add the impressions for the head into the main section... oops.

10:40pm: If you ever do anything like this, make sure to measure the circumference of your center log and figure out how wide to make the wrapper. (Hmm, I should probably do that for the outer wrapper.) I got it right on the third try. Luckily, the dough was stiff enough for me to unroll it when I got it wrong. I have a little yellow left, but the other pieces were all the right length.

11:15pm: I took the pieces for between the head and arms out and let them thaw while I added the indentations to the main log. I started by using a dry erase marker, but ended up using my thumbs. The dividing piece was hard to stick on because of all the flour I had used, but I don't think I could have rolled out the dough without the flour, so perhaps some water would have helped stick the pieces together. I just hope the final product doesn't fall apart when I cut it.

Adding the heads was easy, even though they kept snapping. I kept the whole thing rolled up in parchment paper to hold it together while it freezes up again.



I can see that the final wrapping piece will need to be wider, so I'm thawing it out while I type this.

11:30pm: All finished! Now I understand the large center piece! It would have been much easier, but I think I'd still rather have the logo in the right proportions. I'll let it freeze for an hour, then cook a few and see how they worked out.



12:30am: The dough was pretty solid, so I cut four inches off the "bad end" where the pieces weren't all the same size, sliced them, and stuck them in the oven. The biggest problem was that the various sections didn't want to stick together, partly because they were all frozen, so there wasn't any soft dough to stick to. My solution is to set a piece of the log out so it can all thaw, then solidify it again.



12:45am: BLAST! The cookies spread! (A lot.) I think there was either something very wrong with the recipe, or something very wrong with one of my ingredients.



I'll have to try this again with a dough recipe I know doesn't spread.

Wednesday, December 6, 2006

Migrating MySQL Data

It's time to migrate a copy of the MySQL data so that I can begin testing the websites running on Pericles in a closed environment before I let this out into the wild. First, I stop the MySQL daemon:
sudo /etc/init.d/mysql stop

The data lives in /var/lib/mysql

I'm cool with that, but I'm going to replace it with my existing data, so first I'll rename it to preserve the original:
cd /var/lib
sudo mv mysql mysql-original
sudo mkdir mysql
sudo chown mysql.mysql mysql

Next, I unpack my MySQL backup, which was archived from my replication slave a couple of days ago. Since I'm going to make this box the new master, I'll use this data as a testbed first, and replace it with a fresh copy when I'm really ready to switch masters. My data is currently about 1 gigabyte, which shrinks to 111 megabytes compressed, so it takes about five minutes to copy and longer to unpack. I should point out that InnoDB and MyISAM data on Linux and Windows are "binary compatible", so the files can be copied directly across, but you need to make sure your MySQL configuration files have the same innodb settings in them, or else you'll have trouble. I should also mention that the two MySQL version numbers are very close and up to date, so the fields in the user tables and so forth are in the same format. I wouldn't recommend migrating a MySQL 4.0 database to a MySQL 5.0 environment using this technique, because the user fields changed a bit.
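The unpack itself was just a tar extraction into the empty directory created above, followed by making sure mysql owns everything (the archive name is illustrative):

cd /var/lib/mysql
sudo tar xzf ~/mysql-backup.tar.gz
sudo chown -R mysql:mysql .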

I deleted the master.info file so that this server won't try to connect to the replication master, as the slave it was copied from was programmed to do. I'll configure it to be a replication master when I do the real thing.

I had to change bind-address to 0.0.0.0 in /etc/mysql/my.cnf to get MySQL to bind to all network interfaces (I need to be able to connect from several networks). This is OK, because the mysql user table restricts which users are allowed to connect from which hosts. I wanted to make the change in /var/lib/mysql/my.cnf, which seemed to be what the heading in /etc/mysql/my.cnf recommended, but copying the cnf file there and changing it made no difference, so I resorted to changing the /etc/mysql/my.cnf version of the file.
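The relevant line in /etc/mysql/my.cnf ends up reading:

bind-address = 0.0.0.0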

Starting up MySQL I see this:
error:  'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)'

Now I need to add back the debian-sys-maint user that I lost by overwriting the user tables. This is easy, because Debian/Ubuntu leaves the randomly generated password sitting unencrypted in /etc/mysql/debian.cnf.

I did this by logging into MySQL from the command line interface as root, and executing the following commands:
USE mysql;
-- Recreate the maintenance account
CREATE USER 'debian-sys-maint'@'localhost';
-- Set its password to the one recorded in /etc/mysql/debian.cnf
UPDATE user SET password = PASSWORD('PasswordCopiedEarlier') WHERE user = 'debian-sys-maint';
GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost';
FLUSH PRIVILEGES;

I now stopped and started the MySQL server:
sudo /etc/init.d/mysql stop
sudo /etc/init.d/mysql start

This worked like a charm; I no longer see the error message, which means debian-sys-maint can do its magic.

Converting Hardcoded Pathnames

The next step on my adventure with Pericles is to convert all of the hard-coded pathnames from F:\Web to /var/www (as well as any other hard-coded paths I find). I've been pretty careful not to use hard-coded paths in unnecessary places, putting most of them in simple configuration include files and so forth, because I knew from the beginning that this day would come.

I typed grep and it turns out that I have a convenient version of Turbo grep (it comes with any Inprise/Borland product of the last decade), but GNU grep should work just as well. The syntax for Turbo grep that I'm using is:

grep -di "f:" *.ph*>hardcoded

I'm using *.ph* because I am searching all *.php and *.ph files (I typically use *.ph for include files instead of *.inc). The -di means search subdirectories and ignore case, and I keep my data and web files on the F: drive. I'm redirecting the output to a file named hardcoded so that I can use it as a checklist. I'll do the same thing again, like this:

grep -di "c:" *.ph*>>hardcoded

This grabs all references to anything on the C: drive (I know I refer to an executable font-conversion tool in one place, for example), and the double angle brackets append the result to my previous file instead of overwriting it.
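If you're following along with GNU grep instead, the rough equivalent (recursive, case-insensitive) would be:

grep -ri --include='*.ph*' "f:" . > hardcoded
grep -ri --include='*.ph*' "c:" . >> hardcoded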

Now that I have my checklist, here's the plan:

I want to convert all the code, while still on the Windows box, to support Linux based on a single configuration change. I will make an include file with one function in it that takes any path given as a parameter and, if it starts with /var/www, converts it back to F:/Web. This will allow Linux to be the native language with a "patch" for Windows support. This should have been part of my plan from day 1 (two years ago), but it didn't occur to me. The idea is that I can later replace this function with an "identity" function, such as function linpath($a) { return $a; }, in order to use raw Linux paths instead of Windows ones.

I will also do similar adaptations on a situational basis as I run into any F: or C: references that can't be solved with the above include file.

I anticipate some difficulties with file permissions, since Apache on Linux is locked down more than Apache on Windows. My main resolution will be to consolidate the affected items into consistent areas that the Apache process has access to read. If I have any chroot issues, I can overcome them with mount --bind, as explained in my article on setting up FTP.

This path translation will probably be the most tedious task in the Pericles project. I'll report back when I have finished.

4:18 P.M. - Upon more deliberate consideration, I have determined to abandon this undertaking and instead change the paths manually. Why? There seems to be only about one per website, with an odd exception here or there. The trouble crops up when deciding exactly how to include the file containing the path-mapping function without using a path for itself. Furthermore, even if I install it into the default include path, it seems silly to include a file only to help correctly include another file. I may as well just change the paths when the time comes.

Fixing rndc error with bind9

While setting up DNS on Pericles I did run into one snag:

rndc: connect failed: connection refused

The first thing to know is that in Ubuntu Server's default setup, the /var/log/syslog file contains errors relating to bind and rndc startup. I watched this file and found some syntax errors as I tried to resolve this problem, and I recommend you do the same. Here is my solution:

Careful, this will overwrite your rndc.conf (run this while in the /etc/bind folder):
rndc-confgen > rndc.conf

Open the new rndc.conf. First, take the hyphen out of all the rndc-key names; rndckey is what I ended up needing. I don't know why. I think maybe bind9 removed support for the hyphen in these names, but rndc-confgen didn't know it.

Now, copy out the bottom section (the commented-out part); we're going to paste it into the top of named.conf.local and uncomment it. Save your changes to rndc.conf. After pasting the section into named.conf.local, change the part inside allow { } from 127.0.0.1 to localhost; for a similar reason as before, this version seems to want a name instead of a hardcoded IP.
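After those edits, the block pasted into named.conf.local should look roughly like this (your generated secret will differ; it's elided here):

key "rndckey" {
        algorithm hmac-md5;
        secret "...";
};

controls {
        inet 127.0.0.1 port 953
                allow { localhost; } keys { "rndckey"; };
};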

I killed the named process and tried starting it again, more than once. I found an error in my syntax by looking at /var/log/syslog, and after fixing this it worked without any error.

DNS with bind9 on Ubuntu

I just finished setting up bind on Pericles, and it wasn't too bad.

Bind was already installed via the LAMP option on the Ubuntu Server disc; its configuration files are found in /etc/bind. I have a tool, written as a Windows console application, that dynamically cranks out my forward files for me based on templates, so I ported that over and ran it with wineconsole. It worked. I had to make a change in the named.conf.local file, because for some reason bind on Linux seems to require a full path in the zone lines:
zone "whatever.tld" IN { type master; file "/etc/bind/forward/whatever.tld.zone"; };

On Windows, I didn't need /etc/bind/ prefixing those, because the paths were relative to the conf file. No big deal; it was an easy change.

I dropped a script into /usr/local/sbin called redns

It does the following:
/etc/init.d/bind9 stop
/usr/local/sbin/dnsgen.sh
/etc/init.d/bind9 start

This simply stops bind, regenerates the forward files using my tool (the dnsgen.sh file launches it with wineconsole), and then starts up bind again.
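One detail: the script won't run unless it's executable, so set its permissions like the other tools in /usr/local/sbin:

sudo chmod 755 /usr/local/sbin/redns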

If you get an rndc error, here's how I fixed it.

I will port the dnsgen tool over to a native application at some future point, but I'm in a hurry right now because my WAMP server is starting to have MySQL blackouts that require a reboot. It seems to have something to do with a file handle getting a lock stuck on it, because stopping and starting the MySQL daemon doesn't improve the situation.

Configuring PHP

I'm now trying to get PHP working on Pericles.

The first thing I should point out is that I started by doing a vanilla LAMP install from the Ubuntu Server disc. This means I already had Apache2 and PHP5 installed "out of the package", but they aren't configured adequately for my needs.

I dropped a simple file in /var/www that would echo the output of phpinfo() so that I could compare it with my existing WAMP server's setup.
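The test file is a one-liner; I saved mine under a name like /var/www/info.php (pick whatever name you like):

<?php phpinfo(); ?>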

If you don't have Apache working yet...

You can use php5-cli (install this with Synaptic) as an alternative to view the phpinfo() at a shell prompt:

php
<?php phpinfo(); ?>
(Press CTRL+D)


A plain text rendering of phpinfo() will appear, which you can scroll back to view in your console buffer.

Here are the major differences that I need to adjust:

  1. magic_quotes_gpc needs to be turned off. It's a stupid thing; blast them for making it default to on.
  2. post_max_size and upload_max_filesize need to be increased, because I have people uploading large megapixel images through HTTP forms, and 2M doesn't quite cut it any more these days.
  3. The gd module needs to be enabled.
  4. The zip module, or a substitute for it, needs to be enabled (I use this to automatically unpack files uploaded to a designated FTP account for daily processing).

I notice a few other differences, but I think they're minor. If I run into problems with them later, I'll follow up with details on how to fix them.

I discovered that by installing the php5-gd package with Synaptic and then restarting Apache, I gained gd2 support. That was easy. To restart Apache:
sudo apache2 -k graceful

This does a graceful restart (it won't force any still-open connections to close). I realize this is a new box, so there won't be any connections hanging open anyway, but it is good to get into this habit early on.

I reloaded the phpinfo() test file, and there is now a section for gd which gives the version as "2.0 or higher", and everything looks enabled (freetype, t1lib, gif, jpg, png, wbmp).

I think the zip module I was using was part of PECL. I know PECL is similar to PEAR, and now that I think about it, I'll need PEAR support too, so if it hasn't already been done, install php5-cli and php-pear with Synaptic. The reason we need php5-cli is that pear is a command-line utility and requires the command-line version of PHP in order to run. Don't worry: php5-cli and libapache2-mod-php5 peacefully coexist. I opened a shell, typed pear, and there it was!

On a whim, I typed pecl, and it also runs. It looks like pear and pecl come as a pair (no pun intended).

Back to Synaptic, we need to install php5-dev because we will need a tool called phpize in order to complete the next step. php5-dev has several dependencies that it will automatically install.

Now, the magic command:
sudo pecl install zip

When it's finished, it will say: You should add "extension=zip.so" to php.ini
cd /etc/php5/apache2
sudo editor php.ini

Do what it says. Add the line extension=zip.so at the very end of the file, because that's where the automatically added extensions (mysql, mysqli, gd) ended up. You may also want to add the same line to /etc/php5/cli/php.ini so that you have zip support when you use php for shell scripting.

Save your changes and:
sudo apache2 -k graceful

If you look at the phpinfo() output again, you'll now see the zip section near the bottom. This is really easy.

While we're editing these two ini files, let's search for and change the following lines to these new values:

memory_limit = 16M
post_max_size = 16M
magic_quotes_gpc = Off
upload_max_filesize = 10M

Remember to set these for both /etc/php5/apache2/php.ini and /etc/php5/cli/php.ini

Another tool I sometimes use (when I need to programmatically submit a POST):
sudo pear install HTTP_Request

Well, I think that's everything I use. Restart apache2 one last time and see if it all works.

Tuesday, December 5, 2006

Welcome to Your Linux Filesystem

If you're trying Linux (or Unix) for the first time, you may be alarmed when you first see the filesystem. Windows users (who install fresh) are used to seeing something like this:

Volume in drive C has no label.
Volume Serial Number is EXPE-NSIV

Directory of C:\

12/05/2006  04:04 AM                 0 AUTOEXEC.BAT
12/05/2006  05:34 AM        12,286,482 AVG7QT.BAT
12/05/2006  04:13 AM               241 boot.ini
12/05/2006  04:04 AM                 0 CONFIG.SYS
12/05/2006  05:28 AM    <DIR>          Documents and Settings
12/06/2006  01:22 AM       267,964,416 hiberfil.sys
12/05/2006  04:04 AM                 0 IO.SYS
12/05/2006  04:04 AM                 0 MSDOS.SYS
08/03/2004  01:38 PM            47,564 NTDETECT.COM
08/03/2004  01:59 PM           250,032 ntldr
12/06/2006  01:21 AM       402,653,184 pagefile.sys
12/06/2006  10:44 PM    <DIR>          Program Files
12/05/2006  05:26 AM    <DIR>          RECYCLER
12/06/2006  11:28 PM    <DIR>          WINDOWS
              10 File(s)    683,201,919 bytes
               4 Dir(s)     984,405,442 bytes free


The picture in Linux is very different:

bin    cdrom  etc   initrd      lib         media  opt   root  srv  tmp  var
boot   dev    home  initrd.img  lost+found  mnt    proc  sbin  sys  usr  vmlinuz


As one continues to use Windows, the root directory of the boot drive gains a few additional files over time (fewer in the latest versions than in the past; mostly all you'll see now is logs). In Linux, the root directory almost invariably remains pristine, without much variation from the 22 items listed above. Nobody ever bothered to explain these to me, so I thought this might be a useful subject to cover for those becoming acquainted with Linux for the first time.

/bin/ ... This is like the Windows folder (or more accurately, like the old DOS folder), in that it holds the basic system tools that all users may access. In Linux, these are considered "essential" programs.

/boot/ ... This folder holds the files for the boot loader. It is similar to ntldr on a Windows system.

/cdrom/ ... This is just a convenient symbolic link to /media/cdrom (see below)

/dev/ ... This folder holds devices, which in Linux are treated like files. The items listed in here are presented in a way similar to serial ports (COM1, COM2) and parallel ports (LPT1) in Windows or DOS.

/etc/ ... System-wide program settings are held here. This is similar to the HKEY_LOCAL_MACHINE hive in the Windows Registry, or the "C:\Documents and Settings\All Users\Application Data" folder. For newer and more elaborate packages, a specific subdirectory within /etc is usually created to hold system-wide settings for the package.

/home/ ... Each local user gets a home directory here. This is similar to the "Documents and Settings" folder in Windows, with the root of each home directory being considered similar to the Windows "My Documents" folder.

/lib/ ... *.so libraries (sets of compiled functions in shared object files, used by many programs). These are like the *.DLL files found in C:\WINDOWS\System or C:\WINDOWS\System32.

/lost+found/ ... This is where files recovered during a filesystem check (fsck) are placed. I like to think of it as similar in its temporary nature to the Windows RECYCLER folder, but it doesn't really serve the same purpose. (Windows places deliberately deleted files in RECYCLER until you empty the Recycle Bin.) In reality, the Windows chkdsk utility saves recovered fragments directly in the root.

/mnt/ ... This is where mount points go for temporarily mounted filesystems. On Windows, you would use A: B: and possibly D: E: or F: for this sort of storage, but in Linux drives get mapped to a mountpoint in the root ("/") filesystem.

/media/ ... This is similar to /mnt, but specifically for removable media such as a CD-ROM drive which is typically found in /media/cdrom. On a Windows system, this would be found as D:, E:, or F:.

/opt/ ... Optional software packages are installed into this folder. It is similar to "Program Files" on a Windows system. In reality, this folder is slightly confusing and hardly used; it arguably serves the same purpose as /usr/local (see below).

/proc/ ... This holds a virtual filesystem with kernel and process information. There really isn't a Windows equivalent, but it gives access to information somewhat similar to what can be found in the Microsoft "System Information" tool (click Start, then Help and Support; click the Support button on the toolbar; under Tools and Links on the left side, click Advanced System Information; then in the details pane, click View detailed system information. They make this really easy to get to. To do the same thing in Linux, you type cd /proc).

/root/ ... This is the home directory for the main system administrator account. In Linux, the administrator is named root (because they have access to the whole filesystem from the root down). Root shouldn't represent an individual; it is an account used during administrative tasks via utilities such as sudo or su. Thus, the administrator will also have their own personal account.

/sbin/ ... This folder is similar to /bin/ but contains utilities specifically for tasks restricted to the superuser (i.e. root, the system administrator). See also /usr/sbin, and /usr/local/sbin.

/tmp/ ... Temporary files. This is similar to C:\TEMP, C:\TMP, or "C:\Documents and Settings\UserName\Local Settings\Temp" on a Windows system.

/usr/ ... This contains "user" files, meaning non-system files (the system should be able to boot without them). It contains another bin, lib, and sbin with meanings like their root-level counterparts, except that the files here are non-essential. It also contains include (standard include files) and src (kernel source code), which are useful for developers; X11R6, where the graphical "X Window System" resides; and local, which holds another set of bin, include, lib, sbin, share, and src that are considered specific to this single host (machine). I should point out that /usr is limited to read-only data; host-specific (machine-specific) data is stored in /usr/local rather than directly in /usr.

/var/ ... Variable files. This includes logs, databases, websites, and temporary email files. There is another tmp folder in here, which is preferred over /tmp when the system is in multi-user mode.

I will soon post a follow-up article on Filesystem Permissions in Linux.

FTP Server on Ubuntu

I have a couple of clients who will be uploading data packets to Pericles on a daily basis, so I decided to install vsftpd.

Using Synaptic Package Manager, I installed the latest version. The configuration file can be found and edited (as root, so use sudo) at /etc/vsftpd.conf, but since FTP settings are so diverse depending on your needs, I am not going to go into the details of the conf file here, except to say that you should read and carefully select your options. I decided to set chroot_local_user=YES for security reasons, so that my authenticated users cannot browse files outside of their home directory, and I set pasv_enable=YES, pasv_min_port=62000, and pasv_max_port=64000 in order to match the iptables firewall restrictions I had previously enabled.
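For reference, the lines I set in /etc/vsftpd.conf:

chroot_local_user=YES
pasv_enable=YES
pasv_min_port=62000
pasv_max_port=64000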

NOTE: If you list pasv_min_port but forget pasv_enable=YES, vsftpd might give you the very unhelpful error message: "unrecognised variable in config file"

Since I have a few users who are allowed to do maintenance on their own websites, I need to grant them access to specific folders under the system's /Web directory. I don't use ~/public_html, since I don't allow shell access to these users, and I don't want to have to visit their home directories to do maintenance or backups myself. My first thought was to use a symbolic link, but since these users are in a chroot jail, a symbolic link can't get out of the jail either. Hard links would work, except that you can only hard link a file, not a whole directory. It looks like mount --bind is the answer. You can test out this arrangement by performing the following at a shell prompt, replacing jeffd with your own username and /Web/sample with the actual path you are granting access to:

mkdir /home/jeffd/sample
sudo mount --bind /Web/sample /home/jeffd/sample


Now, list the files in the /home/jeffd/sample directory. You will see that it is equivalent to /Web/sample. To unmount:

sudo umount /home/jeffd/sample


Once you're satisfied with this arrangement, you can edit /etc/fstab (again, remember to use sudo) and append lines in the following format for each path you want to map:

/Web/sample  /home/jeffd/sample  none  bind 0 0


Now, reboot if you want to test it (sudo reboot). Voila!

Now, lest you think yourself clever and try chrooting from a shell prompt to test the restricted environment: it will not work. You would need a copy of /bin/bash and some of its required libraries from /lib (or /lib64, as the case may be) residing within the target root in order to do so. Since you aren't going to be allowing shell access into the chroot area, only FTP, this shouldn't be a problem, so don't bother trying it. If you DO try it and you've only copied /bin/bash, you'll get a misleading error that /bin/bash can't be found, even if it's there, because it can't find the files it needs in /lib. If you DO copy /lib and get in, you won't be able to do anything fun anyway, because you won't have basic tools like ls, so just don't try this.

Once you're up, FTP in and verify your chroot setup by trying to cd to the root or elsewhere that you are not allowed to go, and make sure your mount bind works by putting some files there.

Firewall on Ubuntu using iptables

I decided to start by adding a firewall to Pericles, and a little searching revealed that iptables is exactly what I need for the very simple setup I am planning to run. Even if you run a separate firewall or router as a gateway, it may not be a bad idea to install iptables on your machine as well, so that you keep full control over what goes in and out in the event that a guest machine is ever connected to the network inside the firewall.

The Ubuntu server distribution came with iptables preinstalled; I just had to create scripts to set up the firewall and have them start automatically when the machine boots.

I started here:

Easy Firewall Generator for IPTables

I generated a simple script, enabling SSH, DNS, Web Server, and a couple of other services I use on the first Ethernet interface (eth0), copied and pasted it into an editor (running under sudo) and modified it slightly:

I searched and found the line for the HTTP service:

$IPT -A tcp_inbound -p TCP -s 0/0 --destination-port 80 -j ACCEPT

I copied and pasted this and changed the port number to a few other ports I need open for specialized purposes. (Since I do more than just basic web hosting, I have clients using custom software that connect to specific ports.)
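For example, a duplicated line opened up for a hypothetical service on port 3000 reads:

$IPT -A tcp_inbound -p TCP -s 0/0 --destination-port 3000 -j ACCEPT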

If you need to open a range, use something like 3000:3010 in place of the 80 in the above line.

Try to open as few ports as possible. That's kind of the point of a firewall.

Also, search for "ping", and you'll find a note on a line you can uncomment to allow pinging of your server. I prefer to allow pinging; you may choose not to. If you want pinging, uncomment it so it looks like this:

$IPT -A icmp_packets -p ICMP -s 0/0 --icmp-type 8 -j ACCEPT

Now save the finished file as /etc/init.d/iptables (which did not exist when I started).

Set the permissions so that it matches the rest of the files in /etc/init.d:
sudo chmod 755 /etc/init.d/iptables

Please test your firewall by running sudo ./iptables start from the shell prompt. Remember, it won't close any connections that are already open, so try opening a second SSH session or whatnot to verify that you can still access your box before deciding to make this firewall permanent. I recommend leaving some distinguishable port closed so you can verify that iptables is working. For example, I disabled ICMP ping, and when I pinged the box and saw Request timed out, I knew that my firewall was working; then I edited the iptables script to enable pinging again.

Once you are satisfied that it is working according to your desires, you need to add iptables to the list of daemons to automatically start for the various runlevels when your machine is booted up:
sudo update-rc.d iptables defaults

Finally, reboot your system and make sure the firewall comes up:
sudo reboot

WAMP to LAMP

I'm starting a series to document my progress on converting a very specialized WAMP server over to a LAMP server.

WAMP = Windows, Apache, MySQL, PHP
LAMP = Linux, etc.

I am going to name this box Pericles (mostly for the sake of giving it a tag in the blog, so that you can read all about its life by clicking Pericles here or on the sidebar).

To begin with, I want to explain why I was using a WAMP server in the first place. Most people go either all Microsoft or all Open Source. This server was born to fulfill one pressing need: a friend of mine had a website already written in ASP that he needed a new hosting provider for, and neither of us had time to deal with anything as serious as a rewrite at that time. So we found a neat plug-in for Apache on Windows that allowed it to execute ASP code. This worked, and our small web-hosting business was born.

Fast-forward two years.

We now host around twenty websites on this box, and the need for ASP is gone because the original site has been rewritten in PHP. Furthermore, the specs on that box were outdated when we started, and it is time for a faster CPU. We bought a new system at the Day After Thanksgiving sale and we are now ready to go Open Source.

We've installed Ubuntu 6.10 (Edgy Eft) from the Server disc, and then added the ubuntu-desktop package manually to get a GUI desktop. We need the GUI desktop for a couple of reasons: 1) We're new at administering Linux, and it helps us feel a little more confident. 2) I've developed a few tools over the past two years that handle frequent back-end tasks as Windows applications. I don't have time to port them all at once, so I am going to do it piece by piece and run the unported tools under Wine in the meantime.
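If you're doing the same, the desktop can be added on top of a server install with a single metapackage:

sudo apt-get install ubuntu-desktop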

Once I got Edgy booting smoothly and configured for our graphics card (it required a resolution tweak using dpkg-reconfigure xserver-xorg to look correct on our LCD), we had our official starting point.

To follow our ongoing drama, click on the Pericles category/label in the sidebar.

Wednesday, November 29, 2006

Welcome to Open Computing

We're here to advocate the use of Open Source and Free Software. Our goals are to:
  1. Help by sharing details of our experiences as we try to use Open Software to accomplish various real life tasks.
  2. Educate people about Open Source Software and what it can do for them.
  3. Contribute to Open Source projects by identifying and reporting bugs and spending time on various Open Source projects, including the possibility of aiding in development.
By doing these things we can help to make a difference in people's lives.