Feb 27 2010
 

One of the great things about OpenSolaris is that the archaic packaging tools have been replaced with something that looks like it may be a little better; one of the disadvantages is that trying to install packages from something like OpenCSW is a little awkward when the first command fails.

Given that I’ve just had to hunt around for the details a second time, it is worth working up the basics into something that can be added here. Firstly we need to install the commands necessary to support the old packages :-

pkg install SUNWpkgcmds
pkg install SUNWwget

Now that has been done, it should be possible to install the OpenCSW package command using pkgadd :-

pkgadd -d http://www.opencsw.org/pkg_get.pkg
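
Once that is in place, OpenCSW packages can be installed with pkg-get; a minimal sketch (the package name is just an example, and pkg-get normally ends up in /opt/csw/bin) :-

/opt/csw/bin/pkg-get -i apache2_devel
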
Jan 07 2010
 

For various reasons I have decided that I need to install mod_security2 on my personal web server. This is a Solaris zone running on an OpenSolaris global zone with various bits of software provisioned by OpenCSW. Unfortunately (or fortunately at least from the point of view that I get to do something interesting), mod_security2 is not something provided by OpenCSW.

For even more various reasons, I decided to “formalise” my notes on building, installing, and configuring mod_security2.

Before attempting to build mod_security2, it is important to have a functional build environment. This includes :-

  • Installing the apache2_devel package from OpenCSW (pkg-get -i apache2_devel)
  • Installing the gcc3 package from OpenCSW
  • Installing the following OpenSolaris packages (pkg install XXX) :- SUNWhea, SUNWarc, SUNWbtool
  • Installing the SunStudio package from Sun. It may be that gcc3 is not necessary with this installed, but I ended up with both, so I advise you to do the same. In addition to installing it in the standard location (/opt/SUNWspro), it is also necessary to create a symlink in the place where the OpenCSW developer placed his/her copy of SunStudio (the non-interactive commands from this list are collected in the sketch below) :- mkdir -p /opt/studio/SOS11; ln -s /opt/SUNWspro /opt/studio/SOS11/SUNWspro
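
Collected together, the non-interactive parts of the above (everything except the SunStudio installer itself) amount to something like :-

pkg-get -i apache2_devel
pkg-get -i gcc3
pkg install SUNWhea SUNWarc SUNWbtool
mkdir -p /opt/studio/SOS11
ln -s /opt/SUNWspro /opt/studio/SOS11/SUNWspro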

The next step is to set up a shell environment appropriate to configuring and compiling mod_security2 :-

export PATH=$PATH:/opt/SUNWspro/bin
export PATH=$PATH:/opt/csw/bin
export PATH=$PATH:/usr/ccs/bin
export PATH=$PATH:/opt/csw/gcc3/bin
export CC=gcc

(The above presumes the use of a shell that understands the above syntax)
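
If you happen to prefer csh or tcsh, the rough (untested) equivalent would be :-

set path = ( $path /opt/SUNWspro/bin /opt/csw/bin /usr/ccs/bin /opt/csw/gcc3/bin )
setenv CC gcc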

The next step is to unpack the module source code, and configure it  :-

cd /var/tmp
gunzip -c modsecurity-apache_2.5.11.tar.gz | tar xvf -
cd modsecurity-apache_2.5.11
cd apache2
./configure --with-apxs=/opt/csw/apache2/sbin/apxs \
   --with-pcre=/opt/csw \
   --with-apr=/opt/csw/apache2 \
   --with-apu=/opt/csw/apache2//bin/apu-config

That should successfully generate a Makefile. Edit this Makefile and remove all references to “-Wall” (for APXS_EXTRA_CFLAGS, also remove the preceding “-Wc,”). This is because the module will be compiled with SunStudio’s compiler no matter what we try to do to stop it, and SunStudio does not understand “-Wall”.
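
If you would rather not edit the Makefile by hand, something along these lines should strip the offending flags (a sketch – check the result before running make) :-

sed -e 's/-Wc,-Wall//g' -e 's/-Wall//g' Makefile > Makefile.fixed && mv Makefile.fixed Makefile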

Now finally you can compile the software :-

make
sudo make install

Now we are at the point where we can start configuring mod_security2.

In the main httpd.conf file, add the following two directives somewhere appropriate (i.e. close to the other “LoadModule” directives) :-

LoadFile /opt/csw/lib/libxml2.so
#   Check that this library is installed!
LoadModule unique_id_module libexec/mod_unique_id.so
#   This will be already in the file but may be commented out
LoadModule security2_module libexec/mod_security2.so
#   And this is the one we're interested in.

At this point, try a graceful restart (/opt/csw/apache2/sbin/apachectl graceful) to be sure that the relevant code loads. Now onto enabling the module and configuring it with the “Core Rule Set” …

First copy the rules subdirectory to an appropriate place and fix the permissions :-

cp -rp rules /opt/csw/apache2/etc/modsecurity
chown -R root:root /opt/csw/apache2/etc/modsecurity
chmod -R o+r /opt/csw/apache2/etc/modsecurity
find /opt/csw/apache2/etc/modsecurity -type d -exec chmod o+x {} \;

In the file modsecurity/modsecurity_crs_10_global_config.conf, change SecDataDir to /var/tmp.

In the file modsecurity/modsecurity_crs_10_config.conf (the end result of these edits is shown below) :-

  1. Change SecAuditLog to var/log/modsec_audit.log
  2. Change SecDebugLog to var/log/modsec_debug.log
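
After those edits the relevant directives in the two files should read along these lines :-

# in modsecurity_crs_10_global_config.conf
SecDataDir /var/tmp
# in modsecurity_crs_10_config.conf
SecAuditLog var/log/modsec_audit.log
SecDebugLog var/log/modsec_debug.log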

Now add the following to httpd.conf :-

Include etc/modsecurity/modsecurity_crs_10_global_config.conf
Include etc/modsecurity/modsecurity_crs_10_config.conf
Include etc/modsecurity/base_rules/*conf

And gracefully restart Apache.

At this point, mod_security2 is running and blocking stuff, but has not been “tweaked” to suit the local applications – at the very least it partially breaks WordPress, and may well break other applications.

Oct 27 2009
 

Whether you are using UFS filesystems or ZFS storage pools, Solaris has a rather nifty way of migrating storage from one SAN to another with little or no downtime – or indeed moving data from one disk to another for various other reasons. The key advantage of the following method is that it reduces or eliminates downtime. Even if your users can take the hit, not having to slowly watch a multi-terabyte filesystem copy from one disk to another is reason enough to use this technique.

Basically it works by using mirroring. Using mirroring to copy a disk might seem a little odd to begin with, but once you’ve seen it work you’ll be a fan.

For UFS (and SVM) Filesystems

This section assumes that the source disk device (cXXXXX) is set in the variable ${sourcedisk} and the destination is in ${destdisk}.

For UFS filesystems, the first stage (which does require an outage) is to :-

  1. Stop the application that uses the filesystem being migrated.
  2. Unmount the filesystem.
  3. Encapsulate the existing filesystem device into a SVM metadevice: metainit d1001 1 1 ${sourcedisk}
  4. Create a mirror device with the new metadevice as a submirror: metainit d1000 -m d1001
  5. Change the references in /etc/vfstab from the old device name (${sourcedisk}) to the new mirror (not sub-mirror!) device – d1000
  6. Remount the filesystem and restart the application.

This should take no more than 10 minutes and is the only outage involved. There are two remaining sets of steps :-

  1. Create a new metadevice using the new disk: metainit d1002 1 1 ${destdisk}
  2. Attach the new metadevice to the mirror as an additional sub-mirror: metattach d1000 d1002

At this point, the mirror will start resilvering. It may take some time to complete, but the time it takes to do so does not really matter. In particular the resilvering process should not cause a performance problem to your application – the application I/O takes priority.

When the resilvering is complete :-

  1. Remove the metadevice containing the old SAN disk: metadetach d1000 d1001
  2. Remove the metadevice that is no longer required: metaclear d1001
  3. Attach “nothing” to the mirror metadevice (this is to ensure that the mirror grows to the size of the new submirror): metattach d1000
  4. Finally, ignore the warning on the manual page (which is outdated) and grow the filesystem: growfs -M /mount/point /dev/md/rdsk/d1000

You will see that I have used the metadevice names d1000 (for the mirror), d1001 (for the old sub-mirror), and d1002 (for the new submirror). Whatever device names you use, it is worth trying to be consistent – it helps a lot when you have dozens of filesystems to process.
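
Put together, the whole UFS/SVM sequence looks roughly like this (the device names are purely illustrative – substitute your own – and metastat is used to watch the resilver) :-

sourcedisk=c1t0d0s0       # old SAN device (example name only)
destdisk=c2t0d0s0         # new SAN device (example name only)
# outage: stop the application and unmount the filesystem first
metainit d1001 1 1 ${sourcedisk}
metainit d1000 -m d1001
# edit /etc/vfstab to mount /dev/md/dsk/d1000, then remount and restart the application
metainit d1002 1 1 ${destdisk}
metattach d1000 d1002
metastat d1000            # repeat until the resilver has finished
metadetach d1000 d1001
metaclear d1001
metattach d1000
growfs -M /mount/point /dev/md/rdsk/d1000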

ZFS Storage Pools

This is even simpler. If you have a storage pool called ${pool} which contains a single device called ${sourcedisk}, you simply (see the sketch after this list) :-

  1. Attach the new device: zpool attach ${pool} ${sourcedisk} ${destdisk}
  2. Wait for the resilvering to finish.
  3. Detach the old device: zpool detach ${pool} ${sourcedisk}
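
As a sketch, the same sequence with zpool status used to watch the resilver :-

zpool attach ${pool} ${sourcedisk} ${destdisk}
zpool status ${pool}      # repeat until the resilver is reported as complete
zpool detach ${pool} ${sourcedisk}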

Of course, beware of anything you read on the Internet! I have not actually tested the above; I’m merely regurgitating memory that has recently been exercised – I’m doing a SAN migration at work right now.

Oct 03 2009
 

Yesterday I went through the process of creating a ZFS storage pool with a single device :-

zpool create zt1 cXXXXX

Next adding an additional device to mirror the first :-

zpool attach zt1 cXXXXX cYYYYY

Watched it resilver, and then detached the first replica reducing the number of replicas to one :-

zpool detach zt1 cXXXXX

This is one of the nicest ways possible to migrate a large dataset from one set of devices to another (say replacing a SAN). However the documentation on Sun’s manual page for zpool is just a little vague in the relevant area and does not explicitly say that a single replica is a perfectly valid configuration.

This might all seem a little obvious, but removing a replica to reduce a storage pool to a pool without a mirror (no redundancy) is something that some volume managers don’t allow.

Jul 27 2009
 

I am a big fan of ‘self-documenting’ systems where the system has enough ‘comments’ to describe how it is configured and what things are doing. Unfortunately Solaris zones (or containers if you are so inclined to use the marketing name) lack one feature that would assist this :-

# zoneadm list -d
global
black                  Stealth Secondary DNS
grey                   Webserver for project X
white                  Mailbox server for project Y
blue                   Oracle DBMS for project X
puce                   MySQL DBMS for project X

It would seem that project Y hasn’t gotten beyond the talking stage 🙂

Yes, you’ve guessed it. Solaris zones could do with a “description” attribute to assist in documentation.

Feb 25 2009
 

Traditionally I have always mounted just the filesystems I needed in single user mode whilst tinkering in Solaris. Turns out this is a dumb method for ZFS filesystems.

What happens is that the zfs mount command will create any directories necessary to mount the filesystem. Later, those newly created directories can stop other ZFS filesystems from mounting when the tinkering is finished, because ZFS will refuse to mount a filesystem over a non-empty directory. This could be an argument for not creating hierarchies of filesystems, but that’s rather extreme.

The better solution is to mount all the ZFS filesystems in one go with :-

zfs mount -a
Jan 08 2009
 

If you think that you can use a ZFS volume as a Solaris LVM metadevice database, you will be wrong. Whilst it works initially, the LVM subsystem is initialised before ZFS at boot, so it cannot find the databases. Whilst this may seem to be a perverse configuration, at least one administrator has tried it – being me!

Dec 05 2008
 

I recently encountered a dead blog entitled “Linux Haters” and instantly thought of writing about tedious fan-boys who think that the operating system they like is the best and that everyone should use it. I’ve no time for people like that as they tend to annoy rather than educate. I’ve no problem with people who prefer to use Windows, Linux, Solaris or OSX; it is their choice. Of course in the case of Windows, I do have to wonder why 🙂

But one of the links on that blog led to a place that (amongst other things) ranted about how FOSS projects always have dumb names, and that these projects need a big dose of marketing intelligence. He went on to whinge about the word-games often embedded into the project name.

First of all, he misunderstands how many open source projects start – with a geek or a group of geeks deciding they want something different. Either a new package or a variation on an existing one. There are no marketing types in sight, and the geeks involved probably have no great expectation that they are coming up with the next big thing – they are just having fun and hoping to come up with something useful for themselves. So what if they have a bit of fun playing word games to come up with a name for their project? Many such projects end up disappearing without a trace anyway, and if marketing types are allowed to have fun playing with words, why can’t geeks?

Perhaps the names they come up with are not as punchy as a name thought up by a marketing department, but weirdness does have its own value in this area. A name such as Amarok does tend to stick in the mind more than Music Player 52. And over time, formerly weird names such as google and yahoo do tend to become more normal if they are attached to popular projects.

Secondly he criticises names invented by geeks for being recursive acronyms … but does that matter? He specifically names GIMP, which is admittedly particularly guilty, being a recursive acronym with no termination. But most users won’t care … once they learn that GIMP does images (and most distributions will tell you so in the menu), they are not going to care that the name is an infinitely recursive acronym … they will just get on and use it.

Thirdly he overlooks the fact that some names which look odd in English are in fact perfectly sensible names in non-English languages.

Finally he tails off into a moderately incoherent rant with more insults than proper facts.

Perhaps “funny” names do put people off, but perhaps not. Most people are in fact more concerned with compatibility (they use Word because everyone else does) or features.

And of course there are more than a few commercial software packages whose name is not entirely sensible … does Photoshop have anything to do with setting up a shop to sell photos? What does Trent do ? Or Cedar ?

Nov 07 2007
 

I have been spending some time looking up information on ZFS for OSX because I’ve used ZFS under Solaris and would quite like it on my new Macbook. In many of the places I looked, there were tons of comments wondering why ZFS would be of any use for ordinary users. Oddly, the responders kept pointing out features that are more useful for servers than workstations, and the doubters were responding with “So?”.

This is perhaps understandable because most of the information out there is for Solaris ZFS and tends to concentrate on the advantages for the server (and the server administrator). This is perhaps unfortunate because I can see plenty of advantages for ordinary users.

I will go through some of the advantages of ZFS that may work for ordinary users. In some cases I will give examples using a command-line. Apple will undoubtedly come up with a GUI for doing much of this, but I don’t have access to that version of OSX and the command-line still works.

ZFS Checks Writes

Unlike most conventional filesystems, ZFS does not assume that hard disks are perfect and uses checks on the data it writes to ensure that what gets read back is what was written. As each “block” is written to disk, ZFS will also write a checksum; when reading a “block” ZFS will verify that the block read matches the checksum.

This has already been commented on by people using ZFS under Solaris as showing up problematic disks that were thought to be fine. Who wants to lose data ?

This checksum checking that zfs does will not protect from the most common forms of data loss … hard disk failures or accidentally removing files. But it does protect against silent data corruption. As someone who has seen this personally, I can tell you it is more than a little scary with mysterious problems becoming more and more common. Protecting against this is probably the biggest feature of ZFS although it is not something that is immediately obvious.
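
Incidentally, under Solaris you can ask ZFS to verify every checksum in a pool (and report any errors it finds) with a scrub; a small sketch, assuming a pool named zpool0 :-

zpool scrub zpool0
zpool status -v zpool0    # shows scrub progress and any checksum errors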

ZFS Filesystems Are Easy To Create

So easy in fact that it frequently makes sense to create a filesystem where in the past we would create a directory. Why? So that it is very easy and quick to see who or what is using all that disk space that got eaten up since last week.

Let’s assume you currently have a directory structure like :-

/Users/mike
/Users/john
/Users/stuart
/Users/stuart/music
/Users/stuart/photos
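
Those directories could themselves have been created as ZFS filesystems with something like the following sketch (the pool name zpool0 and the mountpoint setting are assumptions, chosen to match the listing below) :-

zfs create zpool0/Users
zfs set mountpoint=/Users zpool0/Users
zfs create zpool0/Users/mike
zfs create zpool0/Users/john
zfs create zpool0/Users/stuart
zfs create zpool0/Users/stuart/music
zfs create zpool0/Users/stuart/photos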

If those directories were ZFS filesystems you could instantly see how much disk space is in use for each with the command zfs list

% zfs list
NAME                                 USED   AVAIL   REFER   MOUNTPOINT
zpool0                               3.92G  23G     3.91M   /zpool0
zpool0/Users/mike                    112M   23G     112M    /Users/mike
zpool0/Users/john                    919M   23G     919M    /Users/john
zpool0/Users/stuart                  309M   23G     309M    /Users/stuart
zpool0/Users/stuart/music            78G    23G     78G     /Users/stuart/music
zpool0/Users/stuart/photos           12G    23G     12G     /Users/stuart/photos

With one very simple (and quick) command you can see that Stuart is using the most space in his ‘music’ folder … perhaps he has discovered Bittorrent! The equivalent for a series of directories on a normal filesystem can take a long time to complete.

With any luck Apple will modify the Finder so that alongside the option to create a new folder is a new option to “create a new folder as a ZFS filesystem” (or something more user-friendly).

It may seem silly to have many filesystems when we are used to filesystems that are fixed in size (or are adjustable but in limited ways), but zfs filesystems are allocated out of a common storage pool and grow and shrink as required.

ZFS Supports Snapshots

Heard of “Time Machine” ? Nifty isn’t it ?

Well ZFS snapshots do the same thing … only better. Time Machine is pretty much limited to an external hard disk which is all very well if you happen to have one with you, but not much use when you only have a single disk. ZFS snapshots work “in place” and are instantaneous. In addition you can create a snapshot when you want to … for instance just before starting to revise a large document so that if everything goes wrong you can quickly revert.
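
Creating one is a single command; a sketch, using the zpool0/Users/stuart filesystem from the earlier listing and an arbitrary snapshot name :-

zfs snapshot zpool0/Users/stuart@before-revision
zfs list -t snapshot      # list the snapshots you have taken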

Time Machine has one little disadvantage … if you modify a very large file, it will need to duplicate the entire file multiple times. For instance if you have a 1Gbyte video that you are editing over multiple days, Time Machine will store the entire video every time it ‘checkpoints’ the filesystem. This can add up pretty quickly, and could be a problem if you work on very large files. ZFS snapshots store only the changes to the file (although an application can accidentally ‘break’ this), making it far more space efficient.

One thing that zfs snapshots does not do that Time Machine does, is to ensure you have a backup of your data on an external hard disk. The zfs equivalent is the zfs send command which sends a zfs snapshot “somewhere”. The somewhere could be to a zfs storage pool on an external hard disk, to a zfs pool on a remote server somewhere (for instance an external hard disk attached to your Mac at work to give you offsite backups), or even to a storage server that does not understand ZFS! And yes you can send “incrementals” in much the same way too.

Currently using zfs send (and the opposite zfs receive) requires inscrutable Unix commands, but somebody will soon come up with a friendlier way of doing it. Oh! It seems they already have!
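
For the curious, the inscrutable commands look something like this (the host and pool names are made up for the example) :-

# full copy of a snapshot to a pool on another machine
zfs send zpool0/Users/stuart@monday | ssh backuphost zfs receive backuppool/stuart
# incremental: send only the changes between two snapshots
zfs send -i zpool0/Users/stuart@monday zpool0/Users/stuart@tuesday | ssh backuphost zfs receive backuppool/stuart
# or save the stream as a plain file on any server at all
zfs send zpool0/Users/stuart@monday > /backups/stuart-monday.zfs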

Unfortunately I’ve found out that using ZFS with Leopard is currently (10.5.0) pretty difficult … the beta code for ZFS is hard to get hold of, and may not be too reliable. Funnily enough this mirrors what happened when Solaris 10 first came out … ZFS was not ready until the first update of Solaris 10!

Unfortunately it seems that Apple have retreated from using ZFS in OSX, which is a great shame, and until they come up with something better we are stuck with HFS+, which means not only do we lack the features of a modern filesystem, but we are also stuck with slow fsck times. Ever wonder why that blue screen when a Mac is starting sometimes takes much longer? The chances are that a filesystem is being checked – something that isn’t necessary with a modern filesystem.
