[ This note was written by Andrew Molitor. ]

	Using compression is not something you want to take lightly.
It's useful if you're short on disk space, and it helps slightly with memory
usage as well, but it costs a little performance. Not a lot: there is
some question whether the users can feel it at all on the dinky little
Sun 3/60 I run my MUSH on, so it's likely that nobody will even notice
on a normal machine. Benchmarks indicate about a 5% performance hit. The
larger issue is that it's a bit of a hassle to set up.

	See portability notes below if you have trouble making the analysis
tools work.

Step 1
------

	You need to produce a compression table. One is supplied in
compresstab.h.dump; it won't provide any compression, but it will get you
started if you have no database to analyse yet. The radix tree compression
library works by converting common substrings into 12-bit codes, so it needs
to know what your common substrings are. You'll need a flat dump of your
database, or at least the ability to produce one. In the radixlib directory,
do a 'make a', which builds the binary part of the tools used to analyse your
flat dump. Then run your database through the analyse.sh shell script,
like this:

cat my_db.flat | analyse.sh

	or possibly these variations:

db_unload my_game.gdbm | analyse.sh
zcat my_db.flat.Z | analyse.sh

	This will a) take a while and b) produce a file called compresstab.h.
You can look at it if you like; it's just a big C structure holding 4000-odd
of the most common strings in your database. If you have access to a big
machine for one-off jobs, I highly recommend using that machine to build
your compression table.
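To get an intuition for what the analysis is doing, here is a toy version of the idea in shell: count how often each token occurs and list the most common ones first. The real analyse.sh works on substrings of your flat dump and emits a C table rather than this word count, but the frequency-sort pipeline is the same shape.

```shell
# Toy frequency analysis: most common tokens first.  analyse.sh does
# something similar over substrings of your flat dump.
printf 'foo bar foo baz foo bar\n' | tr ' ' '\n' | sort | uniq -c | sort -rn
```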

Step 2
------

	Build the compression library. Just do a 'make libcompress.a' in
the radixlib subdirectory.

Step 3
------

	Rebuild your server. Add '-DIN_MEM_COMPRESSION' to your DEFS and add
'-L./radixlib -lcompress' to the LIBS in the Makefile, and rebuild netmush
from scratch. Make damn sure you have a copy of dbconvert that does not use
compression. Use it to get a flat dump of your uncompressed DB before moving
on.
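The Makefile edits amount to something like the following; the existing values of DEFS and LIBS in your tree will have other entries, so these lines only illustrate where the two additions go.

```make
# Sketch only: your real DEFS and LIBS will already contain other flags.
DEFS = -DIN_MEM_COMPRESSION
LIBS = -L./radixlib -lcompress
```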

Step 4
------

	Reload your database compressed. Rebuild dbconvert with
-DIN_MEM_COMPRESSION and the modified libraries; it will now load/unload
into/out of a compressed gdbm database. Its flat dump format is still
uncompressed, however. Use db_load or whatever your poison is to load up the
gdbm database from the flat dump. You're ready to roll!
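Put together, the reload might look roughly like this. The filenames are illustrative, and the exact arguments db_load takes vary by installation, so check the version in your scripts/ directory before running anything like it.

```shell
# Illustrative only -- verify db_load's real argument order in scripts/.
make dbconvert                    # now built with -DIN_MEM_COMPRESSION
db_load my_game.gdbm my_db.flat   # load the flat dump into a compressed gdbm db
```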


Portability Notes
-----------------

	analyse.sh has a definition for AWK in it; if your system doesn't
have nawk, try using plain awk. On my Linux box, awk is a link to gawk
which, while it is modern enough, is staggeringly slow. It would take days
to grind through a substantial database. If your mileage does not vary,
try using a non-free version of awk.
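The change is a one-line edit to analyse.sh; the exact variable name there may differ, but it looks something like the first line below, and the echo gives a quick check that the awk you picked actually works.

```shell
# In analyse.sh, point AWK at whatever awk you have (nawk, mawk, awk):
AWK=awk

# Quick smoke test of the chosen awk:
echo 'one two three' | $AWK '{ print $2 }'
```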

	If you don't have enough temp space to do the sort (and it is
a big sort), try adding a '-T /tmp' or '-T /usr/tmp' or whatever temp
directory has lots and lots of space.
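The flag goes on the sort invocation inside analyse.sh; standalone, it behaves like this, with any directory that has room serving as the -T argument.

```shell
# -T redirects sort's temporary spill files; /tmp here is just an example.
printf 'pear\napple\nmango\n' | sort -T /tmp
```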