Game Balance and Formulae

Symposium 21/2/2000


Introduction
======================================================================

When you start out adding a new feature, spell or skill, there is often a
value that you need to derive from a number of factors.  Often you know what
you want, and you can say approximately what values you want to get for a
certain set of conditions, but you just can't find the right formula to
relate them all together.

This document outlines the methods that I use for obtaining a fine balance
between elements of the game through the complex formulae that relate certain
elements of the game to a probability or a value.

This information is not strict directions, rather it should be used as a
guide and a description of possible methods.  If you are intending to tweak
game functions and parameters then this file might help fine tune those
formulae.

This file makes a number of references to DalekenMUD in its examples, so a
few of the parameters mentioned may not exist on your own mud
implementation.  The core ideas of the document remain the same, however:
ignore those specifics and take the concept, not the actual formulae.


This Document
---

This contains information that is at times a little too mathematical.  The
intention is to give information on all the mathematics that can be used.  A
certain degree of understanding of mathematics is assumed, but sections that
require a little more knowledge can probably be safely ignored.

The formulae in this document use C style syntax for operators, except for
the power operator '^', so 'x' squared becomes: x ^ 2


Beginning - Defining the Problem
======================================================================

Whether you are creating a new skill or spell with no idea how much damage
it should deal, defining a probability for adding an enchantment, or doing a
complete overhaul of the to hit/armour class balance, there are a number of
things you should consider before you start.

--> First write down what the outcome should be.  Is it a probability?  Is it
a damage value for the skill/spell?  Is it a number of hitpoints you should
give to a mobile?

    e.g. fireball spell: damage outcome.

--> Next write down all the things that you think should affect the final
result.

    e.g. fireball spell: 
         spell level, caster's intelligence, caster's fire magic skill,
	 victim's fire magic skill, victim's magic resistance.

--> Establish which of the values are most significant.  Rank your options
loosely and give them some token weighting.

    e.g. fireball spell:
	 spell level,
	 caster's intelligence,
	 caster's fire magic - victim's fire magic,
	 victim's magic resistance.

  Some things to remember:

    * Parameters on the same scale (such as the fire magic skills in the
      example) can be directly added or subtracted to make a single value.

      NOTE: The problem here is that they MUST be on the same scale.  Often
      different values may appear to be on a similar scale but in actual fact
      extreme values are not on the right scale, causing these methods to
      break.

    * Some parameters may rely somewhat on the value of other parameters,
      keep this in mind and you may be able to eliminate a parameter from
      the equation.

    * Some parameters may not be directly related to others but there may be
      some effect.  If this is the case, try to keep the default values
      reasonable.

	e.g. If you increase level, armour class is likely to drop also.  So
	     make sure that for higher levels you use a reasonable armour
	     class.

    * Remember that some values are absolute and others are relative to
      another value, sometimes the same value in one context can be relative
      and in others it is absolute.  Keep this in mind when you are looking
      at a value.

 	e.g. In the fireball spell, level is absolute.  On the other hand,
	     when it comes to working out save vs. spell, level is relative
	     to the victim's level.

--> Now write down a few samples with 'guessed values': fill in a few simple
parameters and guess at what you think the result should be.  It is often a
good idea to pick the most important parameter and vary it over its
possible range (only 3-5 values are needed), leaving all the other values at
some standard value.  These values will serve as a guide later on so you can
tweak your formula.

--> Choose a method, or combination of methods, to scale, twist, add, divide
and warp the values you have into the value you want.  Remember that any of
the following methods can be combined for maximum effect; arranging them is
left to the reader.


Summing Parameters
======================================================================

Sometimes a particular parameter suggests a different shape to another
parameter.  In this case, the easiest choice is to take each parameter and
formulate a function for it.  Then take the result from this and all the
other parameters and add them together.  This can be used in combination with
other methods.

Note: probability scaling does not always benefit from adding to a result as
      it can result in a value outside the range of probability.
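As a sketch, the summing approach might look like this in C, with each
parameter shaped by its own small function.  All of the names and constants
here are invented purely for illustration, not taken from any real mud code:

```c
#include <stdio.h>

/* Hypothetical damage built by summing two independently shaped terms:
 * a straight line in level, and a gentle curve in intelligence.
 * Every constant is a guess to be tweaked against your sample values. */
static int level_term( int level )
{
    return level * 4;                 /* linear in level */
}

static int intell_term( int intell )
{
    return intell * intell / 10;      /* slight curve for intelligence */
}

int sum_damage( int level, int intell )
{
    return level_term( level ) + intell_term( intell );
}
```

Because each term has its own function, each can be reshaped later without
disturbing the others.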


Base Value + Scaling
======================================================================

A very common option with spell damage and similar values where there are no
strict upper bounds on the value (capping doesn't count).  A base value is
determined using one or two of the major parameters and the others are used
to apply a scaling factor to this base number.

The best example of this is the fireball spell.  Here the most significant
value by far is spell level, so this is used to obtain a base value.

  e.g. base = level * 5;

This is oversimplified (I will cover curve fitting later) but serves for this
example.  From here a number of scaling factors are used to get a final value
based on all the required parameters.


Probabilities
---

Probabilities are different from the values that have been dealt with so far
in that they have a definite limit on both ends of the scale.  This means
that applying a scaling factor using any of the methods above can give a
value outside the range, which may not be desired.  This leads into the next
section which describes how to combine all your values into a single
coefficient which is mapped via a single function to obtain an outcome.

A better method of fitting probabilities is dealt with in the next section.


Linear Scaling
---

This is by far the easiest method of applying a scaling factor, the base
value is simply multiplied or divided by the value of the parameter and then
normalised by dividing (or multiplying) by an 'average' value.

  e.g.	scaled = base * intelligence / 15;

This assumes a value of 15 is a reasonable average value for intelligence.
It is important to keep an average value in mind here, as the scaling
shouldn't change the base value when the scaling factor is equal to the
average.  Of course the base value can be changed to accommodate this scaling
and integer truncation, but for now it is best not to think of those things.

Here the victim's intelligence could also be factored in:

  e.g.	scaled = base * intelligence / vintell;

(Note how this linear scaling can be applied twice on the same line, this is
not a whole lot different to having two different scalings.)

This method however presents some pretty wild results (refer to the section
on unusual scaling methods, under hyperbolas) and it might be safer to use a
more reasonable method to factor in a negative influence:

  e.g.	scaled = base * intelligence * ( 30 - vintell ) / 225;

This introduces the idea of adding an offset to the values to smooth out
the changes.  If an offset value is added to each intelligence value, then
that value only affects a part of the whole:

  e.g.	scaled = base * ( 20 + intelligence ) * ( 50 - vintell ) / 1225;

Note: You should keep in mind the value that you expect the parameters to be,
      it helps to write down any new formula and then try out the result on
      extreme values.  Often a formula can be found to be ridiculous when a
      particularly high or low value is tested.

This method of continual refinement should be used for all of the
following scaling methods.  Often the correct pattern isn't found straight
away, and several attempts are made before even settling on the right method
of scaling.
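The offset linear scaling above can be written as a one-line helper.  This
is only a sketch using the constants from the example; with both
intelligences at the assumed average of 15 the factors give
35 * 35 / 1225 = 1, leaving the base untouched:

```c
/* Offset linear scaling from the text:
 * scaled = base * ( 20 + intelligence ) * ( 50 - vintell ) / 1225.
 * Integer truncation applies, as it would in typical mud code. */
int scale_linear( int base, int intelligence, int vintell )
{
    return base * ( 20 + intelligence ) * ( 50 - vintell ) / 1225;
}
```

Trying extreme values, as the note above suggests, is as simple as calling
this with intelligence 3 or 25 and checking the result is still sane.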


Quadratic and Quartic Scaling
---

This means applying a linear scaling more than once.  Using a scaling
factor with a power higher than one is rare, as this exaggerates the
differences away from the centre far too much, while a change in value near
the centre is barely noticed.  This is because the gradient of an N squared
function is zero about the origin.  This method of scaling is often
discarded as giving too much of an advantage to those at the extremes.


Powers less than One
---

This is the opposite of Quadratic and Quartic scaling: the power of the
scaling factor is reduced to a fractional value.  In mathematical terms this
gives the highest gradient about the origin, which makes differences about
the central point more noticeable.  The problem with this method is its
reliance on more complicated arithmetic, which can be quite processor
intensive.

*** Note that this method is only possible on values that may be zero but
    are never going to be negative.


Exponential Scaling
---

This method is used in a similar way to the linear scaling.  Find a central
point that serves as a reference and values to one side are scaled towards an
asymptote (a value that is not possible to reach, but every step brings you
marginally closer) and on the opposite side the values are increased or
decreased in greater and greater amounts.

The basic form for an exponential function is a little more complex than a
linear scaling but they can give a slightly nicer curve in some situations.

    scaled = asymptote + ( base - asymptote ) * ( 1 + ef ) ^ ( sf - ref )

    base - the unscaled value.

    asymptote - the limit that should not be reached, this may be zero in a
	  large number of situations.

    sf  - the scaling factor, the value used to scale, this is compared to
	  the value of ref, and for every point of difference the result will
	  change.

    ref - a reference value to match the scaling factor against.  This value
	  should be as close to the average or typical value as you can
	  manage as this will greatly reduce the distortion that using this
	  style of scaling can cause.
    
    ef  - exponential factor.  This value should be small, depending on the
	  range of possible values for 'sf'.  If this is positive then the
	  asymptote will be approached when 'sf' is on the negative side of
	  'ref', if it is negative then a large value of 'sf' will cause the
	  result to tend to the asymptote.

	- Note: this value can be any positive number, or any number down to
	  -0.99999, but as the value gets larger (or closer to -1) the rate
	  of increase of the result grows dramatically.  This means that
	  values of 'sf' even slightly different from 'ref' can produce
	  extremely large results.

	- To counter any problems ensure that this value is never too large.
	  Especially take note of the range of likely values of sf - ref and
	  reduce this value as this range increases.

	- Tip: in order to simplify things you can keep this value always
	  positive; to reverse its effect, simply swap 'sf' and 'ref' in
	  the formula.

	- Tip: A good rule of thumb for this value is to never exceed 0.1 for
	  values of sf - ref that are small (less than 10).  Values of about
	  0.002 should be used for a range in the 100s.  A value of 0.03
	  gives a pretty good starting point for values in the range -12 to
	  12.


Unusual Scaling Functions
---

There are also a few unusual scaling methods possible.  These can be
extremely useful in situations where there are strict constraints or a
precise shape is to be obtained.

--> Arc Tangent: 
    
    diff = ( scale_value - reference ) / stretch;
    result = base * ( 2 * atan( diff ) / M_PI + 1 );

        (M_PI - the value for PI = 3.1415926536...; in C use 'M_PI')

    Here the scaling factor (scale_value) is compared to a reference value,
    and about this value the result is scaled from 0 to 2 times its original
    value.  At a value of 'stretch' above the reference value the result is
    1.5 times the base and at 'stretch' below the reference the result is 0.5
    times the base.

    The atan function is extremely useful in these applications but the
    requirement for floating point arithmetic can be heavy on a processor if
    it has to be used too often.

--> Sinusoids:

    These are special cases and they can only be used in certain ranges due
    to their oscillating properties.  The use of these is rare but they do
    have certain advantages if used properly.

--> Hyperbolas:

    These functions have the scaling factor on the denominator (e.g. 1/x).
    These are incredibly risky to use due to the fact that at certain points
    they tend to be undefined, which can result in a SIGFPE or divide by zero
    and a complete crash.  However they can be used if care is taken to
    exclude the special cases under which a divide by zero can occur,
    normally by limiting the range over which they are used.

    The most common method of using a hyperbolic function is to take the
    inverse (1/x) of a value in order to gain the opposite effect of an
    increase in its value.

	e.g. On stock Diku muds any hitpoints above a current maximum stay
	     until they are removed forcibly.  It seems more appropriate
	     however to make them drain away slowly.  Thus find the amount
	     that would normally be regenerated and then if the current is
	     above maximum drain an amount away.  Additionally, allow those
	     who would normally regenerate a large amount to have it drain
	     away more slowly.  Thus the formula for drainage is:

		 drainage = -5000 / regen;

	     This formula is simple and it allows a higher number to
	     correspond to a lower number.  Note here that a value of 0 for
	     regen would cause a fatal error, so the value should first be
	     tested.


The Primary Factor - Combining the parameters
---

This involves combining all of the parameters in the equation into one,
giving them all different weightings and ensuring that they are balanced
against each other.  In some cases several of the key parameters can be
combined using this method and the less likely left to be used later to scale
the result.

Remember: This section is the roughest of the lot, and the method here
          should only be used as a guide; if it is followed, this section
          will have to be revisited a number of times.  This method, more
          than any other in this document, is meant only as a possible
          solution to the problem.

Once you have all of your parameters you should list them in order of
significance in a table with a relative weighting and a second value that
represents the relative significance of a change in value. 

Note: Weighting is dependent on both the size and range of a value, range can
      mean as much as the size of a number.
Note: Remember the note above about relative and absolute values, keep this
      in mind in this case, much of this weighting depends on that.

A good start is to put the value of 100 in both fields for the highest
priority and then fill in all the other values with respect to this starting
value.

  e.g. fireball spell:
	value			weight		increase        
	spell level		100		100
	intelligence		60		100
	fire magic diff		40		50
	magic resistance	40		40

Now take the values and apply the following formula to each:

    total = ( value - offset ) * weight ^ ( increase / 100 )
	  
This will give a full weighted value for each parameter, now all that is
required is to add these together.  From this long formula a value can be
obtained that contains all of the parameters in question.

Note: The offset value here is most often zero when the correct weighting
      factor is used, however this value can be used to reduce the total.
      This value becomes more useful as the starting value is restricted to a
      small range about a larger number.  Remember that if such a factor is
      used then the weighting should be changed accordingly.

Tip: It doesn't matter much if the end value is less than zero, but in this
     case an increase value cannot be used.  If you intend to use the
     increase value, ensure that the offset is set so that the value cannot
     be negative.

Now this can be used as is, but it involves a number of highly inefficient
floating point power operations.  One solution is to remove the increase
value from the equation, which in most cases is sufficient.

If a better method is required, some fiddling is needed, often with
functions seen in electrical control theory, with complex numbers and
formulae that have several factors on the denominator, etc...  In other
words, methods that are too complex to bother with when a simple weighting
is normally enough.

Testing this formula is as easy as thinking up a few combinations of the
parameters used and then ranking them in order of those that should get the
highest score.  If the values obtained through using this formula are in a
different order to this order then this stage will have to be repeated.


Finding the Base Value (aka. Curve Fitting 101)
---

From here all factors have been combined into a single value, or a single
critical factor has been selected.  This value then must be mapped into a
final value.  

Here is where those values that were written down at the start come into
play.  These results should be plotted against the main value (if a combined
value is being used that will have to be calculated for each situation).
From here a pattern is often observed and the trick then is to find what the
pattern is so it can be matched with the right formula.

Refer to the curve fitting section for information on curves and their
possible applications.


Summary of Method
---

1. Find all parameters.

2. Write down some reasonable values for the outcome in certain key
   situations to be used as a guide for tweaking.

3. Rank the parameters.

4. Select the most important parameters, if there are more than one combine
   these into a single figure.

5. Look at the values decided on for a reasonable outcome and attempt to find
   a curve that matches the pattern.  Map the curve to the main value or the
   composite value.

6. Apply the remaining parameters as scaling factors.

7. Test the resulting formula on the values written down at Step 2.  If the
   formula is too far off, start tweaking.
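Putting the summary together, a hypothetical fireball formula might read as
follows.  The base comes from level (Step 5) and the remaining parameters
are applied as linear scalings (Step 6); every constant here is a first
guess of exactly the kind Step 7 expects you to tweak:

```c
/* Hypothetical fireball damage: base from level, then linear scalings.
 * With intell at the assumed average of 15, fire_diff at 0 and resist
 * at 0, every scaling factor is neutral and the base passes through. */
int fireball_damage( int level, int intell, int fire_diff, int resist )
{
    int dam = level * 5;                   /* base value from level    */

    dam = dam * ( 20 + intell ) / 35;      /* caster's intelligence    */
    dam = dam * ( 100 + fire_diff ) / 100; /* fire magic difference    */
    dam = dam * ( 200 - resist ) / 200;    /* victim's resistance      */

    return dam;
}
```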


The Second Method - Scaling the Parameters
======================================================================

Probabilities and Scaling the Primary Factor
---

These require a value in a certain range (often between 0 and 100) which will
be matched to a random number in that range.  Thus for a probability curve,
the actual value is very difficult to scale properly whilst still remaining
within the bounds.  This is where the second method comes into play.  This
concentrates on a curve and finding the right value on the x axis to match
the wanted values.


Select the Shape
---

The first task is to find what the graph should look like.  This requires
some tinkering and a few sketches on a piece of paper.  Keep the following in
mind.

   * How much effect should a big rise in "the parameters" have on the
     result?  Should it be a similarly large rise?  Should the change be
     linear?  Should the change slow down?

   * Should the value reach zero?  Should the value be zero when "the
     parameters" are zero?

   * Probabilities are different, should the chance ever get to 100% or 0%,
     or should it be capped somehow?

This part requires a fair amount of tinkering until the concept is fixed.


Fitting the Parameters to the Graph
---

All that is now required is to take the parameters, starting at the most
significant, and finding where they should be. 

  * For each parameter define an average value for each value of the main
    parameter.
 
  * Take the main parameter and for several values along the entire range,
    plot where this corresponds to the desired value on the graph.  Keep in
    mind that at this point, all the other values should be at their average.

    e.g. fireball spell: You use level as the most significant, thus values
         of magic are assumed to increase one point every 20 levels.  So at
         level 10 assume the fire magic to be 2, and at level 50 fire magic
         is 4 for the purposes of this scaling.

  * From this a trend can be seen and a rough scale can be formulated from
    this primary value.  This can help in scaling this value to be put in
    the formula, so write down the formula with the correct scaling
    factored in.

  * Now for each of the other values in turn, take a few points around each
    of the points used for the first value and plot a range of values
    around this value.  This should help determine how much, and in what
    shape, to scale this value.

    e.g. fireball spell: About the level 10 mark plot values for fire magic
         of 0, 1, 3 and 4.  These might have the same effect as two thirds
         of a level each point.

  * As each stage goes in, the formula should be tested against all the
    'ideal' values.


Curve Fitting
======================================================================

This section deals with the possible functions that can be used, hopefully
the information will give an idea of what the advantages are in using each
different type.

   * Straight Line 

	The easiest of all the graphs to apply and test, a straight line is
	often best in formulae that are performance sensitive, even though
	it offers little flexibility.  This is also a common choice for an
	offensive spell as it offers the best relation between level and
	power.

   * Polynomial
   
	This may include a straight line as a major component, but the major
	difference is the addition of higher order terms, which introduce
	curvature into the line.

atan, exponential, straight line, polynomial, square root...


Things Not Covered in this Document
======================================================================

'if' statements - These can greatly simplify formula creation, don't be
          	  scared to use them, they often can avoid complex math and
          	  they can also improve the general result.


Reasons NOT to Use these Methods
======================================================================

The result is never exactly as you would like it to be, and possibly never
will be.  This document only shows a few of the possible means of obtaining
a balanced function.  In the end, the best formula is a very simple one.


Tools of the Trade
======================================================================

You should have a few of these tools handy, they are invaluable aids.

  * A pen and paper.  Write formulae.  Jot down tables of values.  Draw
    sketch graphs.  Pretty much the single most important thing, without this
    nothing happens.

  * A calculator.  Sometimes you need to find a value, the computer's
    calculator is almost invariably inadequate, so a regular scientific
    calculator becomes your friend.

  * Gnuplot.  This is not exactly mandatory but I find it extremely useful
    for showing exactly how the formulae look.  It is an extremely easy to
    use package and it can plot those complex formulae you have concocted
    pretty easily, even doing surface plots where you have two independent
    variables.

    This program is especially handy when it comes to tweaking the formula,
    as key values can be quickly adjusted and a plot quickly redrawn.

    e.g. Recently the heal spell on Daleken got a revamp.  When it was cast,
         a quantity of available mana was converted into extra healing.  The
         formula for the spell's efficiency was found for both before and
         after; these formulae were entered into gnuplot and plotted against
         each other.  The following commands set up the two functions f(x)
         and g(x) (x represents level in this case) and then plot them both
         over a scale of 0 to 250.
	 
	 gnuplot << EOF
	 f(x) = x / 50
	 g(x) = (x ** 2 / 25 + 4 * x) / (50 + x / 5)
	 plot [x=0:250] f(x),g(x)
	 EOF

	 This not only confirmed the theory that the addition was gradually
	 more efficient, but also showed exactly how much healing would be
	 taking place.  It turned out that the efficiency was far too high
	 at the higher levels, so the increase had to be scaled back.