Precision Appendix: Matrix Transformation Tutorial
Table of Contents
- Math Prerequisites
- Importance of Correct Transformations
- What a Matrix Represents
- Matrix Multiplication
- Transforming a Vector by a Matrix
- Object Space Transformations
- Camera Transformations
- Inverse Transformations
- Hierarchical Transformations
There is one drawback to this method of matrix transformation: it
requires that the values stored in your matrix be very precise. This
is a problem because all digital representations of fractional numbers
are approximate, and no matter how precise the approximation, errors
will propagate. This means that the more you use an approximated
value in calculations, the less precise the value becomes. There are
several solutions to this problem; I will briefly discuss each and
then leave the decision to you.
The first (and most obvious) solution is to use the most precise
data type available. On the PC, we have 64 bit (double) and 80 bit
(long double) floating point data types. Clearly, using these data
types will result in VERY little loss of precision in calculations.
This is a viable solution to our problem, but it may not be desirable
due to the larger storage requirements and possible speed hits which
accompany these data types.
The second solution is to use 32 bit single precision floating
point data. Its precision is more than adequate in most cases, and
its speed is tolerable when a floating point coprocessor is present.
Also, single precision floats do not require any more space to store
than long integers. The disadvantages of this data type are that it
is still somewhat slow even on machines with a floating point
coprocessor, and there is always the chance that someone without an
FPU will use the software. In that case, pure floating point math
will be anywhere from 10 to 100 times slower than integer math.
The third option is to use integer math. The obvious benefit here
is speed; the obvious disadvantage is precision. 16.16 fixed point
math (16 bit integer, 16 bit fraction) allows for less than 5 decimal
digits of precision (a minimal sketch follows). This may sound like a
lot, but consider the fact that the axis vectors in the transformation
matrix all have a magnitude of 1, which means that the individual x,
y, and z components are always less than or equal to 1. Your
fractional bits become very important in such a situation.
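To make the precision limit concrete, here is a minimal 16.16 fixed
point sketch in C (the type and function names are mine, not from any
particular library). The smallest representable step is 1/65536, or
about 0.000015, and every multiply throws away the low 16 fraction
bits of the full product, which is exactly where the error creeps in:

    #include <stdint.h>

    typedef int32_t fix16;            /* 16 integer bits, 16 fraction bits */

    #define FIX16_ONE 0x10000         /* 1.0 in 16.16 */

    static fix16 fix16_from_double(double x) { return (fix16)(x * FIX16_ONE); }
    static double fix16_to_double(fix16 x)   { return x / (double)FIX16_ONE; }

    /* Multiply via a 64 bit intermediate; the low 16 fraction bits of
       the full 32.32 product are discarded on every operation. */
    static fix16 fix16_mul(fix16 a, fix16 b)
    {
        return (fix16)(((int64_t)a * b) >> 16);
    }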
As far as my personal preference goes, I use single precision
floating point math in my transformation matrix. I have found it to
be the best compromise between speed and precision. As processors
become more and more adept at floating point calculation, floating
point math will be faster than integer math. Many of the newer RISC
based processors already have floating point math that is faster than
their integer math. For example, friends of mine developing PowerPC
rendering software tell me that floating point math is three to four
times faster than integer math on their platforms. The PowerPC has a
nifty little instruction called fmadd which performs a floating point
multiply and add in a single instruction! It makes your matrix
multiplication routines pretty fast. There are also faster desktop
FPUs; the floating point unit in a DEC Alpha chip is roughly as fast
as those in early Cray supercomputers.
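To show why a multiply-add instruction helps, here is a sketch of a
vector transform in C. The matrix layout is an assumption on my part
(a 3x3 rotation in m plus a translation in t); each output component
is a chain of multiplies and adds, which maps directly onto an
instruction like fmadd:

    typedef struct { float m[3][3]; float t[3]; } Matrix;

    /* Transform a point: rotate by the 3x3 portion, then translate.
       Each line below is three multiply-adds in a row. */
    static void transform_point(const Matrix *a, const float in[3], float out[3])
    {
        out[0] = a->m[0][0]*in[0] + a->m[0][1]*in[1] + a->m[0][2]*in[2] + a->t[0];
        out[1] = a->m[1][0]*in[0] + a->m[1][1]*in[1] + a->m[1][2]*in[2] + a->t[1];
        out[2] = a->m[2][0]*in[0] + a->m[2][1]*in[1] + a->m[2][2]*in[2] + a->t[2];
    }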
Now, about how to handle the loss of precision. Two things
happen when a matrix loses precision: its axis vectors change
magnitude so that they are no longer unit vectors, and its axis
vectors "wander" around a bit so that they are no longer
perpendicular to each other. I have implemented both integer and
single precision floating point versions of these matrix
transformations. I found that the 16.16 fixed point integer versions
lose noticeable precision after somewhere around 10^2 transformations,
while the single precision floating point versions show no noticeable
loss of precision after 10^5 transformations. However, there was a
slightly noticeable wobble of objects in the third level of hierarchy
when using single precision floating point. I attribute this to a
normal, usually unnoticeable loss of precision which is more
noticeable because it is shown in the same frame as objects
transformed with more precise versions of the same matrix.
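If you want to watch these two effects happen, you can measure them
directly. This sketch (reusing the Matrix type from above, and
assuming its rows are the axis vectors) prints how far each axis has
strayed from unit length and how far the axes have strayed from
mutual perpendicularity:

    #include <math.h>
    #include <stdio.h>

    static float dot3(const float a[3], const float b[3])
    {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    static void print_drift(const Matrix *a)
    {
        for (int i = 0; i < 3; i++)
            printf("axis %d length error: %g\n",
                   i, fabs(sqrt(dot3(a->m[i], a->m[i])) - 1.0));
        printf("xy dot: %g  xz dot: %g  yz dot: %g\n",
               dot3(a->m[0], a->m[1]),
               dot3(a->m[0], a->m[2]),
               dot3(a->m[1], a->m[2]));
    }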
If you are a die hard integer math freak who refuses to accept the
fact that floating point is the wave of the future, there are a few
things you can do to justify your use of integer math with reasonable
matrix precision. First, you could try using 0.32 fixed point in the
rotation portion of the transformation matrix. However, you are going
to have to do some 64 bit shifting around, and some tricky shifting in
your matrix multiplication routine to accommodate translation values,
which are almost always greater than 1. (A sketch of such a multiply
follows.)
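Here is one reading of that idea, sketched in C. Since the rotation
components need a sign, I am assuming a signed format with 31 fraction
bits rather than a literal 0.32; the 64 bit intermediate is the
"64 bit shifting around" mentioned above:

    #include <stdint.h>

    typedef int32_t fix31;            /* 1 sign bit, 31 fraction bits */

    /* Multiply two values in [-1, 1); the full 64 bit product keeps
       all the fraction bits until the final shift. */
    static fix31 fix31_mul(fix31 a, fix31 b)
    {
        return (fix31)(((int64_t)a * b) >> 31);
    }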
The next method involves fixing your matrix so that all the vectors
are the proper length and mutually perpendicular. This is quite a
math problem to solve if you have never considered the solution or
have no knowledge of vector operations. There are two ways of going
about this: one involves dot products, the other involves cross
products.
The dot product method is based on the fact that the dot product
of perpendicular vectors is 0, because the dot product of a vector
with a unit vector is the length of its projection onto that vector.
If the dot product is not 0, you have a value which tells you how far
the vectors overlap in a certain direction. By subtracting that much
of one vector from the other, you can straighten the second vector in
that direction. After you straighten all your vectors in this manner,
you re-normalize them so that their lengths are 1 (length changes as
a result of the perpendicularity correction), and you are set.
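As a sketch of the dot product method (reusing the Matrix type and
dot3 from above, with the same row-as-axis assumption), the
corrections below assume the axes are still close to unit length, so
the raw dot product is a good enough projection coefficient:

    static void normalize3(float v[3])
    {
        float len = (float)sqrt(dot3(v, v));
        v[0] /= len; v[1] /= len; v[2] /= len;
    }

    static void straighten_dot(Matrix *a)
    {
        float d;
        int i;

        /* Straighten y against x, then z against x and the corrected y. */
        d = dot3(a->m[1], a->m[0]);
        for (i = 0; i < 3; i++) a->m[1][i] -= d * a->m[0][i];
        d = dot3(a->m[2], a->m[0]);
        for (i = 0; i < 3; i++) a->m[2][i] -= d * a->m[0][i];
        d = dot3(a->m[2], a->m[1]);
        for (i = 0; i < 3; i++) a->m[2][i] -= d * a->m[1][i];

        /* Re-normalize, since the corrections change the lengths. */
        normalize3(a->m[0]);
        normalize3(a->m[1]);
        normalize3(a->m[2]);
    }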
The cross product method involves the fact that the cross product
operation yields a vector which is perpendicular to both of its
operands. By taking the cross product of two axes, you can make a
third axis which is perpendicular to both. Then, by taking the cross
product of this new axis and the first of the two original axes, you
can generate a new second axis which is perpendicular to axes one and
three. Axis one is perpendicular to axes two and three, and axis two
is perpendicular to axis three. There you have it: mutual
perpendicularity. Of course, you also need to normalize these vectors
when you are done.
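And a sketch of the cross product method, under the same layout
assumptions and assuming a right-handed axis set: z is rebuilt from
x and y, then y is rebuilt from the new z and the original x.

    static void cross3(const float a[3], const float b[3], float out[3])
    {
        out[0] = a[1]*b[2] - a[2]*b[1];
        out[1] = a[2]*b[0] - a[0]*b[2];
        out[2] = a[0]*b[1] - a[1]*b[0];
    }

    static void straighten_cross(Matrix *a)
    {
        cross3(a->m[0], a->m[1], a->m[2]);   /* z = x cross y          */
        cross3(a->m[2], a->m[0], a->m[1]);   /* y = z cross x          */
        normalize3(a->m[0]);                 /* restore unit lengths   */
        normalize3(a->m[1]);
        normalize3(a->m[2]);
    }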
Wow, writing docs is loads of fun...I wonder why I waited this
long to make a new one? heh...
All of this stuff came from my head. I'm not the type who sits
down in front of a terminal with a copy of Foley's book (I don't even
own a copy of that monster...probably never will) or any other book
for that matter (don't own any books on graphics at all, come to think
of it :) On the other hand, I'm not claiming that everything in here
is my own idea. Actually, there are no big secrets divulged here;
sorry if that's what you were looking for. These techniques are
pretty much common knowledge; if that were not the case, how would I
have found out about them? :) "So why did you write a doc then,
fool?" Because I have found myself spending lots of time explaining
this stuff lately, and it's better for all parties involved for me to
write things down in a doc rather than teaching the same things to
different people day after day. Now I can just say, "here, read
this! :)"
I have been asked if it's OK to publish my other docs in diskmags,
newsletters, etc. I have no problem with this whatsoever, as long as
I am dealt with fairly. That is, you can publish this doc anywhere as
long as you do not lead anyone to believe it is not my work.