by metlslime » Sun May 17, 2009 11:04 pm
and as for how to do it, here's a general description but you'll have to work out the details:
1. quake uses one of three axial projections, depending on the facing angle of the polygon: top (XY), front (XZ), side (YZ). When a face is exactly 45 degrees between two of these, there's some arbitrary but consistent rule for picking one.
2. the default projection is scaled to the texture dimensions, so 1 texel = 1 unit of world space in the two dimensions used by that projection.
3. quake applies the user-supplied projection by first translating, then scaling, then rotating.
4. to work backwards from UVs to projection, you will need to compare the vertex coordinates in UV space to their coordinates in default projection space. Default projection space is the XYZ coordinate mapped into the correct axial projection space, and scaled according to the texture dimensions.
5. rotation is the angle between the vector defined by two verts in one coordinate space and the vector defined by the same two verts in the other space. scale is the ratio of the distances between those verts in the two spaces. translate is the offset between one of the verts in one space and the same vert in the other space.
6. Be sure to remove the effect of each transform before calculating the next one.
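For step 1, here's a minimal python sketch of the axis-picking rule. The exact tie-break comparisons are my assumption (quake's own version lives in the qbsp tool source); any consistent choice works as long as it matches what the compiler does:

```python
def projection_plane(normal):
    """Return the indices of the two world axes used by the axial
    projection: (0, 1) = top XY, (0, 2) = front XZ, (1, 2) = side YZ."""
    ax, ay, az = (abs(c) for c in normal)
    # pick the plane the face is most parallel to; the >= comparisons
    # are the "arbitrary but consistent" rule for the 45-degree case
    if az >= ax and az >= ay:
        return (0, 1)   # mostly-vertical normal -> top-down projection
    if ax >= ay:
        return (1, 2)   # normal points along X -> project onto YZ
    return (0, 2)       # normal points along Y -> project onto XZ
```

e.g. a floor face with normal (0, 0, 1) projects onto XY, a wall facing +X projects onto YZ.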
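Steps 2-3 (the forward direction) might look like this, taking the stated order literally: translate, then scale, then rotate. The rotation direction and whether scale divides or multiplies are assumptions on my part, so verify against your editor/compiler before trusting the signs:

```python
import math

def uv_from_vertex(vert, plane, tex_w, tex_h, offset, scale, rotation_deg):
    """Sketch: world vertex -> normalized UV under an axial projection."""
    i, j = plane
    # step 2: default projection, 1 texel = 1 world unit on the chosen plane
    s, t = vert[i], vert[j]
    # step 3: translate, then scale, then rotate (offsets are in texels)
    s, t = s + offset[0], t + offset[1]
    s, t = s / scale[0], t / scale[1]
    a = math.radians(rotation_deg)
    s, t = (s * math.cos(a) - t * math.sin(a),
            s * math.sin(a) + t * math.cos(a))
    # normalize by the texture dimensions to get UVs
    return s / tex_w, t / tex_h
```

with zero offset/rotation and scale 1, this degenerates to plain world coords over texture size, which is the default projection from step 2.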
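And steps 4-6 (working backwards) could be sketched like so, assuming a uniform scale and the model uv = rotate(default) / scale + offset — with non-uniform scale you'd have to solve each axis separately. Note how each quantity is computed in a way that's immune to the transforms not yet removed (step 6): the rotation comes from vectors (translation cancels), the scale from distances (rotation and translation cancel), and only then is the translation read off:

```python
import math

def recover_transform(p0_def, p1_def, p0_uv, p1_uv):
    """Recover (rotation_deg, scale, offset) from two verts known in both
    default-projection space and UV space."""
    ddef = (p1_def[0] - p0_def[0], p1_def[1] - p0_def[1])
    duv = (p1_uv[0] - p0_uv[0], p1_uv[1] - p0_uv[1])
    # step 5: rotation is the angle between the two vectors
    rotation = math.atan2(duv[1], duv[0]) - math.atan2(ddef[1], ddef[0])
    # step 5: scale is the ratio of distances between the verts
    scale = math.hypot(*ddef) / math.hypot(*duv)
    # step 6: undo rotation and scale on one vert; what's left is the translate
    c, s = math.cos(rotation), math.sin(rotation)
    rx = (p0_def[0] * c - p0_def[1] * s) / scale
    ry = (p0_def[0] * s + p0_def[1] * c) / scale
    offset = (p0_uv[0] - rx, p0_uv[1] - ry)
    return math.degrees(rotation), scale, offset
```

in practice you'd pick two verts that aren't coincident in UV space, or the atan2/hypot terms blow up.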
For reference, check out BuildSurfaceDisplayList in the glquake source code; it demonstrates converting quake projections into UV coordinates (referred to as S,T in the code).
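The relevant part of BuildSurfaceDisplayList boils down to this (python transcription; in the C source each texinfo axis is a 4-vector with the projection direction in [0..2] and the texel offset in [3]):

```python
def st_from_texinfo(vert, s_vec, t_vec, tex_w, tex_h):
    """Vertex -> (s, t) the way glquake does it: dot the vertex with each
    texinfo axis, add that axis's offset, then divide by the texture size."""
    s = vert[0] * s_vec[0] + vert[1] * s_vec[1] + vert[2] * s_vec[2] + s_vec[3]
    t = vert[0] * t_vec[0] + vert[1] * t_vec[1] + vert[2] * t_vec[2] + t_vec[3]
    return s / tex_w, t / tex_h
```

so the bsp format has already collapsed the axial projection, rotation, scale, and offset into those two 4-vectors; working backwards to editor-style values is exactly the process in steps 4-6 above.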