OK, I was talking rubbish. I've managed to reproduce Spike's error, and it's nothing to do with timer precision.
This description is for NQ, but I assume that QW does something reasonably similar.
On a frame when the server runs, we get a full read on the client.
On a frame when it doesn't run, we get no read on the client.
In both cases CL_RelinkEntities (which generates particle trails) runs.
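
For reference, this is roughly the shape of the read path I mean, condensed from memory from WinQuake's cl_main.c (the function names are the stock ones as far as I recall, but treat the details as a sketch, not a verbatim copy):

[code]
// condensed from memory, not a verbatim copy of the engine source
int CL_ReadFromServer (void)
{
	int	ret;

	cl.oldtime = cl.time;
	cl.time += host_frametime;

	do
	{
		ret = CL_GetMessage ();		// 0 = nothing arrived this frame
		if (ret == -1)
			Host_Error ("CL_ReadFromServer: lost server connection");
		if (!ret)
			break;

		cl.last_received_message = realtime;
		CL_ParseServerMessage ();	// the "full read"; CL_ParseUpdate lands in here
	} while (ret && cls.state == ca_connected);

	CL_RelinkEntities ();			// runs every client frame, read or no read
	CL_UpdateTEnts ();

	return 0;
}
[/code]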
If the client is running faster than the server, the entity remains on the client but its new origin doesn't get updated. In order to generate a particle trail we need to (see the sketch after this list):
- Copy current origin to old origin.
- Advance current origin by the lerp position determined by CL_LerpPoint.
- Generate particles between the two points.
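
That's this bit of CL_RelinkEntities, condensed from memory; the teleport check and the other trail types are snipped, and R_RocketTrail/EF_ROCKET are the stock names as far as I recall:

[code]
// condensed from memory; just the three steps above
frac = CL_LerpPoint ();				// fraction between the last two server updates

VectorCopy (ent->origin, oldorg);		// 1. current origin becomes old origin

for (j = 0; j < 3; j++)				// 2. advance to the lerped position
	ent->origin[j] = ent->msg_origins[1][j] +
		frac * (ent->msg_origins[0][j] - ent->msg_origins[1][j]);

if (ent->model->flags & EF_ROCKET)		// 3. particles between the two points
	R_RocketTrail (oldorg, ent->origin, 0);
[/code]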
However, if the client is running really, really, really fast, the difference between old origin and current origin on successive client frames will be negligible, at least for the purpose of generating particles.
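
To put rough numbers on it: a rocket flies at 1000 units per second, so at 72fps it moves about 14 units per client frame, but at 500fps it's down to about 2 units, and at 1000fps about 1 unit, which is down around the spacing R_RocketTrail puts between particles (roughly 3 units, if I remember right), so each call gets handed a segment that's basically a point.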
Soooo... how about this?
- Check for particle trail effects in CL_ParseUpdate instead of in CL_RelinkEntities.
- if ((bits & U_ORIGIN1) || (bits & U_ORIGIN2) || (bits & U_ORIGIN3)) generate particles.
- Use ent->msg_origins[0] and ent->msg_origins[1] as the start and end points (or should that be the other way around?). Something like the sketch below.
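
Sketched from memory against CL_ParseUpdate in cl_parse.c; untested, so treat it as pseudocode rather than a working diff. R_RocketTrail, EF_ROCKET and friends are the stock names, but the placement and the forcelink caveat are my own guesses:

[code]
// hypothetical, untested sketch; goes after the U_ORIGINn/U_ANGLEn reads
if (bits & (U_ORIGIN1 | U_ORIGIN2 | U_ORIGIN3))
{
	// client.h says msg_origins[0] is the newest update and [1] the previous
	// one, so the trail would run from [1] to [0]
	if (ent->model && (ent->model->flags & EF_ROCKET))
		R_RocketTrail (ent->msg_origins[1], ent->msg_origins[0], 0);
	// ...same idea for EF_GRENADE, EF_TRACER, etc.
	// probably needs skipping in the forcelink case, otherwise a freshly
	// spawned or teleported entity would smear a trail right across the map
}
[/code]

If the client.h comment is telling the truth about [0] being the newest, that also answers the start/end question above.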