In, say, OpenGL 2.0 or higher, as I understand it, you effectively end up calculating your own matrix for every entity, either via a math library or your own matrix code.
1. The projection and modelview matrices get multiplied together before the draw call (something that OpenGL 1.x did for you, on the GPU side).
2. This shifts a ton of matrix calculations onto the CPU.
3. Many of them don't have to be recalculated constantly, but since the projection and modelview matrices still have to be multiplied, any camera change (i.e. you moved or turned) means a ton of floating-point matrix math every frame.
Doesn't this, to some degree, fail to take advantage of what the GPU exists to do, namely handle a lot of calculations so the CPU doesn't have to?
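To make concrete what I mean by "shifting matrix calculations to the CPU", here's a rough sketch of the per-frame, per-entity work I'm describing. It assumes GLM and a shader program with a `uMVP` uniform; the `Entity` struct, `drawScene`, and the uniform name are just illustrative, not from any particular codebase:

```cpp
#include <GL/glew.h>                     // or any other loader that provides the GL entry points
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>

struct Entity {
    glm::vec3 position;
    GLuint    vao;
    GLsizei   indexCount;
};

void drawScene(const std::vector<Entity>& entities,
               GLuint shaderProgram,
               const glm::mat4& view,        // rebuilt whenever the camera moves or turns
               const glm::mat4& projection)  // usually only changes on resize / FOV change
{
    glUseProgram(shaderProgram);
    GLint mvpLocation = glGetUniformLocation(shaderProgram, "uMVP");

    for (const Entity& e : entities) {
        // Per-entity model matrix, built on the CPU every frame.
        glm::mat4 model = glm::translate(glm::mat4(1.0f), e.position);

        // projection * view * model: 4x4 matrix multiplies per entity, all CPU-side,
        // before the GPU ever sees a vertex.
        glm::mat4 mvp = projection * view * model;

        glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
        glBindVertexArray(e.vao);
        glDrawElements(GL_TRIANGLES, e.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}
```

The GPU then just applies the already-combined matrix that the CPU handed it, which is the part that seems backwards to me.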