The first Big Thing anyone should do is fix the water and sky warps; these look horrific (and perform badly) in GLQuake. Moving them to shaders fixes everything cleanly: no surface subdivision needed, massively reduced memory overhead, and it runs a LOT faster.
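A minimal sketch of the idea, assuming a GLSL 3.30 fragment shader; the uniform and varying names (u_texture, u_time, v_texcoord) and the warp constants are made up for the example. The whole warp is just a per-pixel sine perturbation of the texcoords, so nothing ever needs to be subdivided:

#version 330 core

// Per-pixel water/sky warp, replacing GLQuake's subdivided, CPU-warped surfaces.
uniform sampler2D u_texture;
uniform float u_time;

in vec2 v_texcoord;
out vec4 o_color;

void main ()
{
    // same turbulence idea as software Quake: offset each texcoord axis
    // by a sine of the other axis plus time
    vec2 warped;
    warped.x = v_texcoord.x + sin (v_texcoord.y * 8.0 + u_time) * 0.125;
    warped.y = v_texcoord.y + sin (v_texcoord.x * 8.0 + u_time) * 0.125;

    o_color = texture (u_texture, warped);
}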
After that it's decision time - do you want a traditional look, or do you want to go all-out for eye-candy? Some things that I think bridge the gap nicely (and that's only my opinion, not some kind of universal truth) include heat haze and other post-processing effects.
Beware of developing on NVIDIA hardware. It's much more tolerant of bad or sloppy code, and what you end up with might not run on anything else. Word on the street is that their GLSL compiler will even accept HLSL syntax.

Shaders also go a long way towards making static VBOs possible. Because - in many cases - you no longer need to modify vertex data on the CPU before submitting it, that data can live in a static VBO and give you another performance boost. The combination of the two is definitely the next level.
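To make that concrete for animated models, here's a rough sketch (again GLSL 3.30, with invented names) of a vertex shader that lerps between two poses. Both poses sit in the same static buffer as separate attributes, and the engine picks which two by adjusting attribute pointer offsets, so nothing is ever rewritten on the CPU:

#version 330 core

// GPU-side pose interpolation: the vertex data for every frame stays in
// one static VBO, and only the blend factor changes per draw.
uniform mat4 u_mvp;
uniform float u_lerp;   // 0.0 = previous pose, 1.0 = current pose

in vec3 a_position_prev;
in vec3 a_position_curr;
in vec2 a_texcoord;

out vec2 v_texcoord;

void main ()
{
    vec3 position = mix (a_position_prev, a_position_curr, u_lerp);
    v_texcoord = a_texcoord;
    gl_Position = u_mvp * vec4 (position, 1.0);
}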
Is it time to make the jump yet? I definitely think so. Shaders have been standardised for 10 years now, and everyone should have capable hardware by this point. There are probably a few die-hards clinging to their old stuff, but there are plenty of other options available for them. Just switching to shaders opens up a LOT of possibilities; things that weren't possible before, or that would have needed multiple blended passes at too high a performance cost, suddenly become really easy.
They're also more or less required to fix GLQuake. Software Quake used a lot of tricks that can only be done at the per-pixel level; there have been some quite heroic efforts to replicate them accurately in OpenGL, but they're never quite right, and they take a huge investment of time and effort that could be replaced by something like 2 lines of shader code.
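Overbright lighting is a good example of what I mean: software Quake's light values go above 1.0, and the fixed pipeline either clamps that away or needs GL_COMBINE tricks and extra passes to fake it, whereas in a fragment shader it really is a couple of lines. A rough sketch, with illustrative names:

#version 330 core

// Overbright lightmapping done per-pixel, the way software Quake did it.
uniform sampler2D u_diffuse;
uniform sampler2D u_lightmap;

in vec2 v_texcoord;
in vec2 v_lmcoord;
out vec4 o_color;

void main ()
{
    // the "2 lines": modulate by the lightmap, then scale by 2 to
    // recover the overbright range
    vec4 light = texture (u_lightmap, v_lmcoord) * 2.0;
    o_color = texture (u_diffuse, v_texcoord) * light;
}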