by Spike » Sat Feb 21, 2015 12:16 am
despite direct3d's naming, these shaders are not 'pixel' shaders; they're fragment shaders, and they determine the colour of a fragment rather than of any specific pixel on the screen.
To put that another way (ie: a more obvious way), the result of the fragment shader is fed into a blend unit, and it's the blend unit (fixed-function) which does the actual pixel output. The prior colour of the framebuffer is not known until *after* the fragment shader has run, in part because multiple fragment shader invocations for the same pixel can run before the blend unit is given any data (in anticipation of depth rejection etc).
Thus if you want to read the screen, you must copy it to a texture first (this is not a problem assuming you have either npot or rect textures; gl_FragCoord + rect textures doesn't even need a varying).
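For reference, the rect-texture sampling side looks something like this. A minimal sketch only; 'screentex' is an assumed uniform name for the sampler that gets bound to the copied framebuffer texture, and the glsl is shown embedded in a C string the way an engine would typically carry it:
[code]
/* fragment shader for reading the copied screen via a rect texture;
   gl_FragCoord.xy is already in pixel units, so no varying/texcoord
   is needed.  'screentex' is an assumed uniform name. */
static const char *screen_frag_src =
    "#extension GL_ARB_texture_rectangle : enable\n"
    "uniform sampler2DRect screentex;\n"
    "void main(void)\n"
    "{\n"
    "    vec4 prior = texture2DRect(screentex, gl_FragCoord.xy);\n"
    "    gl_FragColor = vec4(1.0) - prior;    /* eg: invert the screen */\n"
    "}\n";
[/code]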
If you're feeling lazy, you can use glCopyTexImage to copy the framebuffer into a texture instead of needing to set up an fbo. There's a slight performance hit (but not as much as you might think), and of course you're limited to copying a region no bigger than the framebuffer (though the texture itself can be larger, with only the copied corner being sampled).
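Something like the following ought to do for the lazy route (a sketch, assuming GL_ARB_texture_rectangle is available and the scene has already been drawn; 'screentex', 'scr_width' and 'scr_height' are placeholder names):
[code]
#include <GL/gl.h>
#include <GL/glext.h>

static GLuint screentex;

/* grab the current framebuffer into a rect texture, ready for a shader
   like the one above to sample.  no fbo needed, at the cost of a copy. */
static void R_CaptureScreen(int scr_width, int scr_height)
{
    if (!screentex)
        glGenTextures(1, &screentex);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, screentex);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* copy the framebuffer (lower-left origin) into the texture;
       the copied region cannot exceed the framebuffer's size. */
    glCopyTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 0, 0, scr_width, scr_height, 0);
}
[/code]
After that you'd bind the texture to the shader's sampler and draw your fullscreen pass with it.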
note that some gles devices DO allow directly reading the framebuffer via the gl_LastFragData array (GL_EXT_shader_framebuffer_fetch). However, doing so is only supported because these devices are typically tile-based. Such an extension on desktop GL would enforce synchronisation, basically stalling the pipeline, which I'm sure you can understand would be disadvantageous.
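For completeness, that gles path looks roughly like this (only valid where GL_EXT_shader_framebuffer_fetch is actually exposed; again shown as a glsl-es string inside C):
[code]
/* GLSL ES fragment shader using framebuffer fetch; gl_LastFragData[0]
   holds the colour already present at this pixel. */
static const char *fbfetch_frag_src =
    "#extension GL_EXT_shader_framebuffer_fetch : require\n"
    "precision mediump float;\n"
    "void main(void)\n"
    "{\n"
    "    gl_FragColor = vec4(1.0) - gl_LastFragData[0];    /* eg: invert */\n"
    "}\n";
[/code]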
Also, framebuffer fetch won't help with bloom/blur, since those need to read neighbouring pixels rather than just the one being written.
Also note that you can write a contrast 'shader' by utilising the blend unit's ability to blend against the destination colour (a pass or two of it), and brightness is just a flat additive pass, so it's really only gamma that needs the texture copy and glsl.
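A rough sketch of that blend-unit trick, where DrawFullscreenQuad is a hypothetical helper that draws an untextured quad covering the whole view:
[code]
#include <GL/gl.h>

extern void DrawFullscreenQuad(void);    /* hypothetical helper */

/* contrast: GL_DST_COLOR/GL_ONE gives dst*src + dst == dst*(src+1),
   so a quad coloured (scale-1) multiplies the screen by 'scale'
   (usable for scale in 1..2; repeat the pass for larger factors). */
static void Blend_Contrast(float scale)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ONE);
    glColor4f(scale - 1, scale - 1, scale - 1, 1);
    DrawFullscreenQuad();
    glDisable(GL_BLEND);
}

/* brightness: plain additive blend, a flat addition of 'add' per channel. */
static void Blend_Brightness(float add)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glColor4f(add, add, add, 1);
    DrawFullscreenQuad();
    glDisable(GL_BLEND);
}
[/code]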
so... no. you can't just read the destination colour and replace it with something better.