Does the input texture in a fragment shader change when the shader runs?


I'm trying to implement the Atkinson dithering algorithm in a fragment shader in GLSL using our own Brad Larson's GPUImage framework. (This might be one of those things that is impossible but I don't know enough to determine that yet so I'm just going ahead and doing it anyway.)

The Atkinson algo dithers grayscale images into pure black and white as seen on the original Macintosh. Basically, I need to investigate a few pixels around my pixel and determine how far away from pure black or white each is and use that to calculate a cumulative "error;" that error value plus the original value of the given pixel determines whether it should be black or white. The problem is that, as far as I could tell, the error value is (almost?) always zero or imperceptibly close to it. What I'm thinking might be happening is that the texture I'm sampling is the same one that I'm writing to, so that the error ends up being zero (or close to it) because most/all of the pixels I'm sampling are already black or white.

Is this correct, or are the textures that I'm sampling from and writing to distinct? If the former, is there a way to avoid that? If the latter, then might you be able to spot anything else wrong with this code? 'Cuz I'm stumped, and perhaps don't know how to debug it properly.

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

uniform highp vec3 dimensions;

void main() {
    highp vec2 relevantPixels[6];

    relevantPixels[0] = vec2(textureCoordinate.x, textureCoordinate.y - 2.0);
    relevantPixels[1] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y - 1.0);
    relevantPixels[2] = vec2(textureCoordinate.x, textureCoordinate.y - 1.0);
    relevantPixels[3] = vec2(textureCoordinate.x + 1.0, textureCoordinate.y - 1.0);
    relevantPixels[4] = vec2(textureCoordinate.x - 2.0, textureCoordinate.y);
    relevantPixels[5] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y);

    highp float err = 0.0;

    for (mediump int i = 0; i < 6; i++) {
        highp vec2 relevantPixel = relevantPixels[i];
        // @todo Make sure we're not sampling a pixel out of scope. For now this
        // doesn't seem to be a failure (?!).
        lowp vec4 pointColor = texture2D(inputImageTexture, relevantPixel);
        err += ((pointColor.r - step(.5, pointColor.r)) / 8.0);
    }

    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp float hue = step(.5, textureColor.r + err);

    gl_FragColor = vec4(hue, hue, hue, 1.0);
}

There are a few problems here, but the largest one is that Atkinson dithering can't be performed efficiently within a fragment shader. This kind of dithering is an inherently sequential process: each pixel's output depends on the error diffused from the pixels processed above and to the left of it. A fragment shader can only write to its own fragment in OpenGL ES, not to neighboring ones as is required in that Python implementation you point to.

For potential shader-friendly dither implementations, see the question "Floyd–Steinberg dithering alternatives for pixel shader."

You also normally can't write to and read from the same texture, although Apple did add some extensions in iOS 6.0 that let you write to a framebuffer and read from that written value in the same render pass.
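That capability is exposed through the `GL_EXT_shader_framebuffer_fetch` extension. As a rough sketch (the surrounding dithering logic is omitted, and this only gives you the current fragment's previous value, not its neighbors'):

    #extension GL_EXT_shader_framebuffer_fetch : require

    void main()
    {
        // gl_LastFragData[0] holds the value already written to the
        // framebuffer at this fragment's location in this render pass
        lowp vec4 previousColor = gl_LastFragData[0];

        // ... compute a new color based on previousColor ...
        gl_FragColor = previousColor;
    }

Note that this still doesn't make a sequential error-diffusion algorithm practical, because fragments execute in an undefined order relative to one another.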

As to why you're seeing odd error results, the coordinate system within a GPUImage filter is normalized to the range 0.0 - 1.0. When you try to offset a texture coordinate by adding 1.0, you're reading past the end of the texture (which is then clamped to the value at the edge by default). This is why you see me using texelWidth and texelHeight values as uniforms in other filters that require sampling from neighboring pixels. These are calculated as a fraction of the overall image width and height.
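For example, to sample the pixel immediately to the left of the current one, you'd offset by one texel in normalized coordinates, not by 1.0 (which is the entire image width). A minimal sketch, assuming `texelWidth` and `texelHeight` uniforms that you set from your application code as `1.0 / imageWidth` and `1.0 / imageHeight`:

    varying highp vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;

    uniform highp float texelWidth;   // 1.0 / image width in pixels
    uniform highp float texelHeight;  // 1.0 / image height in pixels

    void main()
    {
        // One texel to the left, rather than the whole image width to the left
        highp vec2 leftCoordinate = textureCoordinate - vec2(texelWidth, 0.0);
        lowp vec4 leftColor = texture2D(inputImageTexture, leftCoordinate);

        gl_FragColor = leftColor;
    }

Your `relevantPixels` offsets would all need to be scaled this way, e.g. `vec2(textureCoordinate.x - 2.0 * texelWidth, textureCoordinate.y)` instead of `vec2(textureCoordinate.x - 2.0, textureCoordinate.y)`.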

I'd also not recommend doing texture coordinate calculation within the fragment shader, as that will lead to a dependent texture read and really slow down the rendering. Move that up to the vertex shader, if you can.
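That is, compute the neighbor coordinates in the vertex shader and pass them through as varyings, so the fragment shader's `texture2D()` calls use unmodified varyings. A sketch of that pattern (attribute and varying names here follow GPUImage conventions, but the exact set of neighbors is up to you):

    attribute vec4 position;
    attribute vec4 inputTextureCoordinate;

    uniform float texelWidth;   // 1.0 / image width in pixels

    varying vec2 textureCoordinate;
    varying vec2 leftTextureCoordinate;

    void main()
    {
        gl_Position = position;

        // Precompute neighbor coordinates here; interpolation across the
        // quad gives each fragment its own offsets for free
        textureCoordinate = inputTextureCoordinate.xy;
        leftTextureCoordinate = inputTextureCoordinate.xy - vec2(texelWidth, 0.0);
    }

The fragment shader then samples with `texture2D(inputImageTexture, leftTextureCoordinate)` directly, avoiding the dependent texture read.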

Finally, to answer your title question, usually you can't modify a texture as it is being read, but the iOS texture cache mechanism sometimes allows you to overwrite texture values as a shader is working its way through a scene. This leads to bad tearing artifacts usually.