(no subject)

Aug 24, 2011 03:12


Since I am only thinking about coding (and travel arrangements) in the run-up to a demo event, I always forget that I'm about to spend time with some of the best people ever. Afterwards, I try to make a note about that... note to self: demoparties have best people ever.

I ran out of time and did not enter the compo at Evoke. One reason is: I developed on ATI hardware this time, and I have some kind of weird nvidia bug I can't figure out. It doesn't seem like I'm doing anything wrong... though I have been at fault for the overwhelming majority of bugs so far.

The bug is this: I have a fragment shader which is used to displace screen pixels. The screen pixels are read from a 2D texture; the displacement direction and amount are read from a 3D texture (and are animated by changing the depth coordinate in that texture). The part in question is like this:

#version 330 core
// FS_Break.frag

uniform sampler2D tx1; // a screen-sized texture which is a capture of the screen color buffer
uniform sampler3D tx2; // a 3D texture containing displacement data
uniform float tx2z;    // how deep in the 3D texture to read from
uniform float amount;  // severity of effect. 0.0 = no effect.

layout(location = 0, index = 0) out vec4 Color;

in vec2 ouv1; // interpolated texture coordinate

void main()
{
    Color = texture(tx1, ouv1 + vec2(amount * texture(tx2, vec3(ouv1, tx2z)).rg - vec2(0.5, 0.5)));
}

Works as expected on ATI. On my nvidia machine, it results in an empty screen. Messing around with it shows some very weird properties:
// correctly displays the screen sample
Color = texture(tx1, ouv1);

// correctly displays the attenuated displacement texture
Color = amount * texture(tx2, vec3(ouv1, tx2z));

// this is used in some more variations below..
vec4 nvbug = texture(tx2, vec3(ouv1, tx2z));
...

// blank screen.
Color = texture(tx1, ouv1 + amount * nvbug.rg * 0.0000000001);

// not blank.
Color = texture(tx1, ouv1 + amount * nvbug.rg * 0.000000000);

// this is the best one:
// replace either branch with Color = vec4(1.0,1.0,1.0,1.0) and it works as written,
// but if both texture values are used, even if never combined or used at the same time, blank!!!
if(amount > 0.4)
{
    Color = texture(tx1, ouv1);
    // Color = vec4(1.0,1.0,1.0,1.0);
}
else
{
    Color = nvbug;
    // Color = vec4(1.0,1.0,1.0,1.0);
}
So... that's driving me nuts. I thought maybe my two textures used different internal formats and ATI was doing some pixel format conversion that NV wasn't, but that seems unlikely. Clearly the driver is optimizing out one or the other texture reference when it computes that it is unused. But what's wrong?? I guess I will go over my texture parameters one at a time tomorrow.
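For what it's worth, one thing worth ruling out on the host side (an assumption on my part, not a confirmed diagnosis of this bug): sampler uniforms default to texture unit 0, so if neither tx1 nor tx2 is explicitly assigned a unit, a sampler2D and a sampler3D end up referencing the same unit, which GLSL leaves undefined. That would only bite when both samplers are actually active in the compiled shader, which matches the "either one alone works, both together is blank" symptom. A minimal sketch of the setup, where prog, screenTex, and displaceTex are hypothetical names for the program and texture objects:

```c
/* Assumed setup code -- "prog", "screenTex", "displaceTex" are placeholders.
   Give each sampler uniform its own texture unit; uninitialized sampler
   uniforms all default to unit 0. */
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "tx1"), 0); /* tx1 -> texture unit 0 */
glUniform1i(glGetUniformLocation(prog, "tx2"), 1); /* tx2 -> texture unit 1 */

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, screenTex);   /* the 2D screen capture */
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, displaceTex); /* the 3D displacement data */
```

If the units are already assigned distinctly, this isn't it, and the texture-parameter audit is the next step anyway.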