This post is for discussion of, and solutions to, the hard logic puzzles in my previous post (Spoily puzzle solutions). I recommend reading that one first, especially if you want to think about the puzzles without spoilers.
This makes me want to look into other applications of index-XOR
Then perhaps you'd be interested to know that it's related to a standard construction of category theory. That per se doesn't mean it has practical applications, of course :-) but it might be relevant regardless.
A standard construction of category theory is called a "free functor", which turns a thing of type A into a thing of type B in the "most general" kind of way. This is a bit of a vague definition, and depending on A,B the details can go all sorts of strange ways (e.g. free groups), but the relevant example is the free functor that turns an arbitrary set S into a vector space V (over some pre-specified field 𝔽) whose basis corresponds to the original set: you say that an element of V is just any old way of assigning a nonzero coefficient in 𝔽 to some finite set of elements of S. (Put another way, V is the space of functions S→𝔽 with finite support.)
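The "functions S→𝔽 with finite support" view is easy to model concretely. Here's a minimal Python sketch (the names are mine, not from any library), representing a free-vector-space element as a dict mapping finitely many elements of S to nonzero coefficients:

```python
# An element of the free vector space over a set S is modelled as a dict
# {element of S: nonzero coefficient in F}. Absent keys have coefficient 0,
# so the dict is exactly a function S -> F with finite support.

def add_vectors(v, w, zero=0):
    """Add two finite-support functions S -> F, dropping zero coefficients."""
    result = dict(v)
    for s, coeff in w.items():
        total = result.get(s, zero) + coeff
        if total == zero:
            result.pop(s, None)   # keep the support finite and canonical
        else:
            result[s] = total
    return result

# Over the integers, with S containing 'a', 'b', 'c':
v = {'a': 1, 'b': 2}        # 1*a + 2*b
w = {'b': -2, 'c': 5}       # -2*b + 5*c
print(add_vectors(v, w))    # {'a': 1, 'c': 5}, i.e. 1*a + 5*c
```

Note that the elements of S are completely opaque here; the vector-space structure lives entirely in the coefficients, which is what "most general" means in practice.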
There's also a thing called a "forgetful functor", which turns a thing into a simpler kind of thing by ignoring some structure or property that it had. For example, a forgetful functor can turn a vector space back into a set by just forgetting that there's any kind of vector structure there, and reducing it to the plain set of its elements.
So one thing you could do with these functors would be to take a vector space V; forgetful-functor it down to a plain set; and then use the free functor to make a second vector space W out of it. When you do that, you find that W is a larger vector space which has a basis element for every single vector in V. Put another way, a vector in W describes a sort of "formal linear combination" of vectors of V, by specifying a particular finite set of V-vectors each with a weight.
These two functors are "adjoint" to each other, which is a fiddly category-theory concept related to monads (and hence to functional programming) which I don't really want to have to remember all the details of from the notes I read :-) But one consequence is that you can automatically derive a pair of linear maps between these vector spaces V and W, called the unit and the counit. The unit maps a V-vector to a W-vector in the most obvious and boring way, by taking the basis element of W it corresponds to. And the counit takes a vector in W, regarded as a formal linear combination of vectors in V, and evaluates it, to reduce it back to a single vector of V by actually adding up all the V-vectors it specifies, with the given weights.
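The unit and counit can be sketched in a few lines of Python, continuing the dict representation above. V-vectors are modelled as tuples and W-vectors as dicts mapping V-vectors to weights; the `add` and `scale` arguments stand in for V's vector-space operations (all names here are illustrative):

```python
# Unit and counit of the free/forgetful adjunction, sketched concretely.
# A W-vector is a dict {V-vector: weight}: a formal linear combination.

def unit(v):
    """Send a V-vector to the corresponding basis element of W."""
    return {v: 1}

def counit(w, add, scale):
    """Evaluate a formal linear combination: actually add up the V-vectors
    it specifies, each scaled by its weight."""
    total = None
    for v, weight in w.items():
        term = scale(weight, v)
        total = term if total is None else add(total, term)
    return total

# Example with V = Z^2 over the integers:
add = lambda x, y: (x[0] + y[0], x[1] + y[1])
scale = lambda c, v: (c * v[0], c * v[1])
w = {(1, 0): 2, (0, 1): 3}        # the formal combination 2*(1,0) + 3*(0,1)
print(counit(w, add, scale))      # (2, 3)
```

The point of the adjunction is that these two maps come for free, without any choices being made; the sketch just makes visible what they do.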
Why is all of this relevant to chessboards and index-XOR?
Because: suppose we take 𝔽 to be the finite field with 2 elements, aka ℤ/2ℤ or GF(2). And suppose we take V to be a vector space of dimension 6 over 𝔽, so that it has 2⁶ = 64 vectors, which is the same number as the squares on the chessboard in question #2.
Then W will be a vector space of dimension 64 over the same field, which means that it has 2⁶⁴ vectors. Each of those vectors assigns a weight in {0,1} to every vector of V - i.e. a vector of W corresponds to precisely an arrangement of coins on the chessboard.
And the effect of the counit function I describe above is to add together all the V-vectors (chessboard squares), each multiplied by their weight in the input W-vector. Since weights are all either 0 or 1, this is equivalent to saying "add up all the V-vectors which are assigned a weight 1 by the W-vector". And since addition, in a vector space over GF(2), is isomorphic to bitwise XOR, this is saying: XOR together the indices of all the squares whose coin says "heads".
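Specialised to GF(2), the counit collapses to something very short. A sketch in Python, with the board state as a list of 64 bits (1 = heads) and squares indexed 0–63; the function name is mine:

```python
# Over GF(2), scaling by a weight in {0,1} just keeps or drops a square,
# and vector addition is bitwise XOR of 6-bit indices. So the counit is:
# XOR together the indices of all squares showing heads.
from functools import reduce

def counit_gf2(coins):
    """XOR of the indices of all heads-up squares on the board."""
    return reduce(lambda acc, i: acc ^ i,
                  (i for i, c in enumerate(coins) if c), 0)

board = [0] * 64
board[5] = board[12] = board[9] = 1   # heads on squares 5, 12, 9
print(counit_gf2(board))              # 5 ^ 12 ^ 9 = 0
```

The all-tails board maps to 0, the empty sum, which is the zero vector of V, exactly as the counit requires.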
Cool! I know almost nothing about category theory except that Alex did his Part III on it and says it's very cool and like set theory but "more so". That, and your comment here, make me both intrigued and slightly intimidated.