Does anyone have a deep and possibly unhealthy knowledge of the Windows 2008 TCP/IP implementation?
Vista and Server 2008 apparently use an RFC 3484-like process for selecting the default outbound address on an interface with multiple IP addresses in the same subnet: they compare each address against the default gateway, find the one with the longest matching bit prefix, and select that.
e.g. a host with three addresses:
.67 (binary 01000011)
.70 (binary 01000110)
.73 (binary 01001001)
in a /28 where the gateway is set to .78 (binary 01001110)
In this case, the default source address in Windows 2008 (unless your application explicitly binds to a source address) is going to be .73, as 01001001 is the closest match to the gateway's 01001110.
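To check my understanding of the comparison, here's a quick Python sketch of the longest-matching-prefix test I think is happening; the 192.0.2.x addresses are just placeholders standing in for my real subnet, and common_prefix_len is my own helper, not anything Windows exposes:

```python
import ipaddress

def common_prefix_len(a, b):
    """Number of leading bits two IPv4 addresses have in common."""
    xor = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - xor.bit_length()

# 192.0.2.64/28 stands in for my real subnet; only the last octet matters here.
gateway = "192.0.2.78"
for addr in ("192.0.2.67", "192.0.2.70", "192.0.2.73"):
    print(addr, common_prefix_len(addr, gateway))
# .67 -> 28, .70 -> 28, .73 -> 29: .73 shares the most leading bits with
# the gateway, so it gets picked as the default source address.
```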
However... if you remove the .73 address, you end up with two addresses that are equally non-matching, at least as far as the 29th bit goes: the gateway has a 1 there and they both have a 0.
You might expect the 30th bit to be examined to break the tie between the two remaining addresses, in which case .70 would become the default choice in the absence of .73 ...
Nuh-uh - it seems to be selecting .67 as the default outbound address, and I'm not entirely sure why.
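For what it's worth, repeating the comparison with .73 removed does confirm the tie (same placeholder 192.0.2.x addresses as above; this is only a sanity check on my arithmetic, not a claim about what Windows actually runs):

```python
import ipaddress

gw = int(ipaddress.IPv4Address("192.0.2.78"))
for addr in ("192.0.2.67", "192.0.2.70"):
    xor = int(ipaddress.IPv4Address(addr)) ^ gw
    print(addr, 32 - xor.bit_length())  # leading bits shared with the gateway
# Both print 28, so longest-match alone can't be what breaks the tie.
```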
Anyone got any bright ideas? All suggestions gratefully received.