The defining characteristic of a complex system is that some of its global behaviours, which are the result of interactions between a large number of relatively simple parts, cannot be predicted simply from the rules of those underlying interactions. (src)
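This is easiest to see in a toy model. The sketch below is a minimal elementary cellular automaton (Rule 110), written in Python purely as an illustration I've chosen, not something from the cited source: each cell follows a trivially simple local rule involving only itself and its two neighbours, yet the global pattern that unfolds is famously hard to predict without actually running it (Rule 110 is even known to be Turing-complete).

```python
# Toy illustration of emergence: Rule 110, an elementary cellular automaton.
# Each cell looks only at itself and its two neighbours, yet the global
# pattern is complex enough that it cannot be read off from the rule itself.

RULE = 110  # encodes the next state for each of the 8 possible neighbourhoods


def step(cells):
    """Apply the local rule once to every cell (wrapping at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


def run(width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single "on" cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)


if __name__ == "__main__":
    run()
```

Nothing in the three-cell lookup table hints at the gliders and interacting structures that appear in the printout; you only find out by letting the system run.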
One of the reasons I find this topic interesting (although perhaps naively so) is that it seems it could transcend the usual "bleak fatalistic determinism" that philosophy students joke about, without needing to jump to the extreme conclusion of the kind of "free will" most people assume they have. I should note that I don't accept either of these extremes, although I think fatalistic determinism is far more plausible than any kind of utterly unbounded free will, as Sartre would have you believe. Not that there would be anything you could do about it if either extreme turned out to be true, so the question is essentially moot in practice. Not that it isn't fun to muse about.
However, I think there is a kind of compromise between the two positions. The human brain is a machine specifically designed to gather data, imagine choices, make decisions, and direct the organism... but it is not perfect at this task and can break. It is particularly interesting to look at case reports of patients with frontal lobe dysfunction, especially those with environmental dependency syndrome and its variants. These cases suggest to me that "free will" is just the opposite of how it is usually conceived: one is only free inasmuch as one can control one's reactions and replace them with intelligently measured decisions. I think that most people (myself included, although I try to avoid habits and schedules as much as possible) exercise this only rarely, operating on "autopilot" most of the time, acting as they usually do and following their usual schedule. Only a minimal amount of control is needed to let the pre-programmed actions execute at the correct time while inhibiting the inappropriate ones.
This whole idea reeks of the scene from Dune where Paul has to put his hand in the Box and withstand the pain (by nerve induction) without withdrawing. He does this to "prove he's human." Although I do question the method, since he WAS under threat of death if he withdrew; if he understood that pulling his hand out meant dying, he didn't really need to overcome his instincts at all.
But anyway, I don't see why self-directed will couldn't be an emergent property of the human brain. After all, you're never truly free, since your choices are always limited by your knowledge and intelligence: in any situation the options available to you are finite, because you can't know everything and you don't have forever to think of novel things to do. This concept of will is far more naturalistic than the usual misconception of "free will," and it allows for a discernible connection between self-directed action and the neurological processes that produce it.