(Prolegomena)
Let us ignore, for the moment, the horde of metaphysical questions banging at the gates of "possible" and the normative stormclouds looming ominously over "disorder", and focus instead on the enormous epistemic difficulties involved in making sound decisions under conditions of high risk and high ignorance. And ignorance we've got lots of: neuropharmacology didn't really come into its own until the 1970s, and for most of the psychoactive drugs in use today there simply aren't any big longitudinal studies to look at, because they haven't been in use long enough.
Reasoning based on mechanism will not avail us either, as a rule: people who work in drug discovery can tell you just how much half-blind groping and outright wild-ass guesswork is involved, and that a drug you construct based on a theory is more likely to cast aspersions on the theory than do what you're hoping it will. Even "gold standard" methods of analyzing (say) receptor structure like X-ray crystallography are riddled with interpretative problems, not the least of which is that being able to get an unambiguous structure (which is often not easy) in no way means being able to figure out how a drug is binding to it, since the crystallography process distorts the protein. Even if you manage that you're still not out of the woods, since how a protein-ligand system behaves in solution is frequently quite different from how it behaves in a crowded cell. Protein-interaction mapping is still cutting edge, and most interactions are still poorly understood. And even having a good theory for how one large molecule works often won't help you much in working on highly similar ones -- proteins with a 90% sequence overlap will often behave shockingly differently from one another. For much the same reasons, animal testing is never definitive.
Given that our understanding of human biochemistry in general is still mediocre at best, it's hardly surprising that this applies a fortiori in the narrower field of neurochemistry. Whole new transmission pathways are still occasionally discovered, and the workings of commonly used staples of psychiatry like antipsychotics are still very much up in the air. So are their consequences -- there's a plausible evidence-based argument to be made that these things are overprescribed and consequently doing more harm than good. Until fairly recently we really didn't know how SSRIs actually work, and some would argue we still don't know that they work better than placebo.
With medication in general and psychoactives in particular we're playing with fire while blindfolded, and it's kind of a miracle that the situation isn't worse than it is. Given this, the notion that non-pharmacological solutions should be given preference has rather more heft behind it than it might seem to at first. Combine this with the fact that cognitive-behavioral therapy actually does seem to be more effective in the long term for treating some conditions, and the choice for a cautious person starts to look like a no-brainer. Perhaps the strongest argument is that even where psychoactive drug treatments work, they can, at best, only mitigate problems -- many drugs eventually lose their effectiveness, which can sometimes leave a person worse off than when they started. And if we think long-term, a focus on treatment draws away investment that could be better spent on prevention -- on understanding and removing the causes of major disorders.
Preventative approaches also usually end up bringing more bang per buck. There's a small but growing body of empirical work, and some strong theory, suggesting infectious causation of common psychological syndromes, and it may be that (as with antibiotics for peptic ulcers and rubella vaccination preventing deafness) there's already a low-cost, well-understood way of preventing many cases. While treatment certainly funds a lot of sexy new research, if we're going to be utilitarian about it then low-tech prevention is sorely underrated.