I am always amazed when people fail to distinguish
between constants and variables.
I am all the more amazed when the victims of such confusion
are the otherwise brilliant implementers of programming languages.
You'd think that if anyone knows the difference
between a variable and a constant,
it would be a programming language implementer.
For instance, a CLISP maintainer
explicitly based his argument for
making some backdoor compulsory
on the belief that the behavior of
his hypothetical source-closing adversary
will remain the very same after the backdoor is created.
But what is constant here is the hypothetical adversarial will
of said antagonist, not his behavior;
the known backdoor will be trivially circumvented by this adversary,
and will only remain a constant hassle and security hazard
to all the friends.
In another instance, the respected creator of Python
argued against proper tail calls because they
allegedly lose debugging information
as compared to recursion without tail call elimination.
But as said hacker implicitly acknowledges
without making the explicit mental connection,
in programs for which proper tail calls matter,
the choice is conspicuously not between
proper tail calls and improper tail calls;
it is a choice between
proper tail calls and explicit central stateful loops.
And the absence of debugging information
is precisely what remains constant
when you transform tail calls into stateful loops:
stateful loops make it strictly harder to get debugging information,
whereas proper tail calls can easily be disabled, wrapped or traced
(trivially so if you have macros).
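To make this concrete, here is a minimal sketch in OCaml
(chosen because it guarantees proper tail calls; the essay names
no particular language, and countdown is an invented example).
Tracing a tail call, or disabling it to keep stack frames around,
is a local one-line edit:

    (* Production version: the self-call is in tail position,
       so it runs in constant stack space. *)
    let rec countdown n =
      if n = 0 then () else countdown (n - 1)

    (* Traced: a one-line local edit; still a proper tail call. *)
    let rec countdown_traced n =
      if n = 0 then ()
      else (Printf.printf "countdown %d\n" n; countdown_traced (n - 1))

    (* "Disabled": any expression after the call takes it out of tail
       position, so each call keeps its stack frame for the debugger. *)
    let rec countdown_framed n =
      if n = 0 then () else (countdown_framed (n - 1); ())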
In addition, state introduces a lot of problems
because of the exponential explosion of potential interactions
to take into account.
But more importantly,
proper tail calls allow for dynamic decentralized specification
of a program in loosely coupled separate modules by independent people,
whereas loops force the static centralized specification of the same program
by one team of programmers
in one huge conceptual gob requiring tight coupling.
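For instance, with proper tail calls each state of a process can be
its own function that hands control to the next state by a tail call,
so new states can be contributed from behind any module boundary;
the loop version instead needs one central dispatch that every
extension must edit. A sketch, again in OCaml, with invented names:

    (* Each state is a function; transferring control is a tail call,
       so the whole machine runs in constant stack space. A new state
       is just one more function, not a new case in a central loop. *)
    let rec awaiting_header input =
      match input () with
      | "HDR" -> awaiting_body input    (* hand off to the next state *)
      | _     -> failed "expected header"
    and awaiting_body input =
      match input () with
      | "END" -> ()
      | _     -> awaiting_body input    (* stay in this state *)
    and failed msg = prerr_endline msg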
Finally, loops are trivially macro-expressible in terms of tail calls
(i.e. through local transformations),
whereas transforming arbitrary programs that require tail calls
into programs using loops demands a global transformation;
and if we allow for such global transformations,
then who needs Python?
INTERCAL is the greatestest language ever designed.
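The local transformation is indeed mechanical, as this sketch shows
(sum_to is an invented example): the mutable loop variables become
arguments, the loop test becomes the base case, and the jump back to
the top of the loop becomes the tail call. The reverse direction is
global because an arbitrary tail call may pass control to any
function whatsoever, not just back to an enclosing loop head.

    (* The imperative loop... *)
    let sum_to n =
      let total = ref 0 and i = ref 1 in
      while !i <= n do
        total := !total + !i;
        incr i
      done;
      !total

    (* ...expressed with a tail call: loop state becomes arguments,
       the test becomes the base case, the back-edge becomes the call. *)
    let sum_to' n =
      let rec loop total i =
        if i > n then total
        else loop (total + i) (i + 1)
      in
      loop 0 1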
Brilliant operating system designers have argued that
microkernels
can simplify software development because factoring an operating system
into chunks that are isolated at runtime allows each component to be made simpler.
But the interesting constant when you choose between ways to factor your system
and compare the resulting complexity is not the number of components,
but the overall functionality that the system does or doesn't provide.
Given the desired functionality, run-time isolation vastly increases
the programmer-time and run-time complexity of the overall system
by introducing context switches and marshalling between chunks,
for functionality that is equivalent across the two factorings.
Compile-time modularity solves the problem better;
given an expressive enough static type system,
it can provide much finer-grained robustness than run-time isolation,
without any of the run-time or programmer-time cost.
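For a taste of what such compile-time isolation looks like,
here is a sketch using OCaml module signatures (one possible such
static discipline; the Account module is invented): the signature is
the isolation boundary, so clients can only touch the state through
the functions it exports, yet every call across the boundary is an
ordinary function call, with no context switch and no marshalling.

    module Account : sig
      type t                         (* abstract: representation hidden *)
      val make : unit -> t
      val deposit : t -> int -> t
      val balance : t -> int
    end = struct
      type t = int                   (* invariant: never negative *)
      let make () = 0
      let deposit t amount =
        if amount < 0 then invalid_arg "deposit" else t + amount
      let balance t = t
    end

    (* let a = Account.(deposit (make ()) 42) compiles;
       forging a balance, e.g. (-1 : Account.t), is a type error.
       A microkernel would enforce the same boundary with a separate
       server process and IPC on every call. *)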
And even without such a type system,
the simplicity of the design allows for far fewer bugs,
whereas the absence of communication barriers allows
for higher-performance strategies.
Hence the HURD is an undebuggable joke, whereas Linux is a fast, robust system.
In all these cases, the software designer erects some kind of hurdle
that doesn't help honest people;
the only beneficiaries are the specialists who get job security
at handling the vast increase in gratuitous complexity.
These people, though very intelligent, fall for an
accounting fallacy.
They take a myopic look at the local effect of one alternative
on some small detached parts of the system
where they can indulge in some one-sided accounting
whereby the alternative they like has benefits at no cost,
whereas the other one has costs at no benefit.
And they neglect to consider the costs and benefits
in the parts of the system outside their direct focus,
even though those parts are necessarily changed
by the switch between alternatives, so as to preserve
the actual overall constants that make the choice meaningful.
It is possibly in the individual interest of these experts
to promote labor-intensive solutions where
their expertise is the development bottleneck.
Conscious dishonesty isn't even necessary
when the rational incentive is
for the experts to ignore the real costs of their choices,
because they don't bear these costs.
And so ultimately, the laugh is on the users
who follow the advice of these experts.
In a choice between proposed alternatives,
what needs to be evaluated is the economic cost of each alternative,
i.e. its relative cost to other alternatives
with respect to the overall system.
And before you may even evaluate this cost,
you must determine what is constant and what varies
when you make the choice.
Woe betide software designers who confuse constants and variables!