Large Hadron Collider

Sep 10, 2008 10:20

Imagine that a group of explorers spot another group of people playing a board game (think checkers/draughts) using some stones to represent their pieces...

trufflesniffer September 10 2008, 18:21:29 UTC
Greater understanding of the building blocks of matter gives us more accurate theories, and with more accurate theories, we can actually do applied science.

I think it's this idea I don't necessarily agree with. Understanding micro-scale (even elemental) properties doesn't necessarily provide any useful insight into the behaviour of macro-scale phenomena, even when the latter is composed entirely of (emerges from) the former.
We are macro-scale phenomena. Everything we can intuitively understand and care about is macro-scale phenomena.
Think, for instance, of Conway's cellular automaton 'Life' (which Daniel Dennett seems to talk about endlessly). The micro-level rules governing whether a cell switches on or off don't intuitively or meaningfully indicate what sorts of macro-level entities ('eaters', 'gliders', etc.) will emerge from the consistent application of those rules. At the macro level, qualitatively distinct predictive and explanatory theories have to be developed and applied in order to predict and negotiate these emergent entities, without the excessive cognitive and computational burden of deducing everything from first principles.
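The point about Life is easy to make concrete: the complete micro-level rule set fits in a few lines, yet nothing in it mentions a 'glider'. A minimal Python sketch (coordinate convention and pattern orientation are my own choices):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life over a set of live (x, y) cells."""
    # Count how many live neighbours each cell (live or dead) has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire micro-level rule set: a cell is live next generation iff
    # it has 3 live neighbours, or has 2 and is currently live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider. Nothing in the rules above mentions gliders, yet after
# 4 steps this 5-cell pattern reappears shifted diagonally by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)
print(pattern == {(x + 1, y + 1) for x, y in glider})  # True
```

That a certain 5-cell pattern translates itself across the grid is a macro-level fact you discover by observation; the rules alone don't advertise it.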

You can't build a nuclear power plant without a good working knowledge of just exactly how an atom works.
But it seems it is possible to develop, for example, various medical technologies that 'work' (e.g. penicillin) without a clear idea of how and why they work. (If the effectiveness of a drug could be deduced from first principles, for instance, there'd be no need for randomised controlled trials.) 'Stuff' that happens closer to our scale of existence tends to be so irreducibly complex that bottom-up knowledge is all but useless.

archdukechocula September 11 2008, 15:24:56 UTC
But it seems it is possible to develop, for example, various medical technologies that 'work' (e.g. penicillin) without a clear idea of how and why they work. (If the effectiveness of a drug could be deduced from first principles, for instance, there'd be no need for randomised controlled trials.) 'Stuff' that happens closer to our scale of existence tends to be so irreducibly complex that bottom-up knowledge is all but useless.

Sure, because they work on two entirely different principles. Nuclear reactions require exacting manipulations at an incredibly small scale, timed with incredible precision, all of which was only determined through elaborate experimentation at a very small scale, and which I am highly confident could never have been achieved without that precise knowledge and those means.

Much of pharmaceutical work, however, is based, in essence, on trial and error, which works only because we are seeking a very general solution to a problem. Most such problems are akin to wanting to kill five guys in a town by poisoning its river. If you have those kinds of loose objectives, trial and error is a perfectly legitimate method.

But you can't just stumble upon nuclear power as a means to an end without first being aware of very small things and their properties. There aren't nuclear reactions in the natural environment on Earth except in exceedingly rare circumstances (the natural fission reactors at Oklo in Gabon, which ran about two billion years ago, are the only known case), none of which are remotely comparable to useful fission. Recreating a nuclear reaction through random experimentation is pretty much impossible. Creating useful drugs is possible because, really, the reactions of something like penicillin are fairly simple and easily replicable. Nothing about penicillin resembles the Manhattan Project: a multi-billion-dollar project composed of super-specialized parts, designed to manipulate something extremely small and produce a reaction never before seen on Earth, was only possible because we found that these invisible particles existed, discovered their properties, and developed testable theories around that knowledge. It produced a usable thing on the macro scale because of micro-scale knowledge.

Another good example is DNA, which has led to genetic engineering of crops and has even yielded targeted gene therapy.

trufflesniffer September 11 2008, 19:42:57 UTC
I think here you're arguing for the same kind of clear ontological distinction - between micro-level and macro-level 'stuff' - that I'm arguing for. My point, however, is that once one accepts this kind of distinction, the relevance and usefulness of micro-level knowledge to our macro-level existence can't simply be assumed, but must be justified. My main objection to the kinds of arguments and soundbites that seem to be put forward in support of the LHC is that they tend not to acknowledge that this ontological distinction exists, and simply assume that all 'real' knowledge must be 'bottom-up' (deductive rather than inductive or abductive).

I think that:
  1. The kinds of physical structures that people manage to utilise through trial-and-error tend to be much more complex and intricate than those that we utilise through bottom-up logical-deductive reasoning;
  2. Trial-and-error has created what I presume is the most powerful nuclear reactor the human race will ever know: The Sun. Since the emergence of life, completely mindless creatures have been effectively utilising nuclear power without even the slightest knowledge of subatomic particles. People make use of nuclear power very effectively every time they eat plants or animals that have eaten plants.

Regarding DNA: remember that the term and concept of 'natural selection' was created in contrast to, and based upon, the farming practice of selective breeding. People have been genetically engineering crops and animals for millennia, without knowledge even of evolution, let alone DNA.

archdukechocula September 11 2008, 20:28:14 UTC
The kinds of physical structures that people manage to utilise through trial-and-error tend to be much more complex and intricate than those that we utilise through bottom-up logical-deductive reasoning.

Well, I'm not sure I agree with that, unless you include biological structures or composites, such as a city or some other abstract conglomerate, or just count the sheer volume of trial-and-error products. Even supposing they are more complex, complexity in itself really isn't a virtue unless there is a specific justification for it, so I'm not sure what the point of that is.

Regarding DNA: remember that the term and concept of 'natural selection' was created in contrast to, and based upon, the farming practice of selective breeding. People have been genetically engineering crops and animals for millennia, without knowledge even of evolution, let alone DNA.

Right, but there is an inherent limit to such a thing, in the form of time and structure. You cannot radically re-engineer life via artificial selection, even over thousands of years. You can radically re-engineer life once you have a solid grasp of genetics, because you can create otherwise impossible scenarios, such as adding full-blown genes from entirely unrelated species.

Trial-and-error has created what I presume is the most powerful nuclear reactor the human race will ever know: The Sun.

Well, obviously, but that's why I was pretty explicit in mentioning the whole "on Earth" bit. Also, I don't think we are anywhere near creating a Sun or any similar sort of thing through trial and error, yet we have managed to harness the power of the atom through deductive reasoning.

Since the emergence of life, completely mindless creatures have been effectively utilising nuclear power without even the slightest knowledge of subatomic particles. People make use of nuclear power very effectively every time they eat plants or animals that have eaten plants.

That is a bit of a rhetorical stretch. I was, of course, referring to nuclear fission specifically, and perhaps one day nuclear fusion. Sure, plants benefit from nuclear power in an indirect manner, but that is not remotely comparable. It is actually a terribly inefficient way of converting nuclear energy into useful power, as is solar energy.

A nuclear power plant is radically more efficient at producing power, in the sense of localized order, when compared to a plant, which in essence relies on a giant entropy-spewing furnace that delivers an insignificant amount of useful energy compared to the fuel expended. It's akin to comparing a wood fire to a jet engine, except the gap is substantially larger. Energy use is all about creating local order (lowered entropy within a fixed system) at the cost of increased disorder, generally in the form of waste heat, in the universe at large. Plants are the ultimate in inefficiency by that standard, relying on a system that produces incredible amounts of waste heat for a ridiculously small energy payout. We are way better at doing that than plants are, and at the moment nuclear power is leaps and bounds better than any other existing technology. It's like saying the homeless guy living in a dumpster by my house has a heating system comparable to mine because he can lean against my wall. It's a pretty absurd comparison.
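The size of that gap can be put on a rough quantitative footing by comparing energy densities. A back-of-envelope sketch using standard textbook values (~200 MeV released per U-235 fission, ~15.6 kJ per gram of glucose when fully oxidised); the figures are approximations, not precise measurements:

```python
# Energy density comparison: complete fission of U-235 versus
# metabolising glucose, in joules per kilogram of fuel.

MEV_TO_J = 1.602e-13   # joules per MeV
AVOGADRO = 6.022e23    # atoms per mole

# ~200 MeV per fission event; U-235 molar mass ~235 g/mol,
# so (1000 / 235) moles per kilogram.
fission_j_per_kg = 200 * MEV_TO_J * AVOGADRO * (1000 / 235)

# Glucose yields roughly 15.6 kJ per gram, i.e. 1.56e7 J per kilogram.
glucose_j_per_kg = 15.6e3 * 1000

ratio = fission_j_per_kg / glucose_j_per_kg
print(f"fission: {fission_j_per_kg:.1e} J/kg")  # ~8.2e13 J/kg
print(f"glucose: {glucose_j_per_kg:.1e} J/kg")  # ~1.6e7 J/kg
print(f"ratio:   {ratio:.1e}")                  # on the order of 5e6
```

Per kilogram of fuel, fission releases on the order of a few million times more energy than metabolism, which is the sense in which the wood-fire-versus-jet-engine analogy understates the difference.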

archdukechocula September 11 2008, 20:31:21 UTC
The only thing I will say about the plant analogy is that, if you think of entropy in terms of information retention, then yeah, plants and life in general are actually fantastically good at that. After all, DNA is passed on with very few replication errors over millions of generations, and DNA is ultimately a power system, in that it creates localized order at the expense of wider entropy. But I think you will agree that this discussion is getting a bit off the rails.

trufflesniffer September 12 2008, 07:43:36 UTC
...unless you include biological structures or composites, such as a City or other abstract conglomerate, or just the sheer volume of trial and error products.
Both.
...complexity in itself really isn't a virtue
Yes: but it is the justification for the position that purely deductive, bottom-up explanations of macro-scale phenomena aren't necessarily the best explanations of those phenomena.

You cannot radically re-engineer life via artificial selection, even over thousands of years.
Disagree here, though perhaps the disagreement pivots on different interpretations of the word 'radically'. I think chihuahuas are radically different to wolves, for example, and domestic cattle are radically unlike anything that does, or could, exist in the wild (not least because milk cows are bred to have udders so huge they would be incapable of escaping predators in a hostile environment, and are thus completely dependent on humans for survival).

archdukechocula September 12 2008, 15:31:28 UTC
Yes: but it is the justification for taking the position that purely deductive, bottom-up explanations for macro-scale phenomena aren't necessarily the best explanations for the phenomena.

It really depends. Getting to the essence of a thing can be just as useful as having an overarching view. Understanding and reducing a problem to a point that can be very clearly articulated and understood makes the manipulation of that thing much more accurate, and the outcome way more predictable. Macro scale stuff, not operating on those kinds of deductive principles, relies on getting something that works without really caring much about why or how.

Ultimately, I think the difference is more tautological than anything. In either case, working from the ground up will ultimately get you the same result as working from the top down. You need constant revision in either case to get something that is both reliable and convenient to use. The problems presented in each case are simply different. Both are ultimately aiming for the same thing, and I think there are plenty of examples of each providing useful things for humanity, so I really fail to see why this distinction is so relevant. Perhaps if you did a cost-benefit analysis of each type of technology that accurately catalogued the financial, human and environmental costs of various technologies, your argument would seem more compelling to me, but without some sort of data to support the assertion, it just strikes me as a vague claim. As a question, there is something compelling there, but I don't think there is any way to answer it through repetitious syllogisms.

Disagree here, though perhaps the disagreement pivots on different interpretations of the word 'radically'. I think chihuahuas are radically different to wolves, for example, and domestic cattle are radically unlike anything that does, or could, exist in the wild (not least because milk cows are bred to have udders so huge they would be incapable of escaping predators in a hostile environment, and are thus completely dependent on humans for survival).

I would call almost all of that fairly superficial. Radical would be more like skipping over the need to stick with a given skeletal system. All these examples do is greatly exaggerate an existing trait. To me, it's like the difference between highly developed rifles - which, while superficially different today, are not that radically different from rifles of even 150 years ago - and a laser or a nuclear weapon; compared to those, such developments seem superficial. There is no radical leap in design and function that produces something that opens a whole new world of possibility, just continuous refinement of an existing mechanism that mostly produces change through scale. Each is relevant in history, but approached the problem from a different angle. Both had different challenges, but both also worked fine in the end.
