Intelligent Design. It isn't Creationism but it is true. Big Science. It isn't science but it is everywhere.
One big problem with Darwinism is that it precludes both "intelligent" and "design". Darwinists give credit to *poof* - random chance occurrences - for the existence of everything, including information, life, and every force and process you can name. Frankly, Darwinists might as well put on Merlin hats and carry wands around, for all the creative forces they can name and for all the macroevolution they can demonstrate - which would be the null set.
~~~~~~~~~
The Kinesin Motor: A Stunning Example of Cellular Nanotechnology
One of the most amazing examples of cellular nanotechnology is a molecular motor protein known as kinesin. Kinesin is responsible for transporting molecular cargo -- including chromosomes (e.g. during cell division), neurotransmitters and other important material -- along microtubule tracks from one region of the cell to another. It is driven by ATP hydrolysis, thereby converting chemical energy into mechanical energy which it can use for movement. A kinesin molecule typically possesses two tails on one end, which attach to the cargo, in addition to two globular heads (often called "motor domains") on the other end. Some readers may recognize this elegant protein from the now-famous Harvard animation, Inner Life of the Cell (time 1:59).
The sheer number of processes needed to be undertaken by such a motor protein makes the appearance of intelligent design seem almost beyond rational denial. Of course, many people resist this conclusion despite the evidence. As one Science Daily article in October 2010 put it,
"Our results show that a molecular motor must take on a large number of functions over and above simple transport, if it wants to operate successfully in a cell," says Professor Matthias Rief from the Physics Department of the TU Muenchen. It must be possible to switch the motor on and off, and it must be able to accept a load needed at a specific location and hand it over at the destination. "It is impressive how nature manages to combine all of these functions in one molecule," Rief says. "In this respect it is still far superior to all the efforts of modern nanotechnology and serves as a great example to us all." [emphasis added]One of my favorite descriptions of the workings of this nano-motor is in the following animation:
Kinesin proteins characteristically march towards the plus end of the microtubule (that is, towards the periphery regions of the cell), while a similar motor protein (called dynein) walks in the direction of the minus end (i.e. towards the centrosome near the nucleus). As a result of this singular and opposite directionality of the motor proteins, materials can be moved either toward or away from the cell's center.
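To make that directionality concrete, here is a minimal illustrative sketch (my own toy model in Python, with made-up numbers, not something taken from the article): the microtubule is treated as a one-dimensional track of binding sites, a kinesin-like walker steps toward the plus end, a dynein-like walker steps toward the minus end, and each step is assumed to cost one ATP.

```python
# Toy model only: a microtubule as a 1-D track of binding sites, with a
# kinesin-like walker stepping toward the plus end and a dynein-like walker
# stepping toward the minus end. Each ~8 nm step is assumed to cost one ATP.

TRACK_LENGTH = 100      # binding sites along the microtubule (toy value)
STEP_NM = 8             # approximate kinesin step size in nanometres

class MotorProtein:
    def __init__(self, name, direction, position):
        self.name = name
        self.direction = direction      # +1 toward plus end, -1 toward minus end
        self.position = position        # current binding site index
        self.atp_used = 0

    def step(self):
        """Hydrolyse one ATP and advance one binding site, if room remains."""
        new_pos = self.position + self.direction
        if 0 <= new_pos < TRACK_LENGTH:
            self.position = new_pos
            self.atp_used += 1

kinesin = MotorProtein("kinesin", direction=+1, position=10)   # toward cell periphery
dynein = MotorProtein("dynein", direction=-1, position=90)     # toward centrosome

for _ in range(50):
    kinesin.step()
    dynein.step()

for motor in (kinesin, dynein):
    print(f"{motor.name}: site {motor.position}, "
          f"~{motor.atp_used * STEP_NM} nm travelled, {motor.atp_used} ATP used")
```

Run as written, the two walkers simply pass each other moving in opposite directions, which is the point of the plus-end/minus-end division of labour described above.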
Previous research by Mallik and Gross (2004) has revealed that, though kinesin is simple and efficient (in contrast to its counterpart dynein, which is structurally complex and cumbersome), its simplicity limits the cell's ability to regulate its activity: kinesin can only exist in two functional states, either fully active or fully inactive. Conversely, the structural complexity of dynein provides the cell with the capacity to dynamically regulate its activity, literally shifting gears in response to the load, and regulating its speed in response to the cell's needs. The trade-off between efficiency and regulative capacity allows dynein and kinesin to work together in a unique way. This phenomenon seems, at least on its face, to be better explained as a product of design than as that of a purely material-driven evolutionary process devoid of foresight.
Mallik and Gross themselves write,
We propose that kinesin and myosin are robust and highly efficient transporters, but with somewhat limited room for regulation of function. Because cytoplasmic dynein is less efficient and robust, to achieve function comparable to the other motors it requires a number of accessory proteins as well as multiple dyneins functioning together. This necessity for additional factors, as well as dynein's inherent complexity, in principle allows for greatly increased control of function by taking the factors away either singly or in combination. Thus, dynein's contribution relative to the other motors can be dynamically tuned, allowing the motors to function together differently in a variety of situations.

In addition, Hammond et al. (2010) discuss how autoinhibition is caused, during the inactive state when the motor is devoid of cargo, by a folded conformation that enables nonmotor regions to directly contact and inhibit the enzymatic activity of the motor domain. The C-terminal tail interferes with microtubule binding and the coiled-coil segment blocks the motor's processive motility.
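A toy way to picture this trade-off (again my own sketch, with invented numbers rather than measured values): the kinesin-like motor has only two functional states, fully on or fully off, while the dynein-like motor "shifts gears", slowing continuously as the load on it rises.

```python
# Illustrative only: kinesin as a binary on/off motor, dynein as a motor whose
# speed is tuned continuously downward as the load increases. All numbers are
# made up for the sake of the example.

def kinesin_speed(has_cargo: bool) -> float:
    """Two functional states only: fully active or fully inactive."""
    return 800.0 if has_cargo else 0.0           # nm/s when active (toy value)

def dynein_speed(load_pn: float) -> float:
    """Speed falls off smoothly as load (in piconewtons) rises."""
    max_speed = 500.0                            # nm/s unloaded (toy value)
    stall_load = 7.0                             # load at which the motor stalls
    return max(0.0, max_speed * (1.0 - load_pn / stall_load))

print("kinesin, no cargo :", kinesin_speed(False), "nm/s")
print("kinesin, cargo    :", kinesin_speed(True), "nm/s")
for load in (0.0, 2.0, 4.0, 6.0):
    print(f"dynein, {load:.0f} pN load :", round(dynein_speed(load), 1), "nm/s")
```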
An interesting research paper has just appeared in the journal Science (Kaan et al. 2011), elaborating on how this molecular motor has the capability to enter into an "energy saving mode" when not in use!
You can also read the Science Daily report here.
The research offers one possible means by which the kinesin motor is able to conserve energy when not in use: That is, the ability to fold in upon itself in order to prevent the loss of ATP.
The Science Daily report notes:
Kinesin's heads are typically joined together at one spot, called the hinge. In the new structure, the heads swing in toward each other and are bridged by the tail domain, effectively cross-linking the heads at the site of tail binding. This double lockdown -- at the hinge and at the bridge -- prevents the heads from separating. Because the heads need to be separate from each other to break down ATP, the double lockdown effectively stops the molecule from generating fuel to power the motor.

So there you have it: This molecular motor protein elegantly enters into an "energy saving" mode when it isn't in use, thus ensuring very high energy consumption efficiency. If this doesn't constitute positive evidence for design, I don't know what does!
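As a rough back-of-the-envelope illustration of what that "energy saving mode" buys (my own sketch; the turnover rate is an assumption, not a figure from Kaan et al.): compare the ATP burned over an idle minute by a motor that keeps hydrolysing ATP with one that folds into the autoinhibited conformation and stops hydrolysis altogether.

```python
# Back-of-the-envelope sketch with assumed numbers: ATP consumed over one idle
# minute by a motor that keeps turning over ATP versus one that folds into the
# autoinhibited state, where the cross-linked heads cannot hydrolyse ATP.

IDLE_SECONDS = 60
ATP_PER_SECOND_ACTIVE = 80        # toy turnover rate while the heads are free

def atp_consumed(idle_seconds: int, can_autoinhibit: bool) -> int:
    """ATP burned while carrying no cargo."""
    if can_autoinhibit:
        return 0                  # folded state: heads cross-linked, no hydrolysis
    return idle_seconds * ATP_PER_SECOND_ACTIVE

print("without autoinhibition:", atp_consumed(IDLE_SECONDS, False), "ATP wasted")
print("with autoinhibition   :", atp_consumed(IDLE_SECONDS, True), "ATP wasted")
```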
~~~~~~~~~
Even a scientist who would be considered hostile to Creationism is wondering at the new world order in science, in which one has to whistle the same tune as everyone else in order to investigate at all. The freedom to investigate is no longer rewarded. I will intersperse my comments between the paragraphs that follow:
Science’s dead end
21st July 2010 — Issue 173

Never has so much money poured into scientific research—yet the results add up to surprisingly little. Have we finally come to the end of what science can tell us?
For science this is both the best and the worst of times. The best because its research institutions have never been so impressive, its funding never more lavish. This is the era of Big Science, the financing of whose mega projects is now routinely measured in tens or hundreds of millions of dollars. While the total science research budget for the US just prior to the second world war ran to only $230m, by 1998 that figure had leapt several orders of magnitude. Biomedical research alone received $62bn and over the last ten years that figure has almost doubled again, soaring past the hundred billion dollar mark and dwarfing the GDP of a dozen countries. During this period, capital investment for new research facilities tripled to $15bn.
This endeavour is immensely productive, generating a tidal wave of research papers in scientific journals, whose thick shiny volumes occupy a greater acreage of library space every year. In 1980 a year’s worth of the Journal of Biological Chemistry (to take one example) already ran to a daunting 12,000 pages. By last year its size had grown eightfold to 97,000 pages or 25m-odd words, filling an entire library shelf. And this is just one of hundreds of scientific and medical journals. Put them all together, and it is possible to glimpse the scale of the explosion in new knowledge in the recent past.
So the best of times—but also the worst. Pose the question, What does it all add up to? and the answer, on reflection, seems surprisingly little—certainly compared to a century ago, when funding was an infinitesimal fraction of what it has become. In the first decade of the 20th century, Max Planck’s quantum and Einstein’s special theory of relativity would together rewrite the laws of physics; Ernest Rutherford described the structure of the atom and discovered gamma radiation; William Bateson rediscovered Mendel’s laws of genetic inheritance; and neurophysiologist Charles Sherrington described the “integrative action” of the brain and nervous system. The revolutionary significance of these and other discoveries was recognised at the time, but they also opened the door to many scientific advances over succeeding decades.
By contrast, the comparable landmarks of the recent past have been rather disappointing. The cloning of a sheep generated much excitement but Dolly is now a stuffed exhibit in a Scottish museum and we are none the wiser for the subsequent cloning of dogs, cats and cows. It will no doubt be a similar story with Craig Venter’s recent creation of “artificial life.” Fabricating a basic toolkit of genes and inserting them into a bacterium—at a cost of $40m and ten years’ work—was technologically ingenious, but the result does less than what the simplest forms of life have been doing for free and in a matter of seconds for the past three billion years.
The practical applications of the massive commitment to genetic research, too, are scarcely detectable. The biotechnology business promised to transform both medicine and agriculture—but in the words of Arthur Levinson, chief executive of the pioneering biotechnology company Genentech, it has turned out to be “one of the biggest money-losing industries in the history of mankind.” There are promises that given 30, 40 or even 100 years all will become clear, that stem cell therapy will permit the blind to see and the lame to walk and we will have a theory of everything—or, as Stephen Hawking puts it, “know the mind of God.” But they remain promises.
Sadly, quoting Hawking using the word "God" is misleading, as he used to pretend to a minimal commitment to the existence of God in order to become a popular writer. Once he had established himself as a cultural figure he was free to drop the mask of belief altogether. In any event, the next paragraph will reveal a large part of the problem: error cascade. If supposedly intelligent people assume without question that various presumptions are true, they can begin from an entirely mistaken starting point on the big issues of physics, biology, chemistry and so on.
More than a decade ago, John Horgan, a staff writer for Scientific American, proposed an explanation for the apparent inverse relationship between the current scale of research funding and scientific progress. The very success of science in the past, he argued in his book The End of Science (1996), radically constrains its prospects for the future. We live “in an era of diminishing returns.” Put simply, the last 60 years have witnessed a series of scientific discoveries that taken together rank among the greatest of all intellectual achievements, in permitting us for the first time to hold in our mind’s eye the entire history of the universe from its inception to yesterday. So, within living memory, we have learned how the universe came into being at the moment of the big bang 15bn years ago. We know how the first stars were formed and how within their fiery interiors the chemical elements were created by the process of nuclear fusion. We have learned how 4bn years ago a vast cloud of intergalactic gas and particles coalesced to form our solar system; and how our earth acquired its life sustaining atmosphere and how the movement of massive plates of rock beneath its surface created the continents and oceans. We have identified the very first forms of life that emerged 3bn years ago and that “universal code” strung out along the double helix by which all living things replicate their kind. And we now know the details of the physical characteristics of our earliest ancestors and the transformation to modern man. It is difficult, even impossible, to imagine how so comprehensive an achievement can be surpassed. Once it is possible to say “this is how the universe came into being,” and so on, anything that comes after is likely to be something of an anticlimax.
Actually the author inadvertently has his finger on the pulse of the problem and fails to recognize it. We cannot claim that Big Bang hypotheses of any kind have been proven or even found to be logical. They are full of holes and fudge factors, and not one of them can describe the events at the very beginning that might have caused a mythological Big Bang.
We have NOT learned how the first stars were formed at all. In fact there is no way anyone can identify stars being formed other than from the remnants of previously existing stars.
The entire idea of a gaseous beginning that formed our Solar System has been entirely debunked. In fact there is strong evidence in the makeup of every single planet and most of the moons that falsifies that idea altogether. No secular scientists can agree on a beginning to the Solar System or the formation of any one of the planets.
Certainly every hypothesis ever posited fails to explain the existence of all this water on Earth, other than of course Creation by God. Shifting tectonic plates explains nothing concerning water or life. In fact, discovering DNA and learning more about it has been a steadily increasing problem for Darwinists, as organisms prove to be uncomfortably complex and DNA ever more intricate, demonstrating that mankind is indeed the student and not the teacher when studying the cell.
Finally, the pomposity of declaring that "Once it is possible to say “this is how the universe came into being,” and so on, anything that comes after is likely to be something of an anticlimax" is a good clue as to why science is bogging down. If science were a game of football, the Darwinists would not yet have gained a yard or completed a pass, because they have been too busy celebrating a victory not won or even a battle contested. Darwinists run away as fast as they can from every Creationist or ID challenge. They prefer to censor and deny so as to avoid dealing with that uncomfortable subject, the evidence!
Almost to his surprise, Horgan found many prominent scientists he interviewed concurred. “We have been so impressed by the acceleration and the rate of magnificent achievements,” observed H Bentley Glass, a former president of the American Association for the Advancement of Science, “we have been deluded into thinking it can be maintained indefinitely.” The physicist Richard Feynman once expressed a similar view: “We live in an age of the discovery of the fundamental laws of nature. It is very exciting, but that day will never come again. Like the discovery of America, you only discover it once.”
Another ignorant statement! We now know that people discovered America from the north, crossing by land bridge and by boat from Asia. Others took boats from Africa across the Atlantic to South America. We also have proof that Asians coming over from the mainland took ships that landed far down in South America. Vikings both discovered and established colonies in North America during the Medieval Warm Period and abandoned them when the climate turned cold again. So by the time Columbus was landing in the Caribbean there had been many "discoveries" of America. It didn't just happen once.
But others predictably disagreed, leading to Horgan being “denounced,” he proudly admitted, by no less than a dozen Nobel laureates and the editors of both Nature and Science. The contention that “science has reached its limits,” his critics argued, had been expressed many times in the past only to be consistently disproved. Famously, Lord Kelvin at the close of the 19th century predicted the future of the physical sciences was to be looked for “in the sixth place of decimals”: that is, in futile refinements of the present state of knowledge. Within a few years Einstein had proposed his theory of relativity and the certainties of Lord Kelvin’s classical physics were overthrown. But, Horgan responded in a robust defence of his views, the current situation is different, for by the time science encompasses the two extremes of matter—the minuscule structure of the atom and the vastness of the cosmos—then the opportunities for further progress are clearly limited.
I think that is another preposterous statement. Secular science, having abandoned God, has failed to come up with an explanation for existence or life or information or matter or energy or time. To hand all the big questions of life over to a series of lucky breaks is akin to ascribing magic powers to time and chance. Much of modern science resembles the classic Cargo Cult.
Countering his critics’ charge that there remain many unanswered questions (Why is there something rather than nothing? What prompted the big bang? Why is the cosmos intelligible?), Horgan retorted that such issues are not resoluble by the methodology of science. The proposed explanations, such as the superstring theory that would have the simplest elements of matter vibrating in ten dimensions, or the multiverse hypothesis of there being billions of parallel universes to our own—are “unconfirmable speculation.”
That is an astounding statement. How is the Big Bang not unfounded speculation? It doesn't fit the evidence. To understand why there is something rather than nothing and to comprehend why the Universe is intelligible only requires acknowledgement of a Designer Who created it all intentionally, logically and wonderfully. Pretty simple actually.
For all its plausibility Horgan’s “end of science” scenario is inconsistent with the exponential surge in research funding of the recent past—and the sagging library shelves worth of knowledge it generates. His thesis also fails to take into account the significance of major technical developments originating in the 1980s that promised to resolve the two final obstacles to a truly comprehensive account of our place in the universe: how it is the genetic instructions strung out along the double helix give rise to that near-infinite diversity of form and attributes that so readily distinguish one form of life from another; and how the electrical firing of the brain “translates” into our subjective experiences, memories and sense of self.
Again, this is easy when you admit that DNA is designed. We know that DNA doesn't exist without the cell, the cell doesn't exist without DNA, and neither exists without ATP, which cannot be produced without both cell and DNA in place. We know the building blocks of life cannot survive in the wild, since chemical reactions break them down before they can be assembled and joined together. The dirty truth of genetics that Darwinists are loath to admit is that the actual differences in the DNA of a mouse, a human and a bacterium are few. It is the way in which the DNA is read and translated and used, and in what order, that determines the organism far more than the essence of the DNA string itself.
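As a toy illustration of that last point (my own sketch, not a biological model): the same small "parts list" of instructions can yield very different outcomes depending on the order in which, and how often, each instruction is read and used.

```python
# Toy illustration only: two "organisms" draw on an identical parts list of
# instructions, differing only in their regulatory program, i.e. how often and
# in what order each part is invoked.

PARTS_LIST = {"grow_segment": "S", "branch": "B", "terminate": "."}

def express(program):
    """Read the shared parts list according to a regulatory program:
    a sequence of (instruction, repeat_count) pairs."""
    return "".join(PARTS_LIST[name] * count for name, count in program)

program_a = [("grow_segment", 3), ("branch", 1), ("grow_segment", 2), ("terminate", 1)]
program_b = [("branch", 2), ("grow_segment", 6), ("terminate", 1)]

print("organism A:", express(program_a))   # SSSBSS.
print("organism B:", express(program_b))   # BBSSSSSS.
```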
Those technical developments are, first, the ability to spell out the full sequence of genes, or genomes, of diverse species—worm, fly, mouse, man and many others—and, second, the sophisticated scanning techniques that permit neuroscientists to observe the brain “in action,” thinking, memorising and looking out on the world. Both sets of developments signalled a radical departure from conventional laboratory-based science, instead generating petabytes (or tens of thousands of trillions of bytes) of raw data which require supercomputers to analyse and interpret. This explosion of data has indeed transformed our understanding of both genetics and neuroscience—but in ways quite contrary to that anticipated. (See Philip Ball’s article, Prospect June 2010).
This explosion of data should have caused scientists to join in concert with Michael Behe in "Darwin's Black Box" and agree that irreducible complexity precludes random macroevolution.
The genome projects were predicated on the reasonable assumption that spelling out the full sequence of genes would reveal the distinctive genetic instructions that determine the diverse forms of life. Biologists were thus understandably disconcerted to discover that precisely the reverse is the case. Contrary to all expectations, there is a near equivalence of 20,000 genes across the vast spectrum of organismic complexity, from a millimetre-long worm to ourselves. It was no less disconcerting to learn that the human genome is virtually interchangeable with that of both the mouse and our primate cousins, while the same regulatory genes that cause, for example, a fly to be a fly, cause humans to be human. There is in short nothing in the genomes of fly and man to explain why the fly has six legs, a pair of wings and a dot-sized brain and that we should have two arms, two legs and a mind capable of comprehending the history of our universe.
This is only the beginning of the Darwinist nightmare. DNA operates in remarkably intricate ways and the once-misnamed "Junk" DNA has more utility than the portions first thought most significant.
The genetic instructions must be there—for otherwise the diverse forms of life would not replicate their kind with such fidelity. But we have moved in the very recent past from supposing we might know the principles of genetic inheritance to recognising we have no conception of what they might be.
"Replicate their kind." This is what Creationists teach. It may be a mystery to Darwinists but Creationists have always asserted that it is a design feature of the organism. Conservation of kind, rich genetic information that allows for variation within kind in order to provide contingencies and redundancies for the kind and for the ecological cycle of Earth. Meta-information specific to the kind that conserves the kind.
It has been a similar story for neuroscientists with their sophisticated scans of the brain “in action.” Right from the beginning, it was clear that the brain must work in ways radically different from those supposed. Thus the simplest of tasks, such as associating the noun “chair” with the verb “sit,” causes vast tracts of the brain to “light up”—prompting a sense of bafflement at what the most mundane conversation must entail. Again the sights and sounds of every transient moment, it emerged, are fragmented into a myriad of separate components without the slightest hint of the integrating mechanism that would create the personal experience of living at the centre of a coherent, unified, ever-changing world. Reflecting on this problem, Nobel prize-winner David Hubel of Harvard University observes: “This abiding tendency for attributes such as form, colour and movement to be handled by separate structures in the brain immediately raises the question how all the information is finally assembled, say, for perceiving a bouncing red ball. These obviously must be assembled—but where and how, we have no idea.”
Meanwhile the great conundrum remains unresolved: how the electrical activity of billions of neurons in the brain translates into the experiences of our everyday lives—where each fleeting moment has its own distinct, intangible feel: where the cadences of a Bach cantata are so utterly different from the taste of bourbon or the lingering memory of that first kiss.
The implications are obvious enough. While it might be possible to know everything about the physical materiality of the brain down to the last atom, its “product,” the five cardinal mysteries of the non-material mind are still unaccounted for: subjective awareness; free will; how memories are stored and retrieved; the “higher” faculties of reason and imagination; and that unique sense of personal identity that changes and matures over time but remains the same.
The usual response is to acknowledge that perhaps things have turned out to be more complex than originally presumed, but to insist these are still “early days” to predict what might yet emerge. Certainly both genetics and neuroscience could generate further petabytes of basic biological and neuroscientific data almost indefinitely, but it is possible, in broad outline, to anticipate what they will reveal. Biologists could, if they so wish, spell out the genomes of each of the millions of species with which we share the planet but that would only confirm they are composed of several thousand similar genes that “code” for the cells from which all living things are made. Meanwhile, the really interesting question of how they determine the unique form and attributes of such diverse creatures would remain unresolved. And so too for observing the brain “in action,” where a million scans of subjects watching a bouncing red ball would not progress understanding any further of how those neuronal circuits experience the ball as being round and red and bouncing.
The contrast with the supreme intellectual achievements of the postwar years is striking. At a time when cosmologists can reliably infer what happened in the first few minutes of the birth of the universe, and geologists can measure the movements of continents to the nearest centimetre, it seems extraordinary that geneticists can’t tell us why humans are so different from flies, and neuroscientists are unable to clarify how we recall a telephone number.
Actually, scientists have no clue what happened in the first few minutes of the birth of the Universe unless they believe Genesis 1:1. Scientists cannot actually predict the movements of continents, because earthquakes and volcanoes will take them by surprise. Our weather forecasters cannot, in most areas, predict the weather with accuracy even one week in advance. Since science has presumed to "know" the opposite of what God tells us, it is no wonder they cannot understand the smaller details.
Has science perhaps been looking in the wrong place for solutions to questions that somehow lie outside its domain—what it might be that could conjure that diversity of form of the living world from the monotonous sequence of genes, or the richness of the mind from the electrochemistry of the brain? There are two possible reasons why this might be so. The first, obvious on reflection, is that “life” is immeasurably more complex than matter: its fundamental unit—the cell—has the capacity to create every thing that has ever lived and is billions of times smaller than the smallest piece of machinery ever constructed by man. A fly is billions upon billions upon billions of times more complex than a pebble of comparable size, and possesses properties that have no parallel in the inanimate world: the capacity to transform the nutrients on which it feeds into its own tissues, to repair and reproduce itself.
And so too the laws of biology, where the genetic instructions strung out along the double helix determine the living world must similarly be commensurately billions upon billions of times more complex than the laws of physics and chemistry that determine the properties of matter. So while it is extraordinary that cosmologists can infer the physical events in the wake of the big bang, this is trivial compared to explaining the phenomena of life. To understand the former is no indication of being able to explain the latter.
To repeat, there is no scientist on Earth who can give you a Big Bang scenario with a logical picture of the "Singularity" at the beginning of it all, a logical power source for it, or any good explanation for nothing exploding and creating everything. Furthermore, none of the theories can explain several important things we do know about the Universe. Why is everything moving away from the Earth, making the Earth and/or the Solar System the center of the Universe? How can any logical person be expected to believe in dark matter and dark energy with not one shred of evidence for their existence? Why is the background radiation completely wrong for the Big Bang, and why would the temperature of the Universe be about the same everywhere if there were this big explosion? Since when did explosions learn how to build things?
The further reason why the recent findings of genetics and neuroscience should have proved so perplexing is the assumption that the phenomena of life and the mind are ultimately explicable in the materialist terms of respectively the workings of the genes and the brain that give rise to them. This is a reasonable supposition, for the whole scientific enterprise for the past 150 years is itself predicated on there being nothing in principle that cannot ultimately be explained in materialist terms. But it remains an assumption, and the distinctive feature of both the form and “organisation” of life (as opposed to its materiality) and the thoughts, beliefs and ideas of the mind is that they are unequivocally non-material in that they cannot be quantified, weighed or measured. And thus, strictly speaking, they fall outside the domain of the methods of science to investigate and explain.
This then is the paradox of the best and worst of times. Science, the dominant way of knowing of our age now finds itself caught between the rock of the supreme intellectual achievement of delineating the history of the universe and the (very) hard place of the apparent inscrutability to its investigations of the phenomena of life and the mind.
This writer almost has the truth in his grasp and then lets it slip away. It is the wrongheaded belief that all things must have a materialistic explanation that has led science to the end of dead-end streets. The presumption that naturalistic materialism, which is a worldview, is foundational to science has crippled science.
Still, the generous funding of science research will continue so long as the view prevails that the accumulation of yet more petabytes of data will, like a bulldozer, drive a causeway through current perplexities. But, that view undoubtedly has its hazards for, as the saying goes, “under the banyan tree nothing grows.” And the banyan tree of Big Science threatens to extinguish the true spirit of intellectual inquiry. Its mega projects organised on quasi-industrial lines may be guaranteed to produce results, but they are inimical to fostering those traits that characterise the truly creative scientist: independence of judgement, stubbornness and discontent with prevailing theory. Big Science is intrinsically conservative in its outlook, committed to “more of the same,” the results of which are then interpreted to fit in with the prevailing understanding of how things are. Its leading players who dominate the grant-giving bodies will hardly allocate funds to those who might challenge the certainties on which their reputations rest. And when the geeks have taken over and the free thinkers vanquished—that really will be the end of science.
I for one will be a free-thinking geek who will fight for real science instead of the brainwashed censorship that Darwinism passes off as "science", which is turning into a skipping record: saying the same thing in response to every new question, answering nothing.