The Nature and Philosophy of Science
Scientists, on a popular view, are unbiased observers who use the scientific method to conclusively confirm and conclusively falsify theories. These experts supposedly have no preconceptions when gathering data and logically derive theories from their objective observations. One great strength of science, on this view, is that it is self-correcting, because scientists readily abandon theories that are shown to be inadequate. Although such lofty views of science have been accepted by many people, they are almost completely untrue. Data can neither conclusively confirm nor conclusively falsify theories, there really is no such thing as the scientific method, data become somewhat subjective in practice, and scientists have displayed a surprisingly fierce loyalty to their theories. There have been many misconceptions about what science is and is not. I'll discuss why these misconstruals are inaccurate later, but first I'd like to begin with some basics of what science is.
Science is a project whose goal is to obtain knowledge of the natural world. The philosophy of science is a discipline that deals with the system of science itself. It examines science’s structure, components, techniques, assumptions, limitations, and so forth.
To properly understand the contemporary philosophy of science, it is necessary to examine some basic components of science: data, theories, and what are sometimes called shaping principles.
Data are collections of information about physical processes. Collecting data that bear on a theory can be laborious, and the specific details involved can make science tricky enough that scientists sometimes leave them out when talking to laypeople. It is easy to fit a theory to data that are vague and overgeneralized; fitting it to specific data is usually harder, since the details tend to make the theory less plausible. Even so, data are an essential part of theories and of science.
Theories come in roughly two forms. Contrary to what some might think, whether something is a theory in the scientific sense has nothing to do with whether it is supported by the evidence, contradicted by the evidence, well liked among scientists, and so forth; it has only to do with its structure and the way it functions. That is, just because a theory is a scientific theory does not mean the scientific community currently accepts it. Many theories that are technically scientific have been rejected because the evidence is strongly against them. Phenomenological theories are empirical generalizations of data. They merely describe the recurring processes of nature and do not refer to their causes or mechanisms. Phenomenological theories are also called scientific laws, physical laws, and natural laws. Newton's third law, which says that every action has an equal and opposite reaction, is one example. Explanatory theories attempt to explain the observations rather than generalize them. Whereas laws are descriptions of empirical regularities, explanatory theories are conceptual constructions that explain why the data exist. For example, atomic theory explains why we see certain observations, and the same could be said of DNA and relativity. Explanatory theories are particularly helpful in cases where the entities involved (like atoms and DNA) cannot be directly observed.
Shaping principles are non-empirical factors and assumptions that form the basis of science and go into selecting a "good" theory. Why are they necessary? Can't theories be selected solely on the basis of empirical data? Surprisingly, the answer is no. Why not? Describing some mistaken views of science will help explain the answer.
Many students (including me) were brought up with a somewhat idealized view of science, or at least of science as it should be done. As I have found, however, the account of science most of us were taught may have been misleading, and some ideas of what "the scientific method" is have also been erroneous. This is perhaps because scientists themselves tend to be ignorant of the philosophy of science. Views of what science is and how it should be done have changed throughout history.
In the early years of science, the system of acquiring knowledge was viewed as completely objective, rational, and empirical. This traditional view held that scientific theories and laws were to be conclusively confirmed or conclusively falsified on the basis of objective data, and that this was to be done through "the scientific method." Some sort of method seemed necessary because humans have a variety of untrustworthy tendencies: biases, feelings, intuitions, and so forth. These had to be prevented from infecting science so that knowledge could be reliably obtained. A rigorous and precise procedure ("the scientific method") was to be followed so that such human imperfections would not hinder the process of discovering nature.
Baconian inductivism, dating to the early seventeenth century, was at one point considered to be the scientific method. The basic idea was this: collect numerous observations (as many as humanly possible) while remaining unaffected by any prior prejudice or theoretical preconceptions, inductively infer theories from those data (by generalizing the data into physical laws), and collect more data to modify or reject the hypothesis if needed. In many instances, this concept seemed to work. One can collect numerous observations of physical processes and experiments to derive natural laws, such as the conservation of mass-energy. Alas, Baconian inductivism is an inaccurate picture of scientific method. When using induction to arrive at natural laws, certain theoretical preconceptions are absolutely vital. To generalize the data into physical laws, the individual must assume that the laws apply to physical processes not observed. This commits one to several assumptions, such as the uniform operation of nature. Even if we set aside the fact that inductive logic is invariably based on such postulations, there is another problem. Science deals with concepts and explanatory theories that cannot be directly observed, including atomic theory and the theory of gravity; many other theories invoke unobservable concepts like forces, fields, and subatomic particles. There is no known rigorous inductive logic that can infer those theories and concepts solely from the data they explain. If inductivism were the correct scientific method, such theories could not be legitimate science. As if these difficulties weren't enough, inductivism has other major technical problems that have led to its demise.
Sir Isaac Newton developed hypothetico-deductivism in the late 1600s (though the method was named at a later date). Essentially, one starts with a hypothesis (basically a provisional theory) and then deduces what we would expect to find in the empirical world if that hypothesis were true; hence the name hypothetico-deductivism. The idea here was to quarantine human irrationality. One could make a theory for any reason or none; the source of a theory would be irrelevant in hypothetico-deductivism, since theories could be tested against the empirical world and be confirmed or refuted that way. A theory became a good theory not because of its origins but because of the hypothetico-deductive method of verification. Inductivism, recall, could not work because empirical data cannot be the sole source of a theory. Some scientists and philosophers of science who rejected inductivism embraced hypothetico-deductivism, in significant part because it allowed ideas like atomic theory to count as legitimate science when they would not under inductivism.
Unfortunately, hypothetico-deductivism also has problems. The philosophy that rigorous proof is necessary for good science has serious problems even if we assume that sense experience, memory, and testimony are all generally reliable. For one thing, we cannot be sure that we have examined all the germane data: there is always the possibility that future observations will topple even the most established of theories, or conflict with any known scientific law. This is what cut Newtonian mechanics down to size. Einstein, Heisenberg, and other physicists demonstrated that, rather than being a total account of the nature and dynamics of the universe, Newtonian mechanics holds only in a much more restricted realm than was once thought. Undiscovered data can likewise contradict the predictions of any explanatory theory. Every theory has an infinite number of expected empirical outcomes, and we are incapable of testing them all. So even though a theory can be confirmed to some extent by empirical data, it can never be conclusively confirmed. Apart from this, hypothetico-deductivism's method of verification has the following structure, where T is a theory and D a set of data that we would expect if the theory were true:

    If T is true, then D is true.
    D is true (the data are observed).
    Therefore, T is true.
This is not a logically valid argument. Indeed, an argument with this structure commits the fallacy of affirming the consequent. Let T = "An invisible unicorn from Mars flew into the sky to cause rain," and D = "It is raining." The first premise holds by stipulation (if T were true, then D would be true). Suppose the second premise is also correct: it is raining. Even so, the conclusion does not follow. Why not? Because there could be explanations for D other than T; more than one theory can account for the same data. In this example, ordinary weather patterns, not a flying invisible unicorn from Mars, could have caused the rain. In science or anywhere else, any given body of data (no matter how large) will always be compatible with an unlimited number of alternative theories. Invariably there are many theories that explain the exact same data, and at least some of them will contradict each other. This fact is sometimes expressed by saying that data underdetermine theories, or simply as the underdetermination of theories. Because such competing theories are consistent with the same set of data, they are empirically identical, which means that empirical data by themselves cannot single out one theory from among its empirically indistinguishable competitors. Some of these theories may be elegantly simple and others outrageously complex, but multiple alternatives exist for any set of data. There are real-world examples of this problem. In one instance, Tycho Brahe and Copernicus each had a competing theory of the solar system, and it can be shown mathematically that every bit of data predicted by one theory would also be predicted by the other.
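The invalidity of this argument form can be checked mechanically. The following sketch (my own illustration, not part of the original discussion) brute-forces the truth table for "If T then D; D; therefore T" and finds the row that breaks it:

```python
# Enumerate every truth-value assignment to T and D, and look for a row
# where both premises hold but the conclusion fails. Any such row shows
# the argument form "If T then D; D; therefore T" is invalid.
counterexamples = [
    (T, D)
    for T in (False, True)
    for D in (False, True)
    if ((not T) or D)  # premise 1: "If T, then D" (material conditional)
    and D              # premise 2: D is observed
    and not T          # conclusion "T" fails in this row
]
print(counterexamples)  # [(False, True)] -- premises true, conclusion false
```

The single counterexample row is exactly the unicorn scenario: it is raining (D true) while the unicorn theory is false (T false), yet both premises are satisfied.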
We may not always be able to think of alternative theories, but this has to do only with the limits of human imagination in constructing such theories, not with the logic of the situation. Of course, the underdetermination of theories also poses yet another problem for Baconian inductivism: explanatory theories cannot be inferred from data alone if there are always numerous alternatives that explain the same set of data. As a result of the underdetermination of theories and the risk of undiscovered, contradictory empirical evidence, a scientific theory cannot be conclusively proven merely through the data. Even if we drop the notion of conclusive proof from hypothetico-deductivism, this idea of the scientific method dreadfully oversimplifies how science works. No rational scientist would accept the flying-invisible-unicorn-from-Mars theory simply because it passed the empirical confirmation test in the above example, for instance.
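Underdetermination can be made concrete with a toy example (my own, not drawn from any real scientific dispute): two hypothetical "laws" that agree on every observation collected so far yet diverge on cases not yet observed.

```python
# Two rival "laws" relating x to y. They coincide on every data point
# observed so far, so no amount of that data can decide between them.
observed_x = [0, 1, 2]

def theory_a(x):
    return 2 * x  # a simple linear law

def theory_b(x):
    # same as theory_a plus a term that vanishes at every observed point
    return 2 * x + x * (x - 1) * (x - 2)

# Empirically identical on the existing data...
assert all(theory_a(x) == theory_b(x) for x in observed_x)

# ...but they make different predictions about unobserved cases.
print(theory_a(3), theory_b(3))  # 6 12
```

Any finite data set admits infinitely many such rivals, since for every new observation one can always add another term that vanishes on all the points seen so far.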
Popperian falsification is another account of what the scientific method is. Karl Popper, regarded by many as one of the finest and most influential philosophers of science of the twentieth century, recognized the flaws of inductivism and rejected it. Popper saw that one could not record everything observed, because that is simply not feasible; some sort of selection is needed, and thus observation is always selective. That being so, Popper believed that a hypothesis had to be created first for scientific investigation to begin, for otherwise there would be no way to tell which data are germane. Since theories must come first in order to decide which observations are relevant, such theoretical preconceptions are essential to doing science (contrary to Baconian inductivism). This was one of the reasons he believed inductivism unworkable. He also denied the concept of conclusive proof and instead stressed falsifiability as the necessary criterion for a theory to be legitimate science: if a theory cannot be falsified by some conceivable observation, it is not genuine science. This requirement that a scientific theory be conclusively falsifiable is known as the demarcation criterion. The idea seemed reasonable enough, since scientific theories make predictions, and Popperian falsification held that if a prediction does not come true, then the theory must be false. Popper's scientific method was for scientists to test theories in experiments whose outcomes could potentially falsify them, especially experiments where the theory would most likely collapse. Science thus retained some of its traditional quality in that it could make definite progress by conclusively eliminating theories.
Yet, like inductivism, Popper's ideas are not entirely successful either. (Consequently, some regard Popper's contribution to the philosophy of science as overrated.) Popper was certainly correct that data are selective, but the selection need not be guided by a theory (though it often is). For instance, one can record data and apply assumptions to the data to form a theory, as is sometimes the case with scientific laws. (Note that since assumptions must be accepted for the theory to be created, this is not an example of assumption-free inductivism in action.) The demarcation criterion is even more flawed. Surprisingly, the problem is that it is impossible to conclusively falsify theories by empirical data. One reason is that theories by themselves are incapable of making predictions. Instead, the empirical consequences of a theory invariably rest on background assumptions (also called auxiliary assumptions), which are needed to derive predictions and even to obtain data. Suppose, for example, we have a particle theory that says that if we process a certain particle in a particular way, we will get specified values on various measurements.
Deriving that prediction requires assumptions about the particle, the processing procedure, and the physics connecting them, and most of those assumptions depend on scientific theories. But scientific theories, remember, cannot be conclusively proven. The dependence on background assumptions to make predictions is sometimes called the Duhem-Quine problem, and there are real-life examples of it. To "disprove" the idea that the earth was moving, some people noted that birds were not thrown off into the sky when they let go of a tree branch. Those data are no longer accepted as empirical evidence that the earth is not moving, because we have adopted a different background system of physics that yields different predictions. So if a theory's prediction does not come true, one can claim that the theory is correct and that at least one of the auxiliary assumptions is wrong.
Besides using auxiliary assumptions to make predictions, such assumptions are necessary to find out whether the predictions come true. Suppose that in order to test our particle theory in the real world we must use a certain particle accelerator in a particular way. To run the test we must then assume, among other things, that the accelerator operates as its design theory says it does, that the detectors respond to the particle's properties as our theories of the instruments claim, that the readings are recorded and interpreted correctly, and that the experimental conditions are what we take them to be.
Notice that several of these assumptions again depend on scientific theories, which cannot be rigorously proven. Suppose the prediction does not come true and we observe that "this particle did not have the specified properties that it should have had." That observation is heavily dependent on theories. Although it is possible that our theory is wrong, it is also possible that one or more of the assumptions are wrong instead. Often, the terminology used to describe experimental results, along with the measurements and instruments used in testing theories, makes up another set of background assumptions. The dependence on such postulations for obtaining data is described as observations being theory-laden. In this example, we must accept instrument-and-measurement assumptions of this kind to accept the observation of what properties the particle displayed. A completely theoretically neutral language for recording data is not always possible. Suppose instead the prediction comes true. There is still the possibility that the background assumptions are wrong. Consequently, theories can be neither conclusively proven nor conclusively falsified by empirical data.
Also, it is possible to salvage a troubled theory or to argue against a well-supported one, because one can alter auxiliary assumptions to produce different predictions or change the meaning of theory-laden observations. For example, suppose I proposed the theory that the moon is made of cheese. To refute this theory, many people would point out that astronauts have gone up there and found it to be more like a rock than a huge piece of cheese. I could counter by saying something like, "The moon, with its great age, would naturally accumulate massive quantities of rocks and other particles from space. Under that layer of space debris, however, is the cheese." This type of argument that explains away such evidence is called an ad hoc hypothesis, especially when the theory-saving device lacks significant further evidence of its own. Of course, it is possible to rationally discard this absurd theory, but the point is that one cannot do so merely by pointing to the data. With the right ad hoc hypotheses, the theory that the moon is made of cheese becomes empirically identical to the theory that the moon is rock-like. This sort of thing is not limited to ridiculous theories about the moon's composition: it is possible to modify virtually any theory so that it is consistent with whatever data might come up.
Although Karl Popper was not completely successful, he did make some useful contributions. He pointed out that data are selective and subject to human choice (and thus demonstrated that data are not quite as objective as once thought), and he showed the flaws of inductivism and why a theory cannot originate exclusively from empirical data.
So it does seem that, if the only way to evaluate theories were in terms of empirical predictions, science would be in trouble. In testing theories, scientists use auxiliary assumptions they have rational reason to believe true, even though neither the assumptions nor the theories are conclusively proven. Yet, given the underdetermination of theories, we can't just pick a theory and justify it solely by the data. We can't even justify a particular theory as probable by the empirical evidence alone, since an infinite number of other theories can explain the exact same set of data. How, then, can science function?
It is evident that theories and data by themselves are insufficient for science to work, and thus other factors are needed for science to operate. These factors are the shaping principles, which can be used to select theories and which form the foundations of science. Many assumptions are made in science. One example is the uniformity of nature: the belief that natural processes operate in a fairly consistent manner. This shaping principle is the basis for the idea of natural laws. For example, Newton's laws are said to apply throughout the universe, even though scientists have not actually tested them everywhere in the universe. Natural laws could not exist in science without assuming the uniformity of nature. Other assumptions made for science to operate include that there exists an external objective reality, that our senses are generally reliable, and so forth.
Another set of shaping principles evaluates the empirical evidence in order to select theories. Because of the underdetermination of theories, there is always an infinite number of competing theories that can accommodate any given set of empirical data. Since these competitors are empirically indistinguishable from one another, if science is to pick out one of them and claim that it is correct, the selection must be based on nonempirical principles (whether philosophical, personal, societal, or whatever). The law of parsimony is one of them. This principle states that, all other things being equal, the simplest theory is preferred over theories involving additional factors. It is also called Ockham's razor (sometimes spelled Occam's razor). The law of parsimony is often used because a theory conforming to it fits the data more easily, and it applies especially to theories with ad hoc hypotheses: the fewer ad hoc hypotheses a scientific theory has, the better. Other principles include (but are not limited to) empirical adequacy (covering the pertinent data in some suitable way), self-consistency, fruitfulness (giving rise to other understandings and stimulating pioneering investigations and advancements), and explanatory power. Another key principle is how well a theory ties in with other scientific theories and concepts that are rational to believe. Only when these kinds of shaping principles interact with data can science provide rational support for a theory over its competitors.
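The interaction of two of these principles can be caricatured in code. The sketch below is entirely my own toy scheme, not a procedure the philosophy of science endorses: it filters hypothetical candidate theories by empirical adequacy, then applies parsimony by preferring the one carrying the fewest ad hoc hypotheses.

```python
# Hypothetical catalogue of competing theories. "fits_data" stands in
# for empirical adequacy; "ad_hoc" counts theory-saving devices.
theories = {
    "moon is rock":                {"fits_data": True,  "ad_hoc": 0},
    "moon is cheese under debris": {"fits_data": True,  "ad_hoc": 1},
    "moon is green glowing gas":   {"fits_data": False, "ad_hoc": 0},
}

def preferred(candidates):
    # Empirical adequacy first: discard theories that fail to cover the data.
    adequate = {name: t for name, t in candidates.items() if t["fits_data"]}
    # Parsimony second: among the rest, fewest ad hoc hypotheses wins.
    return min(adequate, key=lambda name: adequate[name]["ad_hoc"])

print(preferred(theories))  # moon is rock
```

Real theory choice is nothing like this tidy, of course; as discussed below, many shaping principles interact at once and can pull in opposite directions.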
However, there are a few exceptions to the idea that there is no conclusive proof in science, for logic is the closest we can get to rigorous proof and falsification. For example, suppose our friend Bob has this theory: hairless men have no hair. By the rules of logic, Bob's theory must be true. Of course, Bob's theory is a tautology (a needless repetition of an idea; here, the repeated concept is hairless men), and tautologies are typically not very helpful. Sadly, not many helpful theories can be thoroughly proved by logic, and disproving a scientific theory by logic is almost never possible either, because a scientist seldom proposes a theory that is logically impossible. Most of the time science relies on other shaping principles to pick theories.
These principles become easier to understand when they are put into action. In the "moon is made of cheese" example, we can reject the cheese theory because of the law of parsimony: it uses an ad hoc hypothesis, whereas the theory that the moon is like a rock does not. Often, of course, more than one shaping principle applies. For example, suppose Bob's computer is malfunctioning. One theory he has is that an invisible gremlin has caused the problems; another is that a computer virus has invaded his machine through his modem, his software, and some fairly complex electronic systems in his computer and on the Internet. The gremlin theory is simpler, and thus would seem to appeal to the law of parsimony. Yet the gremlin theory hardly seems empirically adequate in this case, and other considerations need to be taken into account: the computer virus theory ties in with electronic concepts that are supported by evidence, whereas the gremlin theory does not. Because so many shaping principles are used, and because they can often conflict with each other, we should be careful in judging how much the evidence supports a theory.
Unfortunately, there are still limitations in scientific practice, and shaping principles do not solve the entire problem, even for the basic foundational beliefs of science. Take the uniformity of nature, for example. We believe nature is consistent enough that the experimental data (such as from testing physical laws) obtained two years ago on Earth would essentially be reproduced if the experiments were conducted under identical conditions on Mars next week. But there really is no logical principle to tell us that physical laws will hold in places where we haven't tested them (even if that place is the future). A similar problem arises when we choose between empirically identical theories. When using shaping principles to select a theory, we must have some philosophical basis for believing that nature's preferences are similar to ours, and for many of these principles there is no logical rule to guarantee their reliability. For example, in picking out a theory from among its empirically indistinguishable competitors (when all other factors are held constant), the notion that reality favors simple theories over complex ones is nevertheless a philosophical principle. Although these indicators of theoretical truth are necessary for science to work, they are significantly indirect, circumstantial, and highly fallible, and they are still unable to prove or disprove theories. Science may be the best we can do, but its limitations should still be recognized.
On top of that, there is no known clear-cut method that tells us to what degree the evidence confirms a scientific theory, despite attempts at finding one. This becomes problematic when scientists must decide which theory to accept as the most rational. Scientists intuitively feel how rational scientific theories are rather than applying a precise logical method, and these intuitive feelings result from shaping principles. The interactions of shaping principles in the minds of scientists are so complex and so numerous that we may never come up with a rigorously logical system for selecting theories. Most shaping principles frequently go unspoken, and sometimes scientists themselves do not know they are using them. Although some shaping principles are based on logic, others are not always so sensible and objective. Scientists (like other human beings) are also affected by cultural, social, and personal beliefs; indeed, such factors have been significant influences in scientific revolutions. This is because many activities in science, such as constructing theories, involve numerous aspects of oneself. Theories themselves are creative inventions that come from the minds of scientists. Science is a human activity, and what affects scientists will affect science. One might think that having such unscientific factors affect theory judgments is bad for science. That may very well be true, but unfortunately there is no known way to separate the helpful principles (explanatory power, etc.) from the unfavorable ones (personal biases, etc.) in the subconscious minds of the scientists making these judgments. Because every human being has a unique set of shaping principles, different scientists (and regular human beings) can look at the exact same set of data and disagree about which theory most rationally explains the observations.
Rather than the traditional view that science is to be protected from biases and other imperfections of people, it turns out that science is inescapably infected with humanness.
It would seem that there is a delicate tapestry involved in interpreting the data. Because of the Duhem-Quine problem, it is uncommon for a theory to be tested in isolation. Because we often rely on background assumptions to derive predictions for a theory, and because those background theories depend on further auxiliary theories and principles for their own empirical expectations, the collection of theories, combined with their shaping and background principles, makes up an explanatory matrix, or conceptual grid, in which to fit the data. Modifications to the explanatory matrix can be made in attempts to get a better fit, but because of the interwoven nature of the tapestry, one often cannot replace aspects of the grid without changing things elsewhere. It is thus possible for the need to arise for an entire conceptual system to be replaced. Additionally, the nature of science can make it difficult, if not impossible, to empirically test an individual theory completely independently of this matrix. However, it is also quite possible for nature to teach us some things in carrying out our investigation; the interaction between the explanatory matrix and the data can be a two-way process. As we uncover more data, we can learn better ways to shape the grid and how to go about it.
Some have pictured the scientist as a completely objective individual, free of bias and preconceptions, willing to quickly abandon even the most well-accepted theory if it were shown to be scientifically inadequate. This is not close to the truth. Scientists are humans, and humans are fallible; they have weaknesses just like the rest of us. For one thing, a bias toward favored theories is actually built into all scientific research. (Recall the necessity of background assumptions for making predictions and testing theories.)
A related imperfection, and to many a startling one, is a shaping principle called tenacity (also referred to as belief-perseverance by psychologists). Scientists throughout history have shown a surprisingly fierce loyalty to their theories, even theories in trouble with the evidence, and this tenacity often persists for long periods of time. Why? The reasons become clear when one considers what scientists do. When people put enormous effort into something over great lengths of time, as scientists often do with their theories, they tend to become attached to it. Such scientists want the theory to be true, and it becomes psychologically more difficult for them to reject it as false, even when presented with strong evidence against it. The satisfaction of destroying a theory one has arduously worked on is small compared to that of watching the theory succeed. Furthermore, the reluctance to give up long-held beliefs is part of human nature, and scientists are not immune to it. Not many of us would renounce the idea that two plus two equals four even if we were presented with a purported mathematical proof against it. Consequently, a scientist whose career and livelihood are invested in a scientific theory will probably not give it up easily. Needless to say, not everyone has been aware of this, including scientists. How is it, then, that new theories emerge in science? Nobel Prize-winning physicist Max Planck said, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
However, tenacity is not necessarily a bad thing. Ironically, belief-perseverance is one of the reasons science has advanced as far as it has: scientific theories are not perfect, and the only way to make real progress with a theory is to be committed to it. Virtually every scientific theory has some sort of problem with the scientific evidence. Sometimes the problems are explained away by ad hoc hypotheses, sometimes scientists wait for the problems to be solved eventually, sometimes the problems go unnoticed, sometimes they are simply ignored, and from time to time a theory is kept because there is no better alternative. If science abandoned every theory that faced contradictory evidence, science would barely have any theories at all. Furthermore, if a theory's problems are eventually solved, we have tenacity to thank for preventing the theory's premature abandonment. Besides that, consider a hypothetical case. Suppose a scientist who possesses no tenacity writes a paper for a scientific journal and points out all the ways a concept or experiment might be flawed. Such a paper is likely to be rejected. Part of a scientist's responsibility is to provide the most favorable case for his theories and to leave their criticism to other scientists. Belief-perseverance helps accomplish this, and it works well when science deals with theories that only a few scientific workers really care about: when significant tenacity toward a theory is limited to a single scientist or a small group, the theory can easily be weeded out. So some amount of tenacity is reasonable and is part of what makes science function. Nevertheless, tenacity can become a major problem when the majority of scientists fervently accept a scientific theory that does not have enough rational support.
Naturally, there comes a point at which tenacity becomes excessive and it is time to abandon a theory in favor of a different one with more evidence behind it. Unfortunately, there is no clear-cut, agreed-upon procedure for deciding when a scientific concept should be discarded. Feelings and other shaping principles play a part in deciding when that time has come, and scientists can sometimes disagree reasonably on the issue.
Another imperfection concerns observation itself. Because scientists are human, we cannot obtain completely objective observations even if total theoretical neutrality were possible. Thomas Huxley once believed, on the basis of direct observation, that he had discovered a substance halfway between a living organism and a dead one, and many other scientists made observations supporting that view; later, however, the substance was discovered to be purely mineral. Over a hundred independent observations corroborated René Blondlot’s concept of N-rays, but it was later discovered that there were no such things as N-rays. These are, of course, extreme cases, but they demonstrate that data are not totally uncontaminated by humans. In practice, data are somewhat subjective, both because shaping principles influence the data we perceive and because the mind tends to unconsciously fill in patterns based on those notions. Such human contamination is called the internal theoretical orientation of data. As a result, totally objective data cannot be obtained.
Besides honest confusion over data, there is also deliberate distortion. Often the scientist who commits fraud thinks he already knows the answer; some may have justified faking the data by thinking they were merely speeding up the process. One example is Cyril Burt, a psychologist who forged data on identical twins to support the idea that intelligence is inherited, possibly because finding thirty-three identical twins separated at birth would be rather difficult. A more famous case is that of Piltdown man, an alleged missing link in human evolution. This is also an example of the internal theoretical orientation of data, because the fraud was an obvious one and yet persisted for over forty years. Of course, such things do not happen all the time, but it should be noted that scientists are not perfectly moral beings either, and sometimes this can have a debilitating effect on science.
The notion that religion and science have constantly been at war is not without foundation. There have been religious people who disagreed with the scientific community (e.g. Biblical creationists), and many religious people once held views contrary to what is now accepted (such as the Catholic Church endorsing geocentrism). However, these sorts of events should not be overgeneralized. While many attempts have been made to show that religion is unhealthy for science (particularly in the 19th century), contemporary historians regard that work as more propaganda than legitimate history.
Even so, some believe that religion and science are utterly incompatible. Actually, that view is relatively recent, dating back not to Galileo but to the liberal theologians of the Enlightenment. (Incidentally, Galileo was not actually branded a heretic; the sentence he received was for disobeying orders.) Not every educated person believes that science is against religion. A growing number of people believe otherwise and have rational support for the idea that theology and science cannot be totally separated. Many scientists (including Newton, Faraday, and even Galileo) have been deeply religious. Beyond that, some scientists, including Newton, Boyle, Maxwell, and Pasteur, actually incorporated their religion into their scientific work. Clearly, religion and science are not always bitter enemies.
Also, the evidence suggests that religion (more specifically, the theistic philosophy stemming from the Christian worldview) was a significant factor in the birth of modern science, at least partly because it provided some unique philosophical principles that science requires. Why, for instance, should a rational investigation of nature be successful? Because a rationally orderly God created the universe. (Nature consistently operating in mathematical patterns would especially confirm this belief.) According to the Christian religion of that time and place, the universe is orderly, this orderly world can be known, and there is a motive to discover this order. Indeed, many of the founders of modern science were Christians trying to demonstrate that humanity lived in an orderly universe. Why should the investigation of nature be empirical? Because God could have created an orderly universe in more than one way. This mindset is rather different from that of classical atomism (which still had adherents in the 16th century), which held the metaphysical view of a universe dominated by chance events, a philosophy that hardly implied an orderly universe.
So what exactly is the scientific method? Although scientists certainly do something in their work, there really is no such thing as the scientific method, for a number of reasons. First, the majority opinion in the scientific community is often wrong, and someone departing from what the majority does can produce something scientifically useful, as has happened many times. Second, science has many specialized fields, and scientists in those fields require certain craft skills unique to that field to conduct experiments. Such experiments do not involve precise rules giving detailed instructions for each step, and what may appear to be misconduct to an outsider may actually be quite valid scientific practice in that field. Furthermore, rapid progress in science is more likely if scientists do not follow a single standardized method. Individual scientists have numerous ways of constructing and evaluating theories, which explains why there can be disagreements among scientists. The different shaping principles that interact with data can produce different results for each scientific worker, including different views on how scientists should approach things. Sometimes these differences help produce useful scientific revolutions; at times revolutions in science happen in large part because the shaping principles accepted by the majority change over time. Such great changes in shaping principles are another reason there has never been a single scientific method used by all scientists. Although there are some general objectives in science (e.g. finding scientific theories that are rationally supported), there are a number of ways to pursue them, and not every scientist shares the same method.
It does seem that science contains various imperfections and some serious limits on certainty. Many have pointed to the existence of working technology as a sign that we are on the right track. But just because technology works doesn’t necessarily mean that our theories of why it works are correct. Often, the reliability of technology depends more upon empirical regularities than upon explanatory concepts. For example, candles and light bulbs have worked and will continue to work even though our theories of why they work have changed over time (light as particles, waves, or some combination of the two; the rejection of the phlogiston theory of combustion, etc.). The underdetermination of theories applies to explaining the effectiveness of technology just as it does to any other data. Some have believed that science has been successful in acquiring knowledge, yet there really is no way of verifying this. Data are incapable of conclusively proving theories, and we can’t exactly read an omniscient “book of truth” to see how often our theories have been correct. Historically speaking, almost every theory in science is eventually discarded as wrong. Consequently, there have been so many false starts in science that it would be rather incredible if we were the ones who are finally on the right track, especially considering that the theories we have already discarded were not even conclusively falsified by the data. Even so, this is not to say science isn’t worth having around. On the contrary, science provides significant benefits for humanity. For one thing, science has helped us alleviate the struggle to survive. Whether or not we are on the right track, it seems clear that science is conducive to useful technology, and various aspects of science can serve the needs of people, helping us understand ourselves and even our place in the universe.
Although there is a very real possibility of being wrong, we can improve our chances of being right through the further accumulation of data. Despite all its imperfections and limitations, science may well be the best tool we have for investigating nature.