Organismal Biology


CHAPTER 3 - Scientific Method

Development of Science Methods  

 HISTORY - SCIENTIFIC METHOD

We accept today that science follows certain rules and processes that make it a dependable source of information, but those rules have not always been in place.  Until as recently as the 1600s, for instance, it was widely believed that living things could arise spontaneously from non-living, dead, or waste materials (this is called spontaneous generation), because people saw such materials "generate" living things such as mold or maggots, and no one thought to test whether this was truly what was happening.  In 1668, Italian naturalist Francesco Redi set out to test the idea with decaying meat in two containers:  one open to the air, the other sealed.  The meat in the open container eventually became infested with maggots.  When critics insisted that it was the sealing of the second container that kept spontaneous generation from occurring, Redi repeated the test with an open container and one covered with cheesecloth, through which air could circulate (he suspected what we now know, that flies were the actual source of the maggots), and the cheesecloth-covered sample produced no maggots.  Even so, spontaneous generation did not disappear all at once:  much later, when germs were first discovered and associated with illness, it was thought that they were a spontaneous product of sick tissues rather than independently-living organisms that reproduced in the body.

It was a long road from that basic test to today's scientific method, but some of the approach Redi used persists:  modern science is about testing suspected explanations of one's observations, which can be made directly through one's own senses, indirectly through instruments, or second-hand from someone else's direct observations.  An explanation for one or more observations is properly called a hypothesis.  A hypothesis should produce testable predictions or it isn't much use scientifically, and the tests are most reliably done under controlled conditions.  An additional concept is the null hypothesis, the ever-present possibility that your hypothesis is wrong.  Some experiment designers focus on this:  they design tests that try to rule out the null hypothesis, rather than tests that simply look for support for their own idea.

In biology, complete control over conditions is hard to achieve, but scientists still strive for it.  If no alternative exists, testing may be done in the field, with well-planned, organized series of observations that look for evidence of the hypothesis' predictions.  Controlled experiments may be done in a laboratory environment with different test groups, much as Redi did his experiment.  One group, the experimental group, is specifically set up to test some critical aspect (the variable) of the hypothesis;  another group, the control group, duplicates the experimental group but removes the variable (or, if that isn't possible, changes it in some known, significant way).  In Redi's second test, the experimental group was the cloth-covered containers (the cloth barrier, which let air circulate but blocked flies, was the variable), and the control group was the containers with no cloth over them.

Results, usually in some sort of number form (quantitative data, as opposed to non-numerical qualitative data), are collected from each group and compared.  The comparison is absolutely critical.  Just running an experimental group is possible - we could give a new headache remedy to a group of 100 people with headaches and record how much their symptoms improved - but how would you know whether your results were directly connected to your variable?  How many headaches would have improved on their own, or improved just because the subjects were given a pill and expected improvement (improvement based solely on expectations is called the placebo effect, a placebo being an "empty" treatment)?  In a proper experiment, a control group would be treated identically but given pills with the remedy ingredient removed;  the difference in effects between the two groups can then be attributed to the remedy itself.

 HEADACHE REMEDY EXAMPLE


PLAN / EXPERIMENTAL DESIGN:  200 people with headaches will be gathered in a comfortable setting.  The basics of the tests will be explained to them, including that they may be receiving a placebo treatment.  Each person will rate, on a scale of 0 (no headache) to 10 (the worst headache they've ever had), the severity of their headache just before they are given the treatment.  After one hour, they will be asked to rate their headache again;  the change for each individual will be added to the others in their group and averaged, and that average change will be compared between the groups.  Our prediction is that the experimental group average will be a larger negative number than the control group's.  The numbers are changes in headache strength from the beginning to the end of that first hour.


RESULTS:    Experimental Group Average Change:  -4.15
            Control Group Average Change:       ???


WHAT IF - The Control Group change is -4.00?  or +2.2?  or -1.33?
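
To make the arithmetic in the plan concrete, here is a minimal sketch in Python of how the averaged changes might be computed and compared.  The ratings below are invented illustration numbers, not data from any real trial.

def average_change(before, after):
    # Average per-person change in headache rating (after minus before);
    # improvement shows up as a negative number.
    changes = [a - b for b, a in zip(before, after)]
    return sum(changes) / len(changes)

# Hypothetical ratings for five people per group (0 = no headache, 10 = worst ever)
experimental_before = [7, 5, 8, 6, 9]
experimental_after  = [2, 1, 4, 2, 5]    # improvement after the real remedy
control_before      = [6, 7, 5, 8, 7]
control_after       = [5, 6, 5, 7, 6]    # smaller improvement from the placebo alone

exp_change  = average_change(experimental_before, experimental_after)
ctrl_change = average_change(control_before, control_after)

print(f"Experimental group average change: {exp_change:+.2f}")
print(f"Control group average change:      {ctrl_change:+.2f}")
print(f"Difference attributable to the remedy: {exp_change - ctrl_change:+.2f}")

The same subtraction applies to the what-if values above:  a control change of -4.00 leaves almost nothing attributable to the remedy, +2.2 would make the remedy look very effective by comparison, and -1.33 would leave a solid difference of about -2.8 in the remedy's favor.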


A conclusion is made based upon your results.  You may have strong evidence for your hypothesis, weak evidence, or evidence that your hypothesis is mistaken (the condition described by the null hypothesis).  Remember, a good scientist avoids saying that things are proved, because there may always be another, better explanation that would produce the same results.

  SOME ADDITIONAL DETAILS

Modern science is based upon a descendant of that original scientific method, with some additions and minor changes.  A good experiment should be clearly designed and stated, and reproducible, so that someone else running the same test will get approximately the same results.  Research also is generally subject to peer review, scrutiny by others in the same field, usually when results are being published (in peer-reviewed journals) but sometimes at other stages of the process.  Peer review can be a double-edged sword:  on the one hand, it should help to ensure that research is being properly done and that conclusions make sense;  on the other hand, one's peers may not be ready for innovative or unusual ideas or approaches.

Modern biology, including medical research, can be confusing for a number of reasons, especially for the general public.  Often different studies seem to be completely at odds with one another, when in reality they were not looking at the same thing, or the results were misinterpreted by the media.  How data is defined or collected can affect results (how would the headache study above be influenced if the rating system went from 1 = barely there to 10 = the worst headache you could imagine?), and experiments with living organisms are affected by a wide range of confounding factors, other things that might be influencing the results.  One of the most common confounding factors is pure chance - if the one mouse you've picked to test happens to be particularly prone to cancer, anything you test on it will look cancer-causing - which is why, whenever possible, test groups must be of sufficiently large size.  If you had used 100 mice, that one cancer-prone individual would not have significantly affected your averaged results, as the sketch below illustrates.  Conclusions based on a single instance or a very limited group are said to be based upon anecdotal evidence and are not considered to be reliable.  You know the basic logic here from real life:  just because you were lucky enough to get away with something once doesn't mean you should trust that it will always work that way.
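
As a rough illustration of why group size matters, here is a small Python sketch (with made-up tumor counts, purely hypothetical) showing how a single unusual individual dominates a small group's average but barely moves a large one's.

typical_count = 2     # tumors in a typical mouse in this invented example
outlier_count = 40    # tumors in one unusually cancer-prone mouse

def average_with_one_outlier(group_size):
    # Every mouse but one is typical; the last is the cancer-prone individual.
    counts = [typical_count] * (group_size - 1) + [outlier_count]
    return sum(counts) / group_size

print("Average with a 5-mouse group:  ", average_with_one_outlier(5))    # 9.6
print("Average with a 100-mouse group:", average_with_one_outlier(100))  # 2.38

In the 5-mouse group, the one odd individual nearly quintuples the average;  in the 100-mouse group, the same individual barely changes it.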

One very important part of experimental analysis is significance:  a form of statistics that estimates the odds of your results being mostly the result of chance, making it a measure of reliability.  There are pitfalls even in what seems like straightforward math, where values can be used in conveniently chosen ways.
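
One common way of estimating significance is a permutation test, sketched below in Python with invented headache-change numbers:  it asks how often a difference as large as the observed one would appear if the group labels were shuffled purely at random.

import random

# Hypothetical per-person changes in headache rating (invented numbers)
experimental = [-5, -4, -6, -3, -5, -4, -5, -3]   # received the real remedy
control      = [-1, -2,  0, -1, -2, -1,  0, -1]   # received the placebo

def mean(values):
    return sum(values) / len(values)

observed_diff = mean(experimental) - mean(control)

pooled = experimental + control
n_exp = len(experimental)
as_extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)                        # pretend the labels mean nothing
    shuffled_diff = mean(pooled[:n_exp]) - mean(pooled[n_exp:])
    if abs(shuffled_diff) >= abs(observed_diff):
        as_extreme += 1

# The fraction of random shuffles that match or beat the real difference
# estimates the probability of getting such a result by chance alone (a p-value).
print(f"Observed difference: {observed_diff:+.2f}")
print(f"Estimated p-value:   {as_extreme / trials:.4f}")

A small p-value (conventionally below 0.05) means chance alone is an unlikely explanation for the difference;  it does not, by itself, prove the hypothesis, for the reasons given above.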

A major potential confounding factor is bias.  Bias makes people either see things that aren't really there or refuse to see things that are.  This may appear at any stage of experimentation.  The philosophical concept of postmodernism deals with the reality that any human endeavor is shaped by the humans' understanding and expectations, built up from what they have been taught and experienced through their lifetimes.  As a scientist, you're supposed to be aware of your own prejudices and avoid them, but this is not easy.  A related issue is ethics:  sometimes you know a way to test a hypothesis, but it isn't morally right to attempt it.  This is such a critical area that many laboratories, especially medical labs, have ethics boards to review experiment designs and give feedback on them or veto them outright.

In some cases, researchers can't directly test what they want to.  For ethical reasons, animal models must be used as substitutes for human subjects or for animals that can't be studied.  There are, of course, built-in confounding factors in substituting one animal for another.  These are worse with cell culture models, which test simple aggregates of cells in a lab.  In cases that are too big, too long-term, or otherwise impractical, computer models simulate complex systems, but the question is always whether there's a deep enough understanding of the system to actually simulate it effectively.

Obviously, if a test subject knew they were receiving a placebo, that would influence their responses;  this is why they are not told, producing what is called a blind test.  It was determined decades ago, however, that if the people giving out the treatments themselves knew which were real and which were placebos, they tended to treat the patients differently, sending subtle messages that might alter patient responses and results.  To eliminate those confounding factors, modern drug tests are double-blind:  those giving the treatments deal with numbered samples packaged and recorded elsewhere, not knowing which are real and which are not - there's no way they can alert the patients, even unconsciously, if they don't know which dose is which.  In some cases, the data is analyzed by a statistician who has no idea who belongs to which group - this is a triple-blind test.

The placebo effect is a type of experimental artifact:  a result that is the product of some element of the experiment's own design.  You have to administer a treatment, and that act by itself can produce effects, so it has to be controlled for.  In many cases, how you do an experiment produces effects that you need to be aware of, and you may have to change an experiment's design or set up specific controls to factor those effects in, as with placebo groups.

A researcher tries to recognize potential confounding factors while designing an experiment, and either eliminate them or set up separate control tests to determine or eliminate their influence, but researchers can't anticipate everything.  Often peer review will reveal a possible confounding factor that no one had recognized, and it's back to redesigning the test.

Additional Information Links

A discussion about making qualitative data - ancient texts - quantitative for comparison purposes.

Why is the placebo effect stronger now than it used to be?

A blog about homeopathy trials that does a nice job explaining the requirements of medical testing.

An article with a historical perspective on how basic science works - better to be wrong than to let somebody fake your evidence.

An interesting perspective piece on science and values.

Research and the importance of being stupid.

A fairly bizarre page on research done with marshmallow peeps that sort of follows scientific method but uses groups that are too small to eliminate chance as a confounding factor.

Bread is dangerous!!

Several views of ideas that persisted long after being scientifically shown to be false.
 

Terms and Concepts -
Terms are in the order they appear.

Spontaneous generation
Francesco Redi
Scientific method
Null hypothesis
Observation
Hypothesis
Predictions
Field tests
Controlled experiments
Experimental group
Variable
Control group
Quantitative vs qualitative data
Placebo effect
Reproducibility
Peer review
Confounding factors
Chance, Role of
Test Group - Need for Numbers / Size

Anecdotal Evidence
Significance
Bias in Experiments
Postmodernism
Ethics
Models
Blind & Double-blind tests

Placebo effect
Artifacts


GO ON TO CHAPTER FOUR - 

EVOLUTION
 

Organismal Biology

Copyright 2003 - 2021, Michael McDarby.

Reproduction and/or dissemination without permission is prohibited.  Linking to these pages is fine.
