Experimentation and Experimental Design

Image Copyright © Sidney Harris. sciencecartoonsplus.com. Reprinted with Permission.

There are two basic types of scientific studies. An experiment imposes a treatment or experimental condition on a group of objects or subjects to observe the response. An observational study involves collecting and analyzing data under existing native conditions with no application of an experimental treatment.

The validity of a scientific study is directly affected by its design. Therefore, the design of the study must be thoroughly planned before any experimentation takes place. Planning should be done with both the hypothesis and the type of statistical analysis to be performed in mind.

Proper experimental design entails, among other things, three important features: a clear definition of all variables, proper construction of experimental and control groups, and measurement repetition.

Clear Definition of All Variables. The specific objective of a typical scientific study is to find evidence that an independent variable affects a dependent variable. Thus, in a properly designed experiment, the independent, dependent and controlled variables must be clearly defined. The independent variable is the variable deliberately manipulated by the experimenter. The dependent variables are those thought to be affected by, and therefore dependent on, the independent variable; they are the variables measured or observed. Controlled variables are all other variables, deliberately kept constant to prevent them from affecting either the dependent or independent variables.

Proper Construction of Experimental and Control Groups. No matter what the specific objectives of the experiment, when studying a population of inanimate objects or living things, it is usually far more practical to study a subset of the population, called a sample, rather than every member of the entire population. The experimental and control groups of the study are representative samples of a population that are being experimentally manipulated in specific ways for the purpose of observing a response. Typically, the only difference among the groups is the level of the independent variable. In the control group, the independent variable is the baseline level, as determined by the experimenter. In each experimental group, the independent variable is set to a level different from the baseline. The controlled variables are the same among all groups, and the dependent variables are measured in exactly the same way among all groups.

In a valid experiment, it is critical that each group be truly representative of the population. This requires the random selection of individuals or objects from the population to create the groups. Thus, randomization, the process of randomly choosing objects or individuals from a population to create the groups, is an important part of group construction.
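As an illustration, randomization can be sketched in a few lines of Python. The subject names, group count and seed below are hypothetical, invented purely for this example.

```python
import random

def randomize_groups(subjects, n_groups, seed=None):
    """Randomly assign subjects to n_groups groups of (nearly) equal size."""
    rng = random.Random(seed)     # seeded for a reproducible assignment
    shuffled = list(subjects)     # copy so the original list is untouched
    rng.shuffle(shuffled)
    # deal the shuffled subjects into groups round-robin, like dealing cards
    return [shuffled[i::n_groups] for i in range(n_groups)]

# Hypothetical example: 12 seedlings split into one control and two treatment groups
seedlings = [f"seedling_{i}" for i in range(1, 13)]
control, low_dose, high_dose = randomize_groups(seedlings, 3, seed=42)
```

Because every subject has an equal chance of landing in any group, pre-existing differences among subjects are spread across the groups rather than concentrated in one of them.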

Group size, or sample size, is another consideration. The number of individual objects or organisms in a group must be sufficient to meet the statistical requirements of the study. While the exact number depends on the design and the specific statistical analyses to be performed, a good rule of thumb is that a minimum of 6 individual objects or organisms be placed in each treatment group. Fewer than 6 objects per group reduces the statistical strength of the data, while more than 6 objects per group only slightly increases it.
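The diminishing return behind this rule of thumb can be illustrated with the standard error of the mean, which shrinks in proportion to 1/√n. The measurement standard deviation used here is an arbitrary assumed value, not a figure from the text.

```python
import math

sigma = 1.0  # assumed standard deviation of a single measurement (arbitrary units)
for n in (3, 6, 12, 24):
    sem = sigma / math.sqrt(n)  # standard error of the mean for n measurements
    print(f"n = {n:2d}: standard error of the mean = {sem:.3f}")
```

Going from 3 to 6 objects per group reduces the standard error substantially, while each further doubling buys a smaller absolute improvement.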

Measurement Repetition. All measurements performed in a study are subject to experimental error. Experimental errors are not personal mistakes or miscalculations made by the experimenter through lack of care; such blunders are not legitimate experimental errors and should not occur in any experiment. Rather, experimental errors are inherent to the measurement itself and cannot be eliminated simply by being more careful.

There are two types of experimental errors: systematic errors and random errors. Systematic errors are errors made in the same way and to the same extent every time a measurement is taken. They yield results that differ from the true value by the same amount and therefore affect the accuracy of a measurement. Common sources of systematic errors are faulty calibration or poor maintenance of an instrument, including systematic errors associated with pipets or pipetting, and parallax error by the user, an error that results from reading an instrument at an angle, giving a reading that is consistently high or consistently low. In this age of digital instrument readouts, parallax error is a rare problem.

Random errors are errors that vary from measurement to measurement. They are due to random, unpredictable variations in the measurement process, and they yield results that fluctuate above and below the true or accepted value; thus, they affect the precision of a measurement. Common sources of random errors are random variation in pipetting, estimating a measurement that falls between the graduations on an instrument, guessing a measurement from a fluctuating instrument reading, and taking a measurement at the limit of an instrument's reliability and sensitivity.
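The distinction between the two error types can be made concrete with a small simulation. The pipet bias and noise level below are invented for illustration, not taken from any real instrument.

```python
import random
import statistics

random.seed(0)        # fixed seed so the simulation is reproducible
true_value = 10.00    # hypothetical true volume in mL
bias = 0.15           # systematic error: a miscalibrated pipet adds the
                      # same offset to every delivery
noise_sd = 0.05       # random error: unpredictable scatter in each reading

readings = [true_value + bias + random.gauss(0, noise_sd) for _ in range(20)]

# The mean is displaced from the true value by roughly the bias (accuracy),
# while the spread of the readings reflects the random error (precision).
print(f"mean = {statistics.mean(readings):.3f} mL  (true value {true_value} mL)")
print(f"sd   = {statistics.stdev(readings):.3f} mL")
```

Note that averaging many readings narrows the scatter caused by random error, but the systematic offset remains in the mean no matter how many readings are taken.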

Measurements subject to systematic errors cannot be improved by repeating those measurements. However, measurements subject to random errors can be improved by repeating those measurements several times and by refining the measurement method or technique. Thus, in any well-designed study, measurement repetition is an important feature deliberately incorporated into the study. And when a measurement is reported, the arithmetic mean and standard deviation of each set of repeated measurements should be provided (see Statistics for information on mean and standard deviation).
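Following this reporting advice, the mean and standard deviation of a set of repeated measurements can be computed with Python's standard library. The absorbance readings below are hypothetical.

```python
import statistics

# Five repeated absorbance readings of the same sample (hypothetical data)
readings = [0.482, 0.479, 0.485, 0.481, 0.478]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)  # sample standard deviation, n - 1 denominator
print(f"Report the measurement as {mean:.3f} ± {sd:.3f}")  # → 0.481 ± 0.003
```

Reporting the standard deviation alongside the mean lets a reader judge the precision of the measurement at a glance.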



CellBiologyOLM is authored by Stephen Gallik, Ph. D.| Copyright © 2011 by Stephen Gallik, Ph. D. | Licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License