10 Outrageous Experiments Conducted on Humans
Prisoners, the disabled, the physically and mentally ill, the poor -- these are all groups once considered fair game for use as research subjects. And if researchers didn't want to get permission, they didn't have to: many doctors and scientists conducted their experiments on people who were unwilling to participate or who didn't know they were participating.
Forty years ago the U.S. Congress changed the rules; informed consent is now required for any government-funded medical study involving human subjects. But before 1974 the ethics of using humans in research experiments were, let's say, a little loose, and the exploitation and abuse of human subjects was often alarming. We begin our list with one of the most famous instances of exploitation, a study that eventually helped change the public view about the lack of consent in the name of scientific advancement.
- Tuskegee Syphilis Study
- The Nazi Medical Experiments
- Watson's 'Little Albert' Experiment
- The Monster Study of 1939
- Stateville Penitentiary Malaria Study
- The Aversion Project in South Africa
- Milgram Shock Experiments
- CIA Mind-Control Experiments (Project MK-Ultra)
- Unit 731
- The Human Vivisections of Herophilus
10: Tuskegee Syphilis Study
Syphilis was a major public health problem in the 1920s, and in 1928 the Julius Rosenwald Fund, a charity organization, launched a public healthcare project for blacks in the American rural south. Sounds good, right? It was, until the Great Depression rocked the U.S. in 1929 and the project lost its funding. Changes were made to the program; instead of treating health problems in underserved areas, in 1932 poor black men living in Macon County, Alabama, were instead enrolled in a program to treat what they were told was their "bad blood" (a term that, at the time, was used in reference to everything from anemia to fatigue to syphilis). They were given free medical care, as well as food and other amenities such as burial insurance, for participating in the study. But they didn't know it was all a sham. The men in the study weren't told that they were recruited for the program because they were actually suffering from the sexually transmitted disease syphilis, nor were they told they were taking part in a government experiment studying untreated syphilis, the "Tuskegee Study of Untreated Syphilis in the Negro Male." That's right: untreated.
Despite thinking they were receiving medical care, subjects were never actually properly treated for the disease. This went on even after penicillin hit the scene and became the go-to treatment for the infection in 1945, and after Rapid Treatment Centers were established in 1947. Despite concerns raised about the ethics of the Tuskegee Syphilis Study as early as 1936, the study didn't actually end until 1972 after the media reported on the multi-decade experiment and there was subsequent public outrage.
9: The Nazi Medical Experiments
During WWII, the Nazis performed medical experiments on adults and children imprisoned in the Dachau, Auschwitz, Buchenwald and Sachsenhausen concentration camps. The accounts of abuse, mutilation, starvation and torture read like a grisly compilation of all nine circles of hell. Prisoners in these death camps were subjected to heinous crimes under the guise of military advancement, medical and pharmaceutical advancement, and racial and population advancement.
Jews were subjected to experiments intended to benefit the military, including hypothermia studies where prisoners were immersed in ice water in an effort to ascertain how long a downed pilot could survive in similar conditions. Some victims were only allowed sea water, a study of how long pilots could survive at sea; these subjects, not surprisingly, died of dehydration. Victims were also exposed to high altitude in decompression chambers -- often followed with brain dissection on the living -- to study high-altitude sickness and how pilots would be affected by atmospheric pressure changes.
Effectively treating war injuries was also a concern for the Nazis, and pharmaceutical testing went on in these camps. Sulfanilamide was tested as a new treatment for war wounds. Victims were inflicted with wounds that were then intentionally infected. Infections and poisonings were also studied on human subjects. Tuberculosis (TB) was injected into prisoners in an effort to better understand how to immunize against the infection. Experiments with poison, to determine how fast subjects would die, were also on the agenda.
The Nazis also performed genetically and racially motivated sterilizations and artificial inseminations, and conducted experiments on twins and people of short stature.
8: Watson's 'Little Albert' Experiment
In 1920 John Watson, along with graduate student Rosalie Rayner, conducted an emotional-conditioning experiment on a nine-month-old baby -- whom they nicknamed "Albert B" -- at Johns Hopkins University in an effort to prove their theory that we're all born as blank slates that can be shaped. The child's mother, a wet nurse who worked at the hospital, was paid one dollar for allowing her son to take part.
The "Little Albert" experiment went like this: Researchers first introduced the baby to a small, furry white rat, of which he initially had no fear. (According to reports, he didn't really show much interest at all.) Then they re-introduced him to the rat while a loud sound rang out. Over and over, "Albert" was exposed to the rat and startling noises until he became frightened any time he saw any small, furry animal (rats, for sure, but also dogs and monkeys) regardless of noise.
Who exactly "Albert" was remained unknown until 2010, when his identity was revealed to be Douglas Merritte. Merritte, it turns out, wasn't a healthy subject: He showed signs of behavioral and neurological impairment, never learned to talk or walk, and only lived to age six, dying from hydrocephalus (water on the brain). He also suffered from a bacterial meningitis infection he may have acquired accidentally during treatments for his hydrocephalus, or, as some theorize, may have been -- horrifyingly -- intentionally infected as part of another experiment.
In the end, Merritte was never deconditioned, and because he died at such a young age no one knows if he continued to fear small furry things post-experiment.
7: The Monster Study of 1939
Today we understand that stuttering has many possible causes. It may run in some families, an inherited genetic quirk of the language center of the brain. It may also occur because of a brain injury, including stroke or other trauma. Some young children stutter when they're learning to talk, but outgrow the problem. In some rare instances, it may be a side effect of emotional trauma. But you know what it's not caused by? Criticism.
In 1939 Mary Tudor, a graduate student at the University of Iowa, and her faculty advisor, speech expert Wendell Johnson, set out to prove stuttering could be taught through negative reinforcement -- that it's learned behavior. Over four months, 22 orphaned children were told they would be receiving speech therapy, but in reality they became subjects in a stuttering experiment; only about half were actually stutterers, and none received speech therapy.
During the experiment the children were split into four groups:
- Half of the stutterers were given negative feedback.
- The other half of stutterers were given positive feedback.
- Half of the non-stutterers were told they were beginning to stutter and were criticized.
- The other half of non-stutterers were praised.
The only significant impact the experiment had was on that third group; these kids, despite never actually developing a stutter, began to change their behavior, exhibiting low self-esteem and adopting the self-conscious behaviors associated with stutterers. And those who did stutter didn't cease doing so regardless of the feedback they received.
6: Stateville Penitentiary Malaria Study
It's estimated that between 60 and 65 percent of American soldiers stationed in the South Pacific during WWII suffered from a malarial infection at some point during their service. For some units the infection proved deadlier than enemy forces, so finding an effective treatment was a high priority [source: Army Heritage Center Foundation]. Safe anti-malarial drugs were seen as essential to winning the war.
Beginning in 1944 and spanning two years, more than 400 prisoners at the Stateville Penitentiary in Illinois were subjects in an experiment aimed at finding an effective drug against malaria. Prisoners taking part were infected with malaria and then treated with experimental anti-malarial drugs. The experiment had no hidden agenda, and its unethical methodology didn't seem to bother the American public, who were united in winning WWII and eager to bring the troops home safe and healthy. The intent of the experiments wasn't hidden from the subjects, who were praised at the time for their patriotism and in many instances given shorter prison sentences in return for their participation.
5: The Aversion Project in South Africa
If you were living during the apartheid era in South Africa, you lived under state-regulated racial segregation. If that itself wasn't difficult enough, the state also controlled your sexuality.
The South African government upheld strict anti-homosexual laws. If you were gay you were considered a deviant — and your homosexuality was also considered a disease that could be treated. Even after homosexuality was no longer classified as a mental illness and aversion therapy had been debunked as a cure, psychiatrists and military physicians in the South African Defence Force (SADF) clung to the outdated theories and treatments. In particular, aversion therapy techniques were used on prisoners and on South Africans forced to join the military under the conscription laws of the time.
At Ward 22 of 1 Military Hospital in Voortrekkerhoogte, Pretoria, between 1969 and 1987, attempts were made to "cure" perceived deviants. Gay men and lesbians were drugged and subjected to electric shock aversion therapy: they were shown aversion stimuli (same-sex erotic photos), followed by erotic photos of the opposite sex after the shock. When the technique didn't work (and it absolutely didn't), victims were treated with hormone therapy, which in some cases included chemical castration. In addition, an estimated 900 men and women underwent gender reassignment surgery when subsequent efforts to "reorient" them failed — most without consent, and some left unfinished [source: Kaplan].
4: Milgram Shock Experiments
Ghostbuster Peter Venkman, seen in the fictional film conducting ESP/electro-shock experiments on college students, was likely inspired by social psychologist Stanley Milgram's famous series of shock experiments conducted in the early 1960s. During Milgram's experiments, "teachers" — Americans recruited for a Yale study they thought was about memory and learning — were told to read lists of words to "learners" (actors, although the teachers didn't know that). Each teacher was instructed to press a lever delivering a shock to their "learner" every time the learner made a mistake on word-matching quizzes. Teachers believed the voltage increased with each mistake, ranging from 15 to 450 volts; roughly two-thirds of teachers shocked learners at the highest voltage, continuing to deliver jolts at the instruction of the experimenter.
In reality, this wasn't an experiment about memory and learning; rather, it was about how obedient we are to authority. No shocks were actually given.
Today, Milgram's shock experiments continue to be controversial; while they're criticized for their lack of realism, others point to the results as important to how humans behave when under duress. In 2010 the results of Milgram's study were repeated — with about 70 percent of teachers obediently administering what they believed to be the highest voltage shocks to their learners.
3: CIA Mind-Control Experiments (Project MK-Ultra)
If you're familiar with "Men Who Stare at Goats" or "The Manchurian Candidate" then you know: There was a period in the CIA's history when they performed covert mind-control experiments. If you thought it was fiction, it wasn't.
During the Cold War the CIA started researching ways they could turn Americans into CIA-controlled "superagents," people who could carry out assassinations and who wouldn't be affected by enemy interrogations. Under what was known as the MK-ULTRA project, CIA researchers experimented on unsuspecting American (and Canadian) citizens by slipping them psychedelic drugs, including LSD, PCP and barbiturates, as well as additional — and additionally illegal — methods such as hypnosis, and, possibly, chemical, biological, and radiological agents. Universities participated, mostly as a delivery system, also without their knowledge. The U.S. Department of Veterans Affairs estimates 7,000 soldiers were also involved in the research, without their consent.
The project endured for more than 20 years, during which the agency spent about $20 million. There was one death tied to the project, although more were suspected; in 1973 the CIA destroyed what records were kept.
2: Unit 731
Using biological warfare was banned by the Geneva Protocol in 1925, but Japan rejected the ban. If germ warfare was effective enough to be banned, it must work, military leaders believed. Unit 731, a secret unit in a secret facility — publicly known as the Epidemic Prevention and Water Supply Unit — was established in Japanese-controlled Manchuria, where by the mid-1930s Japan began experimenting with pathogenic and chemical warfare and testing on human subjects. There, military physicians and officers intentionally exposed victims to infectious diseases including anthrax, bubonic plague, cholera, syphilis, typhus and other pathogens, in an effort to understand how they affected the body and how they could be used in bombs and attacks in WWII.
In addition to working with pathogens, Unit 731 conducted experiments on people, including — but certainly not limited to — dissections and vivisections on living humans, all without anesthesia (the experimenters believed using it would skew the results of the research).
Many of the subjects were Chinese civilians and prisoners of war, but also included Russian and American victims among others — basically, anyone who wasn't Japanese was a potential subject. Today it's estimated that about 100,000 people were victims within the facility, but when you include the germ warfare field experiments (such as reports of Japanese planes dropping plague-infected fleas over Chinese villages and poisoning wells with cholera) the death toll climbs to estimates closer to 250,000, maybe more.
Believe it or not, after WWII the U.S. granted immunity to those involved in these war crimes committed at Unit 731 as part of an information exchange agreement — and until the 1980s, the Japanese government refused to admit any of this even happened.
1: The Human Vivisections of Herophilus
Ancient physician Herophilus is considered the father of anatomy. And while he made significant discoveries during his practice, it's how he learned about internal workings of the human body that lands him on this list.
Herophilus practiced medicine in Alexandria, Egypt, and during the reign of the first two Ptolemaic pharaohs was allowed, at least for about 30 to 40 years, to dissect human bodies, which he did, publicly, along with contemporary Greek physician and anatomist Erasistratus. Under Ptolemy I and Ptolemy II, criminals could be sentenced to dissection and vivisection as punishment, and it's said the father of anatomy not only dissected the dead but also performed vivisection on an estimated 600 living prisoners [source: Elhadi].
Herophilus made great strides in the study of human anatomy — especially the brain, eyes, liver, and the circulatory, nervous and reproductive systems — during a time in history when dissecting human cadavers was considered an act of desecration of the body (there were no autopsies conducted on the dead, although mummification was popular in Egypt at the time). And, then as now, performing vivisection on living bodies was considered butchery.
Frequently Asked Questions
- How have these experiments influenced current ethical standards in research?
- What protections are in place today to prevent similar unethical research on humans?
There is no denying that involving living, breathing humans in medical studies has produced some invaluable results, but there's that one medical saying most of us know, even if we're not in a medical field: first, do no harm (or, if you're fancy, primum non nocere).
Related Articles
- What will medicine consider unethical in 100 years?
- How Human Experimentation Works
- Top 5 Crazy Government Experiments
- 10 Cover-ups That Just Made Things Worse
- 10 Really Smart People Who Did Really Dumb Things
- How Scientific Peer Review Works
More Great Links
- Journal of Clinical Investigation, 1948: "Procedures Used at Stateville Penitentiary for the Testing of Potential Antimalarial Agents"
- Stanley Milgram: "Behavioral Study of Obedience"
- Alving, Alf S. "Procedures Used At Stateville Penitentiary For The Testing Of Potential Antimalarial Agents." Journal of Clinical Investigation. Vol. 27, No. 3 (part 2). Pages 2-5. May 1948. (Aug. 10, 2014) http://www.jci.org/articles/view/101956
- Army Heritage Center Foundation. "Education Materials Index: Malaria in World War II." (Aug. 10, 2014) http://www.armyheritage.org/education-and-programs/educational-resources/education-materials-index/50-information/soldier-stories/182-malaria-in-world-war-ii
- Bartlett, Tom. "A New Twist in the Sad Saga of Little Albert." The Chronicle of Higher Education. Jan. 25, 2012. (Aug. 10, 2014) http://chronicle.com/blogs/percolator/a-new-twist-in-the-sad-saga-of-little-albert/28423
- Blass, Thomas. "The Man Who Shocked The World." Psychology Today. June 13, 2012. (Aug. 10, 2014) http://www.psychologytoday.com/articles/200203/the-man-who-shocked-the-world
- Brick, Neil. "Mind Control Documents & Links." Stop Mind Control and Ritual Abuse Today (S.M.A.R.T.). (Aug. 10, 2014) https://ritualabuse.us/mindcontrol/mc-documents-links/
- Centers for Disease Control and Prevention. "U.S. Public Health Service Syphilis Study at Tuskegee: The Tuskegee Timeline." Dec. 10, 2013. (Aug. 10, 2014) http://www.cdc.gov/tuskegee/timeline.htm
- Cohen, Baruch. "The Ethics Of Using Medical Data From Nazi Experiments." Jlaw.com - Jewish Law Blog. (Aug. 10, 2014) http://www.jlaw.com/Articles/NaziMedEx.html
- Collins, Dan. "'Monster Study' Still Stings." CBS News. Aug. 6, 2003. (Aug. 10, 2014) http://www.cbsnews.com/news/monster-study-still-stings/
- Comfort, Nathaniel. "The prisoner as model organism: malaria research at Stateville Penitentiary." Studies in History and Philosophy of Biological and Biomedical Sciences. Vol. 40, No. 3. Pages 190-203. September 2009. (Aug. 10, 2014) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2789481/
- DeAngelis, T. "'Little Albert' regains his identity." Monitor on Psychology. Vol. 41. Page 10. 2010. (Aug. 10, 2014) http://www.apa.org/monitor/2010/01/little-albert.aspx
- Elhadi, Ali M. "The Journey of Discovering Skull Base Anatomy in Ancient Egypt and the Special Influence of Alexandria." Neurosurgical Focus. Vol. 33, No. 2. 2012. (Aug. 10, 2014) http://www.medscape.com/viewarticle/769263_5
- Fridlund, Alan J. "Little Albert: A neurologically impaired child." History of Psychology. Vol. 15, No. 4. Pages 302-327. November 2013. (Aug. 10, 2014) http://psycnet.apa.org/psycinfo/2012-01974-001/
- Harcourt, Bernard E. "Making Willing Bodies: Manufacturing Consent Among Prisoners and Soldiers, Creating Human Subjects, Patriots, and Everyday Citizens - The University of Chicago Malaria Experiments on Prisoners at Stateville Penitentiary." University of Chicago Law & Economics, Olin Working Paper No. 544; Public Law Working Paper No. 341. Feb. 6, 2011. (Aug. 10, 2014) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1758829
- Harris, Sheldon H. "Biological Experiments." Crimes of War Project. 2011. (Aug. 10, 2014) http://www.crimesofwar.org/a-z-guide/biological-experiments/
- Hornblum, Allen M. "They Were Cheap and Available: Prisoners as Research Subjects in Twentieth Century America." British Medical Journal. Vol. 315. Pages 1437-1441. 1997. (Aug. 10, 2014) http://gme.kaiserpapers.org/they-were-cheap-and-available.html
- Kaplan, Robert. "The Aversion Project -- Psychiatric Abuses In The South African Defence Force During The Apartheid Era." South African Medical Journal. Vol. 91, no. 3. Pages 216-217. March 2001. (Aug. 10, 2014) http://archive.samj.org.za/2001%20VOL%2091%20Jan-Dec/Articles/03%20March/1.5%20THE%20AVERSION%20PROJECT%20-%20PSYCHIATRIC%20ABUSES%20IN%20THE%20SOUTH%20AFRICAN%20DEFENCE%20FORCE%20DURING%20THE%20APART.pdf
- Kaplan, Robert M. "Treatment of homosexuality during apartheid." British Medical Journal. Vol. 329, no. 7480. Pages 1415-1416. Dec. 18, 2004. (Aug. 10, 2014) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC535952/
- Kaplan, Robert M. "Treatment of homosexuality in the South African Defence Force during the Apartheid years ." British Medical Journal. February 20, 2004. (Aug. 10, 2014) http://www.bmj.com/rapid-response/2011/10/30/treatment-homosexuality-south-african-defence-force-during-apartheid-years
- Keen, Judy. "Legal battle ends over stuttering experiment." USA Today. Aug. 27, 2007. (Aug. 10, 2014) http://usatoday30.usatoday.com/news/nation/2007-08-26-stuttering_N.htm
- Kristof, Nicholas D. "Unmasking Horror -- A special report; Japan Confronting Gruesome War Atrocity." The New York Times. March 17, 1995. (Aug. 10, 2014) http://www.nytimes.com/1995/03/17/world/unmasking-horror-a-special-report-japan-confronting-gruesome-war-atrocity.html
- Landau, Elizabeth. "Studies show 'dark chapter' of medical research." CNN. Oct. 1, 2010. (Aug. 10, 2014) http://www.cnn.com/2010/HEALTH/10/01/guatemala.syphilis.tuskegee/
- Mayo Clinic. "Stuttering: Causes." Sept. 8, 2011. (Aug. 10, 2014) http://www.mayoclinic.org/diseases-conditions/stuttering/basics/causes/con-20032854
- Mayo Clinic. "Syphilis." Jan. 2, 2014. (Aug. 20, 2014) http://www.mayoclinic.org/diseases-conditions/syphilis/basics/definition/con-20021862
- McCurry, Justin. "Japan unearths site linked to human experiments." The Guardian. Feb. 21, 2011. (Aug. 10, 2014) http://www.theguardian.com/world/2011/feb/21/japan-excavates-site-human-experiments
- McGreal, Chris. "Gays tell of mutilation by apartheid army." The Guardian. July 28, 2000. (Aug. 10, 2014) http://www.theguardian.com/world/2000/jul/29/chrismcgreal
- Milgram, Stanley. "Behavioral Study of Obedience." Journal of Abnormal and Social Psychology. No. 67. Pages 371-378. 1963. (Aug. 10, 2014) http://wadsworth.cengage.com/psychology_d/templates/student_resources/0155060678_rathus/ps/ps01.html
- NPR. "Taking A Closer Look At Milgram's Shocking Obedience Study." Aug. 28, 2013. (Aug. 10, 2014) http://www.npr.org/2013/08/28/209559002/taking-a-closer-look-at-milgrams-shocking-obedience-study
- Rawlings, Nate. "Top 10 Weird Government Secrets: CIA Mind-Control Experiments." Time. Aug. 6, 2010. (Aug. 10, 2014) http://content.time.com/time/specials/packages/article/0,28804,2008962_2008964_2008992,00.html
- Reynolds, Gretchen. "The Stuttering Doctor's 'Monster Study'." The New York Times. March 16, 2003. (Aug. 10, 2014) http://www.nytimes.com/2003/03/16/magazine/the-stuttering-doctor-s-monster-study.html
- Ryall, Julian. "Human bones could reveal truth of Japan's 'Unit 731' experiments." The Telegraph. Feb. 15, 2010. (Aug. 10, 2014) http://www.telegraph.co.uk/news/worldnews/asia/japan/7236099/Human-bones-could-reveal-truth-of-Japans-Unit-731-experiments.html
- Science Channel - Dark Matters. "Project MKULTRA." (Aug. 10, 2014) http://www.sciencechannel.com/tv-shows/dark-matters-twisted-but-true/documents/project-mkultra.htm
- Shea, Christopher. "Stanley Milgram and the uncertainty of evil." The Boston Globe. Sept. 29, 2013. (Aug. 10, 2014) http://www.bostonglobe.com/ideas/2013/09/28/stanley-milgram-and-uncertainty-evil/qUjame9xApiKc6evtgQRqN/story.html
- Shermer, Michael. "What Milgram's Shock Experiments Really Mean." Scientific American. Oct. 16, 2012. (Aug. 10, 2014) http://www.scientificamerican.com/article/what-milgrams-shock-experiments-really-mean/
- Si-Yang Bay, Noel. "Greek anatomist Herophilus: the father of anatomy." Anatomy & Cell Biology. Vol. 43, No. 4. Pages 280-283. December 2010. (Aug. 10, 2014) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3026179/
- Stobbe, Mike. "Ugly past of U.S. human experiments uncovered." NBC News. Feb. 27, 2011. (Aug. 10, 2014) http://www.nbcnews.com/id/41811750/ns/health-health_care/t/ugly-past-us-human-experiments-uncovered
- Tuskegee University. "About the USPHS Syphilis Study." (Aug. 10, 2014) http://www.tuskegee.edu/about_us/centers_of_excellence/bioethics_center/about_the_usphs_syphilis_study.aspx
- Tyson, Peter. "Holocaust on Trial: The Experiments." PBS. October 2000. (Aug. 10, 2014) http://www.pbs.org/wgbh/nova/holocaust/experiside.html
- United States Holocaust Memorial Museum. "Nazi Medical Experiments." June 20, 2014. (Aug. 10, 2014) http://www.ushmm.org/wlc/en/article.php?ModuleId=10005168
- Van Zul, Mikki. "The Aversion Project." South African Medical Research Council. October 1999. (Aug. 10, 2014) http://www.mrc.ac.za/healthsystems/aversion.pdf
- Watson, John B.; and Rosalie Rayner. "Conditioned Emotional Reactions." Journal of Experimental Psychology. Vol. 3, No. 1. Pages 1-14. 1920. (Aug. 10, 2014) http://psychclassics.yorku.ca/Watson/emotion.htm
- Wiltse, LL. "Herophilus of Alexandria (325-255 B.C.). The father of anatomy." Spine. Vol. 23, no. 7. Pages 1904-1914. Sept. 1, 1998. (Aug. 10, 2014) http://www.ncbi.nlm.nih.gov/pubmed/9762750
- Working, Russell. "The trial of Unit 731." June 2001. (Aug. 10, 2014) http://www.japantimes.co.jp/opinion/2001/06/05/commentary/world-commentary/the-trial-of-unit-731/
- Zetter, Kim. "April 13, 1953: CIA OKs MK-ULTRA Mind-Control Tests." Wired. April 13, 2010. (Aug. 10, 2014) http://www.wired.com/2010/04/0413mk-ultra-authorized/
The Top 10 Science Experiments of All Time
These seminal experiments changed our understanding of the universe and ourselves.
Every day, we conduct science experiments, posing an “if” with a “then” and seeing what shakes out. Maybe it’s just taking a slightly different route on our commute home or heating that burrito for a few seconds longer in the microwave. Or it could be trying one more variation of that gene, or wondering what kind of code would best fit a given problem. Ultimately, this striving, questioning spirit is at the root of our ability to discover anything at all. A willingness to experiment has helped us delve deeper into the nature of reality through the pursuit we call science.
A select batch of these science experiments has stood the test of time in showcasing our species at its inquiring, intelligent best. Whether elegant or crude, and often with a touch of serendipity, these singular efforts have delivered insights that changed our view of ourselves or the universe.
Here are nine such successful endeavors — plus a glorious failure — that could be hailed as the top science experiments of all time.
Eratosthenes Measures the World
Experimental result: The first recorded measurement of Earth’s circumference
When: end of the third century B.C.
Just how big is our world? Of the many answers from ancient cultures, a stunningly accurate value calculated by Eratosthenes has echoed down the ages. Born around 276 B.C. in Cyrene, a Greek settlement on the coast of modern-day Libya, Eratosthenes became a voracious scholar — a trait that brought him both critics and admirers. The haters nicknamed him Beta, after the second letter of the Greek alphabet. University of Puget Sound physics professor James Evans explains the Classical-style burn: “Eratosthenes moved so often from one field to another that his contemporaries thought of him as only second-best in each of them.” Those who instead celebrated the multitalented Eratosthenes dubbed him Pentathlos, after the five-event athletic competition.
That mental dexterity landed the scholar a gig as chief librarian at the famous library in Alexandria, Egypt. It was there that he conducted his famous experiment. He had heard of a well in Syene, a Nile River city to the south (modern-day Aswan), where the noon sun shone straight down, casting no shadows, on the date of the Northern Hemisphere’s summer solstice. Intrigued, Eratosthenes measured the shadow cast by a vertical stick in Alexandria on this same day and time. He determined the angle of the sun’s light there to be 7.2 degrees, or 1/50th of a circle’s 360 degrees.
Knowing — as many educated Greeks did — Earth was spherical, Eratosthenes fathomed that if he knew the distance between the two cities, he could multiply that figure by 50 and gauge Earth’s curvature, and hence its total circumference. Supplied with that information, Eratosthenes deduced Earth’s circumference as 250,000 stades, a Hellenistic unit of length equaling roughly 600 feet. The span equates to about 28,500 miles, well within the ballpark of the correct figure of 24,900 miles.
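Eratosthenes' arithmetic fits in a few lines. Here is a minimal sketch checking the figures above; note that the 5,000-stade distance between Alexandria and Syene is implied by the text's result (250,000 stades divided by 50) rather than stated directly.

```python
# Sketch of Eratosthenes' calculation, using the figures given in the text.
shadow_angle_deg = 7.2                        # angle of the sun's rays at Alexandria
fraction_of_circle = 360 / shadow_angle_deg   # the arc is 1/50th of a full circle
distance_stades = 5_000                       # Alexandria to Syene (implied by the text)

circumference_stades = fraction_of_circle * distance_stades  # 250,000 stades

# Convert using 1 stade ≈ 600 feet and 5,280 feet per mile.
miles = circumference_stades * 600 / 5280
print(circumference_stades, round(miles))  # ~28,400 miles, the text's "about 28,500"
```

The same shadow-angle trick works with any two points on the same meridian, which is why the method generalizes so cleanly: one angle, one distance, one multiplication.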
Eratosthenes’ motive for getting Earth’s size right was his keenness for geography, a field whose name he coined. Fittingly, modernity has bestowed upon him one more nickname: father of geography. Not bad for a guy once dismissed as second-rate.
William Harvey Takes the Pulse of Nature
Experimental result: The discovery of blood circulation
When: Theory published in 1628
Boy, was Galen wrong.
The Greek physician-cum-philosopher proposed a model of blood flow in the second century that, despite being full of whoppers, prevailed for nearly 1,500 years. Among its claims: The liver constantly makes new blood from food we eat; blood flows throughout the body in two separate streams, one infused (via the lungs) with “vital spirits” from air; and the blood that tissues soak up never returns to the heart.
Overturning all this dogma took a series of often gruesome experiments.
High-born in England in 1578, William Harvey rose to become royal physician to King James I, affording him the time and means to pursue his greatest interest: anatomy. He first hacked away (literally, in some cases) at the Galenic model by exsanguinating — draining the blood from — test critters, including sheep and pigs. Harvey realized that if Galen were right, an impossible volume of blood, exceeding the animals’ size, would have to pump through the heart every hour.
To drive this point home, Harvey sliced open live animals in public, demonstrating their puny blood supplies. He also constricted blood flow into a snake's exposed heart by finger-pinching a main vein. The heart shrank and paled; when pierced, it poured forth little blood. By contrast, choking off the main exiting artery swelled the heart. Through studies of the slow heartbeats of reptiles and animals near death, he discerned the heart's contractions, and deduced that it pumped blood through the body in a circuit.
According to Andrew Gregory, a professor of history and philosophy of science at University College London, this was no easy deduction on Harvey’s part. “If you look at a heart beating normally in its normal surroundings, it is very difficult to work out what is actually happening,” he says.
Experiments with willing people, which involved temporarily blocking blood flow in and out of limbs, further bore out Harvey’s revolutionary conception of blood circulation. He published the full theory in a 1628 book, De Motu Cordis [The Motion of the Heart]. His evidence-based approach transformed medical science, and he’s recognized today as the father of modern medicine and physiology.
Gregor Mendel Cultivates Genetics
Experimental result: The fundamental rules of genetic inheritance
When: 1855-1863
A child, to varying degrees, resembles a parent, whether it’s a passing resemblance or a full-blown mini-me. Why?
The profound mystery behind the inheritance of physical traits began to unravel a century and a half ago, thanks to Gregor Mendel. Born in 1822 in what is now the Czech Republic, Mendel showed a knack for the physical sciences, though his farming family had little money for formal education. Following the advice of a professor, he joined the Augustinian order, a monastic group that emphasized research and learning, in 1843.
Ensconced at a monastery in Brno, the shy Gregor quickly began spending time in the garden. Fuchsias in particular grabbed his attention, their daintiness hinting at an underlying grand design. “The fuchsias probably gave him the idea for the famous experiments,” says Sander Gliboff, who researches the history of biology at Indiana University Bloomington. “He had been crossing different varieties, trying to get new colors or combinations of colors, and he got repeatable results that suggested some law of heredity at work.”
These laws became clear with his cultivation of pea plants. Using paintbrushes, Mendel dabbed pollen from one to another, precisely pairing thousands of plants with certain traits over a stretch of about seven years. He meticulously documented how matching yellow peas and green peas, for instance, always yielded a yellow plant. Yet mating these yellow offspring together produced a generation where a quarter of the peas gleamed green again. Ratios like these led to Mendel’s coining of the terms dominant (the yellow color, in this case) and recessive for what we now call genes, and which Mendel referred to as “factors.”
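Mendel's one-in-four ratio falls out of randomly pairing one "factor" from each parent. Here is a minimal Monte Carlo sketch of that logic (illustrative only; Mendel worked with real plants and meticulous bookkeeping, not simulation):

```python
import random

random.seed(0)  # fixed seed for a reproducible illustration

# 'Y' = dominant yellow factor, 'y' = recessive green factor.
# Both parents are hybrids (Yy), like Mendel's first-generation yellow peas.
def offspring(parent1=("Y", "y"), parent2=("Y", "y")):
    # Each parent passes on one of its two factors at random.
    return (random.choice(parent1), random.choice(parent2))

trials = 100_000
green = sum(1 for _ in range(trials) if offspring() == ("y", "y"))
print(f"green fraction: {green / trials:.3f}")  # hovers near 0.25, Mendel's one-in-four
```

Only the ("y", "y") pairing shows green, and it turns up in about a quarter of crosses, matching the ratios Mendel tallied by hand.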
He was ahead of his time. His studies received scant attention in their day, but decades later, when other scientists discovered and replicated Mendel’s experiments, they came to be regarded as a breakthrough.
“The genius in Mendel’s experiments was his way of formulating simple hypotheses that explain a few things very well, instead of tackling all the complexities of heredity at once,” says Gliboff. “His brilliance was in putting it all together into a project that he could actually do.”
Isaac Newton Eyes Optics
Experimental result: The nature of color and light
When: 1665-1666
Before he was that Isaac Newton — scientist extraordinaire and inventor of the laws of motion, calculus and universal gravitation (plus a crimefighter to boot) — plain ol’ Isaac found himself with time to kill. To escape a devastating outbreak of plague in his college town of Cambridge, Newton holed up at his boyhood home in the English countryside. There, he tinkered with a prism he picked up at a local fair — a “child’s plaything,” according to Patricia Fara, fellow of Clare College, Cambridge.
Let sunlight pass through a prism and a rainbow, or spectrum, of colors splays out. In Newton’s time, prevailing thinking held that light takes on the color from the medium it transits, like sunlight through stained glass. Unconvinced, Newton set up a prism experiment that proved color is instead an inherent property of light itself. This revolutionary insight established the field of optics, fundamental to modern science and technology.
Newton deftly executed the delicate experiment: He bored a hole in a window shutter, allowing a single beam of sunlight to pass through two prisms. By blocking some of the resulting colors from reaching the second prism, Newton showed that different colors refracted, or bent, differently through a prism. He then singled out a color from the first prism and passed it alone through the second prism; when the color came out unchanged, it proved the prism didn’t affect the color of the ray. The medium did not matter. Color was tied up, somehow, with light itself.
Partly owing to the ad hoc, homemade nature of Newton’s experimental setup, plus his incomplete descriptions in a seminal 1672 paper, his contemporaries initially struggled to replicate the results. “It’s a really, really technically difficult experiment to carry out,” says Fara. “But once you have seen it, it’s incredibly convincing.”
In making his name, Newton certainly displayed a flair for experimentation, occasionally delving into the self-as-subject variety. One time, he stared at the sun so long he nearly went blind. Another time, he wormed a long, thick needle under his eyelid, pressing on the back of his eyeball to gauge how it affected his vision. Although he had plenty of misses in his career — forays into occultism, dabbling in biblical numerology — Newton’s hits ensured his lasting fame.
Michelson and Morley Whiff on Ether
Experimental result: The way light moves
Say “hey!” and the sound waves travel through a medium (air) to reach your listener’s ears. Ocean waves, too, move through their own medium: water. Light waves are a special case, however. In a vacuum, with all media such as air and water removed, light somehow still gets from here to there. How can that be?
The answer, according to the physics en vogue in the late 19th century, was an invisible, ubiquitous medium delightfully dubbed the “luminiferous ether.” Working together at what is now Case Western Reserve University in Ohio, Albert Michelson and Edward W. Morley set out to prove this ether’s existence. What followed is arguably the most famous failed experiment in history.
The scientists’ hypothesis was thus: As Earth orbits the sun, it constantly plows through ether, generating an ether wind. When the path of a light beam travels in the same direction as the wind, the light should move a bit faster compared with sailing against the wind.
To measure the effect, minuscule though it would have to be, Michelson had just the thing. In the early 1880s, he had invented a type of interferometer, an instrument that brings sources of light together to create an interference pattern, like when ripples on a pond intermingle. A Michelson interferometer beams light through a one-way mirror. The light splits in two, and the resulting beams travel at right angles to each other. After some distance, they reflect off mirrors back toward a central meeting point. If the light beams arrive at different times, due to some sort of unequal displacement during their journeys (say, from the ether wind), they create a distinctive interference pattern.
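The shift the pair hunted for can be estimated from the classical ether model. The sketch below plugs in the commonly cited parameters of the 1887 apparatus (assumed here for illustration):

```python
# Expected fringe shift under the classical ether hypothesis (illustrative values).
L = 11.0             # effective optical path length in meters, via repeated reflections
v = 3.0e4            # Earth's orbital speed around the sun, m/s
c = 3.0e8            # speed of light, m/s
wavelength = 5.5e-7  # yellow light, meters

# Rotating the apparatus 90 degrees should shift the pattern by ~2L(v/c)^2 / wavelength fringes.
expected_shift = 2 * L * (v / c) ** 2 / wavelength
print(f"expected shift: {expected_shift:.2f} fringes")
```

The predicted shift of about 0.4 fringe was well within the instrument's sensitivity; what they actually observed was a small fraction of that, hence the famous null result.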
The researchers protected their delicate interferometer setup from vibrations by placing it atop a solid sandstone slab, floating almost friction-free in a trough of mercury and further isolated in a campus building’s basement. Michelson and Morley slowly rotated the slab, expecting to see interference patterns as the light beams synced in and out with the ether’s direction.
Instead, nothing. Light’s speed did not vary.
Neither researcher fully grasped the significance of their null result. Chalking it up to experimental error, they moved on to other projects. (Fruitfully so: In 1907, Michelson became the first American to win a Nobel Prize, for optical instrument-based investigations.) But the huge dent Michelson and Morley unintentionally kicked into ether theory set off a chain of further experimentation and theorizing that led to Albert Einstein’s 1905 breakthrough new paradigm of light, special relativity.
Marie Curie’s Work Matters
Experimental result: Defining radioactivity
Few women are represented in the annals of legendary scientific experiments, reflecting their historical exclusion from the discipline. Marie Sklodowska broke this mold.
Born in 1867 in Warsaw, she immigrated to Paris at age 24 for the chance to further study math and physics. There, she met and married physicist Pierre Curie, a close intellectual partner who helped her revolutionary ideas gain a foothold within the male-dominated field. “If it wasn’t for Pierre, Marie would never have been accepted by the scientific community,” says Marilyn B. Ogilvie, professor emeritus in the history of science at the University of Oklahoma. “Nonetheless, the basic hypotheses — those that guided the future course of investigation into the nature of radioactivity — were hers.”
The Curies worked together mostly out of a converted shed on the college campus where Pierre worked. For her doctoral thesis in 1897, Marie began investigating a newfangled kind of radiation, similar to X-rays and discovered just a year earlier. Using an instrument called an electrometer, built by Pierre and his brother, Marie measured the mysterious rays emitted by thorium and uranium. Regardless of the elements’ mineralogical makeup — a yellow crystal or a black powder, in uranium’s case — radiation rates depended solely on the amount of the element present.
From this observation, Marie deduced that the emission of radiation had nothing to do with a substance’s molecular arrangements. Instead, radioactivity — a term she coined — was an inherent property of individual atoms, emanating from their internal structure. Up until this point, scientists had thought atoms elementary, indivisible entities. Marie had cracked the door open to understanding matter at a more fundamental, subatomic level.
Curie was the first woman to win a Nobel Prize, in 1903, and one of a very select few people to earn a second Nobel, in 1911 (for her later discoveries of the elements radium and polonium).
“In her life and work,” says Ogilvie, “she became a role model for young women who wanted a career in science.”
Ivan Pavlov Salivates at the Idea
Experimental result: The discovery of conditioned reflexes
When: 1890s-1900s
Russian physiologist Ivan Pavlov scooped up a Nobel Prize in 1904 for his work with dogs, investigating how saliva and stomach juices digest food. While his scientific legacy will always be tied to doggie drool, it is the operations of the mind — canine, human and otherwise — for which Pavlov remains celebrated today.
Gauging gastric secretions was no picnic. Pavlov and his students collected the fluids that canine digestive organs produced, with a tube suspended from some pooches’ mouths to capture saliva. Come feeding time, the researchers began noticing that dogs who were experienced in the trials would start drooling into the tubes before they’d even tasted a morsel. Like numerous other bodily functions, the generation of saliva was considered a reflex at the time, an unconscious action only occurring in the presence of food. But Pavlov’s dogs had learned to associate the appearance of an experimenter with meals, meaning the canines’ experience had conditioned their physical responses.
“Up until Pavlov’s work, reflexes were considered fixed or hardwired and not changeable,” says Catharine Rankin, a psychology professor at the University of British Columbia and president of the Pavlovian Society. “His work showed that they could change as a result of experience.”
Pavlov and his team then taught the dogs to associate food with neutral stimuli as varied as buzzers, metronomes, rotating objects, black squares, whistles, lamp flashes and electric shocks. Pavlov never did ring a bell, however; credit an early mistranslation of the Russian word for buzzer for that enduring myth.
The findings formed the basis for the concept of classical, or Pavlovian, conditioning. It extends to essentially any learning about stimuli, even if reflexive responses are not involved. “Pavlovian conditioning is happening to us all of the time,” says W. Jeffrey Wilson of Albion College, fellow officer of the Pavlovian Society. “Our brains are constantly connecting things we experience together.” In fact, trying to “un-wire” these conditioned responses is the strategy behind modern treatments for post-traumatic stress disorder, as well as addiction.
Robert Millikan Gets a Charge
Experimental result: The precise value of a single electron’s charge
By most measures, Robert Millikan had done well for himself. Born in 1868 in a small town in Illinois, he went on to earn degrees from Oberlin College and Columbia University. He studied physics with European luminaries in Germany. He then joined the University of Chicago’s physics department, and even penned some successful textbooks.
But his colleagues were doing far more. The turn of the 20th century was a heady time for physics: In the span of just over a decade, the world was introduced to quantum physics, special relativity and the electron — the first evidence that atoms had divisible parts. By 1908, Millikan found himself pushing 40 without a significant discovery to his name.
The electron, though, offered an opportunity. Researchers had struggled with whether the particle represented a fundamental unit of electric charge, the same in all cases. It was a critical determination for further developing particle physics. With nothing to lose, Millikan gave it a go.
In his lab at the University of Chicago, he began working with containers of thick water vapor, called cloud chambers, and varying the strength of an electric field within them. Clouds of water droplets formed around charged atoms and molecules before descending due to gravity. By adjusting the strength of the electric field, he could slow down or even halt a single droplet’s fall, countering gravity with electricity. Find the precise strength where they balanced, and — assuming it did so consistently — that would reveal the charge’s value.
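The balance condition at the heart of the method is simply electric force against gravity, qE = mg. Here is a hedged numerical sketch with assumed example values (not Millikan's actual data; the field strength is chosen so this particular droplet balances with a single electron's charge):

```python
import math

rho_oil = 920.0  # oil density, kg/m^3 (assumed)
r = 1.0e-6       # droplet radius, meters (assumed)
g = 9.81         # gravitational acceleration, m/s^2
E = 2.36e5       # electric field strength, V/m (assumed, chosen to balance this droplet)

m = rho_oil * (4.0 / 3.0) * math.pi * r ** 3  # droplet mass from its volume
q = m * g / E                                 # charge needed to hold the droplet still
e = 1.602e-19                                 # accepted electron charge, coulombs
print(f"charge ≈ {q:.2e} C ≈ {q / e:.1f} electron charges")
```

Repeating such measurements over many droplets, the inferred charges cluster at integer multiples of one value, which is what it means for charge to be quantized.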
When it turned out water evaporated too quickly, Millikan and his students — the often-unsung heroes of science — switched to a longer-lasting substance: oil, sprayed into the chamber by a drugstore perfume atomizer.
The increasingly sophisticated oil-drop experiments eventually determined that the electron did indeed represent a unit of charge. They estimated its value to within whiskers of the currently accepted charge of one electron (1.602 × 10⁻¹⁹ coulombs). It was a coup for particle physics, as well as Millikan.
“There’s no question that it was a brilliant experiment,” says Caltech physicist David Goodstein. “Millikan’s result proved beyond reasonable doubt that the electron existed and was quantized with a definite charge. All of the discoveries of particle physics follow from that.”
Young, Davisson and Germer See Particles Do the Wave
Experimental result: The wavelike nature of light and electrons
When: 1801 and 1927, respectively
Light: particle or wave? Having long wrestled with this seeming either/or, many physicists settled on particle after Isaac Newton’s tour de force through optics. But a rudimentary, yet powerful, demonstration by fellow Englishman Thomas Young shattered this convention.
Young’s interests covered everything from Egyptology (he helped decode the Rosetta Stone) to medicine and optics. To probe light’s essence, Young devised an experiment in 1801. He cut two thin slits into an opaque object, let sunlight stream through them and watched how the beams cast a series of bright and dark fringes on a screen beyond. Young reasoned that this pattern emerged from light wavily spreading outward, like ripples across a pond, with crests and troughs from different light waves amplifying and canceling each other.
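In the small-angle limit, the spacing between Young's bright fringes follows a one-line formula, Δy = λD/d. A quick sketch with assumed modern lab values:

```python
# Fringe spacing on the screen for a double-slit setup (assumed example values).
wavelength = 5.5e-7  # light wavelength, meters
D = 1.0              # distance from slits to screen, meters
d = 5.0e-4           # separation between the two slits, meters

fringe_spacing_mm = wavelength * D / d * 1000  # convert meters to millimeters
print(f"bright fringes every {fringe_spacing_mm:.1f} mm")
```

Wider slit separation squeezes the fringes together, which is why the slits must be narrow and closely spaced for the pattern to be visible at all.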
Although contemporary physicists initially rebuffed Young’s findings, rampant rerunning of these so-called double-slit experiments established that the particles of light really do move like waves. “Double-slit experiments have become so compelling [because] they are relatively easy to conduct,” says David Kaiser, a professor of physics and of the history of science at MIT. “There is an unusually large ratio, in this case, between the relative simplicity and accessibility of the experimental design and the deep conceptual significance of the results.”
More than a century later, a related experiment by Clinton Davisson and Lester Germer showed the depth of this significance. At what is now called Nokia Bell Labs in New Jersey, the physicists ricocheted electron particles off a nickel crystal. The scattered electrons interacted to produce a pattern only possible if the particles also acted like waves. Subsequent double-slit-style experiments with electrons proved that matter particles and light alike can act as both particles and waves. This paradoxical idea lies at the heart of quantum physics, which at the time was just beginning to explain the behavior of matter at a fundamental level.
“What these experiments show, at their root, is that the stuff of the world, be it radiation or seemingly solid matter, has some irreducible, unavoidable wavelike characteristics,” says Kaiser. “No matter how surprising or counterintuitive that may seem, physicists must take that essential ‘waviness’ into account.”
Robert Paine Stresses Starfish
Experimental result: The disproportionate impact of keystone species on ecosystems
When: Initially presented in a 1966 paper
Just like the purple starfish he crowbarred off rocks and chucked into the Pacific Ocean, Bob Paine threw conventional wisdom right out the window.
By the 1960s, ecologists had come to agree that habitats thrived primarily through diversity. The common practice of observing these interacting webs of creatures great and small suggested as much. Paine took a different approach.
Curious what would happen if he intervened in an environment, Paine ran his starfish-banishing experiments in tidal pools along the rugged coast of Washington state. The removal of this single predator species, it turned out, could destabilize a whole ecosystem. Unchecked, the starfish’s barnacle prey initially flourished — only to be crowded out by marauding mussels. These shellfish, in turn, muscled out the limpets and algal species. The eventual result: a food web in tatters, with only mussel-dominated pools left behind.
Paine dubbed the starfish a keystone species, after the necessary center stone that locks an arch into place. A revelatory concept, it meant that all species do not contribute equally in a given ecosystem. Paine’s discovery had a major influence on conservation, overturning the practice of narrowly preserving an individual species for the sake of it, versus an ecosystem-based management strategy.
“His influence was absolutely transformative,” says Oregon State University’s Jane Lubchenco, a marine ecologist. She and her husband, fellow OSU professor Bruce Menge, met 50 years ago as graduate students in Paine’s lab at the University of Washington. Lubchenco, the administrator of the National Oceanic and Atmospheric Administration from 2009 to 2013, saw over the years the impact that Paine’s keystone species concept had on policies related to fisheries management.
Lubchenco and Menge credit Paine’s inquisitiveness and dogged personality for changing their field. “A thing that made him so charismatic was almost a childlike enthusiasm for ideas,” says Menge. “Curiosity drove him to start the experiment, and then he got these spectacular results.”
Paine died in 2016. His later work had begun exploring the profound implications of humans as a hyper-keystone species, altering the global ecosystem through climate change and unchecked predation.
Adam Hadhazy is based in New Jersey. His work has also appeared in New Scientist and Popular Science , among other publications. This story originally appeared in print as "10 Experiments That Changed Everything"
Human Experimentation List (in Psychology)
Although experiments on human subjects often prove ethically questionable, they have been carried out for almost two centuries and are now strictly controlled and regulated by law.
What Is Human Experimentation?
Human experimentation is a systematic, scientific investigation in which human beings serve as subjects in either medical (clinical) or non-medical research. Human subject research can be interventional or observational. This research method has led to many revolutionary advances since its first documented use at the end of the 18th century.
Observational vs Interventional Research
In observational research, investigators record their observations and analyze data without administering an intervention. Observational studies focus on aspects such as risk factors, disease progression, and disease treatments. Human subject research in the social sciences, for example, may involve surveys, questionnaires, interviews, and focus groups.
In interventional research, on the other hand, investigators manipulate the subjects or their environment in order to modify specific processes or outcomes. The most common human intervention studies are clinical trials, in which new drugs and vaccines are evaluated.
Examples of Human Experimentation
Human experiments were used extensively throughout the twentieth century, attracting fame, controversy, and outrage in equal measure. Let’s have a look at some of the best-known experiments performed on humans.
The smallpox experiment
The earliest known human experimentation was done in 1796 by English physician Edward Jenner, famous for developing the world’s first vaccine.
As a country doctor, Jenner was aware that milkmaids rarely caught smallpox. However, since they were in frequent contact with cows, they often contracted cowpox. Jenner speculated that cowpox produced immunity against smallpox. To prove this theory, he injected fluid from a cowpox infection into the skin of his gardener’s son, eight-year-old James Phipps. When Jenner exposed the boy to smallpox several weeks later, he found that James had indeed become immune to the disease.
Following Jenner’s model, scientists in the 19th and 20th centuries developed new vaccines to fight many deadly diseases including polio, measles, and tetanus.
The Tuskegee experiment
In 1932, scientists at the Tuskegee Institute in Alabama started studying the natural progression of syphilis, a disease that represented a major health problem at the time. Six hundred black men were enrolled in the project, which lasted for four decades; two-thirds of them had the disease.
The subjects of the study, officially known as the Tuskegee Study of Untreated Syphilis in the Negro Male, were not informed about the research. Instead, they were led to believe that they were receiving treatment for "bad blood"—a term that was used to describe several serious illnesses at the time—and promised free medical care and burial insurance as an incentive.
The men were given only placebos such as aspirin and mineral supplements. They were not treated for syphilis, although penicillin became an effective cure for the disease in 1947. As a result, many participants died from complications of syphilis. The survivors were given treatment in 1972, after the nature of the study became publicly known.
Henrietta Lacks
Henrietta Lacks was a poor African American tobacco farmer who was being treated for cervical cancer at Baltimore’s Johns Hopkins Hospital. In 1951, scientists there collected cells from her tissue sample without her knowledge.
Henrietta’s cells, nicknamed HeLa cells, soon became invaluable in medical research. These were the first cells to be successfully kept alive and cloned. They were essential in developing the polio vaccine and were sent to space in the first space missions to see how they would be affected by zero gravity. HeLa cells were also used in gene mapping, in vitro fertilization, and countless other scientific endeavors.
The Milgram experiment
In 1961, Yale University psychologist Stanley Milgram carried out what has become one of the best-known studies of obedience in psychology. Milgram conducted a series of experiments to determine to what extent people are willing to obey instructions that involve harming others.
Participants in Milgram’s experiment were asked to act as “teachers” to “learners” placed in a separate room. They were instructed to administer an electric shock to a learner every time he answered a question incorrectly, and to increase the intensity of the shock with each new incorrect answer. The participants did not realize that the shocks were not real.
Milgram expected that virtually no one would agree to administer the strongest shocks. To his surprise, 65% of participants obeyed the instructions until the very end of the experiment, going all the way up to 450 volts.
The Bystander Effect
When 28-year-old Kitty Genovese was killed outside her apartment in New York City in 1964, it was reported that none of her neighbors stepped in to assist or call the police. A few years later, social psychologists Bibb Latane and John Darley decided to do a series of experiments to demonstrate this psychological phenomenon known as the bystander effect.
The participants in Latane and Darley’s experiments were confronted with several types of emergencies, such as witnessing a seizure or smoke entering through air vents. The psychologists found that the larger the number of witnesses, or “bystanders,” the longer it took for people to respond to the emergency. The experiments demonstrated diffusion of responsibility: when surrounded by others, people expect someone else to take action. The lack of action was also a result of social influence, whereby individuals observe the behavior of those around them before deciding how to act.
The Stanford Prison Experiment
Psychologist Philip Zimbardo was the author of the infamous 1971 social psychology experiment that investigated the psychological effects of perceived power. Zimbardo was interested in finding out whether the brutality reported among guards in American prisons was due to their personality traits or was mostly situational and had to do with the prison environment.
Zimbardo converted a basement of the Stanford University psychology department into a mock “prison” and recruited volunteers to take part in a study of the psychological effects of prison life. The volunteers assigned to be prisoners were arrested at their homes without warning, taken to the local police station, and then blindfolded and placed in the “prison.” Guards were instructed to do whatever was necessary to maintain law and order among prisoners, short of physical violence.
The Stanford prison experiment revealed that people readily conformed to the stereotypical social roles they were expected to play. When they were placed in a position of authority, prison guards began to act in ways they would not usually behave.
Growth hormone therapy
Human growth hormone (hGH) was originally made available in the late 1950s to treat hormone-deficient children who would otherwise remain extremely short. Until the 1980s, only children lacking hGH were eligible for the treatment.
With the rise of genetic engineering, however, the hormone has become more readily available. At the National Institutes of Health (NIH), the growth hormone has been administered also to perfectly healthy children who are short for their age, in spite of the fact that the procedure poses significant physical and psychological risks.
Ethics of Human Experimentation
There is no doubt that research involving human subjects is indispensable and has led to improved quality of life and numerous medical breakthroughs. At the same time, as the above examples show, human experimentation has often been at the limit of what is ethically acceptable.
When Is Human Experimentation Criminal?
Jenner’s vaccine experiment was fortunately successful, but exposing a child to a deadly disease in the name of medical research is today considered unethical. The HeLa cells and Tuskegee experiments have been cited as examples of racial discrimination in science. The Stanford study has been heavily criticized as unethical due to its lack of fully informed consent by prisoners, to whom the arrests came as a surprise. The NIH treatment of short children is often seen as a profitable pharmacologic solution to what is fundamentally a social problem.
In addition, in order to ensure sufficient participation in research, human experimentation was frequently done among the most vulnerable population groups, such as prisoners, poor people, minorities, mental patients, and children. In 1997, President Bill Clinton formally apologized to the men affected by the Tuskegee experiment and their communities.
So how can researchers achieve a balance and justify exposing individual human subjects to risk for the sake of the advancement of science?
Ethical guidelines for human research
Ethical guidelines for regulating the use of human subjects in research were developed in response to numerous unethical experiments carried out throughout the 20th century. In the past sixty years, there has been a rapid emergence of various codes, regulations, and acts to govern ethical research in humans. In addition, several organizations were put in place to help monitor human experimentation.
The Nuremberg Code
The Nuremberg Code is a set of international rules and research ethics principles that were created to protect human test subjects. The code was established in 1947 as a result of the Nuremberg trials at the end of the Second World War. Originally, the code aimed to protect human subjects from any cruelty and exploitation similar to what the prisoners endured during the war.
The Nuremberg Code states that voluntary consent in research is essential and that participants have the right to end treatment at any moment. Furthermore, treatments can be carried out only by licensed professionals, who must terminate their study if the subjects are in danger.
The Nuremberg Code remains the most important document in the history of the ethics of medical research. It serves as a blueprint for today's principles that ensure the rights of subjects in human experimentation.
The Belmont report
The Belmont Report was established in 1978 by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The report describes the ethical behaviors in research that involve human subjects. It includes three ethical principles that must be taken into account when using human subjects for research:
- Respect for persons: individuals should be treated as autonomous agents and people with diminished autonomy are entitled to protection
- Beneficence: maximizing benefits and minimizing possible harms in human experimentation, that is, acting in the best interest of the participant
- Justice: informed consent, assessment of risks and benefits, fair treatment, and unbiased selection of subjects.
The Belmont Report provides the moral framework for understanding regulations on the use of humans in experimental methods in the United States.
Food and Drug Administration regulations
The Food and Drug Administration (FDA) is the highest authority on human subject protection in research in the United States. The FDA’s regulations for the conduct of clinical trials have been in effect since the 1970s. These regulations require informing participants that they could be used as control subjects or given a placebo, informing them that alternative therapies may exist in certain cases, and obtaining their written consent.
Ethics committees
To protect the rights and well-being of research participants, and at the same time allow obtaining meaningful results and insights into human behavior, all current biomedical and psychological research must go through a strict ethical review process.
Ethics committees assess and review trial designs. They approve, review, and monitor all research involving humans. Their task is to verify that subjects are not exposed to any unnecessary risks according to the key ethical guidelines including the assurance of confidentiality, informed consent, and debriefing.
Ethics committees in the European Union are bodies responsible for oversight of medical or human research studies in EU member states.
Institutional review boards
In the United States, ethics committees are usually known as institutional review boards. Institutional review boards (IRBs), also called ethical review boards, are independent ethics committees that review research proposals involving human subjects, including research governed by Health and Human Services regulations. The aim of the institutional review board is to ensure that proposals meet the ethical requirements of the regulations.
Any study conducted by a university or research organization must be approved by an institutional review board, often even before investigators can apply for funding. This is as true for research in anthropology, economics, political science, and sociology as it is for clinical or experimental research in medicine and psychology.
Human Experimentation: An Introduction to the Ethical Issues
In January 1944, a 17-year-old Navy seaman named Nathan Schnurman volunteered to test protective clothing for the Navy. Following orders, he donned a gas mask and special clothes and was escorted into a 10-foot by 10-foot chamber, which was then locked from the outside. Sulfur mustard and Lewisite, poisonous gases used in chemical weapons, were released into the chamber and, for one hour each day for five days, the seaman sat in this noxious vapor. On the final day, he became nauseous, his eyes and throat began to burn, and he asked twice to leave the chamber. Both times he was told he needed to remain until the experiment was complete. Ultimately, Schnurman collapsed into unconsciousness and went into cardiac arrest. When he awoke, he had painful blisters on most of his body. He was not given any medical treatment and was ordered never to speak about what he experienced, under the threat of being tried for treason. For 49 years these experiments were unknown to the public.
The Scandal Unfolds
In 1993, the National Academy of Sciences exposed a series of chemical weapons experiments stretching from 1944 to 1975 which involved 60,000 American GIs. At least 4,000 were used in gas-chamber experiments such as the one described above. In addition, more than 210,000 civilians and GIs were subjected to hundreds of radiation tests from 1945 through 1962.
Testimony delivered to Congress detailed the studies, explaining that “these tests and experiments often involved hazardous substances such as radiation, blister and nerve agents, biological agents, and lysergic acid diethylamide (LSD)....Although some participants suffered immediate acute injuries, and some died, in other cases adverse health problems were not discovered until many years later—often 20 to 30 years or longer.” 1
These examples and others like them—such as the infamous Tuskegee syphilis experiments (1932-72) and the continued testing of unnecessary (and frequently risky) pharmaceuticals on human volunteers—demonstrate the danger in assuming that adequate measures are in place to ensure ethical behavior in research.
Tuskegee Studies
In 1932, the U.S. Public Health Service, in conjunction with the Tuskegee Institute, began the now notorious "Tuskegee Study of Untreated Syphilis in the Negro Male." The study purported to learn more about the treatment of syphilis and to justify treatment programs for African Americans. Six hundred African American men, 399 of whom had syphilis, became participants. They were given free medical exams, free meals, and burial insurance as recompense for their participation and were told they would be treated for "bad blood," a term in use at the time for a number of ailments including syphilis. In fact, they did not receive proper treatment and were not informed that the study aimed to document the progression of syphilis without treatment. Penicillin was considered the standard treatment by 1947, but this treatment was never offered to the men. Indeed, the researchers took steps to ensure that participants would not receive proper treatment, in order to advance the objectives of the study. Although the study was originally projected to last only six months, it continued for 40 years.
Following a front-page New York Times article denouncing the studies in 1972, the Assistant Secretary for Health and Scientific Affairs appointed a committee to investigate the experiment. The committee found the study ethically unjustified and within a month it was ended. The following year, the National Association for the Advancement of Colored People won a $9 million class action suit on behalf of the Tuskegee participants. However, it was not until May 16, 1997, when President Clinton addressed the eight surviving Tuskegee participants and others active in keeping the memory of Tuskegee alive, that a formal apology was issued by the government.
While Tuskegee and the U.S. military experiments discussed above stand out in their disregard for the well-being of human subjects, more recent questionable research is usually devoid of obviously malevolent intentions. However, when curiosity is not curbed by compassion, the results can be tragic.
Unnecessary Drugs Mean Unnecessary Experiments
A widespread ethical problem, although one that has not yet received much attention, is raised by the development of new pharmaceuticals. All new drugs are tested on human volunteers. There is, of course, no way subjects can be fully apprised of the risks in advance, as that is what the tests purport to determine. This situation is generally considered acceptable, provided volunteers give “informed” consent. Many of the drugs under development today, however, offer little clinical benefit beyond those available from existing treatments. Many are developed simply to create a patentable variation on an existing drug. It is easy to justify asking informed, consenting individuals to risk limited harm in order to develop new drug therapies for a condition from which they are suffering or for which existing treatments are inadequate. The same may not apply when the drug being tested offers no new benefits to the subjects because they are healthy volunteers, or when the drug offers no significant benefits to anyone because it is essentially a copy of an existing drug.
Manufacturers, of course, hope that animal tests will give an indication of how a given drug will affect humans. However, a full 70 to 75 percent of drugs approved by the Food and Drug Administration for clinical trials based on promising results in animal tests ultimately prove unsafe or ineffective for humans. 2 Even limited clinical trials cannot reveal the full range of drug risks. A U.S. General Accounting Office (GAO) study reports that of the 198 new drugs that entered the market between 1976 and 1985, 102 (52 percent) caused adverse reactions that premarket tests failed to predict. 3 Even in the brief period between January and August 1997, at least 53 drugs then on the market were relabeled due to unexpected adverse effects. 4
In the GAO study, no fewer than eight of the drugs in question were benzodiazepines, similar to Valium, Librium, and numerous other sedatives of this class. Two were heterocyclic antidepressants, adding little or nothing to the numerous existing drugs of this type. Several others were variations of cephalosporin antibiotics, antihypertensives, and fertility drugs. These are not needed drugs. The risks taken to develop these drugs by trial participants, and to a certain extent by consumers, were not in the name of science, but in the name of market share.
As physicians, we necessarily have a relationship with the pharmaceutical companies that produce, develop, and market drugs involved in medical treatment. A reflective, perhaps critical posture towards some of the standard practices of these companies—such as the routine development of unnecessary drugs—may help to ensure higher ethical standards in research.
Unnecessary Experimentation on Children
Unnecessary and questionable human experimentation is not limited to pharmaceutical development. In experiments at the National Institutes of Health (NIH), a genetically engineered human growth hormone (hGH) is injected into healthy short children. Consent is obtained from parents and affirmed by the children themselves. The children receive 156 injections each year in the hope of becoming taller.
Growth hormone is clearly indicated for hormone-deficient children who would otherwise remain extremely short. Until the early 1980s, they were the only ones eligible to receive it; because it was harvested from human cadavers, supplies were limited. But genetic engineering changed that, and the hormone can now be manufactured in mass quantities. This has led pharmaceutical houses to eye a huge potential market: healthy children who are simply shorter than average.
Short stature, of course, is not a disease. The problems short children face relate only to how others react to their height and their own feelings about it. The hGH injection, on the other hand, poses significant risks, both physical and psychological.
These injections are linked in some studies to a potential for increased cancer risk, 5-8 are painful, and may aggravate, rather than reduce, the stigma of short stature. 9,10 Moreover, while growth rate is increased in the short term, it is unclear that the final net height of the child is significantly increased by the treatment.
The Physicians Committee for Responsible Medicine worked to halt these experiments and recommended that the biological and psychological effects of hGH treatment be studied in hormone-deficient children who already receive hGH, and that non-pharmacologic interventions to counteract the stigma of short stature also be investigated. Unfortunately, the hGH studies have continued without modification, putting healthy short children at risk.
Use of Placebo in Clinical Research
Whooping cough, also known as pertussis, is a serious threat to infants, with dangerous and sometimes fatal complications. Vaccination has nearly wiped out pertussis in the U.S. Uncertainties remain, however, over the relative merits and safety of traditional whole-cell vaccines versus newer, acellular versions, prompting the NIH to propose an experiment testing various vaccines on children.
The controversial part of the 1993 experiment was the inclusion of a placebo group of more than 500 infants who got no protection at all, an estimated 5 percent of whom were expected to develop whooping cough, compared to the 1.4 percent estimated risk for the study group as a whole. Because of these risks, this study would not have been permissible in the U.S. The NIH, however, insisted on the inclusion of a placebo control and therefore initiated the study in Italy, where there are fewer restrictions on human research trials. Originally, Italian health officials recoiled from these studies on ethical as well as practical grounds, but persistent pressure from the NIH ensured that the study was conducted with the placebo group.
The use of double-blind placebo-controlled studies is the “gold standard” in the research community, usually for good reason. However, when a well-accepted treatment is available, the use of a placebo control group is not always acceptable and is sometimes unethical. 11 In such cases, it is often appropriate to conduct research using the standard treatment as an active control. The pertussis experiments on Italian children were an example of dogmatic adherence to a research protocol which trumped ethical concerns.
Placebos, Ethics, and Poorer Nations
The ethical problems that placebo-controlled trials raise are especially complicated in research conducted in economically disadvantaged countries. Recently, attention has been brought to studies conducted in Africa on preventing the transmission of HIV from mothers to newborns. Standard treatment for HIV-infected pregnant women in the U.S. is a costly regimen of AZT. This treatment can save the life of one in seven infants born to women with AIDS. 12 Sadly, the cost of AZT treatment is well beyond the means of most of the world’s population. This troubling situation has motivated studies to find a cost-effective treatment that can confer at least some benefit in poorer countries where the current standard of care is no treatment at all. A variety of these studies is now underway in which a control group of HIV-positive pregnant women receives no antiretroviral treatment.
Such studies would clearly be unethical in the U.S., where AZT treatment is the standard of care for all HIV-positive mothers. Peter Lurie, M.D., M.P.H., and Sidney Wolfe, M.D., in an editorial in the New England Journal of Medicine, hold that such use of placebo controls in research trials in poor nations is unethical as well. They contend that, by using placebo control groups, researchers adopt a double standard leading to “an incentive to use as research subjects those with the least access to health care.” 13 Lurie and Wolfe argue that an active control receiving the standard regimen of AZT can and should be compared with promising alternative therapies (such as a reduced dosage of AZT) to develop an effective, affordable treatment for poor countries.
Control Groups and Nutrition
Similar ethical problems are also emerging in nutrition research. In the past, it was ethical for prevention trials in heart disease or other serious conditions to include a control group which received weak nutritional guidelines or no dietary intervention at all. However, that was before diet and lifestyle changes—particularly those using very low fat, vegetarian diets—were shown to reverse existing heart disease, push adult-onset diabetes into remission, significantly lower blood pressure, and reduce the risk of some forms of cancer. Perhaps in the not-too-distant future, such comparison groups will no longer be permissible.
The Ethical Landscape
Ethical issues in human research generally arise in relation to population groups that are vulnerable to abuse. For example, much of the ethically dubious research conducted in poor countries would not occur were the level of medical care not so limited. Similarly, the cruelty of the Tuskegee experiments clearly reflected racial prejudice. The NIH experiments on short children were motivated to counter a fundamentally social problem, the stigma of short stature, with a profitable pharmacologic solution. The unethical military experiments during the Cold War would have been impossible if GIs had had the right to abort assignments or raise complaints. As we address the ethical issues of human experimentation, we often find ourselves traversing complex ethical terrain. Vigilance is most essential when vulnerable populations are involved.
- Frank C. Conahan of the National Security and International Affairs Division of the General Accounting Office, reporting to the Subcommittee of the House Committee on Government Operations.
- Flieger K. Testing drugs in people. U.S. Food and Drug Administration. September 10, 1997.
- U.S. General Accounting Office. FDA Drug Review: Postapproval Risks 1976-85. U.S. General Accounting Office, Washington, D.C., 1990.
- MedWatch, U.S. Food and Drug Administration. Labeling changes related to drug safety. U.S. Food and Drug Administration Home Page; http://www.fda.gov/medwatch/safety.htm. September 10, 1997.
- Arteaga CL, Osborne CK. Growth inhibition of human breast cancer cells in vitro with an antibody against the type I somatomedin receptor. Cancer Res. 1989;49:6237-6241.
- Pollak M, Costantino J, Polychronakos C, et al. Effect of tamoxifen on serum insulin-like growth factor I levels in stage I breast cancer patients. J Natl Cancer Inst. 1990;82:1693-1697.
- Stoll BA. Growth hormone and breast cancer. Clin Oncol. 1992;4:4-5.
- Stoll BA. Does extra height justify a higher risk of breast cancer? Ann Oncol. 1992;3:29-30.
- Kusalic M, Fortin C. Growth hormone treatment in hypopituitary dwarfs: longitudinal psychological effects. Can Psychiatr Assoc J. 1975;20:325-331.
- Grew RS, Stabler B, Williams RW, Underwood LE. Facilitating patient understanding in the treatment of growth delay. Clin Pediatr. 1983;22:685-690.
- For a more extensive discussion of the ethical status of placebo-controlled trials see especially: Freedman B, Glass KC, Weijer C. Placebo orthodoxy in clinical research II: ethical, legal and regulatory myths. J Law Med Ethics. 1996;24:252-259.
- Lurie P, Wolfe SM. Unethical trials of interventions to reduce perinatal transmission of the human immunodeficiency virus in developing countries. N Engl J Med. 1997;337:853-856.
15 Scientific Method Examples
The scientific method is a structured and systematic approach to investigating natural phenomena using empirical evidence.
The scientific method has been a linchpin of rapid improvements in human development. It has been an invaluable procedure for testing and improving upon human ingenuity, and it has led to remarkable scientific, technological, and medical breakthroughs.
Some common steps in a scientific approach would include:
- Observation
- Question formulation
- Hypothesis development
- Experimentation and collecting data
- Analyzing results
- Drawing conclusions
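The steps above can be sketched as a small end-to-end example. The sketch below is illustrative only: the research question (whether extra light increases plant growth), the group names, and every number are invented for demonstration, and the "measurements" are simulated rather than collected.

```python
import random
import statistics

# Toy walk-through of the steps, with simulated (not real) data.
# Observation: plants near the window seem taller.
# Question: does extra light increase growth?
# Hypothesis: seedlings given extra light grow taller than controls.

def run_experiment(n=100, seed=42):
    """Experimentation: collect simulated growth data (cm) for two groups."""
    rng = random.Random(seed)
    control = [rng.gauss(10.0, 2.0) for _ in range(n)]    # normal light
    treatment = [rng.gauss(12.0, 2.0) for _ in range(n)]  # extra light
    return control, treatment

def analyze(control, treatment):
    """Analysis: compare group means and report the observed difference."""
    return statistics.mean(treatment) - statistics.mean(control)

control, treatment = run_experiment()
difference = analyze(control, treatment)

# Drawing conclusions: a clearly positive difference supports the hypothesis;
# a real study would add a formal significance test before concluding anything.
print(f"mean growth difference: {difference:.2f} cm")
```

A real investigation would, of course, replace the simulated data with actual measurements and publish the result so others could replicate it, which corresponds to the final steps described later in the article.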
Definition of Scientific Method
The scientific method is a structured and systematic approach to investigating natural phenomena or events through empirical evidence.
Empirical evidence can be gathered from experimentation, observation, analysis, and interpretation of data, allowing researchers to form generalizations about the likely causes of the phenomena they study.
As an editorial published in the journal Nature Methods puts it,
“As schoolchildren, we are taught that the scientific method involves a question and suggested explanation (hypothesis) based on observation, followed by the careful design and execution of controlled experiments, and finally validation, refinement or rejection of this hypothesis” (p. 237).
The use of scientific methods permits others to replicate and validate scientific analyses, leading to improvements on previous results and to solid empirical conclusions.
Voit (2019) adds that:
“…it not only prescribes the order and types of activities that give a scientific study validity and a stamp of approval but also has substantially shaped how we collectively think about the endeavor of investigating nature” (p. 1).
This method aims to minimize subjective bias while maximizing objectivity, helping researchers gather factual data.
It follows set procedures and guidelines for testing hypotheses under controlled conditions, helping to ensure that conclusions are accurate and relevant (Blystone & Blodgett, 2006).
Overall, the scientific method provides researchers with a structured mode of inquiry that yields evidence-based explanations grounded in fact, across a wide array of fields.
15 Examples of Scientific Method
- Medicine Delivery: Scientists use the scientific method to determine the most effective way of delivering a medicine to its target location in the body. They perform experiments and gather data on different methods of medicine delivery, monitoring factors such as dosage and time release.
- Agricultural Research: The scientific method is frequently used in agricultural research to determine the most effective way to grow crops or raise livestock. This may involve testing different fertilizers, irrigation methods, or animal feeds, measuring yield, and analyzing the data.
- Food Science and Nutrition: Nutritionists and food scientists use the scientific method to study the effects of different foods and diets on health. They design experiments to understand the impact of dietary changes on weight, disease risk, and overall health outcomes.
- Environmental Studies: Researchers use the scientific method to study natural ecosystems and how human activities affect them. They collect data on biodiversity, water quality, and pollution levels, analyzing changes over time.
- Psychological Studies: Psychologists use the scientific method to understand human behavior and cognition. They conduct experiments under controlled conditions to test theories about learning, memory, social interaction, and more.
- Climate Change Research: Climate scientists use the scientific method to study the Earth’s changing climate. They collect and analyze data on temperature, CO2 levels, and ice coverage to understand trends and make predictions about future changes.
- Geology Exploration: Geologists use the scientific method to analyze rock samples from deep in the Earth’s crust and gather information about geological processes spanning millions of years, evaluating the patterns those processes leave behind.
- Space Exploration: Scientists apply the scientific method when designing space missions to explore other planets or learn more about our solar system. They rely on lander missions as well as remote sensing techniques that let them examine far-off planets without physically landing on their surfaces.
- Archaeology: Archaeologists use the scientific method to understand past human cultures. They formulate hypotheses about a site or artifact, conduct excavations or analyses, and then interpret the data to test their hypotheses.
- Clinical Trials: Medical researchers use the scientific method to test new treatments and therapies for various diseases. They design controlled studies that track patient outcomes while varying factors such as dosage or treatment frequency.
- Industrial Research & Development: Many companies apply the scientific method in their R&D departments. For example, automakers may assess the effectiveness of anti-lock brakes through tests with dummy targets before releasing them to the marketplace.
- Material Science Experiments: Engineers use scientific-method experimentation when designing new materials and testing whether candidate materials are suitable for particular applications. Such experiments might include casting molten material into molds and then subjecting it to high heat to expose weaknesses.
- Chemical Engineering Investigations: Chemical engineers follow scientific-method principles to create new chemical compounds and technologies of value to industry. They may experiment with different substances, varying concentrations and heating conditions, to ensure the safety and reliability of the final product.
- Biotechnology: Biotechnologists use the scientific method to develop new products or processes. For instance, they may experiment with genetic modification techniques to enhance crop resistance to pests or disease.
- Physics Research: Physicists use the scientific method to study fundamental principles of the universe, such as how atoms and molecules interact and decay. They test hypotheses by running simulations with computer models or by designing sophisticated experiments.
Origins of the Scientific Method
The scientific method can be traced back to ancient times when philosophers like Aristotle used observation and logic to understand the natural world.
These early philosophers were focused on understanding the world around them and sought explanations for natural phenomena through direct observation (Betz, 2010).
In the Middle Ages, Muslim scholars played a key role in developing scientific inquiry by emphasizing empirical observations.
Alhazen (a.k.a Ibn al-Haytham), for example, introduced experimental methods that helped establish optics as a modern science. He emphasized investigation through experimentation with controlled conditions (De Brouwer, 2021).
During the Scientific Revolution of the 17th century in Europe, scientists such as Francis Bacon and René Descartes began to develop what we now know as the scientific method (Betz, 2010).
Bacon argued that knowledge must be based on empirical evidence obtained through observation and experimentation rather than relying solely upon tradition or authority.
Descartes emphasized mathematical methods as tools in experimentation and rigorous thinking processes (Fukuyama, 2012).
These ideas later developed into systematic research designs , including hypothesis testing, controlled experiments, and statistical analysis – all of which are still fundamental aspects of modern-day scientific research.
Since then, technological advancements have allowed for more sophisticated instruments and measurements, yielding the far more precise data sets scientists use today in fields ranging from medicine and chemistry to astrophysics and genetics.
So, while early Greek philosophers laid much of the groundwork for an observation-based approach to explaining nature, Islamic scholars furthered our understanding of logical reasoning and experimentation and gave rise to a more formalized methodology.
Steps in the Scientific Method
While there may be variations in the specific steps scientists follow, the general process has six key steps (Blystone & Blodgett, 2006).
Here is a brief overview of each of these steps:
1. Observation
The first step in the scientific method is to identify and observe a phenomenon that requires explanation.
This can involve asking open-ended questions, making detailed observations with our senses or with instruments, or exploring natural patterns, any of which can serve as the starting point for a hypothesis.
2. Formulation of a Hypothesis
A hypothesis is an educated guess or proposed explanation for the observed phenomenon, based on previous observations and experience or on working assumptions derived from a sound literature review.
The hypothesis should be testable and falsifiable through experimentation and subsequent analysis.
3. Testing of the Hypothesis
In this step, scientists perform experiments to test their hypothesis while ensuring that all variables other than the one under study are controlled.
The data collected in these experiments must be measurable, repeatable, and consistent.
4. Data Analysis
Researchers carefully scrutinize the data gathered from experiments, typically using inferential statistics to assess whether the results support their hypotheses.
This analysis can also yield statistical evidence of previously unknown mechanisms at work in the system under study.
See: 15 Examples of Data Analysis
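As a concrete illustration of this inferential step, the snippet below computes a two-sample (Welch's) t statistic from scratch using only the Python standard library. The placebo and treatment measurements are invented numbers chosen purely for the example, not data from any study discussed here.

```python
import math
import statistics

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    standard_error = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / standard_error

# Invented example data: symptom scores under placebo vs. treatment.
placebo = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]
treated = [3.1, 2.9, 3.4, 3.0, 3.3, 2.8, 3.2, 3.5]

t = t_statistic(treated, placebo)
# A |t| well above ~2 would usually be judged statistically significant at
# the 5% level (the exact cutoff depends on the degrees of freedom).
print(f"t = {t:.2f}")
```

In practice, researchers would use a statistics package's t-test routine and report an exact p-value rather than comparing against a rough cutoff, but the underlying logic of the inference is the same.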
5. Drawing Conclusions
Based on their data analyses, scientists reach conclusions about whether their original hypotheses were supported by evidence obtained from testing.
If the evidence does not sufficiently support the hypothesis, researchers often revise the initial idea and test it again.
6. Communicating Results
Once results have been analyzed and interpreted under accepted principles within the scientific community, scientists publish findings in respected peer-reviewed journals.
Publication lets the wider community scrutinize, replicate, and build on the findings, and the peer-review process itself raises research quality across scientific disciplines.
Importance of the Scientific Method
The scientific method is important because it helps us to collect reliable data and develop testable hypotheses that can be used to explain natural phenomena (Haig, 2018).
Here are some reasons why the scientific method is so essential:
- Objectivity: The scientific method requires researchers to conduct unbiased experiments and analyses, which leads to more impartial conclusions. Replication of findings by peers further ensures that results rest on sound foundations, giving others the confidence to build new knowledge on top of existing research.
- Precision & Predictive Power: Scientific methods usually include techniques for obtaining highly precise measurements, so the data collected carry fewer uncertainties from measurement error and can support statistically significant results with firm logical foundations. Predictions derived from well-tested generalizations, with their conditions of validity factored into the analysis, deliver realistic expectations.
- Validation: Because established scientific principles are shared across the community, independent scholars can replicate observations without being influenced by subjective biases or prejudices. This assures general acceptance among scientific communities that follow similar protocols in their respective fields.
- Application & Innovation: Advances built on correctly tested hypotheses commonly lead scientists to new discoveries and potential breakthroughs. They pave the way for game-changing technological innovations, such as mapping the human genome, creating novel therapies for genetic diseases, or unlocking secrets of the universe through discoveries at the LHC.
- Impactful Decision-Making: Policymakers can draw on scientific findings to invest resources in informed decisions that move us toward a sustainable future. For example, research on carbon pollution’s impact on climate change informs debate and policy decisions about our planet’s environment, to the benefit of societies (Haig, 2018).
The scientific method is an essential tool that has revolutionized our understanding of the natural world.
By emphasizing rigorous experimentation, objective measurement, and logical analysis, scientists can obtain less biased evidence with empirical validity.
Utilizing this methodology has led to groundbreaking discoveries & knowledge expansion that have shaped our modern world from medicine to technology.
The scientific method plays a crucial role in advancing research and our overall societal consensus on reliable information by providing reliable results, ensuring we can make more informed decisions toward a sustainable future.
As scientific advancements continue rapidly, ensuring we’re applying core principles of this process enables objectives to progress, paving new ways for interdisciplinary research across all fields, thereby fuelling ever-driving human curiosity.
Betz, F. (2010). Origin of scientific method. Managing Science, 21–41. https://doi.org/10.1007/978-1-4419-7488-4_2
Blystone, R. V., & Blodgett, K. (2006). WWW: The scientific method. CBE—Life Sciences Education, 5(1), 7–11. https://doi.org/10.1187/cbe.05-12-0134
De Brouwer, P. J. S. (2021). The big R-book: From data science to learning machines and big data. John Wiley & Sons, Inc.
Defining the scientific method. (2009). Nature Methods, 6(4), 237. https://doi.org/10.1038/nmeth0409-237
Fukuyama, F. (2012). The end of history and the last man. New York: Penguin.
Haig, B. D. (2018). The importance of scientific method for psychological science. Psychology, Crime & Law, 25(6), 527–541. https://doi.org/10.1080/1068316x.2018.1557181
Voit, E. O. (2019). Perspective: Dimensions of the scientific method. PLOS Computational Biology, 15(9), e1007279. https://doi.org/10.1371/journal.pcbi.1007279
The 25 Most Influential Psychological Experiments in History
While each year thousands and thousands of studies are completed in the many specialty areas of psychology, there are a handful that, over the years, have had a lasting impact in the psychological community as a whole. Some of these were dutifully conducted, keeping within the confines of ethical and practical guidelines. Others pushed the boundaries of human behavior during their psychological experiments and created controversies that still linger to this day. And still others were not designed to be true psychological experiments, but ended up as beacons to the psychological community in proving or disproving theories.
This is a list of the 25 most influential psychological experiments still being taught to psychology students of today.
1. A Class Divided
Study conducted by: Jane Elliott
Study Conducted in 1968 in an Iowa classroom
Experiment Details: Jane Elliott’s famous experiment was inspired by the assassination of Dr. Martin Luther King Jr. and the inspirational life that he led. The third grade teacher developed an exercise, or better yet, a psychological experiment, to help her Caucasian students understand the effects of racism and prejudice.
Elliott divided her class into two separate groups: blue-eyed students and brown-eyed students. On the first day, she labeled the blue-eyed group as the superior group and from that point forward they had extra privileges, leaving the brown-eyed children to represent the minority group. She discouraged the groups from interacting and singled out individual students to stress the negative characteristics of the children in the minority group. What this exercise showed was that the children’s behavior changed almost instantaneously. The group of blue-eyed students performed better academically and even began bullying their brown-eyed classmates. The brown-eyed group experienced lower self-confidence and worse academic performance. The next day, she reversed the roles of the two groups and the blue-eyed students became the minority group.
At the end of the experiment, the children were so relieved that they were reported to have embraced one another and agreed that people should not be judged based on outward appearances. This exercise has since been repeated many times with similar outcomes.
2. Asch Conformity Study
Study conducted by: Dr. Solomon Asch
Study Conducted in 1951 at Swarthmore College
Experiment Details: Dr. Solomon Asch conducted a groundbreaking study that was designed to evaluate a person’s likelihood to conform to a standard when there is pressure to do so.
A group of participants were shown pictures with lines of various lengths and were then asked a simple question: Which line is longest? The tricky part of this study was that in each group only one person was a true participant. The others were actors with a script, and most of the actors were instructed to give the wrong answer. Surprisingly, the true participant often went along with the majority, even though they knew they were giving the wrong answer.
The results of this study are important when we study social interactions among individuals in groups. This study is a famous example of the temptation many of us experience to conform to a standard during group situations and it showed that people often care more about being the same as others than they do about being right. It is still recognized as one of the most influential psychological experiments for understanding human behavior.
3. Bobo Doll Experiment
Study conducted by: Dr. Albert Bandura
Study Conducted between 1961 and 1963 at Stanford University
In his groundbreaking study he separated participants into three groups:
- one was exposed to a video of an adult showing aggressive behavior towards a Bobo doll
- another was exposed to video of a passive adult playing with the Bobo doll
- the third formed a control group
Children watched their assigned video (except those in the control group, who saw no video) and then were sent to a room containing the same doll. What the researchers found was that children exposed to the aggressive model were more likely to exhibit aggressive behavior toward the doll themselves, while the other groups showed little imitative aggressive behavior. Among children exposed to the aggressive model, the average number of imitative physical aggressions was 38.2 for the boys and 12.7 for the girls.
The study also showed that boys exhibited more aggression when exposed to aggressive male models than boys exposed to aggressive female models. When exposed to aggressive male models, the number of aggressive instances exhibited by boys averaged 104. This is compared to 48.4 aggressive instances exhibited by boys who were exposed to aggressive female models.
The results for the girls showed a similar pattern, but were less pronounced. When exposed to aggressive female models, the number of aggressive instances exhibited by girls averaged 57.7, compared to 36.3 aggressive instances exhibited by girls who were exposed to aggressive male models. The results concerning gender differences strongly supported Bandura's secondary prediction that children will be more strongly influenced by same-sex models. The Bobo Doll Experiment demonstrated a groundbreaking way to study human behavior and its influences.
4. Car Crash Experiment
Study conducted by: Elizabeth Loftus and John Palmer
Study Conducted in 1974 at the University of California, Irvine
The participants watched film clips of a car accident and were asked to describe what had happened as if they were eyewitnesses to the scene. The participants were put into two groups, and each group was questioned using different wording, such as "how fast was the car driving at the time of impact?" versus "how fast was the car going when it smashed into the other car?" The experimenters found that the use of different verbs affected the participants' memories of the accident, showing that memory can be easily distorted.
This research suggests that memory can be easily manipulated by questioning technique. This means that information gathered after the event can merge with original memory causing incorrect recall or reconstructive memory. The addition of false details to a memory of an event is now referred to as confabulation. This concept has very important implications for the questions used in police interviews of eyewitnesses.
5. Cognitive Dissonance Experiment
Study conducted by: Leon Festinger and James Carlsmith
Study Conducted in 1957 at Stanford University
Experiment Details: The concept of cognitive dissonance refers to a situation involving conflicting attitudes, beliefs, or behaviors.
This conflict produces an inherent feeling of discomfort, leading to a change in one of the attitudes, beliefs, or behaviors to minimize or eliminate the discomfort and restore balance.
Cognitive dissonance was first investigated by Leon Festinger, after an observational study of a cult that believed that the earth was going to be destroyed by a flood. Out of this study was born an intriguing experiment conducted by Festinger and Carlsmith, in which participants were asked to perform a series of dull tasks (such as turning pegs in a peg board for an hour). Participants' initial attitudes toward this task were highly negative.
They were then paid either $1 or $20 to tell a participant waiting in the lobby that the tasks were really interesting. Almost all of the participants agreed to walk into the waiting room and persuade the next participant that the boring experiment would be fun. When the participants were later asked to evaluate the experiment, the participants who were paid only $1 rated the tedious task as more fun and enjoyable than the participants who were paid $20 to lie.
Being paid only $1 was not sufficient incentive for lying, and so those who were paid $1 experienced dissonance. They could only overcome that cognitive dissonance by coming to believe that the tasks really were interesting and enjoyable. Being paid $20 provided a clear reason for turning pegs, so there was no dissonance.
6. Fantz’s Looking Chamber
Study conducted by: Robert L. Fantz
Study Conducted in 1961 at the University of Illinois
Experiment Details: The study conducted by Robert L. Fantz is among the simplest, yet most important, in the field of infant development and vision. In 1961, when this experiment was conducted, there were very few ways to study what was going on in the mind of an infant. Fantz realized that the best way was simply to watch the actions and reactions of infants. He relied on a fundamental fact: if there is something of interest near humans, they generally look at it.
To test this concept, Fantz set up a display board with two pictures attached. On one was a bulls-eye. On the other was the sketch of a human face. This board was hung in a chamber where a baby could lie safely underneath and see both images. Then, from behind the board, invisible to the baby, he peeked through a hole to watch what the baby looked at. This study showed that a two-month old baby looked twice as much at the human face as it did at the bulls-eye. This suggests that human babies have some powers of pattern and form selection. Before this experiment it was thought that babies looked out onto a chaotic world of which they could make little sense.
7. Hawthorne Effect
Study conducted by: Henry A. Landsberger
Study Conducted in 1955 at Hawthorne Works in Chicago, Illinois
Landsberger performed the study by analyzing data from experiments conducted between 1924 and 1932, by Elton Mayo, at the Hawthorne Works near Chicago. The company had commissioned studies to evaluate whether the level of light in a building changed the productivity of the workers. What Mayo found was that the level of light made no difference in productivity. The workers increased their output whenever the amount of light was switched from a low level to a high level, or vice versa.
The researchers noticed a tendency that the workers’ level of efficiency increased when any variable was manipulated. The study showed that the output changed simply because the workers were aware that they were under observation. The conclusion was that the workers felt important because they were pleased to be singled out. They increased productivity as a result. Being singled out was the factor dictating increased productivity, not the changing lighting levels, or any of the other factors that they experimented upon.
The Hawthorne Effect has become one of the hardest inbuilt biases to eliminate or factor into the design of any experiment in psychology and beyond.
8. Kitty Genovese Case
Study conducted by: New York Police Force
Study Conducted in 1964 in New York City
Experiment Details: The murder case of Kitty Genovese was never intended to be a psychological experiment, however it ended up having serious implications for the field.
According to a New York Times article, almost 40 neighbors witnessed Kitty Genovese being savagely attacked and murdered in Queens, New York in 1964. Not one neighbor called the police for help. Some reports state that the attacker briefly left the scene and later returned to “finish off” his victim. It was later uncovered that many of these facts were exaggerated. (There were more likely only a dozen witnesses and records show that some calls to police were made).
What this case later became famous for is the "Bystander Effect," which states that the more bystanders are present in a social situation, the less likely it is that anyone will step in and help. This effect has led to changes in medicine, psychology, and many other areas. One famous example is the way CPR is taught to new learners. All students in CPR courses learn that they must assign one specific bystander the job of alerting authorities, which minimizes the chances of no one calling for assistance.
9. Learned Helplessness Experiment
Study conducted by: Martin Seligman
Study Conducted in 1967 at the University of Pennsylvania
Seligman’s experiment involved the ringing of a bell and then the administration of a light shock to a dog. After a number of pairings, the dog reacted to the shock even before it happened. As soon as the dog heard the bell, he reacted as though he’d already been shocked.
During the course of this study something unexpected happened. Each dog was placed in a large crate that was divided down the middle with a low fence. The dog could see and jump over the fence easily. The floor on one side of the fence was electrified, but not on the other side. Seligman placed each dog on the electrified side and administered a light shock. He expected the dog to jump to the non-shocking side of the fence. In an unexpected turn, the dogs simply lay down.
The hypothesis was that as the dogs learned from the first part of the experiment that there was nothing they could do to avoid the shocks, they gave up in the second part of the experiment. To prove this hypothesis the experimenters brought in a new set of animals and found that dogs with no history in the experiment would jump over the fence.
This condition was described as learned helplessness. A human or animal does not attempt to get out of a negative situation because the past has taught them that they are helpless.
10. Little Albert Experiment
Study conducted by: John B. Watson and Rosalie Rayner
Study Conducted in 1920 at Johns Hopkins University
The experiment began by placing a white rat in front of the infant, who initially had no fear of the animal. Watson then produced a loud sound by striking a steel bar with a hammer every time little Albert was presented with the rat. After several pairings (the noise and the presentation of the white rat), the boy began to cry and exhibit signs of fear every time the rat appeared in the room. Watson also created similar conditioned reflexes with other common animals and objects (rabbits, Santa beard, etc.) until Albert feared them all.
This study demonstrated that classical conditioning works on humans. One of its most important implications is that adult fears are often connected to early childhood experiences.
11. Magical Number Seven
Study conducted by: George A. Miller
Study Conducted in 1956 at Princeton University
Experiment Details: Frequently referred to as "Miller's Law," the Magical Number Seven experiment purports that the number of objects an average human can hold in working memory is 7 ± 2. This means that human memory capacity typically includes strings of words or concepts ranging from 5 to 9. This information on the limits to the capacity for processing information became one of the most highly cited papers in psychology.
The Magical Number Seven Experiment was published in 1956 by cognitive psychologist George A. Miller of Princeton University's Department of Psychology in Psychological Review. In the article, Miller discussed a concurrence between the limits of one-dimensional absolute judgment and the limits of short-term memory.
In a one-dimensional absolute-judgment task, a person is presented with a number of stimuli that vary on one dimension (such as 10 different tones varying only in pitch). The person responds to each stimulus with a corresponding response (learned before).
Performance is almost perfect up to five or six different stimuli but declines as the number of different stimuli increases. This means that a human's maximum performance on one-dimensional absolute judgment can be described as an information store with a maximum capacity of approximately 2 to 3 bits of information, which corresponds to the ability to distinguish between four and eight alternatives.
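The link between bits and distinguishable alternatives is simple information theory: n equally likely alternatives carry log2(n) bits, so a capacity of 2 to 3 bits corresponds to 4 to 8 alternatives. A minimal sketch in Python (the function name is illustrative, not from Miller's paper):

```python
import math

# Channel capacity in bits corresponds to the number of equally likely
# alternatives a person can reliably distinguish: capacity = log2(n).
def bits_for_alternatives(n: int) -> float:
    """Information, in bits, needed to distinguish n equally likely alternatives."""
    return math.log2(n)

# 2 bits -> 4 alternatives, 3 bits -> 8 alternatives,
# matching the "between four and eight alternatives" range in the text.
print(bits_for_alternatives(4))  # 2.0
print(bits_for_alternatives(8))  # 3.0
```

Inverting the relationship, a 7-item span would correspond to log2(7) ≈ 2.8 bits, squarely inside Miller's 2-to-3-bit range.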
12. Pavlov’s Dog Experiment
Study conducted by: Ivan Pavlov
Study Conducted in the 1890s at the Military Medical Academy in St. Petersburg, Russia
Pavlov began with the simple idea that there are some things that a dog does not need to learn. He observed that dogs do not learn to salivate when they see food. This reflex is “hard wired” into the dog. This is an unconditioned response (a stimulus-response connection that required no learning).
Pavlov outlined that there are unconditioned responses in the animal by presenting a dog with a bowl of food and then measuring its salivary secretions. In the experiment, Pavlov used a bell as his neutral stimulus. Whenever he gave food to his dogs, he also rang a bell. After a number of repeats of this procedure, he tried the bell on its own. What he found was that the bell on its own now caused an increase in salivation. The dog had learned to associate the bell and the food. This learning created a new behavior. The dog salivated when he heard the bell. Because this response was learned (or conditioned), it is called a conditioned response. The neutral stimulus has become a conditioned stimulus.
This theory came to be known as classical conditioning.
13. Robbers Cave Experiment
Study conducted by: Muzafer and Carolyn Sherif
Study Conducted in 1954 at the University of Oklahoma
Experiment Details: This experiment, which studied group conflict, is considered by most to be outside the lines of what is considered ethically sound.
In 1954 researchers at the University of Oklahoma assigned 22 eleven- and twelve-year-old boys from similar backgrounds into two groups. The two groups were taken to separate areas of a summer camp facility where they were able to bond as social units. The groups were housed in separate cabins, and neither group knew of the other's existence for an entire week. The boys bonded with their cabin mates during that time.

Once the two groups were allowed to have contact, they showed definite signs of prejudice and hostility toward each other, even though they had only been given a very short time to develop their social group. To increase the conflict between the groups, the experimenters had them compete against each other in a series of activities. This created even more hostility, and eventually the groups refused to eat in the same room.

The final phase of the experiment involved turning the rival groups into friends. The fun activities the experimenters had planned, like shooting firecrackers and watching movies, did not initially work, so they created teamwork exercises in which the two groups were forced to collaborate. At the end of the experiment, the boys decided to ride the same bus home, demonstrating that conflict can be resolved and prejudice overcome through cooperation.
Many critics have compared this study to Golding’s Lord of the Flies novel as a classic example of prejudice and conflict resolution.
14. Ross’ False Consensus Effect Study
Study conducted by: Lee Ross
Study Conducted in 1977 at Stanford University
Experiment Details: In 1977, a social psychology professor at Stanford University named Lee Ross conducted an experiment that, in lay terms, focuses on how people can incorrectly conclude that others think the same way they do, or form a “false consensus” about the beliefs and preferences of others. Ross conducted the study in order to outline how the “false consensus effect” functions in humans.
In the first part of the study, participants were asked to read about situations in which a conflict occurred and then were told two alternative ways of responding to the situation. They were asked to do three things:
- Guess which option other people would choose
- Say which option they themselves would choose
- Describe the attributes of the person who would likely choose each of the two options
What the study showed was that most of the subjects believed that other people would do the same as them, regardless of which of the two responses they actually chose themselves. This phenomenon is referred to as the false consensus effect, where an individual thinks that other people think the same way they do when they may not. The second observation from this important study is that when participants were asked to describe the attributes of the people who would likely make the choice opposite of their own, they made bold and sometimes negative predictions about the personalities of those who did not share their choice.
15. The Schachter and Singer Experiment on Emotion
Study conducted by: Stanley Schachter and Jerome E. Singer
Study Conducted in 1962 at Columbia University
Experiment Details: In 1962 Schachter and Singer conducted a ground breaking experiment to prove their theory of emotion.
In the study, a group of 184 male participants were injected with epinephrine, a hormone that induces arousal, including increased heartbeat, trembling, and rapid breathing. The research participants were told that they were being injected with a new medication to test their eyesight. The first group of participants was informed of the possible side effects that the injection might cause, while the second group was not. The participants were then placed in a room with someone they thought was another participant, but who was actually a confederate in the experiment. The confederate acted in one of two ways: euphoric or angry. Participants who had not been informed about the effects of the injection were more likely to feel either happier or angrier than those who had been informed.
What Schachter and Singer were trying to understand was the way in which cognition, or thought, influences human emotion. Their study illustrates the importance of how people interpret their physiological states, which form an important component of their emotions. Though their cognitive theory of emotional arousal dominated the field for two decades, it has been criticized for two main reasons: the size of the effect seen in the experiment was not that significant, and other researchers had difficulty repeating the experiment.
16. Selective Attention / Invisible Gorilla Experiment
Study conducted by: Daniel Simons and Christopher Chabris
Study Conducted in 1999 at Harvard University
Experiment Details: In 1999 Simons and Chabris conducted their famous awareness test at Harvard University.
Participants in the study were asked to watch a video and count how many passes occurred between basketball players on the white team. The video moves at a moderate pace, and keeping track of the passes is a relatively easy task. What most people fail to notice amidst their counting is that in the middle of the test, a man in a gorilla suit walks onto the court and stands in the center before walking off-screen.
The study found that the majority of the subjects did not notice the gorilla at all, suggesting that humans often overestimate their ability to multitask. What the study set out to show is that when people are asked to attend to one task, they focus so strongly on that element that they may miss other important details.
17. Stanford Prison Study
Study conducted by: Philip Zimbardo
Study Conducted in 1971 at Stanford University
The Stanford Prison Experiment was designed to study behavior of “normal” individuals when assigned a role of prisoner or guard. College students were recruited to participate. They were assigned roles of “guard” or “inmate.” Zimbardo played the role of the warden. The basement of the psychology building was the set of the prison. Great care was taken to make it look and feel as realistic as possible.
The prison guards were told to run the prison for two weeks. They were told not to physically harm any of the inmates during the study. After a few days, the prison guards became verbally abusive toward the inmates, and many of the prisoners became submissive to those in authority roles. The Stanford Prison Experiment had to be cut short because some of the participants displayed troubling signs of mental breakdown.
Although the experiment was conducted very unethically, many psychologists believe that the findings showed how much human behavior is situational. People will conform to certain roles if the conditions are right. The Stanford Prison Experiment remains one of the most famous psychology experiments of all time.
18. Stanley Milgram Experiment
Study conducted by: Stanley Milgram
Study Conducted in 1961 at Yale University
Experiment Details: This 1961 study was conducted by Yale University psychologist Stanley Milgram. It was designed to measure people’s willingness to obey authority figures when instructed to perform acts that conflicted with their morals. The study was based on the premise that humans will inherently take direction from authority figures from very early in life.
Participants were told they were participating in a study on memory. They were asked to watch another person (an actor) do a memory test. They were instructed to press a button that gave an electric shock each time the person got a wrong answer. (The actor did not actually receive the shocks, but pretended they did).
Participants were told to play the role of "teacher" and administer electric shocks to "the learner" every time he answered a question incorrectly. The experimenters asked the participants to keep increasing the shocks, and most of them obeyed even though the individual completing the memory test appeared to be in great pain and protested. Despite these protests, many participants continued the experiment when the authority figure urged them to, increasing the voltage after each wrong answer until some eventually administered what would have been lethal electric shocks.
This experiment showed that humans are conditioned to obey authority and will usually do so even if it goes against their natural morals or common sense.
19. Surrogate Mother Experiment
Study conducted by: Harry Harlow
Study Conducted from 1957-1963 at the University of Wisconsin
Experiment Details: In a series of controversial experiments during the late 1950s and early 1960s, Harry Harlow studied the importance of a mother’s love for healthy childhood development.
In order to do this, he separated infant rhesus monkeys from their mothers a few hours after birth and left them to be raised by two "surrogate mothers." One of the surrogates was made of wire with an attached bottle for food. The other was made of soft terrycloth but lacked food. The researcher found that the baby monkeys spent much more time with the cloth mother than the wire mother, suggesting that affection plays a greater role than sustenance when it comes to childhood development. They also found that the monkeys that spent more time cuddling the soft mother grew up to be healthier.
This experiment showed that love, as demonstrated by physical body contact, is a more important aspect of the parent-child bond than the provision of basic needs. These findings also had implications in the attachment between fathers and their infants when the mother is the source of nourishment.
20. The Good Samaritan Experiment
Study conducted by: John Darley and Daniel Batson
Study Conducted in 1973 at The Princeton Theological Seminary (Researchers were from Princeton University)
Experiment Details: In 1973, an experiment was created by John Darley and Daniel Batson, to investigate the potential causes that underlie altruistic behavior. The researchers set out three hypotheses they wanted to test:
- People thinking about religion and higher principles would be no more inclined to show helping behavior than laymen.
- People in a rush would be much less likely to show helping behavior.
- People who are religious for personal gain would be less likely to help than people who are religious because they want to gain some spiritual and personal insights into the meaning of life.
Student participants were given some religious teaching and instruction. They were then told to travel from one building to the next. Between the two buildings was a man lying injured and appearing to be in dire need of assistance. The first variable being tested was the degree of urgency impressed upon the subjects, with some being told not to rush and others being informed that speed was of the essence.
The results of the experiment were intriguing, with the haste of the subject proving to be the overriding factor. When the subject was in no hurry, nearly two-thirds of people stopped to lend assistance. When the subject was in a rush, this dropped to one in ten.
People who were on the way to deliver a speech about helping others were nearly twice as likely to help as those delivering other sermons. This showed that the thoughts of the individual were a factor in determining helping behavior. Religious beliefs did not appear to make much difference in the results; being religious for personal gain, or as part of a spiritual quest, did not appear to make much of an impact on the amount of helping behavior shown.
21. The Halo Effect Experiment
Study conducted by: Richard E. Nisbett and Timothy DeCamp Wilson
Study Conducted in 1977 at the University of Michigan
Experiment Details: The Halo Effect states that people generally assume that people who are physically attractive are more likely to:
- be intelligent
- be friendly
- display good judgment
To prove their theory, Nisbett and DeCamp Wilson created a study to prove that people have little awareness of the nature of the Halo Effect. They’re not aware that it influences:
- their personal judgments
- the production of a more complex social behavior
In the experiment, college students were the research participants. They were asked to evaluate a psychology instructor as they viewed him in a videotaped interview. The students were randomly assigned to one of two groups, and each group was shown one of two different interviews with the same instructor. The instructor was a native French-speaking Belgian who spoke English with a noticeable accent. In the first video, the instructor presented himself as someone:
- respectful of his students’ intelligence and motives
- flexible in his approach to teaching
- enthusiastic about his subject matter
In the second interview, he presented himself as much more unlikable. He was cold and distrustful toward the students and was quite rigid in his teaching style.
After watching the videos, the subjects were asked to rate the lecturer on:
- physical appearance
- mannerisms
- accent
His mannerisms and accent were kept the same in both versions of the video. The subjects rated the professor on an 8-point scale ranging from “like extremely” to “dislike extremely.” Some subjects were then told that the researchers were interested in knowing “how much their liking for the teacher influenced the ratings they just made.” Other subjects were asked to identify how much the characteristics they had just rated influenced their liking of the teacher.
After responding to the questionnaire, the respondents were puzzled by their reactions to the videotapes and to the questionnaire items. The students had no idea why they had given one lecturer higher ratings. Most said that how much they liked the lecturer had not affected their evaluation of his individual characteristics at all.
The interesting thing about this study is that people can understand the phenomenon, yet remain unaware when it is occurring. Without realizing it, people let a global impression of a person color their judgments of that person’s individual traits. Even when this is pointed out, they may still deny that their ratings are a product of the halo effect.
22. The Marshmallow Test
Study conducted by: Walter Mischel
Study Conducted in 1972 at Stanford University
In his 1972 Marshmallow Experiment, children ages four to six were taken into a room where a marshmallow was placed in front of them on a table. Before leaving each child alone in the room, the experimenter said that the child would receive a second marshmallow if the first one was still on the table when the experimenter returned 15 minutes later. The examiner recorded how long each child resisted eating the marshmallow; follow-up research later examined whether that delay correlated with the child’s success in adulthood. A small number of the 600 children ate the marshmallow immediately, and about one-third delayed gratification long enough to receive the second marshmallow.
In follow-up studies, Mischel found that those who deferred gratification were significantly more competent and received higher SAT scores than their peers. This characteristic likely remains with a person for life. While this study seems simplistic, the findings outline some of the foundational differences in individual traits that can predict success.
23. The Monster Study
Study conducted by: Wendell Johnson
Study Conducted in 1939 at the University of Iowa
Experiment Details: The Monster Study received this negative title due to the unethical methods that were used to determine the effects of positive and negative speech therapy on children.
Wendell Johnson of the University of Iowa selected 22 orphaned children, some with stutters and some without, and divided them into two groups. The group of children with stutters was placed in positive speech therapy, where they were praised for their fluency. The non-stutterers were placed in negative speech therapy, where they were disparaged for every speech imperfection they made.
As a result of the experiment, some of the children who received negative speech therapy suffered psychological effects and retained speech problems for the rest of their lives. The study became a stark example of the significance of positive reinforcement in education.
The initial goal of the study was to investigate positive and negative speech therapy. However, its implications extend much further, into methods of teaching young children.
24. Violinist at the Metro Experiment
Study conducted by: Staff at the Washington Post
Study Conducted in 2007 at a Washington D.C. Metro Train Station
During the study, pedestrians rushed by without realizing that the musician playing at the entrance to the metro stop was the Grammy-winning violinist Joshua Bell. Two days before playing in the subway, he had sold out a theater in Boston where seats averaged $100. At the metro he played one of the most intricate pieces ever written, on a violin worth $3.5 million. In the 45 minutes Bell played, only six people stopped and stayed for a while. Around 20 gave him money but continued to walk at their normal pace. He collected $32.
The study, and the subsequent article organized by the Washington Post, was part of a social experiment looking at people's priorities.
Gene Weingarten wrote of the social experiment: “In a banal setting at an inconvenient time, would beauty transcend?” He later won a Pulitzer Prize for the story. Some of the questions the article addresses are:
- Do we perceive beauty?
- Do we stop to appreciate it?
- Do we recognize the talent in an unexpected context?
As it turns out, many of us are not nearly as attentive to our environment as we might like to think.
25. Visual Cliff Experiment
Study conducted by: Eleanor Gibson and Richard Walk
Study Conducted in 1959 at Cornell University
Experiment Details: In 1959, psychologists Eleanor Gibson and Richard Walk set out to study depth perception in infants. They wanted to know if depth perception is a learned behavior or if it is something that we are born with. To study this, Gibson and Walk conducted the visual cliff experiment.
They studied 36 infants between the ages of six and 14 months, all of whom could crawl. The infants were placed one at a time on a visual cliff, created using a large glass table raised about a foot off the floor. Half of the glass table had a checker pattern directly underneath it, creating the appearance of a ‘shallow side.’
To create a ‘deep side,’ the checker pattern was placed on the floor beneath the other half of the table; this side is the visual cliff, because the placement of the pattern on the floor creates the illusion of a sudden drop-off. Researchers placed a foot-wide centerboard between the shallow side and the deep side. Gibson and Walk found the following:
- Nine of the infants did not move off the centerboard.
- All of the 27 infants who did move crossed into the shallow side when their mothers called them from the shallow side.
- Three of the infants crawled off the visual cliff toward their mother when called from the deep side.
- When called from the deep side, the remaining 24 children either crawled to the shallow side or cried because they could not cross the visual cliff and make it to their mother.
What this study helped demonstrate is that depth perception is likely an inborn trait in humans.
Among these experiments and psychological tests, we see boundaries pushed and theories taking on a life of their own. It is through this endless stream of psychological experimentation that simple hypotheses have become guiding theories for those in the field. Psychology became a formal field of experimental study in 1879, when Wilhelm Wundt established the first laboratory dedicated solely to psychological research in Leipzig, Germany; Wundt was the first person to refer to himself as a psychologist. Since 1879, psychology has grown into a massive collection of methods of practice, as well as a specialty area in the field of healthcare. None of this would have been possible without these and many other important psychological experiments that have stood the test of time.
About the Author
After earning a Bachelor of Arts in Psychology from Rutgers University and then a Master of Science in Clinical and Forensic Psychology from Drexel University, Kristen began a career as a therapist at two prisons in Philadelphia. At the same time she volunteered as a rape crisis counselor, also in Philadelphia. After a few years in the field she accepted a teaching position at a local college where she currently teaches online psychology courses. Kristen began writing in college and still enjoys her work as a writer, editor, professor and mother.
Science Museum Exhibition Road London SW7 2DD
Clinical trials and medical experiments
Published: 30 July 2019
Experimentation is an essential part of scientific medicine.
Doctors have always conducted investigations and experiments in order to understand the body in sickness and health, and to test the effectiveness of treatments. Medical laboratories carry out experimental research into new techniques and treatments, but at some point developments intended for use on patients have to be tested on people.
Experimenting with the living—animals and humans—is complex and sometimes dangerous. In their efforts to discover more about diseases and find effective treatments, doctors and researchers have put vulnerable and powerless patients at risk.
The modern clinical trial—an experiment in which people are the test subjects—has developed over time not only to ensure the optimal conditions to produce valid, scientific results but also to safeguard the rights and well-being of participants.
Clinical trials
In the 1030s, the physician Ibn Sina put forward rules for testing the effect of drugs on patients. One key criterion was that:
The effect of the drug should be the same in all cases or, at least, in most. If that is not the case, the effect is then accidental, because things that occur naturally are always or mostly consistent. Ibn Sina
This remains the essential criterion for any treatment—that it has the same effect on most patients in similar conditions. But testing a drug on one person does not tell you very much. Their response may not be typical, side effects may be the result of an allergy, or their recovery may be due to some external factor.
Today new medical devices and drugs have to undergo several stages of testing before they reach the final stage of being tested on people. Drug testing and regulation was tightened in the mid-1960s following the impact of thalidomide worldwide. Usually a therapy is tested on animals before clinical trials are permitted.
Participants in clinical trials are carefully selected in order to limit the number of variable factors that might affect the results. For example, only patients at the same stage of a particular condition may be selected in order to see if a new therapy is effective in treating the condition at that stage.
In order to run a clinical trial on people, researchers have to go through a rigorous procedure that includes registering the trial with the authorities and presenting their proposal to an ethics committee, which will decide if the trial is valid and whether there are safeguards in place to ensure that participants understand what will happen to them.
Randomised clinical trials
In randomised trials, the test subjects are divided into at least two different treatment groups. Participants are assigned to a group at random.
One group is usually given the standard treatment for their condition. They are the control group. People in the other group (or groups) will have the treatment or procedure that is being tested. A randomised trial that has a control group is called a randomised controlled trial (RCT).
If there is no standard treatment, then people in the control group may be given a dummy treatment, called a placebo. A placebo is a treatment with no medical effects. It allows researchers to take into account the psychological influence of experiencing treatment, regardless of what is in the treatment.
A blind trial is one in which the people taking part don't know which treatment they are getting. A double-blind trial is one in which neither the researchers nor the patients know which treatment each patient is getting; the group assignments are kept secret until the end of the trial.
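The bookkeeping behind a double-blind randomised trial can be sketched in a few lines of Python. This is a hypothetical illustration, not real trial software: the participant names, the `randomise` function, and the two-arm design are all invented for the example. The point is that assignments are made at random and stored under anonymous codes, so neither patients nor researchers can tell who is in which arm until the sealed key is opened at the end of the trial.

```python
import random

def randomise(participants, arms=("control", "treatment"), seed=None):
    """Randomly assign participants to trial arms.

    Returns (schedule, key): the schedule maps an anonymous code to a
    participant and is all that researchers see day to day; the key maps
    each code to its arm and stays sealed until the trial ends.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)          # random order removes selection bias
    schedule, key = {}, {}
    for i, person in enumerate(shuffled):
        code = f"P{i:03d}"         # anonymous participant code
        schedule[code] = person
        key[code] = arms[i % len(arms)]  # alternate arms after shuffling
    return schedule, key

schedule, key = randomise(["Ann", "Ben", "Cal", "Dee"], seed=1)
# Researchers and patients work only with codes like "P000";
# the arm assignments in `key` remain sealed until unblinding.
```

Alternating arms after a shuffle keeps the groups the same size; real trials use more elaborate schemes (block or stratified randomisation) for the same purpose.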
What is informed consent?
Legally and ethically, participants in a clinical trial need adequate information to make an informed decision about taking part. This includes what tests are involved, what the risks and benefits may be, how much of your time it will take, and what will happen to any of your samples after the trial.
The modern definition of informed consent came out of the Nuremberg trials, a series of legal trials between 1945 and 1947 to prosecute surviving German war criminals after the Second World War.
People were shocked by the horrific things done by doctors in the name of medical research and the Nuremberg Code was developed as a result. It is the basis for all rules regarding human experiments, including the requirement for informed consent.
Most countries now have regulatory boards for clinical trials that insist on informed consent before people can participate in clinical trials.
Before the Nuremberg Code, people in charge of human experiments did not have to tell their patients what they were doing. Some groups of people had no choice in whether or not they participated.
British troops heading to the South African War (1899–1902) were offered a new typhoid vaccine before it was fully tested and the side-effects understood and eliminated. These side effects were one reason why take-up of the vaccine was so low. Alongside volunteers, some prisoners were used to test a new cholera vaccine in India in 1897.
Throughout the 1900s, psychiatrists who wanted to find effective treatments for conditions such as schizophrenia tested experimental convulsive shock therapies on their patients. Researchers had little knowledge of the effects—and patients were not always asked for their consent.
Self-experimentation
Occasionally medical researchers decide to test a new idea or treatment on the most convenient test subject around—themselves. They might do this because the weight of medical opinion is resistant to their idea and they can’t get funding or support to test it any other way.
Or they might simply have wanted to prove their theory before sharing it with others. Whatever their reasons, self-experimentation has contributed some valuable treatments and techniques to medicine—but it has also gone very wrong.
Do-it-yourself anaesthesia
One field of medicine seems to be full of self-experimenters. The American dentist William Morton was one of several people to try ether as an anaesthetic on himself after witnessing its numbing effects on revellers at the ‘ether frolics’ that were the craze in the 1800s.
The Scottish surgeon James Young Simpson and his friends were searching for an alternative general anaesthetic to ether and tested several compounds on themselves, including chloroform. Another celebrated surgeon, Joseph Lister , took a more scientific approach when he and his wife Agnes tested different doses of chloroform on themselves to find the most effective one for his patients.
But perhaps the most surprising case of self-experimentation in anaesthesia was that of the German surgeon August Bier, who decided to find out for himself the effects of cocaine as a local anaesthetic by having his assistant Augustus Hildebrandt inject it into the fluid surrounding the spinal cord.
But, thanks to a mix-up with the equipment, Bier was left with a hole in his neck that began to leak cerebrospinal fluid. Rather than abandon the effort, however, the two men switched places. Once Hildebrandt had been anaesthetized, Bier stabbed, hammered and burned his assistant, pulled out his pubic hairs and squashed his testicles!
Needless to say both felt the after-effects in subsequent days. But cocaine did prove to be a very effective local anaesthetic and was a forerunner to the modern epidural.
How to cause an ulcer
Australian doctor Barry Marshall had a theory that challenged the medical consensus of the day. He and his colleague, pathologist Robin Warren, were convinced that ulcers were caused by the bacterium Helicobacter pylori, and not—as was the general medical opinion—that they were the result of lifestyle factors such as stress, spicy foods and alcohol.
They had tried to submit their findings to a peer-reviewed journal in 1983, but their paper was turned down. In 1984, Marshall drank a broth containing cultured H. pylori, because he wanted to see the effects on a healthy person. As he explained: "I was the only person informed enough to consent".
He expected to develop an ulcer after perhaps a year, so he was surprised when, only three days later, he developed nausea and halitosis (bad breath). On day five, he began vomiting. On day eight, an endoscopy showed massive inflammation (gastritis, a precursor to an ulcer) in his stomach, and a biopsy showed that the H. pylori had colonised his stomach.
On the fourteenth day Marshall began to take antibiotics to fight the H. pylori infection.
The traditional treatment for severe ulcers was antacids and medications that block acid production in the stomach. Despite this treatment, there was a high recurrence of ulcers. Marshall and Warren’s discovery meant that ulcers could now be cured using antibiotics, preventing years of pain and discomfort and saving money on pharmaceuticals that didn’t work.
Marshall and Warren won the Nobel Prize for Physiology or Medicine in 2005 for their work.
Animal experiments
Animals have long been used for dissections and medical experiments. For centuries, human dissection was severely restricted and physicians and surgeons relied on animal dissection to learn about human anatomy.
The Roman physician Galen dissected pigs and monkeys to develop his knowledge of anatomy. Although he was restricted by law to dissecting animals, the three years he spent from 158 CE as physician to the gladiators of his home city of Pergamon were a formative period in his life in medicine. The traumatic injuries he regularly encountered gave Galen the perfect opportunity to extend his practical medical knowledge of the human body.
Discussion about whether to experiment on animals has always been part of the debate. Some religious authorities held that animals had no souls and were under the dominion of mankind, along with the rest of the natural world. The philosopher and researcher René Descartes (1596-1650) claimed that animals did not feel pain.
The number of experiments on animals increased in the 1800s with the rise of life sciences such as experimental physiology. The French physiologist Claude Bernard used animals in his research and drew criticism for it from opponents, including his own wife and daughters.
Louis Pasteur used rabbits to develop a vaccine for rabies and was the target of protests.
As scientific experimentation on living animals, known as vivisection, grew, so did the anti-vivisection movement. In 1875 the activist Frances Power Cobbe founded the Society for the Protection of Animals Liable to Vivisection. The protests of the early animal rights movement led to the Cruelty to Animals Act of 1876, which regulated animal experimentation in England, Wales and Ireland.
Modern medical research still relies on animals. As well as medical research, testing on animals, primarily rats and mice, is used to assess the safety or effectiveness of products such as drugs, chemicals and cosmetics. Medical researchers are increasingly aware of animal welfare and continue to seek scientific alternatives to animal testing.
Where the ability to replace animal experiments with alternatives such as tissue cultures, microorganisms or computer models is limited, researchers have tried to reduce the amount of animal testing needed. This is because, apart from the ethical concerns, animal experiments are expensive and (as with all experiments on living organisms) highly complicated.
Both scientific research organisations and animal rights groups promote the use and development of methods of scientific testing that don’t use animals, such as:
In vitro techniques
An example of a toxicity test in animals that is being replaced is the LD50 test, in which the concentration of a chemical is increased in a population of test animals until 50 percent of the animals die.
A similar in vitro test is the IC50 test, which measures cytotoxicity (cell toxicity) as the concentration of a chemical needed to inhibit the growth of half of a population of cells. The IC50 test uses human cells grown in the laboratory and thus produces data that are more relevant to humans than an LD50 value obtained from rats, mice, or other animals.
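As a rough illustration of how an IC50 value is read off dose-response measurements, here is a short Python sketch. The data are invented, and real assays fit a sigmoid curve to the measurements rather than interpolating linearly between two points, but the idea is the same: find the concentration at which growth crosses 50% of the untreated control.

```python
def ic50(concs, growth):
    """Estimate the IC50: the concentration at which cell growth falls
    to 50% of the untreated control, found by linear interpolation
    between the two measured points that bracket the 50% level.

    concs  -- increasing chemical concentrations
    growth -- matching growth values as fractions of control (1.0 = uninhibited)
    """
    points = list(zip(concs, growth))
    for (c0, g0), (c1, g1) in zip(points, points[1:]):
        if g0 >= 0.5 >= g1:  # the 50% crossing lies between these doses
            return c0 + (g0 - 0.5) * (c1 - c0) / (g0 - g1)
    raise ValueError("growth never crosses 50% in the tested range")

# Hypothetical dose-response data (concentration vs. relative growth):
# growth falls from 70% at dose 2 to 40% at dose 4, so the IC50 is
# about 3.33, two-thirds of the way between those doses.
estimate = ic50([1, 2, 4, 8], [0.9, 0.7, 0.4, 0.1])
```

The same crossing-point logic applies to an LD50 calculation; only the measured quantity (animal mortality rather than cell growth) differs.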
In silico techniques (computer modelling)
Researchers have developed a wide range of sophisticated computer models that simulate human biology and the progression of disease. Studies show that these models can be used to predict the ways that new drugs will react in the human body without the need for a lot of animal testing.
Suggestions for further research
- A Harrington (ed.), The Placebo Effect: An Interdisciplinary Exploration , 1997
- J S Hawkins and E J Emanuel (eds.), Exploitation and Developing Countries: The Ethics of Clinical Research, 2008
- Ruth Chadwick and Duncan Wilson, 'The Emergence and Development of Bioethics in the UK', Medical Law Review, Vol. 26 No. 2