Problem 5


The following is a quotation from Sir Ronald A. Fisher, a famous statistician. "For the logical fallacy of believing that a hypothesis has been proved true, merely because it is not contradicted by the available facts, has no more right to insinuate itself in statistics than in other kinds of scientific reasoning \(\ldots\) It would, therefore, add greatly to the clarity with which the tests of significance are regarded if it were generally understood that tests of significance, when used accurately, are capable of rejecting or invalidating hypotheses, in so far as they are contradicted by the data: but that they are never capable of establishing them as certainly true \(\ldots\)" In your own words, explain what this quotation means.

Short Answer

Fisher warns that statistical tests can only reject hypotheses, not prove them true. Avoid assuming a hypothesis is true just because it isn't contradicted by data.

Step by step solution

01

Identify the Key Points

First, identify the key points of Fisher's quotation. He discusses the logical fallacy of assuming a hypothesis is true just because it hasn't been contradicted, and he emphasizes that statistical tests of significance can only reject hypotheses, not prove them true.
02

Understand the Logical Fallacy

Recognize the logical fallacy mentioned: assuming a hypothesis is true because it hasn't been disproven. In reasoning, this is similar to saying, 'No one has shown me definitive evidence that contradicts my claim, so my claim must be true.' This is flawed logic.
03

Relate to Statistical Significance

Understand how this fallacy applies to statistical tests of significance. Fisher asserts that these tests can show whether data contradict a hypothesis, but they can't definitively prove the hypothesis to be true.
04

Paraphrase the Quotation

Now, paraphrase the quotation in your own words. For example, 'Fisher is saying that we shouldn't think a hypothesis is true just because our data doesn't contradict it. Statistical tests can tell us if a hypothesis is likely false, but they can't confirm its truth absolutely.'


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

logical fallacy
A 'logical fallacy' is an error in reasoning that undermines the logic of an argument. In Sir Ronald A. Fisher's quotation, he mentions a specific logical fallacy. This fallacy occurs when someone believes a hypothesis is true simply because no evidence has contradicted it. Imagine you claim, 'It has not rained today because no one has told me they saw rain.' This is faulty reasoning because the absence of evidence against your claim doesn't necessarily validate it. Fisher warns against this kind of thinking in statistical reasoning. Just because a hypothesis isn’t contradicted by current data doesn't mean it’s true. This fallacy is important to understand because it can mislead researchers into accepting unproven hypotheses.
tests of significance
In the context of hypothesis testing, 'tests of significance' are tools statisticians use to determine whether their data contradict a given hypothesis. These tests assess whether observed outcomes are consistent with what would be expected under the null hypothesis. Common tests include the t-test, chi-square test, and ANOVA. Each test produces a p-value: the probability, assuming the null hypothesis is true, of obtaining results at least as extreme as those observed. According to Fisher, while these tests can indicate when data contradict a hypothesis, they cannot prove the hypothesis true. This is crucial because it sets a limit on what statistical tests can achieve: they help us invalidate or reject hypotheses that don't align with the data, but they don't confirm the absolute truth of a hypothesis.
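As a concrete illustration of how a test of significance produces a p-value, here is a minimal Python sketch of a right-tailed one-proportion z-test. The function name and the sample numbers (75 successes in 200 trials, testing \(p_0=0.3\)) are illustrative choices, not part of the original solution:

```python
from statistics import NormalDist

def one_proportion_z_test(x, n, p0):
    """Right-tailed one-proportion z-test via the normal approximation.

    Tests H0: p = p0 against H1: p > p0; returns (z, p_value).
    """
    p_hat = x / n
    se = (p0 * (1 - p0) / n) ** 0.5       # standard error under H0
    z = (p_hat - p0) / se
    p_value = 1 - NormalDist().cdf(z)     # P(Z >= z) for a standard normal
    return z, p_value

# Illustrative numbers: 75 successes in 200 trials, testing p0 = 0.3
z, p = one_proportion_z_test(75, 200, 0.3)
print(f"z = {z:.3f}, p-value = {p:.4f}")  # small p-value: data contradict H0
```

A small p-value lets us reject \(H_0\); in Fisher's terms, the data contradict the hypothesis. A large p-value, however, would only mean the data fail to contradict \(H_0\), never that \(H_0\) is proven true.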
hypothesis rejection
When conducting a statistical test, 'hypothesis rejection' occurs if the data significantly contradict the null hypothesis. The null hypothesis generally states that there is no effect or no difference. If a test returns a p-value below a predetermined significance level (commonly 0.05), the null hypothesis is rejected. Rejection implies that the observed data would be unlikely if the null hypothesis were true. Fisher's point is that while rejecting a hypothesis suggests it is not true, failing to reject it does not confirm that it is true. There can be many reasons for failing to reject a hypothesis, such as an insufficient sample size or low statistical power. This understanding helps prevent the logical fallacy of believing a hypothesis is true just because it hasn't been disproven.
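The sample-size point can be made quantitative with a power calculation. The sketch below (our own illustration; the function, the true proportion 0.35, and the sample sizes are all hypothetical) approximates the power of the right-tailed one-proportion z-test when the null hypothesis is actually false. With a small sample the test usually fails to reject a false null, which is exactly why non-rejection cannot establish truth:

```python
from statistics import NormalDist

def power_right_tailed(p0, p_true, n, alpha=0.05):
    """Approximate power of the right-tailed one-proportion z-test
    when the true proportion is p_true (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)              # critical z for chosen alpha
    se0 = (p0 * (1 - p0) / n) ** 0.5             # SE under H0
    se1 = (p_true * (1 - p_true) / n) ** 0.5     # SE under the true p
    crit = p0 + z_alpha * se0                    # rejection threshold for p-hat
    return 1 - nd.cdf((crit - p_true) / se1)

# Hypothetical scenario: H0 says p = 0.3, but the true p is 0.35.
low_power = power_right_tailed(0.3, 0.35, n=20)     # small sample: rarely rejects
high_power = power_right_tailed(0.3, 0.35, n=2000)  # large sample: almost always rejects
print(f"power with n=20:   {low_power:.2f}")
print(f"power with n=2000: {high_power:.2f}")
```

With n = 20 the false null survives most of the time, so "not rejected" here clearly does not mean "true"; it mostly reflects a lack of data.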


Most popular questions from this chapter

(a) determine the null and alternative hypotheses, (b) explain what it would mean to make a Type I error, and (c) explain what it would mean to make a Type II error. According to Giving and Volunteering in the United States, 2001 Edition, the mean charitable contribution per household in the United States in 2000 was \(\$1623\). A researcher believes that the level of giving has changed since then.

In Problems \(3-8,\) test the hypothesis, using (a) the classical approach and then (b) the P-value approach. Be sure to verify the requirements of the test. $$\begin{aligned}&H_{0}: p=0.3 \text { versus } H_{1}: p>0.3\\&n=200 ; x=75 ; \alpha=0.05\end{aligned}$$

Simulation Simulate drawing 50 simple random samples of size \(n=20\) from a population that is normally distributed with mean 80 and standard deviation \(7\). (a) Test the null hypothesis \(H_{0}: \mu=80\) versus the alternative hypothesis \(H_{1}: \mu \neq 80\). (b) Suppose we were testing this hypothesis at the \(\alpha=0.1\) level of significance. How many of the 50 samples would you expect to result in a Type I error? (c) Count the number of samples that lead to a rejection of the null hypothesis. Is it close to the expected value determined in part (b)? (d) Describe how we know that a rejection of the null hypothesis results in making a Type I error in this situation.
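The simulation described above can be sketched in Python as follows. The random seed is an arbitrary choice for reproducibility, and the test is the two-sided z-test with \(\sigma\) known; this is one reasonable reading of the exercise, not the textbook's own code:

```python
import math
import random
from statistics import NormalDist

random.seed(0)                                 # arbitrary seed, for reproducibility

MU0, SIGMA, N, SAMPLES, ALPHA = 80, 7, 20, 50, 0.1
z_crit = NormalDist().inv_cdf(1 - ALPHA / 2)   # two-sided critical value, about 1.645

rejections = 0
for _ in range(SAMPLES):
    sample = [random.gauss(MU0, SIGMA) for _ in range(N)]
    xbar = sum(sample) / N
    z = (xbar - MU0) / (SIGMA / math.sqrt(N))  # z-test with sigma known
    if abs(z) > z_crit:
        rejections += 1

# H0 is true by construction (the data really come from N(80, 7)),
# so every rejection here is necessarily a Type I error.
print("expected Type I errors:", ALPHA * SAMPLES)
print("observed rejections:", rejections)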

To test \(H_{0}: \mu=105\) versus \(H_{1}: \mu \neq 105,\) a random sample of size \(n=35\) is obtained from a population whose standard deviation is known to be \(\sigma=12\). (a) Does the population need to be normally distributed to compute the \(P\)-value? (b) If the sample mean is determined to be \(\bar{x}=101.2,\) compute and interpret the \(P\)-value. (c) If the researcher decides to test this hypothesis at the \(\alpha=0.02\) level of significance, will the researcher reject the null hypothesis? Why?

In your own words, explain the difference between "beyond all reasonable doubt" and "beyond all doubt."
