Problem 33: Equal opportunity?

Equal opportunity? A company is sued for job discrimination because only \(19 \%\) of the newly hired candidates were minorities when \(27 \%\) of all applicants were minorities. Is this strong evidence that the company's hiring practices are discriminatory? a) Is this a one-tailed or a two-tailed test? Why? b) In this context, what would a Type I error be? c) In this context, what would a Type II error be? d) In this context, what is meant by the power of the test? e) If the hypothesis is tested at the \(5 \%\) level of significance instead of \(1 \%,\) how will this affect the power of the test? f) The lawsuit is based on the hiring of 37 employees. Is the power of the test higher than, lower than, or the same as it would be if it were based on 87 hires?

Short Answer

a) One-tailed. b) Concluding discrimination exists when it doesn't. c) Not detecting discrimination when it does. d) Likelihood of detecting true discrimination. e) Increases power. f) Higher with 87 hires.

Step by step solution

Step 1: Identifying the Test Type

Since the lawsuit claims that the hiring process discriminates against minorities, we are looking for evidence that the proportion of minorities hired is less than the proportion of minority applicants. Thus, this is a one-tailed test because we are only interested in testing if the hiring proportion is significantly lower than the applicant proportion.
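This one-tailed reasoning can be sketched numerically with a one-proportion z-test. This is a minimal illustration, not part of the original solution; the figures (a 19% minority share among 37 hires against a 27% applicant share) come from the problem statement, and the left-tailed direction is assumed because the lawsuit alleges under-hiring.

```python
# Minimal sketch: left-tailed one-proportion z-test on the hiring data.
from math import sqrt
from statistics import NormalDist

p0 = 0.27      # H0: minority hiring rate equals the applicant share
p_hat = 0.19   # observed minority share among new hires
n = 37         # number of hires in the lawsuit

se = sqrt(p0 * (1 - p0) / n)   # standard error under H0
z = (p_hat - p0) / se          # test statistic
p_value = NormalDist().cdf(z)  # left-tailed: P(Z <= z)

print(round(z, 2), round(p_value, 3))
```

With these numbers the z statistic is about -1.1 and the one-tailed P-value about 0.14, which, for what it's worth, would not be strong evidence at conventional significance levels.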

Step 2: Understanding Type I Error

A Type I error occurs when we reject the null hypothesis when it is actually true. In this context, a Type I error would mean concluding that the company's hiring policies are discriminatory when, in fact, they are not.

Step 3: Understanding Type II Error

A Type II error happens when we fail to reject the null hypothesis when it is false. For this situation, it means concluding that the hiring practices are not discriminatory when they actually are discriminatory.

Step 4: Understanding Power of the Test

The power of a test is the probability that it correctly rejects a false null hypothesis. In this case, it represents the likelihood that the test will detect discrimination if discrimination truly exists.

Step 5: Effect of the Significance Level

Testing at a 5% level of significance instead of 1% increases the probability of rejecting the null hypothesis when it is actually false (i.e., increases power). This means the test is more likely to detect actual discrimination but also has a higher risk of Type I error.
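As a rough numerical check of this effect, the sketch below computes the test's approximate power at both significance levels under an assumed true minority-hiring rate of 17%. That 17% figure is purely hypothetical (the problem gives no true rate); it is chosen only to make the comparison concrete.

```python
from math import sqrt
from statistics import NormalDist

def power(alpha, n, p0=0.27, p_true=0.17):
    """Approximate power of the left-tailed one-proportion z-test.
    p_true = 0.17 is an assumed 'true' hiring rate (not given in the
    problem), used only to illustrate how alpha affects power."""
    nd = NormalDist()
    se0 = sqrt(p0 * (1 - p0) / n)          # standard error under H0
    crit = p0 + nd.inv_cdf(alpha) * se0    # reject H0 when p-hat falls below this
    se1 = sqrt(p_true * (1 - p_true) / n)  # standard error under the alternative
    return nd.cdf((crit - p_true) / se1)   # P(reject H0 | p = p_true)

print(power(0.05, 37), power(0.01, 37))
```

Under this assumed alternative, power is noticeably higher at the 5% level than at the 1% level, matching the conclusion above.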

Step 6: Comparing Sample Sizes

The power of a test increases with larger sample sizes. Therefore, if the analysis was based on 87 hires instead of 37, the test would have higher power, making it more capable of detecting true discrimination if it exists.
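A quick normal-approximation sketch makes the sample-size comparison concrete. The assumed true hiring rate of 17% is hypothetical (the problem specifies no true rate); only the relative ordering of the two powers matters here.

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
p0, p_true, alpha = 0.27, 0.17, 0.05  # p_true is an assumed illustrative value

def power_at(n):
    """Approximate power of the left-tailed one-proportion z-test for n hires."""
    se0 = sqrt(p0 * (1 - p0) / n)          # standard error under H0
    crit = p0 + nd.inv_cdf(alpha) * se0    # rejection cutoff for p-hat
    se1 = sqrt(p_true * (1 - p_true) / n)  # standard error under the alternative
    return nd.cdf((crit - p_true) / se1)

print(power_at(37), power_at(87))
```

Under this assumed alternative, the power based on 87 hires is well above the power based on 37, consistent with the conclusion above.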


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Type I Error
In the context of hypothesis testing, a **Type I Error** occurs when the null hypothesis is incorrectly rejected when it is actually true. Imagine you are conducting a study. You have an assumption or null hypothesis (often a statement of "no effect" or "no difference") that you are testing. If your test results lead you to reject this null hypothesis when, in truth, your initial assumption was accurate, a Type I Error has occurred. In the job discrimination example, this means concluding the company's hiring practices are discriminatory when, in fact, they are not. This kind of error is also known as a "false positive" or "alpha error."

To manage the risk of committing a Type I Error, you set a **significance level** prior to conducting your test. This level, denoted by alpha (\( \alpha \)), is the threshold at which you are willing to reject the null hypothesis.
Type II Error
A **Type II Error** happens when you fail to reject the null hypothesis, even though it is false. Think of it as a "missed opportunity" to identify an actual effect or difference. In our job discrimination scenario, this error leads to the conclusion that there is no discrimination in hiring when there actually is. This type of error is referred to as a "false negative" or "beta error."

The probability of making a Type II Error is denoted by beta (\( \beta \)). To reduce the risk of a Type II Error, you can increase your sample size or select a higher significance level, both of which can enhance the **power of the test**.
Power of a Test
The **Power of a Test** is an important concept in hypothesis testing. It represents the probability that the test will reject a false null hypothesis. In simpler terms, it's the ability of the test to detect an effect or a difference when one truly exists. In our discrimination example, it means the capability of the test to reveal discriminatory hiring if it's actually happening.

Power is influenced by various factors, including the significance level, sample size, and the effect size. Increasing the sample size or choosing a higher significance level often increases the power of the test. More power means a reduced chance of a Type II Error occurring. Mathematically, power is calculated as \( 1 - \beta \), with beta representing the probability of a Type II Error.
Significance Level
The **Significance Level** is foundational in hypothesis testing. It is a threshold set by the researcher, usually denoted as \( \alpha \), which determines the probability of rejecting the null hypothesis incorrectly, thus committing a Type I Error. Common significance levels are 0.05, 0.01, and 0.10, each reflecting a 5%, 1%, and 10% risk of rejecting the null hypothesis when it is actually true.

In our example about testing for job discrimination, if the significance level is set at 5%, it means there is a 5% risk of concluding that discrimination exists when it doesn't. A lower significance level, like 1%, indicates a more stringent criterion for rejecting the null hypothesis, thus reducing the chance of a Type I Error, but potentially increasing the risk of a Type II Error.
One-tailed Test
When conducting a hypothesis test, choosing between a **one-tailed** or a two-tailed test is significant. A one-tailed test checks for the possibility of an effect in a single direction. In our discrimination scenario, we are particularly interested in testing whether the proportion of minorities hired is less than the proportion of minority applicants. This interest in a specific directional outcome makes it a one-tailed test.

Such a test is suitable when it is appropriate to consider only one direction of interest. Alternatively, a two-tailed test would examine both directions (greater or less than) for any significant difference. One-tailed tests can be more powerful if a specific direction is expected, as they allocate the entire alpha level to testing that one direction.
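To make the one- vs. two-tailed distinction concrete, the sketch below compares the two p-values for the same test statistic; the z value of -1.1 is illustrative (roughly what the hiring data would yield), and a two-tailed p-value is simply double the single-tail area for a symmetric null distribution.

```python
from statistics import NormalDist

z = -1.1                          # illustrative test statistic
nd = NormalDist()
one_tailed = nd.cdf(z)            # area in the lower tail only
two_tailed = 2 * nd.cdf(-abs(z))  # area in both tails
print(round(one_tailed, 3), round(two_tailed, 3))
```

The one-tailed test concentrates all of alpha in the direction of interest, which is why it reaches significance more easily when the effect lies in that direction.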


Most popular questions from this chapter

Hypotheses and parameters As in Exercise \(1,\) for each of the following situations, define the parameter and write the null and alternative hypotheses in terms of parameter values. a) Seat-belt compliance in Massachusetts was \(65 \%\) in \(2008 .\) The state wants to know if it has changed. b) Last year, a survey found that \(45 \%\) of the employees were willing to pay for on-site day care. The company wants to know if that has changed. c) Regular card customers have a default rate of \(6.7 \%\) A credit card bank wants to know if that rate is different for their Gold card customers.

Spam Spam filters try to sort your e-mails, deciding which are real messages and which are unwanted. One method used is a point system. The filter reads each incoming e-mail and assigns points to the sender, the subject, key words in the message, and so on. The higher the point total, the more likely it is that the message is unwanted. The filter has a cutoff value for the point total; any message rated lower than that cutoff passes through to your inbox, and the rest, suspected to be spam, are diverted to the junk mailbox. We can think of the filter's decision as a hypothesis test. The null hypothesis is that the e-mail is a real message and should go to your inbox. A higher point total provides evidence that the message may be spam; when there's sufficient evidence, the filter rejects the null, classifying the message as junk. This usually works pretty well, but, of course, sometimes the filter makes a mistake. a) When the filter allows spam to slip through into your inbox, which kind of error is that? b) Which kind of error is it when a real message gets classified as junk? c) Some filters allow the user (that's you) to adjust the cutoff. Suppose your filter has a default cutoff of 50 points, but you reset it to \(60 .\) Is that analogous to choosing a higher or lower value of \(\alpha\) for a hypothesis test? Explain. d) What impact does this change in the cutoff value have on the chance of each type of error?

More errors For each of the following situations, state whether a Type I, a Type II, or neither error has been made. a) A test of \(\mathrm{H}_{0}: p=0.8\) vs. \(\mathrm{H}_{\mathrm{A}}: p<0.8\) fails to reject the null hypothesis. Later it is discovered that \(p=0.9\). b) A test of \(\mathrm{H}_{0}: p=0.5\) vs. \(\mathrm{H}_{\mathrm{A}}: p \neq 0.5\) rejects the null hypothesis. Later it is discovered that \(p=0.65\). c) A test of \(\mathrm{H}_{0}: p=0.7\) vs. \(\mathrm{H}_{\mathrm{A}}: p<0.7\) fails to reject the null hypothesis. Later it is discovered that \(p=0.6\).

Stop signs Highway safety engineers test new road signs, hoping that increased reflectivity will make them more visible to drivers. Volunteers drive through a test course with several of the new- and old-style signs and rate which kind shows up the best. a) Is this a one-tailed or a two-tailed test? Why? b) In this context, what would a Type I error be? c) In this context, what would a Type II error be? d) In this context, what is meant by the power of the test? e) If the hypothesis is tested at the \(1 \%\) level of significance instead of \(5 \%,\) how will this affect the power of the test? f) The engineers hoped to base their decision on the reactions of 50 drivers, but time and budget constraints may force them to cut back to 20. How would this affect the power of the test? Explain.

Significant again? A new reading program may reduce the number of elementary school students who read below grade level. The company that developed this program supplied materials and teacher training for a large-scale test involving nearly 8500 children in several different school districts. Statistical analysis of the results showed that the percentage of students who did not meet the grade-level goal was reduced from \(15.9 \%\) to \(15.1 \%\). The hypothesis that the new reading program produced no improvement was rejected with a P-value of 0.023. a) Explain what the P-value means in this context. b) Even though this reading method has been shown to be significantly better, why might you not recommend that your local school adopt it?
