Problem 125: Constrained Least Squares

Constrained Least Squares. Suppose we wish to find the least squares estimator of \(\beta\) in the model \(\mathbf{y}=\mathbf{X} \boldsymbol{\beta}+\boldsymbol{\epsilon}\) subject to a set of equality constraints, say, \(\mathbf{T} \boldsymbol{\beta}=\mathbf{c} .\) (a) Show that the estimator is $$ \begin{array}{l} \hat{\boldsymbol{\beta}}_{c}=\hat{\boldsymbol{\beta}}+\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \times \mathbf{T}^{\prime}\left[\mathbf{T}\left(\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1}\right) \mathbf{T}^{\prime}\right]^{-1}(\mathbf{c}-\mathbf{T} \hat{\boldsymbol{\beta}}) \\ \text { where } \hat{\boldsymbol{\beta}}=\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{y} \end{array} $$ (b) Discuss situations where this model might be appropriate.

Short Answer

The constrained least squares estimator adds a correction term to the OLS estimate so that the fitted parameters satisfy the equality constraints exactly; this is useful in structured econometric and resource-allocation models.

Step by step solution

01

Understand the Least Squares Estimator

The ordinary least squares (OLS) estimator of \( \boldsymbol{\beta} \) is obtained by minimizing the sum of squared differences between \( \mathbf{y} \) and \( \mathbf{X}\boldsymbol{\beta} \), that is, by minimizing \( (\mathbf{y}-\mathbf{X}\boldsymbol{\beta})'(\mathbf{y}-\mathbf{X}\boldsymbol{\beta}) \). The minimizer is \( \hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}'\mathbf{y} \).
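The OLS formula can be sketched numerically as follows; this is a minimal NumPy illustration on synthetic data, with all names and values invented for the demo.

```python
import numpy as np

# Illustrative sketch of the OLS estimator beta_hat = (X'X)^{-1} X'y
# on synthetic data; the sizes and coefficients here are arbitrary.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 regressors
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Solve the normal equations X'X beta = X'y (numerically preferable
# to forming the inverse explicitly).
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # close to beta_true, since the noise is small
```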
02

Introduce the Constraint

We need to incorporate the constraint \( \mathbf{T} \boldsymbol{\beta}=\mathbf{c} \) into our estimator. This constraint modifies our solution to account for specific conditions that \( \beta \) must satisfy.
03

Derive the Constrained Estimator

To derive the constrained estimator, minimize the error sum of squares subject to the constraint using a vector of Lagrange multipliers \( \boldsymbol{\lambda} \). Form the Lagrangian \( L(\boldsymbol{\beta}, \boldsymbol{\lambda}) = (\mathbf{y}-\mathbf{X}\boldsymbol{\beta})'(\mathbf{y}-\mathbf{X}\boldsymbol{\beta}) + 2\boldsymbol{\lambda}'(\mathbf{T}\boldsymbol{\beta}-\mathbf{c}) \). Setting the derivative with respect to \( \boldsymbol{\beta} \) to zero gives \( \mathbf{X}'\mathbf{X}\boldsymbol{\beta} = \mathbf{X}'\mathbf{y} - \mathbf{T}'\boldsymbol{\lambda} \), so \( \boldsymbol{\beta} = \hat{\boldsymbol{\beta}} - (\mathbf{X}'\mathbf{X})^{-1}\mathbf{T}'\boldsymbol{\lambda} \). Substituting this into \( \mathbf{T}\boldsymbol{\beta} = \mathbf{c} \) and solving for \( \boldsymbol{\lambda} \) yields the correction term added to the OLS solution: \( \hat{\boldsymbol{\beta}}_{c}=\hat{\boldsymbol{\beta}}+ (\mathbf{X}'\mathbf{X})^{-1} \mathbf{T}' [\mathbf{T} (\mathbf{X}'\mathbf{X})^{-1} \mathbf{T}']^{-1}(\mathbf{c}-\mathbf{T}\hat{\boldsymbol{\beta}}) \).
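The estimator from part (a) can be checked numerically: applying \( \mathbf{T} \) to \( \hat{\boldsymbol{\beta}}_{c} \) should return \( \mathbf{c} \) exactly, up to rounding. The data and the constraint below (the two slope coefficients summing to 1) are invented purely for illustration.

```python
import numpy as np

# Numerical check of the constrained estimator in part (a), using
# invented data and the illustrative constraint beta_1 + beta_2 = 1.
rng = np.random.default_rng(1)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.7, 0.3]) + rng.normal(scale=0.2, size=n)

T = np.array([[0.0, 1.0, 1.0]])   # one linear constraint: T beta = c
c = np.array([1.0])

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y      # unconstrained OLS

# beta_c = beta_hat + (X'X)^{-1} T' [T (X'X)^{-1} T']^{-1} (c - T beta_hat)
beta_c = beta_hat + XtX_inv @ T.T @ np.linalg.solve(
    T @ XtX_inv @ T.T, c - T @ beta_hat
)
print(T @ beta_c)  # ~ [1.] : the constraint holds up to rounding
```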
04

Analyze Benefits and Applicability

This model is appropriate whenever the estimated parameters must satisfy known conditions exactly, such as when theory dictates that certain coefficients are equal or sum to a given value. Typical scenarios include econometric models whose coefficients must obey restrictions implied by economic theory, and resource-allocation problems where estimated shares must sum to a fixed total.


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Ordinary Least Squares (OLS)
Ordinary Least Squares (OLS) is a fundamental method used in statistics and econometrics to estimate the parameters of a linear regression model. It aims to find the parameter values that minimize the sum of the squares of the differences between the observed values and the values predicted by the model. The OLS estimator can be expressed with the formula:
  • \( \hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}'\mathbf{y} \)
where \( \mathbf{y} \) represents the vector of observed values, \( \mathbf{X} \) is the matrix of predictor variables, and \( \hat{\boldsymbol{\beta}} \) denotes the vector of estimated parameters.

By utilizing OLS, one ensures that the differences between actual and predicted values, also known as residuals, are minimized, resulting in the best linear unbiased estimate (BLUE) of the parameters. This technique is particularly significant due to its simplicity and optimal properties under classical linear regression assumptions.
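A useful consequence of the normal equations is that the OLS residual vector is orthogonal to every column of \( \mathbf{X} \); the fitted values are the orthogonal projection of \( \mathbf{y} \) onto the column space of \( \mathbf{X} \). A small NumPy check of this identity, on synthetic data chosen only for illustration:

```python
import numpy as np

# OLS residuals satisfy X'(y - X beta_hat) = 0: the fitted values
# are the orthogonal projection of y onto the column space of X.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(40), rng.normal(size=(40, 2))])
y = rng.normal(size=40)          # any response vector works for this identity

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
residuals = y - X @ beta_hat
print(X.T @ residuals)           # ~ zero vector, up to floating-point rounding
```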
Linear Regression
Linear Regression is a predictive modeling technique that explores the relationship between a dependent variable and one or more independent variables using a linear equation. The primary goal of linear regression is to model the linear relationship and predict outcomes based on input data.

In any linear regression model, the aim is to find a linear combination of independent variables that best explains the dependent variable. The model can be mathematically expressed as:
  • \( \mathbf{y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\epsilon} \)
Here, \( \boldsymbol{\epsilon} \) represents the error term capturing the differences not explained by the model. The parameters \( \boldsymbol{\beta} \) are unknown and are to be estimated through methods like OLS.

Linear regression is widely utilized due to its interpretability and efficiency in depicting relationships in complex data sets. However, it assumes linear relationships, normality, homoscedasticity, and independence among observations, which are crucial for the model's validity.
Parameter Constraints
In real-world applications, sometimes we need to impose specific conditions on the parameters of our models. These conditions are known as parameter constraints and are essential for various practical applications. Parameter constraints ensure that the estimated parameters meet particular criteria relevant to the problem at hand.

For example, suppose we have a linear regression model with an additional constraint \( \mathbf{T} \boldsymbol{\beta} = \mathbf{c} \) imposed on the parameters. Such constraints can be vital in scenarios like:
  • Econometric models where parameters must satisfy market conditions.
  • Resource allocation models where totals need to adhere to budget limits or resource availability.
Incorporating these constraints transforms the ordinary least squares estimator into a constrained least squares (CLS) estimator, enabling the model to obey the necessary conditions. The derived estimator becomes:
  • \( \hat{\boldsymbol{\beta}}_{c} = \hat{\boldsymbol{\beta}} + (\mathbf{X}'\mathbf{X})^{-1} \mathbf{T}' [\mathbf{T} ((\mathbf{X}'\mathbf{X})^{-1}) \mathbf{T}']^{-1}(\mathbf{c}-\mathbf{T}\hat{\boldsymbol{\beta}}) \)
Understanding parameter constraints ensures that the model outputs are both meaningful and applicable to real-world scenarios where specific conditions must be respected.
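Because the constrained estimator minimizes the same sum of squares over a smaller feasible set, its residual sum of squares can never fall below the unconstrained one. A quick numerical illustration of this trade-off; the data and the constraint (equal slopes, \( \beta_1 - \beta_2 = 0 \)) are invented for the sketch:

```python
import numpy as np

# Sketch: constrained vs. unconstrained residual sums of squares.
rng = np.random.default_rng(3)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([0.5, 1.2, 0.8]) + rng.normal(scale=0.3, size=n)

T = np.array([[0.0, 1.0, -1.0]])  # illustrative constraint: beta_1 = beta_2
c = np.array([0.0])

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
beta_c = beta_hat + XtX_inv @ T.T @ np.linalg.solve(
    T @ XtX_inv @ T.T, c - T @ beta_hat
)

def rss(b):
    # Residual sum of squares for a candidate coefficient vector.
    return float(np.sum((y - X @ b) ** 2))

print(rss(beta_hat) <= rss(beta_c))  # True: constraints cannot lower the RSS
```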


Most popular questions from this chapter

An article in Technometrics (1974, Vol. 16, pp. \(523-531\) ) considered the following stack-loss data from a plant oxidizing ammonia to nitric acid. Twenty-one daily responses of stack loss (the amount of ammonia escaping) were measured with air flow \(x_{1},\) temperature \(x_{2}\), and acid concentration \(x_{3}\). $$ \begin{aligned} y=& 42,37,37,28,18,18,19,20,15,14,14,13,11,12,8,7, \\ & 8,8,9,15,15 \\ x_{1}=& 80,80,75,62,62,62,62,62,58,58,58,58,58,58,50,50, \\ & 50,50,50,56,70 \\ x_{2}=& 27,27,25,24,22,23,24,24,23,18,18,17,18,19,18,18, \\ & 19,19,20,20,20 \\ x_{3}=& 89,88,90,87,87,87,93,93,87,80,89,88,82,93,89,86, \\ & 72,79,80,82,91 \end{aligned} $$ (a) Fit a linear regression model relating the results of the stack loss to the three regressor variables. (b) Estimate \(\sigma^{2}\). (c) Find the standard error \(\operatorname{se}\left(\hat{\boldsymbol{\beta}}_{j}\right)\). (d) Use the model in part (a) to predict stack loss when \(x_{1}=60\), \(x_{2}=26,\) and \(x_{3}=85\).

The data from a patient satisfaction survey in a hospital are in Table E12-1. $$ \begin{array}{cccccc} \hline \begin{array}{l} \text { Obser- } \\ \text { vation } \end{array} & \text { Age } & \text { Severity } & \text { Surg-Med } & \text { Anxiety } & \begin{array}{c} \text { Satis- } \\ \text { faction } \end{array} \\ \hline 1 & 55 & 50 & 0 & 2.1 & 68 \\ 2 & 46 & 24 & 1 & 2.8 & 77 \\ 3 & 30 & 46 & 1 & 3.3 & 96 \\ 4 & 35 & 48 & 1 & 4.5 & 80 \\ 5 & 59 & 58 & 0 & 2.0 & 43 \\ 6 & 61 & 60 & 0 & 5.1 & 44 \\ 7 & 74 & 65 & 1 & 5.5 & 26 \\ 8 & 38 & 42 & 1 & 3.2 & 88 \\ 9 & 27 & 42 & 0 & 3.1 & 75 \\ 10 & 51 & 50 & 1 & 2.4 & 57 \\ 11 & 53 & 38 & 1 & 2.2 & 56 \\ 12 & 41 & 30 & 0 & 2.1 & 88 \\ 13 & 37 & 31 & 0 & 1.9 & 88 \\ 14 & 24 & 34 & 0 & 3.1 & 102 \\ 15 & 42 & 30 & 0 & 3.0 & 88 \\ 16 & 50 & 48 & 1 & 4.2 & 70 \\ 17 & 58 & 61 & 1 & 4.6 & 52 \\ 18 & 60 & 71 & 1 & 5.3 & 43 \\ 19 & 62 & 62 & 0 & 7.2 & 46 \\ 20 & 68 & 38 & 0 & 7.8 & 56 \\ 21 & 70 & 41 & 1 & 7.0 & 59 \\ 22 & 79 & 66 & 1 & 6.2 & 26 \\ 23 & 63 & 31 & 1 & 4.1 & 52 \\ 24 & 39 & 42 & 0 & 3.5 & 83 \\ 25 & 49 & 40 & 1 & 2.1 & 75 \\ \hline \end{array} $$ The regressor variables are the patient's age, an illness severity index (higher values indicate greater severity), an indicator variable denoting whether the patient is a medical patient (0) or a surgical patient (1), and an anxiety index (higher values indicate greater anxiety). (a) Fit a multiple linear regression model to the satisfaction response using age, illness severity, and the anxiety index as the regressors. (b) Estimate \(\sigma^{2}\). (c) Find the standard errors of the regression coefficients. (d) Are all of the model parameters estimated with nearly the same precision? Why or why not?

Consider the following inverse model matrix. \(\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1}=\left[\begin{array}{cccc}0.125 & 0 & 0 & 0 \\ 0 & 0.125 & 0 & 0 \\ 0 & 0 & 0.125 & 0 \\ 0 & 0 & 0 & 0.125\end{array}\right]\) (a) How many regressors are in this model? (b) What was the sample size? (c) Notice the special diagonal structure of the matrix. What does that tell you about the columns in the original X matrix?

Heat treating is often used to carburize metal parts such as gears. The thickness of the carburized layer is considered a crucial feature of the gear and contributes to the overall reliability of the part. Because of the critical nature of this feature, two different lab tests are performed on each furnace load. One test is run on a sample pin that accompanies each load. The other test is a destructive test that cross-sections an actual part. This test involves running a carbon analysis on the surface of both the gear pitch (top of the gear tooth) and the gear root (between the gear teeth). Table \(\mathrm{E} 12-6\) shows the results of the pitch carbon analysis test for 32 parts. $$ \begin{array}{cccccc} \hline \text { TEMP } & \text { SOAKTIME } & \text { SOAKPCT } & \text { DIFFTIME } & \text { DIFFPCT } & \text { PITCH } \\ \hline 1650 & 0.58 & 1.10 & 0.25 & 0.90 & 0.013 \\ 1650 & 0.66 & 1.10 & 0.33 & 0.90 & 0.016 \\ 1650 & 0.66 & 1.10 & 0.33 & 0.90 & 0.015 \\ 1650 & 0.66 & 1.10 & 0.33 & 0.95 & 0.016 \\ 1600 & 0.66 & 1.15 & 0.33 & 1.00 & 0.015 \\ 1600 & 0.66 & 1.15 & 0.33 & 1.00 & 0.016 \\ 1650 & 1.00 & 1.10 & 0.50 & 0.80 & 0.014 \\ 1650 & 1.17 & 1.10 & 0.58 & 0.80 & 0.021 \\ 1650 & 1.17 & 1.10 & 0.58 & 0.80 & 0.018 \\ 1650 & 1.17 & 1.10 & 0.58 & 0.80 & 0.019 \\ 1650 & 1.17 & 1.10 & 0.58 & 0.90 & 0.021 \\ 1650 & 1.17 & 1.10 & 0.58 & 0.90 & 0.019 \end{array} $$ $$ \begin{array}{cccccc} \hline \text { TEMP } & \text { SOAKTIME } & \text { SOAKPCT } & \text { DIFFTIME } & \text { DIFFPCT } & \text { PITCH } \\ \hline 1650 & 1.17 & 1.15 & 0.58 & 0.90 & 0.021 \\ 1650 & 1.20 & 1.15 & 1.10 & 0.80 & 0.025 \\ 1650 & 2.00 & 1.15 & 1.00 & 0.80 & 0.025 \\ 1650 & 2.00 & 1.10 & 1.10 & 0.80 & 0.026 \\ 1650 & 2.20 & 1.10 & 1.10 & 0.80 & 0.024 \\ 1650 & 2.20 & 1.10 & 1.10 & 0.80 & 0.025 \\ 1650 & 2.20 & 1.15 & 1.10 & 0.80 & 0.024 \\ 1650 & 2.20 & 1.10 & 1.10 & 0.90 & 0.025 \\ 1650 & 2.20 & 1.10 & 1.10 & 0.90 & 0.027 \\ 1650 & 2.20 & 1.10 & 1.50 & 0.90 & 0.026 \\ 1650 & 3.00 & 
1.15 & 1.50 & 0.80 & 0.029 \\ 1650 & 3.00 & 1.10 & 1.50 & 0.70 & 0.030 \\ 1650 & 3.00 & 1.10 & 1.50 & 0.75 & 0.028 \\ 1650 & 3.00 & 1.15 & 1.66 & 0.85 & 0.032 \\ 1650 & 3.33 & 1.10 & 1.50 & 0.80 & 0.033 \\ 1700 & 4.00 & 1.10 & 1.50 & 0.70 & 0.039 \\ 1650 & 4.00 & 1.10 & 1.50 & 0.70 & 0.040 \\ 1650 & 4.00 & 1.15 & 1.50 & 0.85 & 0.035 \\ 1700 & 12.50 & 1.00 & 1.50 & 0.70 & 0.056 \\ 1700 & 18.50 & 1.00 & 1.50 & 0.70 & 0.068 \end{array} $$ The regressors are furnace temperature (TEMP), carbon concentration and duration of the carburizing cycle (SOAKPCT, SOAKTIME), and carbon concentration and duration of the diffuse cycle (DIFFPCT, DIFFTIME). (a) Fit a linear regression model relating the results of the pitch carbon analysis test (PITCH) to the five regressor variables. (b) Estimate \(\sigma^{2}\). (c) Find the standard errors \(\operatorname{se}\left(\hat{\boldsymbol{\beta}}_{j}\right)\) (d) Use the model in part (a) to predict PITCH when TEMP = \(1650,\) SOAKTIME \(=1.00,\) SOAKPCT \(=1.10,\) DIFFTIME \(=\) \(1.00,\) and \(\mathrm{DIFFPCT}=0.80\)

You have fit a regression model with two regressors to a data set that has 20 observations. The total sum of squares is 1000 and the model sum of squares is 750. (a) What is the value of \(R^{2}\) for this model? (b) What is the adjusted \(R^{2}\) for this model? (c) What is the value of the \(F\) -statistic for testing the significance of regression? What conclusions would you draw about this model if \(\alpha=0.05 ?\) What if \(\alpha=0.01 ?\) (d) Suppose that you add a third regressor to the model and as a result, the model sum of squares is now \(785.\) Does it seem to you that adding this factor has improved the model?
