Chapter 10: Problem 15
$$f ( x ) = \sin x - 6 \sin 4 x$$
Short Answer
Step by step solution
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Find a solution to the mixed boundary value problem \(\frac{\partial^{2} u}{\partial r^{2}}+\frac{1}{r} \frac{\partial u}{\partial r}+\frac{1}{r^{2}} \frac{\partial^{2} u}{\partial \theta^{2}}=0, \quad 1< r <3, \quad-\pi \leq \theta \leq \pi,\) \(u(1, \theta)=f(\theta), \quad-\pi \leq \theta \leq \pi,\) \(\frac{\partial u}{\partial r}(3, \theta)=g(\theta), \quad-\pi \leq \theta \leq \pi.\)
$$\frac{\partial^{2} u}{\partial t^{2}}+\frac{\partial u}{\partial t}+u=\alpha^{2} \frac{\partial^{2} u}{\partial x^{2}}$$ with \(u(x, t)=X(x) T(t)\) yields $$X^{\prime\prime}(x)+\lambda X(x)=0,$$ $$T^{\prime\prime}(t)+T^{\prime}(t)+\left(1+\lambda \alpha^{2}\right) T(t)=0,$$ where \(\lambda\) is a constant.
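As a quick check (a sketch added here, not part of the original text), a symbolic computation confirms that substituting \(u(x,t)=X(x)T(t)\) and imposing the two separated ODEs makes the PDE residual vanish:

```python
import sympy as sp

x, t, alpha, lam = sp.symbols('x t alpha lamda')
X, T = sp.Function('X'), sp.Function('T')

u = X(x) * T(t)
# residual of the damped wave equation: u_tt + u_t + u - alpha^2 * u_xx
residual = sp.diff(u, t, 2) + sp.diff(u, t) + u - alpha**2 * sp.diff(u, x, 2)

# impose the separated ODEs: X'' = -lam*X and T'' = -T' - (1 + lam*alpha^2)*T
residual = residual.subs(sp.Derivative(X(x), x, 2), -lam * X(x))
residual = residual.subs(sp.Derivative(T(t), t, 2),
                         -sp.Derivative(T(t), t) - (1 + lam * alpha**2) * T(t))

residual = sp.simplify(sp.expand(residual))
print(residual)  # 0: the separated ODEs are consistent with the PDE
```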
Find a solution to the following Dirichlet problem for a half disk: \(\frac{\partial^{2} u}{\partial r^{2}}+\frac{1}{r} \frac{\partial u}{\partial r}+\frac{1}{r^{2}} \frac{\partial^{2} u}{\partial \theta^{2}}=0, \quad 0<r<1, \quad 0<\theta<\pi,\) \(u(r, 0)=0, \quad 0 \leq r \leq 1,\) \(u(r, \pi)=0, \quad 0 \leq r \leq 1,\) \(u(1, \theta)=\sin 3 \theta, \quad 0 \leq \theta \leq \pi,\) \(u(0, \theta)\) bounded.
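For this half-disk problem, separation of variables suggests the candidate \(u(r, \theta)=r^{3} \sin 3 \theta\) (an assumption sketched here, not quoted from the original solution). A symbolic check that it is harmonic and fits every boundary condition:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# candidate solution (assumed for illustration): u(r, theta) = r^3 * sin(3*theta)
u = r**3 * sp.sin(3 * theta)

# polar Laplacian: u_rr + u_r / r + u_theta_theta / r^2
laplacian = sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, theta, 2) / r**2

print(sp.simplify(laplacian))                  # 0, so u is harmonic
print(u.subs(theta, 0), u.subs(theta, sp.pi))  # 0 0, vanishes on the diameter
print(u.subs(r, 1))                            # sin(3*theta), matches the arc data
```

The candidate is also bounded as \(r \to 0\), so it satisfies all four conditions.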
Least-Squares Approximation Property. The \(N\)th partial sum of the Fourier series $$f(x) \sim \frac{a_{0}}{2}+\sum_{n=1}^{\infty}\left\{a_{n} \cos n x+b_{n} \sin n x\right\}$$ gives the best mean-square approximation of \(f\) by a trigonometric polynomial. To prove this, let \(F_{N}(x)\) denote an arbitrary trigonometric polynomial of degree \(N\): $$F_{N}(x)=\frac{\alpha_{0}}{2}+\sum_{n=1}^{N}\left\{\alpha_{n} \cos n x+\beta_{n} \sin n x\right\},$$ and define $$E:=\int_{-\pi}^{\pi}\left[f(x)-F_{N}(x)\right]^{2} d x,$$ which is the total square error. Expanding the integrand, we get $$E=\int_{-\pi}^{\pi} f^{2}(x) d x-2 \int_{-\pi}^{\pi} f(x) F_{N}(x) d x+\int_{-\pi}^{\pi} F_{N}^{2}(x) d x.$$ (a) Use the orthogonality of the functions \(\{1, \cos x, \sin x, \cos 2 x, \ldots\}\) to show that $$\int_{-\pi}^{\pi} F_{N}^{2}(x) d x=\pi\left(\frac{\alpha_{0}^{2}}{2}+\alpha_{1}^{2}+\cdots+\alpha_{N}^{2}+\beta_{1}^{2}+\cdots+\beta_{N}^{2}\right)$$ and $$\int_{-\pi}^{\pi} f(x) F_{N}(x) d x=\pi\left(\frac{\alpha_{0} a_{0}}{2}+\alpha_{1} a_{1}+\cdots+\alpha_{N} a_{N}+\beta_{1} b_{1}+\cdots+\beta_{N} b_{N}\right).$$ (b) Let \(E^{*}\) be the error when we approximate \(f\) by the \(N\)th partial sum of its Fourier series, that is, when we choose \(\alpha_{n}=a_{n}\) and \(\beta_{n}=b_{n}\). Show that $$E^{*}=\int_{-\pi}^{\pi} f^{2}(x) d x-\pi\left(\frac{a_{0}^{2}}{2}+a_{1}^{2}+\cdots+a_{N}^{2}+b_{1}^{2}+\cdots+b_{N}^{2}\right).$$ (c) Using the results of parts (a) and (b), show that \(E-E^{*} \geq 0\), that is, \(E \geq E^{*}\), by proving that $$E-E^{*}=\pi\left\{\frac{\left(\alpha_{0}-a_{0}\right)^{2}}{2}+\left(\alpha_{1}-a_{1}\right)^{2}+\cdots+\left(\alpha_{N}-a_{N}\right)^{2}+\left(\beta_{1}-b_{1}\right)^{2}+\cdots+\left(\beta_{N}-b_{N}\right)^{2}\right\}.$$ Hence, the \(N\)th partial sum of the Fourier series gives the least total square error, since \(E \geq E^{*}\).
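The inequality \(E \geq E^{*}\) can be illustrated numerically. The sketch below (an illustration added here, not part of the exercise) takes \(f(x)=x\) on \([-\pi, \pi]\), whose Fourier sine coefficients are \(b_{n}=2(-1)^{n+1}/n\), and checks that randomly perturbing the coefficients never reduces the total square error below \(E^{*}\):

```python
import numpy as np

# f(x) = x on [-pi, pi]; a_n = 0 since f is odd, b_n = 2*(-1)**(n+1)/n
xs = np.linspace(-np.pi, np.pi, 20001)
dx = xs[1] - xs[0]
f = xs
N = 5

def sq_error(beta):
    """Total square error E for the sine polynomial with coefficients beta."""
    FN = sum(b * np.sin((n + 1) * xs) for n, b in enumerate(beta))
    return np.sum((f - FN) ** 2) * dx  # Riemann-sum approximation of the integral

b_fourier = np.array([2 * (-1) ** (n + 1) / n for n in range(1, N + 1)])
E_star = sq_error(b_fourier)

# no perturbed coefficient vector should beat the Fourier coefficients
rng = np.random.default_rng(0)
E_perturbed = min(sq_error(b_fourier + rng.normal(scale=0.3, size=N))
                  for _ in range(200))
print(E_star <= E_perturbed)
```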
Find a formal solution to the given boundary value problem.
$$
\frac{\partial^{2} u}{\partial x^{2}}+\frac{\partial^{2} u}{\partial y^{2}}=0, \quad 0
$$