Problem 38 | 91影视


The numerical solution to the time-independent Schrödinger equation in Problem 2.61 can be extended to solve the time-dependent Schrödinger equation. When we discretize the variable \(x,\) we obtain the matrix equation \(\mathrm{H} \Psi=i \hbar \frac{d}{d t} \Psi\) The solution to this equation can be written \\[ \Psi(t+\Delta t)=\mathrm{U}(\Delta t) \Psi(t) \\] (11.150) If \(\mathrm{H}\) is time independent, the exact expression for the time-evolution operator is \\[ \mathrm{U}(\Delta t)=e^{-i \mathrm{H} \Delta t / \hbar} \\] (11.151) and for \(\Delta t\) small enough, the time-evolution operator can be approximated as \\[ \mathrm{U}(\Delta t) \approx 1-i \mathrm{H} \frac{\Delta t}{\hbar} \\] (11.152) While Equation 11.152 is the most obvious way to approximate \(\mathrm{U}\), a numerical scheme based on it is unstable, and it is preferable to use Cayley's form for the approximation: \\[ \mathrm{U}(\Delta t) \approx \frac{1-\frac{1}{2} i \frac{\Delta t}{\hbar} \mathrm{H}}{1+\frac{1}{2} i \frac{\Delta t}{\hbar} \mathrm{H}} \\] (11.153) Combining Equations 11.153 and 11.150 we have \\[ \left(1+\frac{1}{2} i \frac{\Delta t}{\hbar} \mathrm{H}\right) \Psi(t+\Delta t)=\left(1-\frac{1}{2} i \frac{\Delta t}{\hbar} \mathrm{H}\right) \Psi(t) \\] (11.154) This has the form of a matrix equation \(\mathrm{M} \mathbf{x}=\mathbf{b},\) which can be solved for the unknown \(\mathbf{x}=\Psi(t+\Delta t)\). Because the matrix \(\mathrm{M}=1+\frac{1}{2} i \frac{\Delta t}{\hbar} \mathrm{H}\) is tridiagonal, efficient algorithms exist for doing so. (a) Show that the approximation in Equation 11.153 is accurate to second order. That is, show that Equations 11.151 and 11.153, expanded as power series in \(\Delta t,\) agree up through terms of order \((\Delta t)^{2}\). Verify that the matrix in Equation 11.153 is unitary. As an example, consider a particle of mass \(m\) moving in one dimension in a simple harmonic oscillator potential. For the numerical part set \(m=1, \omega=1,\) and \(\hbar=1\) (this just defines the units of mass, time, and length).
(b) Construct the Hamiltonian matrix \(\mathrm{H}\) for \(N+1=100\) spatial grid points. Set the spatial boundaries where the dimensionless length is \(\xi=\pm 10\) (far enough out that we can assume the wave function vanishes there for low-energy states). By computer, find the lowest two eigenvalues of \(\mathrm{H}\) and compare them with the exact values. Plot the corresponding eigenfunctions. Are they normalized? If not, normalize them before doing part (c). (c) Take \(\Psi(0)=\left(\psi_{0}+\psi_{1}\right) / \sqrt{2}\) (from part (b)) and use Equation 11.154 to evolve the wave function from time \(t=0\) to \(t=4 \pi / \omega\). Create a movie (Animate, in Mathematica) showing \(\operatorname{Re}(\Psi(t)), \operatorname{Im}(\Psi(t)),\) and \(|\Psi(t)|,\) together with the exact result. Hint: You need to decide what to use for \(\Delta t\). In terms of the number of time steps \(N_{t},\) \(N_{t} \Delta t=4 \pi / \omega\). In order for the approximation of the exponential to hold, we need \(E \Delta t / \hbar \ll 1\). The energy of our state is of order \(\hbar \omega,\) and therefore \(N_{t} \gg 4 \pi\). So you will need at least (say) 100 time steps.

Short Answer

Expert verified
Approximation is accurate to second order; matrix is unitary. Numerically solve eigenvalue problem to animate \(\Psi(t)\).

Step by step solution

01

Expand the Approximations in Power Series

To verify the accuracy of Equation 11.153 to second order, expand both Equation 11.151 and Equation 11.153 as power series in \( \Delta t \). For Equation 11.151, the expansion is \( e^{-i H \Delta t / \hbar} = 1 - i H \frac{\Delta t}{\hbar} - \frac{1}{2} H^2 \left( \frac{\Delta t}{\hbar} \right)^2 + \cdots \). For Equation 11.153, write \( U(\Delta t) = \left(1 - \frac{1}{2} i \frac{\Delta t}{\hbar} H\right)\left(1 + \frac{1}{2} i \frac{\Delta t}{\hbar} H\right)^{-1} \) and use the geometric series \( (1+x)^{-1} = 1 - x + x^2 - \cdots \) to expand the second factor.
02

Show Agreement up to Second Order

After expanding both approximations, compare the terms. Writing \( a \equiv \frac{1}{2} i \frac{\Delta t}{\hbar} H \), Equation 11.153 becomes \( (1-a)(1-a+a^2-\cdots) = 1 - 2a + 2a^2 + \cdots = 1 - i H \frac{\Delta t}{\hbar} - \frac{1}{2} H^2 \left( \frac{\Delta t}{\hbar} \right)^2 + \cdots \), which matches the expansion of Equation 11.151 up through the second-order term \( (\Delta t)^2 \). The two expansions first differ at order \( (\Delta t)^3 \), so the approximation is accurate to second order.
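As a quick symbolic sanity check (not part of the formal proof), the two expansions can be compared with SymPy. Since \(H\) is Hermitian, it suffices to work with a single real eigenvalue; the dimensionless variable `x` below stands in for \(E \Delta t / \hbar\):

```python
# Symbolically verify that the Cayley form agrees with the exact
# exponential through second order in dt (x plays the role of E*dt/hbar).
import sympy as sp

x = sp.symbols('x', real=True)
exact = sp.exp(-sp.I * x)
cayley = (1 - sp.I * x / 2) / (1 + sp.I * x / 2)

# Expand the difference through O(x^4); terms up to x^2 cancel,
# leaving only a third-order remainder.
diff = sp.series(exact - cayley, x, 0, 4).removeO()
print(sp.simplify(diff))  # a pure x**3 term: the schemes agree through x**2
```

The surviving term is proportional to \(x^3\), confirming second-order accuracy.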
03

Verify Unitarity of the Matrix

To check that the matrix in Equation 11.153 is unitary, verify that \( U U^{\dagger} = I \), where \( U^{\dagger} \) is the conjugate transpose. Since \( H \) is Hermitian (\( H^{\dagger} = H \)), taking the conjugate transpose of \( U(\Delta t) = \frac{1 - \frac{1}{2} i \frac{\Delta t}{\hbar} H}{1 + \frac{1}{2} i \frac{\Delta t}{\hbar} H} \) simply swaps numerator and denominator: \( U^{\dagger} = \frac{1 + \frac{1}{2} i \frac{\Delta t}{\hbar} H}{1 - \frac{1}{2} i \frac{\Delta t}{\hbar} H} \). Because all four factors are functions of \( H \) and therefore commute, the product collapses to \( U U^{\dagger} = I \), confirming unitarity.
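A small numerical check (illustrative only; the random 5x5 Hermitian matrix is an assumption, not the oscillator Hamiltonian) confirms that the Cayley propagator is unitary to machine precision:

```python
# Build a random Hermitian H, form the Cayley propagator
# U = (1 + i dt H / 2)^(-1) (1 - i dt H / 2), and check U U^dagger = I.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
H = (A + A.conj().T) / 2            # Hermitian by construction
dt = 0.1                            # hbar = 1 in these units
I = np.eye(5)
U = np.linalg.solve(I + 0.5j * dt * H, I - 0.5j * dt * H)

print(np.allclose(U @ U.conj().T, I))  # True: exact unitarity, any dt
</imports>```

Note that unitarity holds for any step size, not just small \( \Delta t \); this is what makes the scheme stable.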
04

Construct the Hamiltonian Matrix

Consider a simple harmonic oscillator potential \( V(x) = \frac{1}{2} m \omega^2 x^2 \) with the parameters \(m = 1\), \(\omega = 1\), \(\hbar = 1\). Discretize space into \(N+1 = 100\) grid points over \([-10, 10]\) in the dimensionless length \(\xi\). In discretized form the Hamiltonian \( H \) is a tridiagonal matrix: the off-diagonal elements come from the finite-difference kinetic energy, and the diagonal elements combine the kinetic and potential energy terms.
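A minimal sketch of this construction in Python (NumPy; units \(m = \omega = \hbar = 1\) as stated above, with the wave function assumed to vanish at the grid endpoints):

```python
# Tridiagonal harmonic-oscillator Hamiltonian on 100 grid points in [-10, 10].
import numpy as np

N = 100                                   # number of spatial grid points
xi = np.linspace(-10.0, 10.0, N)          # dimensionless length
dxi = xi[1] - xi[0]

# Kinetic term -1/2 d^2/dxi^2 via the three-point stencil [-1, 2, -1]/(2 dxi^2);
# the potential 1/2 xi^2 sits on the diagonal.
diag = 1.0 / dxi**2 + 0.5 * xi**2
off = -0.5 / dxi**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
```

The matrix is real, symmetric, and tridiagonal, exactly the structure the efficient solvers mentioned in the problem statement exploit.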
05

Compute Eigenvalues and Eigenfunctions

Numerically find the lowest two eigenvalues of the Hamiltonian \( H \) using computational tools (like Python's SciPy or MATLAB). Using \( H \), solve the eigenvalue problem \( H \psi = E \psi \). Compare these numerical results to the analytical solutions \( E_n = \hbar \omega (n + 1/2) \) for \( n = 0,1 \). Plot the corresponding eigenfunctions to check their forms and whether they are normalized.
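One way to do this is with SciPy's `eigh_tridiagonal`, which exploits the tridiagonal structure (a sketch, using the grid of Step 4):

```python
# Diagonalize the tridiagonal oscillator Hamiltonian and compare the two
# lowest eigenvalues with the exact E_n = (n + 1/2) (hbar * omega = 1 here).
import numpy as np
from scipy.linalg import eigh_tridiagonal

N = 100
xi = np.linspace(-10.0, 10.0, N)
dxi = xi[1] - xi[0]
diag = 1.0 / dxi**2 + 0.5 * xi**2        # kinetic + potential on the diagonal
off = -0.5 / dxi**2 * np.ones(N - 1)     # finite-difference kinetic coupling

energies, states = eigh_tridiagonal(diag, off)
print(energies[:2])                      # close to the exact 0.5 and 1.5
```

The small discrepancy from the exact values is the \( O(\Delta\xi^2) \) error of the three-point finite-difference stencil.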
06

Normalize the Eigenfunctions

Ensure that the eigenfunctions \( \psi_0 \) and \( \psi_1 \) obtained in Step 5 are normalized, i.e., \( \int \psi_n^* \psi_n \, dx = 1 \). If not, normalize them by dividing each eigenfunction by the square root of the integral of its squared modulus over the domain.
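Eigenvectors returned by numerical solvers typically have unit Euclidean norm, \( \sum_j |\psi_j|^2 = 1 \), whereas normalization on the grid requires \( \sum_j |\psi_j|^2 \, \Delta\xi = 1 \). A one-line helper performs the rescaling (a sketch; the name `normalize` and spacing `dxi` are illustrative):

```python
import numpy as np

def normalize(psi, dxi):
    """Rescale psi so that the discrete integral of |psi|^2 equals 1."""
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dxi)
```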
07

Setup Initial State and Time Evolution

Take the initial wave function \( \Psi(0) = ( \psi_0 + \psi_1 ) / \sqrt{2} \). Use the matrix form of Equation 11.154 to advance the wave function \( \Psi(t) \) through time. Choose \( \Delta t \) from \( N_t \Delta t = 4 \pi / \omega \) subject to \( E \Delta t / \hbar \ll 1 \); since the energies involved are of order \( \hbar \omega \), this requires \( N_t \gg 4 \pi \), so use at least (say) 100 time steps.
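The update rule can be sketched as follows (an illustrative `evolve` helper, not from the text; a banded or sparse solver would exploit the tridiagonal structure, but a dense solve keeps the example short):

```python
# Crank-Nicolson stepping per Equation 11.154 (hbar = 1):
# (1 + i dt H / 2) psi_new = (1 - i dt H / 2) psi_old at each step.
import numpy as np

def evolve(psi, H, dt, n_steps):
    """Advance psi through n_steps Crank-Nicolson time steps."""
    I = np.eye(len(psi))
    M = I + 0.5j * dt * H     # left-hand matrix of Equation 11.154
    B = I - 0.5j * dt * H     # right-hand matrix
    psi = psi.astype(complex)
    for _ in range(n_steps):
        psi = np.linalg.solve(M, B @ psi)
    return psi
```

With, say, \( N_t = 400 \) and \( \Delta t = 4\pi / N_t \), calling `evolve(Psi0, H, dt, Nt)` carries \( \Psi \) from \( t = 0 \) to \( t = 4\pi/\omega \); because the Cayley propagator is unitary, the norm of \( \Psi \) is conserved at every step.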
08

Visualize the Time Evolution

Employ Mathematica or similar software to animate \( \operatorname{Re}(\Psi(t)) \), \( \operatorname{Im}(\Psi(t)) \), and \( |\Psi(t)| \) over the time interval from \( t = 0 \) to \( t = 4 \pi / \omega \). Use this visualization to compare the numerical solution with the exact result, which for this initial state is \( \Psi(t) = \left( \psi_0 e^{-i\omega t/2} + \psi_1 e^{-3i\omega t/2} \right) / \sqrt{2} \).


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Time-Dependent Schrödinger Equation
The Time-Dependent Schrödinger Equation (TDSE) is fundamental in quantum mechanics. It describes how the quantum state of a system changes over time. The equation is written as \( i \hbar \frac{d}{dt} \Psi(x, t) = H\Psi(x, t) \), where \( \Psi(x, t) \) is the wave function, \( i \) is the imaginary unit, \( \hbar \) is the reduced Planck constant, and \( H \) is the Hamiltonian operator, which represents the total energy of the system.
This equation is crucial because it allows us to calculate the future state of a quantum system when the Hamiltonian is known. When discretizing space, the TDSE transforms into a matrix equation, useful for numerical simulations. This form is vital for computational approaches to study complex systems where analytical solutions are challenging.
Time Evolution Operator
In quantum mechanics, the Time Evolution Operator \( U(t) \) describes how a quantum state evolves over time. It is defined by the equation \( \Psi(t + \Delta t) = U(\Delta t) \Psi(t) \), linking the wave function at a later time \( t + \Delta t \) to its earlier state \( t \).
With a time-independent Hamiltonian, the time evolution operator can be written as \( U(\Delta t) = e^{-i H \Delta t / \hbar} \). This exponential form is exact, but it must be approximated in numerical work, and the obvious linear truncation is numerically unstable, which necessitates approximations like Cayley's form to maintain stability in simulations.
Cayley's Form
Cayley's form provides a stable method for approximating the time evolution operator in numerical solutions of the TDSE. It is expressed as \( U(\Delta t) \approx \frac{1 - \frac{1}{2} i \frac{\Delta t}{\hbar} H}{1 + \frac{1}{2} i \frac{\Delta t}{\hbar} H} \).
This approximation is particularly useful because it is stable and guarantees unitary evolution, crucial for preserving the total probability of the quantum state over time. Rearranging the quotient turns each time step into a linear system \( M\Psi(t+\Delta t) = \mathbf{b} \), a standard linear-algebra problem.
Furthermore, this approach maintains accuracy up to the second-order term \( (\Delta t)^2 \), making it a robust choice for simulations requiring precision over many time steps.
Hamiltonian Matrix
The Hamiltonian matrix is central in quantum mechanics because it represents the energy of the system. For a potential like the harmonic oscillator, the Hamiltonian \( H \) includes both kinetic and potential energy terms. In numerical solutions, this operator becomes a finite matrix \( H \).
To construct it, we discretize space into a grid and compute the Hamiltonian matrix elements, yielding a tridiagonal form. Tridiagonal matrices are computationally efficient, allowing fast numerical solutions for eigenvalues and eigenvectors.
The eigenvalues correspond to the system's energy levels and the associated eigenvectors define the possible quantum states. Understanding and computing the Hamiltonian matrix is critical for simulating systems like quantum oscillators and solving the TDSE effectively.
Unitarity
Unitarity is a core concept ensuring that quantum evolution is probability-conserving. This means the total probability of all states in a quantum system remains constant over time, represented mathematically as \( U U^{\dagger} = I \), where \( U^{\dagger} \) is the Hermitian conjugate of \( U \), and \( I \) is the identity matrix.
For the approximation of the time-evolution operator using Cayley's form, ensuring unitarity is essential. By showing that \( U U^{\dagger} = I \), we verify that our numerical method preserves the norm of the wave function, safeguarding the probabilities across different quantum states.
This principle is not only fundamental to quantum mechanics theory but also an important aspect when designing stable and accurate numerical algorithms for time evolution.

