Applications in Computational Physics

Physics 366

Winter 2017


A 2x2 matrix

Consider the following matrix

$$ \begin{align} M &= \left[ \begin{array}{rr} -0.2 & 0.6 \\ 0.6 & 0.7 \end{array} \right] \end{align} $$

  1. Pick any vector $v$, and plot $v$ as well as $Mv$ and $M^2v$. Keep plotting until you find a pattern. (A sketch of this process appears after this list.)

  2. Repeat the plotting process above, but with a different starting vector. What difference does it make?

  3. Try doing your plots again, but this time normalize each vector before plotting.

  4. Construct a second vector which is orthogonal to the interesting vector you have already found. What happens when you multiply $M$ by this vector?

  5. Now try the same process with the following matrix. $$ \begin{align} M &= \left[ \begin{array}{rr} 1.5895 & -1.3917 \\ -1.3917 & 3.1965 \end{array} \right] \end{align} $$ How does this matrix differ from the first? Can you cause it to behave more like the first matrix?
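Here is a minimal sketch of the repeated-multiplication process in items 1-3, assuming numpy and matplotlib; the starting vector and the number of iterations are arbitrary choices:

 import matplotlib.pyplot as plt
 import numpy as np

 M = np.array([[-0.2, 0.6],
               [0.6, 0.7]])

 v = np.array([1.0, 0.3])                 # any starting vector will do
 for i in range(8):
     v = v/np.linalg.norm(v)              # normalize before plotting
     plt.plot([0, v[0]], [0, v[1]], 'o-') # draw v as a segment from the origin
     v = M @ v                            # multiply by M and repeat
 plt.axis('equal')
 plt.show()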

Extra fun
Construct a matrix of your own that will (when repeatedly multiplied) take any vector and output a vector in the $\cos(0.2)\hat{x} - \sin(0.2)\hat{y}$ direction.

A big Hermitian matrix

Suppose we have a Hamiltonian matrix, which is a moderately large square matrix (anywhere between 10x10 and 100x100 will be fine). This matrix is all zero except that it has a value of -1 on the diagonal, and -0.1 on either side of the diagonal.

  1. Write code to construct this matrix.

  2. Write code to generate a random vector of an appropriate size and normalize it.

  3. Create a plot to visualize your random vector.

  4. Solve for the eigenvector corresponding to the largest eigenvalue (in magnitude). You can do this by multiplying repeatedly by the Hamiltonian and normalizing, just like we did last week with a smaller matrix. (A sketch follows this list.)

  5. Plot this eigenvector, and find the corresponding eigenvalue.
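Here is a hedged sketch of steps 1-4; the matrix size, iteration count, and variable names are illustrative choices rather than requirements:

 import matplotlib.pyplot as plt
 import numpy as np

 N = 40                            # anywhere from 10 to 100 is fine
 H = -np.eye(N) - 0.1*np.eye(N, k=1) - 0.1*np.eye(N, k=-1)

 v = np.random.random(N) - 0.5     # a random vector...
 v = v/np.linalg.norm(v)           # ...normalized

 for i in range(10000):            # many iterations: the eigenvalues are closely spaced
     v = H @ v
     v = v/np.linalg.norm(v)

 eigenvalue = v @ H @ v            # Rayleigh quotient, since v is normalized
 print(eigenvalue)
 plt.plot(v, 'o-')                 # visualize the eigenvector
 plt.show()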

Next eigenstate

  1. Create a new random vector, and orthogonalize this vector to the eigenvector you have already found.

  2. Normalize your new random orthogonal vector. Plot it and compare with the eigenstate.

  3. Solve for the second eigenvector using the fact that different eigenvectors are always orthogonal. (A sketch follows this list.)
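Continuing the sketch above (reusing its `H`, `N`, and the converged vector `v`), one hedged approach is to project out the known eigenvector at every step:

 v1 = v                            # the first eigenvector, found previously
 w = np.random.random(N) - 0.5
 w = w - (w @ v1)*v1               # orthogonalize to v1
 w = w/np.linalg.norm(w)

 for i in range(10000):
     w = H @ w
     w = w - (w @ v1)*v1           # re-orthogonalize each step to fight roundoff
     w = w/np.linalg.norm(w)

 second_eigenvalue = w @ H @ w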

Extra fun
Get more eigenstates. The fun never ends! (Actually, technically it ends after you find eigenvector $N$...)

Energy eigenvalue equation

Today we will solve for the eigenstates of a particle confined to a one-dimensional box with an arbitrary potential within the box. To do this, we will solve the energy eigenvalue equation:

$$\begin{align} \mathcal{H}\left|\psi_n\right> &= E_n \left|\psi_n\right> \\ \mathcal{H}\psi_n(x) &= E_n \psi_n(x) \\ -\frac{\hbar^2}{2m}\frac{\partial^2 \psi_n(x)}{\partial x^2} + V(x)\psi_n(x) &= E_n \psi_n(x) \end{align}$$

There are an infinite number of solutions (infinite fun!!!), so we'll need to take some care to specify which solution we're looking for. The conventional approach is to define $n$ as the number of nodes in $\psi_n(x)$.

I should note here that the solution to Schroedinger's eigenvalue equation may be taken to be real for any bound state, provided there are no magnetic fields involved.

Shooting

If we already knew the energy $E_n$, we could solve for the eigenstate using a finite difference approximation.

$$\begin{align} \frac{\partial^2 \psi_n(x)}{\partial x^2} &= -\frac{2m}{\hbar^2}(E_n-V(x)) \psi_n(x) \\ &\approx \frac{\psi_n(x+\Delta x) + \psi_n(x-\Delta x) - 2\psi_n(x)}{\Delta x^2} \end{align}$$

We can now take this equation, and if we know $\psi_n(x)$ and we also know $\psi_n(x-\Delta x)$, we can solve for $\psi_n(x+\Delta x)$. This allows us to pull ourselves up by our bootstraps! We just need two bootstraps: $\psi_n(0)$ and $\psi_n(\Delta x)$.

The first of these is easy: the wave function must go to zero at the boundaries of the box. The second one is slightly more subtle, but turns out to be just as easy: we can pick any non-zero value for $\psi$ at the second point ($x = \Delta x$). The reason we can do this is that any function proportional to an eigenfunction is also an eigenfunction, which is a property of a linear differential equation.

So we write the second derivative in Schroedinger's eigenvalue equation as a finite difference and solve for $\psi(x+\Delta x)$, which gives us an equation for the next point. We can keep finding the next point until we reach the other end of the box, at which time we will know $\psi(x)$ everywhere within the box.
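Here is a sketch of such a function, in units where $\hbar = m = 1$ (a simplifying assumption); `V` is assumed to be an array giving the potential on the same grid as $\psi$:

 import numpy as np

 def shoot(E, V, dx):
     # Integrate the finite-difference eigenvalue equation across the box
     # for a trial energy E, with the potential V given as an array on a
     # grid of spacing dx. Uses units in which hbar = m = 1.
     psi = np.zeros(len(V))
     psi[1] = 1e-3                 # any non-zero value; the overall scale is arbitrary
     for i in range(1, len(psi) - 1):
         psi[i+1] = 2*psi[i] - psi[i-1] - 2*(E - V[i])*psi[i]*dx**2
     return psi

For instance, `shoot(20.0, np.zeros(200), 1.0/199)` integrates across an empty box of unit length using a trial energy of 20, which is almost certainly not an eigenvalue.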

  1. Write a function that does this, storing $\psi(x)$ as an array representing its value at points separated by $\Delta x$.

  2. Plot $\psi(x)$ for $V(x)=0$ and some energy (just pick something).

  3. Try to find an eigenvalue of the Hamiltonian by tweaking the energy.

  4. Now plot $\psi(x)$ for some interesting potential and a few different energies. Tweak the energy until it is an eigenvalue of the Hamiltonian.

  5. Visualize the energy as well as the potential $V(x)$.

  6. Solve for several eigenvalues, and visualize both the eigenvalues and the wave functions.

Aiming

When you first plugged some random energy into your code and looked at the solution for $\psi(x)$, you hopefully didn't like it very much. The wave function was zero at one of the two boundaries, but most likely wasn't zero at the other boundary! The reason is that you probably didn't use an eigenvalue of the Hamiltonian as your energy.

So while you've solved the differential equation, you haven't yet solved the eigenvalue equation. Eigenvalue equations are funny that way. Fortunately, given the above function, it isn't too hard to find the eigenvalue. Before you program anything, spend a little time running your function for different energies and try to find a few eigenvalues of the energy. You can distinguish eigenstates according to the number $n$, which represents how many times they cross zero (i.e. the number of nodes).

  1. Pick a new potential $V(x)$.

  2. Look for eigenstates corresponding to $n = 3$, $n = 4$, and $n = 5$.

  3. Once you've gotten the hang of finding eigenenergies by hand, write a function that solves for a particular specified eigenvalue and eigenfunction.

Hint
You may be tempted to write a function that adjusts the energy based on the value of the wave function at the far edge of the box. Instead, you should adjust the energy based on the number of nodes, keeping in mind that a trial energy is never exactly correct, and the number of nodes will tell you whether it is too high or too low.
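As a hedged sketch of that strategy, here is a bisection on the node count, built on the `shoot` function sketched earlier; the bracketing energies are illustrative, and $n$ is taken to count interior zero crossings:

 def count_nodes(psi):
     return np.sum(psi[1:-1]*psi[2:] < 0)   # sign changes beyond the first point

 def find_eigenvalue(n, V, dx, Elow=0.0, Ehigh=1e4):
     while Ehigh - Elow > 1e-10:
         E = 0.5*(Elow + Ehigh)
         if count_nodes(shoot(E, V, dx)) > n:
             Ehigh = E             # too many nodes: the energy is too high
         else:
             Elow = E              # n or fewer nodes: the energy is too low
     return 0.5*(Elow + Ehigh)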

Systems with barriers

Try creating a potential within your box such that there are multiple wells.

Simple harmonic oscillator

Try creating a simple harmonic oscillator potential within your box, and see what the energy eigenvalues are. Try finding a very high energy state (many nodes), and see where such a particle is likely to be found. Is it more likely to be found in a state with low potential energy, or high potential energy? Why? How does this relate to the classical limit, which we expect to show up at large quantum numbers, according to the Correspondence Principle?
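For instance, if your grid is `x = np.linspace(0, 1, 200)`, a harmonic well might look like the line below; the stiffness is an arbitrary choice:

 V = 5.0e4*(x - 0.5)**2            # a harmonic well centered in the box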

Waves on a string

We will be modeling waves on a string under tension, as in a guitar. This system is accurately described by the non-dispersive one-dimensional wave equation. $$\begin{align} \frac{\partial^2 y(x,t)}{\partial t^2} &= v^2\frac{\partial^2 y(x,t)}{\partial x^2} \end{align}$$ where the wave speed $v$ is given by $$\begin{align} v &= \sqrt{\frac{T}{\mu}} \end{align}$$ where $T$ is the tension in the string, and $\mu$ is the mass per unit length of the string. Since a string could have a non-uniform linear mass density, we could write $\mu(x)$ which gives us a position-dependent speed $v(x)$.

We will solve this partial differential equation using a finite-difference approach, in which we treat both time and space in terms of finite differences $\Delta t$ and $\Delta x$. When we do this, our partial differential equation becomes a finite difference equation: $$\begin{multline} \frac{-2 y(x,t) + y(x,t-\Delta t) + y(x,t+\Delta t)}{\Delta t^2} =\\ v^2 \frac{-2 y(x,t) + y(x-\Delta x,t) + y(x+\Delta x,t)}{\Delta x^2} \end{multline}$$ which we can simplify a bit to look like: $$\begin{multline} 2 y(x,t) - y(x,t-\Delta t) - y(x,t+\Delta t) =\\ v^2\tfrac{\Delta t^2}{\Delta x^2} \left(2 y(x,t) - y(x-\Delta x,t) - y(x+\Delta x,t)\right) \end{multline}$$ We can now solve for the future value of $y$, and we will have the Verlet method: $$\begin{multline} y(x,t+\Delta t) =\\ 2 y(x,t) - y(x,t-\Delta t)\\ - v^2\tfrac{\Delta t^2}{\Delta x^2} \left(2 y(x,t) - y(x-\Delta x,t) - y(x+\Delta x,t)\right) \end{multline}$$

This is a formula which you can implement. Note that in order to solve for the future value of $y$, you need to know both its current value and its previous value.
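Here is a minimal sketch of one way to organize the update; the grid size, wave speed, initial shape, and time step are illustrative choices:

 import numpy as np

 N = 200
 x = np.linspace(0, 1, N + 1)
 dx = x[1] - x[0]
 v = 1.0                           # wave speed
 dt = 0.5*dx/v                     # comfortably small (see Numerical stability below)

 # two initial snapshots of y; here the fundamental standing wave
 prev = np.sin(np.pi*x)                     # y at time t - dt
 curr = np.sin(np.pi*x)*np.cos(np.pi*v*dt)  # y at time t

 for step in range(2000):
     nxt = np.empty_like(curr)
     nxt[1:-1] = (2*curr[1:-1] - prev[1:-1]
                  - (v*dt/dx)**2*(2*curr[1:-1] - curr[2:] - curr[:-2]))
     nxt[0] = nxt[-1] = 0.0        # the string is fixed at both ends
     prev, curr = curr, nxt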

Normal modes

  1. Write code to solve for the motion of a string, given its initial position at two subsequent time steps.

  2. Try to find a normal mode of the system (which has a repeating pattern). What is the period (and frequency) of this normal mode?

  3. Create a static visualization of the time-dependent data you are animating. (There are several effective ways to do this!)

Creating animations

Once you've written the above algorithm, you'll want to see what is going on. To do so, write code to animate the string's motion. You could google for matplotlib animation, but to save you some time (and because the documentation that is out there is a bit hard to understand), you can use the following example:

 import matplotlib.pyplot as plt
 import numpy as np

 plt.ion() # this turns on interaction---needed for animation

 x = np.arange(0,7,0.5)            # x-array
 omega = 2 # radian/second
 k = 1 # radian/meter
 for t in np.arange(0,2*np.pi/omega,0.1):  # run for one period
     plt.cla() # erase the existing plot to start over
     plt.plot(x, np.sin(k*x-omega*t), 'o-') # plot the data
     plt.xlabel('x') # label the axes
     plt.ylabel(r'$y(x,t)$')       # raw string so the LaTeX backslash survives
     plt.draw()                            # redraw the canvas
     plt.pause(0.01)               # give the window a moment to actually render

Creating a wave packet

So far we have been working with normal modes. They are nice, but less than helpful if we want to understand how pulses propagate. Even looking at reflection and transmission, while possible using plane waves, can be inconvenient.

Travelling waves can also lead to interesting insights. Travelling waves are solutions of the wave equation that have the form: $$ y = f(x-vt) $$ where $f$ is any smooth function.

For this task you will implement initial conditions for a "wave packet", which is a travelling wave that is localized in space, i.e. $y=0$ for all positions to the left or right of the wave packet.

  1. Try to implement the initial conditions for such a wave packet, and watch it move using your code that solves the wave equation. (A sketch of one set of initial conditions follows this list.) Note that your wave packet should start somewhere in the center of your string, not right next to the wall.

  2. As before, create a static visualization of your wave packet. Note that to be effective, your static visualization should use time as one of its two axes, so that you can clearly see the packet moving over time.
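One hedged way to set up the two starting snapshots, reusing the names from the Verlet sketch above (the center, width, and direction of travel are arbitrary choices), is to shift one Gaussian by $v\,\Delta t$ relative to the other:

 x0, sigma = 0.5, 0.05                            # center and width of the packet
 prev = np.exp(-(x - x0)**2/(2*sigma**2))         # y at time t - dt
 curr = np.exp(-(x - x0 - v*dt)**2/(2*sigma**2))  # the same shape shifted right by v*dt

Because the second snapshot is the first translated by $v\,\Delta t$, the packet should propagate to the right at speed $v$.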

Reflection and transmission

Now imagine that you wrap wire around a portion of the string (think guitar string), increasing its mass density $\mu$ for just a portion of its length. This will change the wave speed on that portion of the string. You could alternatively think of this as tying a string to a rope.

Extra fun
Adjust your code to handle this case where part of the string is heavier than the rest. See how your wave packet behaves when it hits this region. You should see both transmission and reflection, and should be able to see (especially in your static visualization) that the speed is different in that region.

Numerical stability

An algorithm is numerically unstable if small errors grow exponentially. In the particular case of our finite-difference integration of the wave equation, numerical stability is determined by the relationship between the resolution in space and time, $\Delta x$ and $\Delta t$. To understand numerical stability physically, it is often helpful to think about the time and length scales on which the solution can change.

In this case, it is useful to ask how quickly the $y$ coordinate can change, and then to ensure that our $\Delta t$ is considerably smaller than that time scale. There are various ways to estimate this, but for the non-dispersive wave equation, there is a very simple way to think about it. (When we get to Schroedinger's equation, we will use one of the other approaches.)

Consider the case where $y=0$ for $x > x_0$. We know that a signal will travel at speed $v$. If $\Delta x < v \Delta t$, then the signal ought to travel more than one grid point in a single time step. In our algorithm (if you look carefully) that cannot happen. Therefore, since we know we will get nonsense otherwise, we can conclude that $$\begin{align} \Delta t \le \frac{\Delta x}{v} \end{align}$$ One can derive a more precise stability limit, but we can also just experiment to find what $\Delta t$ values give good results. Note that this means that if you increase $v$, you must also decrease $\Delta t$.
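In code this is just a constraint on how you pick your time step; a tiny sketch, where the safety factor is an arbitrary margin:

 safety = 0.5          # anything below 1; the limit itself is dt = dx/v
 dt = safety*dx/v      # with a position-dependent v(x), use its maximum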

Balls attached by springs (under tension)

We will next study a set of balls attached by springs under tension. This is similar to the waves you have been working on, but differs in that the discrete masses are no longer an approximation.

We will consider transverse oscillations, in which the balls move in the $y$ direction, and their attachment is in the $x$ direction. For small displacements, the equation of motion of each ball is very simple: $$\begin{align} \vec{F}_i\cdot \hat y &= T \sin(\theta_L) + T \sin(\theta_R) \\ &\approx T\left( \frac{\Delta y_R}{\Delta x} + \frac{\Delta y_L}{\Delta x} \right) \\ &= \tfrac{T}{\Delta x}\left( y_{i+1} + y_{i-1} - 2y_i \right) \\ &= m \vec{a}_i\cdot \hat y \\ &= m \frac{d^2 y_i}{d t^2} \end{align}$$ or, solving for the second derivative, we have $$\begin{align} \frac{d^2 y_i}{d t^2} &= \tfrac{T}{m\Delta x}\left( y_{i+1} + y_{i-1} - 2y_i \right) \end{align}$$ At this point, we can discretize the time dimension with time step $\Delta t$, and when we write the second derivative as a finite difference, we find $$\begin{multline} \frac{y_i(t+\Delta t) + y_i(t-\Delta t) - 2 y_i(t)}{\Delta t^2} \approx\\ \tfrac{T}{m\Delta x}\left( y_{i+1}(t) + y_{i-1}(t) - 2y_i(t) \right) \end{multline}$$ Now by solving for the "future" value of the position $y_i(t+\Delta t)$, we obtain the Verlet method for these coupled balls: $$\begin{multline} y_i(t+\Delta t) = 2 y_i(t) - y_i(t-\Delta t) \\+ \tfrac{T\Delta t^2}{m\Delta x}\left( y_{i+1}(t) + y_{i-1}(t) - 2y_i(t) \right) \end{multline}$$ This gives us a formula from which we can obtain the future position if we know the current position and previous position.

You will want to experiment with different sine-wave initial conditions, with different wavelengths, and try to find a relationship between wavelength and the frequency of oscillation. It will be most helpful to plot the frequency versus the wave vector given by $k=2\pi/\lambda$. This wave vector is sometimes known as the crystal momentum.

  1. Please implement this algorithm to model a system of balls attached by springs under tension. This should be a small change from the waves code you have worked on.

  2. Write code to initialize the balls in any normal mode, recognizing that the normal modes are sinusoidal. (A sketch follows this list.)

  3. Find the frequency of a normal mode.

  4. Plot the frequency as a function of $k$. This is called the dispersion relation.

  5. Seek an analytic function for the dispersion relation. Test it by plotting it versus your computed relationship.
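Here is a hedged sketch of items 2 and 3: initialize mode $n$ (with fixed ends, the mode shapes are $\sin(n\pi i/(N+1))$) and estimate the frequency from successive zero crossings of one ball. All parameter values are illustrative:

 import numpy as np

 N = 32                            # number of balls
 T, m, dx = 1.0, 1.0, 1.0          # tension, ball mass, ball spacing
 dt = 0.01
 n = 3                             # mode number

 i = np.arange(N + 2)              # include the two fixed ends
 prev = np.sin(np.pi*n*i/(N + 1))  # mode shape at t - dt
 curr = prev.copy()                # starting from rest

 mid = N//2                        # watch one ball (not on a node for n = 3)
 crossings = []
 step = 0
 while len(crossings) < 2:
     nxt = np.empty_like(curr)
     nxt[1:-1] = (2*curr[1:-1] - prev[1:-1]
                  + T*dt**2/(m*dx)*(curr[2:] + curr[:-2] - 2*curr[1:-1]))
     nxt[0] = nxt[-1] = 0.0
     if curr[mid]*nxt[mid] < 0:
         crossings.append(step*dt) # successive crossings are half a period apart
     prev, curr = curr, nxt
     step += 1

 period = 2*(crossings[1] - crossings[0])
 print('omega =', 2*np.pi/period)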

Extra fun
Construct a wave packet out of your set of balls. You may need to increase $N$ for this to be convenient. Try making a narrow wave packet. You may want to try creating a Gaussian wave packet such as $$ y(x,t) = \sin(kx-\omega t)e^{-\frac{(x-vt-x_0)^2}{2\sigma^2}} $$ where you will want to play with the $k$, $\omega$, $v$ and $\sigma$ parameters to make this work.