# hydrogen atom simulation

### story

Being able to numerically simulate hydrogen orbitals has been a longstanding goal of mine. I thought this would be a relatively simple task – translating the standard calculation into an eigenvector problem – but I’ve hit several numerical roadblocks along the way. I’ve finally found a few workarounds that give meaningful results.

Disclaimer: due to my general stubbornness, I have refused to read up on how these orbitals are usually computed – so there may very well be a better way to do this.

### 1D potentials

Let’s start with 1D potentials and the standard time-independent Schrödinger equation.

\[ \left(\hat{K} + \hat{V} \right) |\psi\rangle = E |\psi\rangle \]

where \(\hat{K}\) and \(\hat{V}\) are the kinetic and potential energy operators respectively.

Typically one analytically solves for the eigenkets \(|\psi_n\rangle\) and eigenvalues \(E_n\) of the above equation.

For example, in the standard \(x\) basis, the equation becomes

\[ \left(\frac{-\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)\right) \psi(x) = E\psi(x) \]

where \(V(x)\) is the potential energy with dependence only on \(x\).

Using this continuous basis, however, is not ideal since we are trying to solve the problem numerically. Let’s instead write the equation in an approximate discrete basis where \(\psi\) becomes a vector and the operators become matrices. Essentially we’re approximating \(\psi(x)\) by its samples \(\psi(x_k)\) with \(k\in\{1,2,…,n\}\). In this basis, \(\hat{K}\) and \(\hat{V}\) become \(K\in S^n\) and \(V\in S^n\) respectively, where \(S^n\) is the set of symmetric \(n \times n\) matrices.

More concretely, \(K = \frac{-\hbar^2}{2m} D^n_2\), where \(D^n_2\) is the \(n \times n\) finite difference matrix for the second derivative, and \(V = \mathrm{diag}(V(x_1),V(x_2),…,V(x_n))\) – that is, a diagonal matrix with the discretely sampled potential function along the diagonal.

This becomes the following matrix equation

\[ (K+V)\psi_n = E_n\psi_n \]

where \(\psi_n\) and \(E_n\) are the eigenvectors and eigenvalues of \(K+V\) respectively.

I constructed \(K\) and \(V\) as described for a few different potentials. For numerical stability (and since our matrices are symmetric), I computed the SVD of the matrix \(K+V\) to solve for the eigenvectors.
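As a sketch of this construction (not my exact code, and assuming units with \(\hbar = m = 1\)), the matrix \(K+V\) can be assembled with NumPy and diagonalized with a symmetric eigensolver – `numpy.linalg.eigh` here, in place of the SVD:

```python
import numpy as np

def solve_1d(V, dx):
    """Diagonalize K + V on a uniform grid (hbar = m = 1).

    Truncating the matrix imposes psi = 0 at the domain edges,
    i.e. an implicit infinite well.
    """
    n = len(V)
    # three-point finite difference matrix for the second derivative
    D2 = (np.diag(np.full(n - 1, 1.0), -1)
          - 2.0 * np.diag(np.ones(n))
          + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
    H = -0.5 * D2 + np.diag(V)          # K + V
    E, psi = np.linalg.eigh(H)          # eigenvalues in ascending order
    return E, psi

# infinite well of width 1: V = 0 on the interior grid points
x = np.linspace(0.0, 1.0, 500)[1:-1]
E, psi = solve_1d(np.zeros(len(x)), x[1] - x[0])
# lowest level should be near the exact pi^2/2, with E_k ~ k^2 spacing
```

For large grids a sparse tridiagonal representation (e.g. `scipy.sparse.diags` with `eigsh`) would scale much better, but the dense version keeps the sketch short.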

Below is an example of the infinite potential well.

The black line indicates the potential, and the coloured lines are the first few eigenfunctions ordered by energy. Note that the vertical spacing is arbitrary, but is used to indicate relative energy spacing.

But of course, this is overkill for the infinite potential well where the closed form solutions are already well known. Let’s apply it to some weird potentials for fun!

Here’s a harmonic oscillator potential inside an infinite well.

Here’s a sawtooth potential inside an infinite well.
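As a quick sanity check on potentials like these (a sketch, again assuming \(\hbar = m = \omega = 1\)), the harmonic oscillator inside a wide well should reproduce the familiar evenly spaced levels \(E_n = n + \tfrac{1}{2}\):

```python
import numpy as np

# harmonic oscillator V(x) = x^2/2, walls of the implicit well at x = ±8
n = 800
x = np.linspace(-8.0, 8.0, n + 2)[1:-1]   # interior grid points
dx = x[1] - x[0]
D2 = (np.diag(np.full(n - 1, 1.0), -1)
      - 2.0 * np.diag(np.ones(n))
      + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H = -0.5 * D2 + np.diag(0.5 * x**2)
E = np.linalg.eigvalsh(H)
# low-lying levels come out close to 0.5, 1.5, 2.5, ...
```

The walls at \(x = \pm 8\) barely matter here because the low-lying eigenfunctions have already decayed to essentially zero well before the boundary.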

For now, I’m limited to wavefunctions that go to zero at the edge of the simulation domain due to some numerical issues. This is why the potentials above are all within an infinite well.

### 3D central potentials

Now let’s look at the 3D time-independent Schrödinger equation with a central potential \(V(r)\) in spherical coordinates.

\[ \left( \frac{-\hbar^2}{2\mu} \nabla^2 + V(r) \right) \psi = E\psi \]

Fully writing out the Laplacian,

\[ \left[ \frac{-\hbar^2}{2\mu} \left( \frac{1}{r^2}\frac{\partial}{\partial r} r^2 \frac{\partial}{\partial r} + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial\theta} \sin\theta \frac{\partial}{\partial\theta} + \frac{1}{r^2 \sin^2\theta } \frac{\partial^2}{\partial\phi^2} \right) + V(r) \right] \psi = E\psi \]

Let’s separate \(\psi_{n,l,m}(r,\theta,\phi) = R_n(r) Y^m_l(\theta,\phi)\), where the \(Y^m_l(\theta,\phi)\) are eigenfunctions of the angular part of the Laplacian. That is, they satisfy

\[ \nabla^2Y^m_l(\theta,\phi) = -\frac{l(l+1)}{r^2}Y^m_l(\theta,\phi) \]

Making the substitution and rearranging

\[ \left[ -\frac{\hbar^2}{2\mu}\left( \frac{1}{r^2} \frac{\partial}{\partial r} r^2 \frac{\partial}{\partial r} - \frac{l(l+1)}{r^2} \right) + V(r) \right] R_n = E_nR_n \]

Making the additional substitution \(U_n(r) = rR_n(r)\),

\[ \left[ -\frac{\hbar^2}{2\mu} \left( \frac{\partial^2}{\partial r^2} - \frac{l(l+1)}{r^2} \right) + V(r) \right] U_n = E_nU_n \]

This is in the same form as the standard 1D equation with an effective potential of \(V_{eff}(r) = V(r) + \frac{\hbar^2l(l+1)}{2\mu r^2}\). This additional term is known as the angular momentum barrier.

Now that we’ve reduced the equation to the 1D problem, we can use the same solver I used above on the following equation.

\[ \left[ -\frac{\hbar^2}{2\mu} \frac{\partial^2}{\partial r^2} + V_{eff}(r) \right] U_n = E_n U_n \]

#### spherical harmonics

One big thing that I’ve skipped over here is the existence of the \(Y^m_l(\theta,\phi)\) used above. Luckily, these are the spherical harmonics, whose closed forms are well known.

I decided to cheat here a bit. Instead of numerically solving the angular component of the equation, I’ve precomputed the spherical harmonics from their known closed forms.
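As an illustration of what ‘precomputed’ means here (a sketch, not my exact tables), the low-order harmonics can simply be written out from their standard closed forms and checked by quadrature:

```python
import numpy as np

# closed forms of the first few spherical harmonics
def Y00(theta, phi):
    return np.full_like(theta, 0.5 / np.sqrt(np.pi))

def Y10(theta, phi):
    return np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta)

def Y11(theta, phi):
    # m = 1 harmonic; complex-valued
    return -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(1j * phi)

# sanity check: integral of |Y|^2 sin(theta) dtheta dphi over the sphere is 1
theta = np.linspace(0.0, np.pi, 400)
phi = np.linspace(0.0, 2.0 * np.pi, 400)
T, P = np.meshgrid(theta, phi, indexing="ij")
dA = np.sin(T) * (theta[1] - theta[0]) * (phi[1] - phi[0])
norm = np.sum(np.abs(Y10(T, P)) ** 2 * dA)   # ≈ 1
```

SciPy also ships spherical harmonics (`scipy.special.sph_harm` in older releases), which avoids hand-tabulating higher \(l\) – though watch its \((\theta, \phi)\) argument convention.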

#### radial equation with Coulomb potential

With the angular part precomputed and taken care of, it should be relatively simple to solve for the radial component (it wasn’t). Using the technique described above, this problem can be reduced to a simple 1D problem in \(U_n(r)\) with effective potential given by

\[ V_{eff}(r) = -\frac{e^2}{4\pi \epsilon_0} \frac{1}{r} + \frac{\hbar^2l(l+1)}{2\mu r^2} \]

For simplicity, let’s nondimensionalize the equation. By substituting \(r=a_0\rho=\frac{4\pi \epsilon_0 \hbar^2}{\mu e^2}\rho\) and \(E_n=\frac{\hbar^2}{2\mu a_0^2}\epsilon_n=\frac{\mu e^4}{32\pi^2\epsilon_0^2\hbar^2}\epsilon_n\) we have,

\[ \left[ -\frac{\partial^2}{\partial \rho^2} + \frac{l(l+1)}{\rho^2} - \frac{2}{\rho} \right]U_n = \epsilon_n U_n \]

where \(a_0\) is the Bohr radius. This equation can be solved using the technique described above as long as it is placed within an infinite potential well (for numerical reasons). The infinite barriers are placed far enough away that they do not affect the results.
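A sketch of this final 1D solve (assuming the same dense finite-difference approach as above, with the wall of the implicit well at \(\rho_{max} = 40\)): the computed eigenvalues should land near \(\epsilon_n = -1/n^2\).

```python
import numpy as np

# nondimensionalized Coulomb radial equation for l = 0
n, rho_max = 1500, 40.0
drho = rho_max / (n + 1)
rho = drho * np.arange(1, n + 1)   # grid excludes rho = 0; U(0) = 0 is implicit

D2 = (np.diag(np.full(n - 1, 1.0), -1)
      - 2.0 * np.diag(np.ones(n))
      + np.diag(np.full(n - 1, 1.0), 1)) / drho**2

l = 0
V_eff = l * (l + 1) / rho**2 - 2.0 / rho
eps, U = np.linalg.eigh(-D2 + np.diag(V_eff))
# eps[0] ≈ -1 and eps[1] ≈ -1/4, matching epsilon_n = -1/n^2
```

Note that \(V_{eff}U\) stays smooth near the origin (since \(U \sim \rho\) there), which is why the plain three-point stencil holds up despite the \(1/\rho\) singularity.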

The radial probability distributions (proportional to \(\rho^2 |R(\rho)|^2\)) for \(n=3\) are plotted below.

Combining the radial solution with the spherical harmonics, we obtain the full solution. Plotted below are the xz-plane cross sections of \(|\psi_{n,l,m}|\) for a selection of the \(n=3\) wavefunctions.

This technique also allows us to look at the energies of the orbitals. Plotted below are the computed relative energy levels for different \(n\)’s and \(l\)’s.

The energy is determined completely by \(n\) (as expected). This is more degeneracy than can be explained by simple rotational symmetry, which only accounts for a degeneracy of \(2l+1\) at each \(l\). The full \(n^2\) degeneracy is well known, unique to the Coulomb potential, and comes from the ‘accidental’ conservation of an additional quantity known as the Laplace-Runge-Lenz vector. This extra degeneracy does not appear in reality (as the standard orbital energy diagrams show); it is split by fine structure corrections that I will try to calculate in the future.

### future work

I am currently limited to bound states that fall to zero near the simulation domain boundaries due to issues with the finite difference matrix near the edges. As a workaround, I’ve been artificially imposing an infinite well on all of the bound potentials (see above). This should not have a large effect on the simulation since the infinite walls are placed far from where the electron would be expected to be found, but it would be nice to have a more elegant solution to this problem.

#### fine structure corrections

One of the main motivations for doing this was to numerically compute how different corrections to the Coulomb potential split the degeneracies in the spectrum. It is well known that the Coulomb potential has an extra ‘hidden’ \(O(4)\) symmetry that results in extra degeneracies. Adding corrections due to relativistic effects breaks this extra symmetry and splits the degeneracy. This is pretty high on my list of things to do.

The ultimate goal would be a fully general stationary state solver given an arbitrary Hamiltonian, but this is probably overkill.