Here, as in the previous post, the dot denotes differentiation with respect to the affine parameter.
We write the geodesic equation
and use the definition of the Christoffel symbols,
to expand the geodesic equation as
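For reference, these are the textbook forms (nothing specific to this post): the geodesic equation, the definition of the Christoffel symbols, and the resulting expansion:

```latex
\ddot{x}^\mu + \Gamma^{\mu}_{\phantom{\mu}\nu\sigma}\,\dot{x}^\nu\dot{x}^\sigma = 0,
\qquad
\Gamma^{\mu}_{\phantom{\mu}\nu\sigma} = \frac{1}{2} g^{\mu\rho}\left(\partial_\nu g_{\rho\sigma} + \partial_\sigma g_{\rho\nu} - \partial_\rho g_{\nu\sigma}\right),
```

```latex
\ddot{x}^\mu + \frac{1}{2} g^{\mu\rho}\left(\partial_\nu g_{\rho\sigma} + \partial_\sigma g_{\rho\nu} - \partial_\rho g_{\nu\sigma}\right)\dot{x}^\nu\dot{x}^\sigma = 0.
```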
Notice that the indices of the bracketed terms may be interchanged in the sum over the velocity components, given the symmetry of the tensors involved, permitting us to combine terms and rearrange
Next, we (ab)use the chain rule to express
and therefore
or, using the product rule,
Examining just the radial derivative components, along with the condition that the radial and polar velocities vanish for circular equatorial orbits, forces the LHS to vanish for our class of metrics, with the only non-zero terms on the RHS for stable orbits being
This is just an alternative method for obtaining a result from the previous blog post, which I wanted to derive motivated by the Johannsen paper[1].
We introduce the Keplerian frequency, derived from Kepler's third law, which is the rate at which the angular coordinate changes with coordinate time. As an aside, the Keplerian frequency also allows us to calculate the rotation limits of various astrophysical bodies, as a star rotating faster than this frequency would promote particles into higher orbits, effectively ripping itself apart (something for another post).
The Keplerian frequency for our purposes may be defined
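In symbols, with the dot again denoting the affine derivative, this definition reads:

```latex
\Omega_\phi \equiv \frac{\mathrm{d}\phi}{\mathrm{d}t} = \frac{\dot{\phi}}{\dot{t}} = \frac{u^\phi}{u^t}.
```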
and therefore Eq. (9) may be expressed
inviting us to solve for the Keplerian frequency
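Written out, the radial relation is a quadratic in the Keplerian frequency, with roots consistent with the implementation of `Ω` later in the post:

```latex
\partial_r g_{\phi\phi}\,\Omega_\phi^2 + 2\,\partial_r g_{t\phi}\,\Omega_\phi + \partial_r g_{tt} = 0,
\qquad
\Omega_\phi = \frac{-\partial_r g_{t\phi} \pm \sqrt{\left(\partial_r g_{t\phi}\right)^2 - \partial_r g_{tt}\,\partial_r g_{\phi\phi}}}{\partial_r g_{\phi\phi}}.
```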
As an exercise, substituting in the Kerr metric terms yields
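The standard Kerr result is Ω<sub>ϕ</sub> = ±M<sup>1/2</sup> / (r<sup>3/2</sup> ± a M<sup>1/2</sup>). As a quick standalone check (sketched in Python rather than Julia, with illustrative function names and geometric units), we can verify that this satisfies the circular orbit condition ∂<sub>r</sub>g<sub>tt</sub> + 2Ω ∂<sub>r</sub>g<sub>tϕ</sub> + Ω² ∂<sub>r</sub>g<sub>ϕϕ</sub> = 0 using the equatorial Boyer-Lindquist components:

```python
import math

def keplerian_frequency(M, a, r, prograde=True):
    # Standard Kerr result: Omega = +/- sqrt(M) / (r^{3/2} +/- a sqrt(M))
    s = 1.0 if prograde else -1.0
    return s * math.sqrt(M) / (r**1.5 + s * a * math.sqrt(M))

def radial_quadratic(M, a, r, omega):
    # d_r g_tt + 2 Omega d_r g_tphi + Omega^2 d_r g_phiphi, evaluated with the
    # equatorial Boyer-Lindquist components:
    #   g_tt = -(1 - 2M/r), g_tphi = -2Ma/r, g_phiphi = r^2 + a^2 + 2Ma^2/r
    dg_tt = -2 * M / r**2
    dg_tphi = 2 * M * a / r**2
    dg_phiphi = 2 * r - 2 * M * a**2 / r**2
    return dg_tt + 2 * dg_tphi * omega + dg_phiphi * omega**2
```

Both the prograde and retrograde roots satisfy the quadratic identically, which is a useful consistency check before trusting the more general numerical solver.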
To solve for the energy and angular momentum of a given stable circular orbit, we use that
expanding components to find
where we have identified the covariant momenta, and used the freedom to raise and lower indices under contraction. We will write an effective potential, and solve
simultaneously for the energy and angular momentum. This approach is very similar to last time, except now we will use the Keplerian frequency in Eq. (10), together with relations inferred from the operation of the metric. Consequently, the Keplerian frequency is
Johannsen uses the definition in Eq. (12) along with Eq. (16) and Eq. (17) to derive the energy and angular momentum, but I was unable to recreate this. Instead, I came up with a method to derive these using only the latter two and the metric condition
manifest for any metric tensor. In particular, for the class of static, axis-symmetric metrics, with the time components one obtains
and with the azimuthal components,
We begin by re-arranging Eq. (17) into
Multiplying in the fraction by
and using Eq. (20) to substitute relevant symbols, we find
where, for convenience, we use single symbols to denote the numerator and denominator respectively. Substituting this expression into Eq. (16) yields
or equivalently
and, by symmetry of Eq. (23), one can immediately write
Examining this common denominator by expanding all of the metric terms and collecting powers gives us
dropping the negative sign temporarily.
The next steps are pretty methodical, so I will demonstrate the mechanics for one set of terms, and then state the results for the other powers in the interest of brevity. First, we use Eq. (19) to simplify the first term
which may then be combined to write
and, via Eq. (20),
Similarly for the other powers. After working our way through this piece of algebra, we obtain
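The resulting expressions, which the code below implements directly, are:

```latex
E = -\frac{g_{tt} + g_{t\phi}\,\Omega_\phi}{\sqrt{-g_{tt} - 2 g_{t\phi}\,\Omega_\phi - g_{\phi\phi}\,\Omega_\phi^2}},
\qquad
L = \frac{g_{t\phi} + g_{\phi\phi}\,\Omega_\phi}{\sqrt{-g_{tt} - 2 g_{t\phi}\,\Omega_\phi - g_{\phi\phi}\,\Omega_\phi^2}}.
```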
To verify these results, we can attempt to trace orbits around a Kerr black hole. We first define the relevant formulae
using Gradus
using StaticArrays
# implementation for energy
# fix ourselves to the negative energy solution arbitrarily
function energy(g, Ωϕ)
-(g[1] + g[5] * Ωϕ) / √(-g[1] - 2g[5]*Ωϕ - g[4]*Ωϕ^2)
end
# implementation for angular momentum
# include a flag for whether we want the prograde or retrograde orbit
function angmom(g, Ωϕ, prograde::Bool)
res = (g[5] + g[4] * Ωϕ) / √(-g[1] - 2g[5]*Ωϕ - g[4]*Ωϕ^2)
if prograde
res
else
-res
end
end
# keplerian frequency
# requires choice of positive or negative domain
function Ω(m, rθ, positive::Bool)
# require derivatives with respect to radial coordinate
jacs = Gradus.GeodesicTracer.metric_jacobian(m, rθ)
∂rg = jacs[:, 1]
Δ = √(∂rg[5]^2 - ∂rg[1] * ∂rg[4])
if positive
-(∂rg[5] + Δ) / ∂rg[4]
else
-(∂rg[5] - Δ) / ∂rg[4]
end
end
# calculate all relevant metric quantities for stable circular orbit system
function stable_system(m, r; contra_rotating=false, prograde=true)
let rθ = @SVector([r, π/2])
g = Gradus.GeodesicTracer.metric_components(m, rθ)
ginv = Gradus.GeodesicTracer.inverse_metric_components(g)
Ωϕ = Ω(m, rθ, contra_rotating)
E = energy(g, Ωϕ)
L = angmom(g, Ωϕ, prograde)
(g,ginv,E,L)
end
end
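As a standalone sanity check of these energy and angular momentum expressions (sketched here in Python rather than Julia, with illustrative function names), the Schwarzschild limit g<sub>tϕ</sub> = 0 at the ISCO recovers the textbook values E = 2√2/3 and L = 2√3 for M = 1:

```python
import math

def energy(g, omega):
    # E = -(g_tt + g_tphi Omega) / sqrt(-g_tt - 2 g_tphi Omega - g_phiphi Omega^2)
    g_tt, g_phiphi, g_tphi = g
    return -(g_tt + g_tphi * omega) / math.sqrt(-g_tt - 2 * g_tphi * omega - g_phiphi * omega**2)

def angmom(g, omega):
    # L = (g_tphi + g_phiphi Omega) / sqrt(-g_tt - 2 g_tphi Omega - g_phiphi Omega^2)
    g_tt, g_phiphi, g_tphi = g
    return (g_tphi + g_phiphi * omega) / math.sqrt(-g_tt - 2 * g_tphi * omega - g_phiphi * omega**2)

def schwarzschild_components(M, r):
    # equatorial (g_tt, g_phiphi, g_tphi) for Schwarzschild
    return (-(1 - 2 * M / r), r**2, 0.0)

M, r_isco = 1.0, 6.0
omega = math.sqrt(M) / r_isco**1.5  # Keplerian frequency at the ISCO
g = schwarzschild_components(M, r_isco)
E, L = energy(g, omega), angmom(g, omega)
```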
This code is all relatively straightforward, using Gradus.jl to calculate the metric components for us, and just using them where needed. We can then write a utility function for creating stable 4-velocities:
function fourvelocity(m, r; kwargs...)
g, ginv, E, L = stable_system(m, r; kwargs...)
vt = -E * ginv[1] + L * ginv[5]
vϕ = -E * ginv[5] + L * ginv[4]
@SVector[vt, 0.0, 0.0, vϕ]
end
and trace as usual
# choice of metric
m = BoyerLindquistAD(M=1.0, a=0.0)
# range of radii
rs = range(Gradus.isco(m), 10.0, 7)
# generate position and init velocities
us = [@SVector([0.0, r, deg2rad(90), 0.0]) for r in rs]
vs = map(r -> fourvelocity(m, r), rs)
# trace the geodesics for a fairly long duration to ensure stability
simsols = @time tracegeodesics(
m,
us, vs,
(0.0, 2000.0);
abstol=1e-10, reltol=1e-10,
μ=1.0
)
Excellent, the orbits are indeed perfectly stable!
The plunging region is the region of unstable circular orbits, where the energies are hyperbolic – any small deviation from the trajectory will cause the particle to spiral inwards towards the black hole. As stated in the introduction, these trajectories are traditionally calculated using analytic first-order equations of motion, by substituting the ISCO values of the energy and angular momentum, along with the condition of equatorial orbits, and calculating the radial velocity component.
Instead, we will attempt to calculate the plunging four-velocity from metric terms at the ISCO.
We minimally need the time and azimuthal components, since the polar component vanishes, and the remaining degree of freedom is constrained. We already have these from the previous section, and the radial component may be found from Eq. (15).
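The radial component follows from the time-like normalisation condition, with the covariant momenta u<sub>t</sub> = -E and u<sub>ϕ</sub> = L:

```latex
g_{\mu\nu}\dot{x}^\mu\dot{x}^\nu = -1
\quad\Longrightarrow\quad
\dot{r}^2 = \frac{-1 - \left(g^{tt}E^2 - 2 g^{t\phi} E L + g^{\phi\phi} L^2\right)}{g_{rr}},
```

taking the negative root for infalling trajectories.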
A possible implementation for our plunging four-velocity may then be
function plunging_fourvelocity(m, r; kwargs...)
    g, ginv, E, L = stable_system(m, r; kwargs...)
    vt = -E * ginv[1] + L * ginv[5]
    vϕ = -E * ginv[5] + L * ginv[4]
    # solve the normalisation condition for the radial component
    nom = -1 - (ginv[1] * E^2 - 2ginv[5] * E * L + ginv[4] * L^2)
    denom = g[2]
    @SVector[vt, -sqrt(abs(nom / denom)), 0.0, vϕ]
end
Note that we have hardcoded the negative choice of radial component, since we are infalling. Tracing this with
u_plunging = @SVector[0.0, Gradus.isco(m), deg2rad(90), 0.0]
v_plunging = CircularOrbits.plunging_fourvelocity(m, Gradus.isco(m))
sol = @time tracegeodesics(
m,
u_plunging, v_plunging,
# need a very large integration time, else we never leave the ISCO
(0.0, 18_000.0);
abstol=1e-11, reltol=1e-11,
μ=1.0
)
and plotting:
This is good, but the long integration times are perhaps not ideal, since most of the time is just spent tracing the circular orbit.
An alternative method to the above is to drop the particle into the black hole, by moving the orbiting radius a tiny fraction closer. This only requires the circular orbit velocity at the ISCO, and introduces the offset δr as a free parameter which we can tweak. In code
δr = 1e-3 # example offset value (varied below)
u_dropped = @SVector[0.0, Gradus.isco(m) - δr, deg2rad(90), 0.0]
v_dropped = fourvelocity(m, Gradus.isco(m))
sol = @time tracegeodesics(
m,
u_dropped, v_dropped,
(0.0, 500.0);
abstol=1e-11, reltol=1e-11,
μ=1.0,
)
Tracing for different offsets δr:
Here, the solid and dashed lines correspond to two different choices of δr. The latter has significantly lower integration time, as one might expect, but falls nearly identically, modulo some initial rotation. Plotting the components of velocity as a function of radius:
The above plot requires the keyword argument closest_approach=1.0001 to be passed to tracegeodesics, as the default termination is 1% beyond the inner horizon radius. Also, I used a non-zero spin in the above plot so that the otherwise-vanishing terms contribute.
The Cunningham values are from Cunningham (1975)[2], calculated analytically from first-order equations of motion, and evenly sampled for the sake of illustration.
Note that all three traced curves agree: the one directly from plunging_fourvelocity, and the two dropped from a slightly closer radius. Setting δr too high causes the paths to diverge from expectation.
We can attempt to check the deviation of the above methods from Cunningham's calculations – to do this, we need to interpolate the velocity components calculated by the tracer as functions of radius. This may be done with Interpolations.jl:
using Interpolations
# need to sort our values
# drop the first index as it is duplicated
I = sortperm(sol[2,:])[2:end]
r_sol = sol[2,:][I]
ut_sol = sol[5,:][I]
interp_ut = LinearInterpolation(r_sol, ut_sol)
Comparing the values with
using Statistics
isco = Gradus.isco(m)
# range of points to compute error over
# need to shrink the domain a little to avoid
# stepping outside of the interpolated domain
r_range = range(
Gradus.inner_radius(m) + 0.1,
isco - 0.1,
2000
)
# Cunningham analytic calculation
ut_true = map(
r -> Gradus.AccretionFormulae.RedshiftFunctions.uᵗ(m.M, isco, r, m.a),
r_range
)
# find interpolated values
test_t = map(interp_ut, r_range)
# absolute error
err_t = abs.(test_t .- ut_true)
The same is done for the radial and azimuthal components. These components are then used to create a measure of error distance between the true and interpolated velocities, using the Euclidean norm,
where δu denotes the difference in each velocity component between the two methods.
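Explicitly, assuming a Euclidean norm over the three non-zero components:

```latex
\Delta u = \sqrt{\left(\delta u^t\right)^2 + \left(\delta u^r\right)^2 + \left(\delta u^\phi\right)^2}.
```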
As an example, for various offsets δr, this error by radius is:
The above figure shows the rolling mean of the error over the sampled points. Interesting to note is that the error close to the event horizon is left relatively unchanged for different δr. Instead, perhaps unsurprisingly, the offset primarily impacts the initial error near the starting radius. To me, this implies there may be some error in the integration over this region that I will need to check, but investigating this is beyond the scope of this post.
To examine how this error changes with offset and integrator tolerance, we will plot the mean and standard deviation over the lifetime of the plunging trajectory. For δr spanning a range of magnitudes, we find these scale:
Both measures scale relatively linearly up until some cutoff, at which point we find a minimum deviation in error, before a plateau. This plateau and the location of the cutoff scale with the reltol of the integrator, implying the mean error is indeed a numerical error, and not directly a methodological error.
Note that the overall best mean error is useful for anticipating expected errors, and that the location of this best-mean plateau gives us some way to tune the offset δr to the integrator tolerances.
We can, however, overall expect the relative error to be no worse than 1 part in 100 using this interpolation method, which should be sufficient for our purposes (though I will note this seems quite high to me). We can test this by recreating a known rendering result, namely the relative redshift of an equatorial accretion disc.
A standard result in relativity is that the energy of a particle with four-momentum , measured by an observer moving with is
If we consider the redshift of the accretion disc as viewed by a distant observer as the ratio of energies measured at the disc and at the observer,
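In symbols, matching the @tullio contractions in the code below, the measured energy and the redshift factor are:

```latex
E = -g_{\mu\nu}\, p^\mu u^\nu,
\qquad
g = \frac{E_{\mathrm{obs}}}{E_{\mathrm{disc}}}.
```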
then we can colour the redshift for each pixel in our traced image. Using Tullio.jl for the einsum notation, we can easily express a PointFunction which will do this for us:
using Tullio
# need the ISCO radius so we can differentiate plunging and regular orbits
isco = Gradus.isco(m)
# fixed observer velocity
v_obs = @SVector [1.0, 0.0, 0.0, 0.0]
# fixed observer position
# have to put ourselves further away than usual to ensure the metric-flat space
# approximation holds
u_obs = @SVector [0.0, 10_000.0, deg2rad(85), 0.0]
# an interpolation which returns the plunging velocities for us
# (this is part of the standard Gradus.jl exports)
pl_int = interpolate_plunging_velocities(m)
# metric matrix at observer
g_obs = metric(m, u_obs)
redshift = PointFunction((m, gp, mt) -> begin
let r = gp.u2[2]
v_disc = if r < isco
# plunging region
vtemp = pl_int(r)
# we have to reverse radial velocity due to backwards tracing convention
# see https://github.com/astro-group-bristol/Gradus.jl/issues/3
@SVector [vtemp[1], -vtemp[2], vtemp[3], vtemp[4]]
else
# regular circular orbit
CircularOrbits.fourvelocity(m, r)
end
# get metric matrix at position on disc
g = metric(m, gp.u2)
@tullio E_disc := - g[i,j] * gp.v2[i] * v_disc[j]
@tullio E_obs := - g_obs[i,j] * gp.v1[i] * v_obs[j]
E_obs / E_disc
end
end)
Note that we position the observer at r = 10,000 instead of a closer radius. We do this to ensure the metric is closer to being flat space, and therefore that the measured energy approximates that of a static observer at infinity for all geodesics. At closer distances, the deviation is larger for geodesics directed towards the edges of the accretion disc, whereas at this alternate distance the deviation is approximately 1 part in 1000. In the literature, the common practice is to assume the observer is exactly in flat space, and hard-code this result into the redshift calculation, as is done with common Cunningham implementations, but we will instead use the full definition of redshift, precisely as in Eq. (35), and suffer the consequences.
We can then render an image in the usual way:
# set inner radius within the ISCO
d = GeometricThinDisc(
1.0, 50.0, deg2rad(90)
)
# small utility function
render_redshift(m, rspf) = rendergeodesics(
m, u_obs, d, 20_000.0;
pf = rspf ∘ ConstPointFunctions.filter_intersected,
image_width=750,
image_height=400,
fov_factor=16.0,
effective_infinity=11_000,
abstol=1e-11, reltol=1e-11,
verbose=true
)
# comparison metric
m_cmp = BoyerLindquistAD(M=m.M, a=m.a)
# render the redshift images
img_ad = render_redshift(m, redshift)
img_cmp = render_redshift(m_cmp, ConstPointFunctions.redshift)
Subtracting these images to compare the difference between our interpolated redshift and the Cunningham analytic values:
We see the error is non-zero, though fortunately relatively robust to integrator tolerances, retaining the same order of magnitude even at coarser tolerances.
For the first-order method comparison, where m_cmp = BoyerLindquistFO(M=m.M, a=m.a), the situation is a little worse, but not dire:
This is all pretty good news. Sort of. The difference is still about 5 orders of magnitude larger than I would desire. The main difference is clearly within the ISCO, and, judging by the features in the noise, is caused by some geometrical anomaly. I will investigate this more at a later date.
I have a strong suspicion the differences in the first-order method outside of the ISCO, which look considerably more like noise, are correlated with numerical differences in how the impact parameter is mapped, but I would need to test to confirm – which, again, is beyond the scope of this post.
But, with a proof of concept, we just have one missing ingredient before we can calculate this redshift for other metrics!
If we want this interpolation method to work on general static, axis-symmetric metrics, the only thing left to define is a function for finding the ISCO.
The ISCO is the radius corresponding to the marginally stable circular orbit,
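In symbols, with E(r) the circular orbit energy:

```latex
\left.\frac{\partial E}{\partial r}\right|_{r = r_{\mathrm{ISCO}}} = 0,
```

with the second derivative supplied to accelerate the root finder.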
This may be found relatively easily via Eq. (31). The strategy here is to use automatic differentiation to compute the gradient, and a root-finding tool to locate the minimum. With the help of ForwardDiff.jl and Roots.jl, this is pretty easy and fast:
using ForwardDiff
using Roots
function isco(m, lower_bound, upper_bound)
dE(r) = ForwardDiff.derivative(
x -> CircularOrbits.energy(m, x),
r
)
d2E(r) = ForwardDiff.derivative(dE, r)
find_zero((dE, d2E), (lower_bound, upper_bound))
end
We have to use the bounded solvers, since our domain is undefined below a certain radius, where the energy calculation throws a DomainError. The upper bound may be quite confidently set to something large, however there is generally no safe lower bound that works for all metrics. Instead, an approach is to determine the lower bound coarsely, as a radius for which E > 1. This may be programmatically found with a simple brute-force:
function find_lower_isco_bound(m; upper_bound = 100.0, step=-0.005)
# iterate in reverse with a negative step
for r in upper_bound:step:1.0
if CircularOrbits.energy(m, r) > 1.0
return r
end
end
# for type stability
return 0.0
end
This function, despite its crudeness, performs fine, and seems to be able to find a suitable solution for all currently implemented metrics:
julia> @btime find_lower_isco_bound($m)
92.847 μs (0 allocations: 0 bytes)
Adding a dispatch to make our isco API a little friendlier:
isco(m; upper_bound = 100.0) = isco(m, find_lower_isco_bound(m; upper_bound), upper_bound)
Testing this for a few different metric setups:
Seems to work well! We should now be able to visualise redshift patterns for any of our static, axis-symmetric metrics.
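As an independent check of this energy-minimisation approach, the Kerr ISCO has a closed-form solution (Bardeen, Press & Teukolsky 1972). A standalone Python sketch, with illustrative function names, comparing a brute-force minimisation of the circular orbit energy against the analytic radius:

```python
import math

def bardeen_isco(M, a, prograde=True):
    # Closed-form Kerr ISCO radius (Bardeen, Press & Teukolsky 1972)
    x = a / M
    z1 = 1 + (1 - x * x) ** (1 / 3) * ((1 + x) ** (1 / 3) + (1 - x) ** (1 / 3))
    z2 = math.sqrt(3 * x * x + z1 * z1)
    s = -1.0 if prograde else 1.0
    return M * (3 + z2 + s * math.sqrt((3 - z1) * (3 + z1 + 2 * z2)))

def circular_energy(M, a, r):
    # E for a prograde circular equatorial Kerr orbit, with v = sqrt(M/r):
    #   E = (1 - 2v^2 + (a/M) v^3) / sqrt(1 - 3v^2 + 2 (a/M) v^3)
    v = math.sqrt(M / r)
    return (1 - 2 * v**2 + (a / M) * v**3) / math.sqrt(1 - 3 * v**2 + 2 * (a / M) * v**3)

def isco_by_minimisation(M, a, lower, upper, step=1e-4):
    # Crude brute-force argmin of E(r); mirrors the idea of locating dE/dr = 0
    best_r, best_E = lower, float("inf")
    r = lower
    while r <= upper:
        E = circular_energy(M, a, r)
        if E < best_E:
            best_r, best_E = r, E
        r += step
    return best_r
```

The minimiser recovers the analytic radius to within the grid resolution, lending some confidence to the automatic differentiation approach used above.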
Just for a laugh, let's see what the redshift of an equatorial accretion disc around an Einstein-Maxwell-Dilaton-Axion singularity would look like. The metric components for this I have taken from García et al. (1995)[3], allowing reuse of all of our above code, with just one alteration:
m = DilatonAxionAD(M=1.0, a=0.998, β=1.0, b=1.0)
This metric is a little fiddly to render at the moment, because I probably have some mathematical typos in it, and, since the Schwarzschild equivalent radius is currently not implemented, the cutoff on the inner horizon is underestimated. To avoid unnecessarily long integrations, we can just fix the minimum timestep to something an order of magnitude smaller than our tolerances:
img = @time rendergeodesics(
m, u_obs, d, 2000.0;
pf = redshift ∘ ConstPointFunctions.filter_intersected,
image_width=1400,
image_height=800,
fov_factor=18.0,
abstol=1e-10, reltol=1e-10,
verbose=false,
closest_approach=1.01,
dtmin=1e-9
)
We set verbose=false to avoid excessive warnings concerning iteration counts, and produce the following render:
Gorgeous! But no doubt highly unphysical, and probably littered with mathematical typos. There are also clearly numerical anomalies and instabilities visible in the render, which appear to be related to close-to-ISCO redshifts – plenty of bug hunting and optimizing to be done.
At some point I will endeavour to properly follow the EMDA derivation and motivations, and maybe there's a blog post in that, but this brief tourist's curiosity outside of GR is sufficient for the spectacle, and shows the method of calculating redshift does indeed "work" for static, axis-symmetric metrics.
[1] T. Johannsen (2013). Regular black hole metric with three constants of motion. Phys. Rev. D, 88:4.
[2] C. T. Cunningham (1975). The effects of redshifts and focusing on the spectrum of an accretion disk around a Kerr black hole. ApJ, 202:788-802.
[3] A. García, D. Galtsov, and O. Kechkin (1995). Class of stationary axisymmetric solutions of the Einstein-Maxwell-Dilaton-Axion field equations. Phys. Rev. Lett., 74:1276.