Laurent's personal blog — https://laurentrdc.xyz/atom.xml

# The algebraic structure of a trading stop-loss system

https://laurentrdc.xyz//posts/algebra-stoploss.html — 2023-05-07

I was once an undergraduate student in a joint Mathematics & Physics program. Some of the math courses, namely group theory and algebra, remained very abstract to me throughout my education. There is some group theory in the description of symmetries of physical systems; but being an experimentalist, I didn't use more than 5% of what I learned in my undergrad during my PhD.

However, in the course of my work now in finance, I had the pleasure of discovering that I was actually working with an algebraic structure. This post describes how that happened.

The small trading firm for which I work is focusing a bit more on automated performance monitoring these days. With detailed trading performance data streaming in, it is now a good time to implement a stop-loss system.

A stop-loss system is a system which receives trading performance data, and emits three categories of signal:

• an all-clear signal, meaning that nothing in recent trading performance indicates a problem;
• a warning signal, meaning that recent trading performance is degraded – but not yet concerning – and a human should take a look under the hood;
• a halt signal, meaning that there is most probably something wrong, trading should be halted at once.

Of course, we’re trading different products in different markets and even jurisdictions, and therefore the trading performance of every product is monitored independently. Moreover, our risk tolerance or expectations may be different for every product, and so a stop-loss system is really a framework in which to express multiple stop-loss rules, with different products being supervised by completely different stop-loss rules.

Let us consider examples: assume that we’re trading a particular stock like AAPL1. Sensible stop-loss rules might be:

• If our current position has lost >10% in value over the last month, emit a warning; if the position has lost >25% over the last month, emit a halt signal.
• If we’re expecting market volatility in the next hour to be high (for example, due to expected high-impact news), emit a halt signal.
• If our forecast of the ticker price is way off – perhaps due to a problem in the forecasting model –, emit a halt signal.

Here is what a rule framework might look like2:

from enum import Enum, auto, unique
from typing import Callable

@unique
class Signal(Enum):
    AllClear = auto()
    Warn     = auto()
    Halt     = auto()

class Context:
    ...

Rule = Callable[[Context], Signal]

# Example rule
def rule(context: Context) -> Signal:
    ...

A Rule is a function from some Context object to a Signal. We’re packing all information required to make decisions in a single data structure for reasons which will become obvious shortly. In this framework, we may express one of the stop loss rule examples as:

def rule(context: Context) -> Signal:
    recent_loss = loss_percent( context.recent_performance(period="30d") )
    if recent_loss > 0.25:
        return Signal.Halt
    elif recent_loss > 0.10:
        return Signal.Warn
    else:
        return Signal.AllClear

For the remainder of this post, I don’t care anymore about the domain-specific content of a rule.

My colleagues and I are expecting that, in practice, we will have pretty complex rules. In order to build complex rules from smaller, simpler rules, I wanted to be able to compose Rules together. This is straightforward because all rules have the same input and output types. Consider two rules, rule1 and rule2. If I want a new rule to halt if both rule1 and rule2 emit Signal.Halt, I could write it like this:

def rule1(context: Context) -> Signal:
    ...

def rule2(context: Context) -> Signal:
    ...

def rule_lax(context: Context) -> Signal:
    sig1 = rule1(context)
    sig2 = rule2(context)

    if sig1 == sig2 == Signal.Halt:
        return Signal.Halt
    elif sig1 == sig2 == Signal.Warn:
        return Signal.Warn
    else:
        return Signal.AllClear

That is an acceptable definition of rule composition. Since rule_lax will emit a Halt signal if both sub-rules emit a Halt signal, we’ll call this type of composition conjunction. In order to make it more ergonomic to write, let us wrap all rules in an object and re-use the & (overloaded and) operator:

from dataclasses import dataclass
from enum import Enum
from operator import attrgetter
from typing import Callable

class Signal(Enum):
    """
    Signals can be composed using (&):

    >>> Signal.AllClear & Signal.AllClear
    <Signal.AllClear: 1>
    >>> Signal.Warn & Signal.Halt
    <Signal.Warn: 2>
    >>> Signal.Halt & Signal.Halt
    <Signal.Halt: 3>
    """
    AllClear = 1
    Warn     = 2
    Halt     = 3

    def __and__(self, other: "Signal") -> "Signal":
        return min(self, other, key=attrgetter('value'))

@dataclass
class rule:
    _inner: Callable[[Context], Signal]

    def __call__(self, context: Context) -> Signal:
        return self._inner(context)

    def __and__(self, other: "rule") -> "rule":
        def newinner(context: Context) -> Signal:
            return self(context) & other(context)
        return self.__class__(newinner)

and now we can re-write rule_lax like so:

# The @rule decorator is required in order to lift rule1 from a regular function
# to the rule object
@rule
def rule1(context: Context) -> Signal:
    ...

@rule
def rule2(context: Context) -> Signal:
    ...

rule_lax = rule1 & rule2

Now, rule_lax is defined such that it'll emit Signal.Halt if both rule1 and rule2 emit Signal.Halt. The same is true of warnings; if both rules emit a warning, then rule_lax will emit Signal.Warn. Here is a table which summarizes this composition:

| $A$ | $B$ | $A ~\&~ B$ |
|-----|-----|------------|
| $C$ | $C$ | $C$        |
| $C$ | $W$ | $C$        |
| $C$ | $H$ | $C$        |
| $W$ | $C$ | $C$        |
| $W$ | $W$ | $W$        |
| $W$ | $H$ | $W$        |
| $H$ | $C$ | $C$        |
| $H$ | $W$ | $W$        |
| $H$ | $H$ | $H$        |

where $C$ is Signal.AllClear, $W$ is Signal.Warn, and $H$ is Signal.Halt. Therefore, & is a binary operation which takes two Rules and produces a new Rule. Conjunction is not the only sensible way to compose rules, however; consider instead:

def rule_strict(context: Context) -> Signal:
    sig1 = rule1(context)
    sig2 = rule2(context)

    if (sig1 == Signal.Halt) or (sig2 == Signal.Halt):
        return Signal.Halt
    elif (sig1 == Signal.Warn) or (sig2 == Signal.Warn):
        return Signal.Warn
    else:
        return Signal.AllClear

In this case, rule_strict is more, uh, strict than rule_lax; it emits Signal.Halt if either rule1 or rule2 emits a halt signal. We'll call this composition disjunction and overload the | operator to make it more ergonomic to write:

class Signal(Enum):
    """
    Signals can be composed using (&) and (|):

    >>> Signal.AllClear & Signal.AllClear
    <Signal.AllClear: 1>
    >>> Signal.Warn & Signal.Halt
    <Signal.Warn: 2>
    >>> Signal.Warn | Signal.Halt
    <Signal.Halt: 3>
    """
    AllClear = 1
    Warn     = 2
    Halt     = 3

    def __and__(self, other: "Signal") -> "Signal":
        return min(self, other, key=attrgetter('value'))

    def __or__(self, other: "Signal") -> "Signal":
        return max(self, other, key=attrgetter('value'))

@dataclass
class rule:
    _inner: Callable[[Context], Signal]

    def __call__(self, context: Context) -> Signal:
        return self._inner(context)

    def __and__(self, other: "rule") -> "rule":
        def newinner(context: Context) -> Signal:
            return self(context) & other(context)
        return self.__class__(newinner)

    def __or__(self, other: "rule") -> "rule":
        def newinner(context: Context) -> Signal:
            return self(context) | other(context)
        return self.__class__(newinner)

With this implementation, we can express rule_lax and rule_strict as:

# The @rule decorator is required in order to lift rule1 from a regular function
# to the rule object
@rule
def rule1(context: Context) -> Signal:
    ...

@rule
def rule2(context: Context) -> Signal:
    ...

rule_lax    = rule1 & rule2
rule_strict = rule1 | rule2

We can update the table for the definition of & and |:

| $A$ | $B$ | $A ~\&~ B$ | $A ~\vert~ B$ |
|-----|-----|------------|---------------|
| $C$ | $C$ | $C$        | $C$           |
| $C$ | $W$ | $C$        | $W$           |
| $C$ | $H$ | $C$        | $H$           |
| $W$ | $C$ | $C$        | $W$           |
| $W$ | $W$ | $W$        | $W$           |
| $W$ | $H$ | $W$        | $H$           |
| $H$ | $C$ | $C$        | $H$           |
| $H$ | $W$ | $W$        | $H$           |
| $H$ | $H$ | $H$        | $H$           |

So for a given Context, which is fixed when the trading stop-loss system is running, we have:

• A set of rule outcomes of type Signal;
• A binary operation called conjunction (the & operator):
  • & is associative;
  • & is commutative;
  • & has an identity, Signal.Halt;
  • & does NOT have inverse elements.
• A binary operation called disjunction (the | operator):
  • | is associative;
  • | is commutative;
  • | has an identity, Signal.AllClear;
  • | does NOT have inverse elements.

That looks like a commutative semiring to me! Just a few more things to check:

• | distributes over & from both sides:
  • $a ~|~ (b ~\&~ c) = (a ~|~ b) ~\&~ (a ~|~ c)$ for all $a$, $b$, and $c$;
  • $(a ~\&~ b) ~|~ c = (a ~|~ c) ~\&~ (b ~|~ c)$ for all $a$, $b$, and $c$.
• The identity element of & (called $0$, in this case Signal.Halt) annihilates the | operation, i.e. $0 ~|~ a = 0$ for all $a$.

Don’t take my word for it, we can check exhaustively:

from itertools import product

zero = Signal.Halt
one  = Signal.AllClear

# Assert & is associative
assert all( (a & b) & c == a & (b & c) for (a, b, c) in product(Signal, repeat=3)  )
# Assert & is commutative
assert all( a & b == b & a for (a, b) in product(Signal, repeat=2)  )
# Assert & has an identity
assert all( a & zero == a for a in Signal )

# Assert | is associative
assert all( (a | b) | c == a | (b | c) for (a, b, c) in product(Signal, repeat=3)  )
# Assert | is commutative
assert all( a | b == b | a for (a, b) in product(Signal, repeat=2)  )
# Assert | has an identity
assert all( a | one == a for a in Signal )

# Assert | distributes over & on both sides
assert all( a | (b & c) == (a | b) & (a | c) for (a, b, c) in product(Signal, repeat=3)  )
assert all( (a & b) | c == (a | c) & (b | c) for (a, b, c) in product(Signal, repeat=3)  )

# Assert identity of & annihilates with respect to |
assert all( (zero | a) == zero for a in Signal)

and there we have it! This design of a trading stop-loss system is an example of a commutative semiring. This fact has no practical consequence; I'm just happy to have spotted this structure more than 10 years after seeing it in undergrad.

# Efficient rolling statistics

https://laurentrdc.xyz//posts/rolling-stats.html — published 2023-03-23, updated 2023-07-17

In the context of an array, rolling operations are operations which are computed at each index of the array based on a subset of values in the array. A common rolling operation is the rolling mean, also known as the moving average.

The best way to understand is to see it in action. Consider the following list:

[0, 1, 2, 3, 4, 3, 2, 1]

The rolling average with a window size of 2 is:

[ (0 + 0)/2, (0 + 1)/2, (1 + 2)/2, (2 + 3)/2, (3 + 4)/2, (4 + 3)/2, (3 + 2)/2, (2 + 1)/2]

or

[0, 0.5, 1.5, 2.5, 3.5, 3.5, 2.5, 1.5]
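The example above can be reproduced with a short naive implementation. I'll use Python here for illustration (the implementations later in this post are in Haskell); this is a sketch which assumes, as in the example, that values before the start of the list are treated as 0:

```python
def rolling_mean_naive(window, xs):
    """Naive rolling mean: O(n * window).

    Indices before the first complete window treat missing values as 0,
    matching the worked example above.
    """
    out = []
    for i in range(len(xs)):
        # Take up to `window` values ending at index i; missing values count as 0
        chunk = xs[max(0, i - window + 1) : i + 1]
        out.append(sum(chunk) / window)
    return out

print(rolling_mean_naive(2, [0, 1, 2, 3, 4, 3, 2, 1]))
# [0.0, 0.5, 1.5, 2.5, 3.5, 3.5, 2.5, 1.5]
```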

Rolling operations such as the rolling mean are tremendously useful at my work. When working with time-series, for example, the rolling mean may be a good indicator to include as part of machine learning feature engineering or trading strategy design. Here's an example of using the rolling average price of AAPL stock as an indicator:

Closing price of AAPL stock (solid), with the rolling mean of the closing price using three different windows as an example of indicator (dashed). (Source code)

The problem is that rolling operations can be rather slow if implemented improperly. In this post, I’ll show you how to implement efficient rolling statistics using a method based on recurrence relations.

In principle, a general rolling function for lists might have the following type signature:

rolling :: Int        -- ^ Window length
        -> ([a] -> b) -- ^ Rolling function, e.g. the mean or the standard deviation
        -> [a]        -- ^ An input list of values
        -> [b]        -- ^ An output list of values

In this hypothetical scenario, the rolling function of type [a] -> b receives a sublist of length $N$, the window length. The problem is, if the input list has length $n$ and the window has length $N$, the complexity of this operation is at best $\mathcal{O}(n \cdot N)$. Even if you're using a data structure which is more efficient than a list – an array, for example – this is still inefficient.

Let's see how to make this operation $\mathcal{O}(n)$, i.e. linear in the input length and independent of the window length!

## Recurrence relations and the rolling average

The recipe for these algorithms involves constructing the recurrence relation of the operation. A recurrence relation is a way to describe a series by expressing how a term at index $i$ is related to the term at index $i-1$.

Let's proceed by example. Consider a series of values $X$ like so:

$X = \left[ x_0, x_1, ...\right]$

We want to calculate the rolling average $\bar{X} = \left[ \bar{x}_0, \bar{x}_1, ... \right]$ of series $X$ with a window length $N$. The equation for the $j$th term, $\bar{x}_j$, is given by:

$\bar{x}_j = \frac{1}{N}\sum_{i=j - N + 1}^{j} x_i = \frac{1}{N} \sum \left[ x_{j - N + 1}, x_{j - N + 2}, ..., x_{j} \right]$

Now let’s look at the equation for the $(j-1)$th term:

$\bar{x}_{j-1} = \frac{1}{N}\sum_{i=j - N}^{j-1} x_i = \frac{1}{N} \sum \left[ x_{j - N}, x_{j - N + 1}, ..., x_{j-1} \right]$

Note the large overlap between the computation of $\bar{x}_j$ and $\bar{x}_{j-1}$; in both cases, you need to sum up $\left[ x_{j-N+1}, x_{j-N+2}, ..., x_{j-1} \right]$.

Given that the overlap is very large, let’s take the difference between two consecutive terms, $\bar{x}_j$ and $\bar{x}_{j-1}$:

\begin{aligned} \bar{x}_j - \bar{x}_{j-1} &= \frac{1}{N} \sum \left[ x_{j - N + 1}, x_{j - N + 2}, ..., x_j \right] - \frac{1}{N} \sum \left[ x_{j - N}, x_{j - N + 1}, ..., x_{j-1} \right] \\ &= \frac{1}{N} \sum \left[ -x_{j-N} + x_{j - N + 1} - x_{j - N + 1} + x_{j - N + 2} - x_{j - N + 2} + ... + x_{j-1} - x_{j-1} + x_j\right] \\ &= \frac{1}{N} ( x_{j} - x_{j - N} ) \end{aligned}

Rewriting a little:

$\bar{x}_j = \bar{x}_{j-1} + \frac{1}{N} ( x_j - x_{j-N} )$

This is the recurrence relation of the rolling average with a window of length $N$. It tells us that for every term of the rolling average series $\bar{X}$, we only need to involve two terms of the original series $X$, regardless of the window. Awesome!
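This recurrence can be sketched directly in Python (padding the first window − 1 entries with zeros, an assumption that matches the Haskell implementation in this post):

```python
def rolling_mean(window, xs):
    """O(n) rolling mean via the recurrence
    mean[j] = mean[j-1] + (xs[j] - xs[j-window]) / window.

    Indices before the first complete window are set to 0.
    """
    n = len(xs)
    out = [0.0] * (window - 1)
    # Starting point: mean of the first complete window
    mean = sum(xs[:window]) / window
    out.append(mean)
    for j in range(window, n):
        # Each step touches only two terms of the input, regardless of window length
        mean += (xs[j] - xs[j - window]) / window
        out.append(mean)
    return out

print(rolling_mean(2, [0, 1, 2, 3, 4, 5]))
# [0.0, 0.5, 1.5, 2.5, 3.5, 4.5]
```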

Let’s implement this in Haskell. We’ll use the vector library which is much faster than lists for numerical calculations like this, and comes with some combinators which make it pretty easy to implement the rolling mean. Regular users of vector will notice that the recurrence relation above fits the scanl use-case. If you’re unfamiliar, scanl is a function which looks like this:

scanl :: (b -> a -> b) -- ^ Combination function
      -> b             -- ^ Starting value
      -> Vector a      -- ^ Input
      -> Vector b      -- ^ Output

For example:

>>> import Data.Vector as Vector
>>> Vector.scanl (+) 0 (Vector.fromList [1, 4, 7, 10])
[0,1,5,12,22]

If we decompose the example:

[ 0                          -- 0
,     0 + 1                  -- 1
,    (0 + 1) + 4             -- 5
,   ((0 + 1) + 4) + 7        -- 12
,  (((0 + 1) + 4) + 7) + 10  -- 22
]

Note that the starting value itself is included as the first element of the output.

In this specific case, Vector.scanl (+) 0 is numpy.cumsum with the starting value prepended, if you're more familiar with Python. In general, scanl is an accumulation from left to right, where the "scanned" term at index i depends on the value of the input at index i and the scanned term at i-1. This is perfect to represent recurrence relations. Note that in the case of the rolling mean recurrence relation, we'll need access to the values at indices i and i - N, where again N is the length of the window. The canonical way to operate on more than one array at once elementwise is the zip* family of functions.
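For Python readers, the closest standard-library analogue of scanl is itertools.accumulate with an explicit initial value; note that the starting value appears in the output:

```python
from itertools import accumulate

# Equivalent of Haskell's `Vector.scanl (+) 0` on [1, 4, 7, 10];
# the default combination function of accumulate is addition.
print(list(accumulate([1, 4, 7, 10], initial=0)))
# [0, 1, 5, 12, 22]
```

The `initial` keyword argument requires Python 3.8 or later.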

-- from the vector library
import           Data.Vector ( Vector )
import qualified Data.Vector as Vector

-- | Perform the rolling mean calculation on a vector.
rollingMean :: Int           -- ^ Window length
            -> Vector Double -- ^ Input series
            -> Vector Double
rollingMean window vs
    = let w     = fromIntegral window
          -- Starting point is the mean of the first complete window
          start = Vector.sum (Vector.take window vs) / w

          -- Consider the recurrence relation mean[i] = mean[i-1] + (edge - lag)/w
          -- where w    = window length
          --       edge = vs[i]
          --       lag  = vs[i - w]
          edge = Vector.drop window vs
          lag  = Vector.take (Vector.length vs - window) vs

          -- mean[i] = mean[i-1] + diff, where diff is:
          diff = Vector.zipWith (\p n -> (p - n)/w) edge lag

          -- The rolling mean for the elements at indices i < window - 1 is set to 0
       in Vector.replicate (window - 1) 0 <> Vector.scanl (+) start diff

With this function, we can compute the rolling mean like so:

>>> import Data.Vector as Vector
>>> rollingMean 2 (Vector.fromList [0,1,2,3,4,5])
[0.0,0.5,1.5,2.5,3.5,4.5]

#### Complexity analysis

Let's say the window length is $N$ and the input array length is $n$. The naive algorithm has complexity $\mathcal{O}(n \cdot N)$. On the other hand, rollingMean has a complexity of $\mathcal{O}(n + N)$:

• Vector.sum to compute start is $\mathcal{O}(N)$;
• Vector.replicate (window - 1) has order $\mathcal{O}(N)$;
• Vector.drop and Vector.take are both $\mathcal{O}(1)$;
• Vector.scanl and Vector.zipWith are both $\mathcal{O}(n)$ (and in practice, these operations should get fused into a single pass).

However, usually $N \ll n$. For example, at work, we typically roll 10+ years of data with a window on the order of days or weeks. Therefore, we have that rollingMean scales linearly with the length of the input ($\mathcal{O}(n)$).

## Efficient rolling variance

Now that we've developed a procedure to determine an efficient rolling algorithm, let's do it for the (unbiased) variance. Again, consider a series of values:

$X = \left[ x_0, x_1, ...\right]$

We want to calculate the rolling variance $\sigma^2(X)$ of series $X$ with a window length $N$. The equation for the $j$th term, $\sigma^2_j$, is given by:

$\sigma^2_j = \frac{1}{N - 1}\sum_{i=j - N + 1}^{j} (x_i - \bar{x}_j)^2 = \frac{1}{N-1} \sum \left[ (x_{j - N + 1} - \bar{x}_j)^2, ..., (x_j - \bar{x}_j)^2 \right]$

where $\bar{x}_j$ is the rolling mean at index $j$, just like in the previous section.

Let's simplify a bit by expanding the squares:

\begin{aligned} (N - 1) ~ \sigma^2_j &= \sum_{i=j-N+1}^{j} (x_i - \bar{x}_j)^2 \\ &= N\bar{x}^2_j + \sum_{i=j - N + 1}^{j} \left( x^2_i - 2 x_i \bar{x}_j \right) \end{aligned}

We note here that $\sum_{i=j - N + 1}^{j} x_i \equiv N \bar{x}_j$, which allows us to simplify the equation further:

\begin{aligned} (N - 1) ~ \sigma^2_j &= N\bar{x}^2_j - 2 N \bar{x}^2_j + \sum_{i=j - N + 1}^{j} x^2_i \\ &= -N\bar{x}^2_j + \sum_{i=j - N + 1}^{j} x^2_i \end{aligned}

This leads to the following difference between consecutive rolling unbiased variance terms:

\begin{aligned} (N - 1) \left( \sigma^2_j - \sigma^2_{j-1} \right) &= N\bar{x}^2_{j - 1} - N\bar{x}^2_j + \sum_{i=j - N + 1}^{j} x^2_i - \sum_{i'=j - N}^{j-1} x^2_{i'} \\ &= N\bar{x}^2_{j - 1} - N\bar{x}^2_j + x^2_j - x^2_{j-N} \end{aligned}

and therefore, the recurrence relation:

$\sigma^2_j = \sigma^2_{j-1} + \frac{1}{N-1} \left[ N\bar{x}^2_{j - 1} - N\bar{x}^2_j + x^2_j - x^2_{j - N} \right]$

This recurrence relation looks pretty similar to the rolling mean recurrence relation, with the added wrinkle that you need to know the rolling mean in advance.

#### Haskell implementation

Let's implement this in Haskell again. We can re-use our rollingMean. We'll also need to compute the unbiased variance in the starting window; I'll use the statistics library for brevity, but it's easy to implement yourself if you care about minimizing dependencies.

-- from the vector library
import           Data.Vector ( Vector )
import qualified Data.Vector as Vector
-- from the statistics library
import           Statistics.Sample ( varianceUnbiased )

rollingMean :: Int -> Vector Double -> Vector Double
rollingMean = (...) -- see above

-- | Perform the rolling unbiased variance calculation on a vector.
rollingVar :: Int -> Vector Double -> Vector Double
rollingVar window vs
    = let start   = varianceUnbiased $ Vector.take window vs
          n       = fromIntegral window
          ms      = rollingMean window vs

          -- Rolling mean terms leading by N
          ms_edge = Vector.drop window ms
          -- Rolling mean terms leading by N - 1
          ms_lag  = Vector.drop (window - 1) ms

          xs_edge = Vector.drop window vs
          xs_lag  = vs

          -- Implementation of the recurrence relation, minus the previous term in the series
          -- There's no way to make the following look nice, sorry.
          -- N * \bar{x}^2_{j-1} - N * \bar{x}^2_{j} + x^2_j - x^2_{j-N}
          term xbar_nm1 xbar_n x_n x_0 = (n * (xbar_nm1**2) - n * (xbar_n ** 2) + x_n**2 - x_0**2)/(n - 1)

          -- The rolling variance for the elements at indices i < window - 1 is set to 0
       in Vector.replicate (window - 1) 0 <> Vector.scanl (+) start (Vector.zipWith4 term ms_lag ms_edge xs_edge xs_lag)

Note that it may be beneficial to reformulate the $N\bar{x}^2_{j - 1} - N\bar{x}^2_j + x^2_j - x^2_{j - N}$ part of the recurrence relation to optimize the rollingVar function. For example, is it faster to minimize the number of exponentiations, or of multiplications? I do not know, and leave further optimizations aside.

#### Complexity analysis

Again, let’s say the window length is $N$ and the input array length is $n$. The naive algorithm still has complexity $\mathcal{O}(n \cdot N)$. On the other hand, rollingVar has a complexity of $\mathcal{O}(n + N)$:

• varianceUnbiased to compute start is $\mathcal{O}(N)$;
• Vector.replicate (window - 1) has order $\mathcal{O}(N)$
• Vector.drop and Vector.take are both $\mathcal{O}(1)$;
• Vector.scanl and Vector.zipWith4 are both $\mathcal{O}(n)$ (and in practice, these operations should get fused to a single pass);

Since usually $N << n$, as before, we have that rollingVar scales linearly with the length of the input ($\mathcal{O}(n)$).
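To convince ourselves that the recurrence is correct, here is a sketch in Python that implements it and cross-checks every complete window against a direct (naive) unbiased variance computation:

```python
from statistics import variance  # unbiased (sample) variance

def rolling_var(window, xs):
    """O(n) rolling unbiased variance via the recurrence
    var[j] = var[j-1] + (N*m[j-1]^2 - N*m[j]^2 + x[j]^2 - x[j-N]^2) / (N - 1)
    where m is the rolling mean and N the window length.
    Entries before the first complete window are set to 0."""
    n, w = len(xs), float(window)
    # Rolling mean, via its own recurrence (see the previous section)
    m = [0.0] * (window - 1) + [sum(xs[:window]) / w]
    for j in range(window, n):
        m.append(m[-1] + (xs[j] - xs[j - window]) / w)
    # Rolling variance; the starting window is computed directly
    out = [0.0] * (window - 1) + [variance(xs[:window])]
    for j in range(window, n):
        out.append(out[-1] + (w * m[j - 1] ** 2 - w * m[j] ** 2
                              + xs[j] ** 2 - xs[j - window] ** 2) / (w - 1))
    return out

xs = [1.0, 4.0, 2.0, 8.0, 5.0, 7.0]
rv = rolling_var(3, xs)
# Cross-check each complete window against a direct computation
for j in range(2, len(xs)):
    assert abs(rv[j] - variance(xs[j - 2 : j + 1])) < 1e-9
```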

## Bonus: rolling Sharpe ratio

The Sharpe ratio1 is a common financial indicator of return on risk. Its definition is simple. Consider excess returns in a set $X$. The Sharpe ratio $S(X)$ of these excess returns is:

$S(X) = \frac{\bar{X}}{\sigma_X}$

For ordered excess returns $X = \left[ x_0, x_1, ... \right]$, the rolling Sharpe ratio at index $j$ is:

$S_j = \frac{\bar{x}_j}{\sigma_j}$

where $\bar{x}_j$ and $\sigma_j$ are the rolling mean and standard deviation at index $j$, respectively.

Since the rolling variance requires knowledge of the rolling mean, we can easily compute the rolling Sharpe ratio by modifying the implementation of rollingVar:

-- from the vector library
import           Data.Vector ( Vector )
import qualified Data.Vector as Vector
-- from the statistics library
import           Statistics.Sample ( varianceUnbiased )

rollingMean :: Int
            -> Vector Double
            -> Vector Double
rollingMean = (...)  -- see above

rollingSharpe :: Int
              -> Vector Double
              -> Vector Double
rollingSharpe window vs
    = let start   = varianceUnbiased $ Vector.take window vs
          n       = fromIntegral window
          ms      = rollingMean window vs

          -- The following expressions are taken from rollingVar
          ms_edge = Vector.drop window ms
          ms_lag  = Vector.drop (window - 1) ms
          xs_edge = Vector.drop window vs
          xs_lag  = vs
          term xbar_nm1 xbar_n x_n x_0 = (n * (xbar_nm1**2) - n * (xbar_n ** 2) + x_n**2 - x_0**2)/(n - 1)

          -- standard deviation from variance
          std = sqrt <$> Vector.scanl (+) start (Vector.zipWith4 term ms_lag ms_edge xs_edge xs_lag)

          -- The rolling Sharpe ratio for the elements at indices i < window - 1 is set to 0.
          -- Dropping (window - 1) elements of ms aligns the rolling means with std.
       in Vector.replicate (window - 1) 0 <> Vector.zipWith (/) (Vector.drop (window - 1) ms) std
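A naive reference implementation is handy for testing an efficient rolling Sharpe ratio against. Here is a Python sketch (rolling_sharpe_naive is a hypothetical helper name, O(n·N) on purpose), computing mean / standard deviation directly over each complete window:

```python
from statistics import mean, stdev  # stdev is the unbiased (sample) standard deviation

def rolling_sharpe_naive(window, xs):
    """Naive rolling Sharpe ratio: O(n * window).
    Entries before the first complete window are set to 0."""
    out = [0.0] * (window - 1)
    for j in range(window - 1, len(xs)):
        chunk = xs[j - window + 1 : j + 1]
        out.append(mean(chunk) / stdev(chunk))
    return out

# Hypothetical excess returns, for illustration only
returns = [0.5, 1.0, -0.25, 0.75, 1.25]
sharpes = rolling_sharpe_naive(3, returns)
```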

## Conclusion

In this blog post, I’ve shown you a recipe to design rolling statistics algorithms which are efficient (i.e. $\mathcal{O}(n)$) based on recurrence relations. Efficient rolling statistics as implemented in this post are an essential part of backtesting software, which is software to test trading strategies.

All code is available in this Haskell module.

# Filtering noise with discrete wavelet transforms

https://laurentrdc.xyz//posts/wavelet-filtering.html — 2022-11-23

All experimental data contains noise. Distinguishing between measurement and noise is an important component of any data analysis pipeline. However, different noise-filtering techniques are suited to different categories of noise.

In this post, I'll show you a class of filtering techniques, based on discrete wavelet transforms, which is suited to noise that cannot be filtered away with more traditional techniques – such as ones that rely on the Fourier transform. This has been important in my past research1 2, and I hope that this can help you too.

## Integral transforms

A large category of filtering techniques are based on integral transforms. Broadly speaking, an integral transform $T$ is an operation that is performed on a function $f$ and builds a function $T\left[ f\right]$ which is defined on a variable $s$, such that:

$T\left[ f \right](s) = \int dt ~ f(t) \cdot K(t, s)$

Here, $K$ (for kernel) is a function which "selects" which parts of $f(t)$ are important at a fixed $s$. Note that for an integral transform to be useful as a filter, we'll need the ability to invert the transformation, i.e. there must exist an inverse kernel $K^{-1}(s, t)$ such that:

$f(t) = \int ds ~ \left( T \left[ f\right] (s) \right) \cdot K^{-1}(s,t)$

All of this was very abstract, so let’s look at a concrete example: the Fourier transform. The Fourier transform is an integral transform where3:

\begin{align} K(t, \omega) &\equiv \frac{e^{-i \omega t}}{\sqrt{2 \pi}}\\ K^{-1}(\omega, t) &\equiv e^{i \omega t}\\ \omega & \in \mathbb{R} \end{align}

There are many other integral transforms, such as:

• The Laplace transform ($K(t, s) \equiv e^{- s t}$) which is useful to solve linear ordinary differential equations;
• The Legendre transform ($K(t, n) \equiv P_n(t)$, where $P_n$ is the $n$th Legendre polynomial) which is used to solve for electron motion in hydrogen atoms;
• The Radon transform (for which I cannot write down a kernel) which is used to analyze computed tomography data.

So why are integral transforms interesting? Well, depending on the function $f(t)$ you want to transform, you might end up with a representation of $f$ in the transformed space, $T \left[ f\right] (s)$, which has nice properties! Re-using the Fourier transform for a simple example, consider a function made up of two well-defined frequencies:

$f(t) \equiv e^{-i ~ 2t} + e^{-i ~ 5t}$

The representation of $f(t)$ in frequency space – the Fourier transform of $f$, $F\left[ f\right](\omega)$ – is very simple:

$F\left[ f\right](\omega) = \sqrt{2 \pi} \left[ \delta(\omega - 2) + \delta(\omega - 5) \right]$

The Fourier transform of $f$ is perfectly localized in frequency space, being zero everywhere except at $\omega=2$ and $\omega=5$. Functions composed of infinite waves (like the example above) always have the nice property of being localized in frequency space, which makes it easy to manipulate them… like filtering some of their components away!

### Discretization

It is much more efficient to use discretized versions of integral transforms on computers. Loosely speaking, given a discrete signal composed of $N$ terms $x_0$, …, $x_{N-1}$:

$T\left[ f \right](k) = \sum_n x_n \cdot K(n, k)$

i.e. the integral is now a finite sum. For example, the discrete Fourier transform of the signal $x_n$, $X_k$, can be written as:

$X_k = \sum_n x_n \cdot e^{-i 2 \pi k n / N}$

and its inverse becomes:

$x_n = \frac{1}{N}\sum_k X_k \cdot e^{i 2 \pi k n / N}$

This is the definition used by numpy, for example. Let's use this definition to compute the discrete Fourier transform of $f(t) \equiv e^{-i ~ 2t} + e^{-i ~ 5t}$:

Top: Signal which is composed of two natural frequencies. Bottom: Discrete Fourier transform of the top signal, showing two natural frequencies. (Source code)
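The two formulas above can be checked directly with a few lines of Python (a pedagogical O(N²) sketch; in practice you would use a fast Fourier transform such as numpy.fft):

```python
import cmath

def dft(xs):
    """X_k = sum_n x_n * exp(-2i*pi*k*n/N), as defined above."""
    N = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N) for n, x in enumerate(xs))
            for k in range(N)]

def idft(Xs):
    """x_n = (1/N) sum_k X_k * exp(2i*pi*k*n/N)."""
    N = len(Xs)
    return [sum(X * cmath.exp(2j * cmath.pi * k * n / N) for k, X in enumerate(Xs)) / N
            for n in range(N)]

# The inverse transform recovers the original signal (up to rounding)
xs = [0.0, 1.0, 2.0, 1.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(xs, idft(dft(xs))))
```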

## Using the discrete Fourier transform to filter noise

Let’s add some noise to our signal and see how we can use the discrete Fourier transform to filter it away. The discrete Fourier transform is most effective if your noise has some nice properties in frequency space. For example, consider high-frequency noise: $N(t) = \sum_{\omega=20}^{50} \sin(\omega t + \phi_{\omega})$ where $\phi_\omega$ are random phases, one for each frequency component of the noise. While the signal looks very noisy, it’s very obvious in frequency-space what is noise and what is signal: Top: Noisy signal (red) with the pure signal shown in comparison. Bottom: Discrete Fourier transform of the noisy signal shows that noise is confined to a specific region of frequency space. (Source code)

The basics of filtering are as follows: set the transform of a signal to 0 in regions which are thought to be undesirable. In the case of the Fourier transform, this is known as a band-pass filter; frequency components within a particular frequency band are passed through unchanged, and frequency components outside of this band are zeroed. Special names are given to band-pass filters with no lower bound (low-pass filter) and no upper bound (high-pass filter). We can express this filtering as a window function $W_k$ in the inverse discrete Fourier transform:

$x_{n}^{\text{filtered}} = \frac{1}{N}\sum_k W_k \cdot X_k \cdot e^{i 2 \pi k n / N}$

In the case of the plot above, we want to apply a low-pass filter with a cutoff at $\omega=10$. That is:

$W_k = \left\{ \begin{array}{cl} 1 & : \ |k| \leq 10 \\ 0 & : \ |k| > 10 \end{array} \right.$

Visually:

Top: Noisy signal with the pure signal shown in comparison. Middle: Discrete Fourier transform of the noisy signal. The band of our band-pass filter is shown, with a cutoff of $\omega=10$. All Fourier components in the zeroed region are set to 0 before performing the inverse discrete Fourier transform. Bottom: Comparison between the filtered signal and the pure signal. The only (small) deviations can be observed at the edges. (Source code)

The lesson here is that filtering signals using a discretized integral transform (like the discrete Fourier transform) consists in:

1. Performing a forward transform;
2. Modifying the transformed signal using a window function, usually by zeroing components;
3. Performing the inverse transform on the modified signal.
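The three steps above can be sketched in Python with a pedagogical O(N²) discrete Fourier transform (`lowpass` and its `cutoff` parameter are hypothetical names for this illustration):

```python
import cmath
import math

def dft(xs):
    N = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N) for n, x in enumerate(xs))
            for k in range(N)]

def idft(Xs):
    N = len(Xs)
    # Keep the real part only; we assume a real-valued input signal
    return [(sum(X * cmath.exp(2j * cmath.pi * k * n / N) for k, X in enumerate(Xs)) / N).real
            for n in range(N)]

def lowpass(xs, cutoff):
    """Step 1: forward transform. Step 2: zero components outside the band.
    Step 3: inverse transform. Index k represents frequency min(k, N - k)."""
    N = len(xs)
    Xs = dft(xs)
    windowed = [X if min(k, N - k) <= cutoff else 0.0 for k, X in enumerate(Xs)]
    return idft(windowed)

# Low frequency (1 cycle) + high frequency (6 cycles) over 16 samples
N = 16
signal = [math.cos(2 * math.pi * n / N) + math.cos(2 * math.pi * 6 * n / N) for n in range(N)]
filtered = lowpass(signal, cutoff=3)
# The high-frequency component is removed; only the 1-cycle cosine remains
assert all(abs(f - math.cos(2 * math.pi * n / N)) < 1e-6 for n, f in enumerate(filtered))
```

Zeroing index k together with index N − k (via `min(k, N - k)`) keeps the filtered signal real-valued.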

## Discrete wavelet transforms

Discrete wavelet transforms are a class of discrete transforms which decompose signals into a sum of wavelets. While the complex exponential functions which make up the Fourier basis are localized in frequency but infinite in time, wavelets are localized in both time and frequency.

In order to generate the basis wavelets, the original wavelet is stretched. This is akin to the Fourier transform, where the sine/cosine basis functions are 'stretched' by decreasing their frequency. In technical terms, the amount of 'stretch' is called the level. For example, the discrete wavelet transform using the db44 wavelet up to level 5 is the decomposition of a signal into the following wavelets:

Five of the db4 basis wavelets shown. As the level increases, the wavelet is stretched such that it can represent lower-frequency components of a signal. (Source code)

In practice, discrete wavelet transforms are expressed as two transforms per level. This means that a discrete wavelet transform of level 1 gives back two sets of coefficients. One set of coefficients contains the low-frequency components of the signal; these are usually called the approximate coefficients. The other set of coefficients contains the high-frequency components of the signal; these are usually called the detail coefficients. A wavelet transform of level 2 is done by taking the approximate coefficients of level 1, and transforming them using a stretched wavelet into two sets of coefficients: the approximate coefficients of level 2, and the detail coefficients of level 2. Therefore, a signal transformed using a wavelet transform of level $N$ has $N + 1$ sets of coefficients: the approximate and detail coefficients of level $N$, and the detail coefficients of levels $N-1$, $N-2$, …, $1$.
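To make the approximate/detail structure concrete, here is a sketch in Python using the simplest wavelet (the Haar wavelet, rather than the db4 wavelet shown above). One level splits a signal into half-length approximate and detail coefficients, and the process can be repeated on the approximate coefficients:

```python
import math

def haar_level(xs):
    """One level of the Haar DWT: returns (approx, detail) coefficients,
    each half the length of the input (input length assumed even)."""
    s = math.sqrt(2.0)
    approx = [(xs[2 * i] + xs[2 * i + 1]) / s for i in range(len(xs) // 2)]
    detail = [(xs[2 * i] - xs[2 * i + 1]) / s for i in range(len(xs) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar level back to the original signal."""
    s = math.sqrt(2.0)
    xs = []
    for a, d in zip(approx, detail):
        xs.extend([(a + d) / s, (a - d) / s])
    return xs

xs = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a1, d1 = haar_level(xs)   # level-1 approximate and detail coefficients
a2, d2 = haar_level(a1)   # level 2: transform the approximate coefficients again

# The transform is invertible (up to rounding)
assert all(abs(a - b) < 1e-9 for a, b in zip(xs, haar_inverse(a1, d1)))
# Zeroing the detail coefficients before inverting yields a smoothed signal
smooth = haar_inverse(a1, [0.0] * len(d1))
```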

## Filtering using the discrete wavelet transform

The discrete Fourier transform excels at filtering away noise which has nice properties in frequency space. This isn't always the case in practice; for example, noise may have frequency components which overlap with the signal we're looking for. This was the case in my research on ultrafast electron diffraction of polycrystalline samples5 6, where the 'noise' was a trendline which moved over time, and whose frequency components overlapped with the diffraction pattern we were trying to isolate.

As an example, let’s use real diffraction data and we’ll pretend this is a time signal, to keep the units familiar. We’ll take a look at some really annoying noise: normally-distributed white noise drawn from this distribution7: $P(x) = \frac{1}{\sqrt{2 \pi}} \exp\left(-\frac{(x + 1/2)^2}{2}\right)$

Visually: Top: Example signal with added synthetic noise. Bottom: Frequency spectrum of both the pure signal and the noise, showing overlap. This figure shows that filtering techniques based on the Fourier transform would not help in filtering the noise in this signal. (Source code)

This example shows a common situation: realistic noise whose frequency components overlap with the signal we’re trying to isolate. We wouldn’t be able to use filtering techniques based on the Fourier transform.

Now let’s look at a particular discrete wavelet transform, with the underlying wavelet sym17. Decomposing the noisy signal up to level 3, we get four components: All coefficients from a discrete wavelet transform up to level 3 with wavelet sym17. (Source code)

Looks like the approximate coefficients at level 3 contain all the information we’re looking for. Let’s set all detail coefficients to 0, and invert the transform:

That’s looking pretty good! Not perfect of course, which I expected because we’re using real data here.
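For the record, the zero-the-details-and-invert procedure can be sketched in plain Python. This is a minimal sketch using the Haar wavelet for simplicity; the figures above use sym17 via a dedicated wavelet library:

```python
from math import sqrt

def haar_step(x):
    # One DWT level: pairwise averages -> approximate, pairwise differences -> detail
    a = [(x[2*i] + x[2*i+1]) / sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / sqrt(2) for i in range(len(x) // 2)]
    return a, d

def inverse_haar_step(a, d):
    # Exact inverse of haar_step
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / sqrt(2), (ai - di) / sqrt(2)]
    return out

def lowpass(signal, level):
    """Decompose to `level`, zero all detail coefficients, reconstruct."""
    approx, detail_lengths = list(signal), []
    for _ in range(level):
        approx, detail = haar_step(approx)
        detail_lengths.append(len(detail))
    for n in reversed(detail_lengths):
        approx = inverse_haar_step(approx, [0.0] * n)  # details zeroed out
    return approx
```

Calling `lowpass(noisy_signal, level=3)` keeps only the information carried by the level-3 approximate coefficients, which is exactly the filtering operation shown above.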

## Conclusion

In this post, I’ve tried to give some of the intuition behind filtering signals using discrete wavelet transforms as an analogy to filtering with the discrete Fourier transform.

This was only a basic explanation. There is so much more to wavelet transforms. There are many classes of wavelets with different properties, some of which8 are very useful when dealing with higher-dimensional data (e.g. images and videos). If you’re dealing with noisy data, it won’t hurt to try and see if wavelets will help you understand it!

]]>
Chesterton's fence and why I'm not sold on the blockchain https://laurentrdc.xyz//posts/chesterton.html 2022-08-02T00:00:00Z 2022-08-02 The key technological advances which brought Bitcoin to life are the blockchain and its associated proof-of-work consensus algorithm. The Bitcoin whitepaper1 is very clear on its purpose:

A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending.

The double-spending problem to which Nakamoto refers is a unique challenge of digital cash implementations. Contrary to physical cash, which is difficult to copy, digital cash is but bytes; it can be trivially copied. Before Bitcoin, the most popular way to prevent double-spending has been to route all digital cash transactions on a particular network through a trusted entity which ensures that no double-spending occurs. This is how the credit card and Interac networks work, for example.

The Bitcoin whitepaper brings a new solution to the double-spending problem, a solution designed to explicitly avoid centralized trusted entities.

In software engineering, there is a principle that one should understand why something is the way it is, before trying to change it. This principle is known as Chesterton’s fence2:

There exists (…) a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, ‘I don’t see the use of this; let us clear it away.’ To which the more intelligent type of reformer will do well to answer: ’If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.

To me, the push towards decentralization is a case of Chesterton’s fence. No one wants to involve a third party in every transaction, but it is this way for two main reasons: fraud management and performance (transaction throughput).

Fraud management is a weak point of an anonymous peer-to-peer network like Bitcoin. While I appreciate the desire for anonymity, this leads to the same behaviors which led to the founding of the US Securities and Exchange Commission almost a hundred years ago. Decentralization also enabled the rise of ransomware, as it is now much harder to track the flow of money between anonymous, single-use cryptocurrency accounts.

Performance is another major downside of decentralization. As an example, Bitcoin’s throughput has never reached more than 6 transactions per second as of the time of writing. By contrast, the electronic payment network VisaNet (which powers Visa credit cards) can process up to 76 000 transactions per second.

Until blockchain enthusiasts understand the advantages of centralization presented above, I don’t think cryptocurrencies will become mainstream.

This post was inspired by the Tim O’Reilly interview on the Rational Reminder podcast.

]]>
Exploring the multiverse of possibilities all at once using monads https://laurentrdc.xyz//posts/multiverse.html 2022-03-02T00:00:00Z 2023-03-30 I’m working on a global optimization problem these days. Unlike local optimization problems, e.g. what you would solve using least-squares minimization, global optimization inevitably involves exhaustively evaluating all possible solutions and choosing the best one. As you can imagine, global optimization is much more computationally-intensive than local optimization, due to the size of the set of potential solutions. Speeding up a global optimization problem involves reducing the set of possible solutions to a minimum, based on the specifics of the problem.

In this post, I’ll show you how to build the minimal set of possible solutions to an optimization problem, instead of searching for solutions in a larger space. As we’ll see, only viable solutions are ever considered. This will be done by splitting the computations into multiple universes whenever a choice is presented to us, such that we traverse the multiverse of possibilities all at once.

### An example problem

Let’s say we’ve got 8 friends going out for a drink, in two cars with four seats each. How many arrangements of people can we have? If we don’t care about where people sit in each car, the number of arrangements is the number of combinations of 4 people we can make from 8 people, since the remaining 4 people will go in the second car. For every configuration, there’s also a configuration which swaps the cars. Therefore, there are:

$\binom{8}{4} \times 2 = \frac{8!}{4!(8-4)!} \times 2 = 140$

possible combinations. If you’re not familiar with this notation, you can read $\binom{8}{4}$ as choose 4 people out of 8 people, of which there are 70 possibilities (and then 70 other possibilities with the cars swapped). That means that if we wanted to optimize the distribution of people into the two cars – for example, if we wanted to group up the best friends together, or minimize the total weight of people in car 1, or some other objective – we would need to look at 140 solutions. This problem is purely combinatorial.

Now let’s add some constraints. Our 8 friends are coming back from the bar. Out of the 8 friends, 3 of them didn’t drink and are therefore allowed to drive. Thus, the number of possible arrangements of friends in the car has been reduced, as each car needs a driver. For one car, we need to select 1 driver out of 3, and 3 remaining passengers out of 7. However, the other car will need a driver, so really there are 6 passengers to choose from. Finally, for every arrangement there is a duplicate arrangement with the cars swapped. The number of possibilities is therefore:

$\binom{3}{1} \binom{6}{3} \times 2 = 120$
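Both counts are easy to cross-check numerically; here is a quick sketch using Python’s `math.comb` (my own check, separate from the original derivation):

```python
from math import comb

# Unconstrained: choose 4 of 8 for the first car, times the car swap
print(comb(8, 4) * 2)               # 140

# Constrained: choose 1 driver of 3, then 3 passengers of 6, times the car swap
print(comb(3, 1) * comb(6, 3) * 2)  # 120
```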

### Potential solutions as a decision graph

How else can we express the number of combinations? Think of building a solution, instead of searching for one. We may want to start by assigning a driver to car 1. For each possible decision here, we’ll assign a driver to the second car next, then passengers. The possibilities look like this: Expressing the possibilities as a decision graph. Each layer represents a choice, and each trajectory from top to bottom represents a universe in which these choices were made. (Source code)

In the figure above, no one is assigned at the start. Then, we assign the first driver (out of three choices). Then, we need to assign a second driver, of which there are only two choices remaining. The 6 passengers are then assigned. A potential solution (i.e. an assignment of people to cars) is represented by a path in the decision tree. Three possibilities are shown as examples.

This way of thinking about solutions reminds me strongly of the Everett interpretation of quantum mechanics, also known as the many-worlds interpretation or the multiverse interpretation. The three potential assignments are three universes that split from the same starting point. Enumerating all possible solutions to our example problem consists in crawling the decision tree, or crawling the multiverse of possibilities.

### Expressing the multiverse of solutions in Haskell

Based on the decision tree above, I want to run a computation which, when presented with choices, explores all possibilities all at once.

Consider the following type constructor:

newtype Possibilities a = Possibilities [a]

A computation that returns a result Possibilities a represents all possible answers of final type a. For example, a computation that can possibly have multiple answers might look like:

possibly :: [a] -> Possibilities a
possibly xs = Possibilities xs

Alternatively, a computation which is certain, i.e. has a single possibility, is represented by:

certainly :: a -> Possibilities a
certainly x = Possibilities [x] -- A single possibility = a certainty.

Possibilities is basically a list, so we’ll start with a Foldable instance which is useful for counting the number of possibilities using length:

instance Foldable Possibilities where
    foldMap m (Possibilities xs) = foldMap m xs

Possibilities is a functor:

instance Functor Possibilities where
    fmap f (Possibilities ps) = Possibilities (fmap f ps)

The interesting tidbit starts with the Applicative instance. Combining possibilities should be combinatorial, e.g. combining the possibilities of 3 drivers and 6 passengers results in 18 possibilities.

instance Applicative Possibilities where
    pure x = certainly x -- see above

    (Possibilities fs) <*> (Possibilities ps) = Possibilities [f p | f <- fs, p <- ps]

Recall that the list comprehension notation is combinatorial, i.e. [(n,m) | n <- [1..3], m <- [1..3]] has 9 elements ([(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)]).

Now for the crucial part of composing possibilities. We want past possibilities to influence future possibilities; we’ll need a monad instance. A monad instance means that if we start with multiple possibilities, and each possibility can result in multiple possibilities, the whole computation should produce multiple possibilities1.

instance Monad Possibilities where
    Possibilities ps >>= f = Possibilities $ concat [toList (f p) | p <- ps] -- concat :: [[a]] -> [a]
        where toList (Possibilities xs) = xs

Let’s define some helper datatypes and functions:

{- With the following imports:
import Data.Set (Set, (\\))
import qualified Data.Set as Set
-}

-- | All possible people which can be assigned to cars
data Person = Driver1    | Driver2    | Driver3
            | Passenger1 | Passenger2 | Passenger3
            | Passenger4 | Passenger5 | Passenger6
    deriving (Bounded, Eq, Enum, Ord, Show)

-- A car assignment consists in two cars, each with a driver,
-- as well as passengers
data CarAssignment = CarAssignment
    { driver1        :: Person
    , driver2        :: Person
    , car1Passengers :: Set Person
    , car2Passengers :: Set Person
    } deriving Show

allDrivers :: Set Person
allDrivers = Set.fromList [Driver1, Driver2, Driver3]

-- Pick a driver from an available group of people.
-- Returns the assigned driver, and the remaining unassigned people
assignDriver :: Set Person -> Possibilities (Person, Set Person)
assignDriver people = possibly [ (driver, Set.delete driver people)
                               | driver <- Set.toList $ people `Set.intersection` allDrivers
                               ]

-- Pick three passengers from an available group of people.
-- Returns the assigned passengers, and the remaining unassigned people
assign3Passengers :: Set Person -> Possibilities (Set Person, Set Person)
assign3Passengers people = possibly [ (passengers, people \\ passengers)
                                    | passengers <- setsOf3
                                    ]
    where setsOf3 = filter (\s -> length s == 3) $ Set.toList $ Set.powerSet people

Finally, we can express the multiverse of possible drivers-and-passengers assignments with great elegance. Behold:

carAssignments :: Possibilities CarAssignment
carAssignments = do
    let everyone = Set.fromList $ enumFromTo minBound maxBound -- [Driver1, Driver2, ..., Passenger6]
    (driver1, rest) <- assignDriver everyone
    (driver2, rest) <- assignDriver rest
    (car1Passengers, rest) <- assign3Passengers rest
    (car2Passengers, _) <- assign3Passengers rest
    return $ CarAssignment driver1 driver2 car1Passengers car2Passengers

Given the Monad instance for Possibilities, carAssignments returns all possible assignments. Let’s take a look at the size of the multiverse in this case:

ghci> let multiverse = carAssignments
ghci> print $ length multiverse
120

Just as we had calculated by hand. Amazing!

### Conclusion

What I’ve shown you today is how to structure computations in such a way that you are exploring the multiverse of possibilities all at once. The seasoned Haskell programmer will have recognized that the Functor, Applicative, and Monad instances of Possibilities are just like those of lists!

Although I’m not using Haskell at work2, I expect that something similar will need to be built in the near future to speed up our global optimization problem. The specific problem we are tackling has many more constraints than the example presented in this post. It would be easy to generate a list of candidate solutions, most of which are unsuitable, and filter them one by one; but there is a fixed computational cost associated with generating and checking each solution, and so reducing the set of possible solutions up front is even more important.

This post was partly inspired by the legendary blog post Typing the technical interview.

A self-contained Haskell source file containing all code from this post is available for download here.

]]>

Can you make heterogeneous lists in Haskell? Sure — as long as your intent is clear https://laurentrdc.xyz//posts/existential.html 2021-09-26T00:00:00Z 2022-07-06 Featured in Haskell Weekly issue 283

Sometimes, Haskell’s type system seems a bit restrictive compared to dynamic languages like Python. The most obvious example is the heterogeneous list:

>>> # Python
>>> mylist = ["hello", "world", 117, None]
>>>
>>> for item in mylist:
...     print(item)
hello
world
117
None

but in Haskell, list items must be of the same type:

-- Haskell
mylist = ["hello", "world", 117, ()] -- Invalid: type cannot be inferred!

This is a contrived example, of course. But consider this use-case: I just want to print the content of the list.
It’s unfortunate I can’t write:

mylist :: Show a => [a]
mylist = ["hello", "world", 117, ()] -- All these types have Show instances, but this won't compile

For this specific application, the type system is overly restrictive – as long as all I want to do is print the content of my list! In this post, I’ll show you how to do something like this using the ExistentialQuantification language extension.

## A more complex example

Let’s say I want to list American football players. There are two broad classes of players (offensive and defensive) and we want to keep track of the players in a list – the player registry. Our final objective is to print the list of players to standard output.

Let’s try to do the same in Haskell. Our first reflex might be to use a sum type:

data Player = OffensivePlayer String String -- name and position
            | DefensivePlayer String String -- name and position

playerRegistry :: [Player]
playerRegistry = ...

However, not all sports stats apply to both the OffensivePlayer and DefensivePlayer constructors. For example:

passingAccuracy :: Player -> IO Double
passingAccuracy (OffensivePlayer name pos) = lookupFromDatabase "passingAccuracy" name
passingAccuracy (DefensivePlayer name pos) = return 0 -- Defensive players don't pass

tacklesPerGame :: Player -> IO Double
tacklesPerGame (OffensivePlayer name pos) = return 0 -- Offensive players don't tackle
tacklesPerGame (DefensivePlayer name pos) = lookupFromDatabase "tacklesPerGame" name

The Player type is too general; we’re not using the type system to its full potential. It’s much more representative of our situation to use two separate types:

data OffensivePlayer = OffensivePlayer String String
data DefensivePlayer = DefensivePlayer String String

passingAccuracy :: OffensivePlayer -> IO Double
passingAccuracy = ...

tacklesPerGame :: DefensivePlayer -> IO Double
tacklesPerGame = ...

This is much safer and more appropriate.
Now let’s give ourselves the ability to print players:

instance Show OffensivePlayer where
    show (OffensivePlayer name pos) = mconcat ["< ", name, " : ", pos, " >"]

instance Show DefensivePlayer where
    show (DefensivePlayer name pos) = mconcat ["< ", name, " : ", pos, " >"]

Awesome. One last problem:

-- This won't typecheck
playerRegistry = [ OffensivePlayer "Tom Brady" "Quarterback"
                 , DefensivePlayer "Michael Strahan" "Defensive end"
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print -- forM_ from Control.Monad

Rather annoying. We could wrap the two player types in a sum type:

data Player = OP OffensivePlayer
            | DP DefensivePlayer

instance Show Player where
    show (OP p) = show p
    show (DP p) = show p

playerRegistry :: [Player]
playerRegistry = [ OP (OffensivePlayer "Tom Brady" "Quarterback")
                 , DP (DefensivePlayer "Michael Strahan" "Defensive end")
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print

but this is quite clunky. It also doesn’t scale well to cases where we have a lot more types!

## Enter existential quantification

The latest version of the Haskell language (Haskell 2010) is somewhat dated at this point. However, the Glasgow Haskell Compiler supports language extensions at the cost of portability. It turns out that the ExistentialQuantification language extension can help us with this problem. We turn on the extension at the top of our module:

{-# LANGUAGE ExistentialQuantification #-}

and create an existential datatype:

data ShowPlayer = forall a. Show a => ShowPlayer a

The datatype ShowPlayer is a real datatype that bundles any data a which can be shown. Note that everything else about the internal type is forgotten, since ShowPlayer wraps any type that can be shown (that’s what forall a. Show a means).
We can facilitate the construction of a ShowPlayer with the following helper function:

mkPlayer :: Show a => a -> ShowPlayer
mkPlayer a = ShowPlayer a

Now, since the data bundled in a ShowPlayer can be shown, the only operation supported by ShowPlayer is show:

instance Show ShowPlayer where
    show (ShowPlayer a) = show a

Finally, our heterogeneous list:

playerRegistry :: [ShowPlayer]
playerRegistry = [ -- ✓ OffensivePlayer has a Show instance ✓
                   ShowPlayer (OffensivePlayer "Tom Brady" "Quarterback")
                   -- ✓ DefensivePlayer has a Show instance ✓
                 , ShowPlayer (DefensivePlayer "Michael Strahan" "Defensive end")
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print

So we can have a heterogeneous list – as long as the only thing we can do with it is show it! The advantage here compared to the sum-type approach shows when we extend our code to many more types:

data Quarterback  = Quarterback String deriving Show
data Lineman      = Lineman String deriving Show
data Runningback  = Runningback String deriving Show
data WideReceiver = WideReceiver String deriving Show
data DefensiveEnd = DefensiveEnd String deriving Show
data Linebacker   = Linebacker String deriving Show
data Safety       = Safety String deriving Show
data Corner       = Corner String deriving Show

-- Example: some functions are specific to certain positions
passingAccuracy :: Quarterback -> IO Double
passingAccuracy = ...

playerRegistry :: [ShowPlayer]
playerRegistry = [ mkPlayer (Quarterback "Tom Brady")
                 , mkPlayer (DefensiveEnd "Michael Strahan")
                 , mkPlayer (Safety "Richard Sherman")
                 , ...
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print

This way, we can keep the benefits of the type system when we want it, but also give ourselves some flexibility when we need it. This is actually similar to object-oriented programming, where classes bundle data and operations on them into an object!

## A bit more functionality

Let’s pack in more operations on our heterogeneous list.
We might want to not only show players, but also access their salaries. We describe the functionality common to all players in a typeclass called BasePlayer:

class Show p => BasePlayer p where
    -- Operate in IO because of database access, for example
    getYearlySalary :: p -> IO Double

instance BasePlayer Quarterback where
    ...

instance BasePlayer Lineman where
    ...

We can update our player registry to support the same operations as BasePlayer through the Player existential type:

data Player = forall a. BasePlayer a => Player a

instance Show Player where
    show (Player a) = show a

instance BasePlayer Player where
    getYearlySalary (Player a) = getYearlySalary a

and our new heterogeneous list now supports:

playerRegistry :: [Player]
playerRegistry = [ Player (Quarterback "Tom Brady")
                 , Player (DefensiveEnd "Michael Strahan")
                 , Player (Safety "Richard Sherman")
                 , ...
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print -- unchanged

average_salary :: IO Double
average_salary = do
    salaries <- for playerRegistry getYearlySalary -- (for from Data.Traversable)
    return $ sum salaries / fromIntegral (length salaries)

So we can have a heterogeneous list – but we can only perform operations which are supported by the Player type. In this sense, the Player type encodes our intent.

## Conclusion

In this post, we’ve seen how to create heterogeneous lists in Haskell. However, contrary to dynamic languages, we can only do so provided we are explicit about our intent. That means we get the safety of strong, static types with some added flexibility if we so choose.

If you’re interested in type-level programming, including but not limited to the content of this present post, I strongly recommend Rebecca Skinner’s An Introduction to Type Level Programming

Thanks to Brandon Chinn for some explanation on how to simplify existential types.

]]>
In defence of the PhD prelim exam https://laurentrdc.xyz//posts/prelim.html 2021-06-12T00:00:00Z 2021-06-12 In the department of Physics at McGill University, there are a few requirements for graduation in the PhD program. One of these requirements is to pass the preliminary examination, or prelim for short, at the end of the first year1. This type of examination is becoming rarer across North America. The Physics department has been discussing the modernization of the prelim, either by changing its format or removing it entirely.

In this post, I want to explain what the prelim is and why I think its essence should be preserved.

### What is the prelim?

The prelim in its pre-COVID-19 form is a 6h sit-down exam, split in two 3h sessions. It aims to test students’ mastery of Physics concepts at the undergraduate level. At McGill, there are four themes of questions:

1. Classical mechanics and special relativity;
2. Thermodynamics and statistical mechanics;
3. Electromagnetism;
4. Quantum mechanics.

The first 3h session is composed of 16 short questions, 10 of which must be answered. Some of the short questions are conceptual, while others involve a small calculation. Here is an example of a short question from the year I passed the prelim:

Imagine a planet being a long infinite solid cylinder of radius $R$ with a mass per unit length $\Lambda$. The matter is uniformly distributed over its radius. Find the potential and gravitational field everywhere, i.e. inside and outside the cylinder, and sketch the field lines.

The second 3h session is composed of 8 long questions, split evenly among the four themes. Four questions must be answered (no more!), with at least one question from each theme. Here is an example of a long question from the year I passed the prelim:

A simple 1-dimensional model for an ionic crystal (such as NaCl) consists of an array of $N$ point charges in a straight line, alternately $+e$ and $−e$ and each at a distance $a$ from its nearest neighbours. If $N$ is very large, find the potential energy of a charge in the middle of the row and of one at the end of the row in the form $\alpha e^2/(4\pi \epsilon_0 a)$.
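As an aside, the constant in that last question can be checked numerically: the sum for the middle charge is twice the alternating harmonic series, which converges to $\ln 2$. This sketch is my own check, not part of the exam:

```python
from math import log

def alternating_harmonic(n_terms):
    # sum_{n=1}^{N} (-1)^(n+1) / n  ->  ln(2) as N grows
    return sum((-1) ** (n + 1) / n for n in range(1, n_terms + 1))

# A charge in the middle of the row has neighbours on both sides:
# |alpha| = 2 * (1 - 1/2 + 1/3 - ...) = 2 ln 2 ~ 1.386
alpha_middle = 2 * alternating_harmonic(1_000_000)

# A charge at the end of the row has neighbours on one side only:
# |alpha| = ln 2 ~ 0.693
alpha_end = alternating_harmonic(1_000_000)
```

(The energy itself is negative, since the nearest neighbours have opposite charge; the code computes the magnitude of the prefactor.)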

I passed the prelim exam in 2018. For the curious, here are all the questions from that year: short (PDF) and long (PDF). The department of Physics also keeps a record of the prelim questions going back to 1996. Senior undergraduates are well-equipped to answer prelim questions. The difficulty comes from the breadth of possible questions, as well as the time constraint.

### A test of competence

Of course, the prelim is only one of the requirements on the way to earn a doctoral degree. Most importantly, PhD students need to write a dissertation and defend its content in front of a committee of experts. So why have the prelim at all?

The prelim serves as a way to ensure that all PhD students have a certain level of competence in all historical areas of Physics. Evaluating students for admission to the Physics department is inherently hard because it is difficult to compare academic records from different institutions across the world.

Earning a PhD makes you an expert in a narrow subject. Passing the prelim indicates that students have a baseline knowledge across all historical Physics disciplines.

### Proposed alternative: the comprehensive examination

Not every department in the McGill Faculty of Science requires PhD students to pass a prelim exam. Another popular alternative, in use in the Chemistry department for example, is the so-called comprehensive examination2.

The structure of the comprehensive exam varies across departments, but generally it involves the student writing a multi-page project proposal and defending this proposal in front of a committee of faculty members. In the course of the comprehensive exam, committee members may ask the student any question related to their research topic.

A comprehensive exam has two attractive attributes. First, its scope is closer to students’ area of research. Second, a large part of the comprehensive (the project proposal) can be done offline, without the pressure of being timed.

### In defence of the prelim

The prelim is a stressful event. Not everyone is comfortable in a sit-down exam setting. A PhD career can end because someone slept poorly the night before the exam. I support any and all adjustments to the current prelim format to make the experience more accessible in this sense.

My main objection with replacing the prelim with something closer to the comprehensive exam is the functionalization of education. Removing the prelim eliminates the incentive to have a baseline knowledge across Physics. It encourages PhD students to have an even narrower set of skills, making the PhD program more focused around the resulting dissertation.

The comprehensive exam is inherently about making students’ experience more focused on their research area. This is appealing from the students’ point of view: why should they have to go out of their way to stay knowledgeable about classical mechanics, something which they might never use? The comprehensive exam (in the format that I have described above) streamlines the requirements for graduation.

The graduate student experience is about much more than the resulting dissertation. We want our students to be more than just experts in their narrow fields; we also want them to be ready to contribute to society beyond their immediate expertise. Does the prelim ensure that this is the case? Of course not. But removing the prelim sends the wrong message about what it means to graduate with a PhD.

On a personal note, the prelim made me review all of my undergraduate studies. I purchased the Feynman Lectures on Physics and read all three volumes. With a Masters’ degree under my belt, I was able to appreciate my learnings under a new light, even though I haven’t used most of it since then. While I cannot say that the exam was fun, the studying experience was definitely one of the highlights of my PhD.

]]>
Harnessing symmetry to find the center of a diffraction pattern https://laurentrdc.xyz//posts/autocenter.html 2021-01-23T00:00:00Z 2022-02-20 Ultrafast electron diffraction involves the analysis of diffraction patterns. Here is an example diffraction pattern for a thin (<100nm) flake of graphite1:

A diffraction pattern is effectively the intensity of the Fourier transform of the sample’s atomic structure. Given that crystals like graphite are well-ordered, the diffraction peaks (i.e. Fourier components) are very large. You can see that the diffraction pattern is six-fold symmetric; that’s because the atoms in graphite arrange themselves in a honeycomb pattern, which is also six-fold symmetric. In these experiments, the fundamental Fourier component is so strong that we need to block it. That’s what that black beam-block is about.

There are crystals that are not as well-ordered as graphite. Think of a powder made of many small crystallites, each being about 50nm x 50nm x 50nm. Diffracting electrons through a sample like that results in a kind of average of all possible diffraction patterns. Here’s an example with polycrystalline Chromium:

Each ring in the above pattern corresponds to a Fourier component. Notice again how symmetric the pattern is; the material itself is symmetric enough that the fundamental Fourier component needs to be blocked.

For my work on iris-ued, a data analysis package for ultrafast electron scattering, I needed to find a reliable, automatic way to locate the center of such diffraction patterns, eliminating the manual work currently required. So let’s see how!

## First try: center of mass

A first naive attempt might start with the center-of-mass, i.e. the average of pixel positions weighted by their intensity. Since intensity is symmetric about the center, the center-of-mass should coincide with the actual physical center of the image.

Good news, scipy’s ndimage module exports such a function: center_of_mass. Let’s try it: Demonstration of using scipy.ndimage.center_of_mass to find the center of diffraction patterns. (Source code)

Not bad! Especially in the first image, really not a bad first try. But I’m looking for something pixel-perfect. Intuitively, the beam-block in each image should mess with the calculation of the center of mass. Let’s define the following areas that we would like to ignore:

Masks are generally defined as boolean arrays with True (or 1) where pixels are valid, and False (or 0) where pixels are invalid. Therefore, we should ignore the weight of masked pixels. scipy.ndimage.center_of_mass does not support this feature; we need an extension of center_of_mass:

def center_of_mass_masked(im, mask):
    weights = im * mask.astype(im.dtype) # masked pixels get zero weight
    rr, cc = np.indices(im.shape)

    r = np.average(rr, weights=weights)
    c = np.average(cc, weights=weights)
    return r, c

This is effectively an average of the row and column coordinates (rr and cc) weighted by the image intensity. The trick here is that mask.astype(im.dtype) is 0 where pixels are “invalid”; therefore they don’t count in the average! Let’s look at the result:

I’m not sure if it’s looking better, honestly. But at least we have an approximate center! That’s a good starting point that feeds in to the next step.
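As a sanity check of the masked center of mass, here is a small synthetic example. The Gaussian “pattern” and the circular mask are made up for illustration; the real data and beam-block masks differ:

```python
import numpy as np

def center_of_mass_masked(im, mask):
    weights = im * mask.astype(im.dtype)  # masked pixels get zero weight
    rr, cc = np.indices(im.shape)
    return np.average(rr, weights=weights), np.average(cc, weights=weights)

# Symmetric intensity centered at (64, 64) on a 129x129 grid
rr, cc = np.indices((129, 129))
im = np.exp(-((rr - 64) ** 2 + (cc - 64) ** 2) / (2 * 20.0 ** 2))

# Mask out a central disk: a fake beam-block that preserves the symmetry
mask = (rr - 64) ** 2 + (cc - 64) ** 2 >= 15 ** 2

r, c = center_of_mass_masked(im, mask)  # both close to 64
```

Because this synthetic mask is itself symmetric about the center, the masked center of mass recovers the true center; a one-sided beam block like those in the real patterns biases the estimate, which is why this only yields an approximate center.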

## Friedel pairs and radial inversion symmetry

In his thesis2, which is now also a book, Nelson Liu describes how he does it:

A rough estimate of its position is obtained by calculating the ‘centre of intensity’ or intensity-weighted arithmetic mean of the position of > 100 random points uniformly distributed over the masked image; this is used to match diffraction spots into Friedel pairs amongst those found earlier. By averaging the midpoint of the lines connecting these pairs of points, a more accurate position of the centre is obtained.

Friedel pairs are peaks related by inversion through the center of the diffraction pattern. The existence of these pairs is guaranteed by crystal symmetry. For polycrystalline patterns, Friedel pairs are averaged into rings; rings are always inversion-symmetric about their centers. Here’s an example of two Friedel pairs: Example of two Friedel pairs: white circles form pair 1, while red circles form pair 2. (Source code)

The algorithm by Liu was meant for single-crystal diffraction patterns with well-defined peaks, and not so much for rings. However, we can distill Liu’s idea into a new, more general approach. If the approximate center coincides with the actual center of the image, then the image should be invariant under radial-inversion with respect to the approximate center. Said another way: if the image $I$ is defined on polar coordinates $(r, \theta)$, then the center maximizes correlation between $I(r, \theta)$ and $I(-r, \theta)$. Thankfully, computing the masked correlation between images is something I’ve worked on before!
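We can verify that claim numerically. Below is a hypothetical helper, `inversion_score`, which correlates the image with its radial inverse about a candidate center; the ring pattern and all numbers are made up for illustration:

```python
import numpy as np

def inversion_score(im, center):
    # Pearson correlation between the image and its radial inverse,
    # evaluated on the largest window centered at `center`
    r, c = center
    h = min(r, im.shape[0] - 1 - r)
    w = min(c, im.shape[1] - 1 - c)
    window = im[r - h : r + h + 1, c - w : c + w + 1]
    return np.corrcoef(window.ravel(), window[::-1, ::-1].ravel())[0, 1]

# Ring pattern, radially symmetric about (70, 58)
rows, cols = np.indices((129, 129))
dist = np.hypot(rows - 70, cols - 58)
im = np.cos(dist / 4) ** 2 * np.exp(-dist / 8)

# The score is maximal (~1) at the true center, and drops off elsewhere
```

Scoring candidate centers this way is slow; the cross-correlation machinery below computes the same kind of comparison for all translations at once.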

Let’s look at what radial inversion looks like. There are ways to do it with interpolation, e.g. scikit-image’s warp function. However, in my testing, this is incredibly slow compared to what I will show you. A faster approach is to consider that if the image was centered on the array, then radial inversion is really flipping the direction of the array axes; that is, if the image array I has size (128, 128), and the center is at (64, 64), the radial inverse of I is I[::-1, ::-1] (numpy) / flip(flip(I, 1), 2) (MATLAB) / I[end:-1:1,end:-1:1] (Julia). Another important note is that if the approximate center of the image is far from the center of the array, the overlap between the image and its radial inverse is limited. Consider this:

If we cropped out the bright areas around the frame, then the approximate center found would coincide with the center of the array; then, radial inversion is very fast. Demonstration of what parts of the image to crop so that the image center coincides with the center of the array. (Source code)
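Here is a tiny illustration of that cropping trick (the synthetic image and its center at (70, 58) are invented): crop to the largest window centered on the approximate center, and radial inversion reduces to flipping both array axes:

```python
import numpy as np

rows, cols = np.indices((256, 256))
im = np.exp(-((rows - 70) ** 2 + (cols - 58) ** 2) / 500)  # true center (70, 58)

# Crop to the largest window centered on the (approximate) center
r, c = 70, 58
h = min(r, im.shape[0] - 1 - r)
w = min(c, im.shape[1] - 1 - c)
window = im[r - h : r + h + 1, c - w : c + w + 1]

# The center now coincides with the array center, so the radial
# inverse is simply `window[::-1, ::-1]`: no interpolation needed
```

Since the crop puts the symmetric part of the image at the array center, the flipped window matches the original window pixel-for-pixel.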

Now, especially for the right column of images, it’s pretty clear that the approximate center wasn’t perfect. The correction to the approximate center can be calculated with the masked normalized cross-correlation3 4: Top left: diffraction pattern. Top right: radially-inverted diffraction pattern about an approximate center. Bottom left: masked normalized cross-correlation between the two diffraction patterns. Bottom right: 2x zoom on the cross-correlation shows the translation mismatch between the diffraction patterns. (Source code)

The cross-correlation in the bottom right corner (zoomed by 2x) shows that the true center is the approximate center we found earlier, corrected by the small shift (white arrow)! For single-crystal diffraction patterns, the result is even more striking: Top left: diffraction pattern. Top right: radially-inverted diffraction pattern about an approximate center. Bottom left: masked normalized cross-correlation between the two diffraction patterns. Bottom right: 2x zoom on the cross-correlation shows the translation mismatch between the diffraction patterns. (Source code)

We can put the two steps together and determine a pixel-perfect center:
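Here is a minimal, integer-pixel sketch of the combined procedure. For simplicity, the refinement step is a local search over the radial-inversion symmetry score, rather than the masked cross-correlation that gives sub-pixel accuracy in practice; the test image, mask, and all numbers are invented:

```python
import numpy as np

def find_center(im, mask):
    rr, cc = np.indices(im.shape)

    # Step 1: approximate center from the masked center of mass
    weights = im * mask.astype(im.dtype)
    r0 = int(round(np.average(rr, weights=weights)))
    c0 = int(round(np.average(cc, weights=weights)))

    # Step 2: refine by maximizing radial-inversion symmetry
    # in a small neighborhood of the approximate center
    def score(r, c):
        h = min(r, im.shape[0] - 1 - r)
        w = min(c, im.shape[1] - 1 - c)
        window = im[r - h : r + h + 1, c - w : c + w + 1]
        return np.corrcoef(window.ravel(), window[::-1, ::-1].ravel())[0, 1]

    candidates = [(r0 + dr, c0 + dc) for dr in range(-5, 6) for dc in range(-5, 6)]
    return max(candidates, key=lambda p: score(*p))

# Symmetric test image centered at (70, 58), with a fake beam-block mask
rows, cols = np.indices((129, 129))
im = np.exp(-((rows - 70) ** 2 + (cols - 58) ** 2) / 500)
mask = np.ones(im.shape, dtype=bool)
mask[60:70, 45:55] = False

center = find_center(im, mask)
```

Even though the masked center of mass is slightly biased by the masked region, the symmetry-based refinement snaps back to the true center.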

## Bonus: low-quality diffraction

Here’s a fun consequence: the technique also works for diffraction patterns that are pretty crappy and very far off center, provided that the asymmetry in the background is taken care of:

## Conclusion

In this post, we have determined a robust way to compute the center of a diffraction pattern, by making use of a strong invariant: radial inversion symmetry. My favourite part: this method admits no free parameters!

If you want to make use of this, take a look at autocenter, a new function that has been added to scikit-ued.

Matplotlib for graphic design https://laurentrdc.xyz//posts/banner.html 2020-11-03T00:00:00Z 2020-11-05 In this post, I will show you how I generated the banner for this website using Matplotlib. In case it disappears in the future, here is an image of it:

Matplotlib is a plotting library for Python, historically inspired by the plotting capabilities of MATLAB. You can take a look at the various examples on their website. One thing that is not immediately obvious is that you can also use Matplotlib to draw shapes! In this sense, Matplotlib becomes a graphic design library.

(You can see the exact source code for the images below by clicking on the link in the caption)

### Basic shapes

Let’s start at the beginning: drawing a single hexagon.

import matplotlib.patches as mpatches

def draw_hexagon(ax, center, radius, color='w'):
    ax.add_patch(
        mpatches.RegularPolygon(
            xy=center,
            numVertices=6,
            radius=radius,
            facecolor=color,
            edgecolor="k",
            orientation=0,
            fill=True,
        )
    )

Using this function, we can draw a tiling of hexagons. Let’s first set up our plot:

import math
import numpy as np
import matplotlib.pyplot as plt

# Note that Matplotlib figure size is (width, height) in INCHES...
# We want it to be 100mm x 100mm
mm_to_in = 0.03937008
figure, ax = plt.subplots(1, 1, figsize=(100 * mm_to_in, 100 * mm_to_in))

# Hide as much of the axis borders/margins as possible
ax.axis("off")
ax.set_xlim([0, 100])
ax.set_ylim([0, 100])

# Dimensions of the bounding box of the hexagons
radius = 2  # hexagon radius in mm; the exact value here is a guess
width  = math.sqrt(3) * radius
height = 2 * radius

### Tiling

We note that a tiling of regular hexagons requires a different offset for every row. If you imagine rows being numbered starting at 0, hexagons in rows with odd indices need to be offset by $\frac{\sqrt{3}}{2} r$, where $r$ is the radius (or distance from the center to vertex). To find the centers of the hexagons, the following loop does the trick:

import itertools

centers = list()

for offset_x, offset_y in [(0, 0), (width / 2, (3 / 2) * radius)]:

    rows    = np.arange(start=offset_x, stop=105, step=width)
    columns = np.arange(start=offset_y, stop=105, step=3 * radius)

    for x, y in itertools.product(rows, columns):
        centers.append((x, y))

Once we know the centers of the hexagons, we can place them one-by-one:

for (x, y) in centers:
    draw_hexagon(ax, center=(x, y), radius=radius)

Here’s what it looks like so far:

### Color

The figure above has the wrong dimensions, but you get the idea. Let’s color the hexagons appropriately. In the banner, the color of the hexagons is based on the “inferno” colormap. The color radiates away from the bottom left corner:

def draw_hexagon(ax, center, radius, color='w'):
    ax.add_patch(
        mpatches.RegularPolygon(
            xy=center,
            numVertices=6,
            radius=radius,
            facecolor=color,
            edgecolor="none",  # note: edgecolor=None is actually the default value!
            orientation=0,
            fill=True,
        )
    )

colormap = plt.get_cmap('inferno')
for (x, y) in centers:
    # Distance from the bottom-left corner, normalized by the
    # distance to the top-right corner, i.e. 0 < r < 1
    r = math.hypot(x, y) / math.hypot(100, 100)
    draw_hexagon(ax, center=(x, y), radius=radius, color=colormap(r))

Here’s the result:

Because of rounding errors of the hexagon dimensions, there is some visible spacing between the hexagons. To get rid of it, we draw the hexagons a bit larger (0.2 millimeters):

def draw_hexagon(ax, center, radius, color='w'):
    ax.add_patch(
        mpatches.RegularPolygon(
            xy=center,
            numVertices=6,
            radius=radius + 0.2,  # 0.2 mm larger, to hide the gaps
            facecolor=color,
            edgecolor="none",
            orientation=0,
            fill=True,
        )
    )

### A bit of randomness

For a light touch of whimsy, I like to make the color fluctuate a little:

import random

colormap = plt.get_cmap('inferno')
for (x, y) in centers:
    # Distance from the bottom-left corner, normalized by the
    # distance to the top-right corner, i.e. 0 < r < 1
    r = math.hypot(x, y) / math.hypot(100, 100)
    r += random.gauss(0, 0.01)
    draw_hexagon(ax, center=(x, y), radius=radius, color=colormap(r))

We arrive at the final result:

You can imagine adapting this approach to different tilings, and different color schemes. Here’s a final example using squares and the “cool” colormap:
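For reference, here is a hypothetical sketch of such a square-tiling variant; the side length, jitter, and margins below are my own choices, not the original source:

```python
import math
import random

import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

mm_to_in = 0.03937008
figure, ax = plt.subplots(1, 1, figsize=(100 * mm_to_in, 100 * mm_to_in))
ax.axis("off")
ax.set_xlim([0, 100])
ax.set_ylim([0, 100])

side = 4  # square side length in mm (arbitrary choice)
colormap = plt.get_cmap("cool")
for x in range(0, 104, side):
    for y in range(0, 104, side):
        # Same color scheme as the hexagons: distance from the
        # bottom-left corner, plus a little gaussian jitter
        r = math.hypot(x, y) / math.hypot(100, 100) + random.gauss(0, 0.01)
        ax.add_patch(
            mpatches.Rectangle(
                xy=(x, y),
                width=side + 0.2,   # slightly oversized to hide gaps
                height=side + 0.2,
                facecolor=colormap(r),
                edgecolor="none",
            )
        )
```

Squares tile without row offsets, so the bookkeeping is simpler than for hexagons; everything else (the color ramp, the jitter, the oversizing trick) carries over unchanged.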

The masked normalized cross-correlation and its application to image registration https://laurentrdc.xyz//posts/mnxc.html 2019-04-30T00:00:00Z 2022-02-20 Image registration consists of determining the most likely transformation between two images — most importantly translation, which is what I am most concerned with.

How can we detect the translation between two otherwise similar images? This is an application of cross-correlation. The cross-correlation of two images is the degree of similitude between images for every possible translation between them. Mathematically, given grayscale images as discrete functions $I_1(i,j)$ and $I_2(i,j)$, their cross-correlation $I_1 \star I_2$ is defined as: $(I_1 \star I_2)(u, v) \equiv \sum_{i,j} I_1(i, j) \cdot I_2(i - u, j - v)$
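The definition above translates directly into (slow but clear) code. The blob image and shift values below are invented for illustration:

```python
import numpy as np

def cross_correlation(im1, im2):
    # Brute-force (I1 * I2)(u, v) = sum_{i,j} I1(i, j) . I2(i - u, j - v),
    # with periodic boundary conditions for simplicity
    n, m = im1.shape
    out = np.empty((n, m))
    for u in range(n):
        for v in range(m):
            out[u, v] = np.sum(im1 * np.roll(im2, shift=(u, v), axis=(0, 1)))
    return out

# A blob at (10, 8), and a copy of it shifted by (5, 12)
rows, cols = np.indices((32, 32))
im1 = np.exp(-((rows - 10) ** 2 + (cols - 8) ** 2) / 20)
im2 = np.roll(im1, shift=(5, 12), axis=(0, 1))

xc = cross_correlation(im1, im2)
u, v = np.unravel_index(np.argmax(xc), xc.shape)
# Peaks past the midpoint wrap around to negative shifts
shift = (u - 32 if u > 16 else u, v - 32 if v > 16 else v)
# `shift` is the translation that moves the second image back onto the first
```

In practice this is computed with FFTs rather than the quadratic-time loop above, but the peak-finding logic is the same.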

For example, if $I_1 = I_2$, then $I_1 \star I_2$ has its maximum at $(u,v) = (0,0)$. What happens if $I_1$ and $I_2$ are shifted from each other? Let’s see: The cross-correlation between shifted images exhibits a global maximum at the location corresponding to the relative translation. (Source code)

In the above example, the cross-correlation is maximal at (50, 0), which is exactly the translation required to shift back the second image to match the first one. Finding the translation between images is then a simple matter of determining the global maximum of the cross-correlation. This operation is so useful that it is implemented in the Python library scikit-image as skimage.registration.phase_cross_correlation.

It turns out that in my field of research, image registration can be crucial to correct experimental data. My primary research tool is ultrafast electron diffraction. Without knowing the details, you can think of this technique as a kind of microscope. A single image from one of our experiments looks like this:

Most of the electron beam is unperturbed by the sample; this is why we use a metal beam-block (seen as a black rod in the image above) to prevent the electrons from damaging our apparatus.

Our experiments are synthesized from hundreds of gigabytes of images like the one above, and it may take up to 72h (!) to take all the images we need. Over the course of this time, the electron beam may shift in a way that moves the image, but not the beam-block1. Here’s what I mean: Here is the difference between two equivalent images, acquired a few hours apart. The shift between them is evident in the third panel. (Source code)

This does not fly. We need to be able to compare images together, and shifts by more than 1px are problematic. We need to correct for this shift, for every image, with respect to the first one. However, we are also in a bind, because unlike the example above, the images are not completely shifted; one part of them, the beam-block, is static, while the image behind it shifts.

The crux of the problem is this: the cross-correlation between images gives us the shift between them. However, it is not immediately obvious how to tell the cross-correlation operation to ignore certain parts of the image. Is there some kind of operation, similar to the cross-correlation, that allows us to mask parts of the images we want to ignore?

Thanks to the work of Dr. Dirk Padfield2 3, we now know that such an operation exists: the masked normalized cross-correlation. In his 2012 article, he explains the procedure and performance of this method to register images with masks. One such example is the registration of ultrasound images; unfortunately, showing you the figure from the article would cost me US$450, so you’ll have to go look at it yourself.

In order to fix our registration problem, then, I implemented the masked normalized cross-correlation operation — and its associated registration function — in our ultrafast electron diffraction toolkit, scikit-ued4. Here’s an example of it in action: Using the masked-normalized cross-correlation to align two diffraction patterns of polycrystalline chromium. The mask shown tells the algorithm to ignore the beam-block of both images. (Source code)
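scikit-image’s port of this function can be exercised the same way. Below is a hedged, self-contained sketch: the random texture, the shift, and the fake beam-block are all invented, while `phase_cross_correlation` and its `reference_mask`/`moving_mask` parameters are the real scikit-image API. Note that the recovered shift matches the applied translation up to the library’s sign convention, which I do not assert here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(42)
scene = gaussian_filter(rng.random((128, 128)), sigma=4)

# The whole scene shifts by (6, 10) between acquisitions...
ref = scene.copy()
moving = np.roll(scene, shift=(6, 10), axis=(0, 1))

# ...but a "beam-block" stays put in both images
block = np.zeros((128, 128), dtype=bool)
block[55:75, :64] = True
ref[block] = 0
moving[block] = 0

mask = ~block  # True where pixels are valid
result = phase_cross_correlation(
    ref, moving, reference_mask=mask, moving_mask=mask
)
# Older scikit-image versions return only the shift when masks are given
shift = result[0] if isinstance(result, tuple) else result
```

With the masks in place, the static beam-block no longer biases the detected shift toward zero, which is exactly the failure mode of the plain cross-correlation on our data.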

## Contributing to scikit-image

However, since this tool could see use in a more general setting, I decided to contribute it to scikit-image:

1. My contribution started by bringing up the subject via a GitHub issue (issue #3330).
2. I forked scikit-image and integrated the code and tests from scikit-ued to scikit-image. The changes are visible in the pull request #3334.
3. Finally, some documentation improvements and an additional gallery example were added in pull request #3528.

In the end, a new function has been added, skimage.registration.phase_cross_correlation (previously skimage.feature.masked_register_translation).
