Laurent's personal blog (https://laurentrdc.xyz/atom.xml)
Laurent P. René de Cotret (laurent.decotret@outlook.com)

# Filtering noise with discrete wavelet transforms

https://laurentrdc.xyz//posts/wavelet-filtering.html (2022-11-23)

All experimental data contains noise. Distinguishing between measurement and noise is an important component of any data analysis pipeline. However, different noise-filtering techniques are suited to different categories of noise.

In this post, I’ll show you a class of filtering techniques, based on discrete wavelet transforms, which is suited to noise that cannot be filtered away with more traditional techniques – such as those that rely on the Fourier transform. This has been important in my past research1 2, and I hope that it can help you too.

## Integral transforms

A large category of filtering techniques are based on integral transforms. Broadly speaking, an integral transform $$T$$ is an operation that is performed on a function $$f$$ and builds a function $$T\left[ f\right]$$ which is defined on a variable $$s$$, such that:

$T\left[ f \right](s) = \int dt ~ f(t) \cdot K(t, s)$

Here, $$K$$ (for kernel) is a function which “selects” which parts of $$f(t)$$ are important at a fixed $$s$$. Note that for an integral transform to be useful as a filter, we need the ability to invert the transformation, i.e. there must exist an inverse kernel $$K^{-1}(s, t)$$ such that:

$f(t) = \int ds ~ \left( T \left[ f\right] (s) \right) \cdot K^{-1}(s,t)$

All of this was very abstract, so let’s look at a concrete example: the Fourier transform. The Fourier transform is an integral transform where3:

\begin{align} K(t, \omega) &\equiv \frac{e^{-i \omega t}}{\sqrt{2 \pi}}\\ K^{-1}(\omega, t) &\equiv \frac{e^{i \omega t}}{\sqrt{2 \pi}}\\ \omega & \in \mathbb{R} \end{align}

There are many other integral transforms, such as:

• The Laplace transform ($$K(t, s) \equiv e^{- s t}$$) which is useful to solve linear ordinary differential equations;
• The Legendre transform ($$K_n(t, s) \equiv P_n(t)$$, where $$P_n$$ is the nth Legendre polynomial) which is used to solve for electron motion in hydrogen atoms;
• The Radon transform (for which I cannot write down a kernel) which is used to analyze computed tomography data.

So why are integral transforms interesting? Well, depending on the function $$f(t)$$ you want to transform, you might end up with a representation of $$f$$ in the transformed space, $$T \left[ f\right] (s)$$, which has nice properties! Re-using the Fourier transform for a simple example, consider a function made up of two well-defined frequencies:

$f(t) \equiv e^{-i ~ 2t} + e^{-i ~ 5t}$

The representation of $$f(t)$$ in frequency space – the Fourier transform of $$f$$, $$F\left[ f\right](\omega)$$ – is very simple:

$F\left[ f\right](\omega) = \sqrt{2 \pi} \left[ \delta(\omega + 2) + \delta(\omega + 5) \right]$

The Fourier transform of $$f$$ is perfectly localized in frequency space, being zero everywhere except at $$\omega=-2$$ and $$\omega=-5$$ (the negative signs follow from the $$e^{-i \omega t}$$ kernel convention). Functions composed of infinite waves (like the example above) always have the nice property of being localized in frequency space, which makes it easy to manipulate them… like filtering some of their components away!

### Discretization

It is much more efficient to use discretized versions of integral transforms on computers. Loosely speaking, given a discrete signal composed of $$N$$ terms $$x_0$$, …, $$x_{N-1}$$:

$T\left[ f \right](k) = \sum_n x_n \cdot K(n, k)$

i.e. the integral is now a finite sum. For example, the discrete Fourier transform of the signal $$x_n$$, $$X_k$$, can be written as:

$X_k = \sum_n x_n \cdot e^{-i 2 \pi k n / N}$

and its inverse becomes:

$x_n = \frac{1}{N}\sum_k X_k \cdot e^{i 2 \pi k n / N}$

This is the definition used by numpy, for example. Let’s use this definition to compute the discrete Fourier transform of $$f(t) \equiv e^{-i ~ 2t} + e^{-i ~ 5t}$$:

[Figure: Top: Signal which is composed of two natural frequencies. Bottom: Discrete Fourier transform of the top signal, showing two natural frequencies. (Source code)]
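The computation behind that figure is short. Here is a sketch using numpy (the sampling grid of 1024 points over $8\pi$ is an arbitrary choice made so that both frequencies land exactly on the discrete frequency grid); note that with the $e^{-i\omega t}$ kernel convention, the components of $f$ show up at negative frequencies:

```python
import numpy as np

# Sample f(t) = exp(-2it) + exp(-5it) over a whole number of periods,
# so that omega = -2 and omega = -5 land exactly on the frequency grid
t = np.linspace(0, 8 * np.pi, 1024, endpoint=False)
f = np.exp(-1j * 2 * t) + np.exp(-1j * 5 * t)

# Discrete Fourier transform and the matching angular-frequency grid
F = np.fft.fft(f)
omega = 2 * np.pi * np.fft.fftfreq(len(t), d=t[1] - t[0])

# All the spectral weight sits in exactly two bins
peaks = sorted(np.round(omega[np.abs(F) > 1e-6], 6).tolist())
print(peaks)  # [-5.0, -2.0]
```

Because the signal is sampled over a whole number of periods, there is no spectral leakage: every bin other than the two peaks is zero up to floating-point noise.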

## Using the discrete Fourier transform to filter noise

Let’s add some noise to our signal and see how we can use the discrete Fourier transform to filter it away. The discrete Fourier transform is most effective if your noise has some nice properties in frequency space. For example, consider high-frequency noise:

$N(t) = \sum_{\omega=20}^{50} \sin(\omega t + \phi_{\omega})$

where $$\phi_\omega$$ are random phases, one for each frequency component of the noise. While the signal looks very noisy, it’s very obvious in frequency space what is noise and what is signal:

[Figure: Top: Noisy signal (red) with the pure signal shown in comparison. Bottom: Discrete Fourier transform of the noisy signal shows that noise is confined to a specific region of frequency space. (Source code)]
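The noise $N(t)$ above takes only a few lines to generate. A sketch with numpy (the sampling grid and the random seed are arbitrary choices), confirming that the noise spectrum is confined to the $20 \leq |\omega| \leq 50$ band:

```python
import numpy as np

rng = np.random.default_rng(1234)  # seed chosen arbitrarily

t = np.linspace(0, 8 * np.pi, 1024, endpoint=False)

# N(t): a sum of sines at omega = 20, 21, ..., 50, each with a random phase
omegas = np.arange(20, 51)
phases = rng.uniform(0, 2 * np.pi, size=omegas.size)
noise = np.sum(np.sin(omegas[:, None] * t + phases[:, None]), axis=0)

# The noise occupies a well-separated band in frequency space
F = np.fft.fft(noise)
omega = 2 * np.pi * np.fft.fftfreq(len(t), d=t[1] - t[0])
in_band = (np.abs(omega) >= 19.5) & (np.abs(omega) <= 50.5)
leakage = np.abs(F[~in_band]).max() / np.abs(F[in_band]).max()
print(leakage < 1e-9)  # True: all spectral weight lies in the noise band
```

Real sines contribute spectral weight at both $+\omega$ and $-\omega$, which is why the band check uses $|\omega|$.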

The basics of filtering are as follows: set the transform of a signal to 0 in regions which are thought to be undesirable. In the case of the Fourier transform, this is known as a band-pass filter; frequency components within a particular frequency band are passed through unchanged, and frequency components outside of this band are zeroed. Special names are given to band-pass filters with no lower bound (low-pass filter) and no upper bound (high-pass filter). We can express this filtering as a window function $$W_k$$ in the inverse discrete Fourier transform:

$x_{n}^{\text{filtered}} = \frac{1}{N}\sum_k W_k \cdot X_k \cdot e^{i 2 \pi k n / N}$

In the case of the plot above, we want to apply a low-pass filter with a cutoff at $$\omega=10$$. That is:

$W_k = \left\{ \begin{array}{cl} 1 & : \ |k| \leq 10 \\ 0 & : \ |k| > 10 \end{array} \right.$

Visually:

[Figure: Top: Noisy signal with the pure signal shown in comparison. Middle: Discrete Fourier transform of the noisy signal. The band of our band-pass filter is shown, with a cutoff of $$\omega=10$$. All Fourier components in the zeroed region are set to 0 before performing the inverse discrete Fourier transform. Bottom: Comparison between the filtered signal and the pure signal. The only (small) deviations can be observed at the edges. (Source code)]

The lesson here is that filtering signals using a discretized integral transform (like the discrete Fourier transform) consists of:

1. Performing a forward transform;
2. Modifying the transformed signal using a window function, usually by zeroing components;
3. Performing the inverse transform on the modified signal.
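These three steps translate directly into a few lines of numpy. This is a sketch rather than the post's original source code; the test signal and the $\omega = 10$ cutoff mirror the example above, and the high-frequency noise here is a single sine for brevity:

```python
import numpy as np

def lowpass(x, dt, cutoff):
    """Keep Fourier components with |omega| <= cutoff; zero the rest."""
    X = np.fft.fft(x)                                # 1. forward transform
    omega = 2 * np.pi * np.fft.fftfreq(len(x), d=dt)
    X[np.abs(omega) > cutoff] = 0.0                  # 2. apply the window W_k
    return np.fft.ifft(X).real                       # 3. inverse transform

t = np.linspace(0, 8 * np.pi, 1024, endpoint=False)
pure = np.cos(2 * t) + np.cos(5 * t)
noisy = pure + 0.5 * np.sin(30 * t)                  # toy high-frequency noise

filtered = lowpass(noisy, dt=t[1] - t[0], cutoff=10)
print(np.allclose(filtered, pure))  # True: the noise is removed exactly here
```

Taking `.real` at the end discards the vanishingly small imaginary residue left by floating-point arithmetic; since the input is real and the window is symmetric in $\omega$, the exact result is real.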

## Discrete wavelet transforms

Discrete wavelet transforms are a class of discrete transforms which decompose signals into sums of wavelets. While the complex exponential functions which make up the Fourier transform are localized in frequency but infinite in time, wavelets are localized in both time and frequency.

In order to generate the basis wavelets, the original wavelet is stretched. This is akin to the Fourier transform, where the sine/cosine basis functions are ‘stretched’ by decreasing their frequency. In technical terms, the amount of ‘stretch’ is called the level. For example, the discrete wavelet transform using the db44 wavelet up to level 5 is the decomposition of a signal into the following wavelets:

[Figure: Five of the db4 basis wavelets shown. As the level increases, the wavelet is stretched such that it can represent lower-frequency components of a signal. (Source code)]

In practice, discrete wavelet transforms are expressed as two transforms per level. This means that a discrete wavelet transform of level 1 gives back two sets of coefficients. One set contains the low-frequency components of the signal; its elements are usually called the approximate coefficients. The other set contains the high-frequency components of the signal; its elements are usually called the detail coefficients. A wavelet transform of level 2 is done by taking the approximate coefficients of level 1, and transforming them using a stretched wavelet into two sets of coefficients: the approximate coefficients of level 2, and the detail coefficients of level 2. Therefore, a signal transformed using a wavelet transform of level $$N$$ has $$N+1$$ sets of coefficients: the approximate and detail coefficients of level $$N$$, and the detail coefficients of levels $$N-1$$, $$N-2$$, …, $$1$$.
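This coefficient structure is easy to inspect with PyWavelets (the `pywt` package, whose wavelet naming scheme this post already uses). A sketch on a random signal; the signal length and wavelet choice are arbitrary:

```python
import numpy as np
import pywt  # PyWavelets

x = np.random.default_rng(0).normal(size=1024)

# A level-3 transform yields N + 1 = 4 coefficient arrays:
# [cA3, cD3, cD2, cD1] -- the approximate coefficients of level 3,
# then the detail coefficients of levels 3, 2 and 1
coeffs = pywt.wavedec(x, wavelet="db4", level=3)
print(len(coeffs))  # 4

# The transform is invertible: reconstruction recovers the signal
x_rec = pywt.waverec(coeffs, wavelet="db4")
print(np.allclose(x, x_rec))  # True
```

The perfect round-trip is the discrete analogue of the inverse-kernel requirement from the first section.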

## Filtering using the discrete wavelet transform

The discrete Fourier transform excels at filtering away noise which has nice properties in frequency space. This isn’t always the case in practice; for example, noise may have frequency components which overlap with the signal we’re looking for. This was the case in my research on ultrafast electron diffraction of polycrystalline samples5 6, where the ‘noise’ was a trendline which moved over time, and whose frequency components overlapped with the diffraction pattern we were trying to isolate.

As an example, let’s use real diffraction data and we’ll pretend this is a time signal, to keep the units familiar. We’ll take a look at some really annoying noise: normally-distributed white noise drawn from this distribution7:

$P(x) = \frac{1}{\sqrt{2 \pi}} \exp\left[ -\frac{(x + 1/2)^2}{2} \right]$

Visually:

[Figure: Top: Example signal with added synthetic noise. Bottom: Frequency spectrum of both the pure signal and the noise, showing overlap. This figure shows that filtering techniques based on the Fourier transform would not help in filtering the noise in this signal. (Source code)]

This example shows a common situation: realistic noise whose frequency components overlap with the signal we’re trying to isolate. We wouldn’t be able to use filtering techniques based on the Fourier transform.

Now let’s look at a particular discrete wavelet transform, with the underlying wavelet sym17. Decomposing the noisy signal up to level 3, we get four components:

[Figure: All coefficients from a discrete wavelet transform up to level 3 with wavelet sym17. (Source code)]

Looks like the approximate coefficients at level 3 contain all the information we’re looking for. Let’s set all detail coefficients to 0, and invert the transform:
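The real diffraction data from the figures isn't reproduced here, so as a hedged sketch, here is the same procedure applied to a synthetic stand-in (two smooth peaks plus the biased white noise from above; the peak shapes, noise scale, and seed are all arbitrary choices):

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(42)  # seed chosen arbitrarily

# Synthetic stand-in for the diffraction signal: two smooth peaks
t = np.linspace(0, 10, 2048)
pure = np.exp(-((t - 3) ** 2)) + 0.5 * np.exp(-2 * (t - 7) ** 2)
noisy = pure + rng.normal(loc=-0.5, scale=0.05, size=t.size)

# Decompose up to level 3 with sym17, zero all detail coefficients, invert
coeffs = pywt.wavedec(noisy, wavelet="sym17", level=3)
coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
filtered = pywt.waverec(coeffs, wavelet="sym17")

# The reconstruction tracks the pure signal, offset by the -1/2 noise bias
residual = filtered - (pure - 0.5)
print(round(float(np.abs(residual).max()), 3))
```

Note that the $-1/2$ bias of the noise survives the filtering: a constant offset is the ultimate low-frequency component, so it lives in the approximate coefficients along with the signal.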

That’s looking pretty good! Not perfect of course, which I expected because we’re using real data here.

## Conclusion

In this post, I’ve tried to give some of the intuition behind filtering signals using discrete wavelet transforms as an analogy to filtering with the discrete Fourier transform.

This was only a basic explanation. There is so much more to wavelet transforms. There are many classes of wavelets with different properties, some of which8 are very useful when dealing with higher-dimensional data (e.g. images and videos). If you’re dealing with noisy data, it won’t hurt to try and see if wavelets will help you understand it!

1. L. P. René de Cotret and B. J. Siwick, A general method for baseline-removal in ultrafast electron powder diffraction data using the dual-tree complex wavelet transform, Struct. Dyn. 4 (2017) DOI:10.1063/1.4972518↩︎

2. M. R. Otto, L. P. René de Cotret, et al, How optical excitation controls the structure and properties of vanadium dioxide, PNAS (2018) DOI: 10.1073/pnas.1808414115.↩︎

3. Note that it is traditional in physics to represent the transform variable as $$\omega$$ instead of $$s$$. If $$t$$ is time (in seconds), then $$\omega$$ is angular frequency (in radians per seconds). If $$t$$ is distance (in meters), $$\omega$$ is spatial angular frequency (in radians per meter).↩︎

4. I will be using the wavelet naming scheme from PyWavelets.↩︎

5. L. P. René de Cotret and B. J. Siwick, A general method for baseline-removal in ultrafast electron powder diffraction data using the dual-tree complex wavelet transform, Struct. Dyn. 4 (2017) DOI:10.1063/1.4972518↩︎

6. M. R. Otto, L. P. René de Cotret, et al, How optical excitation controls the structure and properties of vanadium dioxide, PNAS (2018) DOI: 10.1073/pnas.1808414115.↩︎

7. Note that this distribution contains a bias of -$$1/2$$, which is useful in order to introduce low-frequencies in the noise which overlap with the spectrum of the signal.↩︎

8. N. G. Kingsbury, The dual-tree complex wavelet transform: a new technique for shift invariance and directional filters, IEEE Digital Signal Processing Workshop, DSP 98 (1998)↩︎

# Chesterton's fence and why I'm not sold on the blockchain

https://laurentrdc.xyz//posts/chesterton.html (2022-08-02)

The key technological advances which brought Bitcoin to life are the blockchain and its associated proof-of-work consensus algorithm. The Bitcoin whitepaper1 is very clear on its purpose:

A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending.

The double-spending problem to which Nakamoto refers is a unique challenge of digital cash implementations. Contrary to physical cash, which is difficult to copy, digital cash is but bytes; it can be trivially copied. Before Bitcoin, the most popular way to prevent double-spending has been to route all digital cash transactions on a particular network through a trusted entity which ensures that no double-spending occurs. This is how the credit card and Interac networks work, for example.

The Bitcoin whitepaper brings a new solution to the double-spending problem, a solution designed to explicitly avoid centralized trusted entities.

In software engineering, there is a principle that one should understand why something is the way it is, before trying to change it. This principle is known as Chesterton’s fence2:

There exists (…) a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, ‘I don’t see the use of this; let us clear it away.’ To which the more intelligent type of reformer will do well to answer: ’If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.

To me, the push towards decentralization is a case of Chesterton’s fence. No one wants to involve a third party in every transaction, but things are this way for two main reasons: fraud management and performance (transaction throughput).

Fraud management is a weak point of an anonymous peer-to-peer network like Bitcoin. While I appreciate the desire for anonymity, this leads to the same behaviors which led to the founding of the US Securities and Exchange Commission almost a hundred years ago. Decentralization also enabled the rise of ransomware, as it is now much harder to track the flow of money between anonymous, single-use cryptocurrency accounts.

Performance is another major downside of decentralization. As an example, Bitcoin’s throughput has never exceeded 6 transactions per second as of the time of writing. By contrast, the electronic payment network VisaNet (which powers Visa credit cards) can process up to 76 000 transactions per second.

Until blockchain enthusiasts understand the advantages of centralization presented above, I don’t think cryptocurrencies will become mainstream.

This post was inspired by the Tim O’Reilly interview on the Rational Reminder podcast.

1. S. Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System (2008). Link to PDF.↩︎

2. G. K. Chesterton, The Thing: Why I Am a Catholic, chapter 4 (1929).↩︎

# Exploring the multiverse of possibilities all at once using monads

https://laurentrdc.xyz//posts/multiverse.html (2022-03-02)

I’m working on a global optimization problem these days. Unlike local optimization problems, e.g. what you would solve using least-squares minimization, global optimization inevitably involves exhaustively evaluating all possible solutions and choosing the best one. As you can imagine, global optimization is much more computationally intensive than local optimization, due to the size of the set of potential solutions. Speeding up a global optimization problem involves reducing the set of possible solutions to a minimum, based on the specifics of the problem.

In this post, I’ll show you how to build the minimal set of possible solutions to an optimization problem, instead of searching for solutions in a larger space. As we’ll see, only viable solutions are ever considered. This will be done by splitting the computations into multiple universes whenever a choice is presented to us, such that we traverse the multiverse of possibilities all at once.

### An example problem

Let’s say we’ve got 8 friends going out for a drink, in two cars with four seats each. How many arrangements of people can we have? If we don’t care about where people sit in each car, the number of arrangements is the number of combinations of 4 people we can make from 8 people, since the remaining 4 people will go in the second car. Therefore, there are:

$\binom{8}{4} = \frac{8!}{4!(8-4)!} = 70$

possible combinations. If you’re not familiar with this notation, you can read $$\binom{8}{4}$$ as choose 4 people out of 8 people, of which there are 70 possibilities. That means that if we wanted to optimize the distribution of people into the two cars – for example, if we wanted to group up the best friends together, or minimize the total weight of people in car 1, or some other objective – we would need to look at 70 solutions. This problem is purely combinatorial.

Now let’s add some constraints. Our 8 friends are coming back from the bar. Out of the 8 friends, 3 of them didn’t drink and are therefore allowed to drive. Thus, the number of possible arrangements of friends in the cars has been reduced, as each car needs a driver. For the first car, we need to select 1 driver out of 3; the second car then also needs a driver, chosen among the 2 remaining drivers. That leaves 6 passengers, of which 3 go in the first car and the rest in the second. The number of possibilities is therefore:

$\binom{3}{1} \times \binom{2}{1} \times \binom{6}{3} = 120$
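Both counts are easy to cross-check by brute force before we build the enumeration properly. A quick sketch in Python, with hypothetical labels D1–D3 for the sober drivers and P1–P6 for the passengers:

```python
from itertools import combinations
from math import comb

# Unconstrained case: choose 4 of 8 people for the first car
assert comb(8, 4) == 70

# Constrained case: each (distinguishable) car needs one sober driver
drivers = ["D1", "D2", "D3"]        # hypothetical labels
passengers = ["P1", "P2", "P3", "P4", "P5", "P6"]

count = 0
for d1 in drivers:                                # driver of car 1
    for d2 in drivers:                            # driver of car 2
        if d2 == d1:
            continue
        for car1 in combinations(passengers, 3):  # passengers of car 1
            count += 1                            # car 2 takes everyone left
print(count)  # 120
```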

### Potential solutions as a decision graph

How else can we express the number of combinations? Think of building a solution, instead of searching for one. We may want to start by assigning a driver to car 1. For each possible decision here, we’ll assign a driver to the second car next, then passengers. The possibilities look like this:

[Figure: Expressing the possibilities as a decision graph. Each layer represents a choice, and each trajectory from top to bottom represents a universe in which these choices were made. (Source code)]

In the figure above, no one is assigned at the start. Then, we assign the first driver (out of three choices). Then, we need to assign a second driver, of which there are only two remaining. Each of the 6 passengers is then assigned. A potential solution (i.e. an assignment between people and cars) is represented by a path in the decision tree. Three possibilities are shown as examples.

This way of thinking about solutions reminds me strongly of the Everett interpretation of quantum mechanics, also known as the many-worlds interpretation or the multiverse interpretation. The three potential assignments are three universes that split from the same starting point. Enumerating all possible solutions to our example problem consists of crawling the decision tree, or crawling the multiverse of possibilities.

### Expressing the multiverse of solutions in Haskell

Based on the decision tree above, I want to run a computation which, when presented with choices, explores all possibilities all at once.

Consider the following type constructor:

newtype Possibilities a = Possibilities [a]

A computation that returns a result Possibilities a represents all possible answers of final type a. For example, a computation that can possibly have multiple answers might look like:

possibly :: [a] -> Possibilities a
possibly xs = Possibilities xs

Alternatively, a computation which is certain, i.e. has a single possibility, is represented by:

certainly :: a -> Possibilities a
certainly x = Possibilities [x] -- A single possibility = a certainty.

Possibilities is basically a list, so we’ll start with a Foldable instance which is useful for counting the number of possibilities using length:

instance Foldable Possibilities where
    foldMap m (Possibilities xs) = foldMap m xs

Possibilities is a functor:

instance Functor Possibilities where
    fmap f (Possibilities ps) = Possibilities (fmap f ps)

The interesting tidbit starts with the Applicative instance. Combining possibilities should be combinatorial, e.g. combining the possibilities of 3 drivers and 6 passengers results in 18 possibilities.

instance Applicative Possibilities where
    pure x = certainly x -- see above

    (Possibilities fs) <*> (Possibilities ps) = Possibilities [f p | f <- fs, p <- ps]

Recall that the list comprehension notation is combinatorial, i.e. [(n,m) | n <- [1..3], m <- [1..3]] has 9 elements ([(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)]).

Now for the crucial part of composing possibilities. We want past possibilities to influence future possibilities; we’ll need a Monad instance. A Monad instance means that if we start with multiple possibilities, and each possibility can result in multiple possibilities, the whole computation should produce multiple possibilities1.

instance Monad Possibilities where

instance Monad Possibilities where
    Possibilities ps >>= f = Possibilities $ concat [toList (f p) | p <- ps] -- concat :: [[a]] -> [a]
      where toList (Possibilities xs) = xs

Let’s define some helper datatypes and functions:

{- With the following imports:
import Data.Set (Set, (\\))
import qualified Data.Set as Set
-}

-- | All possible people which can be assigned to cars
data Person = Driver1 | Driver2 | Driver3
            | Passenger1 | Passenger2 | Passenger3
            | Passenger4 | Passenger5 | Passenger6
            deriving (Bounded, Eq, Enum, Ord, Show)

-- A car assignment consists of two cars, each with a driver,
-- as well as passengers
data CarAssignment = CarAssignment
    { driver1        :: Person
    , driver2        :: Person
    , car1Passengers :: Set Person
    , car2Passengers :: Set Person
    } deriving Show

allDrivers :: Set Person
allDrivers = Set.fromList [Driver1, Driver2, Driver3]

-- Pick a driver from an available group of people.
-- Returns the assigned driver, and the remaining unassigned people
assignDriver :: Set Person -> Possibilities (Person, Set Person)
assignDriver people = possibly [ (driver, Set.delete driver people)
                               | driver <- Set.toList $ people `Set.intersection` allDrivers
                               ]

-- Pick three passengers from an available group of people.
-- Returns the assigned passengers, and the remaining unassigned people
assign3Passengers :: Set Person -> Possibilities (Set Person, Set Person)
assign3Passengers people = possibly [ (passengers, people \\ passengers)
                                    | passengers <- setsOf3
                                    ]
    where setsOf3 = filter (\s -> length s == 3) $ Set.toList $ Set.powerSet people

Finally, we can express the multiverse of possible drivers-and-passengers assignments with great elegance. Behold:

carAssignments :: Possibilities CarAssignment
carAssignments = do
    let everyone = Set.fromList $ enumFromTo minBound maxBound -- [Driver1, Driver2, ..., Passenger6]
    (driver1, rest) <- assignDriver everyone
    (driver2, rest) <- assignDriver rest
    (car1Passengers, rest) <- assign3Passengers rest
    (car2Passengers, _) <- assign3Passengers rest
    return $ CarAssignment driver1 driver2 car1Passengers car2Passengers

Given the Monad instance for Possibilities, the do-block above combines all choices combinatorially, producing every possible assignment. Let’s take a look at the size of the multiverse in this case:

ghci> let multiverse = carAssignments
ghci> print $ length multiverse
120

Just as we had calculated by hand. Amazing!

### Conclusion

What I’ve shown you today is how to structure computations in such a way that you are exploring the multiverse of possibilities all at once. The seasoned Haskell programmer will have recognized that the Functor, Applicative, and Monad instances of Possibilities are just like those of lists!

Although I’m not using Haskell at work2, I expect that something similar will need to be built in the near future to speed up our global optimization problem. The specific problem we are tackling has many more constraints than the example presented in this post; it would be easier to generate a list of candidate solutions, most of which are unsuitable, and filter them one by one, but there is a fixed computational cost associated with generating and checking a solution, and so reducing the set of possible solutions is even more important.

This post was partly inspired by the legendary blog post Typing the technical interview.

A self-contained Haskell source file containing all code from this post is available for download here.

1. This is why some people like to think of monads as types that support flatMap.↩︎

2. Boss, if you’re reading this, please let me use Haskell :).↩︎

# Can you make heterogeneous lists in Haskell? Sure — as long as your intent is clear

https://laurentrdc.xyz//posts/existential.html (2021-09-26, updated 2022-07-06). Featured in Haskell Weekly issue 283.

Sometimes, Haskell’s type system seems a bit restrictive compared to dynamic languages like Python. The most obvious example is the heterogeneous list:

>>> # Python
>>> mylist = ["hello", "world", 117, None]
>>>
>>> for item in mylist:
...     print(item)
hello
world
117
None

but in Haskell, list items must be of the same type:

-- Haskell
mylist = ["hello", "world", 117, ()] -- Invalid: type cannot be inferred!

This is a contrived example, of course. But consider this use-case: I just want to print the content of the list.
It’s unfortunate I can’t write:

mylist :: Show a => [a]
mylist = ["hello", "world", 117, ()] -- All these types have Show instances, but this won't compile

For this specific application, the type system is overly restrictive – as long as all I want to do is print the content of my list! In this post, I’ll show you how to do something like this using the ExistentialQuantification language extension.

## A more complex example

Let’s say I want to list American football players. There are two broad classes of players (offensive and defensive) and we want to keep track of the players in a list – the player registry. Our final objective is to print the list of players to standard output. Let’s try to do the same in Haskell. Our first reflex might be to use a sum type:

data Player = OffensivePlayer String String -- name and position
            | DefensivePlayer String String -- name and position

playerRegistry :: [Player]
playerRegistry = ...

However, not all sports stats apply to both OffensivePlayer and DefensivePlayer constructors. For example:

passingAccuracy :: Player -> IO Double
passingAccuracy (OffensivePlayer name pos) = lookupFromDatabase "passingAccuracy" name
passingAccuracy (DefensivePlayer name pos) = return 0 -- Defensive players don't pass

tacklesPerGame :: Player -> IO Double
tacklesPerGame (OffensivePlayer name pos) = return 0 -- Offensive players don't tackle
tacklesPerGame (DefensivePlayer name pos) = lookupFromDatabase "tacklesPerGame" name

The Player type is too general; we’re not using the type system to its full potential. It’s much more representative of our situation to use two separate types:

data OffensivePlayer = OffensivePlayer String String
data DefensivePlayer = DefensivePlayer String String

passingAccuracy :: OffensivePlayer -> IO Double
passingAccuracy = ...

tacklesPerGame :: DefensivePlayer -> IO Double
tacklesPerGame = ...

This is much safer and more appropriate.
Now let’s give ourselves the ability to print players:

instance Show OffensivePlayer where
    show (OffensivePlayer name pos) = mconcat ["< ", name, " : ", pos, " >"]

instance Show DefensivePlayer where
    show (DefensivePlayer name pos) = mconcat ["< ", name, " : ", pos, " >"]

Awesome. One last problem:

-- This won't typecheck
playerRegistry = [ OffensivePlayer "Tom Brady" "Quarterback"
                 , DefensivePlayer "Michael Strahan" "Defensive end"
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print -- forM_ from Control.Monad

Rather annoying. We could wrap the two player types in a sum type:

data Player = OP OffensivePlayer
            | DP DefensivePlayer

instance Show Player where
    show (OP p) = show p
    show (DP p) = show p

playerRegistry :: [Player]
playerRegistry = [ OP (OffensivePlayer "Tom Brady" "Quarterback")
                 , DP (DefensivePlayer "Michael Strahan" "Defensive end")
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print

but this is quite clunky. It also doesn’t scale well to cases where we have a lot more types!

## Enter existential quantification

The latest version of the Haskell language standard (Haskell 2010) is somewhat dated at this point. However, the Glasgow Haskell Compiler supports language extensions at the cost of portability. It turns out that the ExistentialQuantification language extension can help us with this problem. We turn on the extension at the top of our module:

{-# LANGUAGE ExistentialQuantification #-}

and create an existential datatype:

data ShowPlayer = forall a. Show a => ShowPlayer a

The datatype ShowPlayer is a real datatype that bundles any data a which can be shown. Note that everything else about the internal type is forgotten, since ShowPlayer wraps any type that can be shown (that’s what forall a. Show a means).
We can facilitate the construction of a ShowPlayer with the following helper function:

mkPlayer :: Show a => a -> ShowPlayer
mkPlayer a = ShowPlayer a

Now since the data bundled in a ShowPlayer can be shown, the only operation supported by ShowPlayer is show:

instance Show ShowPlayer where
    show (ShowPlayer a) = show a

Finally, our heterogeneous list:

playerRegistry :: [ShowPlayer]
playerRegistry = [ -- ✓ OffensivePlayer has a Show instance ✓
                   ShowPlayer (OffensivePlayer "Tom Brady" "Quarterback")
                   -- ✓ DefensivePlayer has a Show instance ✓
                 , ShowPlayer (DefensivePlayer "Michael Strahan" "Defensive end")
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print

So we can have a heterogeneous list – as long as the only thing we can do with it is show it! The advantage here compared to the sum-type approach shows when we extend our code to many more types:

data Quarterback = Quarterback String deriving Show
data Lineman = Lineman String deriving Show
data Runningback = Runningback String deriving Show
data WideReceiver = WideReceiver String deriving Show
data DefensiveEnd = DefensiveEnd String deriving Show
data Linebacker = Linebacker String deriving Show
data Safety = Safety String deriving Show
data Corner = Corner String deriving Show

-- Example: some functions are specific to certain positions
passingAccuracy :: Quarterback -> IO Double
passingAccuracy = ...

playerRegistry :: [ShowPlayer]
playerRegistry = [ mkPlayer (Quarterback "Tom Brady")
                 , mkPlayer (DefensiveEnd "Michael Strahan")
                 , mkPlayer (Safety "Richard Sherman")
                 , ...
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print

This way, we can keep the benefits of the type system when we want it, but also give ourselves some flexibility when we need it. This is actually similar to object-oriented programming, where classes bundle data and operations on them into an object!

## A bit more functionality

Let’s pack more operations into our heterogeneous list.
We might want to not only show players, but also access their salaries. We describe the functionality common to all players in a typeclass called BasePlayer:

class Show p => BasePlayer p where
    -- Operate in IO because of database access, for example
    getYearlySalary :: p -> IO Double

instance BasePlayer Quarterback where
    ...

instance BasePlayer Lineman where
    ...

We can update our player registry to support the same operations as BasePlayer through the Player existential type:

data Player = forall a. BasePlayer a => Player a

instance Show Player where
    show (Player a) = show a

instance BasePlayer Player where
    getYearlySalary (Player a) = getYearlySalary a

and our new heterogeneous list now supports:

playerRegistry :: [Player]
playerRegistry = [ Player (Quarterback "Tom Brady")
                 , Player (DefensiveEnd "Michael Strahan")
                 , Player (Safety "Richard Sherman")
                 , ...
                 ]

printPlayerList :: IO ()
printPlayerList = forM_ playerRegistry print -- unchanged

average_salary :: IO Double
average_salary = do
    salaries <- for playerRegistry getYearlySalary -- for from Data.Traversable
    return $ sum salaries / fromIntegral (length salaries)

So we can have a heterogeneous list – but we can only perform the operations supported by the Player type. In this sense, the Player type encodes our intent.

## Conclusion

In this post, we’ve seen how to create heterogeneous lists in Haskell. However, contrary to dynamic languages, we can only do so provided we are explicit about our intent. That means we get the safety of strong, static types with some added flexibility if we so choose.

If you’re interested in type-level programming, including but not limited to the content of this present post, I strongly recommend Rebecca Skinner’s An Introduction to Type Level Programming.

Thanks to Brandon Chinn for some explanation on how to simplify existential types.

]]>
In defence of the PhD prelim exam https://laurentrdc.xyz//posts/prelim.html 2021-06-12T00:00:00Z 2021-06-12 In the department of Physics at McGill University, there are a few requirements for graduation in the PhD program. One of these requirements is to pass the preliminary examination, or prelim for short, at the end of the first year1. This type of examination is becoming rarer across North America. The Physics department has been discussing the modernization of the prelim, either by changing its format or removing it entirely.

In this post, I want to explain what the prelim is and why I think its essence should be preserved.

### What is the prelim?

The prelim in its pre-COVID-19 form is a 6h sit-down exam, split into two 3h sessions. It aims to test students’ mastery of Physics concepts at the undergraduate level. At McGill, there are four themes of questions:

1. Classical mechanics and special relativity;
2. Thermodynamics and statistical mechanics;
3. Electromagnetism;
4. Quantum mechanics.

The first 3h session is composed of 16 short questions, 10 of which must be answered. Some of the short questions are conceptual, while others involve a small calculation. Here is an example of a short question from the year I passed the prelim:

Imagine a planet being a long infinite solid cylinder of radius $$R$$ with a mass per unit length $$\Lambda$$. The matter is uniformly distributed over its radius. Find the potential and gravitational field everywhere, i.e. inside and outside the cylinder, and sketch the field lines.

The second 3h session is composed of 8 long questions, split evenly among the four themes. Four questions must be answered (no more!), with at least one question from each theme. Here is an example of a long question from the year I passed the prelim:

A simple 1-dimensional model for an ionic crystal (such as NaCl) consists of an array of $$N$$ point charges in a straight line, alternately $$+e$$ and $$−e$$ and each at a distance $$a$$ from its nearest neighbours. If $$N$$ is very large, find the potential energy of a charge in the middle of the row and of one at the end of the row in the form $$\alpha e^2/(4\pi \epsilon_0 a)$$.
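As an aside, the long question above can be checked numerically: the middle charge interacts with pairs of equidistant charges of alternating sign, so $$\alpha$$ reduces to an alternating harmonic series. Here is a quick sketch of my own (not part of the exam), truncating the series after many terms:

```python
import math

# The middle charge sees pairs of charges at distances n*a on either side,
# with alternating signs, so its energy is alpha * e^2/(4*pi*eps0*a) with
# alpha = -2 * (1 - 1/2 + 1/3 - ...) = -2*ln(2).
def alpha_middle(n_terms=200_000):
    return -2 * sum((-1) ** (n + 1) / n for n in range(1, n_terms + 1))

# The end charge only has neighbours on one side, so its constant is halved
def alpha_end(n_terms=200_000):
    return -sum((-1) ** (n + 1) / n for n in range(1, n_terms + 1))
```

The truncated sums land within $$10^{-5}$$ of $$-2 \ln 2 \approx -1.386$$ and $$-\ln 2 \approx -0.693$$ respectively.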

I passed the prelim exam in 2018. For the curious, here are all the questions from that year: short (PDF) and long (PDF). The department of Physics also keeps a record of the prelim questions going back to 1996. Senior undergraduates are well-equipped to answer prelim questions. The difficulty comes from the breadth of possible questions, as well as the time constraint.

### A test of competence

Of course, the prelim is only one of the requirements on the way to earn a doctoral degree. Most importantly, PhD students need to write a dissertation and defend its content in front of a committee of experts. So why have the prelim at all?

The prelim serves as a way to ensure that all PhD students have a certain level of competence in all historical areas of Physics. Evaluating students for admission to the Physics department is inherently hard because it is difficult to compare academic records from different institutions across the world.

Earning a PhD makes you an expert in a narrow subject. Passing the prelim indicates that students have a baseline knowledge across all historical Physics disciplines.

### Proposed alternative: the comprehensive examination

Not every department in the McGill Faculty of Science requires PhD students to pass a prelim exam. Another popular alternative, in use in the Chemistry department for example, is the so-called comprehensive examination2.

The structure of the comprehensive exam varies across departments, but generally it involves the student writing a multi-page project proposal and defending this proposal in front of a committee of faculty members. In the course of the comprehensive exam, committee members may ask the student any question related to their research topic.

A comprehensive exam has two attractive attributes. First, its scope is closer to students’ area of research. Second, a large part of the comprehensive (the project proposal) can be done offline, without the pressure of being timed.

### In defence of the prelim

The prelim is a stressful event. Not everyone is comfortable in a sit-down exam setting. A PhD career can end because someone slept poorly the night before the exam. I support any and all adjustments to the current prelim format to make the experience more accessible in this sense.

My main objection to replacing the prelim with something closer to the comprehensive exam is the functionalization of education. Removing the prelim eliminates the incentive to maintain a baseline knowledge across Physics. It encourages PhD students to develop an even narrower set of skills, making the PhD program even more focused on the resulting dissertation.

The comprehensive exam is inherently about making students’ experience more focused on their research area. This is appealing from the students’ point of view: why should they have to go out of their way to stay current on classical mechanics, something which they might never use? The comprehensive exam (in the format that I have described above) streamlines the requirements for graduation.

The graduate student experience is about much more than the resulting dissertation. We want our students to be more than just experts in their narrow fields; we also want them to be ready to contribute to society beyond their immediate expertise. Does the prelim ensure that this is the case? Of course not. But removing the prelim sends the wrong message about what it means to graduate with a PhD.

On a personal note, the prelim made me review all of my undergraduate studies. I purchased the Feynman Lectures on Physics and read all three volumes. With a Master’s degree under my belt, I was able to appreciate what I had learned in a new light, even though I haven’t used most of it since. While I cannot say that the exam was fun, the studying experience was definitely one of the highlights of my PhD.

1. Other institutions might call it the qualifying examination.↩︎

2. Again, this might have other names at other institutions.↩︎

]]>
Harnessing symmetry to find the center of a diffraction pattern https://laurentrdc.xyz//posts/autocenter.html 2021-01-23T00:00:00Z 2022-02-20 Ultrafast electron diffraction involves the analysis of diffraction patterns. Here is an example diffraction pattern for a thin (<100nm) flake of graphite1:

A diffraction pattern is effectively the intensity of the Fourier transform of the sample’s atomic structure. Given that crystals like graphite are well-ordered, the diffraction peaks (i.e. Fourier components) are very large. You can see that the diffraction pattern is six-fold symmetric; that’s because the atoms in graphite arrange themselves in a honeycomb pattern, which is also six-fold symmetric. In these experiments, the fundamental Fourier component is so strong that we need to block it. That’s what that black beam-block is about.

There are crystals that are not as well-ordered as graphite. Think of a powder made of many small crystallites, each being about 50nm x 50nm x 50nm. Diffracting electrons through a sample like that results in a kind of average of all possible diffraction patterns. Here’s an example with polycrystalline chromium:

Each ring in the above pattern corresponds to a Fourier component. Notice again how symmetric the pattern is; the material itself is symmetric enough that the fundamental Fourier component needs to be blocked.

For my work on iris-ued, a data analysis package for ultrafast electron scattering, I needed a reliable, automatic way to find the center of such diffraction patterns, to get rid of the manual work currently required. So let’s see how!

## First try: center of mass

A first naive attempt might start with the center-of-mass, i.e. the average of pixel positions weighted by their intensity. Since intensity is symmetric about the center, the center-of-mass should coincide with the actual physical center of the image.

Good news, scipy’s ndimage module exports such a function: center_of_mass. Let’s try it: Demonstration of using scipy.ndimage.center_of_mass to find the center of diffraction patterns. (Source code)

Not bad! Especially in the first image, really not a bad first try. But I’m looking for something pixel-perfect. Intuitively, the beam-block in each image should mess with the calculation of the center of mass. Let’s define the following areas that we would like to ignore:

Masks are generally defined as boolean arrays with True (or 1) where pixels are valid, and False (or 0) where pixels are invalid. Therefore, we should ignore the weight of masked pixels. scipy.ndimage.center_of_mass does not support this feature; we need an extension of center_of_mass:

import numpy as np

def center_of_mass_masked(im, mask):
    rr, cc = np.indices(im.shape)
    # Masked (invalid) pixels get zero weight
    weights = im * mask.astype(im.dtype)

    r = np.average(rr, weights=weights)
    c = np.average(cc, weights=weights)
    return r, c

This is effectively an average of the row and column coordinates (rr and cc) weighted by the image intensity. The trick here is that mask.astype(im.dtype) is 0 where pixels are “invalid”; therefore they don’t count in the average! Let’s look at the result:

I’m not sure if it’s looking better, honestly. But at least we have an approximate center! That’s a good starting point that feeds into the next step.

## Friedel pairs and radial inversion symmetry

In his thesis2, which is now also a book, Nelson Liu describes how he does it:

A rough estimate of its position is obtained by calculating the ‘centre of intensity’ or intensity-weighted arithmetic mean of the position of > 100 random points uniformly distributed over the masked image; this is used to match diffraction spots into Friedel pairs amongst those found earlier. By averaging the midpoint of the lines connecting these pairs of points, a more accurate position of the centre is obtained.

Friedel pairs are peaks related by inversion through the center of the diffraction pattern. The existence of these pairs is guaranteed by crystal symmetry. For polycrystalline patterns, Friedel pairs are averaged into rings; rings are always inversion-symmetric about their centers. Here’s an example of two Friedel pairs: Example of two Friedel pairs: white circles form pair 1, while red circles form pair 2. (Source code)

The algorithm by Liu was meant for single-crystal diffraction patterns with well-defined peaks, and not so much for rings. However, we can distill Liu’s idea into a new, more general approach. If the approximate center coincides with the actual center of the image, then the image should be invariant under radial-inversion with respect to the approximate center. Said another way: if the image $$I$$ is defined on polar coordinates $$(r, \theta)$$, then the center maximizes correlation between $$I(r, \theta)$$ and $$I(-r, \theta)$$. Thankfully, computing the masked correlation between images is something I’ve worked on before!

Let’s look at what radial inversion looks like. There are ways to do it with interpolation, e.g. scikit-image’s warp function. However, in my testing, this is incredibly slow compared to what I will show you. A faster approach is to consider that if the image was centered on the array, then radial inversion is really flipping the direction of the array axes; that is, if the image array I has size (128, 128), and the center is at (64, 64), the radial inverse of I is I[::-1, ::-1] (numpy) / flip(flip(I, 1), 2) (MATLAB) / I[end:-1:1,end:-1:1] (Julia). Another important note is that if the approximate center of the image is far from the center of the array, the overlap between the image and its radial inverse is limited. Consider this:

If we cropped out the bright areas around the frame, then the approximate center found would coincide with the center of the array; then, radial inversion is very fast. Demonstration of what parts of the image to crop so that the image center coincides with the center of the array. (Source code)

Now, especially for the right column of images, it’s pretty clear that the approximate center wasn’t perfect. The correction to the approximate center can be calculated with the masked normalized cross-correlation3 4: Top left: diffraction pattern. Top right: radially-inverted diffraction pattern about an approximate center. Bottom left: masked normalized cross-correlation between the two diffraction patterns. Bottom right: 2x zoom on the cross-correlation shows the translation mismatch between the diffraction patterns. (Source code)

The cross-correlation in the bottom right corner (zoomed by 2x) shows that the true center is the approximate center we found earlier, corrected by the small shift (white arrow)! For single-crystal diffraction patterns, the result is even more striking: Top left: diffraction pattern. Top right: radially-inverted diffraction pattern about an approximate center. Bottom left: masked normalized cross-correlation between the two diffraction patterns. Bottom right: 2x zoom on the cross-correlation shows the translation mismatch between the diffraction patterns. (Source code)

We can put the two steps together and determine a pixel-perfect center:
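To make the idea concrete, here is a minimal sketch of the refinement step in plain NumPy. It uses an unmasked circular cross-correlation for simplicity (the actual implementation uses the masked normalized cross-correlation), and refine_center is a hypothetical helper of my own, not the scikit-ued code:

```python
import numpy as np

def refine_center(im, approx):
    """Refine an approximate center of a radially-symmetric image by
    cross-correlating it with its radial inverse about `approx`."""
    r0, c0 = int(round(approx[0])), int(round(approx[1]))
    # Crop to the largest rectangle centered on the approximate center, so
    # that reversing the axes is a radial inversion about `approx`
    dr = min(r0, im.shape[0] - 1 - r0)
    dc = min(c0, im.shape[1] - 1 - c0)
    cropped = im[r0 - dr : r0 + dr + 1, c0 - dc : c0 + dc + 1]
    inverted = cropped[::-1, ::-1]
    # Translation between the image and its radial inverse, found as the
    # argmax of their circular cross-correlation (computed via FFTs)
    xcorr = np.fft.ifft2(np.fft.fft2(cropped) * np.conj(np.fft.fft2(inverted))).real
    peak = np.array(np.unravel_index(np.argmax(xcorr), xcorr.shape))
    dims = np.array(xcorr.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]  # wrap to negative shifts
    # The image is offset from its radial inverse by twice the center error
    return (r0 + peak[0] / 2, c0 + peak[1] / 2)

# Synthetic "powder ring" centered at (50, 50), recovered from a
# deliberately-wrong initial guess:
yy, xx = np.mgrid[0:101, 0:101]
ring = np.exp(-((np.hypot(yy - 50, xx - 50) - 20) ** 2) / 8)
assert refine_center(ring, (48, 47)) == (50.0, 50.0)
```

The factor of two in the last line is the key observation: flipping about a point that is off by $$\delta$$ produces an image translated by $$2\delta$$ relative to the original.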

## Bonus: low-quality diffraction

Here’s a fun consequence: the technique also works for diffraction patterns that are pretty crappy and very far off center, provided that the asymmetry in the background is taken care of:

## Conclusion

In this post, we have arrived at a robust way to compute the center of a diffraction pattern by making use of a strong invariant: radial inversion symmetry. My favourite part: this method admits no free parameters!

If you want to make use of this, take a look at autocenter, a new function that has been added to scikit-ued.

1. L.P. René de Cotret et al, Time- and momentum-resolved phonon population dynamics with ultrafast electron diffuse scattering, Phys. Rev. B 100 (2019) DOI: 10.1103/PhysRevB.100.214115.↩︎

2. Liu, Lai Chung. Chemistry in Action: Making Molecular Movies with Ultrafast Electron Diffraction and Data Science. University of Toronto, 2019.↩︎

3. Dirk Padfield. Masked object registration in the Fourier domain. IEEE Transactions on Image Processing, 21(5):2706–2718, 2012. DOI: 10.1109/TIP.2011.2181402↩︎

4. Dirk Padfield. Masked FFT registration. Prov. Computer Vision and Pattern Recognition. pp 2918-2925 (2010). DOI:10.1109/CVPR.2010.5540032↩︎

]]>
Matplotlib for graphic design https://laurentrdc.xyz//posts/banner.html 2020-11-03T00:00:00Z 2020-11-05 In this post, I will show you how I generated the banner for this website using Matplotlib. In case it disappears in the future, here is an image of it:

Matplotlib is a plotting library for Python, historically inspired by the plotting capabilities of MATLAB. You can take a look at the various examples on their website. One thing that is not immediately obvious is that you can also use Matplotlib to draw shapes! In this sense, Matplotlib becomes a graphic design library.

(You can see the exact source code for the images below by clicking on the link in the caption)

### Basic shapes

Let’s start at the beginning: drawing a single hexagon.

import matplotlib.patches as mpatches

def draw_hexagon(ax, center, radius, color="w"):
    ax.add_patch(
        mpatches.RegularPolygon(
            xy=center,
            numVertices=6,
            radius=radius,
            facecolor=color,
            edgecolor="k",
            orientation=0,
            fill=True,
        )
    )

Using this function, we can draw a tiling of hexagons. Let’s first set up our plot:

import math
import numpy as np
import matplotlib.pyplot as plt

# Note that Matplotlib figure size is (width, height) in INCHES...
# We want it to be 100mm x 100mm
mm_to_in = 0.03937008
figure, ax = plt.subplots(1,1, figsize=(100 * mm_to_in, 100*mm_to_in))

# Hide as much of the axis borders/margins as possible
ax.axis("off")
ax.set_xlim([0, 100])
ax.set_ylim([0, 100])

# Dimensions of the bounding box of the hexagons
radius = 2  # hexagon radius in mm (illustrative value)
width  = math.sqrt(3) * radius
height = 2 * radius

### Tiling

We note that a tiling of regular hexagons requires a different offset for every row. If you imagine rows being numbered starting at 0, hexagons in rows with odd indices need to be offset by $$\frac{\sqrt{3}}{2} r$$, where $$r$$ is the radius (or distance from the center to vertex). To find the centers of the hexagons, the following loop does the trick:

import itertools

centers = list()

for offset_x, offset_y in [(0, 0), (width / 2, (3 / 2) * radius)]:

rows    = np.arange(start=offset_x, stop=105, step=width)
columns = np.arange(start=offset_y, stop=105, step=3 * radius)

for x, y in itertools.product(rows, columns):
centers.append( (x,y) )

Once we know the centers of the hexagons, we can place them one by one:

for (x,y) in centers:
draw_hexagon(ax, center=(x,y), radius=radius)

Here’s what it looks like so far:

### Color

The figure above has the wrong dimension, but you get the idea. Let’s color the hexagons appropriately. In the banner, the color of the hexagons is based on the “inferno” colormap. The color radiates away from the bottom left corner:

def draw_hexagon(ax, center, radius, color="w"):
    ax.add_patch(
        mpatches.RegularPolygon(
            xy=center,
            numVertices=6,
            radius=radius,
            facecolor=color,
            edgecolor="none",  # note: edgecolor=None is actually the default value!
            orientation=0,
            fill=True,
        )
    )

colormap = plt.get_cmap('inferno')
for (x,y) in centers:
# radius away from bottom left corner
# proportional to the distance of the top right corner
# i.e. 0 < r < 1
r = math.hypot(x, y) / math.hypot(100, 100)
draw_hexagon(ax, center=(x, y), radius=radius, color=colormap(r))

Here’s the result:

Because of rounding errors in the hexagon dimensions, there is some visible spacing between the hexagons. To get rid of it, we draw the hexagons a bit larger (by 0.2 millimeters):

def draw_hexagon(ax, center, radius, color="w"):
    ax.add_patch(
        mpatches.RegularPolygon(
            xy=center,
            numVertices=6,
            radius=radius + 0.2,  # slightly larger, to hide gaps from rounding
            facecolor=color,
            edgecolor="none",
            orientation=0,
            fill=True,
        )
    )

### A bit of randomness

For a light touch of whimsy, I like to make the color fluctuate a little:

import random

colormap = plt.get_cmap('inferno')
for (x,y) in centers:
# radius away from bottom left corner
# proportional to the distance of the top right corner
# i.e. 0 < r < 1
r = math.hypot(x, y) / math.hypot(100, 100)
r += random.gauss(0, 0.01)
draw_hexagon(ax, center=(x, y), radius=radius, color=colormap(r))

We arrive at the final result:

You can imagine adapting this approach to different tilings, and different colors schemes. Here’s a final example using squares and the “cool” colormap:
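A minimal sketch of that squares variant might look like the following; the canvas size, square side, and figure dimensions are illustrative choices, not the exact banner code:

```python
import math

import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt

# Tile a 100x100 canvas with 10x10 squares, colored by the distance of each
# square from the bottom-left corner using the "cool" colormap
figure, ax = plt.subplots(1, 1, figsize=(4, 4))
ax.axis("off")
ax.set_xlim([0, 100])
ax.set_ylim([0, 100])

colormap = plt.get_cmap("cool")
side = 10
for i in range(10):
    for j in range(10):
        x, y = i * side, j * side
        r = math.hypot(x, y) / math.hypot(100, 100)  # 0 <= r <= 1
        ax.add_patch(
            mpatches.Rectangle(xy=(x, y), width=side, height=side, facecolor=colormap(r))
        )

figure.savefig("squares.png")
```

Squares tile the plane without offsets, so the bookkeeping is even simpler than for hexagons.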

The masked normalized cross-correlation and its application to image registration https://laurentrdc.xyz//posts/mnxc.html 2019-04-30T00:00:00Z 2022-02-20 Image registration consists in determining the most likely transformation between two images — most importantly translation, which is what I am most concerned with.

How can we detect the translation between two otherwise similar images? This is an application of cross-correlation. The cross-correlation of two images is the degree of similitude between the images for every possible translation between them. Mathematically, given grayscale images as discrete functions $$I_1(i,j)$$ and $$I_2(i,j)$$, their cross-correlation $$I_1 \star I_2$$ is defined as: $(I_1 \star I_2)(u, v) \equiv \sum_{i,j} I_1(i, j) \cdot I_2(i - u, j - v)$

For example, if $$I_1 = I_2$$, then $$I_1 \star I_2$$ has its maximum at $$(u,v) =$$ (0,0). What happens if $$I_1$$ and $$I_2$$ are shifted from each other? Let’s see: The cross-correlation between shifted images exhibits a global maxima at the location corresponding to relative translation. (Source code)

In the above example, the cross-correlation is maximal at (50, 0), which is exactly the translation required to shift the second image back to match the first one. Finding the translation between images is then a simple matter of determining the global maximum of the cross-correlation. This operation is so useful that it is implemented in the Python library scikit-image as skimage.registration.phase_cross_correlation.
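The core idea can be sketched in a few lines of NumPy. This is a simplified, unmasked version of my own (skimage.registration.phase_cross_correlation does this more robustly, including subpixel refinement):

```python
import numpy as np

def find_shift(image, reference):
    """Estimate the translation mapping `reference` onto `image`, as the
    argmax of their circular cross-correlation computed via FFTs."""
    xcorr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(reference))).real
    peak = np.array(np.unravel_index(np.argmax(xcorr), xcorr.shape))
    # Peaks past the halfway point correspond to negative translations
    dims = np.array(xcorr.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]
    return tuple(int(p) for p in peak)

# A shifted copy of a random image is recovered exactly:
reference = np.random.default_rng(0).random((64, 64))
shifted = np.roll(reference, shift=(5, 12), axis=(0, 1))
assert find_shift(shifted, reference) == (5, 12)
```

The FFT formulation is what makes this practical: the naive double sum over all translations would be far too slow for megapixel images.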

It turns out that in my field of research, image registration can be crucial to correct experimental data. My primary research tool is ultrafast electron diffraction. Without knowing the details, you can think of this technique as a kind of microscope. A single image from one of our experiments looks like this:

Most of the electron beam is unperturbed by the sample; this is why we use a metal beam-block (seen as a black rod in the image above) to prevent the electrons from damaging our apparatus.

Our experiments are synthesized from hundreds of gigabytes of images like the one above, and it may take up to 72h (!) to acquire all the images we need. Over the course of this time, the electron beam may shift in a way that moves the image, but not the beam-block1. Here’s what I mean: Here is the difference between two equivalent images, acquired a few hours apart. The shift between them is evident in the third panel. (Source code)

This does not fly. We need to be able to compare images together, and shifts by more than 1px are problematic. We need to correct for this shift, for every image, with respect to the first one. However, we are also in a bind, because unlike the example above, the images are not completely shifted; one part of them, the beam-block, is static, while the image behind it shifts.

The crux of the problem is this: the cross-correlation between images gives us the shift between them. However, it is not immediately obvious how to tell the cross-correlation operation to ignore certain parts of the images. Is there some kind of operation, similar to the cross-correlation, that allows us to mask the parts of the images we want to ignore?

Thanks to the work of Dr. Dirk Padfield2 3, we now know that such an operation exists: the masked normalized cross-correlation. In his 2012 article, he explains the procedure and performance of this method to register images with masks. One such example is the registration of ultrasound images; unfortunately, showing you the figure from the article would cost me 450 \$US, so you’ll have to go look at it yourselves.

In order to fix our registration problem, then, I implemented the masked normalized cross-correlation operation — and its associated registration function — in our ultrafast electron diffraction toolkit, scikit-ued4. Here’s an example of it in action: Using the masked-normalized cross-correlation to align two diffraction patterns of polycrystalline chromium. The mask shown tells the algorithm to ignore the beam-block of both images. (Source code)

## Contributing to scikit-image

However, since this tool could see use in a more general setting, I decided to contribute it to scikit-image:

1. My contribution starts by bringing up the subject via a GitHub issue (issue #3330).
2. I forked scikit-image and integrated the code and tests from scikit-ued to scikit-image. The changes are visible in the pull request #3334.
3. Finally, some documentation improvements and an additional gallery example were added in pull request #3528.

In the end, a new function has been added, skimage.registration.phase_cross_correlation (previously skimage.feature.masked_register_translation).

1. Technically, the rotation of the electron beam about its source will also move the shadow of the beam-block. However, because the beam-block is much closer to the electron source, the effect is imperceptible.↩︎

2. Dirk Padfield. Masked object registration in the Fourier domain. IEEE Transactions on Image Processing, 21(5):2706–2718, 2012. DOI: 10.1109/TIP.2011.2181402↩︎

3. Dirk Padfield. Masked FFT registration. Prov. Computer Vision and Pattern Recognition. pp 2918-2925 (2010). DOI:10.1109/CVPR.2010.5540032↩︎

4. L. P. René de Cotret et al, An open-source software ecosystem for the interactive exploration of ultrafast electron scattering data, Advanced Structural and Chemical Imaging 4:11 (2018) DOI:10.1186/s40679-018-0060-y. This publication is open-access .↩︎

]]>
When one temperature is not enough: the two-temperature model https://laurentrdc.xyz//posts/two-temp-model.html 2019-04-03T00:00:00Z 2021-12-14 Temperature is a measure of the average kinetic energy of all particles in a system. An example of such a system is presented below: Translational motion of particles in a box. Some particles are colored red for better tracking. Image credit to A. Greg.

Note that the above system has a temperature because there exists a clear average motion, even though not all particles are moving at the same velocity. That is, a system is at some temperature $$T$$ as long as the distribution of kinetic energies (often related to velocities) resembles a normal distribution: Examples of distribution of particle kinetic energies. Left: distribution of particle energies with a well-defined temperature. Right: distribution of particle energies does not match an expected thermal equilibrium. (Source code)

So, a system with a well-defined temperature exhibits a normal distribution of particle energies. It turns out that it is possible to prepare systems in a state where there are two clear average energies, if only for a very short moment.

Real materials are composed of two types of particles: nuclei and electrons1. These particles have wildly different masses, so electromagnetic fields — for example, an intense pulse of light — will not affect them at the same time; since nuclei are at least ~1000x more massive than electrons, we should expect the electrons to react about ~1000x faster.

After decades of development culminating in the 2018 Nobel Prize in Physics, the production of ultrafast laser pulses (less than 30 femtoseconds2) is now routine. These ultrafast laser pulses can be used to prepare systems in a strange configuration: one with seemingly two temperatures, albeit only for a short time. Modeling of this situation in crystalline material was done decades ago, and the model is known as the two-temperature model3.

Roughly 100fs after dumping a lot of energy into a material, the nuclei might not have reacted yet, and we might have the following energetic landscape: Idealized view of the distribution of kinetic energy, 100 femtosecond after photoexcitation by an ultrafast laser pulse. For a very short time, the system can be described by two temperatures; one for the lattice of nuclei, $$T_l$$, and one for the electronic system, $$T_e$$. (Source code)

where the nuclei will still be at the equilibrium temperature, and the electrons might be at a temperature of 20000$$^{\circ}$$C. Therefore, we have a system with two temperatures for a few picoseconds4.
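The relaxation sketched above is captured by the two-temperature model: two coupled rate equations in which the electron and lattice temperatures relax toward one another through a coupling constant $$G$$. Below is a minimal numerical sketch of this relaxation; the heat capacities, coupling constant, and time step are made-up illustrative values, not parameters of any real material:

```python
import numpy as np

def two_temperature(Te0=20_000.0, Tl0=300.0, Ce=1e-2, Cl=1.0, G=0.5,
                    dt=1e-3, steps=5000):
    """Euler integration of the two-temperature model:

        Ce * dTe/dt = -G * (Te - Tl)
        Cl * dTl/dt = +G * (Te - Tl)

    All parameter values are illustrative, not material-specific."""
    Te, Tl = Te0, Tl0
    Te_trace, Tl_trace = [Te], [Tl]
    for _ in range(steps):
        flow = G * (Te - Tl)      # energy flowing from electrons to lattice
        Te -= flow / Ce * dt
        Tl += flow / Cl * dt
        Te_trace.append(Te)
        Tl_trace.append(Tl)
    return np.array(Te_trace), np.array(Tl_trace)

Te_t, Tl_t = two_temperature()
```

Because the electronic heat capacity is much smaller than that of the lattice, the common final temperature ends up much closer to the initial lattice temperature than to the initial electronic one, even though the electrons started out extremely hot.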

1. The atomic forces at the nanometer-scale are mostly electromagnetic, so I will consider the atomic nuclei as a single particle.↩︎

2. $$1$$ femtosecond $$= 10^{-15}$$ seconds↩︎

3. P. B. Allen, Theory of thermal relaxation of electrons in metals (1987). Physics Review Letters 59, DOI: 10.1103/PhysRevLett.59.1460↩︎

4. $$1$$ picosecond $$= 1000$$ fs $$= 10^{-12}$$ seconds↩︎

]]>
Example of a Pandoc filter to abstract away CSS framework quirks https://laurentrdc.xyz//posts/bulma-pandoc-filter.html 2018-09-12T00:00:00Z 2018-10-08 To make this static website render correctly on both desktop and mobile, I’ve decided to ‘upgrade’ my setup to use the Bulma CSS framework. This introduced a problem I did not anticipate.

For example, consider the following “raw” HTML tag to create a level 1 title:

<h1>Title</h1>

However, in Bulma, headings must be of a specific class, like so 1:

<!-- Level-1 title -->
<h1 class="title is-1">Title</h1>

<!-- Level-2 title -->
<h2 class="title is-2">Title</h2>

<!-- Level-1 subtitle -->
<h1 class="subtitle is-1">Title</h1>

Problem is, a lot of headings included on my website are generated from Markdown to HTML using Pandoc. Predictably, Markdown headings like # Title are converted to “raw” HTML headings like <h1>Title</h1>, and not the <h1 class="title is-1">Title</h1> that I need to use.

This is a textbook example of a problem that can be solved with a Pandoc filter.

During the conversion from Markdown to HTML, Pandoc constructs an abstract syntax tree representing the document. A Pandoc filter is used to apply transformations to this abstract syntax tree. This is precisely what we want: we want to transform headings into a slightly different kind of heading that will play nicely with Bulma.

There are some examples in the Pandoc documentation on filters, but I would like to document the process I used to create this filter.

We’ll be writing the filter in Haskell, because I can then include it directly in the website code generation (more info here).

### The Pandoc abstract syntax tree

We need to familiarize ourselves with the Pandoc abstract syntax tree (AST). This is defined in the pandoc-types package, most importantly in the Text.Pandoc.Definition module (see here).

We’re using Haskell, so let’s look at the data types. A Pandoc document is converted from some source format (in our case, Markdown) to the Pandoc type:

data Pandoc = Pandoc Meta [Block]

Without looking at the details, we can see that a document is a list of blocks as well as some metadata. The Block datatype is more interesting:

data Block
= Plain [Inline]        -- ^ Plain text, not a paragraph
| Para [Inline]         -- ^ Paragraph
(...)                   -- (omitted)
| Header Int Attr [Inline] -- ^ Header - level (integer) and text (inlines)
(...)                   -- (omitted)

(source here)

There we go! One of the possible types of blocks is a header. This header has a level (a level 1 header is the largest title), some attributes, and [Inline] represents the content of the header. We’re interested in modifying the header attributes, so let’s look at Attr:

-- | Attributes: identifier, classes, key-value pairs
type Attr = ( String                -- Identifier. Not important
, [String]              -- ^ class      (e.g. ["a", "b"] -> class="a b" in HTML)
, [(String, String)])   -- Not important

The “classes” part of the attribute is precisely what we’d like to modify. Recall that to get Bulma to work, we want to have headings looking like <h3 class="title is-3">Title</h3>.

### Modifying one AST node

Let’s write a function that modifies Blocks (i.e. one tree node) like we want 2:

-- This is from the pandoc-types package
import Text.Pandoc.Definition   (Block(..), Attr)

toBulmaHeading :: Block -> Block
-- Pattern matching on the input
-- Any Block that is actually a header should be changed
toBulmaHeading (Header level attrs inlines) = Header level newAttrs inlines
    where
        (identifier, classes, keyvals) = attrs
        -- We leave identifier and key-value pairs unchanged
        newAttrs = ( identifier
                   -- We extend header classes to have the Bulma classes "title" and "is-*"
                   -- where * is the header level
                   , classes <> ["title", "is-" <> show level]
                   , keyvals)

-- We leave any non-header blocks unchanged
toBulmaHeading x = x

### Modifying the entire AST

All we need now is to traverse the entire syntax tree, and modify every block according to the toBulmaHeading function. This is trivial using the Text.Pandoc.Walk.walk function (also from pandoc-types). Thanks to typeclasses, walk works on many types, but the one specialization I’m looking for is:

walk :: (Block -> Block)    -- ^ A function that modifies the abstract syntax tree
     -> Pandoc              -- ^ A syntax tree
     -> Pandoc              -- ^ Our modified syntax tree

Our filter then becomes:

-- This is from the pandoc-types package
import Text.Pandoc.Definition   (Pandoc, Block(..), Attr)
import Text.Pandoc.Walk         (walk)

toBulmaHeading :: Block -> Block
toBulmaHeading (Header level attrs inlines) = Header level newAttrs inlines
    where
        (identifier, classes, keyvals) = attrs
        -- We leave identifier and key-value pairs unchanged
        newAttrs = ( identifier
                   -- We extend header classes to have the Bulma classes "title" and "is-*"
                   -- where * is the header level
                   , classes <> ["title", "is-" <> show level]
                   , keyvals)
-- We leave any non-header blocks unchanged
toBulmaHeading x = x

-- | Pandoc filter that changes headings to play nicely with Bulma
bulmaHeadingTransform :: Pandoc -> Pandoc
bulmaHeadingTransform = walk toBulmaHeading

### Hooking into Hakyll

To include this filter in my Hakyll pipeline, I only need to provide this filter to the pandocCompilerWithTransform function. Hakyll will then apply the Pandoc filter after the AST has been generated from Markdown, but before HTML rendering happens.

If you want to know how to integrate all of this you can shoot me an e-mail.

## Closing remarks

I hope this example has shown you the process behind writing Pandoc filters. Without modifying the content of my posts, I have been able to integrate Bulma in my static website.

I could also have done it by replacing Markdown headers with inline HTML. However, this would have been less fun.

You can take a look at the source code used to generate this website.

1. I’m sure there is a way to abstract those details away, but the objective today is to play with Pandoc.↩︎

2. I’m using the mappend operation <> to concatenate lists and strings. I could have used ++, but <> just looks so slick.↩︎

]]>