Hash Me If You Can

We are living in the era of Big Data, but the bigger our data sets become, the slower even simple search operations get. I will now show you a trick that is the next best thing to magic: building a search function that practically doesn’t slow down even for large data sets… in base R!

At first thought this is totally counterintuitive: the bigger the data set, the longer it should take to search it, right? Wrong!

The data structure we will be talking about is called a hash or a dictionary (sometimes also called an associative array). The big idea is to use a mathematical function (called a hash function) which maps each data item (e.g. a name) to an address (called a hash) where the corresponding value (e.g. a telephone number) is stored. So to find the telephone number for a certain name you don’t have to search through all the names; you just put the name into the hash function and get back the address of the telephone number instantaneously:

Source: Wikimedia

This image is from the very good Wikipedia article on hash function algorithms: Hash function. It also gives a trivial example of a hash function to convey the idea:

If the data to be hashed is small enough, one can use the data itself (reinterpreted as an integer) as the hashed value. The cost of computing this “trivial” (identity) hash function is effectively zero. This hash function is perfect, as it maps each input to a distinct hash value.
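Just to illustrate this “trivial” hash idea (a toy example of mine, not part of the machinery we will use below): if the keys happen to be small integers you can use the key itself as the storage address, i.e. as an index into a vector:

# toy illustration of the identity hash: the integer key is its own address
phone_book <- rep(NA_integer_, 1000) # pre-allocate 1000 slots
phone_book[815] <- 4711L             # store value 4711 under key 815
phone_book[815]                      # lookup is a single indexing step, no search needed
## [1] 4711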

For general keys like character strings R provides environments created with the option hash = TRUE. The following example is from an answer I gave on Stack Overflow (https://stackoverflow.com/a/42350672/468305):

# vectorize assign, get and exists for convenience
assign_hash <- Vectorize(assign, vectorize.args = c("x", "value"))
get_hash <- Vectorize(get, vectorize.args = "x")
exists_hash <- Vectorize(exists, vectorize.args = "x")

# keys and values
key <- c("tic", "tac", "toe")
value <- c(1, 22, 333)

# initialize hash
hash <- new.env(hash = TRUE, parent = emptyenv(), size = 100L)

# assign values to keys
assign_hash(key, value, hash)
## tic tac toe 
##   1  22 333

# get values for keys
get_hash(c("toe", "tic"), hash)
## toe tic 
## 333   1

# alternatively:
mget(c("toe", "tic"), hash)
## $toe
## [1] 333
## 
## $tic
## [1] 1

# show all keys
ls(hash)
## [1] "tac" "tic" "toe"

# show all keys with values
get_hash(ls(hash), hash)
## tac tic toe 
##  22   1 333

# remove key-value pairs
rm(list = c("toe", "tic"), envir = hash)
get_hash(ls(hash), hash)
## tac 
##  22

# check if keys are in hash
exists_hash(c("tac", "nothere"), hash)
##     tac nothere 
##    TRUE   FALSE

# for single keys this is also possible:
# show value for single key
hash[["tac"]]
## [1] 22

# create new key-value pair
hash[["test"]] <- 1234
get_hash(ls(hash), hash)
##  tac test 
##   22 1234

# update single value
hash[["test"]] <- 54321
get_hash(ls(hash), hash)
##   tac  test 
##    22 54321

So you see that using the built-in hash functionality is quite simple. To get an idea of the performance boost have a look at this very thorough article: http://jeffreyhorner.tumblr.com/post/114524915928/hash-table-performance-in-r-part-i.

Have a look at the following plot from the article:

Horner writes:

Bam! See that blue line? That’s near constant time for searching the entire 2^{15} size hash table!
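If you want to get a rough feeling for this effect on your own machine, here is a small benchmark sketch of mine (it uses the microbenchmark package, which is an extra dependency, and the exact timings will of course differ from system to system). It compares looking up one key in a plain named vector, where R has to scan the names, with looking up the same key in a hashed environment:

library(microbenchmark)

n <- 1e5
keys <- paste0("key", seq_len(n))
values <- seq_len(n)

named_vec <- setNames(values, keys) # lookup by name scans the names linearly

hash_env <- new.env(hash = TRUE, parent = emptyenv(), size = n)
for (i in seq_len(n)) assign(keys[i], values[i], envir = hash_env) # fill the hash

microbenchmark(
  vector_lookup = named_vec[["key99999"]],
  hash_lookup   = get("key99999", envir = hash_env)
)

On typical hardware the hash lookup should come out orders of magnitude faster and, more importantly, its time should stay roughly constant when you increase n.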

If I have whetted your appetite, I want to close with a pointer to a much more professional implementation of hash tables based on environments: https://CRAN.R-project.org/package=hash

I haven’t tried the package myself so far but the author, Christopher Brown, promises:

The hash package is the only full featured hash implementation for the R language. It provides more features and finer control of the hash behavior than the native feature set and has similar and sometimes better performance.

If you use this package (or any other hash implementation) I would love to read about your experience in the comments!

If wealth had anything to do with intelligence…

…the richest man on earth would have a fortune of no more than $43,000! If you don’t believe me read this post!

Have you ever thought about the distribution of wealth as a function of some quality? Rich people especially pride themselves on extraordinary abilities, as if they somehow “deserve” their wealth. Now, “ability” is hard to measure, so let us take intelligence as a proxy: IQ is distributed normally with mean 100 and standard deviation 15. An interesting question is: what is the expected maximum intelligence of all persons alive today?

IQ distribution (Source: Wikimedia)

The first thought could be to simulate this, but this approach soon turns out not to work because you would have to create and analyze a vector with nearly 8 billion elements – R will balk at this.

So we will have to look for an analytical solution – fortunately, asking my colleague Prof. Dr. Google brings up the following question (and answers) on Mathematics Stack Exchange: Expected value for maximum of n normal random variable. According to this, the expected maximum of n independent N(\mu, \sigma) random variables is:

    \[\mu + \sigma n \int_{-\infty}^{\infty} x \phi(x) \Phi(x)^{n-1}dx,\]

where \phi (lowercase phi) and \Phi (uppercase phi) stand for the pdf and cdf of the normal distribution respectively. I spare you the mathematical details of the derivation.

Unfortunately there doesn’t seem to be a closed form for the integral, so we will solve it numerically by making use of the integrate function – first we reproduce the values from the accepted answer:

f <- function(x, n) x * dnorm(x) * pnorm(x)^(n-1)
for (i in 2:10) print(i * integrate(f, n = i, -Inf, Inf, abs.tol = 1e-20)$value) # test
## [1] 0.5641896
## [1] 0.8462844
## [1] 1.029375
## [1] 1.162964
## [1] 1.267206
## [1] 1.352178
## [1] 1.4236
## [1] 1.485013
## [1] 1.538753
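As an additional sanity check of mine (not part of the original answer) we can, for a small n, compare the numerical integral with a brute-force Monte Carlo simulation of the maximum of n standard normal variables – both values should agree to roughly two decimal places:

# Monte Carlo check of the expected maximum for small n
set.seed(123)
n <- 10
sims <- replicate(1e5, max(rnorm(n)))
mean(sims)                                                 # simulated expected maximum
n * integrate(f, n = n, -Inf, Inf, abs.tol = 1e-20)$value  # numerical integral from above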

This seems to be good enough. Now, let us calculate the maximum IQ that is to be expected for the current world population:

world_population <- 7.7e+09 # current world population, also used below
100 + 15 * world_population * integrate(f, n = world_population, -Inf, Inf, abs.tol = 1e-20)$value # expected max IQ of all people alive
## [1] 196.1035

Comparing this with the actual record: Marilyn vos Savant seems to be (one of) the most intelligent persons on the planet. When you read the Wikipedia article you will find that there is some debate about what her actual IQ really is, but it seems to be around 200… so again, the above approximation seems to be good enough.

And now for the final leg: if the wealth distribution followed the same rules as the IQ distribution we could use the same formula to calculate the expected maximum wealth:

global_wealth <- 1.68e+14 # source: Allianz Global Wealth Report 2018
world_population <- 7.7e+09
per_capita_wealth <- global_wealth / world_population
per_capita_wealth_sd <- per_capita_wealth * 0.15 # equivalent to IQ

per_capita_wealth
## [1] 21818.18

per_capita_wealth + per_capita_wealth_sd * world_population * integrate(f, n = world_population, -Inf, Inf, abs.tol = 1e-20)$value
## [1] 42786.21

There you go: no more than $43,000! But the richest man at the moment, Jeff Bezos, has a net worth of about 140 billion US$!!! Conversely, that would translate to an IQ of

1.4e+11 / per_capita_wealth * 100
## [1] 641666667

So, compared to an IQ of over 641 million, Einstein would be the mental equivalent of an amoeba! If IQ were additive, i.e. if two persons with an IQ of 100 each had a combined IQ of 200, you would have to take nearly the whole population of Europe (about 738 million people) to match that; even the whole North American continent with its 579 million inhabitants wouldn’t suffice!

But if it is not ability/intelligence that determines the distribution of wealth what else could account for the extreme inequality we perceive in the world?

Stay tuned for more to come…

Update
The follow-up post with an answer to the last question is now online:
The Rich didn’t earn their Wealth, they just got Lucky

Understanding the Magic of Neural Networks


Everything “neural” is (again) the latest craze in machine learning and artificial intelligence. Now what is the magic here?

Let us dive directly into an (admittedly a little silly) example: we have three protagonists in the fairy tale Little Red Riding Hood: the wolf, the grandmother and the woodcutter. They all have certain qualities, and Little Red Riding Hood reacts in certain ways towards them. For example the grandmother has big eyes, is kindly and wrinkled – Little Red Riding Hood will approach her, kiss her on the cheek and offer her food (the behaviour “flirt with” towards the woodcutter is a little sexist, but we kept it to reproduce the original example from Jones, W. & Hoskins, J.: Back-Propagation, Byte, 1987). We will build and train a neural network which gets the qualities as inputs and Little Red Riding Hood’s behaviour as output, i.e. we train it to learn the adequate behaviour for each quality.

Have a look at the following code and its output including the resulting plot:

library(neuralnet)
library(NeuralNetTools)

# code qualities and actions
qualities <- matrix (c(1, 1, 1, 0, 0, 0,
                       0, 1, 0, 1, 1, 0,
                       1, 0, 0, 1, 0, 1), byrow = TRUE, nrow = 3)
colnames(qualities) <- c("big_ears", "big_eyes", "big_teeth", "kindly", "wrinkled", "handsome")
rownames(qualities) <- c("wolf", "grannie", "woodcutter")
qualities
##            big_ears big_eyes big_teeth kindly wrinkled handsome
## wolf              1        1         1      0        0        0
## grannie           0        1         0      1        1        0
## woodcutter        1        0         0      1        0        1

actions <- matrix (c(1, 1, 1, 0, 0, 0, 0,
                     0, 0, 0, 1, 1, 1, 0,
                     0, 0, 0, 1, 0, 1, 1), byrow = TRUE, nrow = 3)
colnames(actions) <- c("run_away", "scream", "look_for_woodcutter", "kiss_on_cheek", "approach", "offer_food", "flirt_with")
rownames(actions) <- rownames(qualities)
actions
##            run_away scream look_for_woodcutter kiss_on_cheek approach offer_food flirt_with
## wolf              1      1                   1             0        0          0          0
## grannie           0      0                   0             1        1          1          0
## woodcutter        0      0                   0             1        0          1          1

data <- cbind(qualities, actions)

# train the neural network (NN)
set.seed(123) # for reproducibility
neuralnetwork <- neuralnet(run_away + scream+look_for_woodcutter + kiss_on_cheek + approach + 
                           offer_food + flirt_with ~ 
                           big_ears + big_eyes + big_teeth + kindly + wrinkled + handsome,
                           data = data, hidden = 3, exclude = c(1, 8, 15, 22, 26, 30, 34, 38, 42, 46), 
                           lifesign = "minimal", linear.output = FALSE)
## hidden: 3    thresh: 0.01    rep: 1/1    steps:      48  error: 0.01319  time: 0.01 secs

# plot the NN
par_bkp <- par(mar = c(0, 0, 0, 0)) # set different margin to minimize cutoff text
plotnet(neuralnetwork, bias = FALSE)

par(par_bkp)

# predict actions
round(compute(neuralnetwork, qualities)$net.result)
##            [,1] [,2] [,3] [,4] [,5] [,6] [,7]
## wolf          1    1    1    0    0    0    0
## grannie       0    0    0    1    1    1    0
## woodcutter    0    0    0    1    0    1    1

First the qualities and the actions are coded as binary variables and combined into one data set. After that the neural network is trained with the qualities as input and the resulting behaviour as output (using the standard formula syntax). In the neuralnet function a few additional technical arguments are set whose details won’t concern us here; they just simplify the process in this context. Then we plot the learned net and test it by providing it with the respective qualities: in all three cases it predicts the right actions. How did it learn those?

Let us look at the plot of the net. We see that there are two basic building blocks: neurons and weighted connections between them. We have one neuron for each quality and one neuron for each action. Between the two layers we have a so-called hidden layer, in this case with three neurons. The learned strength of the connections between the neurons is shown by the thickness of the lines (‘black’ meaning positive and ‘grey’ meaning negative weights). Please have a thorough look at those weights.

You might have noticed that although the net didn’t know anything about the three protagonists in our little story, it nevertheless correctly built a representation of them: ‘H1’ (for Hidden 1) represents the wolf, because its differentiating quality is ‘big teeth’, which leads to ‘run away’, ‘scream’ and ‘look for woodcutter’; by the same logic ‘H2’ is the woodcutter and ‘H3’ is the grandmother. So the net literally learned to connect the qualities with the respective actions of Little Red Riding Hood by creating a representation of the three protagonists!

So an artificial neural network is obviously a network of neurons… so let us have a look at those neurons! Basically they are mathematical abstractions of real neurons in your brain. They consist of inputs and an output. The biologically inspired idea is that when the activation of the inputs surpasses a certain threshold the neuron fires. To be able to learn, the neuron must, before summing up the inputs, weight them so that the output is not just arbitrary but matches some sensible result. What is ‘sensible’, you might ask? In a biological environment the answer is not always so clear-cut, but in our simple example the neuron just has to match the output we provide it with (= supervised learning).

The following abstraction has all we need: inputs, weights, the sum function, a threshold after that and finally the output of the neuron:

Simple artificial neuron

Let us talk a little bit about what is going on here intuitively. First every input is taken, multiplied by its weight and all of this is summed up. Some of you might recognize this mathematical operation as a scalar product (also called dot product). Another mathematical definition of a scalar product is the following:

    \[\mathbf{a}\cdot\mathbf{b}=\|\mathbf{a}\|\ \|\mathbf{b}\|\cos(\theta)\]

That is, we multiply the lengths of the two vectors by the cosine of the angle between them. What does the cosine have to do with it? The cosine of an angle becomes one when both vectors point in the same direction, zero when they are orthogonal and minus one when they point in opposite directions. Does this make sense? Well, let me give you a little (albeit crude) parable. When growing up there are basically three stages: first you are totally dependent on your parents, then comes puberty and you are against whatever they say or think, and after some years you are truly independent (some never reach that stage…). What does “independent” mean here? It means that you agree with some of the things your parents say and think and disagree with others. During puberty you are as dependent on your parents as you were as a toddler – you just don’t realize it, because, so to speak, you only multiply everything your parents say or think by minus one!

What is the connection with cosine? Well, as a toddler both you and your parents tend to be aligned, which gives one; during puberty you are both aligned but in opposing directions, which gives minus one; and only as a grown-up are you truly independent, which mathematically means that your vector, in a way, points in both directions at the same time – which is only possible when it is orthogonal to the vector of your parents (you have entered a new dimension, literally) – and that gives zero for the cosine.
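To make the connection between the scalar product and the cosine concrete, here is a small numerical illustration (just for intuition, it is not needed for the neuron code below):

a <- c(1, 2, 3)
b <- c(2, 4, 6)    # points in the same direction as a
d <- c(-1, -2, -3) # points in the opposite direction
e <- c(3, 0, -1)   # orthogonal to a

cosine <- function(u, v) sum(u * v) / (sqrt(sum(u^2)) * sqrt(sum(v^2))) # scalar product divided by the lengths
cosine(a, b) # 1: aligned
cosine(a, d) # -1: opposed
cosine(a, e) # 0: orthogonal ("independent")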

So cosine is nothing but a measure of dependence – as is correlation by the way. So this setup ensures that the neuron learns the dependence (or correlation) structure between the inputs and the output! The step function is just a way to help it to decide on which side of the fence it wants to sit, to make the decision clearer whether to fire or not. To sum it up, an artificial neuron is a non-linear function (in this case a step function) on a scalar product of the inputs (fixed) and the weights (adaptable to be able to learn). By adapting the weights the neuron learns the dependence structure between inputs and output.

In R you code this idea of an artificial neuron as follows:

neuron <- function(input) ifelse(weights %*% input > 0, 1, 0) # step function applied to the scalar product of weights and input

Now let us use this idea in R by training an artificial neuron to classify points in a plane. Have a look at the following table:

Input 1   Input 2   Output
      1         0        0
      0         0        1
      1         1        0
      0         1        1

If you plot those points with the colour coded pattern you get the following picture:

The task for the neuron is to find a separating line and thereby classify the two groups. Have a look at the following code:

# inspired by Kubat: An Introduction to Machine Learning, p. 72
plot_line <- function(w, col = "blue", add = TRUE)
  curve(-w[1] / w[2] * x - w[3] / w[2], xlim = c(-0.5, 1.5), ylim = c(-0.5, 1.5), col = col, lwd = 3, xlab = "Input 1", ylab = "Input 2", add = add)
neuron <- function(input) as.vector(ifelse(input %*% weights > 0, 1, 0)) # step function on scalar product of weights and input
eta <- 0.5 # learning rate

# examples
input <- matrix(c(1, 0,
                  0, 0,
                  1, 1,
                  0, 1), ncol = 2, byrow = TRUE)
input <- cbind(input, 1) # bias for intercept of line
output <- c(0, 1, 0, 1)
weights <- c(0.25, 0.2, 0.35) # random initial weights

plot_line(weights, add = FALSE); grid()
points(input[ , 1:2], pch = 16, col = (output + 2))

# training of weights of neuron
for (example in 1:length(output)) {
  weights <- weights + eta * (output[example] - neuron(input[example, ])) * input[example, ]
  plot_line(weights)
}
plot_line(weights, col = "black")

# test: applying neuron on input
apply(input, 1, neuron)
## [1] 0 1 0 1

As you can see, the result matches the desired output. Graphically, the black line is the end result and it separates the green from the red points: the neuron has learned this simple classification task. The blue lines show where the neuron starts from and where it is during training – they are not yet able to classify the points correctly.

The training, i.e. the adaptation of the weights, takes place in this line:

weights <- weights + eta * (output[example] - neuron(input[example, ])) * input[example, ]

The idea is to compare the current output of the neuron with the desired output, scale the difference by some learning factor \eta (eta) and modify the weights accordingly. So if the output is too big, make the weights smaller, and vice versa. Do this for all examples (sometimes you need another loop to train the neuron with the examples several times) and that’s it. That is the core idea behind the ongoing revolution of neural networks!

OK, so far we have had a closer look at one part of neural networks, namely the neurons; let us now turn to the network structure (also called network topology). First, why do we need a whole network anyway when a single neuron is already able to solve classification tasks? The answer is that it can do so only for very simple problems. For example, the neuron above can only distinguish between linearly separable points, i.e. it can only draw lines. It fails in the case of the simple problem of four points that are coloured green, red, red, green from top left to bottom right (try it yourself) – we would need a non-linear function to separate the points. We have to combine several neurons to solve more complicated problems.
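If you want to try it yourself, here is a small sketch of mine that reuses the neuron function, the learning rate eta and the training loop from above on exactly this pattern (green = 1, red = 0, as in the colour coding above). No matter how many training passes you add, a single neuron cannot get all four points right, because no single straight line separates them – only a net with a hidden layer (like the one from the Little Red Riding Hood example) can:

# XOR-like pattern: green, red, red, green from top left to bottom right
input <- matrix(c(0, 1,
                  1, 1,
                  0, 0,
                  1, 0), ncol = 2, byrow = TRUE)
input <- cbind(input, 1)      # bias for intercept of line, as before
output <- c(1, 0, 0, 1)       # green = 1, red = 0
weights <- c(0.25, 0.2, 0.35) # same initial weights as above

for (epoch in 1:100) {        # many passes over the four examples
  for (example in 1:length(output)) {
    weights <- weights + eta * (output[example] - neuron(input[example, ])) * input[example, ]
  }
}
apply(input, 1, neuron) # never matches c(1, 0, 0, 1) for all four points at once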

The biggest problem you have to overcome when you combine several neurons is how to adapt all the weights. You need a system for attributing the error at the output layer to all the weights in the net. This had been a profound obstacle until an algorithm called backpropagation (also abbreviated backprop) was invented (or found). We won’t get into the details here, but the general idea is to work backwards from the output layer through all of the hidden layers until one reaches the input layer, and to modify the weights according to their respective contribution to the resulting error. This is done several times (sometimes millions of times) for all training examples until one achieves an acceptable error rate on the training data.
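To give at least a flavour of the details we are skipping: for the squared error and a differentiable activation function f, the textbook weight update of backpropagation reads as follows (standard notation, added here for reference; it is not a derivation specific to the neuralnet package):

    \[w_{ij} \leftarrow w_{ij} - \eta \, \delta_j \, o_i \quad \text{with} \quad \delta_j = \begin{cases} f'(net_j) \, (o_j - t_j) & \text{if } j \text{ is an output neuron,} \\ f'(net_j) \sum_k \delta_k \, w_{jk} & \text{if } j \text{ is a hidden neuron,} \end{cases}\]

where o_i is the output of neuron i, net_j the weighted input sum of neuron j, t_j the target value and \eta the learning rate. The \delta of a hidden neuron is assembled from the \delta values of the layer behind it – which is exactly the “working backwards” described above – and for a single output neuron the rule reduces (up to the factor f'(net_j)) to the simple update we used in the perceptron example.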

The result is that you get several layers of abstraction: when you, for example, want to train a neural network to recognize certain faces, you start with the raw input data in the form of pixels; these are automatically combined into abstract geometrical structures; after that the net detects certain elements of faces, like eyes and noses; and finally abstractions of whole faces are built by the net. See the following picture (from nivdul.wordpress.com) for an illustration:

So far we have only coded very small examples of neural networks. Real-world examples often have dozens of layers with thousands of neurons so that much more complicated patterns can be learned. The more layers there are, the ‘deeper’ a net becomes… which is why the current revolution in this field is called “deep learning”: there are so many hidden layers involved. Let us now look at a more realistic example: predicting whether a breast cell is malignant or benign.

Have a look at the following code:

library(OneR)
data(breastcancer)
data <- breastcancer
colnames(data) <- make.names(colnames(data))
data$Class <- as.integer(as.numeric(data$Class) - 1) # for compatibility with neuralnet
data <- na.omit(data)

# Divide training (80%) and test set (20%)
set.seed(12) # for reproducibility
random <- sample(1:nrow(data), 0.8 * nrow(data))
data_train <- data[random, ]
data_test <- data[-random, ]

# Train NN on training set
model_train <- neuralnet(Class ~., data = data_train, hidden = c(9, 9), lifesign = "minimal")
## hidden: 9, 9    thresh: 0.01    rep: 1/1    steps:    3784   error: 0.00524  time: 3.13 secs

# Plot net
plot(model_train, rep = "best")

# Use trained model to predict test set
prediction <- round(predict(model_train, data_test))
eval_model(prediction, data_test)
## 
## Confusion matrix (absolute):
##           Actual
## Prediction   0   1 Sum
##        0    93   2  95
##        1     4  38  42
##        Sum  97  40 137
## 
## Confusion matrix (relative):
##           Actual
## Prediction    0    1  Sum
##        0   0.68 0.01 0.69
##        1   0.03 0.28 0.31
##        Sum 0.71 0.29 1.00
## 
## Accuracy:
## 0.9562 (131/137)
## 
## Error rate:
## 0.0438 (6/137)
## 
## Error rate reduction (vs. base rate):
## 0.85 (p-value = 1.298e-13)

So you see that a relatively simple net achieves an accuracy of about 95% out of sample. The code itself should be mostly self-explanatory. For the actual training the neuralnet function from the package of the same name is used; the input method is the standard R formula interface, where you define Class as the variable to be predicted using all the other variables (coded as Class ~ .).

When you look at the net one thing might strike you as odd: there are three neurons at the top with a fixed value of 1. These are so-called bias neurons and they serve a similar purpose as the intercept in a linear regression: they shift the model as a whole in n-dimensional feature space, just as a regression line is shifted by the intercept. If you were attentive, you noticed that we also smuggled a bias neuron into the above example of a single neuron: it is the last column of the input matrix, which contains only ones.

Another thing: as can be seen even in this simple example, it is very hard to find out what a neural network has actually learned – the following well-known anecdote (urban legend?) shall serve as a warning: some time ago the military built a system whose aim was to distinguish military vehicles from civilian ones. They chose a neural network approach and trained the system with pictures of tanks, humvees and missile launchers on the one hand and normal cars, pickups and lorries on the other. After having reached a satisfactory accuracy they brought the system into the field (quite literally). It failed completely, performing no better than a coin toss. What had happened? No one knew, so they reverse-engineered the black box (no small feat in itself) and found that most of the military pics were taken at dusk or dawn and most civilian pics under brighter weather conditions. The neural net had learned the difference between light and dark!

Just for comparison the same example with the OneR package:

data(breastcancer)
data <- breastcancer

# Divide training (80%) and test set (20%)
set.seed(12) # for reproducibility
random <- sample(1:nrow(data), 0.8 * nrow(data))
data_train <- optbin(data[random, ], method = "infogain")
## Warning in optbin.data.frame(data[random, ], method = "infogain"): 12
## instance(s) removed due to missing values
data_test <- data[-random, ]

# Train OneR model on training set
model_train <- OneR(data_train, verbose = TRUE)
## 
##     Attribute                   Accuracy
## 1 * Uniformity of Cell Size     92.32%  
## 2   Uniformity of Cell Shape    91.59%  
## 3   Bare Nuclei                 90.68%  
## 4   Bland Chromatin             90.31%  
## 5   Normal Nucleoli             90.13%  
## 6   Single Epithelial Cell Size 89.4%   
## 7   Marginal Adhesion           85.92%  
## 8   Clump Thickness             84.28%  
## 9   Mitoses                     78.24%  
## ---
## Chosen attribute due to accuracy
## and ties method (if applicable): '*'

# Show model and diagnostics
summary(model_train)
## 
## Call:
## OneR.data.frame(x = data_train, verbose = TRUE)
## 
## Rules:
## If Uniformity of Cell Size = (0.991,2] then Class = benign
## If Uniformity of Cell Size = (2,10]    then Class = malignant
## 
## Accuracy:
## 505 of 547 instances classified correctly (92.32%)
## 
## Contingency table:
##            Uniformity of Cell Size
## Class       (0.991,2] (2,10] Sum
##   benign        * 318     30 348
##   malignant        12  * 187 199
##   Sum             330    217 547
## ---
## Maximum in each column: '*'
## 
## Pearson's Chi-squared test:
## X-squared = 381.78243, df = 1, p-value < 0.00000000000000022204

# Plot model diagnostics
plot(model_train)

# Use trained model to predict test set
prediction <- predict(model_train, data_test)

# Evaluate model performance on test set
eval_model(prediction, data_test)
## 
## Confusion matrix (absolute):
##            Actual
## Prediction  benign malignant Sum
##   benign        92         0  92
##   malignant      8        40  48
##   Sum          100        40 140
## 
## Confusion matrix (relative):
##            Actual
## Prediction  benign malignant  Sum
##   benign      0.66      0.00 0.66
##   malignant   0.06      0.29 0.34
##   Sum         0.71      0.29 1.00
## 
## Accuracy:
## 0.9429 (132/140)
## 
## Error rate:
## 0.0571 (8/140)
## 
## Error rate reduction (vs. base rate):
## 0.8 (p-value = 0.000000000007992571)

As you can see the accuracy is only slightly worse but you have full interpretability of the model… and you would only need to measure one value (“Uniformity of Cell Size”) instead of 9 to get a prediction!

Captured images of layers of glass with smears of breast mass (the parts stained correspond to cell nuclei) – Source

On the other hand making neural networks interpretable is one of the big research challenges at the moment.

To end this rather long post: there is a real revolution going on at the moment with all kinds of powerful neural networks. Especially promising is the combination of reinforcement learning (the topic of an upcoming post) and neural networks, where the reinforcement learning algorithm uses a neural network as its memory. For example, the revolutionary AlphaGo Zero is built this way: it just received the rules of Go, one of the most demanding strategy games humanity has ever invented, and reached superhuman strength after just three days! The highest human rank in Go corresponds to an Elo rating of about 2940 – AlphaGo Zero achieves 5185! Even the best players don’t stand a chance against this monster of a machine. The neural network technology used for AlphaGo Zero and many other deep neural networks is TensorFlow, which can also easily be integrated into the R environment. To find out more go here: https://tensorflow.rstudio.com/

In this whole area there are many mind-blowing projects underway, so stay tuned!

Understanding the Maths of Computed Tomography (CT) scans


Noseman is having a headache and as an old-school hypochondriac he goes to see his doctor. His doctor is quite worried and makes an appointment with a radiologist for Noseman to get a CT scan.

Modern CT scanner from Siemens

Because Noseman always wants to know how things work he asks the radiologist about the inner workings of a CT scanner.

The basic idea is that X-rays are fired from one side of the scanner to the other. Because different sorts of tissue (like bones, brain cells, cartilage etc.) block different amounts of the X-rays the intensity measured on the other side varies accordingly.

The problem is of course that a single picture cannot give the full details of what is inside the body because it is a combination of different sorts of tissue in the way of the respective X-rays. The solution is to rotate the scanner and combine the different slices.

How, you ask? Good old linear algebra to the rescue!

We start with the initial position and fire X-rays with an intensity of 30 (just a number for illustrative purposes) through the body:

Initial position

As can be seen in the picture, the upper ray goes through areas 1, 2 and 3; let’s say an intensity value of 12 is measured on the other side of the scanner:

    \[30-x_1-x_2-x_3=12\]

or

    \[x_1+x_2+x_3=18\]

The rest of the formula is found accordingly:

    \[\underbrace{ \begin{pmatrix}   1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\   0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\   0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{pmatrix} }_{\bold{A}_1} \underbrace{ \begin{pmatrix}   x_1 \\   x_2 \\   x_3 \\   x_4 \\   x_5 \\   x_6 \\   x_7 \\   x_8 \\   x_9 \end{pmatrix} }_{\bold{x}} = \underbrace{ \begin{pmatrix}   18 \\   21 \\   18 \end{pmatrix} }_{\bold{b}_1}\]

We then rotate the scanner for the first time…

Position after first rotation

…which gives the following formula:

    \[\underbrace{ \begin{pmatrix}   0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\   0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\   0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix} }_{\bold{A}_2} \underbrace{ \begin{pmatrix}   x_1 \\   x_2 \\   x_3 \\   x_4 \\   x_5 \\   x_6 \\   x_7 \\   x_8 \\   x_9 \end{pmatrix} }_{\bold{x}} = \underbrace{ \begin{pmatrix}   18 \\   21 \\    9 \end{pmatrix} }_{\bold{b}_2}\]

And a second rotation…

Position after second rotation

…yields the following formula:

    \[\underbrace{ \begin{pmatrix}   0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\   0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\   0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \end{pmatrix} }_{\bold{A}_3} \underbrace{ \begin{pmatrix}   x_1 \\   x_2 \\   x_3 \\   x_4 \\   x_5 \\   x_6 \\   x_7 \\   x_8 \\   x_9 \end{pmatrix} }_{\bold{x}} = \underbrace{ \begin{pmatrix}   18 \\   14 \\   16 \end{pmatrix} }_{\bold{b}_3}\]

Now we are combining all three systems of equations:

    \[\begin{pmatrix}   \bold{A}_1 \\   \bold{A}_2 \\   \bold{A}_3 \end{pmatrix} \bold{x} = \begin{pmatrix}   \bold{b}_1 \\   \bold{b}_2 \\   \bold{b}_3 \end{pmatrix}\]

or written out in full:

    \[\underbrace{ \begin{pmatrix}   1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\   0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\   0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\   0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\   0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\   0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\   0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\   0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\   0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \end{pmatrix} }_{\bold{A}} \underbrace{ \begin{pmatrix}   x_1 \\   x_2 \\   x_3 \\   x_4 \\   x_5 \\   x_6 \\   x_7 \\   x_8 \\   x_9 \end{pmatrix} }_{\bold{x}} = \underbrace{ \begin{pmatrix}   18 \\   21 \\   18 \\   18 \\   21 \\    9 \\   18 \\   14 \\   16 \end{pmatrix} }_{\bold{b}}\]

Here is the data of the matrix \bold{A} for you to download: ct-scan.txt.
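If you don’t want to download the file you can also type the matrix \bold{A} from the equation above directly into R (this is equivalent to the downloaded data, just self-contained):

# the system matrix A, row by row, exactly as written out above
A <- matrix(c(1, 1, 1, 0, 0, 0, 0, 0, 0,
              0, 0, 0, 1, 1, 1, 0, 0, 0,
              0, 0, 0, 0, 0, 0, 1, 1, 1,
              0, 1, 1, 1, 0, 0, 0, 0, 0,
              0, 0, 0, 0, 1, 1, 1, 0, 0,
              0, 0, 0, 0, 0, 0, 0, 1, 1,
              0, 1, 0, 1, 0, 0, 0, 0, 0,
              0, 0, 1, 0, 1, 0, 1, 0, 0,
              0, 0, 0, 0, 0, 1, 0, 1, 0), nrow = 9, byrow = TRUE)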

We now have 9 equations with 9 unknown variables… which R can easily solve, and it can also depict the solution as a gray-scale image… the actual CT scan!

A <- read.csv("data/ct-scan.txt")
b <- c(18, 21, 18, 18, 21, 9, 18, 14, 16)
v <- solve(A, b)
matrix(v, ncol = 3, byrow = TRUE)
##      [,1] [,2] [,3]
## [1,]    9    9    0
## [2,]    9    5    7
## [3,]    9    9    0
image(matrix(v, ncol = 3), col = gray(4:0 / 4))
CT of Noseman

The radiologist looks at the picture… and has good news for Noseman: everything is how it should be! Noseman is relieved and his headache is much better now…

Real CT scans make use of the same basic principles (of course with a lot of additional engineering and maths magic 😉 )

Here are real images of CT scans of a human brain…

Source: Wikimedia

… which can be combined into a 3D-animation:

Source: Wikimedia

Isn’t it fascinating how a little bit of maths can save lives!

Check Machin-like formulae with arbitrary-precision arithmetic

Happy New Year to all of you! Let us start the year with something for your inner maths nerd 🙂

Rosetta Stone
Source: Wikimedia

For those of you who don’t yet know Rosetta Code: it is a really cool site where you can find lots of interesting code examples in all kinds of different languages for many different tasks. Of course R is also present big time (426 code examples at the time of writing!): Rosetta Code for R.

The name of the site is inspired by the famous Rosetta Stone of Ancient Egypt, which is inscribed with three versions of the same text: in Ancient Egyptian hieroglyphs, Demotic script and Ancient Greek. It proved invaluable in deciphering Egyptian hieroglyphs and thereby opened a window into ancient Egyptian history.

Now, a few days ago I again added an example (for the other tasks I have solved I will write more posts in the future, so stay tuned!). The task is to verify the correctness of Machin-like formulae using exact arithmetic.

A little bit of mathematical background is in order, so Wikipedia to the rescue:

Machin-like formulae are a popular technique for computing \pi to a large number of digits. They are generalizations of John Machin’s formula from 1706:

    \[\frac{\pi}{4} = 4 \arctan \frac{1}{5} - \arctan \frac{1}{239}\]

which he used to compute \pi to 100 decimal places.

Machin-like formulae have the form

    \[c_0 \frac{\pi}{4} = \sum_{n=1}^N c_n \arctan \frac{a_n}{b_n}\]

where a_n and b_n are positive integers such that a_n < b_n, c_n is a signed non-zero integer, and c_0 is a positive integer.

The exact task is to verify that the following Machin-like formulae are correct by calculating the value of tan (right hand side) for each equation using exact arithmetic and showing that it equals one:

{\pi\over4} = \arctan{1\over2} + \arctan{1\over3}
{\pi\over4} = 2 \arctan{1\over3} + \arctan{1\over7}
{\pi\over4} = 4 \arctan{1\over5} - \arctan{1\over239}
{\pi\over4} = 5 \arctan{1\over7} + 2 \arctan{3\over79}
{\pi\over4} = 5 \arctan{29\over278} + 7 \arctan{3\over79}
{\pi\over4} = \arctan{1\over2} + \arctan{1\over5} + \arctan{1\over8}
{\pi\over4} = 4 \arctan{1\over5} - \arctan{1\over70} + \arctan{1\over99}
{\pi\over4} = 5 \arctan{1\over7} + 4 \arctan{1\over53} + 2 \arctan{1\over4443}
{\pi\over4} = 6 \arctan{1\over8} + 2 \arctan{1\over57} + \arctan{1\over239}
{\pi\over4} = 8 \arctan{1\over10} - \arctan{1\over239} - 4 \arctan{1\over515}
{\pi\over4} = 12 \arctan{1\over18} + 8 \arctan{1\over57} - 5 \arctan{1\over239}
{\pi\over4} = 16 \arctan{1\over21} + 3 \arctan{1\over239} + 4 \arctan{3\over1042}
{\pi\over4} = 22 \arctan{1\over28} + 2 \arctan{1\over443} - 5 \arctan{1\over1393} - 10 \arctan{1\over11018}
{\pi\over4} = 22 \arctan{1\over38} + 17 \arctan{7\over601} + 10 \arctan{7\over8149}
{\pi\over4} = 44 \arctan{1\over57} + 7 \arctan{1\over239} - 12 \arctan{1\over682} + 24 \arctan{1\over12943}

The same should be done for the last and most complicated case…

{\pi\over4} = 88 \arctan{1\over172} + 51 \arctan{1\over239} + 32 \arctan{1\over682} + 44 \arctan{1\over5357} + 68 \arctan{1\over12943}

… but it should be confirmed that the following, slightly changed, formula is incorrect by showing tan (right hand side) is not one:

{\pi\over4} = 88 \arctan{1\over172} + 51 \arctan{1\over239} + 32 \arctan{1\over682} + 44 \arctan{1\over5357} + 68 \arctan{1\over12944}
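Why does checking whether tan of the right-hand side equals one work in the first place? Because of the tangent addition formula (a standard identity, added here as background):

    \[\tan(\alpha + \beta) = \frac{\tan\alpha + \tan\beta}{1 - \tan\alpha \, \tan\beta}\]

Applied repeatedly to a sum of arctangents of rational numbers it always yields a rational number, which can in principle be evaluated exactly – and for a correct Machin-like formula this value must be exactly \tan{\pi\over4} = 1, while an incorrect formula misses one by a tiny but non-zero amount.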

This is what I contributed to Rosetta Code:

library(Rmpfr)
prec <- 1000 # precision in bits
`%:%` <- function(e1, e2) '/'(mpfr(e1, prec), mpfr(e2, prec)) # operator %:% for high precision division
# function for checking identity of tan of expression and 1, making use of high precision division operator %:%
tanident_1 <- function(x) identical(round(tan(eval(parse(text = gsub("/", "%:%", deparse(substitute(x)))))), (prec/10)), mpfr(1, prec))
 
tanident_1( 1*atan(1/2)    +  1*atan(1/3) )
## [1] TRUE
tanident_1( 2*atan(1/3)    +  1*atan(1/7))
## [1] TRUE
tanident_1( 4*atan(1/5)    + -1*atan(1/239))
## [1] TRUE
tanident_1( 5*atan(1/7)    +  2*atan(3/79))
## [1] TRUE
tanident_1( 5*atan(29/278) +  7*atan(3/79))
## [1] TRUE
tanident_1( 1*atan(1/2)    +  1*atan(1/5)   +   1*atan(1/8) )
## [1] TRUE
tanident_1( 4*atan(1/5)    + -1*atan(1/70)  +   1*atan(1/99) )
## [1] TRUE
tanident_1( 5*atan(1/7)    +  4*atan(1/53)  +   2*atan(1/4443))
## [1] TRUE
tanident_1( 6*atan(1/8)    +  2*atan(1/57)  +   1*atan(1/239))
## [1] TRUE
tanident_1( 8*atan(1/10)   + -1*atan(1/239) +  -4*atan(1/515))
## [1] TRUE
tanident_1(12*atan(1/18)   +  8*atan(1/57)  +  -5*atan(1/239))
## [1] TRUE
tanident_1(16*atan(1/21)   +  3*atan(1/239) +   4*atan(3/1042))
## [1] TRUE
tanident_1(22*atan(1/28)   +  2*atan(1/443) +  -5*atan(1/1393) + -10*atan(1/11018))
## [1] TRUE
tanident_1(22*atan(1/38)   + 17*atan(7/601) +  10*atan(7/8149))
## [1] TRUE
tanident_1(44*atan(1/57)   +  7*atan(1/239) + -12*atan(1/682)  +  24*atan(1/12943))
## [1] TRUE

tanident_1(88*atan(1/172)  + 51*atan(1/239) +  32*atan(1/682)  +  44*atan(1/5357) + 68*atan(1/12943))
## [1] TRUE
tanident_1(88*atan(1/172)  + 51*atan(1/239) +  32*atan(1/682)  +  44*atan(1/5357) + 68*atan(1/12944))
## [1] FALSE

As you can see all statements are TRUE except for the last one!

In the code I make use of the Rmpfr package (by Martin Maechler of ETH Zürich, Switzerland), which is based on the excellent MPFR (Multiple Precision Floating-Point Reliable) library, which in turn builds on GMP (GNU Multiple Precision). I define a new infix operator %:% for high-precision division, convert all standard divisions in the formulae to high-precision divisions and calculate the tan. Before I check whether the result is identical to one I round it to 100 decimal places, which is more than enough given the precision of log_{10}(2^{1000}) \approx 301, i.e. about 300 decimal places, in this example.
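Just to illustrate what this precision buys us (a little extra example of mine, not part of the Rosetta Code task): with the same %:% operator we can compute \pi itself from Machin’s original formula and compare it with Rmpfr’s built-in constant – the difference is zero to roughly 300 decimal places:

# compute pi from Machin's original formula with 1000 bits of precision
machin_pi <- 4 * (4 * atan(1 %:% 5) - atan(1 %:% 239))
machin_pi - Const("pi", prec) # practically zero (on the order of 10^-300)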

Please let me know in the comments what you think of this approach and whether you see room for improvement for the code – Thank you!