3.84, or How to Detect BS (Fast)

In “From Coin Tosses to p-Hacking: Make Statistics Significant Again!” I explained the general principles behind statistical testing. Here I will give you a simple method you can use for quick calculations to check whether something fishy is going on (i.e. a fast statistical BS detector), so read on!
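
As a small appetizer (and an assumption on my part about where the number in the title comes from, since the details are in the post itself): 3.84 is the 95% quantile of the chi-squared distribution with one degree of freedom, which makes it a handy critical value for quickly eyeballing a 2×2 table. A minimal sketch in R with made-up counts:

qchisq(0.95, df = 1)                         # ~3.84, the magic number from the title

# illustrative (made-up) 2x2 table: group A/B vs. success/failure
tab <- matrix(c(30, 20,
                18, 32), nrow = 2, byrow = TRUE)
chisq.test(tab, correct = FALSE)$statistic   # bigger than 3.84? Then something is going on
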
Continue reading “3.84, or How to Detect BS (Fast)”

Network Analysis: Who is the Most Important Influencer?

Networks are everywhere: traffic infrastructure and the internet come to mind, but networks are also found in nature: food chains, protein-interaction networks, genetic interaction networks and, of course, neural networks, which are modelled by Artificial Neural Networks.

In this post, we will create a small network (mathematically also called a graph) and ask which is the “most important” node (also called vertex, pl. vertices). If you want to understand important concepts of network centrality and how to calculate them in R, read on!
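
To give you a flavour of what is coming, here is a minimal sketch with a small toy graph I made up, using the igraph package (the post may of course use a different network and additional measures):

library(igraph)

g <- graph_from_literal(A-B, A-C, B-C, C-D, D-E)   # a small made-up network
degree(g)                    # number of direct connections
betweenness(g)               # how often a vertex lies on shortest paths between others
closeness(g)                 # how close a vertex is to all other vertices
eigen_centrality(g)$vector   # influence, weighted by the influence of the neighbours
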
Continue reading “Network Analysis: Who is the Most Important Influencer?”

Local Differential Privacy: Getting Honest Answers on Embarrassing Questions

Do you cheat on your partner? Do you take drugs? Are you gay? Are you an atheist? Did you have an abortion? Will you vote for the right-wing candidate? Not all people feel comfortable answering those kinds of questions honestly in every situation.

So, is there a method to find the respective proportion of people without putting them on the spot? Actually, there is! If you want to learn about randomized response (and how to create flowcharts in R along the way), read on!
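
To whet your appetite, here is a minimal sketch of one classic randomized-response scheme, the coin-flip variant (the post may use a different setup), simulated in R with an assumed true proportion of 20%:

set.seed(123)
n <- 1e4
p_true <- 0.2                    # assumed (illustrative) true proportion

truth <- rbinom(n, 1, p_true)    # the sensitive attribute
coin1 <- rbinom(n, 1, 0.5)       # heads: answer truthfully...
coin2 <- rbinom(n, 1, 0.5)       # ...tails: answer "yes" iff a second coin shows heads
answer <- ifelse(coin1 == 1, truth, coin2)

# P(yes) = 0.5 * p + 0.25, therefore:
p_hat <- 2 * mean(answer) - 0.5
p_hat                            # close to 0.2, yet no single answer gives anybody away
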
Continue reading “Local Differential Privacy: Getting Honest Answers on Embarrassing Questions”

Kalman Filter as a Form of Bayesian Updating


The Kalman filter is a very powerful algorithm for optimally combining uncertain information from a dynamically changing system to come up with the best educated guess about its current state. Applications include (car) navigation and stock forecasting. If you want to understand how a Kalman filter works and build a toy example in R, read on!
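
To show how little code is needed, here is a minimal one-dimensional sketch with an assumed random-walk model and made-up noise levels (not the post’s own example):

set.seed(42)
n <- 50
x <- cumsum(rnorm(n, 0, 0.1))            # true, slowly drifting state
z <- x + rnorm(n, 0, 1)                  # noisy measurements

Q <- 0.1^2; R <- 1^2                     # process and measurement noise variances
xhat <- numeric(n); P <- numeric(n)
xhat[1] <- z[1]; P[1] <- 1

for (k in 2:n) {
  x_pred <- xhat[k-1]; P_pred <- P[k-1] + Q     # predict
  K <- P_pred / (P_pred + R)                    # Kalman gain
  xhat[k] <- x_pred + K * (z[k] - x_pred)       # update: weigh prediction vs. measurement
  P[k] <- (1 - K) * P_pred
}

plot(z, col = "grey"); lines(x, col = "blue"); lines(xhat, col = "red")
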
Continue reading “Kalman Filter as a Form of Bayesian Updating”

Time Series Analysis: Forecasting Sales Data with Autoregressive (AR) Models


Forecasting the future has always been one of mankind’s biggest desires, and many approaches have been tried over the centuries. In this post, we will look at a simple statistical method for time series analysis, the Autoregressive (AR) model. We will use this method to predict future sales data and rebuild it to gain a deeper understanding of how it works, so read on!
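
As a foretaste, a minimal sketch with simulated sales figures (purely illustrative data, not the data set used in the post):

set.seed(1)
sales <- arima.sim(model = list(ar = c(0.6, 0.3)), n = 120) + 100   # fake monthly sales

fit <- arima(sales, order = c(2, 0, 0))   # fit an AR(2) model
predict(fit, n.ahead = 6)$pred            # point forecasts for the next six periods
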
Continue reading “Time Series Analysis: Forecasting Sales Data with Autoregressive (AR) Models”

COVID-19: False Positive Alarm

In this post, we are going to use R to replicate an analysis from the current issue of Scientific American about a common mathematical pitfall of Coronavirus antibody tests.

Many people think that when they get a positive result on such a test, they are immune to the virus with high probability. If you want to find out why nothing could be further from the truth, read on!
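
The pitfall is essentially a base-rate effect. A minimal sketch with assumed, illustrative numbers (not the ones from the Scientific American article):

prevalence  <- 0.05   # assumed share of people who actually had the virus
sensitivity <- 0.95   # P(positive test | antibodies)
specificity <- 0.95   # P(negative test | no antibodies)

# Bayes' theorem: P(antibodies | positive test)
p_pos <- sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv   <- sensitivity * prevalence / p_pos
ppv                   # only 0.5: even a positive result leaves a 50:50 chance here
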
Continue reading “COVID-19: False Positive Alarm”

Learning Data Science: A/B Testing in Under One Minute


Google does it! Facebook does it! Amazon does it for sure!

Especially in the areas of web design and online advertising, everybody is talking about A/B testing. If you want to quickly understand what it is and how you can do it with R, read on!
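
If you cannot wait, here is one common way to do it in R (a minimal sketch with made-up conversion numbers, not necessarily the post’s approach):

conversions <- c(A = 120, B = 155)   # assumed conversions per variant
visitors    <- c(A = 2000, B = 2000) # assumed visitors per variant

prop.test(conversions, visitors)     # two-sample test for equality of proportions
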
Continue reading “Learning Data Science: A/B Testing in Under One Minute”

Learning Statistics: Randomness is a Strange Beast


Our intuition concerning randomness is, strangely enough, quite limited. While we expect it to behave in certain ways (which it doesn’t), it shows some regularities that have unexpected consequences. In a series of seemingly random posts, I will highlight some of those regularities as well as their consequences. If you want to learn something about randomness’ strange behaviour and gain some intuition, read on!
Continue reading “Learning Statistics: Randomness is a Strange Beast”

Lying with Statistics: One Beer a Day will Kill you!


About two years ago, the renowned medical journal “The Lancet” came out with the rather sensational conclusion that there is no safe level of alcohol consumption, so every little hurts! For example, drinking a bottle of beer per day (half a litre) would increase your risk of developing a serious health problem within one year by a whopping 7%! When I read that, I had to calm my nerves by having a drink!

OK, kidding aside: in this post, you will learn how to lie with statistics by deviously mixing up relative and absolute changes in risk, so read on!
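
The trick in a nutshell, with an assumed, purely illustrative baseline risk (not the Lancet study’s actual figures):

baseline_risk     <- 0.01            # assumed 1% risk of a serious health problem
relative_increase <- 0.07            # "a whopping 7%!"

new_risk <- baseline_risk * (1 + relative_increase)
new_risk - baseline_risk             # absolute increase: 0.0007, i.e. 0.07 percentage points
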
Continue reading “Lying with Statistics: One Beer a Day will Kill you!”

ZeroR: The Simplest Possible Classifier, or Why High Accuracy can be Misleading


In one of my most popular posts, “So, what is AI really?”, I showed that Artificial Intelligence (AI) basically boils down to autonomously learned rules, i.e. conditional statements or, simply, conditionals.

In this post, I create the simplest possible classifier, called ZeroR, to show that even this classifier can achieve surprisingly high accuracy (i.e. the proportion of correctly predicted instances)… and why this is not necessarily a good thing, so read on!
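
As a little preview, here is a minimal sketch of the idea with a simulated, imbalanced data set (not the data used in the post): ZeroR simply predicts the majority class for everything.

set.seed(7)
actual <- factor(sample(c("no", "yes"), 1000, replace = TRUE, prob = c(0.9, 0.1)))

majority <- names(which.max(table(actual)))            # the only "rule" ZeroR learns
pred <- factor(rep(majority, length(actual)), levels = levels(actual))

mean(pred == actual)                                   # about 0.9 accuracy without learning anything useful
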
Continue reading “ZeroR: The Simplest Possible Classifier, or Why High Accuracy can be Misleading”