We already covered the so-called Accuracy-Interpretability Trade-Off, which states that, oftentimes, the more accurate the results of an AI are, the harder it is to interpret how it arrived at its conclusions (see also: Learning Data Science: Predicting Income Brackets).
This is especially true for Neural Networks: while often delivering outstanding results, they are basically black boxes and notoriously hard to interpret (see also: Understanding the Magic of Neural Networks).
There is a hot new area of research dedicated to making black-box models interpretable, called Explainable Artificial Intelligence (XAI). If you want to gain some intuition about one such approach (called LIME), read on!
Continue reading “Explainable AI (XAI)… Explained! Or: How to whiten any Black Box with LIME”
Learning Machines proudly presents a fascinating guest post by decision and risk analyst Robert D. Brown III, with a great application of R in business and especially in the startup arena! I encourage you to visit his blog too: Thales’ Press. Have fun!
Continue reading “Business Case Analysis with R (Guest Post)”
Happy New Year to all of you! 2020 is here and it seems that we are being overwhelmed by more and more irrationality, especially fake news and conspiracy theories.
In this post, I will give you some indication that this might actually not be the case (shock horror: good news alert!). We will be using Google Trends for that: if you want to know what Google Trends is, and how to query it from within R and process the retrieved data, read on!
Continue reading “Psst, don’t tell anybody: The World is getting more rational!”
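As a small taste of what the full post covers, querying Google Trends from R can be done with the gtrendsR package. This is only a minimal sketch: the keyword and time window below are illustrative choices of mine, not taken from the post, and the query needs a working internet connection.

```r
# Minimal sketch, assuming the gtrendsR package is installed
# (install.packages("gtrendsR")) and internet access is available.
library(gtrendsR)

# Query worldwide search interest; keyword and time window are
# illustrative assumptions, not the ones used in the post.
res <- gtrends(keyword = "fake news", time = "all")

# The result is a list; interest over time comes back as a data frame
# with (among others) the columns date, hits and keyword.
trend <- res$interest_over_time
head(trend[, c("date", "hits", "keyword")])

# gtrendsR ships a plot method for a quick visual check.
plot(res)
```

From here, the retrieved data frame can be processed like any other in R, e.g. aggregated by year or compared across several keywords.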