In this post, let us rise into the air to have a good view of the stock market. From this vantage point, seemingly unrelated things all of a sudden become connected and patterns hidden by all the buzz and noise start to appear!
If you want to understand the big picture of technical and fundamental analysis, their relation to the payoffs of certain option strategies, and what all this has to do with buy-and-hold, read on!
Continue reading “The Big Picture: Technical + Fundamental Analysis = Buy-and-Hold”
The topic of consciousness never ceases to fascinate me. I have already written about it here: Will AI become conscious any time soon?. This essay can be seen as a continuation of my ongoing journey towards an integrated (holistic) worldview of mind and matter, one that has a spiritual dimension to it yet is grounded in science.
Continue reading “Cosmopsychism and the Many Worlds Interpretation: A Monistic Perspective on Consciousness and Quantum Mechanics”
Word embedding, self-attention, and next-word prediction lie at the core of LLMs like ChatGPT. If you are curious about how these techniques work and want to see a simple example in R, read on!
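As a small taste of the topic, here is a sketch of scaled dot-product self-attention in R. The embedding matrix and its dimensions are made up for illustration (and query, key, and value all equal the embeddings here for simplicity); the post's own example may look different:

```r
# Toy self-attention over three "words", each represented by a
# hypothetical 4-dimensional random embedding.
set.seed(42)
E <- matrix(rnorm(12), nrow = 3, ncol = 4)  # 3 tokens x 4 dimensions

d <- ncol(E)
scores <- E %*% t(E) / sqrt(d)              # similarity of every token pair

softmax <- function(x) exp(x) / sum(exp(x)) # turn scores into weights
W <- t(apply(scores, 1, softmax))           # one weight vector per token

output <- W %*% E                           # context-aware representations
```

Each row of `W` sums to 1, so every token's new representation is a weighted average of all token embeddings, with the weights determined by similarity.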
Continue reading “Attention! What lies at the Core of ChatGPT? (Also as a Video!)”
Everybody and their dog is talking about ChatGPT from OpenAI. If you want to get an intuition about what lies at the core of such language models, read on!
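The core idea can be sketched in a few lines of R: a first-order Markov chain that records which word follows which in a corpus and then samples accordingly. The toy corpus and the helper function are my own illustration, not necessarily the post's code:

```r
# Minimal first-order Markov chain text generator.
set.seed(123)
corpus <- "the cat sat on the mat the cat ate the rat"
words <- strsplit(corpus, " ")[[1]]

# Transition table: which word follows which
pairs <- data.frame(from = head(words, -1), to = tail(words, -1))

generate <- function(start, n) {
  out <- start
  current <- start
  for (i in seq_len(n - 1)) {
    nxt <- pairs$to[pairs$from == current]
    if (length(nxt) == 0) break   # dead end: no known successor
    current <- sample(nxt, 1)     # draw one of the observed next words
    out <- c(out, current)
  }
  paste(out, collapse = " ")
}

generate("the", 8)
```

Real language models are vastly more sophisticated, but the principle of predicting the next word from what came before is the same.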
Continue reading “Create Texts with a Markov Chain Text Generator… and what this has to do with ChatGPT!”
What is the “opposite” of sampling without replacement? In a classical urn model, sampling without replacement means that you don’t put back the ball that you have drawn, so the probability of drawing that colour again becomes smaller. How about the opposite, i.e. the probability becoming bigger? Then you have a so-called Pólya urn model!
Many real-world processes have this self-reinforcing property, e.g. leading to the distribution of wealth or the number of followers on social media. If you want to learn how to simulate such a process with R and encounter some surprising results, read on!
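Such a self-reinforcing process can be simulated in a few lines of R. The following is a minimal sketch under the simplest assumptions (one white and one black ball to start, one reinforcing ball added per draw); the post itself may use a different setup:

```r
# Pólya urn: every ball drawn is put back together with one extra
# ball of the same colour, so success reinforces itself.
set.seed(123)
polya_urn <- function(draws) {
  urn <- c("white", "black")
  for (i in seq_len(draws)) {
    ball <- sample(urn, 1)   # draw a random ball
    urn <- c(urn, ball)      # return it plus one more of the same colour
  }
  urn
}

urn <- polya_urn(1000)
prop.table(table(urn))       # final colour shares vary wildly from run to run
```

Rerunning with different seeds shows the surprising part: the long-run share of each colour is itself random, so early luck can lock in a lasting advantage.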
Continue reading “The Pólya Urn Model: A simple Simulation of “The Rich get Richer””
Public-key cryptography is one of the foundations of our modern digital life. Normally it is quite hard to understand, but with our literally colourful explanation it is a walk in the park. At the end we also give the nerd version, so read on!
Continue reading “Understanding Public-Key Cryptography by Mixing Colours!”
This is our 101st blog post here on Learning Machines and we have prepared something very special for you!
Oftentimes the different concepts of data science, namely artificial intelligence (AI), machine learning (ML), and deep learning (DL), are confused… so we asked the most advanced AI in the world, OpenAI’s GPT-3, to write a guest post for us to provide some clarification on their definitions and how they are related.
We are most delighted to present this very impressive (and only slightly redacted) essay to you – enjoy!
Continue reading “The Most Advanced AI in the World explains what AI, Machine Learning, and Deep Learning are!”
In 2018, the renowned scientific journal Science broke a story that researchers had re-engineered the commercial criminal risk assessment software COMPAS with a simple logistic regression (Science: The accuracy, fairness, and limits of predicting recidivism).
According to this article, COMPAS uses 137 features, while the authors used just two. In this post, I will up the ante by showing you how to achieve similar results using just one simple rule based on only one feature, which is found automatically in no time by the OneR package, so read on!
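The COMPAS data itself is not reproduced here, so as a stand-in the following sketch shows the general OneR workflow on the built-in iris data set (the recidivism post applies the same pattern to its own data):

```r
# One-rule classification with the OneR package on a built-in data set.
library(OneR)  # install.packages("OneR") if needed

data <- optbin(iris)           # optimally discretize the numeric features
model <- OneR(Species ~ ., data = data)
summary(model)                 # the single best feature and its rule

pred <- predict(model, data)
mean(pred == data$Species)     # in-sample accuracy of the one rule
```

The whole point of OneR is this transparency: the model is a single, human-readable rule on one feature, yet often surprisingly competitive with far more complex methods.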
Continue reading “Recidivism: Identifying the Most Important Predictors for Re-offending with OneR”
In data science, we try to find sometimes well-hidden patterns (= signal) in often seemingly random data (= noise). Pseudo-random number generators (PRNGs) try to do the opposite: hiding a deterministic data-generating process (= signal) by making it look like randomness (= noise). If you want to understand some basics behind the scenes of this fascinating topic, read on!
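One classic example of such a deterministic process is the linear congruential generator (LCG). The sketch below uses the well-known Numerical Recipes constants; whether the post builds exactly this generator is an assumption:

```r
# Linear congruential generator: a fully deterministic recurrence
# x <- (a * x + c) mod m that looks random.
lcg <- function(n, seed = 42,
                a = 1664525, c = 1013904223, m = 2^32) {
  x <- numeric(n)
  state <- seed
  for (i in seq_len(n)) {
    state <- (a * state + c) %% m   # the deterministic "signal"
    x[i] <- state / m               # scale into [0, 1)
  }
  x
}

u <- lcg(5)   # same seed, same "random" numbers, every time
```

Because the recurrence is deterministic, the same seed always reproduces the identical sequence, which is exactly why `set.seed()` makes R simulations repeatable.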
Continue reading “Pseudo-Randomness: Creating Fake Noise”
More and more companies use chatbots for engaging with their customers. Often the underlying technology is not too sophisticated, yet many people are stunned at how human-like those bots can appear. The earliest example of this was a natural language processing (NLP) computer program called ELIZA, created in 1966 at the MIT Artificial Intelligence Laboratory by Professor Joseph Weizenbaum.
Eliza was supposed to simulate a psychotherapist and was mainly created as a method to show the superficiality of communication between man and machine. Weizenbaum was surprised by the number of individuals who attributed human-like feelings to the computer program, including his own secretary!
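The underlying trick is simple pattern matching: recognize a keyword phrase and reflect it back as a question. Here is a tiny R sketch with two hypothetical rules of my own (the post builds a fuller rule set):

```r
# A tiny ELIZA-style responder: match a keyword pattern and
# reflect the rest of the sentence back as a question.
eliza_reply <- function(input) {
  input <- tolower(input)
  if (grepl("i am (.*)", input)) {
    sub(".*i am (.*)", "Why do you think you are \\1?", input)
  } else if (grepl("my (.*)", input)) {
    sub(".*my (.*)", "Tell me more about your \\1.", input)
  } else {
    "Please, go on."   # generic fallback, much like the original
  }
}

eliza_reply("I am feeling sad")   # -> "Why do you think you are feeling sad?"
```

No understanding is involved at all, which was precisely Weizenbaum's point, and yet the echoed phrasing feels uncannily attentive.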
If you want to build a simple ELIZA-like chatbot yourself with R, read on!
Continue reading “ELIZA Chatbot in R: Build Yourself a Shrink”