TL;DR:
- Our lives are mostly influenced by unpredictable events called Black Swans
- Although people construct stories afterwards suggesting these events were obvious all along, they were not
- Instead of looking at predictions, focus on the balance of upsides and downsides
- You can profit from uncertainty by seeking optionality, which means seeking situations with limited downside and exponential upside
- Step changes in AI require further Black Swans, not tinkering with LLMs
On the (un)importance of forecasts
Nassim Taleb, a famous essayist and statistician, popularized the idea of the Black Swan: an unpredictable event with extreme impact. He argues that our lives are mostly determined by Black Swans. While most change is incremental, Black Swans cause step changes, and they can be positive or negative.
An example of a Black Swan is the transformer, the architecture underlying most of today’s most powerful AI models. It allows a model to handle context very effectively by determining how important every part of a text is to every other part, regardless of their distance in the text. This mechanism is called self-attention and enables parts of a text, called tokens, to attend to each other: if token A attends to token B, token B is important to token A, and the model pays it correspondingly more attention. It is called self-attention because the sequence of text analyzes its own tokens to understand its own context. This allows an AI model to build an incredibly deep, nuanced understanding of a text.
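The mechanism described above can be sketched in plain Python. This is a deliberately simplified, single-head illustration without the learned query/key/value projections of a real transformer; the embeddings are invented toy values:

```python
import math

def self_attention(X):
    """Simplified single-head self-attention (no learned weight matrices).

    X is a list of token embedding vectors. Every token scores every
    other token, regardless of their distance in the sequence.
    """
    d = len(X[0])
    out = []
    for q in X:
        # dot-product score of this token against every token (including itself)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]  # softmax: attention weights sum to 1
        # the output for this token is a weighted mix of all token vectors
        out.append([sum(w * v[i] for w, v in zip(weights, X)) for i in range(d)])
    return out

# three toy token embeddings
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(self_attention(tokens))  # one context-aware vector per token
```

Note that each token attends to all others in a single step; no information has to travel through intermediate positions, which is why distance in the text does not matter.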
The transformer led to a stepwise increase in AI model performance. Realizing the potential of this technology, investors provided massive amounts of capital, which triggered further research and progress. Unlike the development of the transformer, this progress is incremental.
Afterwards, Black Swans tend to seem as though they were predictable from the start. In fact, nothing in the past convincingly pointed to their occurrence. Since Black Swans lie outside the realm of regular expectations, they cannot be forecasted reliably.
Yet, some people did predict the 2007–08 financial crisis. Since then, they have predicted eight of the last two recessions!
Focus on building learning systems
Instead of making decisions based on predictions, build systems that do not merely survive volatility but profit from it. Such systems adjust to new circumstances and flourish in the changed environment.
One such strategy is the barbell strategy: protect the bulk of a system’s resources while deploying a small fraction on aggressive bets with very high upside. By seeking situations with limited downside and exponential upside, you structure the system around asymmetric bets. Taleb calls these situations convex and concludes that convexity is the backbone of antifragility, the property of improving from uncertainty.
Asymmetric bets can be paired with intentional redundancy. A simple example is establishing multiple income streams, which not only reduce each other’s importance but can also scale exponentially at limited additional cost; digital services, with their low marginal costs, typically fit this description. Additionally, fast and cheap tinkering reduces potential downsides while quickly incorporating feedback to capture big upsides.
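The asymmetry of a barbell allocation can be made concrete with a toy simulation. All numbers here are invented for illustration: 90% of capital sits in a safe asset, and 10% goes into a bet that usually loses everything but occasionally pays off many times over:

```python
import random

def barbell_outcome(capital=100.0, safe_fraction=0.9, rng=random):
    """Toy barbell allocation with invented parameters.

    The safe portion earns a small, near-certain return. The risky
    portion is lost 95% of the time but returns 50x otherwise, so the
    worst case is bounded while the upside is large.
    """
    safe = capital * safe_fraction * 1.01       # small, near-certain return
    risky = capital * (1 - safe_fraction)
    payoff = risky * 50 if rng.random() < 0.05 else 0.0
    return safe + payoff

random.seed(0)
results = [barbell_outcome() for _ in range(10_000)]
print(min(results))  # downside capped: never below the safe portion
print(max(results))  # occasional large upside
```

The key property is not the average return but the shape of the payoff: losses are capped by construction, while gains are open-ended.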
Who pays for whose mistakes?
Antifragile systems enforce what is often called “skin in the game”. By aligning risk and reward, they make sure that people bear the consequences of their own actions, especially in the long term. This avoids situations where people profit from short-term success while others bear the future costs, and it encourages responsible decision-making.
Not all performance indicators convey valuable information
Sometimes, badly chosen KPIs cover up fragility. This is especially dangerous when they implicitly assume a system behaves linearly when it does not. If €1,000 gets you 100 customers, €1,000,000 will most likely not get you 100,000 customers. Doubling the number of cars on the streets does not simply double travel time; it may cause catastrophic delays. To prevent unacceptable scenarios, also look at the worst possible outcome and hedge against it.
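The customer-acquisition example can be illustrated with a toy saturation model. The pool size and scaling constant below are invented so that €1,000 yields roughly 100 customers; the point is only that returns shrink as the finite pool of reachable customers empties:

```python
import math

def customers_acquired(spend_eur, pool=50_000, k=500_000):
    """Toy saturation model with invented parameters.

    For small spend the response is nearly linear (~spend * pool / k),
    but it flattens toward the pool size as spending grows.
    """
    return pool * (1 - math.exp(-spend_eur / k))

print(round(customers_acquired(1_000)))      # ~100 customers
print(round(customers_acquired(1_000_000)))  # far fewer than 100,000
```

A KPI like “customers per euro” measured at €1,000 would badly mislead anyone planning a €1,000,000 campaign.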
The optimal long-term solution may be out of reach
The longer the time horizon, the more powerful the described strategy becomes. Depending on the area you apply it to, it might take more time than you have for it to pay off. In the meantime, other people may have great success until they are wiped out by the next Black Swan. True success in an asymmetric world is not about outperforming the lucky in the short run, but about ensuring you are still standing when reality catches up with them.
Further step changes in AI require further Black Swans
The next major step forward in AI cannot be predicted. For example, transformers alone do not suffice to produce fully autonomous, intelligent robots that do humanity’s work. Beyond incremental improvements, we cannot even be sure which direction AI progress will take. The next step change may produce AI systems completely beyond today’s imagination. It could happen in one year, in ten years, or never.
From a technical perspective, we are well positioned for another Black Swan. Huge amounts of capital enable extensive research, limiting technological downside while preserving astronomical upside. Even a financial correction will not stop existing AI systems from becoming ever more powerful; it will just focus resources on improving existing systems incrementally rather than exploring completely new realms of AI, since existing systems generate revenue with far more certainty. Until the next Black Swan, we will be stuck with LLMs.