AI can analyze data, automate tasks, and solve complicated problems. But when it comes to real-world complexity, with its unpredictable and ever-changing systems, it struggles. Why? Because AI doesn't think; it just recognizes patterns. Here's why that matters and what it means for the future of work.
It's 2025 and AI is everywhere. It's no longer the technological promise of tomorrow but rather the day-to-day tool of today. As we continue to utilise these new and exciting tools and explore what they can offer us, we are also running into barriers and beginning to paint a picture of their limitations.
Here at Drive we're always discussing, as a team, which platforms to adopt and which to drop, trying to strike a balance between productivity, creativity, and automation. As such, we're constantly trying out new tools and putting their advertised features to the test.
Over the last 18 months, a pattern has emerged that outlines the limitations of these platforms and gives us a framework for understanding where these tools deliver the most value for our work. It also gives us a clearer idea of the type of work that (current) AI is just not very good at.
Limitations Talked About in the Mainstream
When people discuss the limitations of AI, especially vis-à-vis the replacement of human labour, a few recurring limits come up:
- Limits of ideation
- Limits of creativity (can only mimic but not truly create)
- Limits of accuracy
- Limits of training
There's also a lot of discussion about climate impact and hardware limitations, but we're not covering that here. In this article we're going to focus on the limitations of output. And I think there's something missing from the discussion: AI can't handle the complex, only the complicated.
Complicated vs. Complex
Although often used as synonyms in everyday conversation, "complicated" and "complex" actually have different meanings, especially when it comes to systems design. A complicated system uses simple components in simple relationships that produce predictable outputs. A complex system may also be made up of simple components, but they interact with each other in dynamic ways, creating unpredictable outputs and emergent properties.
A large software system can have hundreds of thousands of lines of code and be extremely difficult to understand, yet it is still supposed to produce predictable output. Software is often delicate precisely because its interactions are rigid: one misconfigured or miscoded element can bring down part of the system.
- Complicated Systems:
  - Can be understood by breaking them down into their parts
  - Can be managed with the right expertise and models
  - Can be simplified by removing unnecessary parts
  - Examples: a car engine
- Complex Systems:
  - Are difficult to predict because of their emergent properties
  - Are difficult to understand because of the complex relationships between their parts
  - Require a nuanced approach to management
  - Examples: the human body, traffic systems, and weather systems
- Key Differences:
  - Understandability: Complicated systems can be understood by breaking them down into parts, while complex systems are difficult to understand because of the interactions between their parts
  - Predictability: Complicated systems can be managed with the right expertise, while complex systems are difficult to predict because of their emergent properties
  - Solutions: Complicated systems have defined solutions, while complex systems do not
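To make the distinction concrete, here's a minimal sketch in Python. The gear-ratio function stands in for a complicated system, and the logistic map (a classic toy model of chaotic behaviour) stands in for a complex one; both are illustrative choices of ours, not systems from any platform we tested.

```python
# A minimal sketch of the distinction. gear_ratio is "complicated":
# decomposable, and the same input always yields the same output.
# The logistic map stands in for a "complex" system: one simple rule,
# yet two almost identical starting points quickly diverge, so long-run
# behaviour can't be read off from the parts alone.

def gear_ratio(teeth_in: int, teeth_out: int) -> float:
    """Complicated: predictable, fully understood via its parts."""
    return teeth_out / teeth_in

def logistic_map(x0: float, steps: int, r: float = 4.0) -> float:
    """Complex-like: iterate x -> r * x * (1 - x); chaotic at r = 4."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(gear_ratio(20, 40))                  # always 2.0
print(logistic_map(0.2000000, steps=50))   # two nearly identical inputs...
print(logistic_map(0.2000001, steps=50))   # ...end up far apart
```

Two starting points that agree to six decimal places end up nowhere near each other. That is exactly the kind of behaviour that breaks "understand it by breaking it into parts" reasoning.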
Why AI Has This Limitation
Most AI today is created by feeding training data into neural networks such as transformers. The training is basically a sophisticated form of pattern recognition. The power of AI comes from the capacity of machines to outpace humans in speed and memorization. The limitation, though, is that it's still just a complicated process built on fuzzy statistical algorithms: in my opinion, a sort of pseudo-complex.
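As a hedged illustration of what "pattern recognition" means here, consider a toy bigram model. It is nothing like a transformer internally, but it makes the underlying idea visible: prediction is replaying the statistics of data already seen.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": pure pattern recognition over past
# data. Nowhere near a real model, but the point stands: it can only
# recall which word followed which in its training data.

corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # memorise observed word pairs

def predict(word: str) -> str:
    """Return the most frequent follower seen in training."""
    if word not in counts:
        return "<unknown>"          # no pattern in the data, no answer
    return counts[word].most_common(1)[0][0]

print(predict("the"))   # 'cat' - the dominant pattern in the corpus
print(predict("dog"))   # '<unknown>' - never seen, nothing to recall
```

The `predict("dog")` case previews the next point: no amount of cleverness in the lookup can conjure information the training data never contained.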
There is already discussion about the limits of these models due to the limits of their data. Even with all the data of the internet at its disposal, the training data is finite, and it only ever describes the past. These models cannot predict a genuinely novel future, because the data we have access to today is vanishingly small compared with the data the world will someday create. Humans, being truly complex, don't have to be limited by our past.
We can innovate.
How This Applies to AI
While we would like to imagine that AI can solve complex problems, when we really examine its capabilities, it is limited to complicated ones. The clever part is that scientists and engineers are doing the work of boiling complex problems down until they are merely complicated. In other words, humans are the ones doing the thinking work of simplifying a problem so that an AI can solve it via pattern recognition, as the sketch below illustrates.
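Here is a hedged sketch of that "boiling down" step, using a hypothetical support-ticket router; the feature names and rules are invented for illustration. The human contribution is the framing: deciding that an open-ended situation can be reduced to a fixed schema. What remains is a predictable, complicated mapping.

```python
# A sketch of the human "boiling down" step. The hard work is choosing
# the frame: deciding that messy, open-ended text can be reduced to a
# fixed set of features. Once that reduction is made, what's left is
# complicated pattern-matching. All names here are hypothetical.

def encode_support_ticket(raw_text: str) -> dict:
    """Human-designed reduction: open-ended text -> fixed schema."""
    text = raw_text.lower()
    return {
        "mentions_refund": "refund" in text,
        "mentions_bug": "error" in text or "crash" in text,
        "length": len(raw_text.split()),
    }

def route_ticket(features: dict) -> str:
    """The 'AI-shaped' part: a predictable rule over the fixed schema."""
    if features["mentions_refund"]:
        return "billing"
    if features["mentions_bug"]:
        return "engineering"
    return "general"

print(route_ticket(encode_support_ticket("The app crashed when I paid")))
# -> 'engineering'
```

Everything interesting happened in encode_support_ticket: the reduction is where the judgement lives, and it is exactly the part the machine didn't do.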
Once we keep this framework in mind, it becomes much easier to decide whether an AI tool is likely to be able to solve the problem in front of us: if the problem can be boiled down to something merely complicated, it's a candidate; if it stays stubbornly complex, it probably isn't.