Artificial intelligence is not a divine being, but a tool built by humans
Artificial intelligence is often discussed as a replacement for human labor, but the reality is much more mundane. Andrej Karpathy, one of the founders of OpenAI and former head of artificial intelligence at Tesla, recently appeared on Dwarkesh Patel’s podcast to share his thoughts on the subject.
This article summarises his key views on the current state of AI development, the limitations of agents, and the role of humans alongside them. It also examines other recent findings on why most AI projects fail and what the successful ones do differently.
- Realism in the AI debate
- A tool, not a superior being
- Companies face reality
- Animals and ghosts
- Artificial intelligence agents as human companions and supervisors
- 95% of AI pilots fail: why the successful ones stand out
- Practical examples and lessons for businesses
- Artificial intelligence needs people
Realism in the AI debate
In the podcast and his subsequent reflections, Karpathy sought to bring balance to the AI debate, which has become polarised into two extremes. One side believes that artificial general intelligence (AGI) is just around the corner, while the other believes that it will never happen. Karpathy takes a middle ground. According to him, we are living in the ‘decade of agents’, but in a more realistic sense than the hype would suggest.
Karpathy considers the development of artificial intelligence in recent years to be significant, but points out that its capabilities are being exaggerated. In his estimation, AGI may be possible within ten years, but even that is an optimistic timeframe. There is still a lot of work to be done, such as integrating data and models, connecting artificial intelligence to the physical world, and resolving issues related to security and societal impacts.
Artificial intelligence has thus made enormous progress, but it is still a long way from being a system that could replace humans in any task. The message is clear: artificial intelligence is not a supernatural phenomenon, but the result of engineering skill and computing power. And it is only halfway there.
A tool, not a superior being
Karpathy said in the podcast that the real revolution in artificial intelligence agents is not yet upon us. “They simply don’t work well enough. They are not intelligent enough, they are not multimodal, and they are not capable of continuous learning,” he said.
At present, agents do not remember things they have been told and are unable to operate independently in complex environments. According to Karpathy, it may take another decade to develop them to the point where they are functional.
In Karpathy’s view, AI is at its best when it acts as a teacher and partner, not a replacement for humans. He wants AI to make fewer assumptions and ask more questions, and to be willing to collaborate and learn from humans. This view is deeply human, as the true value of artificial intelligence lies in cooperation, not competition.
Companies face reality
Fortune’s article compiles data that illustrates how companies are also beginning to understand this. For example, research by Gartner shows that half of the organizations that planned to reduce their customer service staff through artificial intelligence have cancelled their plans.
At the same time, MIT research shows that 95 percent of AI pilots have failed. So the world is not yet ready for AI agents, but development is progressing at a rapid pace.
Animals and ghosts
One of Karpathy’s most interesting comparisons is “animals vs. ghosts”. Karpathy says that the development of artificial intelligence is not about repeating evolution, but rather a long engineering effort in which we try to build something useful and safe. He is sceptical about the idea that a single algorithm could learn everything about the world from scratch. “If someone builds one, I’m wrong – and it would be an incredible breakthrough. But I don’t think we’re anywhere near that.”
He points out that animals, for example, are not tabula rasa beings but creatures shaped by evolution, born with innate intelligence. He describes language models as being like ghosts, because they do not live or learn from experience; their knowledge is absorbed from the depths of the internet. They know a lot, but they are not living beings and do not experience the world through their bodies. Artificial intelligence is therefore not a new ‘form of life’ but rather, according to him, a large language-based ghost that mimics human understanding.
Artificial intelligence agents as human companions and supervisors
Karpathy is cautious about the idea of fully autonomous AI agents. He believes that we are still at a stage where humans and AI need to work together, not separately. He hopes for a form of cooperation in which artificial intelligence does not disappear into its own bubble to make decisions, but remains interactive and justifies its solutions to the user.
“I don’t want an agent that goes off for 20 minutes and comes back with 1,000 lines of code. I certainly don’t feel ready to supervise a team of 10 of them.” – Andrej Karpathy
His concern is valid: if agents are developed too quickly without understanding their limitations, the software world may soon be filled with poorly functioning, insecure code whose origins are unclear.
95% of AI pilots fail: why the successful ones stand out
According to MIT’s State of AI in Business 2025 study, up to 95 percent of generative AI pilots fail. The reason is not the technology itself, but how it is implemented. Companies often strive for quick results and impressive experiments, but the real benefits only come when AI is integrated into people’s work and processes. The organizations that failed invested in static tools that cannot adapt to their workflows.
Jason Snyder, on the other hand, writes in Forbes that it is precisely resistance and friction that make artificial intelligence work. Successful adopters do not try to bypass problems, but solve them through practical means. They build systems that improve with use and support human decision-making rather than replacing it.
According to MIT research, companies that succeed with artificial intelligence operate differently. They design their AI solutions in a problem-oriented manner, i.e. they focus on limited high-value use cases and integrate generative AI into their most important business processes, building systems that remember, learn and improve with use. In these pilots, industry knowledge and process integration are more important than a flashy user interface. That is where real value is created.
Practical examples and lessons for businesses
The artificial intelligence agent built by McKinsey using Microsoft’s Copilot Studio is a good example of successful AI integration. The agent screens customer proposals received by email, and although it does not operate completely independently, it has reduced the duration of the work from 20 days to two. AI can therefore already improve work efficiency when it is viewed correctly, i.e. as an assistant to humans.
At Hurja, we have also built an artificial intelligence agent based on this idea. The agent we developed automatically processes orders received by email for the Odoo ERP system. It analyses the messages, extracts the relevant information, flags any errors, and converts the order into a transferable format. Finally, a human checks the order before it is entered into the system.
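The pattern described above, machine extraction followed by a human check, can be sketched in a few lines. This is a hypothetical, simplified illustration, not the actual implementation: the product codes, the `SKU-xxx x qty` message format, and the `parse_order_email` function are all invented for the example, and a real agent would use a language model and query the ERP’s product data instead of a hard-coded catalog.

```python
import re

# Hypothetical product catalog; a real agent would query the ERP's master data.
CATALOG = {"SKU-100": "Widget", "SKU-200": "Gadget"}

def parse_order_email(body: str) -> dict:
    """Extract order lines of the form 'SKU-xxx x <qty>' from an email body.

    Returns a draft order in which any problems are collected for a human
    reviewer, rather than being silently dropped or guessed at.
    """
    lines, errors = [], []
    for match in re.finditer(r"(SKU-\d+)\s*x\s*(\d+)", body):
        sku, qty = match.group(1), int(match.group(2))
        if sku not in CATALOG:
            errors.append(f"Unknown product: {sku}")
        elif qty <= 0:
            errors.append(f"Invalid quantity for {sku}: {qty}")
        else:
            lines.append({"sku": sku, "qty": qty})
    # The order always passes through human review when anything is unclear.
    return {"lines": lines, "errors": errors,
            "needs_review": bool(errors) or not lines}

email = "Hi, please send SKU-100 x 5 and SKU-999 x 2. Thanks!"
draft = parse_order_email(email)
```

The key design choice is that the agent never writes directly into the ERP: its output is a draft plus a list of doubts, and the human reviewer remains the final gate.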
MIT also recognises the phenomenon of shadow AI, where employees adopt their own AI solutions even if official pilots have failed. However, it is precisely this practical use that has already brought measurable results for many companies, such as cost savings and faster processes.
Karpathy’s thinking is along the same lines. Artificial intelligence does not yet function completely independently, nor should it. The best results are achieved when humans and artificial intelligence work in parallel and learn from each other.
Artificial intelligence needs people
Karpathy’s reflections are marked by scientific humility. He believes in progress, but not in miracles. The next big leaps forward will likely come from boring but important work, such as better data, security, collaboration, and responsible deployment.
Artificial intelligence can be a powerful ally, but only if we remember that its intelligence is borrowed: created by us, limited by us, and ultimately our responsibility. Yet there is a temptation surrounding artificial intelligence, a desire to build something that would do the work and thinking for us. As the story of Frankenstein reminds us, creation without understanding can turn against us.
When we imagine artificial intelligence as all-powerful, we relinquish responsibility. When we see it as a colleague, we learn more ourselves and can learn from our mistakes. We do not see artificial intelligence as a supernatural force, but as a practical tool that must be developed and used with caution. We want to use artificial intelligence in a way that supports understanding and cooperation between people, not replaces them. When routines and repetition are automated, people have more space to think, create and solve problems together.
Artificial intelligence is not a divine being. It is the greatest tool ever built by humans, and that is precisely why it needs curious, responsible and humane people to keep it on the right track.
