“To advance artificial intelligence (AI), we should be more concerned with ‘foolish AI’ than with super-intelligent ‘ultra-smart AI,’” says Professor Toby Walsh of the University of New South Wales, a global authority in the field. Now, when AI technologies are advancing rapidly, he advises, it is more important to be vigilant about foolish AI than to focus on making AI ever smarter. As AI is increasingly used to make important decisions across society, we must ensure that it does not make foolish ones. To this end, he emphasized the need for a framework that ensures accountability, transparency, and fairness in algorithms.
“I have long argued for the necessity of AI regulation, not because I fear AI will take over the world,” he said, “but because big tech companies are too focused on their own profits.” He continued, “Companies are competing to build smarter AI, but there is far less competition in technologies that can detect and correct foolish decisions made by AI.” He emphasized that we need a race in AI safety technology.
Professor Walsh is an AI scholar who actively campaigns for the ethical use of AI and against its weaponization. He has long studied the impact of AI on society and has consistently emphasized the importance of safety and regulation. In Korea, he became widely known in 2018, when he announced a boycott of research exchange with KAIST. At the time, he and 57 AI experts from 29 countries declared that they would suspend collaboration with KAIST over concerns that the AI research center it had established with Hanwha Systems was pushing for the militarization of AI.
He argues that AI regulation should not hinder technological progress but should guide it in a direction that benefits humanity. To that end, governments should focus on actual, existing problems caused by AI rather than hypothetical future issues. We spoke with him in more detail.
Professor Toby Walsh won the Good AI Awards, hosted by THE AI, in 2023 for his contributions to both AI development and AI safety. /THE AI
— In your book “2062: The World that AI Made,” you predicted that 2062 would be the AI era. Considering current technological progress, it seems this might happen much sooner.
“The pace of AI development today is astonishing. It’s moving much faster than many expected. The prediction of 2062 was not solely my own; it was the consensus of 300 colleagues working in AI. If we asked them the same question today, they would likely say we will reach human-level AI or artificial general intelligence (AGI) by 2042. That said, we shouldn’t take AGI lightly. Today’s AI is still only highly capable within narrow domains. There are still fundamental challenges to achieving AGI.”
— What are the challenges we face in building human-level AI?
“Currently, we are seeing many astonishing AI models: high-level language models, image-generation AI, and so on. But we must not confuse these with human intelligence. We have partially solved the problems of System 1, the fast thinking that handles language and perception, but the general problem-solving abilities of System 2, such as reasoning and consciousness, remain largely unsolved. However far System 1 has advanced, System 2 still lags behind.”
— What do you see as the driving force behind today’s AI advancements, and what limits might we face?
“The driving force behind today’s AI progress is the convergence of three core elements: vast amounts of data, improved computing power, and sophisticated machine learning algorithms. However, we are approaching significant limitations. Moore’s Law is officially dead—Intel is no longer trying to double the number of transistors every 18 months. We are hitting physical limits in computing performance. The real challenge is not just having more computing power or data, but understanding intelligence itself. How do we build machines that truly understand, not just match patterns? How can AI reason with common sense and logic? These are the fundamental challenges we still face.”
— What trends in AI research are you currently paying attention to?
“The speed and scale of investment. Currently, $1 billion is being spent on AI every day, roughly $365 billion a year. That’s 20% of global research and development (R&D) spending. No other technology in history has received this level of investment.”
— With such advancements, countries including the EU are preparing AI regulations. South Korea is also working quickly on its Basic Act on AI. Do you have any advice on regulatory development?
“Governments developing AI regulations should focus on real, existing issues rather than existential threats of the future. We need clear guidelines on algorithmic bias, data privacy, and accountability. The EU’s approach with GDPR and the AI Act offers a good starting point. But regulations must be flexible enough to adapt as technology evolves, while also setting clear boundaries for responsible progress.”
— What are the key elements of ‘responsible AI development’ that researchers and developers should keep in mind?
“Responsible AI development is not just about technical safeguards. It’s about understanding the social impact of what we are building. AI researchers must look beyond their technical capabilities and consider the broader effects of their work. We need diverse perspectives and clear principles regarding when and how to deploy AI. The basic principle must be that AI exists not to replace or harm humanity, but to augment and benefit it.”
— You are known for opposing the weaponization of AI, but autonomous weapons systems are advancing rapidly.
“I have raised concerns about the dangers of autonomous weapons for a long time. The weaponization of AI doesn’t just mean new military technology—it represents a fundamental revolution in warfare. Removing human judgment from lethal decisions is deeply problematic. Just as we banned chemical weapons, we need international bans on autonomous weapons.”
— In the Russia-Ukraine war, we’ve seen AI and drones play significant roles. Doesn’t this suggest AI weaponization is inevitable?
“Using AI in conflicts like the Ukraine war is concerning but not surprising. What we’re currently seeing are mostly semi-autonomous military systems controlled by humans. The real danger emerges when we move toward fully autonomous weapons. Such weapons are like Aladdin’s lamp—once the genie is out, it’s hard to put it back in. That’s why nations must proceed with great caution when developing fully autonomous weapons.”
— How much do you think AI will be used in global defense by 2030?
“By 2030, AI will likely be deeply integrated into defense systems. Before we reach that point, proper human oversight is crucial. The key is to maintain meaningful human control over lethal force and critical decisions. We also need to be cautious about the pace of warfare. For instance, when AI systems operate at machine speed, human decision-making can become a bottleneck. This may create pressure to eliminate human involvement altogether. If that happens, many problems could arise—algorithms might confront each other in hostile environments, leading to something like a ‘flash war,’ similar to a ‘flash crash’ in the stock market. (A flash crash refers to a phenomenon where the prices of stocks, currencies, or futures plummet and then rapidly recover within a short period.)”
— Then, how do you think AI will change our daily lives by 2030?
“By 2030, AI will be deeply embedded in our daily lives. The changes will happen without people even noticing. Instead of sci-fi robots, we’ll see invisible AI making systems like traffic management, medical diagnostics, and energy distribution more efficient. By 2030, we’ll have more sophisticated language interfaces, better autonomous vehicles, and more personalized services. However, challenges related to privacy, job displacement, and algorithmic bias will continue to grow. The key is to manage this transition carefully.”
— Are there any current research projects you’re focusing on?
“I’m currently focused on ensuring that AI development benefits humanity rather than causing harm. This includes work on AI safety, ethics, and governance. I’m particularly interested in developing frameworks for AI transparency and accountability. How can we keep increasingly complex AI systems understandable and controllable? Solving this problem will allow us to use AI for various social goods. That’s why I’m focused on framework development. I’m also working with Surf Life Saving Australia to develop an AI app that identifies dangerous areas in the ocean and warns people not to swim there. Additionally, I’m collaborating with a homeless charity to use large language models (LLMs) to offer advice to people seeking help.”
— Lastly, what advice would you give to the current generation as they prepare for the AI era?
“The key message I want to emphasize is that our AI future is not predetermined. Technology is not destiny. We now have the opportunity to decide how this technology will evolve. The challenge is not to stop AI from becoming too powerful, but to ensure that as it becomes more capable, it aligns with human values and benefits all of humanity. AI is one of the most significant technological shifts in human history. We shouldn’t fearfully try to block its progress, but we must ensure its development is responsible and thoughtful. The decisions we make over the next few years will shape humanity’s future for decades—or even centuries—to come, just like the invention of the steam engine did for the Industrial Revolution.”