Oliver Ilott, Director of UK’s AISI, Elected as Next Chair of International AISI
Chair Country Changes from U.S. to U.K. at General Meeting Held During ICML
Oliver Ilott, Director of the UK’s AI Security Institute, has been elected as the next Chair of the global AI safety consortium, the International Network of AI Safety Institutes (AISI).
According to reporting by THE AI on July 24, the appointment was finalized during the AISI General Assembly held alongside the International Conference on Machine Learning (ICML), which took place in Vancouver, Canada, from July 13 to 19.
This decision officially fills the chair position left vacant by Elizabeth Kelly, former director of the U.S. AI Safety Institute. Kelly, who was appointed under the Biden administration, resigned following the inauguration of President Trump and the subsequent structural reform plans for the U.S. AI safety body. Although the U.S. retained its status as chair country, the chair role had remained vacant.
At the AI Action Summit held in Paris this past February, AISI convened its first full assembly of institute directors, where the need to elect a new chair was formally raised. At the time, Oliver Ilott expressed interest in assuming the chairmanship. He is set to begin his term in November.
With the chairmanship transitioning from the United States to the United Kingdom, a shift in the global AI safety governance landscape is expected. Kim Myung-joo, Director of the Korean AI Safety Institute, commented, “The U.K. has been leading collaborative testing to verify and evaluate the safety of AI technologies across multiple tracks,” adding, “Korea also plans to engage more actively in these efforts.”
Indeed, Korea’s AI Safety Institute is currently collaborating with the U.K. and Singapore on several joint projects, including defenses against deepfakes, multilingual LLM verification, and multi-agent AI security testing.
Director Kim further explained, “Korea is participating in projects that examine how AI responds differently in multilingual and multicultural contexts, as well as in joint testing for high-risk areas such as cybersecurity threats and multi-agent safety evaluation.” She added, “A key goal is to ensure AI models deliver consistent and trustworthy outcomes regardless of language or cultural context. To support evaluations of domestic AI products, we are also developing proprietary assessment tools and datasets.”
The International Network of AI Safety Institutes (AISI) was established on November 20 last year by 10 founding members. These include the U.K., U.S., Japan, Singapore, and Canada, each of which launched a national AI safety institute, along with France, the European Union, Kenya, and Australia, which established equivalent bodies.
The recent general meeting also included discussions on potentially changing the official name of AISI. The U.S. and U.K. proposed replacing the word ‘Safety’ with ‘Standards’ and ‘Security’, respectively, reflecting the updated names of their national institutes: AI Standards Institute (U.S.) and AI Security Institute (U.K.). However, Japan, Singapore, and Korea opposed the change, advocating for retaining the current name. A final decision on this matter is expected to be revisited at the next general meeting, which will coincide with the ‘NeurIPS’ conference in San Diego this December.