26 Jun 2024

Ant Inspiration in AI Safety: Our Collaboration with the University of York

What do ants have to do with AI Safety? Could the next breakthrough in AI Assurance come from the self-organising structures found in ecological systems?

UK Research and Innovation (UKRI) has funded a Knowledge Transfer Partnership between Advai and the University of York.

This led to the hire of Matthew Lutz, AI Safety Researcher and Behavioural Ecologist.

In this blog, we explore Matt's journey from architecture, through the study of collective intelligence in army ant colonies, to joining us as our 'KTP Research Associate in Safe Assured AI Systems'.

Words by
Alex Carruthers


Matt

Introduction

A short while ago, Advai were thrilled to announce a UK Research and Innovation (UKRI)-funded ‘Knowledge Transfer Partnership’ with the University of York.

The initiative was to fund a senior researcher for three years, to provoke progress in AI Safety research. Together we wrote a job description and waited to see who would apply.

Advai was to receive a boost to the research team and the University of York was to receive a review of their ‘AMLAS’ framework (Assurance of Machine Learning in Autonomous Systems) grounded in the commercial work Advai performs for clients, sense-checked against our practical AI Safety experience.

Much of – or indeed ‘most’ – AI research is performed by the private sector.

A reflection of this fact can be seen in the Ministry of Defence’s and the AI Safety Institute’s explicit strategies to work more closely with the private sector, a policy drive that aims to leverage innovations developed for commercial purposes.

To give you a tangible example to chew on, consider how much R&D Facebook or Google have conducted to create profiling technologies, algorithms that ultimately predict [buying] behaviour; naturally, no country’s public sector spends anything near the amounts invested in the algorithms developed by these companies.

Yet, clearly, the public sector would stand to benefit from this expensive technology.

The UK has been quick to spot this public-private investment gap, which has led to a range of initiatives like this Knowledge Transfer Partnership. It represents 1) a public sector fund that brings together experts from the 2) private and 3) academic sectors, growing AI Safety knowledge to the benefit of all.

The result of this co-written job description has manifested in the form of our newest human recruit (well, ours-ish): meet Dr. Matthew Lutz.

 

Matt has until now been a career academic, fascinated by a kind of natural, decentralised, emergent intelligence.

His journey into AI was preceded by the study of collective intelligence in ant colonies.

This work equipped him with skills in computational modelling of complex systems, which he will need in his new role modelling multi-agent AI systems.

What have ant behaviours got to do with AI systems?

Well, the number of AI-enabled autonomous agents is increasing. As Matt says, “We don’t fully understand how a single AI agent works, yet we’re already building these complex systems where we have multiple agents interacting.” He sees this as a major problem that must be solved before multi-agent AI systems are adopted much further.

His work will help us all better understand collective behaviour in both natural and artificial systems, developing Advai’s commercial AI Safety offering, integrating and improving the University of York’s AMLAS framework, and giving the public sector more commercial innovation to leverage in pursuit of national aims.

Ant Bridge

From Ant Intelligence to AI

During his first qualification and early career in architecture, his interest was captured by agent-based structure generation: the study of structures that self-organise without the control of any centralised intelligence.

You might have had your eye caught by a flashy headline about self-organising nanomaterials in the past. The idea is that you can programme behaviours at the individual level for many ‘agents’ and that through interactions with each other and their environment, highly intelligent and adaptive behaviours can emerge at the group level.
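To make this concrete, here is a minimal, purely illustrative sketch (not Matt's research code, and with every parameter invented for the example): each simulated agent follows one simple local rule, drifting a little towards the average position of whatever neighbours it can sense, plus some random jitter. No agent sees the whole group, yet the group pulls itself into one or a few tight clusters.

```python
import math
import random

# Illustrative only: one local rule per agent, group-level order emerges.
N_AGENTS = 50
NEIGHBOUR_RADIUS = 3.0   # how far an agent can "sense"
STEP_TOWARDS = 0.05      # strength of attraction to sensed neighbours
NOISE = 0.1              # random individual movement
STEPS = 500

random.seed(0)
agents = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(N_AGENTS)]

def spread(points):
    """Average distance from the group centroid - a rough cohesion measure."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / len(points)

for step in range(STEPS):
    new_agents = []
    for x, y in agents:
        # Local sensing only: which agents fall within the neighbour radius?
        neighbours = [(nx, ny) for nx, ny in agents
                      if 0 < math.hypot(nx - x, ny - y) <= NEIGHBOUR_RADIUS]
        if neighbours:
            mx = sum(n[0] for n in neighbours) / len(neighbours)
            my = sum(n[1] for n in neighbours) / len(neighbours)
            x += STEP_TOWARDS * (mx - x)
            y += STEP_TOWARDS * (my - y)
        # Individual randomness, like an ant wandering
        x += random.uniform(-NOISE, NOISE)
        y += random.uniform(-NOISE, NOISE)
        new_agents.append([x, y])
    agents = new_agents

print(f"Group spread after {STEPS} steps: {spread(agents):.2f}")
```

Running it shows the spread shrinking over time: group-level cohesion appears even though no individual rule mentions the group at all.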

While working as an architect, he began programming swarming and flocking behaviours, and was fascinated by the new kinds of architectural forms that such emergent intelligence made possible.

In particular, it was the self-organised structures built by Army Ants that caught his eye and drew him away from architecture.

These canny creatures can join their bodies to create structures that optimise their traffic and resource transport, including bridges that create shortcuts, and scaffolds that stop other ants and prey from falling while in transit.

Each ant possesses only a low level of intelligence, with simple local sensing, yet the group is capable of remarkably intelligent feats.

 

This led to field experiments, ant bites, computational models, and a PhD from Princeton. “The consistent theme in my career is understanding collective behaviour in complex systems.” It was also in studying ants where he first dabbled with AI, using computer vision models to track ants in videos for analysis.

When asked what prompted a shift into AI Safety, he refers back to the ants, pointing out how the Army Ants he studied are effectively a killer predatory superorganism. This killing machine has been optimised over millions of years of evolution to consume resources so effectively that the colony needs to relocate every night because it destroys so much prey.

Each ant is like its own very simple AI model; studying their behaviour reveals insight into how simple models can interact and lead to the emergence of highly intelligent – and in this case destructive – behaviour.

“The intelligence lies at the level of the colony itself, rather than any individual ant (even the queen). For social insects like ants and bees, this results from natural selection operating at the colony level – since individual workers are non-reproductive (and related), they all cooperate for the good of the colony, rather than behaving selfishly.”

“It's not hard to imagine AI-based systems in the future consisting of many interacting agents or parts, where the performance at the system level is the thing being [inadvertently] optimised for. In this way, even if the individual agents are relatively dumb and may not "know" what they are doing, high levels of [potentially destructive] intelligence can emerge.”

Application to AI Networks

Arguably the greatest danger presented by AI is not a Terminator-styled Skynet, a master architect of human annihilation, but instead a destructive emergent collective intelligence without consciousness, agency or intent.

If sufficiently empowered systems are deployed without being fully understood, there’s a real chance that the spread of such AI systems will lead to some fairly bad things happening. In the same way that social networks have undermined democracy, there’s a reasonable chance that networks of AI-driven agents will interact in unpredictable ways that lead to undesirable emergent behaviours.

In the worst case, these may be so subtle and complex as to be practically unstoppable.

Herein lies the application of collective behaviour principles to AI systems and AI Safety research: we want to understand and mitigate the potential risks of systems that are collectively intelligent yet built from fundamentally dumb parts. The real-world relevance is clear if you saw Bumble’s CEO announce that our dating profiles might soon be doing the dating for us, or if you’re aware of the number of bots flooding our social platforms. Increasingly complex networks of ‘dumb’ agents will interact increasingly often. It’s already happening.

Add to this the fact that telecommunication networks, logistics channels, national security systems, even the operating system on your computer or phone, and oh so much more are all having AI injected into them. The risk only grows.

Matt describes one of our early experiments, which replicates a game you might know as ‘Broken Telephone’ (played in a group, where one person whispers a message to the next, and accuracy deteriorates with each link in the chain).

  1. Information is transmitted within a network of interacting Large Language Model (LLM)-based agents. One agent starts with ‘true’ information, while the rest begin by guessing an answer at random.
  2. The agents are directed to hold a series of discussions with other agents.
  3. Each round consists of a 1-on-1 discussion between two agents and the agents change partners each round.

 

There are between 8 and 16 agents, and, in the ideal case where information is shared accurately, the true information should diffuse quickly so that every agent in the group ends up with the correct information.

Of course, that’s not what happens. We observed that several variants of falsehoods (or misinformation) quickly emerged.

So the immediate research question is – "What are the things we can control to improve the robustness of these multi-agent systems to deception or misinformation?"

We’re discovering promising mitigation strategies already.
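For readers who want a feel for the structure of the experiment, here is a toy sketch of the round-based set-up described above, with the LLM-based agents replaced by simple probabilistic stand-ins (a real run would call an actual language model, which we leave out here). Every parameter and message is invented for illustration; the point is only to show random pairing each round and how corrupted variants of the message can take hold.

```python
import random
from collections import Counter

# Toy stand-in for the multi-agent "Broken Telephone" set-up described above.
# Real experiments use LLM agents discussing in natural language; here each
# agent holds a single string "belief", and every 1-on-1 exchange has some
# probability of distorting it.

N_AGENTS = 12          # between 8 and 16 in the experiments described above
ROUNDS = 20
CORRUPTION_RATE = 0.1  # chance a discussion garbles the transmitted message

TRUTH = "the launch is on Tuesday"

def corrupted(message):
    """Return a distorted variant of a message - a crude stand-in for an
    agent misremembering or paraphrasing during a discussion."""
    days = ["Monday", "Wednesday", "Thursday", "Friday"]
    return message.rsplit(" ", 1)[0] + " " + random.choice(days)

random.seed(1)

# One agent starts with the true information; the rest guess at random.
beliefs = [TRUTH] + [corrupted(TRUTH) for _ in range(N_AGENTS - 1)]

for round_number in range(ROUNDS):
    order = list(range(N_AGENTS))
    random.shuffle(order)                      # new random partners each round
    for a, b in zip(order[::2], order[1::2]):  # 1-on-1 discussions
        # A very crude "discussion": one agent adopts the other's belief,
        # possibly corrupting it in the process.
        message = random.choice([beliefs[a], beliefs[b]])
        if random.random() < CORRUPTION_RATE:
            message = corrupted(message)
        beliefs[a] = beliefs[b] = message

print("Final beliefs across the group:")
for belief, count in Counter(beliefs).most_common():
    marker = " (true)" if belief == TRUTH else ""
    print(f"  {count:2d} x {belief!r}{marker}")
```

Even this crude version shows the qualitative result: the true message rarely dominates, and competing variants persist, which is exactly the kind of failure mode the mitigation strategies aim to control.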

The KTP Project with the University of York

The University of York has developed the AMLAS (Assurance of Machine Learning for Autonomous Systems) framework, a structured methodology to ensure the safety of machine learning (ML) components within autonomous systems. Like Advai’s approach of integrating AI Safety practices across each stage of the development lifecycle, AMLAS integrates safety assurance throughout the process of developing ML models.

Our shared interest in AI Assurance and our mutual recognition of the unique challenges posed by complex and unpredictable AI systems have made this partnership, and co-hiring Matt, a perfect fit.

The University of York has a deep history of developing safety and assurance frameworks outside the field of AI, leading pioneering research in safety assurance across various domains including robotics, healthcare, automotive, and aviation.

As with any such framework, there’s a need to generate a structured safety case.

In Matt’s case, this means providing clear, evidence-based arguments and documentation that demonstrate the ML components in use meet predefined safety standards, and identifying potential risks so that these can be acceptably managed. The goal is to assure stakeholders that the ML system is safe to use in its intended environment and application.
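Purely as an illustration of what "a structured safety case" means in practice, here is a generic claim-argument-evidence sketch. This is not the AMLAS notation itself, and every class name and example string is invented for the example; it simply shows the idea of safety claims that are only considered supported when each argument points at concrete evidence.

```python
from dataclasses import dataclass, field

# Generic illustration of a structured safety case: top-level safety claims,
# supported by arguments, which in turn cite concrete evidence such as test
# reports or robustness evaluations. A simplified sketch, not the AMLAS format.

@dataclass
class Evidence:
    description: str   # e.g. "adversarial robustness test report"
    source: str        # where the evidence comes from

@dataclass
class Argument:
    reasoning: str                                   # why the evidence supports the claim
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Claim:
    statement: str                                   # the safety property being asserted
    arguments: list[Argument] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim counts as supported only if every argument cites evidence."""
        return bool(self.arguments) and all(arg.evidence for arg in self.arguments)

claim = Claim(
    statement="The perception model meets its defined performance requirements "
              "in the intended operating environment.",
    arguments=[
        Argument(
            reasoning="Model accuracy was evaluated on representative, "
                      "independently curated test data.",
            evidence=[Evidence("held-out test evaluation report", "internal test run")],
        )
    ],
)

print("Claim supported:", claim.is_supported())
```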

From Advai’s perspective, our ultimate aim is to develop a scalable, automated service platform for testing and assuring AI systems, ensuring their robustness, fairness, and security.

UKRI invest in “research and innovation to enrich lives, drive economic growth” and have selected this partnership because AI Safety and Assurance is an AI-adoption enabler. Through the KTP, Matt will bridge high-level decision makers with evidence-generating tools, enhancing business trust in autonomous AI systems.

Bridging Academia and Industry

"This is deliberately a cross-disciplinary thing.”

The original keystone breakthrough in AI was the development of neural networks, modelled after their biological counterparts in neuroscience.

Genetic algorithms were inspired by Darwin’s theory of evolution.

Breakthroughs in mathematics routinely enhance the field of AI with new ways of codifying the unpredictable or unknowable.

Reinforcement learning is basically a direct rip-off of psychology’s operant conditioning.

And, you’ve heard of Nvidia? Nvidia, the gaming company?

The point we’d like to leave you with is this:

Here we are developing world-leading AI Safety mechanisms, more than sixty years after the first ‘perceptron’ (computer science’s first artificial neuron), and we wonder… could the next breakthrough in AI Assurance come from the self-organising structures found in ecological systems?

A warm welcome to Matthew Lutz, and a thank you to our partners University of York and UKRI.