Understanding multi-agent systems in five minutes

With more than 300 papers and 12 books in his portfolio, Michael Luck, the executive dean of the faculty of natural and mathematical sciences at King’s College London, is pioneering research into how multi-agent systems can be used to solve a huge array of challenges. Here, Michael explains why multi-agent systems are so important to the future of artificial intelligence (AI) with five key quotes:

1. “The standalone computer is no longer exciting to us. What we really want today are computers that work with other computers.”

Michael explains that multi-agent systems are multiple interacting entities achieving tasks that they either couldn’t do by themselves or couldn’t achieve as efficiently. “If I wanted to move a bookcase, I’d have to ask someone else to do it with me and they might not want to. I might try to persuade them, negotiate with them or even threaten them and all of these actions are what’s involved with getting multi-agent systems to work,” he explains. “The focus is to identify what are the techniques for getting people to work together that we need computers to do.”
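The bookcase example can be sketched as a toy negotiation between two software agents. This is only an illustrative alternating-offers sketch, not any specific multi-agent framework; the class, function names and thresholds are invented for the example.

```python
# Toy sketch of one agent persuading another to help with a task,
# assuming a simple escalating-offers protocol. All names and numbers
# are illustrative.

class Agent:
    def __init__(self, name, reservation):
        self.name = name
        self.reservation = reservation  # minimum payoff to agree to help

    def accepts(self, offer):
        """Return True if the offered payoff meets this agent's threshold."""
        return offer >= self.reservation

def negotiate(proposer_budget, responder, step=1):
    """Raise the offer until the responder accepts or the budget runs out."""
    offer = 0
    while offer <= proposer_budget:
        if responder.accepts(offer):
            return offer  # agreement reached at this payoff
        offer += step
    return None  # negotiation failed: responder's threshold exceeds budget

helper = Agent("helper", reservation=3)
print(negotiate(proposer_budget=5, responder=helper))  # → 3
```

The point of the sketch is Michael's: whether by persuasion, negotiation or (here) escalating incentives, each agent acts on its own interests, and the system-level outcome emerges from the interaction.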

2. “One of the most interesting parts of AI is producing systems that are ‘goal based’.”

There is a big difference between systems that are programmed with a specific task and method of how to do it, and ones that are given a goal and are left to figure out how to do it. Michael is interested in the latter. “When you start to get multiple interacting entities that are autonomous, then goals emerge. So for me, autonomy is identifiable through the self-generation of goals,” he says. This is where technology meets psychology and presents some difficult questions. “Should we require computational systems to do what we want? Should they have to comply, or should they be able to challenge us?”
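The contrast between a scripted system and a goal-based one can be made concrete with a small sketch: instead of following a fixed procedure, the agent is handed a goal state and searches for its own action sequence. The world model and actions below are invented for illustration only.

```python
# A toy goal-based agent: given a goal, it searches for a sequence of
# actions to achieve it, rather than executing a pre-programmed recipe.
# The state is a single integer and the actions are illustrative.

from collections import deque

ACTIONS = {"inc": lambda s: s + 1, "double": lambda s: s * 2}

def plan(start, goal, limit=10):
    """Breadth-first search for a shortest action sequence reaching goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        if len(path) >= limit:
            continue
        for name, act in ACTIONS.items():
            nxt = act(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None  # no plan found within the depth limit

print(plan(1, 6))  # → ['inc', 'inc', 'double']
```

A scripted system would have the sequence hard-coded; the goal-based one derives it, which is the distinction Michael draws.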

3. “We have the ability to make systems work together, but we don’t yet have the support from multiple manufacturers.”

When pressed on the challenges AI faces, Michael explains that a far bigger barrier is the hype generated by the perception that AI is a single technology, which creates expectations it cannot live up to. “There’s an awful lot of interest in AI, as there should be, but we need to manage expectations about what these technologies can do.” On the challenge to the technology itself, Michael says that getting manufacturers to allow their machines to speak to one another is another barrier to progress. “In other phases of technological development, we’ve seen people come together to get over these challenges. Sometimes, if that’s done too early it doesn’t solve the problem, and at the moment there’s still much more to be done to understand where we need standards and where we don’t,” he adds.

4. “There should not be a trade-off between academia and industry; it’s about working together and making sure the people who are doing academia are contextualised by industry.”

Last April, the UK Government announced it is investing £1bn in creating 1,000 new PhDs in AI research, but Michael says that the pull of industry is drawing a lot of the talent away from academia. “It’s a challenge for recruitment. We struggle in key areas in AI, such as cybersecurity and data science. But at the same time, there are things you get in academia that you don’t get elsewhere, like the freedom to set up your own project and collaborate with others,” he says.

“I don’t think we’ll ever get as many people as we need, because there’s a demand for highly qualified, highly skilled people in these areas and typically these are not just people with an undergraduate degree, it’s people who really understand the cutting edge. The needs span every industry. There is a need for people.”

5. “We need to work out where the liability for multi-agent systems falls.”

Michael points out that many of these systems cannot yet give guarantees or explanations about their behaviour, which means humans do not have enough confidence in them. “At its core, AI is an aggregation of many different disciplines, including computer science, sociology and psychology. But when you start thinking about the challenges of safety and trust, then you need to bring in lawyers, ethicists and philosophers. We don’t have the answers yet,” Michael says.

For more on multi-agent systems, watch the full video interview with Michael Luck here.

Insights Team
insights@contactengine.com