Overview

The projects we are involved in cover a broad range of topics, generally involving elements of intelligent agents, robotics, and computer vision. If you are interested in these areas, further information about our work is available in the publications and videos sections. Our recent work includes:

Mixed-Reality Robotics

Robots are interesting from an AI standpoint partly because they are embodied solutions: they have to deal with the real world, and they force us as researchers to do the same. At the same time, control and repeatability in research, as well as constructing domains that differ from the physical world, are difficult to achieve well in purely physical settings. We are interested in working with mixed-reality domains, where physical robots interact with both the physical world and a virtual world. This work has bearing on education, on human-robot and human-computer interaction, and on control and repeatability in robotics experiments. Our current work involves using small robots (e.g. ir-toys, Eco-B) on a large, horizontally-mounted LCD. Some of the recent educational uses of this can be seen in class videos for COMP 4060 in the videos section, as well as in research papers in the publications section.
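
As a rough illustration of what a mixed-reality domain involves (and not a description of our actual software), the sketch below assumes an overhead tracker that reports each robot's position over the display, and shows a purely virtual object reacting when a physical robot reaches it; all names and values are hypothetical.

    # Hypothetical sketch of a mixed-reality loop: physical robots are tracked
    # over a horizontal display, and a virtual ball reacts when a robot touches it.
    import math

    DISPLAY_W, DISPLAY_H = 1920, 1080      # display resolution in pixels (assumed)
    ROBOT_RADIUS, BALL_RADIUS = 40, 25     # approximate sizes in pixels (assumed)

    class VirtualBall:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def push_away_from(self, rx, ry, strength=5.0):
            """Move the ball away from a robot that has made (virtual) contact."""
            dx, dy = self.x - rx, self.y - ry
            dist = math.hypot(dx, dy) or 1e-6
            self.x = min(max(self.x + strength * dx / dist, 0), DISPLAY_W)
            self.y = min(max(self.y + strength * dy / dist, 0), DISPLAY_H)

    def mixed_reality_step(tracked_robots, ball):
        """tracked_robots: {robot_id: (x, y)} reported by an overhead tracker."""
        for rid, (rx, ry) in tracked_robots.items():
            if math.hypot(ball.x - rx, ball.y - ry) < ROBOT_RADIUS + BALL_RADIUS:
                ball.push_away_from(rx, ry)
        return ball   # the display would then redraw the ball at its new position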

Humanoid Robotics

The world of our everyday activities is designed ergonomically to make things easy on humans; as such, it is largely designed for bipedal locomotion. While a humanoid form is not necessarily the optimal form for every robotic task (or even most), robots that are meant to function in environments designed for humans will be strongly biased toward similar physical characteristics. Moreover, humans interacting with robots can be more accepting of this technology in humanoid form, leading to further potential applicability of humanoid architectures. Whatever their ultimate range of applicability, however, humanoid robots represent a strong challenge to both hardware design and software control, and the pursuit of good humanoid designs will greatly advance robotic hardware and control technology. We are working on advanced humanoid robot designs both to advance this technology and as entries for international robotics competitions.

Peer Assistance in Multi-Robot Teams

Most practical robotic applications in unstructured environments currently rely heavily on teleoperation, simply because intelligent systems are not yet sophisticated enough to function well autonomously in complex, unforgiving domains. While we are interested in improving human teleoperation of robotic systems, human teleoperators will always be limited in the number of robots they can control. The alternative approach is to leverage the limited abilities of autonomous systems and improve them through teamwork. We are working on approaches that allow peers on a team to assist one another: visually diagnosing problems and offering advice to peers in specific difficulty, as well as improving team coordination by sharing knowledge.

Developing Common Groundings in Multi-Agent Systems

Agents that inhabit a world invariably have repeated interactions with elements such as geographic locations or physical entities. The more frequent the interaction, the more useful it becomes to refer to these elements (e.g. symbolically). Developing common groundings between groups of agents allows a team to function better as a group, by better supporting useful communication. We are working on approaches that allow a team of robots to develop consistent common groundings over time in unstructured environments, allowing a group of agents to adapt to a new environment, or a new agent to adapt to an existing team.
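
As a simple illustration of the idea (not our actual algorithm), the sketch below uses a naming-game style rule: each agent adopts, for each landmark, the label it has encountered most often, so repeated interactions drive a group toward shared symbols. All names here are hypothetical.

    # Hypothetical sketch: agents converge on shared symbolic labels for landmarks
    # by preferring the label they have heard most often for each one.
    from collections import defaultdict
    import random

    class GroundingAgent:
        def __init__(self, name):
            self.name = name
            # counts[landmark][label] = how often this label has been heard/used
            self.counts = defaultdict(lambda: defaultdict(int))

        def label_for(self, landmark):
            """Prefer the most frequently encountered label; invent one if none."""
            labels = self.counts[landmark]
            if not labels:
                label = f"{self.name}-loc{random.randint(0, 999)}"
                labels[label] += 1
            return max(labels, key=labels.get)

        def hear(self, landmark, label):
            self.counts[landmark][label] += 1

    def interact(speaker, hearer, landmark):
        """One grounding interaction about a jointly observed landmark."""
        label = speaker.label_for(landmark)
        hearer.hear(landmark, label)
        speaker.counts[landmark][label] += 1   # speaker reinforces its own choice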

Learning from Others in Social Settings

Agents that learn only from a global teacher take advantage of only a small part of what is available to them: there is a wealth of information available from other learning agents in the community as well. Moreover, there are many real-world situations where a teacher cannot have immediate and constant access to an agent for the purposes of reinforcement. We are working with reinforcement learning techniques that support individuals learning within a collective by reinforcing one another, and also with imitation learning by robots. Each of these requires developing a gradual understanding of who in a population is best learned from, since a range of skills will be evident in a heterogeneous population, no matter what the task. In a robotic environment, differences in physiology further compound the differences already present between agents. Our work in this area involves both reinforcement learning (peer reinforcement) and imitation learning. The latter involves visually recognizing the actions and intentions of others, abstracting these to judge the relative quality of others' performances, and selectively imitating portions of the behavior demonstrated by others. We are currently exploring the application of these techniques in robotic soccer domains.
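
As a minimal sketch of one ingredient of this, selecting whom to imitate, the code below keeps a running estimate of each peer's observed performance and usually imitates the best-rated peer; the names and update rule are illustrative assumptions, not our specific method.

    # Hypothetical sketch of demonstrator selection for imitation learning:
    # keep an exponentially weighted estimate of each peer's observed performance
    # and usually imitate the best-rated peer (with occasional exploration).
    import random

    class DemonstratorModel:
        def __init__(self, learning_rate=0.1, explore=0.1):
            self.quality = {}            # peer_id -> estimated skill
            self.lr = learning_rate
            self.explore = explore

        def observe(self, peer_id, performance_score):
            """Update a peer's estimated quality from one observed episode."""
            old = self.quality.get(peer_id, 0.0)
            self.quality[peer_id] = old + self.lr * (performance_score - old)

        def choose_demonstrator(self):
            """Usually imitate the highest-rated peer; sometimes explore."""
            if not self.quality:
                return None
            if random.random() < self.explore:
                return random.choice(list(self.quality))
            return max(self.quality, key=self.quality.get)

An exponentially weighted estimate lets the agent track peers whose skill changes over time, while the exploration term ensures that newly capable peers are eventually noticed.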

Robotic Rescue

We are interested in developing inexpensive robotic units that can operate in teams for robotic Urban Search and Rescue. Inexpensiveness as a design criterion means that we can potentially field large numbers of individuals to take full advantage of the power of teamwork, and also that individuals can be considered expendable in dangerous domains. We have had a number of entries in previous AAAI, IJCAI, and RoboCup rescue competitions, some of which can be seen in the videos section of this site.

Trust and Reputation-Building in Multi-Agent Systems

Both of the above areas rely on agents knowing who is likely to be fruitful to interact with, whose information they are likely to find accurate, and who is likely to behave in predictable ways. These are just some of the issues involved in building trust over time and using it to prevent negative interactions between agents and foster positive ones. We are developing practical models of these concepts for use in real-time agents in complex domains, in order to support work in the above areas.
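
As an illustration of the kind of model involved (not necessarily the one we use), the sketch below maintains a beta-reputation style trust value: count positive and negative interaction outcomes per agent and take the resulting expected value as trust.

    # Hypothetical beta-reputation sketch: trust in another agent is the expected
    # value of a Beta distribution over its positive/negative interaction history.
    class Reputation:
        def __init__(self):
            self.positive = {}   # agent_id -> count of good outcomes
            self.negative = {}   # agent_id -> count of bad outcomes

        def record(self, agent_id, outcome_good):
            table = self.positive if outcome_good else self.negative
            table[agent_id] = table.get(agent_id, 0) + 1

        def trust(self, agent_id):
            """Expected trust in [0, 1]; 0.5 for an unknown agent."""
            p = self.positive.get(agent_id, 0)
            n = self.negative.get(agent_id, 0)
            return (p + 1) / (p + n + 2)     # mean of Beta(p + 1, n + 1)

The "+1" terms act as a neutral prior, so an agent with no history starts at a trust of 0.5 rather than zero.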

Team and Coalition Formation

Much multi-agent systems research involves improving the performance or abilities of teams of agents. Comparatively little has been done on the criteria that make it advantageous to join or form a team, and on the conditions under which agents maintain teams while functioning both as individuals and as part of a group. We are studying these conditions and giving software and hardware agents the ability to form teams wisely (and to learn to form teams), and to adapt in selecting the other agents with which they interact. This includes strategies for maintaining coalitions in the face of mistrust, deception, robotic failure, and incompetence, as well as for deciding when coalitions should be allowed to break down. This work is important for electronic commerce and other market-based applications, as well as for physical robots, and will allow intelligent agents to form and maintain useful social networks in a flexible manner, similar to what we observe in humans.
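
A toy version of such a criterion, with invented quantities, might look like the following: join a coalition when the expected share of its value, discounted by trust in the other members, beats acting alone plus the cost of coordination.

    # Hypothetical sketch of a join/stay decision for coalition membership.
    def should_join(expected_coalition_value, n_members, solo_value,
                    coordination_cost, trust_in_members):
        """Join if the expected per-member share, discounted by how much the
        other members are trusted, beats acting alone plus coordination cost."""
        share = expected_coalition_value / (n_members + 1)   # agent would be a new member
        return trust_in_members * share > solo_value + coordination_cost

    def should_stay(recent_share, solo_value, coordination_cost):
        """Allow the coalition to break down once it has stopped paying off."""
        return recent_share > solo_value + coordination_cost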

Intelligent Vision Servers

Vision is our richest sense, and the most difficult to make use of in a robotic environment. We are interested in developing vision systems for both specific applications (e.g. stereo vision in robotic rescue) and general environments. The latter work has resulted in the development of two global vision servers, which are used in robotics competitions such as RoboCup and FIRA. Both Doraemon and Ergo are available in our downloads section.

Anticipation and Teleautonomy in Multi-Agent Systems

Communication between teammates (human or robot) is costly both in terms of information processing and in terms of security and stealth. We are investigating the use of environmental cues (stigmergy), nonverbal expression, and models of peers or opponents (including their abilities and reputation) to anticipate future actions and minimize the communication necessary to coordinate groups of interacting agents. This includes teleautonomous situations, where a robot with autonomous abilities receives asynchronous commands from a human user, which it must integrate with its own perceptions and goals.
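
As a small, hypothetical sketch of the teleautonomous side of this, the code below blends the most recent human command with the robot's autonomous choice, with the human's influence fading as the command grows stale; the weighting scheme is an assumption for illustration only.

    # Hypothetical command-blending sketch for teleautonomy: the robot mixes the
    # most recent human command with its autonomous choice, and the human's weight
    # decays as that command becomes stale.
    import time

    class BlendedController:
        def __init__(self, human_weight=0.8, decay_seconds=5.0):
            self.human_cmd = None          # (linear, angular) velocities
            self.cmd_time = 0.0
            self.w = human_weight
            self.decay = decay_seconds

        def human_command(self, linear, angular):
            self.human_cmd, self.cmd_time = (linear, angular), time.time()

        def control(self, autonomous_cmd):
            """Blend the stored human command (if fresh) with the autonomous one."""
            if self.human_cmd is None:
                return autonomous_cmd
            age = time.time() - self.cmd_time
            w = self.w * max(0.0, 1.0 - age / self.decay)   # stale commands fade out
            return tuple(w * h + (1.0 - w) * a
                         for h, a in zip(self.human_cmd, autonomous_cmd))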

Real-Time Implicit Coordination in Multi-agent Systems

In complex problems, intelligent computational problem-solving agents must interact to jointly construct solutions in real time. Such agents must also be able to minimize interference from others, and assist others when they are able. In such domains, communication is a necessary part of social interaction: we inform others of our intentions, warn them of impending danger, or specifically request information. Communication is not always possible, however, and where it is, it is often expensive in terms of data transfer as well as agent attention.

This is also the case in much human activity: we do not broadcast complete information about our activities to those around us. Instead, others are expected in many cases to infer the course of our activity in order to avoid interference or offer cooperation. This does not remove communication entirely, but drastically reduces it. This research employs a constraint-directed model of behavior within an agent that allows it to make these types of inferences and increase cooperative behavior in a complex, real-time environment with a minimum of communication cost.

This research is important because, while it has been shown that the use of communication improves (in some cases drastically) the ability of agents to achieve shared tasks and to avoid interfering with one another, there are many cases where such communication is expensive physically or in terms of the time that can be devoted to processing it. There are also many settings where the number of agents and the traffic involved make communication a major factor in providing a timely solution (e.g. internet-based agents). Agents must balance the utility of communication against its cost, as well as against the time spent recognizing other agents' intentions in order to avoid communication. Overall, the ability to deal effectively with others with a minimum of communication will allow intelligent agents to operate more effectively and less expensively.
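
The communication trade-off in the last paragraph can be written as a simple decision rule; the sketch below is an illustration with assumed quantities, not the project's actual model: send a message only when the expected benefit of informing teammates, which shrinks as the chance they can infer the intention unaided grows, outweighs the message's cost.

    # Hypothetical sketch of the communicate-or-infer trade-off: an agent sends a
    # message only when the expected gain from informing teammates outweighs the
    # cost of sending and processing it.
    def should_communicate(p_infer, value_if_known, miscoordination_cost,
                           message_cost):
        """p_infer: estimated probability teammates infer the intention unaided.
        value_if_known: team benefit when the intention is known to teammates.
        miscoordination_cost: expected loss if teammates fail to infer it.
        message_cost: bandwidth/attention cost of sending the message."""
        expected_gain = (1.0 - p_infer) * (value_if_known + miscoordination_cost)
        return expected_gain > message_cost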

Faculty

  • Dr. John Anderson
  • Dr. Jacky Baltes

Research Associates

  • Dr. Meng Cheng Lau, Post-doctoral Fellow

Current Graduate Students

  • Hisham Alawi (M.Sc.)
  • Fred Comeau (Ph.D.)

Current Undergraduate Students

  • Li Borui
  • Mario Mendez Diaz
  • Chi Fung (Andy) Lun
  • Christian Melendez Gallegos
  • Kyle Morris
  • Louis O'Connor
  • Kurt Palo
  • Vlad Samonin
  • Ziang (Daniel) Wang

Collaborators

Alumni (Grad Students, Undergrads, Post-Docs)

  • Amir Hosseinmemar, Ph.D.
  • Olayinka Basheer Adelakun
  • Qaiser Ahsan
  • Abdul-Rasheed Audu
  • Roushain Akhter
  • Suhad Alharbi
  • Jeff Allen
  • Jonathan Bagot
  • Simon Barber-Dueck
  • Ahmad Byagowi
  • Diana Carrier
  • Chi Tai Cheng
  • Derek Cormier
  • Michael de Denus
  • Yuan Ding
  • Lorisa Dubuc
  • Barrett Ens
  • Seth Fiawoo
  • Paul Furgale
  • Richard Galka
  • Mike Gauthier
  • Tyler Gunn
  • Ayobami Ige
  • Chris Iverach-Brereton
  • Joshua Jung
  • Meng Cheng Lau
  • Shunjie Lau
  • Terry Liu
  • Tiago Martins Araújo
  • Sancho McCann
  • Sara McGrath
  • Dan Messing
  • Brian McKinnon
  • Geoff Nagy
  • Samson Ootoowak
  • Chad Peters
  • Kiral Poon
  • Iran Rocha
  • Sibendu Sarkar
  • Shawn Schaerer
  • Stela Hanbyeol Seo
  • Shachi Singh
  • Nicole Storen
  • Brian Tanner
  • Mike van de Vijsel
  • Nathan Wiebe
  • Andrew Winton
  • Ryan Wegner, Ph.D.
  • Byan Wodi
  • Alf Wurr
  • Shane Yanke
  • Long (Will) Yu

Prospective Students

We have opportunities for good graduate students in these and related areas. Different thesis topics demand different skills and interests (e.g. software agents are very different from robotic design!). In general, however, we are interested in students who can demonstrate an interest in (and as much experience as possible with) artificial intelligence and a wide array of programming languages, as well as students with a background in electronics, mechanical design, and robotic hardware. All students, irrespective of background, must satisfy the Department of Computer Science and Faculty of Graduate Studies requirements, which means you must have a broad background in Computer Science as well. If you are interested, please contact us - but please read the FAQ entry on this subject first.

We will be attending RoboCup 2018 in Montreal, the first time RoboCup has come to Canada!

The Autonomous Agents Laboratory is one of the research laboratories within the Department of Computer Science at the University of Manitoba, and is directed by Dr. John Anderson and Dr. Jacky Baltes. The goal of our work is the improvement of technology surrounding hardware and software agents as well as the development of applications employing these technologies. We are especially interested in cooperation in multi-agent settings, and the infrastructure necessary to support this and other forms of social interaction in intelligent systems.