Original draft from AAAI Technical Report WS‐94‐02. Presented at the International Workshop on
Distributed Artificial Intelligence, Lake Quinalt, Washington, USA 1994.
Introspection in Cooperative Intelligent
Software Agents
Part I
Cindy L. Mason
Artificial Intelligence Research Branch
NASA Ames Research Center, Mail Stop 269‐2, Moffett Field, California 94305 USA
Email: mason@ptolemy.arc.nasa.gov
Phone: 415.604.0305
Fax: 415.604.3594
Contact: cindymason@media.mit.edu
Abstract
Introspection refers to an agent’s meta‐cognition and knowledge of its own beliefs. In this
paper we explore the use of introspection in cooperative distributed agent systems using
result-sharing abductive or assumption-based reasoning. The work is in two parts. Part I
describes a multi‐agent model of introspection using a machine theoretic description and
discusses the issue of belief consistency across cooperating distributed agent systems. It
also presents introspection as a mechanism for control and preventing distraction. Part II
describes a 20 man-year effort building a multi-agent system that uses introspection in a
distributed sensor network application for monitoring anomalous environmental
events. The bulk of Part II describes experiments on various multi-agent communication
schemes with and without introspection.
1 Introduction
Software agents that cooperate in distributed computing environments have proven to be a
powerful paradigm in which AI systems can cooperatively solve problems on loosely‐
coupled network computer architectures. Examples of such problems include 24/7
distributed sensor network monitoring systems [1], automatic scheduling for a global
network of robotic telescopes [2], cooperative robots for hostile environments and more
broadly, the Internet of Things (IoT).
Cooperation among intelligent agents in a cooperative distributed problem solving (CDPS)
system may be coarsely divided into task-sharing and result-sharing, according to the two
forms of cooperation they model. In the former, agents cooperate by sharing the
decomposition, assignment and solution of subproblems. In the latter, agents must not
only provide solutions to subproblems, but must also reason about when subproblem
solutions should be sent, which subproblem solutions should be sent, and to whom they
should be sent. Here we focus on the result‐sharing class of CDPS agent systems.
As developers of result‐sharing CDPS systems, we are concerned primarily with
communication between agents. Effective coordination of problem-solving agents cannot be
achieved by a single message exchange. Instead, communication occurs through protocols
for informing, requesting, and convincing. Communication can be expensive – not
just latency and transmission and reception overhead, but an arriving message may cause a
flurry of computation to ensue. Result‐sharing agents are subject to distraction and may
waste valuable resources exploring incoming results that may turn out to be useless or
repetitive. Hence, architects and programmers strive to limit communication – designing
agents to make reasoned decisions about which agents to send results to, what results are
relevant, and when they should be sent. So far, research towards controlling
communication among result‐sharing agents has largely focused on techniques for
restricting the transmission of messages – organization structuring, meta‐plans, planning,
etc. From these trends one might conclude that the best form of communication in result-
sharing networks is a narrowcasting of results toward agents in need. To our surprise,
we found that if the recipient agents used introspection, we were able to obtain the
increase in solution quality payoff that comes with broadcast communication strategies
while paying little cost in the way of wasted or repetitive problem‐solving.
The material of the paper is divided into two parts. Part I presents the introspective agent
model using a multi-agent machine-theoretic description of mental state and introspection
in the context of cooperative result-sharing abductive or assumption-based belief agents.
Using the introspective agent model, it addresses the concepts of physical, logical and
semantic consistency among multiple intelligent agents. Part II gives the implementation of
the software agents and describes our communication experiments, run with a Multi-Agent
Test Environment, MATE [3], at the NASA Ames AI Center with 5, 7, 11 and 15 agents, using
the collaborative agent environmental monitoring software developed over a 20 man-year
effort at Lawrence Livermore National Laboratory [1][4].
2 The Introspective Agent Model
The informal conceptual model of our introspective agents focuses on the concept of
belief. It has been reasonably argued by Konolige [5] that an agent’s belief system should
be portrayed as a separate component of an agent’s cognitive architecture. As shown in
Figure 1, AI is an agent composed of BI, a belief subsystem; II, a subsystem for
performing inferences; and CI, a communications interface for interacting with other
agents. The belief system BI consists of a finite list of facts the agent initially believes and a
belief updating mechanism. The belief subsystem BI interacts with the other subsystems
of AI as a fact repository, accepting propositions from the inferencing subsystem II and
the communications interface CI, and as a query-answering device for II.
Figure 1. Agent AI with three main architectural components: Communication CI,
Beliefs BI, and Inferencing component II.
Figure 2. Belief component BI of Agent AI, structured as a query-answering device and
expanded with the machine-theoretic view.
As shown in Figure 2, the belief subsystem of agent AI can be described as a two-level
introspective machine, with an introspective component IMI and the heart of the belief
machinery, MI, that maintains consistency and determines proposition membership. This
model is similar in design to the conceptual models used by Konolige [5] to describe
introspection in single agents. The queries presented to the belief subsystem are posed in
some language L whose exact form is inconsequential except that there must be an explicit
reference to an agent's own state of belief. Adapting the notation of Konolige [5], we
represent these expressions by the form ☐ ɸ which means agent AI believes ɸ to be one of
its beliefs. In general, ɸ may represent conclusions drawn by AI or another agent, AK, as
when ɸ has been communicated. We use the subscript notation ɸX to represent a
proposition originating from Agent AX when the agent origin of ɸ bears relevance to the
discussion.
The response of the introspective belief machine IMI is based on matching a query
against the agent’s current belief list1. Queries of the form ☐ ɸ presented to IMI can be
answered by presenting ɸ to the machine MI. This formulation of computational
introspection is intuitively appealing as it appears to mimic human introspection. While
multiple levels of introspection are possible, for our purposes a single level of
introspection suffices.
In general, when faced with the query ☐ ɸ, IMI poses the query ɸ to MI, and simply returns
yes if MI says yes, and no if MI says no. From the cognitive perspective of the agent, "yes"
means that the agent has considered its set of beliefs and has concluded that it believes ɸ,
and therefore believes that it believes ɸ. In other words, ☐ ɸ is one of the agent's beliefs.
On the other hand, when presented with ~ ☐ ɸ, IMI will respond no if MI says yes, and yes if MI
says no. Now, "yes" means that the agent believes that it disbelieves ɸ. The agent doesn't
believe that it believes ɸ, so ~ ☐ ɸ is a belief.
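As a concrete illustration, the two-level query answering described above can be sketched in Python. The class and method names (BeliefMachine, IntrospectiveMachine, and so on) are our own illustrative choices, not part of the paper's formal model:

```python
class BeliefMachine:
    """M: the heart of the belief machinery. Answers a query phi by
    matching it against the agent's current belief list (footnote 1)."""

    def __init__(self, beliefs):
        self.beliefs = set(beliefs)

    def query(self, phi):
        return phi in self.beliefs


class IntrospectiveMachine:
    """IM: the introspective component. Answers queries about the
    agent's own state of belief by consulting M."""

    def __init__(self, machine):
        self.machine = machine

    def query_box(self, phi):
        # Query "box phi": yes iff M says yes -- the agent believes
        # that it believes phi.
        return self.machine.query(phi)

    def query_not_box(self, phi):
        # Query "~box phi": yes iff M says no -- the agent believes
        # that it disbelieves phi.
        return not self.machine.query(phi)


# A single level of introspection over a small belief list.
m = BeliefMachine({"anomaly-at-sensor-7"})
im = IntrospectiveMachine(m)
print(im.query_box("anomaly-at-sensor-7"))    # True: box phi holds
print(im.query_not_box("quiet-at-sensor-9"))  # True: ~box phi holds
```

The sketch deliberately answers by list membership rather than derivation, in line with footnote 1.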
We have now defined an agent's cognitive outlook in terms of its state of belief or disbelief
1
Alternatively, a belief machine may try to derive ɸ; however, without a mechanism to
incorporate some notion of limited resources, such machines are undecidable.
in the proposition ɸ. Together the set of believed facts and the set of disbelieved facts
constitute what is known by the agent. We may define the set of known facts as the set of ɸ
in A’s memory that satisfy a query in L of the form ☐ ɸ ∨ ~ ☐ ɸ. We define A’s set of "known"
ɸ with modal operator ◊ as follows:
◊A = { ɸ | ☐ ɸ ∨ ~ ☐ ɸ }
that is, ◊A is the set of all ɸ known to the agent independent of whether those ɸ are believed
or not. They exist in the agent's cognitive memory. By definition then, the unknown facts can
be described as ~◊A where
~◊A = { ɸ | ~(☐ ɸ ∨ ~ ☐ ɸ) }
It follows that ~◊A can be used to describe what an agent does not know at any one level of
cognition. There can, in theory, be an infinite number of meta‐cognition levels, but for our
purposes we use only one. The characterization of an inquiry into many levels of cognition
is the subject for further research.
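Under this model, the known set ◊A and its complement can be read directly off the agent's memory: a proposition is known exactly when it is present in memory with some belief status, believed or disbelieved. A minimal sketch (the class and names are ours, for illustration only):

```python
class CognitiveMemory:
    """Tracks which propositions an agent holds in memory, and with
    what status, so membership in ◊A and ~◊A is answerable by inspection."""

    def __init__(self):
        self.believed = set()     # phi satisfying box phi
        self.disbelieved = set()  # phi satisfying ~box phi

    def known(self):
        # ◊A: everything in cognitive memory, believed or disbelieved.
        return self.believed | self.disbelieved

    def is_unknown(self, phi):
        # phi in ~◊A: phi does not appear in cognitive memory at all.
        return phi not in self.known()


mem = CognitiveMemory()
mem.believed.add("p")
mem.disbelieved.add("q")
print(sorted(mem.known()))  # ['p', 'q']
print(mem.is_unknown("r"))  # True
```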
3 Societies of Agents, Mental States and Consistency
We now consider the case when our agent functions as part of a group of communicating
agents, such as in collaborative data analysis, search, monitoring, etc. When there are a
number of result‐sharing agents, propositions in the agent's cognitive state/belief system may
occur not only as a result of sensing or local computation/inferencing but also as a result of
communication. For instance, Agent AI believes ɸK , or ☐I ɸK , as a result of a previous
communication of ɸK by Agent AK. The set of propositions an individual agent believes and
works with includes not only locally generated propositions but also propositions
generated by other agents 2. It follows that agents may also work explicitly with the state of
belief of the propositions of another agent. For example, agent AI can pose a query about whether
it believes Agent AK believes ɸK , or ☐I (☐K ɸK), where I ≠ K.
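Footnote 2's origin tags suggest one simple way an agent might answer such a nested query: check for a stored copy of ɸ tagged with AK as its agent of origin. A hypothetical sketch (the Prop and Agent classes are illustrative, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:
    content: str
    origin: str   # tag naming the agent of origin (footnote 2)

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = set()

    def derive(self, content):
        # Locally derived propositions carry this agent's own tag.
        self.beliefs.add(Prop(content, self.name))

    def tell(self, other, content):
        # Communicated propositions keep their origin tag, so the
        # receiver maintains the distinction between self and others.
        other.beliefs.add(Prop(content, self.name))

    def believes_about(self, other_name, content):
        # Approximates box_I (box_K phi_K): I holds a copy of phi
        # tagged as originating from K.
        return Prop(content, other_name) in self.beliefs

a_i, a_k = Agent("I"), Agent("K")
a_k.derive("phi")
a_k.tell(a_i, "phi")
print(a_i.believes_about("K", "phi"))   # True
```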
When agents use common-sense or abductive reasoning about what is usual or normally the case,
they rely on the use of beliefs about assumptions or default ɸ in the face of incomplete
information. When new information comes in over time, the conclusions that an agent holds
based on ɸ and other beliefs may be retracted subsequent to the communication of those
beliefs3. Using our model with Agents I and K the situation may be described as
☐I ɸK & ~ ☐K ɸK
2
We note that an agent distinguishes its locally obtained or derived ɸ in memory using tags
indicating the agent of origin. This allows an agent to maintain a distinction among self and
others.
3 The reason for subsequent retraction and the global solution requirement to maintain
coherency can vary across applications and agent capabilities. For now we primarily consider
the issue of belief revision.
where Agent AI believes ɸK but Agent AK no longer believes ɸK .
This form of incoherency among result‐sharing abductive or assumption‐based
reasoning agents occurs as a result of physical inconsistency, where two agents with
separate mental states using shared copies of the same belief for problem solving can
become out of synch. This can happen when the originating agent AK has not yet
notified Agent AI of its change of “mind”. In general, this type of incoherency may be
addressed by using a global truth (belief) maintenance system that guarantees physical
global consistency [6]. The distributed TMS systems of [6] and [7] both address this
form of inconsistency.
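The window of physical inconsistency can be simulated with asynchronous message delivery: the retraction notice exists but has not yet been processed. A toy sketch under those assumptions (all names are ours), with a global-TMS-style propagation step restoring agreement:

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = set()
        self.inbox = []           # asynchronous: messages queue up here

    def send(self, other, kind, phi):
        other.inbox.append((kind, phi))

    def process_inbox(self):
        # TMS-style belief updating from queued assert/retract notices.
        for kind, phi in self.inbox:
            if kind == "assert":
                self.beliefs.add(phi)
            else:  # "retract"
                self.beliefs.discard(phi)
        self.inbox.clear()

a_k, a_i = Agent("K"), Agent("I")
a_k.beliefs.add("phi")
a_k.send(a_i, "assert", "phi")
a_i.process_inbox()               # I now shares K's belief in phi

a_k.beliefs.discard("phi")        # K changes its "mind" ...
a_k.send(a_i, "retract", "phi")   # ... and queues a retraction notice

# Physically inconsistent window: box_I phi & ~box_K phi.
print("phi" in a_i.beliefs, "phi" in a_k.beliefs)   # True False

a_i.process_inbox()               # propagation restores consistency
print("phi" in a_i.beliefs)                          # False
```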
Incoherency among result‐sharing abductive or assumption‐based agents may also
take the form of semantic or logical inconsistency. This type of inconsistency occurs
when two agents’ beliefs semantically or logically disagree about a particular
conclusion or solution to a subproblem. Logical inconsistency is straightforward and
readily detectable in reasoning systems, while semantic inconsistency is usually
knowledge based or domain specific. We focus primarily on semantic inconsistency.
Our experience indicates semantic inconsistency arises, for example, in
environmental monitoring systems when physically distributed agents have separate
data streams and local knowledge or in applications where agents share emotional
beliefs. The independent but different beliefs (meaning) each agent has about the data
may be a bug or a feature depending on the application and knowledge of the agents.
Either way, we must look closer at how the inconsistency plays out among the agents
and the belief revision systems they require for coherence. For example, Agent
AI may believe ɸ while Agent AK disbelieves it:
For AI : IMI(ɸ) : Y
For AK : IMK(ɸ) : N
More generally, semantic incoherency may be described as AI: ☐I ɸ and AK : ☐K Ω where
ɸ and Ω are incompatible according to domain knowledge. The inconsistency may be
detected when one of agents AI or AK communicates to the other (e.g., AI sends ɸ to AK)
or when AI and AK each report beliefs ɸ and Ω to a shared belief system.
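Because semantic incompatibility is knowledge based, detection typically reduces to checking communicated beliefs against a domain-specific table of incompatible conclusions. A minimal sketch; the incompatible pair below is a hypothetical example loosely inspired by the seismic-monitoring domain of [1], not a table from the paper:

```python
# Hypothetical domain knowledge: pairs of conclusions that cannot
# both hold for the same event.
INCOMPATIBLE = {
    frozenset({"event-is-earthquake", "event-is-explosion"}),
}

def incompatible(phi, omega):
    """True when phi and omega semantically disagree per domain knowledge."""
    return frozenset({phi, omega}) in INCOMPATIBLE

def detect_on_receipt(local_beliefs, incoming):
    """Detect semantic inconsistency when another agent communicates a
    result: return the local beliefs that conflict with it, if any."""
    return {phi for phi in local_beliefs if incompatible(phi, incoming)}

beliefs_i = {"event-is-earthquake", "depth-is-shallow"}
print(detect_on_receipt(beliefs_i, "event-is-explosion"))
# {'event-is-earthquake'}
```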
Whether or not the disagreement should be corrected or permitted, that is,
whether or not the inconsistency can be characterized as a form of incoherency,
depends upon the characteristics of the problem domain. An agent designer's
demands for inter‐agent consistency are based on overall problem decomposition,
how individual subproblems are solved, and how subproblem solutions are
synthesized to form the final network solution. These aspects of distributed problem
solving differ greatly from one application to the next. Where problem decompositions
give rise to subproblems requiring agreement, the inconsistency among beliefs of
agents assigned to the subproblems can be viewed as incoherency. However, in many
cases, it is important to preserve a "difference of opinion", as when several agents
bring complementary perspectives to bear on a problem.
If Agent AI holds a belief ɸ that is semantically incompatible to the belief Ω held by
Agent AK, then either (a) one of AI or AK must revise its beliefs or (b) agents need a
representation of two distinct belief spaces in order to consider the other agent’s
“perspective” – one belief space representing AI's assumptions and beliefs, and the
other representing AK's assumptions and beliefs. So each agent effectively has a partial
representation of the other’s mental state4. If Agent AI or AK revises its beliefs,
distributed truth maintenance systems enable us to virtually consider the agents to be
reasoning across overlapping multi-context belief spaces. Justification-based global
truth maintenance systems, such as the DTMS [6], enforce logical consistency among
agents sharing the same belief space. A global JTMS system guarantees:
If AI : ☐ ɸ then ∀ AK (K ≠ I), ɸ ∈ ◊AK ⇒ IMK(☐ ɸ) : Y
That is, if Agent AI believes ɸ then every agent in the cooperative agent system that knows ɸ
also believes ɸ. Assumption‐based global truth maintenance systems, such as the
DATMS[7], allow agents to reason with multiple, possibly conflicting, semantic views
or sets of beliefs at once. 5
Figure 3: In a global ATMS, belief spaces of two agents, each capable of maintaining multiple
perspectives or spaces, some relating to other agents’ spaces. The left agent’s space 2 corresponds
to the right agent’s space 2’, and the right agent’s space 3 corresponds to the left agent’s space 3’.
Unlike the DTMS, the consistency mechanisms of a global ATMS system guarantee
only that conflicting beliefs may not be combined to create new beliefs. This is
accomplished by giving each agent any number of belief spaces, each internally
consistent although the union of spaces may be inconsistent. These belief spaces may
represent several alternative views the agent itself is considering or the alternative
views of its fellow agents. The basic idea is illustrated in Figure 3. The agent on the left
currently maintains two belief spaces of its own, numbered 1 and 2, and one,
4
In agents equipped to solve problems using multiple contexts or worlds, or that are
required to generate all solutions, the ability to create belief contexts is already part of
the agent programming.
5 As a result of agent distribution, it is possible that incompatible beliefs may be derived by
two agents but not be detected unless an incompatible fact is communicated and therefore
observed by one of the agents.
numbered 3', that was communicated from the agent on the right. The agent on the
right has 3 belief spaces as well; 3 and 4 are its own beliefs, while 2' was
communicated.
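The arrangement of Figure 3 can be sketched directly: each agent keeps several labeled belief spaces, each internally consistent, and communicating a space installs a primed copy at the receiver. The class, labels, and example beliefs below are ours, for illustration:

```python
class SpacedAgent:
    """An agent holding multiple belief spaces, as in a global ATMS.
    Each space is internally consistent; their union need not be."""

    def __init__(self, name):
        self.name = name
        self.spaces = {}   # label -> set of beliefs

    def add_space(self, label, beliefs):
        self.spaces[label] = set(beliefs)

    def communicate_space(self, label, other):
        # The receiver stores a primed copy of the sender's space,
        # preserving the distinction between self and other.
        other.spaces[label + "'"] = set(self.spaces[label])

left, right = SpacedAgent("left"), SpacedAgent("right")
left.add_space("1", {"a"})
left.add_space("2", {"b"})
right.add_space("3", {"c"})
right.add_space("4", {"not-b"})      # may conflict with left's space 2

left.communicate_space("2", right)   # right gains space 2'
right.communicate_space("3", left)   # left gains space 3'
print(sorted(left.spaces))    # ['1', '2', "3'"]
print(sorted(right.spaces))   # ["2'", '3', '4']
```

Nothing here prevents the union of spaces from being inconsistent; only combination of conflicting beliefs into a single space would be blocked in a real ATMS.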
4 Introspection as Control
Coherency and performance in a result‐sharing agent network are threatened by a
number of problems related to the fact that an agent unwittingly responds to
patterns of facts placed in its working memory by another agent. For example, agents
may increase their belief spaces and as a result overload their working memory,
increasing the cost of belief revision and pattern matching operations.
Methods for curbing incoherency focus on giving the sending agent social and
organizational intelligence – agent focused representation and reasoning structures
to drive communication decisions about what results to send, to whom they should
be sent, and when. Techniques for intelligent transmission of results range from
organizational structures, communication of meta plans, and planning, to actually
simulating the reasoning of another agent to predict the problem solving relevancy of
results. While the creation of an agent that can intelligently transmit results is a
worthwhile goal, our experiments indicate that increases in coherency may also be
achieved in certain classes of result‐sharing systems using introspection mechanisms
designed for the receiving agent, thus relieving sending agents of a computational
burden.
By using the DATMS we maintain not only multiple belief spaces across agents but
also a history of the state of belief within any space, so states of belief and states of
knowledge are explicit and available to the agent. Using a DATMS we give agents
the ability to reason not only about the propositions they believe, but also about the state
of those beliefs over time. Thus an agent can explicitly pattern match on its own state
of knowledge. This is a powerful mechanism for accessing the full history of what an
agent has known during the course of problem solving. When implementing a result‐
sharing computational agent, the developer may use this mechanism to guide an
agent's pattern matching over incoming results, thereby reducing the incoherency in
result‐sharing systems.
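On the receiving side, this filtering amounts to checking each incoming result against the agent's recorded history of knowledge states before letting it trigger further problem solving. A minimal sketch (the class is ours, not the DATMS API):

```python
class IntrospectiveReceiver:
    """Filters broadcast results by pattern matching on the agent's
    own history of knowledge states."""

    def __init__(self):
        self.current = set()   # results under active consideration
        self.history = set()   # every result ever known (DATMS-style record)

    def accept(self, phi):
        # Introspective check: have I ever known phi?
        if phi in self.history:
            return False        # repetitive result: ignore, avoid distraction
        self.current.add(phi)
        self.history.add(phi)
        return True             # novel result: worth problem-solving effort

recv = IntrospectiveReceiver()
broadcast = ["r1", "r2", "r1", "r3", "r2"]   # unselective broadcast stream
accepted = [phi for phi in broadcast if recv.accept(phi)]
print(accepted)   # ['r1', 'r2', 'r3'] -- duplicates filtered out
```

Under this scheme senders may broadcast freely, since repetitive results are rejected cheaply at the receiver rather than triggering a flurry of computation.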
In Part II, we describe our implementation of CDPS result-sharing assumption-
based reasoning agents for a global environmental sensing and
monitoring application. It presents the performance data of the agent network using
various cooperation and communication strategies with and without introspection
using 5, 7, 11, and 15 agents using the Multi‐Agent Test Environment, MATE [3]. The
conclusions of Part II are that, contrary to the temptation to build cooperative agent
systems with mechanisms that support intelligent message transmission, there
exists a class of cooperative agent systems in which agents may broadcast results
without regard to selection when the receiving agents use introspection and a history
of knowledge state to focus problem solving.
References
[1] C. Mason, Cooperative seismic data interpretation for nuclear test ban treaty
verification, Intl J. Applied AI, Vol. 9, no. 4, pp. 371‐400, 1995.
[2] C. Mason, Collaborative Networks of Independent Automatic Telescopes
Optical Astronomy from the Earth and Moon, ASP Conference Series, Vol. 55,
Diane M. Pyper and Ronald J. Angione (eds.) 1994.
Available free at the Harvard-Smithsonian Center for AstroPhysics Digital Archives
http://adsabs.harvard.edu/abs/1994ASPC...55..234M
[3] MATE‐ A Multi‐Agent Test Environment, Technical Report FIA‐93‐ 09, NASA Ames
Research Center, AI Research Branch.
[4] C. Mason, An Intelligent Assistant for Comprehensive Test Ban Treaty Verification,
International Joint Conference on Artificial Intelligence, Workshop on AI and the Environment,
Montreal, Canada, 1995, pp. 118-140. Also see IEEE Expert 01/1995; 10:42-49.
[5] Konolige, K., A Deductive Model of Belief, Proceedings of the Eighth International Joint
Conference on Artificial Intelligence, Volume 1, pp. 377-381, 1983.
[6] Bridgeland, D. M. & Huhns, M. N., Distributed Truth Maintenance, Proceedings of AAAI-90:
Eighth National Conference on Artificial Intelligence, 1990.
[7] Mason, C. and Johnson, R. DATMS: A Framework for Assumption Based Reasoning, in
Distributed Artificial Intelligence, Vol. 2, Morgan Kaufmann Publishers, Inc., 1989.