AI-BASED AGENT LEVEL REASONING ENHANCING MODULES

Introduction

Every robot in an application is expected to act as an autonomous software agent: it must make sure that the robot executes the tasks that have been assigned to it by the operator, while at the same time interacting appropriately with its environment. The latter consists not only of objects and/or other computer-controlled devices, but also of the robot operator, and even of people who are not involved in the agent's behaviour.

The decisions that the robot agent makes autonomously all concern what action the robot should take next, given the current situation. That situation is itself the interaction between (at least) the following types of activities: perception, control, task execution, and world model updating.

Technical Specifications

Software and hardware Requirements

Each of the above-mentioned activities will be realised by (at least) one software component. (At the level of the agent, the hardware is already abstracted away via sub-components of the four presented types of activities.)

Usage Manual

The robot agent is used by several “stakeholders”. The sections below give more explanation about these interactions.

Use Case 1

Use Case Diagram

Use case diagram of agent-level reasoning

Usage Models

As described in the glossary section, the “agent” is the software activity that must decide about the next action that the robot should execute. In that decision, it has access to the following complementary sources of knowledge/information: the task that the robot has to perform, the knowledge about situations for which tasks have been defined, and the available perception information.

The usage diagram depicts the components only at a high level and must be instantiated more concretely for each particular application context. The importance of the diagram is to make developers aware of (i) the complementary sources of information that provide the context of the reasoning, and (ii) the need to get properties from that context to fill in (“configure”) the instances of the concrete actions that the agent reasons about.
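Point (ii) can be illustrated with a small sketch. The action template, the property names and the `configure` helper below are hypothetical, since the diagram deliberately leaves the concrete instantiation to each application:

```python
# Hypothetical sketch: an abstract action template whose open parameters are
# "configured" with properties queried from the reasoning context.
action_template = {"name": "grasp", "params": {"target": None, "max_force": None}}

# Properties gathered from the complementary information sources
# (e.g. world model, perception, task knowledge).
context = {"target": "cup_01", "max_force": 5.0}

def configure(template: dict, context: dict) -> dict:
    """Fill in every open parameter of the template from the context."""
    instance = {"name": template["name"], "params": dict(template["params"])}
    for key, value in instance["params"].items():
        if value is None:
            if key not in context:
                raise ValueError(f"context lacks property '{key}'")
            instance["params"][key] = context[key]
    return instance

grasp = configure(action_template, context)
# grasp["params"] now holds the concrete values for this context.
```

The template itself is left untouched, so the same abstract action can be re-configured for every new context the agent encounters.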

Typically, there are three types of users that can interact with the decision-making agent:

Functional Specifications

Functional Block Diagram

Activity diagram of the reasoning agent

“Functional” (or rather, “software activity”) diagram of the reasoning agent.

The order in which the reasoning algorithm looks at the available information/knowledge is as follows:

  1. The task is the first piece of information to consider, because that is what the robot has to perform.
  2. Then, the reasoner queries the situations that it has knowledge about, to find one for which the right task has been defined. This is used as a candidate situation for the next step.
  3. The reasoner evaluates how well the selected situation fits to the available perception information.
  4. If a fit is found, it combines the information from task, situation and perception, to select the best action to execute, with the right configuration values for the identified context.
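The four steps above can be sketched as follows. The data structures, the feature-overlap fit measure and all names are illustrative assumptions, since the concrete representations of tasks, situations and perceptions are application-specific:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Situation:
    """A situation the reasoner has knowledge about: the task it supports,
    and the perception features it expects to be present."""
    name: str
    task: str
    expected_features: frozenset

@dataclass
class Action:
    name: str
    config: dict

def select_action(task, known_situations, perceived) -> Optional[Action]:
    # Step 1: the task is the first piece of information to consider.
    # Step 2: query the situations for which the right task has been defined.
    candidates = [s for s in known_situations if s.task == task]
    # Step 3: evaluate how well each candidate fits the perception information
    # (here: the fraction of expected features that were actually perceived).
    def fit(s):
        return len(s.expected_features & perceived) / max(len(s.expected_features), 1)
    best = max(candidates, key=fit, default=None)
    if best is None or fit(best) == 0.0:
        return None  # no situation fits the perception: no action is selected
    # Step 4: combine task, situation and perception into a configured action.
    return Action(name=task, config={"situation": best.name,
                                     "matched": sorted(best.expected_features & perceived)})

situations = [Situation("pick_from_table", "pick", frozenset({"table", "object"})),
              Situation("pick_from_shelf", "pick", frozenset({"shelf", "object"}))]
chosen = select_action("pick", situations, frozenset({"table", "object", "human"}))
# "pick_from_table" wins: all of its expected features are perceived.
```

A real reasoner would use richer fit measures and configuration values, but the ordering of the queries stays the same.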

Figure “DR Functional diagram” shows the decision-making (“coordination”) parts of the “reasoning”. That coordination consists of two complementary parts; in particular, the following two coordination mechanisms work together:

  1. the Finite State Machines in the coordinated activities, namely the perception, control, task execution and world model updating activities. The required “reasoning” is to decide which states of these activities should be activated at every particular moment in time. A state is one of the possible behaviours of that activity; the possible choices, and the transitions between these choices, are represented/implemented as a Finite State Machine in the coordinated activity.
  2. the Petri Net in the “reasoning” activity: Petri Nets are complementary to Finite State Machines, because they “mediate” the events that are sent to, and received from, the set of coordinated activities. That mediation is based on the knowledge about which activity states and configurations are allowed, and “best”, for the agent as a whole.
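A minimal sketch of how the two mechanisms can interplay, assuming toy state, event and place names (the actual states, events and net topology are application-specific): each coordinated activity runs a Finite State Machine, while the reasoner's Petri Net holds back an event until all of its preconditions (input places) carry tokens:

```python
class FSM:
    """Possible behaviours of one coordinated activity, e.g. 'perception'."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # (state, event) -> next state

    def handle(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

class PetriNet:
    """Mediates events: a transition fires only when every one of its
    input places holds at least one token."""
    def __init__(self):
        self.marking = {}      # place -> token count
        self.transitions = {}  # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def put(self, place):
        self.marking[place] = self.marking.get(place, 0) + 1

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if any(self.marking.get(p, 0) < 1 for p in inputs):
            return False  # preconditions not met: the event is held back
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.put(p)
        return True

# The reasoner only releases the "start" event to the perception FSM once
# both the task and the world model report readiness.
perception = FSM("idle", {("idle", "start"): "scanning"})
net = PetriNet()
net.add_transition("release_start",
                   ["task_ready", "world_model_ready"], ["start_perception"])
net.put("task_ready")
blocked = net.fire("release_start")   # one precondition still missing
net.put("world_model_ready")
released = net.fire("release_start")  # both tokens present: event released
if released:
    perception.handle("start")        # the FSM moves to "scanning"
```

The design choice this illustrates: the per-activity FSMs stay small and local, while the agent-wide knowledge about which combinations of states are allowed lives only in the mediating net.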

Because of the just-mentioned large variety of activity and agent behaviours, we can give no concrete technical specifications in this overview document, except for the following generic software design primitives and insights: