AI-BASED AGENT LEVEL REASONING ENHANCING MODULES

Introduction

Every robot in an application is expected to act as an autonomous software agent that must ensure both that the robot executes the tasks assigned to it by the operator, and that the robot at the same time interacts appropriately with its environment. That environment consists not only of objects and/or other computer-controlled devices, but also of the robot operator, and even of people that are not involved in the agent's behaviour. The decisions that the robot agent makes autonomously are all about what action the robot should take next, given the current situation. That situation is itself the interaction between (at least) the following types of activities:

Technical Specifications

Software and Hardware Requirements

Each of the above-mentioned activities will be realised by (at least) one software component. (At the level of the agent, the hardware is already abstracted away via sub-components of the four presented types of activities.)
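As a minimal sketch of this one-component-per-activity structure, the snippet below models an agent that owns one software component for each of the four activity types named later in this document (perception, control, task execution, world model updating). The class and attribute names are illustrative assumptions, not a prescribed API; the hardware is assumed to be hidden inside each component.

```python
# Hypothetical sketch: one software component per activity type.
# All names here are illustrative; the hardware is abstracted away
# inside each component, as described in the text.

class Component:
    """A software component realising one activity of the agent."""
    def __init__(self, name):
        self.name = name
        self.state = "inactive"   # components start deactivated


class Agent:
    # The four activity types presented in this document.
    ACTIVITY_TYPES = (
        "perception", "control", "task_execution", "world_model_updating",
    )

    def __init__(self):
        # One component instance per activity type.
        self.components = {t: Component(t) for t in self.ACTIVITY_TYPES}

    def activate(self, activity):
        """The agent's decision making switches components on and off."""
        self.components[activity].state = "active"


agent = Agent()
agent.activate("perception")
print(agent.components["perception"].state)  # "active"
```

Which components are active at any moment is exactly the kind of decision the reasoning described in the Functional Specifications section is responsible for.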

Usage Manual

The robot agent is used by several “stakeholders”. The sections below give some more explanation about these interactions.

Use Case 1.

Use Case Diagram

Use case diagram of agent-level reasoning.

Usage Models

As described in the glossary section, the “agent” is a software activity that must decide on the next action that the robot should take. In making that decision, it has access to the following complementary sources of knowledge/information:

The usage diagram depicts the components only at a high level and must be instantiated more concretely for each particular application context. The importance of the diagram is to make developers aware of (i) the complementary sources of information that provide the context of the reasoning, and (ii) the need to get properties from that context to fill in (“configure”) the instances of the concrete actions that the agent reasons about.
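To make point (ii) concrete, the sketch below fills in the open parameters of a generic action with properties queried from the complementary context sources. The source names, the action, and the numeric values are all hypothetical, chosen only to illustrate the configuration step.

```python
# Hypothetical sketch of "configuring" a concrete action instance with
# properties taken from the complementary context sources. All names
# and values are illustrative assumptions.

world_model = {"door_A": {"width": 0.9, "state": "closed"}}  # environment knowledge
task_spec   = {"goal": "pass_through", "target": "door_A"}   # operator-assigned task
robot_caps  = {"max_speed": 0.5}                             # platform capabilities


def configure_action(task, world, caps):
    """Fill in the open parameters of a generic action template
    with properties queried from the context."""
    target = world[task["target"]]
    # Approach a closed door more slowly than the platform maximum.
    speed = 0.3 if target["state"] == "closed" else caps["max_speed"]
    return {
        "action": "move_through_door",
        "door": task["target"],
        "approach_speed": min(caps["max_speed"], speed),
        "clearance": target["width"] / 2,
    }


action = configure_action(task_spec, world_model, robot_caps)
print(action["approach_speed"], action["clearance"])  # 0.3 0.45
```

The same template would be configured differently in another application context, which is why the usage diagram stays at a high level.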
Typically, there are three types of users that can interact with the decision making agent:

Functional Specifications

Functional Block Diagram

“Functional” (or rather, “software activity”) diagram of the reasoning agent.

The order in which the reasoning algorithm looks at the available information/knowledge is as follows:

Figure “DR-functional” shows the decision-making (“coordination”) parts of the “reasoning”. That coordination consists of two complementary parts; more in particular, coordination takes place between the following two mechanisms:

  1. the Finite State Machines in the coordinated activities, namely the perception, control, task execution and world model updating activities. The “reasoning” that is required is to decide which states of the involved activities should be activated at every particular moment in time. A state is one of the possible behaviours of that activity; the possible choices, and the transitions between these choices, are represented/implemented as a Finite State Machine in each coordinated activity.
  2. the Petri Net in the “reasoning” activity: Petri Nets are complementary to Finite State Machines, because they “mediate” the events that are sent to, and received from, the set of coordinated activities. That mediation is based on knowledge about which activity states and configurations are allowed, and “best”, for the agent as a whole.
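The interplay of the two mechanisms can be sketched as follows: a coordinated activity is a small Finite State Machine, and a minimal place/transition Petri Net in the reasoning activity decides when to release an event to it. All state, place, and event names below are illustrative assumptions, not a prescribed design.

```python
# Hypothetical sketch: a coordinated activity as a Finite State Machine,
# mediated by a minimal Petri Net in the "reasoning" activity.

class ActivityFSM:
    """One coordinated activity (e.g. perception): states + transitions."""
    def __init__(self, name, initial, transitions):
        self.name = name
        self.state = initial
        self.transitions = transitions   # {(state, event): next_state}

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:      # ignore events not allowed here
            self.state = self.transitions[key]
        return self.state


class PetriNet:
    """Minimal place/transition net: a transition fires when all its
    input places hold a token; firing moves tokens and emits an event."""
    def __init__(self, marking, transitions):
        self.marking = dict(marking)     # {place: token count}
        self.transitions = transitions   # [(input_places, output_places, event)]

    def fire_enabled(self):
        events = []
        for inputs, outputs, event in self.transitions:
            if all(self.marking.get(p, 0) > 0 for p in inputs):
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1
                events.append(event)
        return events


# A perception activity that can be idle or scanning.
perception = ActivityFSM(
    "perception", "idle",
    {("idle", "start_scan"): "scanning", ("scanning", "scan_done"): "idle"},
)

# The reasoning net releases "start_scan" only when the task needs a scan
# AND the sensor is free: that conjunction is the mediation a single FSM
# cannot express on its own.
net = PetriNet(
    marking={"task_needs_scan": 1, "sensor_free": 1},
    transitions=[
        (("task_needs_scan", "sensor_free"), ("scan_running",), "start_scan"),
    ],
)

for event in net.fire_enabled():
    perception.handle(event)
print(perception.state)  # "scanning"
```

The Petri Net is what encodes the agent-wide knowledge about which combinations of activity states are allowed and “best”; each FSM only knows its own local behaviour.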

Technical Specifications

Because of the just-mentioned large variety of activity and agent behaviours, we can give no concrete technical specifications in this overview document, except for the following generic software design primitives and insights: