Functional Specifications
Introduction
This section of the documentation serves as a comprehensive overview of the platform's functional architecture and the core interactions between its components. The section begins by presenting a high-level functional diagram that depicts the main interactions of the AI-PRISM components, offering a general overview of the system's operation. Following the diagram, we delve into the specifics of the platform's interfaces through a table that provides high-level details of their functionalities and purposes. This breakdown serves as an essential reference point for understanding how the various elements of the platform interact and collaborate to achieve its overarching objectives. Next, the high-level features of the platform are presented, followed by a mapping of the platform's components to the high-level features they support. Finally, we provide links to detailed descriptions of the functional components of the platform.
Functional Component Diagram
The following figure depicts the main functional components of the AI-PRISM platform and their interactions.

This section presents a holistic view of the AI-PRISM solution with respect to the tasks in WP3, WP4, and WP5. WP3 connects the hardware in the AI-PRISM project:
- T3.1 (Collaboration Multiagent ROS based Robotic Framework) connects the robot with the cyber world.
- T3.2 (Ambient digitalization for Human-Robot Collaboration) connects the surrounding with the cyber world.
- T3.3 (Real Time communication for sensor and platform integration and control) connects the human and the automation devices with the cyber world.
- T3.3 will also provide data storage and streaming services so that AI-PRISM modules can reason with and interact with the extracted information.
All raw data from T3.1, T3.2, and T3.3 stored by T3.3 will be consumed by different tasks following this flow (a minimal sketch of the first hop is shown after the list):
- T3.4 will provide its output to T4.3 and T4.5. T4.3 concerns the adaptability of the robots to address human and ambient needs, while T4.4 will consider multi-human/robot task optimization.
- T4.2 (AI-based Perception) will consume and process the raw data to create useful information.
- T4.3 will also use the information from either T4.2 or T3.3 (depending on how it is implemented) to learn the current status of the collaborative task the robot is performing (i.e., which task within the planned sequence the robot is currently executing, and what the situation in the ambient is) and, based on this information, choose the best action to ensure smooth collaboration.
- T4.4 will reason with data at a higher hierarchical level, assigning tasks to agents in the collaboration ambient or acting on production scheduling to meet economic and social sustainability objectives.
- T4.5 will use the information to learn object manipulation from the human during Programming by Demonstration (PbD).
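
To make the first hop of this flow concrete, the following is a minimal sketch, assuming a ROS 2 (rclpy) environment, of how a T3.2-style ambient sensing node could publish raw readings onto a topic for the T3.3 storage and streaming services to consume. The node name, topic, and payload are illustrative assumptions, not the actual AI-PRISM interfaces.

```python
# Hypothetical ROS 2 (rclpy) sketch; node, topic, and payload names are
# assumptions for illustration, not the actual AI-PRISM interfaces.
import json
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class AmbientSensorBridge(Node):
    """Publishes raw ambient readings (T3.2-style) onto a topic that the
    T3.3 storage/streaming services could subscribe to."""

    def __init__(self):
        super().__init__('ambient_sensor_bridge')
        self.pub = self.create_publisher(String, 'ambient/raw_data', 10)
        # Publish one reading per second.
        self.timer = self.create_timer(1.0, self.publish_reading)

    def publish_reading(self):
        msg = String()
        msg.data = json.dumps({'sensor': 'rgbd_cam_1', 'status': 'ok'})
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = AmbientSensorBridge()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```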
T4.3 (Agent Level Reasoning, Acting and Control), T4.4 (Ambient Level Reasoning, Acting and Control), and T4.5 (Learning from Human Demonstration and Human-Robot Interaction) will use the extracted high-level features to enhance the reasoning and interaction capabilities of robotic systems. These tasks will use a unified data model to contextualize the extracted information in the industrial process, identifying all entities in the ambient and their relationships in the production context. T4.2, T4.3, and T4.5 will reason with and enrich the information at different manufacturing equipment levels (e.g., production line, work centre, work cell) and process timeframes (e.g., production orders, manufacturing operations, or discrete collaborative interactions).
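
As an illustration of what such a unified data model could look like, here is a minimal Python sketch; the entity names, fields, and hierarchy are assumptions for illustration, not the project's actual schema.

```python
# Hypothetical sketch of a unified data model; entity names and fields
# are illustrative assumptions, not the project's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    """A collaborating entity in the ambient (human worker or robot)."""
    agent_id: str
    kind: str  # "human" or "robot"

@dataclass
class ManufacturingOperation:
    """A discrete operation within a production order's timeframe."""
    operation_id: str
    assigned_agents: List[Agent] = field(default_factory=list)

@dataclass
class WorkCell:
    """Lowest equipment level; hosts collaborative interactions."""
    cell_id: str
    operations: List[ManufacturingOperation] = field(default_factory=list)

@dataclass
class ProductionLine:
    """Highest equipment level, aggregating work cells."""
    line_id: str
    cells: List[WorkCell] = field(default_factory=list)

# Example: contextualize an operation within the equipment hierarchy.
op = ManufacturingOperation("op-42", [Agent("cobot-1", "robot"),
                                      Agent("worker-7", "human")])
line = ProductionLine("line-A", [WorkCell("cell-3", [op])])
print(line)
```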
Main interfaces
The following table lists the main interfaces between the functional components shown in the figure.
| ID | Component | Name | Description |
|---|---|---|---|
| I_BH_1 | AI-PRISM Base Hardware | Drivers | Hardware driver interface |
| I_BH_2 | AI-PRISM Base Hardware | ROS DDS | |
| I_CM_1 | AI-PRISM Communications Modules | | |
| I_AS_1 | AI-PRISM Ambient sensing infrastructure | | |
| I_RC_1 | AI-PRISM Real Time Communications Network | | |
| I_IP_1 | AI-PRISM IIoT Platform | | |
| I_DS_1 | AI-PRISM Data Platform | | |
| I_SE_1 | AI-PRISM Simulation Environment | | |
| I_AD_1 | AI-PRISM Ambient Digitalisation Modules | | |
| I_CD_1 | AI-PRISM CI/CD Framework for AI-based Solutions | | |
| I_PE_1 | AI-PRISM AI-based Perception Enhancing Modules | | |
| I_DR_1 | AI-PRISM AI-based Agent Level Reasoning Enhancing Modules | | |
| I_HI_1 | AI-PRISM Human - Machine Interaction (HMI) Modules | | |
| I_PD_1 | AI-PRISM Programming by Demonstration Environment | | |
| I_SP_1 | AI-PRISM Human Safety Management Procedures | | |
High Level Features
High-level features of AI-PRISM as a whole.
Feature 1. Introduce collaborative robotics in manufacturing environments
AI-PRISM facilitates the introduction of collaborative robotics in manufacturing scenarios. The modular platform, which requires minimal programming skills, can automate tasks that traditionally require human perception and manipulation. The robotic solutions delivered are robust, easy to use, require minimal learning and can be configured without requiring highly skilled personnel.
Feature 2. Digitalize collaborative workplaces
Sensors on the robots and around the workspace (collaboration ambient) collect data about the environment and the activities taking place. This includes visual data from RGBD cameras, spatial data from on-board LiDAR or radar sensors, and even data from sensors installed in manufacturing equipment. This data can then be processed by AI algorithms to create a digital representation or model of the environment. This involves identifying and tracking objects and people in the environment, understanding the tasks being performed, and predicting future actions or changes in the environment.
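
To illustrate one step of such digitalization, here is a minimal, self-contained sketch of associating raw position detections across frames into persistent object tracks; the greedy nearest-neighbour matching and distance threshold are illustrative assumptions, far simpler than the AI algorithms envisaged.

```python
# Toy sketch of building a digital representation from raw detections.
# The detection format and threshold are illustrative assumptions; a real
# pipeline would use learned detectors and probabilistic trackers.
import math
from itertools import count

_track_ids = count(1)

def update_tracks(tracks, detections, max_dist=0.5):
    """Associate new (x, y) detections with existing tracks by nearest
    neighbour; unmatched detections start new tracks. Tracks that receive
    no detection are dropped in this toy version."""
    updated = {}
    unmatched = list(detections)
    for track_id, last_pos in tracks.items():
        if not unmatched:
            break
        nearest = min(unmatched, key=lambda d: math.dist(d, last_pos))
        if math.dist(nearest, last_pos) <= max_dist:
            updated[track_id] = nearest
            unmatched.remove(nearest)
    for det in unmatched:
        updated[next(_track_ids)] = det
    return updated

# Two consecutive "frames" of detected object positions (metres).
tracks = update_tracks({}, [(0.0, 0.0), (2.0, 1.0)])
tracks = update_tracks(tracks, [(0.1, 0.05), (2.1, 1.0), (5.0, 5.0)])
print(tracks)  # the first two objects keep their IDs; a third appears
```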
Feature 3. Capture tacit knowledge from workers
Capture expert knowledge that is difficult to transfer to other team members verbally or on paper. AI-PRISM captures knowledge as workers interact with collaborative robots. Through its "Programming by Demonstration" capability, the digital models are enhanced and cobots learn tasks by observing human actions.
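
As a rough illustration of the capture step, the sketch below records a demonstrated end-effector trajectory and compacts it for replay; the pose format, threshold, and class name are hypothetical, not the AI-PRISM PbD interface.

```python
# Hypothetical sketch of capturing a demonstration as a timestamped
# trajectory; real PbD would segment demonstrations and generalize
# across several of them.
import time

class DemonstrationRecorder:
    def __init__(self):
        self.waypoints = []  # list of (timestamp, pose) tuples

    def record(self, pose):
        """Store one observed end-effector pose, e.g. (x, y, z)."""
        self.waypoints.append((time.monotonic(), pose))

    def as_trajectory(self, min_step=0.01):
        """Keep only waypoints that moved at least min_step from the
        previous kept one, yielding a compact, replayable trajectory."""
        traj = []
        for _, pose in self.waypoints:
            if not traj or max(abs(a - b)
                               for a, b in zip(pose, traj[-1])) >= min_step:
                traj.append(pose)
        return traj

rec = DemonstrationRecorder()
for pose in [(0.0, 0.0, 0.2), (0.0, 0.0, 0.2),
             (0.1, 0.0, 0.2), (0.2, 0.1, 0.2)]:
    rec.record(pose)
print(rec.as_trajectory())  # the duplicate pose is filtered out
```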
Feature 4. Enhance reasoning capabilities of collaborative robotics
The high-level features extracted by AI-based perception help robots better understand human actions and intentions. By anticipating human actions and adjusting their own accordingly, robots achieve smoother and more efficient collaboration, better communication with humans, and better planning of future actions.
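
A toy example of such anticipation, assuming a simple first-order frequency model over observed action sequences (much simpler than the learned models the platform targets):

```python
# Toy sketch of anticipating a human's next action; the first-order
# frequency model and action names are illustrative assumptions.
from collections import Counter, defaultdict

class NextActionPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        """Count action-to-action transitions from one observed sequence."""
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        """Return the most frequently observed follow-up action, if any."""
        counts = self.transitions.get(current)
        return counts.most_common(1)[0][0] if counts else None

pred = NextActionPredictor()
pred.observe(["pick_part", "place_part", "fetch_screw", "fasten"])
pred.observe(["place_part", "fetch_screw"])
pred.observe(["pick_part", "place_part", "fasten"])
print(pred.predict("place_part"))  # "fetch_screw" (2 of 3 observations)
```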
Feature 5. Enhance manufacturing operations management
The extracted digital information can also be used for optimization at higher management levels, for instance manufacturing operations scheduling, workforce planning, AMR (autonomous mobile robot) navigation and routing, or workspace layout optimization.
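
As a sketch of one such optimization, the following assigns operations to agents with a greedy longest-processing-time heuristic to balance load; the task data and heuristic are illustrative assumptions, not the AI-PRISM planner.

```python
# Toy sketch of balancing operations across agents (humans, robots, AMRs);
# the greedy longest-processing-time heuristic and sample data are
# illustrative assumptions, not the project's actual scheduler.
import heapq

def assign_tasks(tasks, agents):
    """Greedily give the longest remaining task to the least-loaded agent.

    tasks: dict of task name -> duration; agents: list of agent names.
    Returns (assignment dict, makespan)."""
    heap = [(0.0, agent) for agent in agents]  # (accumulated load, agent)
    heapq.heapify(heap)
    assignment = {agent: [] for agent in agents}
    for name, duration in sorted(tasks.items(), key=lambda t: -t[1]):
        load, agent = heapq.heappop(heap)
        assignment[agent].append(name)
        heapq.heappush(heap, (load + duration, agent))
    return assignment, max(load for load, _ in heap)

tasks = {"weld": 5.0, "inspect": 2.0, "pack": 3.0, "transport": 4.0}
assignment, makespan = assign_tasks(tasks, ["cobot-1", "worker-1"])
print(assignment, makespan)  # balanced loads, makespan 7.0
```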
Solutions Map
Mapping of solutions to high-level features.

Detailed Functional Specs
The following table contains links to the detailed specifications of each component in the reference architecture.
| Acronym | Link |
|---|---|
| TM | Template |
| RF | RF Documentation |
| BH | BH Documentation |
| CM | CM Documentation |
| AS | AS Documentation |
| RC | RC Documentation |
| IP | IP Documentation |
| DS | DS Documentation |
| SE | SE Documentation |
| AD | AD Documentation |
| CD | CD Documentation |
| PE | PE Documentation |
| DR | DR Documentation |
| CR | CR Documentation |
| HI | HI Documentation |
| PD | PD Documentation |
| SP | SP Documentation |
| NS | NS Documentation |