BASE HARDWARE

Introduction

This module comprises all the packages necessary to control any piece of equipment able to sense or actuate within the workspace, such as sensors, robots, PLCs, and actuators.

Technical Specifications

The nodes are split according to the physical system on which they are deployed. The rationale for this split is explained in Section 3, AI-PRISM Communication Modules (CM). Nonetheless, it should be noted that, since all nodes run within the ROS2 environment, every node can exchange data with any other, regardless of the physical device it runs on or the container it is packaged in.

AI-PRISM ROS Base Framework data-sharing diagram
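As an illustration of this device-agnostic data sharing, the following minimal sketch publishes sensor readings from a ROS2 node written with rclpy; the topic name and message type are illustrative assumptions, not the actual AI-PRISM interfaces. Any node subscribed to the same topic, on any device in the ROS2 network, would receive these messages.

# Minimal sketch of a ROS2 publisher (rclpy). Topic name and message
# type are illustrative assumptions, not the AI-PRISM interfaces.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class SensorPublisher(Node):
    def __init__(self):
        super().__init__('sensor_publisher')
        # Any node on any device in the same ROS2 network can subscribe
        # to this topic, regardless of the container it is packaged in.
        self.publisher_ = self.create_publisher(String, '/sensor/raw_data', 10)
        self.timer = self.create_timer(0.5, self.publish_reading)

    def publish_reading(self):
        msg = String()
        msg.data = 'sensor reading'  # placeholder payload
        self.publisher_.publish(msg)


def main():
    rclpy.init()
    node = SensorPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()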

Software and Hardware Requirements

The Base Hardware is meant to run on an embedded device with a processor based on the ARM64 architecture. A recommended device is the Jetson Nano, which uses this architecture and is sufficient given that the Base Hardware does not need to process highly demanding computational tasks.

Other boards, such as the Jetson Orin or the Jetson Xavier, could also be suitable for development purposes.

ROS2 Humble is required for this project; therefore, the Ubuntu Linux 22.04 (Jammy Jellyfish) operating system, which is also compatible with ARM64 devices, is recommended.
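As a quick sanity check of this environment, the following sketch reports the machine architecture and the ROS2 distribution; it assumes the ROS2 setup script has been sourced, which sets the standard ROS_DISTRO environment variable.

# Quick sanity check of the recommended environment (sketch).
import os
import platform

# ARM64 devices report their architecture as 'aarch64' under Linux.
print('Architecture:', platform.machine())  # expected: aarch64

# ROS2 sets ROS_DISTRO once its setup script has been sourced.
print('ROS distro:', os.environ.get('ROS_DISTRO', 'not sourced'))  # expected: humble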

The following packages are required to deploy and run the AI-PRISM Base Image:

Usage Manual

Assuming we already have an embedded device with Ubuntu Linux 22.04 (Jammy Jellyfish) installed, the following steps describe how to deploy the AI-PRISM Base Image.

A Docker image has been built with all the necessary nodes already included. It can be downloaded using the following steps.

0. Install Docker:

Open a terminal and run the following commands:

# Update the repos
sudo apt update

# Install the certificates
sudo apt install apt-transport-https ca-certificates curl software-properties-common

# Import the Docker GPG key to ensure the authenticity of the Docker packages:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add the Docker repository to your system's sources.list.d directory
# (the architecture is detected automatically, e.g. arm64 on a Jetson):
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Update the package index again to include the Docker repository:
sudo apt update

# Finally, install Docker by running the following command:
sudo apt install docker-ce docker-ce-cli containerd.io

After the installation is complete, verify that Docker is installed by checking the Docker version:

docker --version

We can add our user to the docker group to avoid having to prepend sudo to every Docker command:

# Add the user to the docker group using the following command:
sudo usermod -aG docker <username>

It is recommended to restart the device at this point so that all the changes take effect.

# To verify that the user has Docker admin permissions, you can run the following Docker command without sudo:
docker run hello-world

1. Pull AI-PRISM Base Image:

We will proceed by downloading a prebuilt ROS2 Humble image ready to run on an ARM64 embedded device. Open a terminal and run the following command:

docker pull aiprism/ur10e_controller:jetson

2. Run the AI-PRISM Base Image:

Run the downloaded Docker image by opening a terminal and running the following command:

docker run -itd --name aiprism_ur10e_controller aiprism/ur10e_controller:jetson bash

If you need to connect to the aiprism_ur10e_controller container from several parallel sessions, you can do so by opening another terminal and running the following command:

docker exec -it aiprism_ur10e_controller bash

Use Case 1.

Use Case Diagram

The ROS Framework will include all the modules necessary to control the robot, as well as the controllers for any piece of hardware involved in the project at the plant level. These modules have a certain degree of complexity and, therefore, only an experienced ROS developer should tamper with the ROS Framework file system. The ROS Framework provides the advantage that any module can subscribe and publish to any other, so all modules have access to any data they require. Accordingly, the ROS Framework will gather any incoming piece of data from the optical sensors (or any other sensor we might require) and provide it to any module that needs it. With this advantage in mind, we can run a great variety of tasks in parallel, such as the robot controller, SLAM, computer vision algorithms, etc.
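As a minimal sketch of this parallelism (node and topic names are illustrative assumptions, not the actual AI-PRISM modules), the following example services two subscriber nodes concurrently on a single multi-threaded executor, both consuming the same shared data:

# Sketch: running several tasks in parallel within one ROS2 process.
# Node and topic names are illustrative, not the actual AI-PRISM modules.
import rclpy
from rclpy.executors import MultiThreadedExecutor
from rclpy.node import Node
from std_msgs.msg import String


class SlamNode(Node):
    def __init__(self):
        super().__init__('slam_node')
        # Consumes the shared sensor data, like any other module could.
        self.create_subscription(String, '/sensor/useful_data', self.on_data, 10)

    def on_data(self, msg):
        self.get_logger().info(f'SLAM consuming: {msg.data}')


class VisionNode(Node):
    def __init__(self):
        super().__init__('vision_node')
        # Subscribes to the very same topic in parallel.
        self.create_subscription(String, '/sensor/useful_data', self.on_data, 10)

    def on_data(self, msg):
        self.get_logger().info(f'Vision consuming: {msg.data}')


def main():
    rclpy.init()
    executor = MultiThreadedExecutor()
    executor.add_node(SlamNode())
    executor.add_node(VisionNode())
    executor.spin()  # both nodes are serviced concurrently
    rclpy.shutdown()


if __name__ == '__main__':
    main()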

With that said, ROS will command the robot through any particular motion it has to perform, and it will collect and process any useful data gathered from the sensors. Any worker performing a collaborative task or helping with the robot training might be generating useful data for the machine learning goals of the AI-PRISM project. This data might be sent to the cloud (not shown in the ROS Framework Usage Model figure) before or after it has been evaluated, supporting the data treatment process from the very source. Finally, a technical user could have access to an HMI screen providing information about the different physical devices connected to the ROS network, as well as the modules running on each of them.

ROS Framework Usage Model

Use Case Mock-ups

The following mock-up displays how the HMI would look.

Human-Machine Interface Mock-up

This interface would show an experienced technician which physical drivers are currently connected to the main drive (in the displayed case, the KEBA ORIN device), as well as the tasks running on said drive. It also shows whether there is a connection to the cloud.

Clicking on any of the devices switches to another screen, as seen here:

Human-Machine Interface Mock-up

This second screen would allow the technician to see which modules of the Robot 1 group are available, as well as which tasks are currently being executed in said group.

Functional Specifications

Functional Block Diagram

AI-PRISM ROS Base Framework working diagram

Main interfaces

List of main interfaces between functional components shown in the figure.

ID | Component                | Name        | Description                                                          | Sense
---|--------------------------|-------------|----------------------------------------------------------------------|-------
1  | Data-gathering nodes     | Raw data    | Data directly provided by the sensors                                | In
2  | Data-gathering nodes     | Useful data | Treated and filtered data, ready to be used by other nodes           | Out
3  | SLAM                     | Useful data | Specific useful data to generate maps                                | In
4  | SLAM                     | Map         | Updated version of the map                                           | Out
5  | Path Planning            | Map         | Map on which to plan a path                                          | In
5  | Path Planning            | Goal        | Objective to be reached by the robot                                 | In
6  | Path Planning            | Path        | Path describing how to reach the goal from the current position      | Out
7  | Machine Learning         | Useful data | Specific useful data to teach/train AIs                              | In
8  | Machine Learning         | Cloud data  | Specific data understandable as AI knowledge                         | In/Out
9  | AI-PRISM Learning Module | Cloud data  | Accumulated AI knowledge to help train the AI-PRISM Learning modules | In
10 | AI-PRISM Learning Module | Useful data | Specific data to train the AI-PRISM Learning modules                 | In
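As a sketch of interfaces 1 and 2 (raw data in, useful data out), the node below subscribes to a raw sensor topic, applies a placeholder smoothing filter, and republishes the result. Topic names, message types, and the filter itself are illustrative assumptions:

# Sketch of interfaces 1 and 2: raw sensor data in, useful data out.
# Topic names, message types and the filter are illustrative assumptions.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32


class DataGatheringNode(Node):
    def __init__(self):
        super().__init__('data_gathering_node')
        # Interface 1 (In): raw data directly provided by a sensor.
        self.create_subscription(Float32, '/sensor/raw_data', self.on_raw, 10)
        # Interface 2 (Out): treated data ready for SLAM, ML, etc.
        self.useful_pub = self.create_publisher(Float32, '/sensor/useful_data', 10)
        self.last = None

    def on_raw(self, msg):
        # Placeholder treatment: a simple exponential smoothing filter.
        self.last = msg.data if self.last is None else 0.8 * self.last + 0.2 * msg.data
        out = Float32()
        out.data = self.last
        self.useful_pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(DataGatheringNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()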

Sequence Maps

Base Hardware data evolution sequence map

The data is first gathered by the sensors and treated as needed into useful data. That useful data is used to map the area as well as to train the AI-PRISM models. Once a path is created, the robot executes it, which in turn provides knowledge useful for training the AI-PRISM models. Finally, the loop starts again: gathering more data, updating the maps, etc.
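The same loop can be summarised in code form; every function below is a trivial stub standing in for a real AI-PRISM module, purely for illustration:

# Illustrative stub of the Base Hardware data loop; each function
# stands in for a real AI-PRISM module and is not an actual API.
import random

def gather_sensor_data():
    return random.random()          # sensors provide raw data

def treat(raw):
    return round(raw, 2)            # filtered into useful data

def update_map(useful, world_map):
    world_map.append(useful)        # SLAM keeps the map current
    return world_map

def plan_and_execute(world_map, goal):
    return goal                     # plan a path to the goal and execute it

def train_models(useful, outcome):
    pass                            # useful data and outcomes feed the models

world_map = []
for _ in range(3):                  # the loop then starts again
    raw = gather_sensor_data()
    useful = treat(raw)
    world_map = update_map(useful, world_map)
    outcome = plan_and_execute(world_map, goal=1.0)
    train_models(useful, outcome)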