SMACS

Smart cabin system for aircraft readiness

SmaCS will develop a camera-based prototype solution, validated in a relevant environment in the Clean Sky 2 Integrated Cabin Demonstrator, for digitalized on-demand verification of Taxi, Take-off and Landing (TTL) requirements for cabin luggage.

About

AI-based technology

The ambition of SmaCS is to develop an automated system that incorporates AI-based technology for digitalized on-demand verification of Taxi, Take-off and Landing (TTL) requirements for cabin luggage, helping the crew handle safety procedures. The system will particularly address optimal installation in aircraft, paying special attention to aspects such as responsiveness, robustness, power consumption, maintenance, scalability, upgradeability, compatibility with international aviation regulations and adaptability to any kind of cameras already installed on a plane. Furthermore, to reduce the total cost of the system, the ratio of cameras to processing modules will be maximized.
In SmaCS, the data used to train the AI-based technology will optimally combine real and synthetic annotated images covering a wide variety of situations and visual appearances of passengers, luggage and cabin components. Sophisticated synthetic-to-real domain adaptation techniques will be developed to generate higher-quality data specifically designed for the considered scenarios, overcoming the lack of suitable data to train a tailored image content descriptor.
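
By way of illustration, the sketch below shows one simple way such a mixed real/synthetic training set could be assembled; the dataset stub, folder layout and oversampling scheme are assumptions for this example, not the project's actual pipeline.

```python
# Minimal sketch (not project code): mixing a small real annotated set with a larger
# synthetic set during training. Paths, dataset stub and sampling weights are assumptions.
import glob
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader, WeightedRandomSampler

class CabinImages(Dataset):
    """Toy dataset stub; a real pipeline would decode the image and return box annotations."""
    def __init__(self, pattern):
        self.files = sorted(glob.glob(pattern))
    def __len__(self):
        return len(self.files)
    def __getitem__(self, idx):
        # Placeholder tensor instead of an actual decoded image.
        return torch.zeros(3, 480, 640), {"path": self.files[idx]}

real_ds = CabinImages("data/real/*.png")         # hypothetical folder layout
synth_ds = CabinImages("data/synthetic/*.png")

combined = ConcatDataset([real_ds, synth_ds])

# Oversample the scarce real images so synthetic data does not dominate appearance statistics.
weights = torch.cat([
    torch.full((len(real_ds),), 1.0 / max(len(real_ds), 1)),
    torch.full((len(synth_ds),), 1.0 / max(len(synth_ds), 1)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
loader = DataLoader(combined, batch_size=16, sampler=sampler)
```

Balancing the sampling of real and synthetic images is only one possible strategy; the synthetic-to-real domain adaptation mentioned above would act on the image content itself.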

Objectives

Define an innovative architecture for the solution to be reliable, lightweight, cost-efficient and regulatory-compliant:

An innovative architecture combining low-cost processors attached to embedded cameras, maximizing the covered area to obtain an efficient, minimized solution for on-demand TTL verification.

Select the data treatment platform, camera platform and AI-based data treatment software that comply with the specifications and deliver the best trade-off between reliability, service quality, cost efficiency and low weight:

A novel approach to developing a Deep Neural Network (DNN) technique, based on data generation for algorithm training and on DNN design and adaptation to maximize object detection and localisation.

Produce a preliminary design of the complete system according to the specifications:

DNN algorithms will be trained specifically to detect and localise objects in the aircraft domain; inference technologies will allow the designed software (SW) to be deployed on embedded hardware (HW). A detection sketch is given after this list.

Develop and validate in the laboratory a first complete prototype meeting the functionalities and performance defined in the specifications:

Validation in the cabin mock-up provided by the Topic Manager, in laboratory conditions and in simulations with different types of aircraft.

Disseminate the project results to both the scientific community and the end-users and confirm the conditions for a successful business model:

Scientific and industrial sector-specific dissemination will be undertaken by the project.
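
As a hedged illustration of the detection-and-localisation objective above, the following sketch fine-tunes an off-the-shelf torchvision detector on cabin images; the class set, hyperparameters and torchvision version (recent enough to accept the `weights` argument) are assumptions, not the project's confirmed design.

```python
# Illustrative sketch only: adapting a generic detector to cabin-luggage detection and
# localisation. Model choice, label set and hyperparameters are assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # background + e.g. {luggage, passenger, seat belt}; hypothetical label set

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)

def train_one_epoch(model, loader, optimizer, device="cpu"):
    """One pass over (images, targets) pairs in the torchvision detection format."""
    model.train()
    model.to(device)
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # classification + box-regression losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

For on-demand TTL checks, the trained network would then run through an embedded inference stack; exporting via TorchScript or ONNX is one plausible route, not a confirmed project choice.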

Implementation

The project plan is divided into eight work packages, including project management, so that each work package can focus on a specific set of innovative aspects.

WP1 – Ethics Requirements

Setting out the Ethics Requirements that the project must comply with.

WP2 – Project management

The management WP will guarantee the successful implementation of the coordination and management activities of the project, including the contractual, financial and technical aspects. It also covers the quality assurance of the project results, as well as the methodology that project partners will follow in the risk and data management plans.

WP3 – System definition & specification

This WP will establish a shared understanding of the Topic Manager's specifications regarding the interfaces with the cabin environment for data transmission and processing. The requirements in terms of automated image content description capabilities, physical integration in the cabin, airworthiness certification, as well as weight and cost constraints, will be identified.

WP4 – Technologies selection

A technology scouting and assessment will be carried out based on the specifications to select the most appropriate “hardware platform, camera platform, AI-based data treatment software” triptych.

WP5 – Preliminary system design

The system architecture and the functioning principles of the machine learning algorithms will be detailed. A preliminary design of the system will be produced and reviewed for validation.

WP6 – System development & laboratory testing

The different technology bricks will be further developed and integrated. The system will be tested until validation in a laboratory environment.

WP7 – Prototyping & demonstration

A final prototype will be developed and validated for its integration into the Integrated Cabin Demonstrator.

WP8 – Dissemination & exploitation

Plan and carry out communication and scientific and industrial dissemination activities; all exploitation, technology transfer and IPR management activities; as well as the efforts contributing to innovation management.

Consortium

Otonomy Aviation

Otonomy Aviation is a rank-one aeronautical equipment supplier that designs and manufactures high-definition optronic systems for business aviation. This equipment supports different service offers: ground control security, with an on-board intrusion control system and ground collision detectors, and in-flight comfort, with high-definition and ultra-high-definition cameras that stream during the flight. Organized in multiple divisions of excellence (Engineering, Certification, Project Management, Supply Chain, Quality, Manufacturing, Marketing and After Sales), Otonomy Aviation is responsible for all processes from beginning to end. As a zero-defect culture team, OA supplies the most reliable and efficient systems for customer satisfaction. Otonomy Aviation has a permanent team of engineers and PhDs exclusively dedicated to research, development and qualification. The entire range of products thus holds all the international certifications that are essential to commercialize and reference the products. Otonomy Aviation has developed fully owned hardware and software designs for an HD camera (externally and internally mounted).

Vicomtech

Vicomtech is an applied research centre for Interactive Computer Graphics and Multimedia located in San Sebastian (Spain). It is a non-profit foundation, established in 2001 as a joint venture by the Fraunhofer INI-GraphicsNet Foundation and the EiTB Broadcasting Group. The role of Vicomtech in the market is to supply society with technology by transferring primary research to industry through collaborative RTD projects. Vicomtech's main research lines lie in the fields of multimedia, AI, computer graphics and interaction technologies, which Vicomtech applies in multiple sectors. Vicomtech has researchers with strong experience in several fields of ICT, especially in Communications, Multimedia and Visual Computing. Thanks to direct contact with several industries from different sectors (Engineering, Transport, Medical, etc.), the researchers are experts in bringing the latest advances in research to companies as innovation. The main team involved in the project is the Intelligent Transport Systems (ITS) and Engineering department. The department applies Machine Vision, Advanced Display Systems, AI, Knowledge Engineering and Advanced Algorithm technologies to the industrial sector in general and the transport sector in particular, providing the sector's companies with technology solutions (ITS).

Solution

Architecture, Technology & Components

Architecture

Processing blocks

The SmaCS architecture is composed of processing blocks, each consisting of:

  • One AI-processor, powered by the aircraft through a dedicated circuit breaker, which powers and manages the cameras.
  • 6 to 8 video cameras, depending on the finally adopted data acquisition and automated image description process (continuous video streams or pictures on demand).

The installation of the cameras throughout the cabin environment will be a crucial aspect of the solution. Depending on where they are installed, the perspective of the scene will vary, together with the available space between seats (e.g. there is more space in front of emergency exit seats). 3D simulation technology will also be applied to research the best option for installing the cameras. Drawing on various highly realistic 3D models provided by OA, SmaCS will analyse and determine the best and most cost-effective position of each component of the solution.
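
As a hedged sketch of how such a processing block might be described while comparing candidate installations (all field names, identifiers and values below are illustrative assumptions, not the project's configuration format):

```python
# Minimal sketch of a processing-block description used to compare camera layouts.
# Field names, identifiers and seat-row assignments are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Camera:
    camera_id: str
    rows_covered: tuple[int, int]     # first and last seat row seen by this camera
    mount_position: str               # e.g. "ceiling, aisle, frame 12"
    mode: str = "pictures_on_demand"  # or "continuous_stream"

@dataclass
class ProcessingBlock:
    ai_processor_id: str
    circuit_breaker: str              # dedicated breaker powering the block
    cameras: list[Camera] = field(default_factory=list)

    def validate(self) -> None:
        # One AI-processor drives 6 to 8 cameras in the baseline architecture.
        assert 6 <= len(self.cameras) <= 8, "unexpected camera count for this block"

block = ProcessingBlock(
    ai_processor_id="AIP-01",
    circuit_breaker="CB-CAB-07",
    cameras=[Camera(f"CAM-{i:02d}", (2 * i - 1, 2 * i), "ceiling, aisle") for i in range(1, 7)],
)
block.validate()
```

With two seat rows per camera, six cameras in this toy layout cover twelve rows; the actual mapping of cameras to rows would come out of the 3D installation study described above.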

Technology

Object recognition AI

SmaCS will create a representative and well-sized dataset to train the AI algorithm in object recognition. The challenge is that such a dataset does not currently exist, and acquiring it would be demanding in terms of manpower, resources and the complexity of each possible situation. The consortium's knowledge of machine learning makes it possible to apply a training methodology that is innovative for the aerospace industry in this regard. The SmaCS solution relies on the combination of existing AI libraries and 3D simulation data to cover all potential situations and to improve the accuracy of the algorithm.
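
One appeal of 3D simulation data is that annotations come for free from the scene geometry. The sketch below shows how a 2D bounding-box label could be derived from the known 3D corners of a simulated piece of luggage; the pinhole camera intrinsics and object dimensions are assumptions for this example.

```python
# Illustrative sketch (not project code): deriving a 2D bounding-box annotation for a
# simulated luggage item from its 3D corners, using an assumed pinhole camera model.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed intrinsics: fx, fy, cx, cy
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_bbox(corners_cam: np.ndarray) -> tuple[float, float, float, float]:
    """corners_cam: (8, 3) luggage corners in the camera frame (metres, z > 0)."""
    pixels = (K @ corners_cam.T).T            # (8, 3) homogeneous image points
    pixels = pixels[:, :2] / pixels[:, 2:3]   # perspective division
    x_min, y_min = pixels.min(axis=0)
    x_max, y_max = pixels.max(axis=0)
    return float(x_min), float(y_min), float(x_max), float(y_max)

# Toy example: a 40 x 30 x 20 cm bag centred 1.5 m in front of the camera.
bag = np.array([[x, y, z] for x in (-0.2, 0.2) for y in (-0.15, 0.15) for z in (1.4, 1.6)])
print(project_bbox(bag))   # pixel-space box, ready to pair with the rendered image
```

Generating many such annotated renders across seat layouts, lighting conditions and luggage types is what compensates for the lack of a real-world dataset.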

Components

Main components

The following are the main components of the technological advances in SmaCS:

  • Perception: Scene monitoring covering two seat rows per camera. Raw sensor data will be prepared for entering the “understand” module, where the computer vision algorithms run. The design of the key software modules will follow an elaborate methodology for safety and security validation and testing.
  • Understand: This module is responsible for detecting and classifying the situation in the cabin. A selection of situation hypotheses relevant to the TTL cabin-requirements use case will be considered and evaluated.
  • Inform: Efficient propagation of information from the “understand” module to the cabin crew and/or cockpit will be designed so that the current TTL situation can be easily understood.
  • Train: This component will provide sufficient situation models to train the “perception/understand” modules to deal with the relevant situations inside the cabin for the TTL. It will follow a novel approach mixing real datasets with synthetic, ad-hoc generated scenes. This data will help fine-tune the recognition and understanding capabilities of the deployed algorithms. A pipeline sketch illustrating these components follows below.
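
As a hedged sketch of how these components could fit together at runtime (module names, detection labels, seat mapping and message wording are assumptions, not the project's actual interfaces):

```python
# Illustrative perceive -> understand -> inform flow for one on-demand TTL check.
# Labels, seat associations and message wording are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "luggage", "passenger"
    seat: str           # seat the detection is associated with, e.g. "12C"
    location: str       # e.g. "aisle", "lap", "under_seat", "overhead_bin"

def understand(detections: list[Detection]) -> dict[str, bool]:
    """Per-seat TTL compliance: loose luggage in the aisle or on a lap is non-compliant."""
    status: dict[str, bool] = {}
    for det in detections:
        if det.label == "luggage":
            compliant = det.location in ("under_seat", "overhead_bin")
            status[det.seat] = status.get(det.seat, True) and compliant
    return status

def inform(status: dict[str, bool]) -> str:
    """Compact message for the cabin crew and/or cockpit display."""
    issues = sorted(seat for seat, ok in status.items() if not ok)
    return "TTL: cabin ready" if not issues else "TTL: check luggage at seats " + ", ".join(issues)

# Example output of the "perception" stage feeding the chain.
dets = [Detection("luggage", "12C", "aisle"), Detection("luggage", "14A", "under_seat")]
print(inform(understand(dets)))   # -> "TTL: check luggage at seats 12C"
```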

News & Documents

News & Events

End of SmaCS project
On June 21, 2022, the SmaCS Consortium met Clean Sky’s Topic Manager for the validation of TRL 5 and the final results of the project.

Paper submitted and accepted at SN Computer Science Journal
The paper “Leveraging Synthetic Data for DNN-based Visual Analysis of Passenger Seats”, submitted by the SmaCS project to the SN Computer Science Journal, has been accepted.

Preparation for the TRL 5 evaluation and the end of the project
The system is to be installed in Clean Sky’s WP2.4 cabin demonstrator.

Publication in Springer Nature Computer Science Journal
An extended version of our VISAPP paper will be presented.

New SmaCS use case video - Smart cabin system for aircraft readiness
The SmaCS project, led by OTONOMY Aviation and VICOMTECH, keeps going.

Insights on the SmaCS on-board AI-processor
The on-board AI-processor is intended to process very large data flows provided by the set of sensors.

Final round of recordings in the cabin mock-ups
An enhanced version of the training data will be generated.

Paper on Video Content Description published
The publication is of relevance to SmaCS in the context of video dataset labelling and interface compliance.

SmaCS paper enters the VISAPP shortlist of candidates
The paper has been shortlisted for the VISAPP 2021 best student paper award, based on marks provided by the reviewers.

Paper submitted and accepted at VISAPP 2021
The conference addresses the latest technology developments within image and video understanding.

Public Documents

SmaCS Official Logo

Contact

Please fill in the form below to contact the SmaCS project coordination team with any general enquiries or project-specific questions and comments.
