
Magellium
3 Projects, page 1 of 1
Project, from 2012
Partners: UPS - CESBIO, Magellium
Funder: French National Research Agency (ANR)
Project Code: ANR-12-ASTR-0036
Funder Contribution: 294,692 EUR

The project aims to develop new methods for multimodal image exploitation. It will assess the potential of using separate and joint Lidar and optical multi-/hyperspectral data in dual (civilian and military) applications. It will identify new processing algorithms for the characterization and detection of 3D objects or elements (armoured vehicles, roads or water under vegetation, 3D models and digital terrain models in the presence of vegetation, ...) and finally implement them in a processing chain. In this context, the project addresses environmental and geographic-information themes. First, the approach requires improving a simulator of passive and active optical sensors observing the ground, and developing fusion algorithms for Lidar/hyperspectral data in order to detect and characterize objects in the image in terms of 3D geometry and materials. This simulator is an efficient tool for Defence or for any civilian organisation that needs to define future imaging instruments. The second part of the project concerns data exploitation. The first step consists in processing the Lidar images and the multi-/hyperspectral images separately in order to better characterise them. We will then process the data jointly, treating them as a 3D image with additional bands. This new approach will give us better knowledge of the observed 3D scene.
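The joint processing step described above treats the Lidar-derived surface and the spectral bands as a single image cube. A minimal sketch of that idea, assuming the Lidar has already been rasterised to a digital surface model on the same grid as the hyperspectral cube (array shapes and the percentile-based ground normalisation are illustrative assumptions, not the project's actual processing chain):

    import numpy as np

    # Illustrative shapes only: a hyperspectral cube (rows x cols x bands) and a
    # Lidar-derived digital surface model already rasterised onto the same grid.
    hyperspectral = np.random.rand(512, 512, 128).astype(np.float32)
    dsm = np.random.rand(512, 512).astype(np.float32)   # surface height, metres

    # Crude normalisation of the height layer (assumption): subtract a low
    # percentile as a stand-in for the local ground level.
    ndsm = dsm - np.percentile(dsm, 5)

    # "3D image with additional bands": append the height layer to the spectral
    # bands so a per-pixel classifier or detector sees geometry and spectra together.
    fused = np.concatenate([hyperspectral, ndsm[..., None]], axis=-1)
    print(fused.shape)   # (512, 512, 129)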
Project, from 2023
Partners: Magellium, Laboratoire d'Intégration des Systèmes et des Technologies
Funder: French National Research Agency (ANR)
Project Code: ANR-23-MOXE-0007
Funder Contribution: 444,995 EUR

The MOBILEX Challenge tackles the autonomous navigation of vehicles in complex environments. To meet its requirements, CEA and Magellium have combined their competencies in perception, localization and navigation in complex environments. For over 20 years, Magellium has supported CNES in developing perception and localization components that increase the autonomy of rovers, some of which are embedded in major missions (ExoMars, MMX, ...). This expertise, accredited by CNES, now allows us to support large transport companies (Renault, SNCF, etc.) and defence companies (Arquus, Nexter, DGA, etc.) in the use and deployment of these technologies. The Interactive Robotics Department (SRI) of CEA-LIST brings its renowned excellence in command and control, autonomous navigation and robotic systems, developed during numerous French, European, industrial or in-house projects in the nuclear, transit-logistics and agricultural fields. We thus propose a consortium that masters all the technologies needed for the success of the project, with strong robotics competence and a high scientific level, able to offer solutions to increasingly complex challenges and to contribute to the emergence and industrialization of state-of-the-art algorithms.

To complete the challenge successfully, we have defined five objectives: (1) equip the platform with perception and localization capabilities; (2) integrate an innovative navigation function adapted to the characteristics of the challenge; (3) provide the robot with the capability to characterize the traversability of the terrain (see the sketch after this description); (4) develop a mobile robotic platform capable of operating in complete safety; (5) communicate, disseminate and promote the results of the project. The proposed methodology is based on adapting existing components and on maturing and hardening innovative functions. Challenge #1, which requires equipping and handling the platform, will mainly consist in adapting and integrating existing functionalities brought by each partner to ensure the autonomous mobility of the platform in an environment of limited complexity. Meeting challenge #2 will, however, require adding complementary perception modalities and integrating innovative algorithms developed specifically for it; redundancy will be ensured by hardening the existing functionalities. Finally, to face challenge #3, we will work mainly on hardening and improving the performance of the whole system set up during the first two challenges. The envisaged tasks thus raise the maturity of the proposed hardware and software system throughout the project, while minimizing risks and respecting the project phasing defined by the three challenges. Beyond the technical challenge addressed by innovative technologies and a high-standing consortium, a particular effort will be made on communication, dissemination and valorization, key elements of our proposal.

The composition of our partnership allows us to reach a large scientific, industrial and operational audience through adapted communication strategies and dedicated events (exhibitions, seminars, conferences, thematic days, etc.). Finally, scientific and technical valorization activities will be carried out.
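Objective (3), terrain traversability characterization, is commonly represented as a per-cell map built from geometric cues of the terrain. A minimal sketch of that general idea, assuming a local elevation grid around the robot; the slope and roughness thresholds and the binary-grid representation are illustrative choices, not the MOBILEX design:

    import numpy as np

    def traversability_map(elevation, cell_size=0.1, max_slope=0.3, max_roughness=0.05):
        """Binary traversability grid from a local elevation map (illustrative only)."""
        # Slope: finite differences between neighbouring cells (metres per metre).
        dzdx, dzdy = np.gradient(elevation, cell_size)
        slope = np.hypot(dzdx, dzdy)

        # Roughness: deviation from a 3x3 local mean, a crude stand-in for step hazards.
        pad = np.pad(elevation, 1, mode="edge")
        local_mean = sum(pad[i:i + elevation.shape[0], j:j + elevation.shape[1]]
                         for i in range(3) for j in range(3)) / 9.0
        roughness = np.abs(elevation - local_mean)

        # A cell is traversable when both slope and roughness stay below threshold.
        return (slope < max_slope) & (roughness < max_roughness)

    grid = traversability_map(np.random.rand(200, 200) * 0.02)   # toy 20 m x 20 m patch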
Project, from 2012
Partners: Centre Hospitalier Universitaire de Toulouse / Laboratoire de Gérontechnologie La Grave / Gérontopôle, Magellium, Laboratoire d'Analyse et d'Architecture des Systèmes, SoftBank Robotics (France), Université Paul Sabatier Toulouse 3 - Institut de Recherche en Informatique de Toulouse
Funder: French National Research Agency (ANR)
Project Code: ANR-12-CORD-0003
Funder Contribution: 790,648 EUR

When robots leave industrial mass production to help with household chores, the requirements for robot platforms will change. While industrial production requires strength, precision, speed and endurance, domestic service tasks for household robots require robust navigation in indoor environments, dexterous object manipulation, and intuitive communication (speech, gestures, body language) with users. In this perspective, many issues remain to be solved, such as perception and system integration. The latter must not be underestimated, as the performance of the system as a whole is determined by its weakest component, generally the robot's perception capacities and especially its perception of the human user, which is a bottleneck for long-term interaction. The RIDDLE project seeks to take a step forward in these directions; our core research issue will be to combine the underlying multiple and uncertain perceptual analyses related to (i) objects and space, regarding the robot's spatial intelligence, and (ii) multimodal communication, regarding the robot's transactional intelligence. We argue that the robot's transactional knowledge, as well as its visual and audio-based perception of humans during Human/Robot (H/R) interaction, should be improved by considering such contextual information.

The services targeted by our application concern mild memory assistance and search/carry services using H/R interaction based on concepts learnt through multimodal communication with the human user: places, furniture and household objects, i.e. their properties, storage locations and temporal associations. The purpose of this cognitive robot is to learn environmental information with the user in the loop ("learning by interacting with a human user") in terms of interactions with a set of household objects. This common semantic/contextual representation is the appropriate level of abstraction required during any H/R communication ("learning to interact with humans"). Making a robot as socially competent as possible in all areas of daily life is very challenging. That is why we focus on a subset of daily-life actions answering specific needs related to objects, matching the application requirements and providing the interactional support the human user expects. These needs and the associated robotic services are (i) searching for and carrying objects and (ii) mild memory assistance about these objects, e.g. their current semantic locations. In other words, the robot will answer the human user's questions/riddles about objects with appropriate actions (speech, displacement or manipulation). Decision-making, components related to the robot's actions, and safety are not the core of the RIDDLE project. These topics are the subject of other projects involving all or part of the RIDDLE consortium; they are considered here only to validate the perception level.

The final key issue will be to integrate the whole perceptual system onto the ROMEO humanoid robot developed by Aldebaran Robotics. We will rely on its full embedded perception resources (vision, audio, radio frequency) to limit environment instrumentation. To reduce deployment time and facilitate household object perception, small radio-frequency tags will be stuck on the objects. Scenarios involving ROMEO, elderly volunteers and graspable household objects from the user's daily life will measure the impact, in terms of robustness, of sharing multiple and uncertain perceptual analyses within the perception layers. Such abilities could be generalized and extended beyond the elder-care field.
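The mild memory-assistance service described above amounts to maintaining, for each tagged object, its last semantic location learnt during interaction, and answering the user's "where is X?" riddles from that store. A toy illustration of that bookkeeping; the tag identifiers, place names and method names are hypothetical and do not reflect RIDDLE's actual software:

    from dataclasses import dataclass, field

    @dataclass
    class ObjectMemory:
        """Last known semantic location of RFID-tagged household objects (toy example)."""
        locations: dict = field(default_factory=dict)   # tag id -> (object name, place)

        def observe(self, tag_id, name, place):
            # Called whenever perception (vision + RFID) re-detects a tagged object.
            self.locations[tag_id] = (name, place)

        def answer(self, name):
            # Answer a "where is <name>?" riddle from the most recent observation.
            for obj_name, place in self.locations.values():
                if obj_name == name:
                    return f"The {name} was last seen {place}."
            return f"I have not seen the {name} yet."

    memory = ObjectMemory()
    memory.observe("tag-042", "keys", "on the kitchen table")
    print(memory.answer("keys"))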