
GIPSA
ISNI: 0000000118823396
62 Projects, page 1 of 13
Project (from 2014)
Partners: Ecole Supérieure d'Ingénieurs en Informatique et Télécommunications, Techniques de l'Ingénierie Médicale de la Complexité, Intrasense, Laboratoire BMBI UMR 7338 - Université de Technologie de Compiègne, Stendhal University, Grenoble INP - UGA, CNRS, Service de Neuroradiologie, GIPSA, Service de Neurologie, ESME, UJF, Age, Imagerie, Modélisation, UGA
Funder: French National Research Agency (ANR)
Project Code: ANR-13-TECS-0011
Funder Contribution: 1,040,900 EUR

The project Swallowing & Breathing: Modelling and e-Health at Home (e-SwallHome) aims to explain the normal behavior of two coupled functions in humans, swallowing and breathing, in order to better understand the etiology of the mechanisms underlying the pathological behaviors of dysphagia, dyspnea and dysphonia following a brain stroke. Building on this understanding, the project will develop a set of protocols for diagnosing, monitoring at home, educating and rehabilitating patients at risk of death from asphyxia, or of falling without being able to trigger an alarm (e.g., after a brain stroke, or in chronic conditions such as trisomy 21 with impaired swallowing/breathing and, consequently, speech). To achieve these objectives, the project will pursue research in three areas: 1) Modeling: building a model that integrates the central control and the peripheral effectors of swallowing and breathing, taking into account the feedback to the centers described in existing models.
2) Metrology and signal processing: defining the parameters and variables of the integrated model that will be measured, if possible at home (for the variables), and estimating their individual values, if possible in a personalized way and, if necessary, in hospital (for the parameters). 3) Monitoring and education: using these measures in real time to visualize actual pathological behavior against a prototypical normal behavior, for example within a procedure for preventing falls at home or outdoors, or suffocation with or without a fall, and to establish a protocol of biofeedback and/or serious games for patient education or rehabilitation. The expected results are:
• an improvement in the quality of life of people suffering from disorders of swallowing and breathing, thanks to tele-home support combining technological innovation in e-health practices with innovations in swallowing/breathing rehabilitation;
• savings for the national health insurance on rehabilitation and social-reintegration procedures, including a faster recovery of normal speech;
• a better understanding of the central physiological control of the functions involved in the project, and an explanation of the etio-pathogenesis of the related disorders;
• the design and implementation of home sensors, and the clinician's exploitation of the information collected at home, to improve knowledge of the patient and his/her illness.
Project (from 2017)
Partners: Grenoble INP - UGA, CLLE, INSHS, Centre de recherche cerveau et cognition UMR5549 CNRS/UPS, MSHS-T, UJF, EPHE, UPMF, LPNC, Stendhal University, CHU de Toulouse - Direction de la Recherche et de l'Innovation, Université Savoie Mont Blanc, CNRS, UTM, Michel de Montaigne University Bordeaux 3, UGA, GIPSA
Funder: French National Research Agency (ANR)
Project Code: ANR-17-CE28-0006
Funder Contribution: 340,867 EUR

The cochlear implant (CI) in congenitally deaf children is now widely regarded as a highly efficient means of restoring auditory function. However, after several decades of retrospective analysis, it is clear that recovery levels vary widely, and in extreme cases some CI recipients never develop adequate oral language skills. The major goal of HearCog, to improve rehabilitation strategies for CI children, is to better understand and circumscribe the origins of this variability in CI outcomes. The originality of the HearCog project is to consider CI outcomes across a broad range of interdependent aspects, from speech perception to speech production and the associated cognitive mechanisms embedded in executive functions. The novelty of the proposal is both theoretical and methodological. The first goal will be to evaluate the capacity of the visual and auditory systems to respond to natural environmental stimuli, and to analyse the neuronal mechanisms induced by sensory loss and recovery through the CI using brain imaging (functional near-infrared spectroscopy, fNIRS). Given the co-structuration of speech perception and production during development, we will assess how deafness and CI recovery can alter speech production. But congenital deafness has deleterious impacts that extend beyond auditory function and encompass cognitive systems, including higher-order executive processes.
Based on the disconnection model (Kral et al., 2016), our objective will be to relate the neuronal assessment, using fNIRS, of executive functions to auditory restoration in CI children. HearCog is based on longitudinal assessment of CI infants and age-matched controls, searching for prognostic factors of auditory restoration. We will also compare these measurements to data acquired in older CI children implanted for several years, and in controls. Ultimately, our goal is to acquire objective measures of brain reorganization that could be linked to variability in CI outcomes and would therefore constitute a predictive factor. HearCog stands at the crossroads of cognitive neuropsychology and clinical research, with a strong opening toward education. It is thus translational and multidisciplinary, with the overarching objective of understanding the compensatory mechanisms induced by congenital hearing loss in order to support both the social integration and the schooling of cochlear-implanted deaf children.
Project
Partners: IGE, IRD, INSIS, UJF, INSU, Stendhal University, GIPSA, Délégation Alpes, Inria Grenoble - Rhône-Alpes research centre, Grenoble INP - UGA, LEGI, UGA, CNRS
Funder: French National Research Agency (ANR)
Project Code: ANR-23-CE01-0009
Funder Contribution: 397,654 EUR
Oceanic convection remains poorly understood even though it is one of the main drivers of ocean dynamics. Convection can be penetrative (entraining water from below the mixed layer) or non-penetrative. While it is reasonably straightforward to formulate conceptual parameterizations of non-penetrative convection in idealized settings, it remains challenging to extend this formalism to realistic settings of penetrative convection, even for state-of-the-art ocean models. In fact, the most advanced parameterization schemes for oceanic convection are still calibrated on atmospheric data. Moreover, these parameterizations do not take into account the rotation of the Earth, which can substantially affect the individual and collective behavior of convective plumes. The first objective of this proposal is to build an observational database of convective events on the Coriolis Platform (the largest rotating tank in the world). We will complement this dataset with numerical simulations exploring many types of surface forcing and initial conditions. We will then combine these observations and model outputs with a robust theoretical framework to build a consistent parameterization of oceanic convection. Next, we will move beyond the constraints of the mathematical framework of existing parameterizations and propose a data-driven approach to formulate a more generic parameterization. Last, we will test these parameterizations in coarse-resolution ocean models. We will perform a sensitivity analysis of the oceanic heat uptake as a function of the free parameters to assess how and where our parameterizations can reduce the uncertainty of climate projections.
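The abstract notes that conceptual parameterizations of non-penetrative convection are straightforward in idealized settings. As an illustration only (this is a textbook scheme, not the project's own parameterization), the classic example is convective adjustment: statically unstable adjacent layers are mixed, conserving the thickness-weighted mean density, until the column is stable.

```python
import numpy as np

def convective_adjustment(rho, dz):
    """Minimal non-penetrative convection scheme (convective adjustment).

    rho : layer densities (kg/m^3), index 0 = surface, index increasing downward
    dz  : layer thicknesses (m)
    Any pair with denser water above lighter water is homogenized, conserving
    the thickness-weighted mean density; sweeps repeat until the column is stable.
    """
    rho = np.asarray(rho, dtype=float).copy()
    dz = np.asarray(dz, dtype=float)
    changed = True
    while changed:
        changed = False
        for k in range(len(rho) - 1):
            if rho[k] > rho[k + 1]:  # statically unstable pair
                mean = (rho[k] * dz[k] + rho[k + 1] * dz[k + 1]) / (dz[k] + dz[k + 1])
                rho[k] = rho[k + 1] = mean
                changed = True
    return rho

# Surface layer denser than the one below: the unstable top pair gets mixed.
rho = convective_adjustment([1026.0, 1025.0, 1027.0], [10.0, 10.0, 10.0])
# mixes the top two layers to 1025.5; the third layer is left untouched
```

The scheme is non-penetrative by construction: mixing stops exactly at the first stable interface, which is why extending such conceptual schemes to penetrative, rotating convection is the hard part the project targets.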
Project (from 2012)
Partners: UJF, Grenoble INP - UGA, UGA, Laboratoire de neurosciences cognitives, CNRS, Centre de Recherche Cerveau & Cognition / CNRS, GIPSA, Institut National de la Santé et de la Recherche Médicale, Stendhal University
Funder: French National Research Agency (ANR)
Project Code: ANR-11-BSH2-0010
Funder Contribution: 230,000 EUR

Synesthesia is a fascinating phenomenon because it is a normal condition concerning the intimacy of subjective experience, probably shared by about 5% of the population. It offers a unique opportunity to study the neural bases of subjective experience, drawing on individual differences as in neuropsychology, but with healthy people. Synesthetes experience systematic, additional associations. For example, they may arbitrarily associate a specific color with each day of the week ('Born On A Blue Day', as the title of the popular book by synesthete, and Asperger, Daniel Tammet puts it), or with each letter or number. This project focuses on such 'grapheme-color' synesthetes. Most explanations of synesthesia proposed so far suggest a neurological origin: synesthetes would exhibit extra neuronal connections between the neural centers responsible for grapheme identification and those related to color perception, leading to spurious activation of 'color areas' by graphemes. In a recent study (currently submitted for publication), two partners of this proposal evaluated this theory with functional and structural MRI. They observed no activation of 'color areas' by graphemes and no structural difference between synesthetes and control subjects in 'color regions'. However, synesthetes had more white matter in the retrosplenial cortex. These results inspired the present experiments.
The precise result obtained by Hupé and Dojat was the absence of localized correlates of synesthetic colors in the visual cortex. It was obtained using standard fMRI analysis, which requires averaging, across subjects, the voxel BOLD response contrasted between two conditions. Such a methodology can only identify neural processes that are sufficiently localized in the brain. Neural correlates of synesthetic colors may, however, be distributed within the visual color system. Distributed processing of synesthetic associations would in fact make more sense, since it would seem odd for a unique, specific region of the visual cortex to have specialized in coding random associations between graphemes and colors, these associations being created (and then stabilized) by children, possibly at a late developmental stage. We will evaluate this 'distributed hypothesis' by applying recently developed multi-voxel pattern analysis (MVPA) approaches to fMRI data. Another possibility is that the neural correlates of synesthetic associations are localized (or distributed) outside the visual cortex. In our study, we observed that synesthetes had more white matter in the retrosplenial cortex, a region at the crossroads of memory, emotional and attentional networks, and involved in language. This leads us to envision that the retrosplenial cortex could play a critical role in expertise acquisition by rallying mnesic, emotional and attentional resources in especially difficult perceptual tasks, such as reading. The color associations made by synesthetes (created when they were children learning to read) would be the remains of the strong multimodal connections and multiple resources involved in language-related expertise acquisition, reflected in more numerous 'associative' connections at the retrosplenial cortex node in the adult.
This project therefore considers synesthesia within the context of expertise acquisition and its relationship to written language and speech, especially at the level of the elementary units (graphemes and phonemes) that trigger synesthetic associations. Our main hypothesis is that these synesthetic remains of children's expertise-acquisition strategies would be reflected in a more multimodal coding of language in adult synesthetes. We are therefore joining forces with language researchers in audition (speech), vision (reading) and sensory-motor integration (writing and speech production) to measure the multimodal coding of language in controls and synesthetes across four modalities with MVPA fMRI methodology.
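The contrast the abstract draws between standard contrast-based analysis and MVPA can be made concrete: MVPA decodes condition labels from distributed voxel patterns rather than testing each voxel's average response. The sketch below is purely illustrative, on synthetic data, with a nearest-centroid decoder standing in for the linear classifiers typically used; it is not the project's actual pipeline.

```python
import numpy as np

def mvpa_accuracy(X, y, n_folds=5, seed=0):
    """Cross-validated nearest-centroid decoding of condition labels (0/1)
    from multi-voxel activity patterns: a minimal MVPA sketch."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    accs = []
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        c0 = X[train][y[train] == 0].mean(axis=0)  # class centroids fit on
        c1 = X[train][y[train] == 1].mean(axis=0)  # training trials only
        pred = (np.linalg.norm(X[fold] - c1, axis=1)
                < np.linalg.norm(X[fold] - c0, axis=1)).astype(int)
        accs.append((pred == y[fold]).mean())
    return float(np.mean(accs))

# Synthetic "trials x voxels" data: the class difference is a weak signal
# spread over many voxels, invisible to any single-voxel contrast.
rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 200
y = np.repeat([0, 1], n_trials // 2)
pattern = 0.4 * rng.normal(size=n_voxels)         # distributed, weak per voxel
X = rng.normal(size=(n_trials, n_voxels)) + np.outer(y, pattern)
acc = mvpa_accuracy(X, y)
print(acc)  # well above the 0.5 chance level
```

The point of the toy example is exactly the 'distributed hypothesis' above: each voxel carries a tiny effect, yet the pattern as a whole is decodable.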
Project (from 2022)
Partners: UPMF, UJF, Grenoble INP - UGA, Stendhal University, GIPSA, LPNC, Université Savoie Mont Blanc, UGA, CNRS
Funder: French National Research Agency (ANR)
Project Code: ANR-21-CE37-0017
Funder Contribution: 299,512 EUR

During speech, singing or music playing, the auditory feedback involves both an aerial component, received by the external ear, and an internal vibration: the 'bone conduction' component. While the speaker or musician hears both components, a listener hears only the aerial part. Thus a person, child or adult, must learn to control oral sound production with different information from that which is communicated. Since von Békésy (1949), studies have consistently found that about half of the cochlear signal comes from bone conduction, but the information it conveys, and how it affects oral motor control, is still unclear. Previous studies have highlighted important differences in spectral balance between the aerial- and bone-conducted signals during speech, but they have not led to an understanding of any possible difference in informational content. Besides, nearly nothing is known of the bone-conducted feedback of other oral audiomotor behaviors such as singing or playing a wind instrument through a mouthpiece (although the modulation of auditory feedback by ear protectors is obvious, and some studies have noted its behavioral consequences). Recent preliminary findings from our consortium suggest that specific information exists in the bone-conducted signal of speech, in particular information related to articulator (tongue) position. This intriguing observation warrants further examination and raises several questions. How does the bone-conducted component differ from the aerial component in general during oral audiomotor tasks (speech, singing, playing a wind instrument), and can we explain these differences, e.g.
link them to articulator motion? Are these differences typical, or does bone-conducted auditory feedback vary significantly among individuals, possibly explaining behavioral idiosyncrasies? Can we recover the complete auditory signal that subjects receive during oral audiomotor tasks, that is, including its bone-conducted part faithfully? How does bone conduction affect the perception of one's own production in speech and music; in particular, does it bias auditory perception? Last, does bone-conducted sound guide audiomotor behavior; in other words, is sound production guided by sounds that cannot be perceived by the interlocutor or the audience? The aim of the present project is to tackle these questions by combining: 1) careful experimental extraction of the bone-conducted component through deep in-ear recording during speech and music production, using a specially developed experimental apparatus; 2) a modeling approach using signal-processing, statistical and information-theoretic tools; 3) experimental psychoacoustics to analyze auditory perception; and 4) a sensory-modification method, for which a novel technique based on sound cancellation will be developed, in order to demonstrate the behavioral consequences of a bone-conduction perturbation. Answers to these questions should help appreciate the role of the invisible part of the auditory iceberg, clarify how the central nervous system uses auditory feedback even when its acoustic communicative goal is different, and pave the way for further research on audiomotor control, in particular its short-term flexibility and longer-term plasticity.
Our consortium unites specialists in sensorimotor control, acoustics, phonetics, psychoacoustics, music and modeling around this undertaking, which should contribute to behavioral/cognitive neuroscience, phonetics and artistic practice, and could translate down the road into improvements in speech therapy, speech-communication systems and ear-protection devices.