
ACAPELA GROUP France

Country: France

3 Projects
  • Funder: French National Research Agency (ANR) Project Code: ANR-08-CORD-0024
    Funder Contribution: 955,166 EUR
  • Funder: French National Research Agency (ANR) Project Code: ANR-12-CORD-0021
    Funder Contribution: 805,029 EUR

    In this project, we intend to study human-computer interaction in a situated manner. We believe that an interaction must have a physical realization, anchored in the real world, to be natural and effective. To embody interactive systems, we propose to use humanoid robots. Robots, endowed with perception but also with means to act on the environment, allow a physical context to be integrated into the interaction, for the machine as well as for the human. It is therefore a situated approach to human-machine dialogue that underpins the innovations brought by this project. To this end, three processes will interact. First, the machine must build a context of the interaction that allows it to make decisions about how the interaction should proceed. This context will incorporate the spoken input provided by the human, but also the robot's perception of the environment and its proprioception. Based on this context, which is potentially uncertain because of errors introduced by the automatic analysis of speech, the machine will make decisions. To help the human achieve an effective interaction, the robot will produce expressive feedback reflecting its understanding of the context. Reconstructing the context will of course rely on recognizing and understanding the user's spoken input, but not only on that. The originality of this project is to anchor the interaction in the physical world, in close connection with the goal of that interaction. For example, in a collaborative object-manipulation task, the context will incorporate information about the configuration of the objects from the perspective of the robot, but also from that of the human. This project will thus continue research conducted in the field of spatial reasoning, and especially perspective taking: the process by which a machine adopts the viewpoint of another agent (human or artificial) to reason about what each of them can see.
    In this way, ambiguities in the context may be resolved when certain hypotheses turn out to be physically impossible. Language-processing methods, among others, are stochastic processes that provide hypotheses about the context of the interaction, together with confidence levels indicating the degree of certainty of those hypotheses. Decisions taken by the machine must account for the ambiguities these hypotheses may generate and keep track of them throughout the interaction. This issue is addressed in the human-machine dialogue community, which includes Supélec and the LIA, through statistical models for optimizing decision-making processes, but it remains an important open challenge. The physical attitude that robots should adopt to make the interaction more natural and effective will also be among the research topics, as will the robot's voice, including both its lexical content and its intonation. Specifically, the Acapela Group company will work on speech synthesis methods based on voice transformations.
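    The idea of pruning context hypotheses by physical impossibility can be illustrated with a minimal sketch in Python. All names, the scoring scheme, and the visibility test are illustrative assumptions, not the project's actual system: the speech-understanding module proposes confidence-weighted readings of an utterance, and perspective taking discards readings that are impossible from the user's viewpoint before the remaining confidences are renormalized.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        """One candidate interpretation of the spoken input (hypothetical)."""
        referent: str      # object the user is assumed to mean
        confidence: float  # score from the speech-understanding module

    def prune_and_rank(hypotheses, visible_to_user):
        """Discard hypotheses that are physically impossible from the user's
        viewpoint (perspective taking), then renormalize the remaining
        confidences so they again sum to 1."""
        feasible = [h for h in hypotheses if h.referent in visible_to_user]
        total = sum(h.confidence for h in feasible)
        if total == 0:
            return []  # no feasible reading: the robot should ask for clarification
        return sorted(
            (Hypothesis(h.referent, h.confidence / total) for h in feasible),
            key=lambda h: h.confidence,
            reverse=True,
        )

    # Example: speech understanding proposes two readings of "the red cube",
    # but only one candidate object is visible from where the human stands.
    hyps = [Hypothesis("cube_left", 0.55), Hypothesis("cube_right", 0.45)]
    ranked = prune_and_rank(hyps, visible_to_user={"cube_right", "table"})
    print(ranked[0].referent)  # cube_right, with confidence renormalized to 1.0
    ```

    Tracking the surviving hypotheses rather than committing to one early is what lets the dialogue manager keep ambiguity visible throughout the interaction, as the abstract requires.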

  • Funder: French National Research Agency (ANR) Project Code: ANR-13-CORD-0011
    Funder Contribution: 980,872 EUR

    The goal of the project is to develop a synthesis system for high-quality singing voices that can be used by musicians from the general public. The system will not be limited to singing vowels, but will be able to generate complete songs with arbitrary lyrics. No such system exists for the French language. The synthesizer will operate in two modes: "text to singing", in which the user enters the lyrics and the notes of the score (durations and pitches) that the machine will then sing, and "virtual singer", in which the user operates a real-time control interface to play the synthesizer as a musical instrument. To build the synthesizer, we propose in this project a combination of advanced voice-transformation techniques, including analysis and processing of the parameters of the vocal tract and the glottal source, with state-of-the-art know-how in unit selection for concatenative speech synthesis, rule-based singing synthesis systems, and innovative gesture control interfaces. A central objective for the synthesizer to be developed is the ability to capture and reproduce a variety of singing styles (opera/classical, popular/song). Besides the evaluation techniques commonly used for speech synthesis systems, the usability of the systems will be evaluated in particular with respect to the creative possibilities they open up (evaluation in the form of mini-concerts and small compositional projects using the developed control interface, virtual choirs and/or virtual soloists). The prototype singing-synthesis system developed in the project will be used by the partners to offer products including singing voice synthesis as well as virtual singer instruments. These functions are currently lacking or exist only in a very limited form. The project will thus provide performing musicians, composers and the general public with a new artistic singing-synthesis tool and new means of creating interactive experiences using the voice.
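    The "text to singing" input mode described above, lyrics plus a score of pitches and durations, can be sketched as follows. This is a hypothetical illustration: the triple format, function names, and 120 bpm default are assumptions, not the project's actual interface; only the MIDI-to-frequency conversion (equal temperament, A4 = MIDI 69 = 440 Hz) is standard.

    ```python
    A4_HZ = 440.0

    def midi_to_hz(note: int) -> float:
        """Convert a MIDI note number to its frequency in hertz
        (equal temperament, A4 = MIDI 69 = 440 Hz)."""
        return A4_HZ * 2 ** ((note - 69) / 12)

    def schedule(score, bpm=120):
        """Turn (syllable, midi_note, beats) triples into the timing and
        pitch targets a synthesizer back end would need."""
        sec_per_beat = 60.0 / bpm
        t = 0.0
        events = []
        for syllable, note, beats in score:
            dur = beats * sec_per_beat
            events.append({"syllable": syllable,
                           "hz": round(midi_to_hz(note), 2),
                           "start": t, "duration": dur})
            t += dur
        return events

    # "Frère Jacques", first four syllables, quarter notes at 120 bpm
    score = [("Frè", 60, 1), ("re", 62, 1), ("Jac", 64, 1), ("ques", 60, 1)]
    for event in schedule(score):
        print(event)
    ```

    A real back end would also need the per-syllable phoneme sequence and continuous pitch/intensity curves; the point here is only the separation the abstract draws between the score the user writes and the control targets the synthesizer consumes.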

