Powered by OpenAIRE graph
775 Projects, page 1 of 155
  • Funder: European Commission Project Code: 244592
  • Funder: European Commission Project Code: 327916
  • Funder: European Commission Project Code: 780788
    Overall Budget: 5,976,420 EUR; Funder Contribution: 5,976,420 EUR

    Deep Learning (DL) algorithms are an extremely promising instrument in artificial intelligence, achieving very high performance in numerous recognition, identification, and classification tasks. To foster their pervasive adoption across a vast scope of new applications and markets, a step forward is needed towards implementing the on-line classification task (called inference) on low-power embedded systems, enabling a shift to the edge computing paradigm. Nevertheless, when DL is moved to the edge, severe performance requirements must coexist with tight constraints on power/energy consumption, creating the need for parallel and energy-efficient heterogeneous computing platforms. Unfortunately, programming for these architectures requires advanced skills and significant effort, especially since DL algorithms are designed to improve precision without considering the limitations of the device that will execute the inference. Thus, deploying DL algorithms on heterogeneous architectures is often unaffordable for SMEs and midcaps without adequate support from software development tools. The main goal of ALOHA is to facilitate the implementation of DL on heterogeneous low-energy computing platforms. To this aim, the project will develop a software development tool flow that automates:
    • algorithm design and analysis;
    • porting of the inference tasks to heterogeneous embedded architectures, with optimized mapping and scheduling;
    • implementation of middleware and primitives controlling the target platform, to optimize power and energy savings.
    During the development of the ALOHA tool flow, several main features will be addressed, such as architecture-awareness (the features of the embedded architecture will be considered starting from the algorithm design), adaptivity, security, productivity, and extensibility. ALOHA will be assessed over three different use cases, involving the surveillance, smart industry automation, and medical application domains.
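The mapping-and-scheduling step described above can be illustrated with a minimal sketch: a greedy assignment of each network layer to the processing element (PE) with the lowest profiled energy cost. The energy table, layer types, and PE names below are invented for illustration; this is not ALOHA's actual tool flow.

```python
# Hypothetical sketch of energy-aware layer-to-PE mapping.
# All cost numbers and names are illustrative assumptions.

# Assumed profiled energy cost (mJ) of each layer type on each PE.
ENERGY = {
    ("conv", "gpu"): 1.8, ("conv", "dsp"): 2.6, ("conv", "cpu"): 6.0,
    ("fc",   "gpu"): 0.9, ("fc",   "dsp"): 0.7, ("fc",   "cpu"): 1.2,
}

def map_layers(layers, pes):
    """Greedily assign each layer to the PE with the lowest profiled energy."""
    mapping = {}
    for name, kind in layers:
        mapping[name] = min(pes, key=lambda pe: ENERGY[(kind, pe)])
    return mapping

net = [("conv1", "conv"), ("conv2", "conv"), ("fc1", "fc")]
print(map_layers(net, ["gpu", "dsp", "cpu"]))
# {'conv1': 'gpu', 'conv2': 'gpu', 'fc1': 'dsp'}
```

A production tool flow would of course also model scheduling, data-transfer cost, and per-PE utilisation; the greedy rule here only conveys the idea of architecture-aware mapping.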

  • Funder: European Commission Project Code: 101024605
    Overall Budget: 175,572 EUR; Funder Contribution: 175,572 EUR

    Stars initially ten times more massive than our Sun play a vital role in the evolution of the Cosmos. Their death is marked by the sudden collapse of their cores into neutron stars or black holes. Somehow, tight pairs of massive black holes in the distant Universe occasionally merge, unleashing powerful gravitational waves that are now regularly being measured. The underlying processes that lead to the formation of these pairs remain shrouded in mystery, entailing an intricate story about the life cycle of their progenitors: the most massive and evolved stars. This has reminded the astrophysical community of huge gaps in our knowledge of key processes governing massive-star evolution, related to binary interactions, mass loss, and mixing. To mitigate this, we must obtain robust empirical constraints on the multiplicity, configuration, and stellar properties of the direct progenitors of black holes in regions that approach the conditions of the distant Universe: the Wolf-Rayet populations of the Magellanic Clouds. The MSCA fellowship offers an ideal platform for achieving this, relying on my unique skills, data, and tools in massive-star spectroscopy, combined with training in state-of-the-art evolution models of stellar populations that I will receive at the University of Amsterdam. I will exploit brand-new multi-epoch, multiwavelength monitoring spectroscopy obtained with the Hubble Space Telescope (HST) and the Very Large Telescope (VLT). I will establish the physical and orbital properties of entire populations of Wolf-Rayet stars and binaries in the Magellanic Clouds, relying on state-of-the-art tools and novel analysis techniques. I will compute population-synthesis models to constrain the evolutionary paths of gravitational-wave mergers. Through this, I will push our understanding of massive stars and gravitational-wave sources throughout the Cosmos to new frontiers.

  • Funder: European Commission Project Code: 101167904
    Overall Budget: 6,835,700 EUR; Funder Contribution: 5,203,610 EUR

    Ever since cloud-centric service provision became incapable of efficiently supporting emerging end-user needs, compute functionality has been shifted from the cloud closer to the edge, or delegated to the user equipment at the far edge. The resources and computing capabilities residing at those locations have lately been considered to collectively make up a ‘compute continuum’, although it has not yet been proven to securely accommodate end-to-end information sharing. Continuum-deployed workloads generate traffic that traverses untrusted hardware and software infrastructure (domains) whose trust states change continuously. CASTOR develops and evaluates technologies to enable trustworthy continuum-wide communications. It starts from the processing of user-expressed high-level requirements for a continuum service, which are translated into combinations of security needs and network resource requirements, referred to as CASTOR policies. The policies are subsequently enforced on the continuum hardware and software infrastructure to realise an optimised, trusted communication path, delivering innovation breakthroughs for so-far unsatisfied needs: a) distributed (composable) attestation of the continuum nodes and the subsequent elevation of individual outcomes to an adaptive (to changes) continuum trust quantification; b) derivation of the optimal path as a joint computation over the continuum's trust properties and resources; c) vendor-agnostic trusted path establishment across the continuum infrastructure, seamlessly crossing different administrative domains. CASTOR will be evaluated in operational environments of 4 use cases in which varying types of security/safety-critical information are shared. Project innovations will be exhaustively assessed in 3 diverse application domains, utilising the carefully designed CASTOR testbed core for each case. Our results will provide experimental evidence of CASTOR's efficiency and feed the incomplete trust-relevant (IETF) standards.
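As a rough illustration of point b), deriving a path jointly from trust and resources, the sketch below runs a Dijkstra search whose edge cost blends link latency with the distrust of the next hop. The topology, trust scores, and weighting formula are assumptions for illustration only, not CASTOR's actual policy engine.

```python
import heapq

def best_path(graph, trust, src, dst, alpha=0.5):
    """Dijkstra with cost = alpha*latency + (1-alpha)*(1 - trust of next node).

    graph: node -> list of (neighbour, latency); trust: node -> score in [0, 1].
    (Illustrative weighting only; a real policy engine would be far richer.)
    """
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, latency in graph.get(u, []):
            cost = d + alpha * latency + (1 - alpha) * (1.0 - trust[v])
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    # Reconstruct the path by walking predecessors back from dst.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

graph = {"edge": [("gw1", 0.2), ("gw2", 0.1)],
         "gw1": [("cloud", 0.3)], "gw2": [("cloud", 0.3)]}
trust = {"edge": 1.0, "gw1": 0.9, "gw2": 0.4, "cloud": 0.8}
print(best_path(graph, trust, "edge", "cloud"))
# ['edge', 'gw1', 'cloud']  (the trusted gw1 beats the faster but low-trust gw2)
```

Note how the lower-latency gateway loses to the more trusted one once distrust enters the cost, which is the essence of computing a path jointly over trust and resources.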


