
UvA — funded projects
775 projects, page 1 of 155
Project 2010 - 2013 (Open Access Mandate for Publications)
Partners: UNIMI, UvA, LSE, TARKI, UAntwerpen, UCD
Funder: European Commission | Project Code: 244592

Project 2014 - 2015
Partners: UvA
Funder: European Commission | Project Code: 327916

Project 2018 - 2021 (Open Access Mandate for Publications and Research data)
Partners: UvA, UniSS, Leiden University, PLURIBUS ONE SRL, SCCH, PKE HOLDING AG, CA, IBM ISRAEL, University of Cagliari, UPF, EPFZ, SANTER REPLY, STMicroelectronics (Switzerland), IRIDA, MEDYMATCH
Funder: European Commission | Project Code: 780788 | Overall Budget: 5,976,420 EUR | Funder Contribution: 5,976,420 EUR

Deep Learning (DL) algorithms are an extremely promising instrument in artificial intelligence, achieving very high performance in numerous recognition, identification, and classification tasks. To foster their pervasive adoption in a vast scope of new applications and markets, a step forward is needed towards implementing the online classification task (inference) on low-power embedded systems, enabling a shift to the edge computing paradigm. Nevertheless, when DL is moved to the edge, severe performance requirements must coexist with tight constraints on power and energy consumption, creating the need for parallel and energy-efficient heterogeneous computing platforms. Unfortunately, programming for this kind of architecture requires advanced skills and significant effort, especially since DL algorithms are typically designed to maximize accuracy without considering the limitations of the device that will execute the inference. Thus, the deployment of DL algorithms on heterogeneous architectures is often unaffordable for SMEs and midcaps without adequate support from software development tools. The main goal of ALOHA is to facilitate the implementation of DL on heterogeneous low-energy computing platforms. To this end, the project will develop a software development tool flow that automates:
- algorithm design and analysis;
- porting of the inference tasks to heterogeneous embedded architectures, with optimized mapping and scheduling;
- implementation of middleware and primitives controlling the target platform, to optimize power and energy savings.
During the development of the ALOHA tool flow, several main features will be addressed, such as architecture awareness (the features of the embedded architecture are considered starting from the algorithm design), adaptivity, security, productivity, and extensibility. ALOHA will be assessed over three different use cases, covering the surveillance, smart industry automation, and medical application domains.
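As a purely illustrative aside, one of the steps such a tool flow would automate is shrinking a trained network before mapping it onto a low-power target. The minimal sketch below uses standard PyTorch post-training dynamic quantization as a stand-in for that step; the toy model, layer sizes, and choice of quantization scheme are assumptions for illustration and are not taken from the ALOHA tool flow, whose APIs are not described in this abstract.

```python
# Illustrative sketch only: post-training dynamic quantization of a toy model,
# the kind of size/energy-oriented transformation an architecture-aware
# deployment flow might apply before mapping inference onto an embedded target.
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; any trained model would do.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Dynamic quantization: Linear-layer weights are stored as int8, reducing
# model size and memory traffic at a (usually small) cost in accuracy.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Run inference on a dummy input to confirm the quantized model still works.
x = torch.randn(1, 128)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 10])
```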
Project 2021 - 2023 (Open Access Mandate for Publications and Research data)
Partners: UvA
Funder: European Commission | Project Code: 101024605 | Overall Budget: 175,572 EUR | Funder Contribution: 175,572 EUR

Stars initially ten times more massive than our Sun play a vital role in the evolution of the Cosmos. Their death is marked by the sudden collapse of their cores into neutron stars or black holes. Somehow, tight pairs of massive black holes in the distant Universe occasionally merge, unleashing powerful gravitational waves that are now regularly being measured. The underlying processes that lead to the formation of these pairs remain shrouded in mystery, entailing an intricate story about the life cycle of their PROGENITORs, the most massive and evolved stars. This has reminded the astrophysical community of huge gaps in our knowledge of key processes governing massive-star evolution, related to binary interactions, mass loss, and mixing. To mitigate this, we must obtain robust empirical constraints on the multiplicity, configuration, and stellar properties of the direct progenitors of black holes in regions that approach the conditions of the distant Universe: the Wolf-Rayet populations of the Magellanic Clouds. The MSCA fellowship offers an ideal platform for achieving this, relying on my unique skills, data, and tools in massive-star spectroscopy, combined with the training in state-of-the-art evolution models of stellar populations that I will receive at the University of Amsterdam. I will exploit brand-new multi-epoch, multiwavelength monitoring spectroscopy obtained with the Hubble Space Telescope (HST) and the Very Large Telescope (VLT). I will establish the physical and orbital properties of entire populations of Wolf-Rayet stars and binaries in the Magellanic Clouds, relying on state-of-the-art tools and novel analysis techniques. I will compute population-synthesis models to constrain the evolutionary paths of gravitational-wave mergers. Through this, I will push our understanding of massive stars and gravitational-wave sources throughout the Cosmos to new frontiers.
Project 2024 - 2027 (Open Access Mandate for Publications and Research data)
Partners: University of Murcia, K3Y, UBITECH, COMMSIGNIA Kft., ICCS, TUIAŞI, [no title available], ORANGE ROMANIA SA, SUITE5 DATA INTELLIGENCE SOLUTIONS LIMITED, Mellanox Technologies (Israel), UTRC, FERON TECHNOLOGIES PC, WINGS ICT, Mellanox Technologies (United States), UvA
Funder: European Commission | Project Code: 101167904 | Overall Budget: 6,835,700 EUR | Funder Contribution: 5,203,610 EUR

Ever since cloud-centric service provision became incapable of efficiently supporting emerging end-user needs, compute functionality has been shifted from the cloud closer to the edge, or delegated to user equipment at the far edge. The resources and computing capabilities residing at those locations have lately been considered to collectively make up a 'compute continuum', albeit one without proven assurance that it can securely accommodate end-to-end information sharing. The continuum-deployed workloads generate traffic that traverses untrusted HW and SW infrastructure (domains) whose trust states change continuously. CASTOR develops and evaluates technologies to enable trustworthy continuum-wide communications. It starts from the processing of user-expressed high-level requirements for a continuum service, which are turned into combinations of security needs and network resource requirements, referred to as CASTOR policies. The policies are subsequently enforced on the continuum HW and SW infrastructure to realise an optimised, trusted communication path, delivering breakthroughs for needs that are so far unmet:
a) distributed (composable) attestation of the continuum nodes and subsequent elevation of the individual outcomes to a continuum trust quantification that adapts to changes;
b) derivation of the optimal path as a joint computation over the continuum trust properties and resources;
c) vendor-agnostic trusted path establishment across the continuum infrastructure, seamlessly crossing different administrative domains.
CASTOR will be evaluated in operational environments of 4 use cases in which varying types of security/safety-critical information are shared. Project innovations will be exhaustively assessed in 3 diverse application domains, utilising the carefully designed CASTOR testbed core for each case. Our results will provide experimental evidence for CASTOR's efficiency and feed into trust-relevant (IETF) standards that are still incomplete.
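To make the notion of a policy that couples security needs with network resource requirements more concrete, here is a minimal, hypothetical sketch. The abstract only names the concept of "CASTOR policies"; the field names, thresholds, and admission check below are invented for illustration and do not describe the project's actual policy model.

```python
# Hypothetical illustration of a policy combining security needs with network
# resource requirements, derived from a high-level service requirement.
# All field names and values are invented; not CASTOR's actual data model.
from dataclasses import dataclass, field

@dataclass
class SecurityNeeds:
    attestation_required: bool = True   # node must pass remote attestation
    min_trust_score: float = 0.8        # threshold on the continuum trust quantification
    encryption: str = "TLS1.3"          # required transport protection

@dataclass
class NetworkRequirements:
    max_latency_ms: float = 20.0        # end-to-end latency bound
    min_bandwidth_mbps: float = 100.0   # sustained throughput

@dataclass
class ContinuumPolicy:
    service: str
    security: SecurityNeeds = field(default_factory=SecurityNeeds)
    network: NetworkRequirements = field(default_factory=NetworkRequirements)

    def admits(self, node_trust: float, path_latency_ms: float) -> bool:
        """Check whether a candidate path satisfies both the trust and latency bounds."""
        return (node_trust >= self.security.min_trust_score
                and path_latency_ms <= self.network.max_latency_ms)

# Example: a safety-critical vehicular message stream with a tight latency bound.
policy = ContinuumPolicy(
    service="v2x-collision-warning",
    network=NetworkRequirements(max_latency_ms=10.0),
)
print(policy.admits(node_trust=0.9, path_latency_ms=8.0))   # True
print(policy.admits(node_trust=0.5, path_latency_ms=8.0))   # False
```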