
Intel Corporation (UK) Ltd

18 Projects
  • Funder: UK Research and Innovation Project Code: EP/R018537/1
    Funder Contribution: 2,557,650 GBP

    Bayesian inference is a process that allows us to extract information from data, using prior knowledge articulated as statistical models for the data. We are focused on developing a transformational solution to Data Science problems that can be posed as such Bayesian inference tasks. An existing family of algorithms, Markov chain Monte Carlo (MCMC), offers impressive accuracy but demands significant computational load. For a significant subset of the users of Data Science that we interact with, while the accuracy offered by MCMC is recognised as potentially transformational, the computational load is simply too great for MCMC to be a practical alternative to existing approaches. These users include academics working in science (e.g., physics, chemistry, biology and the social sciences) as well as government and industry (e.g., the pharmaceutical, defence and manufacturing sectors). The problem is then how to make the accuracy offered by MCMC accessible at a fraction of the computational cost.

    The solution we propose is based on replacing MCMC with a more recently developed family of algorithms, Sequential Monte Carlo (SMC) samplers. While MCMC, at its heart, manipulates a single sampling process, SMC samplers are inherently population-based algorithms that manipulate a population of samples. This makes SMC samplers well suited to implementations that exploit parallel computational resources. It is therefore possible to use emerging hardware (e.g., Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs) and Intel's Xeon Phis, as well as High Performance Computing (HPC) clusters) to make SMC samplers run faster. Indeed, our recent work (which required removing some algorithmic bottlenecks before the progress we have achieved was possible) has shown that SMC samplers can offer accuracy similar to MCMC, with implementations that are better suited to such emerging hardware.

    The benefits of using an SMC sampler in place of MCMC go beyond those made possible by simply posing a (tough) parallel computing challenge. The parameters of an MCMC algorithm necessarily differ from those of an SMC sampler, and these differences offer opportunities to develop SMC samplers in directions that are not possible with MCMC. For example, SMC samplers, in contrast to MCMC algorithms, can be configured to exploit a memory of their historic behaviour and can be designed to transition smoothly between problems. It seems likely that by exploiting such opportunities we will generate SMC samplers that outperform MCMC by even more than is possible through parallelised implementations alone.

    Our interactions with users, our experience of parallelising SMC samplers and the preliminary results we have obtained when comparing SMC samplers and MCMC make us excited about the potential that SMC samplers offer as a "New Approach for Data Science". Our current work has only begun to explore this potential; we perceive that significant benefit could result from a larger programme of work that helps us understand the extent to which users will benefit from replacing MCMC with SMC samplers. We therefore propose a programme of work that combines a focus on users' problems with a systematic investigation of the opportunities offered by SMC samplers. Our strategy for achieving impact comprises multiple tactics. Specifically, we will: use identified users to act as "evangelists" in each of their domains; work with our hardware-oriented partners to produce high-performance reference implementations; engage with the developer team for Stan (the most widely used generic MCMC implementation); and work with the Industrial Mathematics Knowledge Transfer Network and the Alan Turing Institute to engage with both users and other algorithmic developers.
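
    To make the MCMC/SMC contrast concrete, the following is a minimal, illustrative Python sketch of an SMC sampler: a population of samples is reweighted, resampled and moved through a sequence of tempered distributions bridging a broad prior and a toy target. The target, schedule and tuning constants are invented for illustration and are not the project's algorithms; the point is that every operation acts on the whole population at once, which is what maps naturally onto GPUs, FPGAs and HPC clusters.

        import numpy as np

        # Illustrative SMC sampler: move a population of samples from a
        # broad prior to a toy target via tempered bridging distributions.
        rng = np.random.default_rng(0)

        def log_prior(x):                  # N(0, 10^2), unnormalised
            return -0.5 * (x / 10.0) ** 2

        def log_target(x):                 # toy posterior: N(3, 1), unnormalised
            return -0.5 * (x - 3.0) ** 2

        def log_tempered(x, b):            # geometric bridge prior -> target
            return (1 - b) * log_prior(x) + b * log_target(x)

        N = 10_000                         # the population of samples
        x = rng.normal(0.0, 10.0, size=N)  # initialise from the prior
        logw = np.zeros(N)

        betas = np.linspace(0.0, 1.0, 21)  # tempering schedule
        for b_prev, b in zip(betas[:-1], betas[1:]):
            # Reweight: incremental importance weights for the new temperature.
            logw += (b - b_prev) * (log_target(x) - log_prior(x))

            # Resample whenever the effective sample size collapses.
            w = np.exp(logw - logw.max())
            w /= w.sum()
            if 1.0 / np.sum(w**2) < N / 2:
                x = x[rng.choice(N, size=N, p=w)]
                logw[:] = 0.0

            # Move: one Metropolis step, applied to all N samples at once.
            prop = x + rng.normal(0.0, 0.5, size=N)
            accept = np.log(rng.random(N)) < log_tempered(prop, b) - log_tempered(x, b)
            x = np.where(accept, prop, x)

        w = np.exp(logw - logw.max())
        w /= w.sum()
        print("estimated posterior mean:", np.sum(w * x))  # ~3.0

    By contrast, a Metropolis-Hastings MCMC chain would apply the same accept/reject step to one sample at a time, serialising exactly the work that the population-wide update vectorises here.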

  • Funder: UK Research and Innovation Project Code: EP/W026686/1
    Funder Contribution: 2,670,330 GBP

    This proposal brings together communities from the UK Turbulence Consortium (UKTC) and the UK Consortium on Turbulent Reacting Flows (UKCTRF) to ensure a smooth transition to exascale computing, with the aim of developing transformative techniques for future-proofing their production simulation software ecosystems dedicated to the study of turbulent flows. Understanding, predicting and controlling turbulent flows is of central importance, and a limiting factor, to a vast range of industries. Many of the environmental and energy-related issues we face today cannot be tackled without a better understanding of turbulence. The UK is preparing for the exascale era through the ExCALIBUR programme to develop exascale-ready algorithms and software. Based on the findings from the Design and Development Working Group (DDWG) on turbulence at the exascale, this project brings together communities representing two of the seven UK HEC Consortia, the UKTC and the UKCTRF, to re-engineer or extend the capabilities of four of their production and research flow solvers for exascale computing: XCOMPACT3D, OPENSBLI, UDALES and SENGA+. These open-source, well-established, community flow solvers are based on finite-difference methods on structured meshes and will be developed to meet the challenges associated with exascale computing while taking advantage of the significant opportunities afforded by exascale systems.

    A key aim of this project is to leverage the well-established Domain Specific Language (DSL) framework OPS and the 2DECOMP&FFT library to allow XCOMPACT3D, OPENSBLI, UDALES and SENGA+ to run on large-scale heterogeneous computers. OPS was developed in the UK over the last ten years and targets applications on multi-block structured meshes. It can currently generate code using CUDA, OPENACC/OPENMP5.0, OPENCL, SYCL/ONEAPI, HIP and their combinations with MPI. The OPS DSL's capabilities will be extended in this project, specifically its code-generation tool-chain for robust, fail-safe parallel code generation. A related strand of work will use 2DECOMP&FFT, a Fortran-based library built around a 2D domain decomposition for spatially implicit numerical algorithms on mono-block structured meshes. The library includes a highly scalable and efficient interface for performing Fast Fourier Transforms (FFTs) and relies on MPI, providing a user-friendly programming interface that hides communication details from application developers. 2DECOMP&FFT will be completely redesigned for use on heterogeneous supercomputers (CPUs and GPUs from different vendors) using a hybrid strategy.

    The project will also add exascale-ready coupling interfaces, UQ capabilities, I/O and visualisation tools, and machine-learning-based algorithms to our flow solvers, addressing some of the key challenges and opportunities identified by the DDWG on turbulence at the exascale. This will be done in collaboration with several of the recently funded ExCALIBUR cross-cutting projects. The project will focus on four high-priority use cases (one for each solver), defined as high-quality, high-impact research made possible by a step change in simulation performance. The use cases will focus on wind energy, green aviation, air quality and net-zero combustion. Exascale computing will be a game changer in these areas and will contribute to making the UK a greener nation (the UK has committed to net-zero carbon emissions by 2050). The use cases will demonstrate the potential of the redesigned flow solvers, based on OPS and 2DECOMP&FFT, across a wide range of hardware and parallel paradigms.
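
    As a conceptual illustration of the pencil decomposition that 2DECOMP&FFT parallelises, the serial Python sketch below composes a 3D FFT from three batches of 1D FFTs, one per axis. In the distributed setting each MPI rank owns a "pencil" aligned with the current axis, and global transposes (MPI all-to-all exchanges) re-align the pencils between stages; those transposes are the communication the library hides. The code is a toy stand-in, not the library's API.

        import numpy as np

        # Serial stand-in for a pencil-decomposed 3D FFT: three batches
        # of 1D FFTs, one per axis. In the parallel version, the data is
        # globally transposed between stages so each rank always holds
        # complete lines along the axis being transformed.
        def fft3d_by_pencils(u):
            u = np.fft.fft(u, axis=0)  # 1D FFTs along x (x-pencils)
            # ... transpose x-pencils -> y-pencils in the parallel case ...
            u = np.fft.fft(u, axis=1)  # 1D FFTs along y (y-pencils)
            # ... transpose y-pencils -> z-pencils in the parallel case ...
            u = np.fft.fft(u, axis=2)  # 1D FFTs along z (z-pencils)
            return u

        u = np.random.default_rng(1).standard_normal((32, 32, 32))
        assert np.allclose(fft3d_by_pencils(u), np.fft.fftn(u))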

  • Funder: UK Research and Innovation Project Code: EP/N020030/1
    Funder Contribution: 202,161 GBP

    The need for better support to deal with cybersecurity threats is undisputed. Organisations face an ever-growing number of malware and integrated malware attack tools, attempted attacks on infrastructure and services, an increasing number of insider attacks, and advanced persistent threats against high-value assets. Dealing with such threats requires that organisations have ICT staff who are at least familiar with cybersecurity issues, and preferably have actual skills in cybersecurity, regardless of their role. Likewise, management and decision makers need to be aware of cybersecurity issues and reflect them in their actions. Large organisations often have a Chief Information Security Officer (CISO) who deals with the operational and strategic issues of cybersecurity for his or her organisation, but SMEs typically cannot afford a role with such oversight of cybersecurity, which makes them especially vulnerable.

    The scale and diversity of the cybersecurity issues an organisation faces means it cannot possibly consider each single vulnerability of its systems against each credible or potential adversary whose presence would turn a vulnerability into an actual threat. A CISO or decision maker therefore needs a fairly abstract view of all this complexity, where the choice of abstraction is driven not by technical aspects but by modalities such as risk, compliance, availability of service, and strategy. This view often has to take into account the cybersecurity of external or partner organisations, which is problematic as organisations are reluctant to share such sensitive information. A CISO or decision maker thus needs a representation of the relevant internal or external systems and services that allows him or her to make decisions of either an operational or a strategic nature. The uncertainty expressed in such abstractions is typically probabilistic or strict in nature. For example, a bank may have a good idea of the probability that a given teller machine has a corrupted external interface that clones inserted bank cards, based on past history, the location of the machine and so forth. Strict uncertainty, by contrast, often relates to threats for which no (or insufficient) historical information is available to estimate probability distributions, or is used to express the combinatorial nature of a problem, for example the different orderings in which one may schedule critical tasks.

    This project brings together research leaders in machine learning, robust optimisation, verification and cybersecurity to explore new modelling and analysis capabilities for cybersecurity needs. The project will investigate new approaches to modelling and optimisation by which the cybersecurity of systems, processes, and infrastructures can be more robustly assessed, monitored, and controlled in the face of stochastic and strict uncertainty. Particular attention will be paid to privacy: new forms of privacy-preserving data analytics will be created, along with approaches to decision support that respect privacy considerations. For corporate confidentiality, we will develop foundations that enable different organisations to model and analyse cross-organisational cybersecurity aspects whilst respecting the privacy inherent in organisations' confidential information, by establishing appropriate information barriers.
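
    The distinction between the two kinds of uncertainty can be made concrete with a toy Python sketch (every figure and task name below is invented for illustration): probabilistic uncertainty supports an expected-loss calculation, whereas strict uncertainty calls for a worst-case assessment over a combinatorial set of possibilities, here the orderings of critical tasks.

        import itertools

        # Probabilistic uncertainty: per-machine compromise probabilities
        # estimated from history support an expected-loss calculation.
        machines = {"branch_A": 0.01, "branch_B": 0.05, "airport": 0.12}
        loss_per_compromise = 10_000.0
        expected_loss = sum(p * loss_per_compromise for p in machines.values())
        print(f"expected loss: {expected_loss:.0f}")  # 1800

        # Strict uncertainty: no distribution over the order in which
        # critical tasks run, so we assess the worst case over all
        # orderings and pick the minimax schedule. Durations in hours.
        tasks = {"patch_os": 3, "rotate_keys": 1, "audit_logs": 2}

        def total_exposure(order):
            # A task stays exposed until its slot completes, so risk
            # grows with the sum of completion times.
            t, total = 0, 0
            for task in order:
                t += tasks[task]
                total += t
            return total

        orders = list(itertools.permutations(tasks))
        worst = max(orders, key=total_exposure)
        best = min(orders, key=total_exposure)
        print("worst ordering:", worst, "->", total_exposure(worst))  # 14
        print("minimax choice:", best, "->", total_exposure(best))    # 10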

  • Funder: UK Research and Innovation Project Code: EP/Z531066/1
    Funder Contribution: 11,782,400 GBP

    Access to silicon prototyping facilities remains a challenge in the UK due to the high cost of both equipment and the cleanroom facilities required to house it. Furthermore, there is often a disconnect in communication between industry and academia, leaving some industrial challenges unsolved, and support, training, and networking opportunities for academics to engage with commercialisation activities are not widespread. The C-PIC host institutions, comprising the University of Southampton, the University of Glasgow and the Science and Technology Facilities Council (STFC), together with 105 partners at the proposal stage, will overcome these challenges by uniting leading UK entrepreneurs and researchers with a network of support to streamline the route to commercialisation, translating a wide range of technologies from research labs into industry, underpinned by the C-PIC silicon photonics prototyping foundry. Applications will cover data centre communications; sensing for healthcare, the environment and defence; quantum technologies; artificial intelligence; LiDAR; and more.

    We will deliver our vision by fulfilling these objectives:
      • Translate a wide range of silicon photonics technologies from research labs into industry, supporting the creation of new companies and jobs, and subsequently social and economic impact.
      • Interconnect the UK silicon photonics ecosystem, acting as the front door to UK expertise, including by launching an online Knowledge Hub.
      • Fund a broad range of Innovation projects supporting industrial-academic collaborations aimed at solving real-world industry problems, with the overarching goal of demonstrating high-potential solutions in a variety of application areas.
      • Embed equality, diversity, and inclusion best practice into everything we do.
      • Deliver the world's only open-source, fully flexible silicon photonics prototyping foundry based on industry-like technology, facilitating straightforward scale-up to commercial viability.
      • Support entrepreneurs on their journey to commercialisation by facilitating networks with venture capitalists, mentors, training, and recruitment.
      • Represent the interests of the community at large with policy makers and the public, becoming an internationally renowned Centre able to secure overseas investment and international partners.
      • Act as a convening body for the field in the UK, becoming a hub of skills, knowledge, and networking opportunities, with regular events aimed at ensuring possibilities for advancing the field and delivering impact are fully exploited.
      • Increase the number of skilled staff working in impact-generating roles in the field of silicon photonics via a range of training events and company growth, whilst routinely seeking additional funding to expand training offerings.

  • Funder: UK Research and Innovation Project Code: EP/V028251/1
    Funder Contribution: 613,910 GBP

    The DART project aims to pioneer a ground-breaking capability to enhance the performance and energy efficiency of reconfigurable hardware accelerators for next-generation computing systems. This capability will be achieved through a novel transformation engine, founded on heterogeneous graphs, for design optimisation and diagnosis. While hardware designers are familiar with transformations based on Boolean algebra, the proposed research promotes a design-by-transformation style by providing, for the first time, tools that facilitate experimentation with design transformations and their regulation by meta-programming. These tools will cover design-space exploration based on machine learning, and end-to-end tool chains mapping designs captured in multiple source languages onto heterogeneous reconfigurable devices targeting cloud computing, the Internet of Things and supercomputing. The proposed approach will be evaluated through a variety of benchmarks involving hardware acceleration, and by codifying strategies for automating the search for neural architectures whose hardware implementations achieve both high accuracy and high efficiency.
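
    As an illustration of the design-by-transformation style, the Python sketch below represents a design as a small graph of operator nodes and applies a single rewrite rule, fusing a multiply feeding an add into one multiply-accumulate node, the kind of local transformation a DSP-aware FPGA backend might perform. The node set, rule and names are hypothetical and do not depict DART's engine.

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            op: str
            inputs: list = field(default_factory=list)  # upstream Node refs

        def fuse_mul_add(node):
            # Recurse bottom-up, then try the rewrite at this node.
            for child in node.inputs:
                fuse_mul_add(child)
            # Rewrite rule: add(mul(a, b), c) -> mac(a, b, c). Assumes
            # "add" nodes have exactly two inputs, as in this toy IR.
            if node.op == "add":
                for i, src in enumerate(node.inputs):
                    if src.op == "mul":
                        node.op = "mac"
                        node.inputs = src.inputs + [node.inputs[1 - i]]
                        break
            return node

        a, b, c = Node("in_a"), Node("in_b"), Node("in_c")
        design = Node("add", [Node("mul", [a, b]), c])
        print(fuse_mul_add(design).op)  # -> mac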

