University of Edinburgh

8,208 Projects
  • Funder: UK Research and Innovation; Project Code: 1951737

    The focus of this project is to develop a quantum verification technique for restricted, low-complexity quantum architectures, specifically analogue quantum simulators. It is widely accepted that quantum computers will be able to solve problems that are classically intractable. Quantum verification is the field of quantum computing that asks: if a quantum computer solves a problem that cannot be solved classically, how does one verify that the outcome is correct? That question is aimed at systems of high complexity. For low-complexity quantum systems, however, verification can be recast as a measure of the system's performance and a characterisation of the noise present in the system for specific instances.

    Quantum simulators are engineered quantum systems that emulate another physical quantum system. Most architectures for quantum computing and quantum simulation are digital, built from discrete quantum logic gates, whereas analogue quantum simulators evolve continuously under a time-evolution operator and are therefore not compatible with the verification techniques used for digital quantum computers and simulators. The goal of this project is to develop a verification technique that can evaluate the performance of an analogue quantum simulator beyond the scope of current techniques that test the reliability of the simulator's computation. Analogue quantum simulators are vital for understanding the dynamics of many-body quantum systems, quantum information, entanglement and specific properties of certain physical phenomena, so establishing confidence in an analogue quantum simulator as an emulation of a physical quantum system is extremely important. When analogue quantum simulators are engineered to create long-range interactions, current techniques fail at short time-scales and for small numbers of qubits. The aim of this project is therefore to develop a technique, independent of system size, that can partially characterise the noise present in the time-evolution of a long-range quantum simulator and thereby provide a measure of the system's performance. An analogue quantum simulator that appears to be performing correctly can then act as a benchmark for other simulators and establish confidence in them.

    The work has focused on randomized benchmarking, an experimental protocol that measures the average strength of errors in a quantum computer by running long, randomly chosen computations. To date it has only been applied to digital quantum systems; by adapting the theory to the analogue regime, a protocol for analogue randomized benchmarking has been created, with the hope that it can be implemented experimentally to demonstrate that noise can be partially characterised for an analogue quantum simulator with long-range interactions and larger system sizes. This would be an important milestone in quantum simulation, many-body quantum physics and benchmarking, as it combines techniques used for quantum computation with low-complexity quantum simulators. So far the focus has been on a one-dimensional analogue quantum simulator consisting of a string of trapped calcium ions, and classical simulations of analogue randomized benchmarking on this system are underway. Future directions include implementing analogue randomized benchmarking experimentally and applying it to Rydberg atoms. Eventually the goal is to combine the analogue randomized benchmarking protocol with current quantum verification ideas so that, rather than obtaining only a measure of the average strength of errors in the system, we can identify where the largest contributions to noise occur and for which types of computation performance is best.
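
    The data-analysis core of randomized benchmarking is a simple curve fit: survival probabilities measured after random sequences of increasing length are fitted to an exponential decay, and the decay parameter gives the average error per operation. The sketch below illustrates that fit on synthetic data; the model F(m) = A·p^m + B and the single-qubit error formula are the standard digital expressions, and every number here is illustrative rather than taken from the project or its analogue protocol.

```python
# Minimal sketch of the exponential-decay fit at the heart of randomized
# benchmarking (digital version; the project adapts the same idea to the
# analogue regime). All data are synthetic and purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Standard RB model: survival probability after m random operations."""
    return A * p**m + B

rng = np.random.default_rng(0)
seq_lengths = np.arange(1, 101, 5)

# Synthetic "measured" survival probabilities for a device with p ~ 0.99.
true_A, true_p, true_B = 0.5, 0.99, 0.5
survival = rb_decay(seq_lengths, true_A, true_p, true_B)
survival += rng.normal(0.0, 0.01, size=seq_lengths.size)   # measurement noise

(A, p, B), _ = curve_fit(rb_decay, seq_lengths, survival, p0=(0.5, 0.95, 0.5))

# For a single qubit (d = 2) the average error per operation is (1 - p)(d - 1)/d.
d = 2
avg_error = (1 - p) * (d - 1) / d
print(f"fitted decay p = {p:.4f}, average error per operation ~ {avg_error:.4e}")
```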

  • Funder: UK Research and Innovation; Project Code: NC/P002196/1
    Funder Contribution: 93,650 GBP

    Doctoral Training Partnerships: a range of postgraduate training is funded by the Research Councils. For information on current funding routes, see the common terminology at https://www.ukri.org/apply-for-funding/how-we-fund-studentships/. Training grants may be made to a single organisation or to a consortium of research organisations. This portal shows the lead organisation only.

  • Funder: UK Research and Innovation; Project Code: MR/S016066/2
    Funder Contribution: 494,762 GBP

    During the last two decades we have entered a "golden era" of cosmology. Using satellites and ground-based telescopes we have gathered high-quality data from the very early Universe, essentially from light emitted right after the Big Bang, as well as from the late Universe, through the light emitted by stars and galaxies. However, a large part of our Universe's history and volume remains unexplored. One way to attack this challenge is by observing the light emitted by the neutral hydrogen (HI) that filled the Universe for a long time after the Big Bang, before the first galaxies formed. After that time HI resides within galaxies, so we can also use it as a novel way to study the late Universe. This is my main area of research; it is exciting because it opens a new observational window on the Universe and can push the boundaries of our understanding of astrophysics and cosmology.

    In the next few years, HI surveys of exquisite sensitivity will be performed using radio telescopes, and part of the proposed research is to develop new techniques that maximise their science output. I have pioneered a new observational method that does not require the difficult and expensive detection of individual galaxies but instead maps the entire HI flux coming from many galaxies together in large 3D pixels (across the sky and along time). I aim to use this technique to provide a 3D map of the Universe using HI intensity mapping data from the MeerKAT and SKA arrays. MeerKAT is a radio telescope located in the Karoo, South Africa, and is a pathfinder for the Square Kilometre Array (SKA), which will be the largest radio telescope in the world. My main goal is to build a complete pipeline for the cosmological analysis of the HI intensity mapping signal from instruments like MeerKAT and the SKA. This pipeline will also account for the possibility of powerful synergies between HI and traditional optical galaxy surveys by including cross-correlation data-analysis tools. Cross-correlations yield measurements that are free of the systematic contaminations that often plague individual surveys (but drop out when the surveys are combined), and are therefore more robust. I aim to perform the first ever measurements of HI and cosmological parameters at radio wavelengths using the intensity mapping technique, exploit multi-wavelength synergies, and revolutionise our understanding of galaxy evolution and dark energy.

    I am also working on two of the largest and best optical galaxy surveys of the next decade, the Euclid satellite mission and the ground-based Large Synoptic Survey Telescope (LSST), whose main goals are to measure dark energy and understand the initial conditions of the Universe. In Euclid, I am working on various projects, including building the software tools that will be used to analyse the data as soon as they become available. I am also working on the very challenging task of modelling the way galaxies cluster on small scales, in order to extract useful information for cosmology, and I am using tailored simulations to assess how well Euclid will measure the largest cosmological scales and so characterise the initial conditions of the Universe. In both Euclid and LSST, I am working on synergies with radio experiments; my goal is to find innovative ways to optimally combine optical and radio surveys in order to maximise their joint scientific output. Another exciting aspect of working with these surveys is the huge volume of data that will become available. For example, Phase 1 of the SKA is expected to generate 300 petabytes of data products every year. My research therefore includes developing modernised and innovative data processing and analysis pipelines, which are required for the success of these surveys.
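
    A central building block of such a pipeline is the cross-correlation of a gridded HI intensity map with a galaxy map from an optical survey, usually summarised as a cross power spectrum. The sketch below shows one simple way to estimate a spherically averaged cross power spectrum from two fields on the same 3D grid; the box size, grid resolution and correlated Gaussian test fields are assumptions made for illustration, not MeerKAT, SKA or Euclid specifications.

```python
# Sketch of a spherically averaged cross power spectrum estimator for two
# real fields gridded onto the same 3D box (e.g. an HI intensity map and a
# galaxy overdensity map). Parameters and test fields are illustrative only.
import numpy as np

def cross_power_spectrum(field_a, field_b, box_size, n_bins=20):
    """Spherically averaged cross power spectrum of two real 3D fields."""
    n = field_a.shape[0]
    cell_volume = (box_size / n) ** 3
    volume = box_size ** 3

    fa = np.fft.rfftn(field_a) * cell_volume
    fb = np.fft.rfftn(field_b) * cell_volume
    cross = (fa * np.conj(fb)).real / volume          # per-mode estimate of P_x(k)

    # |k| for every mode kept by rfftn (full grid in x, y; half grid in z).
    k_full = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    k_half = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k_full, k_full, k_half, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    # Average the per-mode estimates in spherical shells of |k|.
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    which = np.digitize(kmag.ravel(), bins)
    counts = np.bincount(which, minlength=n_bins + 2)[1:n_bins + 1]
    sums = np.bincount(which, weights=cross.ravel(), minlength=n_bins + 2)[1:n_bins + 1]
    pk = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    k_centres = 0.5 * (bins[1:] + bins[:-1])
    return k_centres, pk

# Correlated Gaussian test fields standing in for the HI and galaxy maps;
# in a real analysis these would come from the survey pipelines.
rng = np.random.default_rng(1)
common = rng.normal(size=(64, 64, 64))
hi_map = common + 0.3 * rng.normal(size=common.shape)
galaxy_map = common + 0.3 * rng.normal(size=common.shape)

k, pk = cross_power_spectrum(hi_map, galaxy_map, box_size=500.0)  # box in, e.g., Mpc/h
print(k[:3], pk[:3])
```

    Because the test fields share a common component, the estimated cross power is positive on all scales; uncorrelated contaminants in either map would average towards zero here, which is the robustness argument made above.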

  • Funder: Wellcome Trust; Project Code: 092752
    Funder Contribution: 182,777 GBP

    Helminth parasites are masters of immune regulation, and their products have enormous potential for understanding and treating autoimmune and allergic diseases. Cestodes, which have similar potential to the more widely studied trematodes and nematodes, have received little attention to date. We have evidence that the laminated layer (LL) of the cestode Echinococcus granulosus, which interfaces with the host immune system, contains potent immune modulators, particularly with regard to dendritic cell function. The LL inhibits TLR activation as well as inhibiting alternative activation of dendritic cells by IL-4. Several of the phenotypic alterations caused by the LL resemble those elicited by known pharmacological inducers of tolerogenic dendritic cells. Further, LL preparations elicit striking expansion of Foxp3+ regulatory T cells in vivo. In this project we propose to address the specific pathways that are altered in dendritic cells by the LL. Using T cells from TCR-transgenic mice, we will assess both in vitro and in vivo the impact of LL-modified DCs on naïve T cell activation. We further propose to define the molecular components of the LL that are responsible for its immune modulatory properties.

  • Funder: UK Research and Innovation; Project Code: EP/G039070/1
    Funder Contribution: 355,965 GBP

    An algorithm is a systematic procedure for solving a computational problem that can be implemented on a computer. An example is the Gaussian elimination method for solving a system of linear equations. The running time of an algorithm is the number of elementary steps (e.g., an addition, or the modification of a symbol in a string) that the algorithm performs. Of course, the running time depends on the size of the input: in the case of Gaussian elimination, the size of the input is the number of symbols needed to write down (or enter) the linear system of equations. Denote this quantity by m. Then the running time of Gaussian elimination is bounded by m^3. Generally, an algorithm is considered efficient if for every possible input its running time is bounded by a polynomial in the size of that input. (Hence, Gaussian elimination is efficient.)

    In spite of intensive research since the early days of computing, there is a broad class of computational problems for which no efficient algorithms are known. In terms of complexity theory, most of these problems can be classified as NP-hard. One example is the Boolean satisfiability problem (SAT). Here the input is a Boolean formula, and the objective is to find an assignment to the Boolean variables that satisfies the entire formula (if such a satisfying assignment exists). Although the SAT problem is NP-hard, it occurs as a sub-problem in countless real-world applications. In fact, SAT is of similarly eminent importance in Computer Science as solving polynomial equations is in Algebra. Therefore, an immense amount of research deals with heuristic algorithms for SAT. The goal of this line of research is to devise algorithms that can efficiently solve as general a class of SAT inputs as possible (although none of these methods solves all possible inputs efficiently).

    Despite this bulk of work, it remains extremely simple to generate empirically hard problem instances that elude all of the known heuristic algorithms. The easiest way to do so is by drawing a SAT formula at random (from a suitable but very simple probability distribution). Indeed, random input instances were considered prime examples of hard inputs to such an extent that it was proposed to exploit their hardness in cryptographic applications. Random SAT formulas also occur prominently in the seminal work on Algorithms and Complexity from the 1970s, where their empirical hardness was reckoned most vexing. However, it remained unknown why these types of instances eluded all known algorithms (let alone how else to cope with them).

    It therefore came as a surprise when statistical physicists reported that a new algorithm called Survey Propagation (SP) experimentally solves these hard SAT inputs efficiently. Indeed, a naive implementation of SP solves within seconds sample instances with a million variables, while even the most advanced previous SAT solvers struggle to solve inputs with a few hundred variables. SP comes with a sophisticated but mathematically non-rigorous analysis based on ideas from spin glass theory. This analysis suggests why all prior algorithms perform so badly; its key feature is that it links the difficulty of solving a SAT input to geometric properties of the set of solutions. Although the physics methods have inspired the SP algorithm, they do not provide a satisfactory explanation for its success (or its limitations).

    Therefore, the goal of this project is to study these new ideas from spin glass theory from a Computer Science perspective using mathematically rigorous methods. On the one hand, we are going to provide a rigorous analysis of SP to classify what types of inputs it can solve. On the other hand, we intend to study the behaviour of algorithms from the point of view of the solution space geometry; this perspective has not been studied systematically in Algorithms and Complexity before.
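
    To make the "drawing a SAT formula at random" step concrete, the sketch below generates an instance from the standard uniformly random k-SAT model (each clause picks k distinct variables and negates each with probability 1/2) and checks it by brute force. The clause density of about 4.27 clauses per variable used here is the widely reported empirically hard region for 3-SAT; the specific parameters and the exhaustive check are purely illustrative and are not part of the project itself.

```python
# Minimal sketch of the uniformly random k-SAT model referred to above:
# n_clauses clauses over n_vars Boolean variables, each clause built from
# k distinct variables with independent random signs. The brute-force check
# is exponential in n_vars and is included only to make the example complete.
import itertools
import random

def random_ksat(n_vars, n_clauses, k=3, seed=0):
    """Draw a random k-CNF formula as a list of clauses of signed literals."""
    rng = random.Random(seed)
    formula = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), k)   # k distinct variables
        clause = tuple(v if rng.random() < 0.5 else -v for v in variables)
        formula.append(clause)
    return formula

def brute_force_sat(formula, n_vars):
    """Exhaustively try all 2^n assignments (illustration only, not a solver)."""
    for bits in itertools.product((False, True), repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula):
            return assignment
    return None

n = 16
formula = random_ksat(n, n_clauses=int(4.27 * n))   # density ~4.27: hard region for 3-SAT
print("satisfiable" if brute_force_sat(formula, n) is not None else "unsatisfiable")
```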
