
UniGe (University of Genoa)
9 Projects, page 1 of 2
  • Funder: UK Research and Innovation | Project Code: EP/K031864/1
    Funder Contribution: 280,589 GBP

    The main goal of typing is to prevent the occurrence of execution errors during the running of a program. Milner formalised this idea, showing that "well-typed programs cannot go wrong". In practice, type structures provide a fundamental technique for reducing programmer errors; at their strongest, they cover most of the properties of interest to the verification community. A major trend in the development of functional languages is improvement in the expressiveness of the underlying type system, e.g. in terms of Dependent Types, Type Classes, Generalised Algebraic Data Types (GADTs), Dependent Type Classes and Canonical Structures. Milner-style decidable type inference does not always suffice for such extensions (e.g. a principal type may no longer exist), and deciding well-typedness sometimes requires computation beyond compile-time type inference. Implementations of new type inference algorithms include a variety of first-order decision procedures, notably Unification and Logic Programming (LP), Constraint LP, LP embedded into interactive tactics (Coq's eauto), and LP supplemented by rewriting. Recently, Gonthier et al. have made a strong claim that, for richer type systems, LP-style type inference is more efficient and natural than traditional tactic-driven proof development.

    A second major trend is parallelism: the absence of side effects makes it easy to evaluate sub-expressions in parallel. The powerful abstraction mechanisms of function composition and higher-order functions play important roles in parallelisation. Three major parallel languages are Eden (explicit parallelism), Parallel ML (implicit parallelism) and Glasgow parallel Haskell (semi-explicit parallelism). Control parallelism in particular distinguishes functional languages. Type inference and parallelism are rarely considered together in the literature; as type inference becomes more sophisticated and takes a bigger role in overall program development, sequential type inference is bound to become a bottleneck for language parallelisation.

    Our new Coalgebraic Logic Programming (CoALP) offers both extra expressiveness (corecursion) and parallelism in one algorithm. We propose to use CoALP in place of the LP tools currently used in type inference. With the major developments in corecursion, parallelism and typeful (functional) programming mentioned above, it has become vital for these disjoint communities to combine their efforts: enriched type theories rely more and more on the new generation of LP languages; coalgebraic semantics has become influential in language design; and parallel dialects of languages have huge potential in applying common techniques across the FP/LP programming paradigms. This project is unique in bringing together local and international collaborators working in the three communities, and the number of supporters the project has attracted speaks to the timeliness of our agenda. The project will impact two streams of EPSRC's strategic plan, "Programming Languages and Compilers" and "Verification and Correctness". It is novel in aspects of Theory (a coalgebraic study of (co)recursive computations arising in automated proof search), Practice (implementation of the new language CoALP and its embedding in type-inference tools) and Methodology (mixed corecursion and parallelism).
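
    The summary above treats type inference as first-order proof search, with unification as its core decision procedure. As a purely illustrative sketch (plain Milner-style unification in Python, with all names invented here; this is not CoALP), the following fragment unifies type terms and uses the result to type the application of the identity function to an integer:

      # Minimal first-order unification, the decision procedure at the core of
      # Milner-style type inference; an illustrative sketch only, not CoALP.
      # Type variables are strings; constructed types are tuples whose first
      # element is the constructor name, e.g. ('arrow', t1, t2) or ('int',).

      def walk(t, subst):
          """Follow substitution bindings until t is not a bound variable."""
          while isinstance(t, str) and t in subst:
              t = subst[t]
          return t

      def occurs(v, t, subst):
          """Occurs check: does variable v appear inside term t?"""
          t = walk(t, subst)
          if t == v:
              return True
          if isinstance(t, tuple):
              return any(occurs(v, arg, subst) for arg in t[1:])
          return False

      def unify(t1, t2, subst):
          """Return a substitution unifying t1 and t2, or None if none exists."""
          t1, t2 = walk(t1, subst), walk(t2, subst)
          if t1 == t2:
              return subst
          if isinstance(t1, str):                    # unbound type variable
              return None if occurs(t1, t2, subst) else {**subst, t1: t2}
          if isinstance(t2, str):
              return unify(t2, t1, subst)
          if t1[0] == t2[0] and len(t1) == len(t2):  # same constructor and arity
              for a, b in zip(t1[1:], t2[1:]):
                  subst = unify(a, b, subst)
                  if subst is None:
                      return None
              return subst
          return None

      # Typing the application of the identity function (type 'a -> 'a) to an
      # int: unify 'a -> 'a with int -> r, then read off the result type r.
      s = unify(('arrow', 'a', 'a'), ('arrow', ('int',), 'r'), {})
      print(walk('r', s))   # ('int',)

    CoALP, as proposed here, would replace the sequential LP-style search built around such unification with a corecursive, parallelisable derivation strategy.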

  • Funder: UK Research and Innovation | Project Code: EP/X010740/1
    Funder Contribution: 348,065 GBP

    Inverse problems are concerned with reconstructing the causes of a physical phenomenon from observational data. They have wide applications in science and engineering, for example in medical imaging, signal processing and machine learning. Iterative methods are a particularly powerful paradigm for solving a wide variety of inverse problems. The problems are often posed by defining an objective function that contains information about data fidelity and assumptions about the sought quantity, and this objective is then minimised through an iterative process. Mathematics has played a critical role in analysing inverse problems and the corresponding algorithms.

    Recent advances in data acquisition and precision have resulted in datasets of increasing size for a vast number of problems, including computed and positron emission tomography. This increase in data size poses significant computational challenges for traditional reconstruction methods, which typically require the use of all the observational data in each iteration. Stochastic iterative methods address this computational bottleneck by using only a small subset of the observations in each iteration. The resulting methods are highly scalable and have been successfully deployed in a wide range of problems. However, the use of stochastic methods has thus far been limited to a restrictive set of geometric assumptions, requiring Hilbert or Euclidean spaces.

    The proposed fellowship aims to address these issues by developing stochastic gradient methods for solving inverse problems posed in Banach spaces. The use of non-Hilbert spaces is gaining increased attention within the inverse problems and machine learning communities. Banach spaces offer much richer geometric structures and are a natural domain for many problems in partial differential equations and medical tomography. Moreover, Banach-space norms are advantageous for preserving important properties, such as sparsity. This fellowship will introduce modern optimisation methods into classical Banach space theory, and its successful completion will create novel research opportunities for inverse problems and machine learning.
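
    To make the scalability point concrete, here is a minimal sketch of the stochastic gradient idea in Python, for a synthetic linear problem A x = b in a plain Euclidean setting (the Banach-space machinery this fellowship targets is precisely what such a sketch lacks); all sizes and parameters are invented for illustration:

      import numpy as np

      # Sketch of a stochastic gradient method for a linear inverse problem
      # A x = b, minimising f(x) = ||A x - b||^2 / (2 m). Each iteration uses
      # only a small random batch of rows: the scalability idea described
      # above, here in a Euclidean setting rather than a Banach space.

      rng = np.random.default_rng(0)
      m, n = 2000, 100
      A = rng.standard_normal((m, n))
      x_true = rng.standard_normal(n)
      b = A @ x_true + 0.01 * rng.standard_normal(m)    # noisy observations

      x = np.zeros(n)
      batch, step = 20, 1e-2
      for _ in range(3000):
          idx = rng.integers(0, m, size=batch)          # random subset of the data
          residual = A[idx] @ x - b[idx]
          x -= step * (A[idx].T @ residual / batch)     # unbiased gradient estimate

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))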

  • Funder: UK Research and Innovation | Project Code: NE/P021395/1
    Funder Contribution: 938,580 GBP

    The vast, remote seas that surround the continent of Antarctica are collectively known as the Southern Ocean. This region, with its severe environment of mountainous seas, winter darkness, strong winds, freezing temperatures and ice, is unsurprisingly one of the least explored and least observed parts of the global ocean. Because of these extremes, however, it plays a large and still unquantified role in Earth's climate system: large amounts of heat and carbon dioxide are exchanged here between the atmosphere and the ocean. The physical mechanisms controlling these atmosphere-ocean exchanges are the subject of the NERC ORCHESTRA programme. Within PICCOLO, we propose to concentrate on the role that chemistry and biology play in those exchanges. In particular, PICCOLO will focus on understanding the mechanisms that transform the carbon contained in seawater as it rises to the surface near Antarctica and interacts with the atmosphere, the ice, and the phytoplankton and zooplankton inhabiting the near surface, before descending to the ocean depths.

    PICCOLO will undertake an ocean research expedition to the region close to Antarctica, since computer models and satellite images show these areas to be crucial for carbon processing. Freezing seawater here releases salt into the water below, making it denser and causing it to sink. Strong winds push the sea ice away from the Antarctic coastline, leaving areas of open water called polynyas. Within the polynyas there is enough light during the summer for phytoplankton to grow, and the dense waters formed there sink to the deep, driving a giant ocean conveyor belt that has a large impact on Earth's climate system.

    The PICCOLO team will measure the key variables that control the biological and chemical processes in this region, including iron, nutrients, phytoplankton and zooplankton. Crucially, the team will study the rate terms controlling exchanges between different parts of this biological and chemical system. The team will use the latest technologies, including autonomous submarines, gliders and floats, to observe these processes in otherwise inaccessible and previously unstudied areas, such as under the sea ice. Most ambitiously, we will anchor an autonomous submarine to the seabed within a polynya, leave it over the winter season to collect data, and recover it the following spring. Instruments mounted on seals will continuously take data as the animals dive through the water column, sending it back to scientists in real time via satellite communication links. This wealth of novel data will be analysed by the PICCOLO team, using state-of-the-art computer models, to test our ideas about how the whole complex set of physical, chemical and biological processes affects carbon. Conceptually, we will follow an imaginary parcel of water through the system, looking at processes between the atmosphere and ocean, biological processes in the surface layer, exchanges between the upper and lower ocean, and the final fate of the carbon.

    The PICCOLO hypotheses address the following:
    (i) the factors controlling the exchange of carbon dioxide between the ocean and atmosphere, and the role of ultraviolet light in controlling the concentration of carbon dioxide in seawater;
    (ii) the role of light, iron and nutrients in how carbon is processed by the plankton in the water;
    (iii) the mediating processes governing the export of carbon from the upper ocean to depth;
    (iv) the processes that take the carbon into the deep ocean on the next stage of its global journey.

  • Funder: UK Research and Innovation | Project Code: EP/R019622/1
    Funder Contribution: 100,987 GBP

    This project concerns computational mathematics and logic. The aim is to improve the ability of computers to perform "Quantifier Elimination" (QE). A logical statement is "quantified" if it is preceded by a qualification such as "for all" or "there exists". Here is an example of a quantified statement: "there exists x such that ax^2 + bx + c = 0 has two solutions for x". While the statement is mathematically precise, its implications are unclear: what restrictions does this statement of existence force upon us? QE corresponds to replacing a quantified statement by an equivalent unquantified one. In this case we may replace the statement by "b^2 - 4ac > 0", the condition for the quadratic to have two distinct real solutions for x. You may recognise this equivalence from GCSE mathematics and the study of the quadratic equation. The important point is that the latter statement can be derived automatically by a computer from the former, using a QE procedure.

    QE is not subject to the numerical rounding errors of most computations. Solutions are not numerical answers but algebraic descriptions that offer insight into the structure of the problem at hand. In the example above, QE shows us not what the solutions to a particular quadratic equation are, but how, in general, the number of solutions depends on the coefficients a, b and c. QE has numerous applications throughout engineering and the sciences. An example from biology is the determination of medically important values of parameters in a biological network; another, from economics, is identifying which hypotheses in economic theories are compatible, and for what values of the variables. In both cases QE can help in theory, but in practice the size of the statements means that state-of-the-art procedures run out of computer time or memory.

    The extensive development of QE procedures means they have many options and choices about how they are run. These decisions can greatly affect how long QE takes, rendering an intractable problem easy and vice versa. Making the right choice is a critical but understudied problem, and it is the focus of this project. At the moment QE procedures make such choices either under the direct supervision of a human or based on crude human-made heuristics (rules of thumb based on intuition and experience but with limited scientific basis). The purpose of this project is to replace these with machine learning techniques. Machine Learning (ML) is an overarching term for tools that allow computers to make decisions that are not explicitly programmed, usually involving the statistical analysis of large quantities of data. ML is quite at odds with the field of Symbolic Computation, which studies QE: the latter prizes exact correctness and so shuns probabilistic tools, making their application here very novel. We can combine these different worlds because every choice that ML will make still produces a correct and exact answer, only at a different computational cost.

    The project follows pilot studies undertaken by the PI, which experimented with one ML technique and found that it improved upon existing heuristics for two particular decisions in a QE algorithm. We will build on this by working with the spectrum of leading ML tools to identify the optimal techniques for application in Symbolic Computation. We will demonstrate their use both for low-level algorithm decisions and for choices between different theories and implementations. Although focused on QE, we will also demonstrate ML as a new route to optimisation in Computer Algebra more broadly, and the work encompasses Project Partners and events to maximise this impact. Finally, the project will deliver an improved QE procedure that makes use of ML automatically, without user input. This will be produced in the commercial Computer Algebra software Maple, in collaboration with industrial Project Partner Maplesoft.
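
    The quadratic example above can be reproduced with off-the-shelf symbolic software. The sketch below uses SymPy, which does not implement a full real quantifier elimination procedure (dedicated tools such as QEPCAD-B do), but recovers the discriminant that drives the equivalence; it illustrates the concept only and is not the project's procedure:

      from sympy import symbols, discriminant, solve

      # The worked QE example from the text: eliminating the quantifier from
      # "there exists x such that a*x**2 + b*x + c = 0 has two solutions"
      # yields the quantifier-free condition b**2 - 4*a*c > 0.

      a, b, c, x = symbols('a b c x', real=True)

      print(discriminant(a*x**2 + b*x + c, x))    # -4*a*c + b**2
      print(solve(a*x**2 + b*x + c, x))           # the two quadratic-formula roots,
                                                  # real and distinct exactly when the
                                                  # discriminant is positive (and a != 0)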

  • Funder: CHIST-ERA | Project Code: CHIST-ERA-17-ORMR-004

    Humans excel at dealing with everyday objects and manipulation tasks, learning new skills, and adapting to different or complex environments. This is a basic skill for our survival as well as a key feature of our world of artefacts and human-made devices. Our expert ability to use our hands results from a lifetime of learning, both by observing other skilled humans and by discovering first hand how to handle objects. Unfortunately, today's robotic hands are still unable to achieve such a high level of dexterity, nor are systems fully able to understand their own potential. For robots to truly operate in a human world and fulfil expectations as intelligent assistants, they must be able to manipulate a wide variety of unknown objects, mastering strength, finesse and subtlety. To achieve such dexterity with robotic hands, cognitive capacity is needed to deal with uncertainties in the real world and to generalise previously learned skills to new objects and tasks. Furthermore, we assert that the complexity of programming must be greatly reduced and robot autonomy must become much more natural.

    The InDex project aims to understand how humans perform in-hand object manipulation and to replicate the observed skilled movements with dexterous artificial hands, merging the concepts of reinforcement and transfer learning to generalise in-hand skills across multiple objects and tasks. In addition, an abstraction and representation of previous knowledge will be fundamental for reproducing learned skills on different hardware. Learning will use data across multiple modalities, collected, annotated and assembled into a large dataset. The data and our methods will be shared with the wider research community to allow testing against benchmarks and reproduction of results. More concretely, the core objectives are:
    (i) to build a multi-modal artificial perception architecture that extracts data on object manipulation by humans;
    (ii) to create a multimodal dataset of in-hand manipulation tasks such as regrasping, reorienting and finely repositioning;
    (iii) to develop an advanced object modelling and recognition system, including the characterisation of object affordances and grasping properties, encapsulating both explicit information and possible implicit object usages;
    (iv) to autonomously learn and precisely imitate human strategies in handling tasks; and
    (v) to build a bridge between observation and execution, allowing deployment that is independent of the robot architecture.
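
    As a toy gloss on the reinforcement-learning ingredient mentioned above, and nothing more (the task, states, actions and rewards below are invented; InDex targets high-dimensional dexterous manipulation, not this), a tabular Q-learning loop on a contrived "regrasp" task looks like this:

      import random

      # Tabular Q-learning on a made-up "regrasp" task, purely to illustrate
      # the reinforcement-learning concept; not the InDex method. States are
      # grasp qualities 0..4 (4 = stable); actions are 0 = hold, 1 = regrasp.

      N_STATES, ACTIONS = 5, (0, 1)
      Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
      alpha, gamma, eps = 0.1, 0.9, 0.2

      def step(state, action):
          """Hypothetical dynamics: regrasping usually improves grasp quality."""
          if action == 1:
              if random.random() < 0.7:
                  state = min(state + 1, N_STATES - 1)
              else:
                  state = max(state - 1, 0)
          reward = 1.0 if state == N_STATES - 1 else -0.1   # reward stable grasps
          return state, reward

      for _ in range(2000):
          s = random.randrange(N_STATES)
          for _ in range(20):
              # epsilon-greedy choice between exploring and exploiting
              if random.random() < eps:
                  a = random.choice(ACTIONS)
              else:
                  a = max(ACTIONS, key=lambda act: Q[(s, act)])
              s2, r = step(s, a)
              # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s2, a')
              best_next = max(Q[(s2, act)] for act in ACTIONS)
              Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
              s = s2

      # Learned policy: regrasp until the grasp is stable, then hold.
      print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})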

