
Amazon Development Centre Scotland

4 Projects
  • Funder: UK Research and Innovation Project Code: EP/R033633/1
    Funder Contribution: 992,641 GBP

    As interaction on online, Web-based platforms becomes an essential part of people's everyday lives and data-driven AI algorithms exert a growing influence on society, we are experiencing significant tensions in user perspectives on how these algorithms are used on the Web. These tensions result in a breakdown of trust: users do not know when to trust the outcomes of algorithmic processes and, consequently, the platforms that use them. As trust is a key component of the Digital Economy, where algorithmic decisions affect citizens' everyday lives, this is a significant issue that requires addressing.

    ReEnTrust explores new technological opportunities for platforms to regain user trust and aims to identify how this may be achieved in ways that are user-driven and responsible. Focusing on AI algorithms and large-scale platforms used by the general public, our research questions include: What are user expectations and requirements regarding the rebuilding of trust in algorithmic systems, once that trust has been lost? Is it possible to create technological solutions that rebuild trust by embedding values in recommendation, prediction, and information filtering algorithms and allowing for a productive debate on algorithm design between all stakeholders? To what extent can user trust be regained through technological solutions, and what further trust rebuilding mechanisms might be necessary and appropriate, including policy, regulation, and education?

    The project will develop an experimental online tool that allows users to evaluate and critique algorithms used by online platforms, and to engage in dialogue and collective reflection with all relevant stakeholders in order to jointly recover from algorithmic behaviour that has caused loss of trust. For this purpose, we will develop novel, advanced AI-driven mediation support techniques that allow all parties to explain their views and suggest possible compromise solutions. Extensive engagement with users, stakeholders, and platform service providers during the development of this online tool will result in an improved understanding of what makes AI algorithms trustable. We will also develop policy recommendations, requirements for technological solutions, assessment criteria for the inclusion of trust relationships in the development of algorithmically mediated systems, and a methodology for deriving a "trust index" for online platforms that allows users to assess the trustability of platforms easily.

    The project is led by the University of Oxford in collaboration with the Universities of Edinburgh and Nottingham. Edinburgh develops novel computational techniques to evaluate and critique the values embedded in algorithms, and a prototypical AI-supported platform that enables users to exchange opinions regarding algorithm failures and to jointly agree on how to "fix" the algorithms in question to rebuild trust. The Oxford and Nottingham teams develop methodologies that support the user-centred and responsible development of these tools. This involves studying the processes of trust breakdown and rebuilding in online platforms, and developing a Responsible Research and Innovation approach to understanding trustability and trust rebuilding in practice. A carefully selected set of industrial and other non-academic partners ensures that ReEnTrust work is grounded in real-world examples and experiences, and that it embeds balanced, fair representation of all stakeholder groups.

    ReEnTrust will advance the state of the art in trust rebuilding technologies for algorithm-driven online platforms by developing the first AI-supported mediation and conflict resolution techniques and a comprehensive user-centred design and Responsible Research and Innovation framework that will promote a shared-responsibility approach to the use of algorithms in society, thereby contributing to a flourishing Digital Economy.
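
    The abstract mentions a methodology for deriving a "trust index" but does not say how such an index would be computed. The sketch below is purely illustrative and is not the project's methodology: it assumes a simple weighted average over a handful of hypothetical trust dimensions, just to make the idea of a single user-facing score concrete.

        # Purely illustrative: one way a single "trust index" could be aggregated
        # from per-dimension scores. The dimensions, weights and scale are
        # assumptions, not the methodology developed by ReEnTrust.
        from dataclasses import dataclass

        @dataclass
        class TrustScores:
            """Hypothetical user-facing trust dimensions, each scored in [0, 1]."""
            transparency: float   # can users see why a decision was made?
            fairness: float       # are outcomes free of systematic bias?
            redress: float        # can users contest and correct outcomes?
            reliability: float    # does the system behave consistently over time?

        def trust_index(scores, weights=None):
            """Combine per-dimension scores into a single index in [0, 1]."""
            weights = weights or {"transparency": 0.3, "fairness": 0.3,
                                  "redress": 0.2, "reliability": 0.2}
            total = sum(weights.values())
            return sum(getattr(scores, dim) * w for dim, w in weights.items()) / total

        platform = TrustScores(transparency=0.4, fairness=0.7, redress=0.5, reliability=0.9)
        print(f"trust index: {trust_index(platform):.2f}")   # 0.61 for these example scores

    In practice, which dimensions matter, how they are scored, and how they are weighted would themselves need to be negotiated with users and stakeholders, which is precisely the kind of question ReEnTrust investigates.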

  • Funder: UK Research and Innovation Project Code: EP/L016427/1
    Funder Contribution: 4,746,530 GBP

    Overview: We propose a Centre for Doctoral Training in Data Science. Data science is an emerging discipline that combines machine learning, databases, and other research areas in order to generate new knowledge from complex data. Interest in data science is exploding in industry and the public sector, both in the UK and internationally. Students from the Centre will be well prepared to work on tough problems involving large-scale unstructured and semi-structured data, which increasingly arise across a wide variety of application areas.

    Skills need: There is a significant industrial need for students who are well trained in data science, and skilled data scientists are in high demand. A report by the McKinsey Global Institute cites a shortage of up to 190,000 qualified data scientists in the US; the situation in the UK is likely to be similar. A 2012 report in the Harvard Business Review concludes: "Indeed the shortage of data scientists is becoming a serious constraint in some sectors." A report on the Nature website cited an astonishing 15,000% increase in job postings for data scientists in a single year, from 2011 to 2012. Many of our industrial partners (see letters of support) have expressed a pressing need to hire in data science.

    Training approach: We will train students using a rigorous and innovative four-year programme that is designed not only to train students in performing cutting-edge research but also to foster interdisciplinary interactions between students and to build students' practical expertise through interaction with a wide consortium of partners. The first year of the programme combines taught coursework and a sequence of small research projects; the taught coursework includes courses in machine learning, databases, and other research areas. Years 2-4 of the programme consist primarily of an intensive PhD-level research project. The programme will provide students with breadth across the interdisciplinary scope of data science, depth in a specialist area, training in leadership and communication skills, and an appreciation for practical issues in applied data science. All students will receive individual supervision from at least two members of Centre staff. The training programme will be especially characterized by opportunities for combining theory and practice, and for student-led and peer-to-peer learning.
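
    As a minimal illustration of the "machine learning plus databases" combination the overview describes, the sketch below pulls semi-structured JSON records out of a relational store and applies a trivial statistical model to them. The schema, data, and model are invented for illustration only and have no connection to the Centre's actual curriculum.

        # Toy example: query semi-structured payloads from a database, flatten
        # them, and apply a deliberately simple model (flag values more than
        # one standard deviation above the mean). Uses only the standard library.
        import json
        import sqlite3
        import statistics

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
        conn.executemany(
            "INSERT INTO events (payload) VALUES (?)",
            [(json.dumps({"user": "a", "clicks": c}),) for c in (3, 5, 4, 8, 6)],
        )

        # "Feature extraction": flatten the semi-structured JSON payloads.
        clicks = [json.loads(row[0])["clicks"]
                  for row in conn.execute("SELECT payload FROM events")]

        mean, spread = statistics.mean(clicks), statistics.pstdev(clicks)
        unusual = [c for c in clicks if c > mean + spread]
        print(f"mean={mean:.1f}, stdev={spread:.2f}, flagged={unusual}")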

  • Funder: UK Research and Innovation Project Code: EP/V000497/1
    Funder Contribution: 1,034,990 GBP

    Computers and smart phones are now used in all areas of everyday life, from business to education to entertainment. From the individual to the national level, we have become reliant on software systems, and we risk substantial loss if these systems fail or have their security compromised. The only way to protect against this is to ensure that the software has no inherent weakness, and this requires that we use the power of mathematics to ensure that the software is both safe and secure.

    As software systems grow more complex, the potential for failure grows. Traditional testing approaches become unscalable and give weaker assurances about the safety of software systems. Similarly, our software systems are increasingly subject to attacks from those who seek to exploit our dependence on technology for their own nefarious purposes. These so-called cyber attacks are becoming more elaborate, actively seeking to subvert existing methods of detection. Only by removing the underlying vulnerabilities can we truly protect ourselves. Therefore, to properly handle notions of safety and security we must focus on fundamental properties of software systems and reason about them generally. However, software systems are not isolated: they run on hardware alongside other software, and the safety and security of any software depends on this context. Ideally, we have safety and security "all the way down", from semicolons to silicon.

    A recent effort, led by ARM and the University of Cambridge, has introduced a new model for safe and secure systems: Capability Hardware. The most significant set of security vulnerabilities stems from memory being accessed or manipulated by a process that should not be allowed to access or manipulate that memory. The idea behind Capability Hardware is to provide security guarantees at the hardware level with so-called capabilities: a special kind of reference to memory that knows who is allowed to do what with it. The result should be much more secure and robust software systems. However, now that we have more security at the hardware level, it is key that we ensure it is correctly utilised at the software level.

    This project aims to transport the substantial advances in software verification to the setting of this new capability hardware. In general, we will extend existing tools, and create new tools, able to reason about the safety and security of industrially relevant software systems running on Capability Hardware. There will be a significant focus on achieving high technological readiness, ensuring that when the international community is ready to embrace Capability Hardware there already exists a mature set of tools able to verify the safety and security of the software sitting on top of it.
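
    To make the idea of a capability concrete, the toy model below mimics in software what capability hardware enforces in silicon: a reference that carries its own bounds and permissions and that traps any access outside them. The class names and API are invented for illustration; they are not the CHERI interface and not part of this project's tooling.

        # Toy software model of a hardware capability. Real capability hardware
        # performs these checks on every load/store in hardware; this sketch only
        # illustrates the kind of violation that gets trapped.
        class CapabilityError(Exception):
            pass

        class Capability:
            def __init__(self, memory, base, length, perms=frozenset({"r", "w"})):
                self.memory, self.base, self.length, self.perms = memory, base, length, perms

            def _check(self, offset, perm):
                if perm not in self.perms:
                    raise CapabilityError(f"permission '{perm}' not granted")
                if not 0 <= offset < self.length:
                    raise CapabilityError(f"offset {offset} outside [0, {self.length})")

            def load(self, offset):
                self._check(offset, "r")
                return self.memory[self.base + offset]

            def store(self, offset, value):
                self._check(offset, "w")
                self.memory[self.base + offset] = value

        memory = bytearray(64)
        buf = Capability(memory, base=16, length=8)   # only bytes 16..23 are reachable
        buf.store(0, 0x41)                            # in bounds: allowed
        print(buf.load(0))                            # prints 65
        try:
            buf.store(8, 0xFF)                        # one past the end: a classic overflow
        except CapabilityError as exc:
            print(f"trapped out-of-bounds write: {exc}")

    The point of the model is that the out-of-bounds write is stopped at the reference itself, regardless of what the surrounding code does, which is the property the project's verification tools are intended to reason about.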

  • Funder: UK Research and Innovation Project Code: EP/L01503X/1
    Funder Contribution: 3,937,630 GBP

    The worldwide software market, estimated at $250 billion per annum, faces a disruptive challenge unprecedented since its inception: for performance and energy reasons, parallelism and heterogeneity now pervade every layer of the computing systems infrastructure, from the internals of commodity processors (manycore), through small scale systems (GPGPUs and other accelerators) and on to globally distributed systems (web, cloud). This pervasive parallelism renders the hierarchies, interfaces and methodologies of the sequential era unviable. Heterogeneous parallel hardware requires new methods of compilation for new programming languages supported by new system development strategies. Parallel systems, from nano to global, create difficult new challenges for modelling, simulation, testing and verification. This poses a set of urgent interconnected problems of enormous significance, impacting and disrupting all research and industrial sectors which rely upon computing technology. Our CDT will generate a stream of more than 50 experts, prepared to address these challenges by taking up key roles in academic and industrial research and development labs, working to shape the future of the industry. The research resources and industrial connections available to our CDT make us uniquely well placed within the UK to deliver on these aspirations.

    The "pervasive parallelism challenge" is to undertake the fundamental research and design required to transform methods and practice across all levels of the ICT infrastructure, in order to exploit these new technological opportunities. Doing so will allow us to raise the management of heterogeneous concurrency and parallelism from a niche activity in the care of experts, to a regularised component of the mainstream. This requires a steady flow of highly educated, highly skilled practitioners, with the ability to relate to opportunities at every level and to communicate effectively with specialists in related areas. These highly skilled graduates must not only have deep expertise in their own specialisms, but crucially, an awareness of relationships to the surrounding computational system.

    The need for fundamental work on heterogeneous parallelism is globally recognised by diverse interest groups. In the USA, reports undertaken by the Computing Community Consortium and the National Research Council recognise the paradigm shift needed for this technology to be incorporated into research and industry alike. Both these reports were used as fundamental arguments in initiating the call for proposals by the National Science Foundation (NSF) on Exploiting Parallelism and Scalability, in the context of the NSF's Advanced Computing Infrastructure: Vision and Strategic Plan which calls for fundamental research to answer the question of "how to enable the computational systems that will support emerging applications without the benefit of near-perfect performance scaling from hardware improvements." Similarly, the European Union has identified the need for new models of parallelism as part of its Digital Agenda. Under the agenda goals of Cloud Computing and Software and Services, parallelism plays a crucial role and the Commission asserts the need for a deeper understanding and new models of parallel computation that will enable future technology. Given the UK's global leadership status it is imperative that similar questions be posed and answered here.
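
    As a small illustration of the kind of data parallelism the description refers to, the sketch below expresses the same computation sequentially and as a parallel map across cores. The workload is an arbitrary stand-in; the example shows only the general pattern, not the CDT's research.

        # Minimal sketch of a data-parallel map: the same CPU-bound kernel run
        # sequentially on one core and spread across all cores via a process pool.
        from multiprocessing import Pool

        def work(n):
            """A stand-in for an arbitrary CPU-bound kernel."""
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            inputs = [200_000] * 8

            sequential = [work(n) for n in inputs]   # one core, one item at a time
            with Pool() as pool:                     # one worker process per core
                parallel = pool.map(work, inputs)

            assert sequential == parallel
            print("results agree; the parallel version spreads the work across cores")

    Raising this kind of pattern from a hand-tuned, expert-only activity to something that compilers, languages and tools handle routinely, across manycore, accelerator and distributed settings, is the challenge the CDT describes.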

