NHSx

3 Projects
  • Funder: UK Research and Innovation
    Project Code: EP/W011654/1
    Funder Contribution: 559,681 GBP

    As computing systems become increasingly autonomous, able to independently pilot vehicles, detect fraudulent banking transactions, or read and diagnose our medical scans, it is vital that humans can confidently assess and ensure their trustworthiness. Our project develops a novel, people-centred approach to overcoming a major obstacle to this, known as responsibility gaps. Responsibility gaps occur when we cannot identify a person who is morally responsible for an action with high moral stakes, either because it is unclear who was behind the act, or because the agent does not meet the conditions for moral responsibility; for example, if the act was not voluntary, or if the agent was not aware of it. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust.

    Autonomous systems create new responsibility gaps. They operate in high-stakes areas such as health and finance, but their actions may not be under the control of a morally responsible person, or may not be fully understandable or predictable by humans because of the complex 'black-box' algorithms driving these actions. To make such systems trustworthy, we need a way of bridging these gaps. Our project draws upon research in philosophy, cognitive science, law and AI to develop new ways for autonomous system developers, users and regulators to bridge responsibility gaps by boosting the ability of systems to deliver a vital and understudied component of responsibility, namely answerability.

    When we say someone is 'answerable' for an act, it is a way of talking about their responsibility. But answerability is not about having someone to blame; it is about supplying the people affected by our actions with the answers they need or expect. Responsible humans answer for actions in many different ways: they can explain, justify, reconsider, apologise, offer amends, make changes or take future precautions. Answerability therefore encompasses a richer set of responsibility practices than explainability in computing or accountability in law. Often, the very act of answering for our actions improves us, helping us be more responsible and trustworthy in the future. This is why answerability is key to bridging responsibility gaps. It is not about who we name as the 'responsible person' (which is more difficult to identify in autonomous systems), but about what we owe to the people holding the system responsible. If the system as a whole (machines plus people) can get better at giving the answers that are owed, the system can still meet its present and future responsibilities to others. Answerability is thus a system capability for executing responsibilities that can bridge responsibility gaps.

    Our ambition is to provide the theoretical and empirical evidence, and the computational techniques, that demonstrate how to enable autonomous systems (including the wider "systems" of developers, owners, users, etc.) to supply the kinds of answers that people seek from trustworthy agents. Our first workstream establishes the theoretical and conceptual framework that allows answerability to be better understood and executed by system developers, users and regulators. The second workstream grounds this in a people-centred, evidence-driven approach by engaging various publics, users, beneficiaries and regulators of autonomous systems in the research. Focus groups, workshops and interviews will be used to discuss cases and scenarios in health, finance and government that reveal what kinds of answers people expect from trustworthy systems operating in these areas. Finally, our third workstream develops novel computational AI techniques for boosting the answerability of autonomous systems through more dialogical and responsive interfaces with users and regulators; a minimal sketch of what such an interface might look like follows below. Our research outputs and activities will produce a mix of academic, industry and public-facing resources for designing, deploying and governing more answerable autonomous systems.
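    To make the third workstream's notion of a 'dialogical and responsive interface' concrete, here is a minimal, hypothetical sketch in Python. It is illustrative only and not drawn from the project: the Decision and AnswerabilityLog names are invented, and the methods simply mirror the responsibility practices the abstract lists (explain, justify, reconsider).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Decision:
    """One automated decision, recorded with what is needed to answer for it."""
    subject: str          # who is affected by the decision
    outcome: str          # what the system decided
    reasons: list[str]    # the factors that drove the decision
    policy: str           # the rule or objective the decision served
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class AnswerabilityLog:
    """Hypothetical 'dialogical interface': affected people can put different
    kinds of questions to the system, mirroring the responsibility practices
    named in the abstract (explain, justify, reconsider)."""

    def __init__(self) -> None:
        self._decisions: list[Decision] = []

    def record(self, decision: Decision) -> None:
        self._decisions.append(decision)

    def explain(self, subject: str) -> list[str]:
        """Answer 'why did this happen to me?' with the recorded reasons."""
        return [r for d in self._decisions if d.subject == subject for r in d.reasons]

    def justify(self, subject: str) -> list[str]:
        """Answer 'by what right?' with the policy each decision served."""
        return [d.policy for d in self._decisions if d.subject == subject]

    def reconsider(self, subject: str, new_evidence: str) -> str:
        """Answer 'will you look again?' by queueing decisions for human review."""
        flagged = [d for d in self._decisions if d.subject == subject]
        return f"{len(flagged)} decision(s) queued for human review given: {new_evidence}"


# Example dialogue with an affected person
log = AnswerabilityLog()
log.record(Decision(subject="applicant-42", outcome="loan declined",
                    reasons=["income below threshold", "short credit history"],
                    policy="credit-risk policy v3"))
print(log.explain("applicant-42"))     # ['income below threshold', 'short credit history']
print(log.justify("applicant-42"))     # ['credit-risk policy v3']
print(log.reconsider("applicant-42", "updated income statement"))
```

    The point of the sketch is that answerability is a property of the whole record-and-respond loop, not of any single named 'responsible person': the same log serves explanation, justification and reconsideration.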

  • Funder: UK Research and Innovation
    Project Code: EP/V026259/1
    Funder Contribution: 3,357,500 GBP

    Machine learning (ML), and in particular deep learning (DL), is one of the fastest-growing areas of modern science and technology, with a potentially enormous and transformative impact on all areas of our lives. The applications of DL embrace many disciplines, such as the (bio-)medical sciences, computer vision, the physical sciences, the social sciences, speech recognition, gaming, music and finance. DL-based algorithms are now used to play chess and Go at the highest level, diagnose illness, drive cars, recruit staff and even make legal judgements. The possible applications in the future are almost unlimited. Perhaps DL methods will one day be used to predict the weather and climate, or even human behaviour.

    However, alongside this explosive growth has been a concern that there is a lack of explainability behind DL and the way that DL-based algorithms make their decisions, which leads to a lack of trust in the use of these algorithms. One reason is that the huge successes of deep learning are not well understood: the results are mysterious, and there is no clear link between the data training DL algorithms (which is often vague and unstructured) and the decisions these algorithms make. Part of the reason is that DL has advanced so fast that there is a lack of understanding of its foundations. As the leading computer scientist Ali Rahimi put it at NIPS 2017: 'We say things like "machine learning is the new electricity". I'd like to offer another analogy. Machine learning has become alchemy!' Indeed, despite the roots of ML lying in mathematics, statistics and computer science, there is currently hardly any rigorous mathematical theory for the setup, training and application performance of deep neural networks. We urgently need to change machine learning from alchemy into science.

    This programme grant aims to rise to this challenge and, by doing so, to unlock the future potential of artificial intelligence. It aims to put deep learning onto a firm mathematical basis, and will combine theory, modelling, data and computation to unlock the next generation of deep learning. The grant will comprise an interlocked set of work packages addressing both the theoretical development of DL (so that it becomes explainable) and its algorithmic development (so that it becomes trustworthy); the basic mathematical object such a theory must treat is sketched below. These will then be linked to the development of DL in a number of key application areas, including image processing, partial differential equations and environmental problems. For example, we will explore whether DL-based algorithms can forecast the weather and climate faster and more accurately than the existing physics-based algorithms. The investigators will carry out theoretical investigations and will work with end-users of DL in many application areas. Mindful that policy makers are trying to address the many issues raised by DL, the investigators will also reach out to them through a series of workshops and conferences. The results of the work will also be presented to the public at science festivals and other open events.
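    To fix ideas about what a mathematical theory of deep learning must explain, a depth-L network is standardly written as a composition of affine maps and pointwise nonlinearities, trained by empirical risk minimisation. This is textbook notation, not drawn from the grant itself:

```latex
% A depth-L network: alternating affine maps (W_i, b_i) and a nonlinearity \sigma
f_\theta(x) = W_L\,\sigma\big(W_{L-1}\cdots\sigma(W_1 x + b_1)\cdots + b_{L-1}\big) + b_L,
\qquad \theta = (W_1, b_1, \ldots, W_L, b_L).

% Training fits \theta to data (x_i, y_i) by empirical risk minimisation:
\min_{\theta}\;\frac{1}{n}\sum_{i=1}^{n} \ell\big(f_\theta(x_i),\, y_i\big)
```

    A foundation of the kind the grant describes would ask when this non-convex optimisation succeeds, why the trained f_theta generalises beyond the training data, and how its predictions can be interpreted.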

  • Funder: UK Research and Innovation
    Project Code: EP/W020548/1
    Funder Contribution: 2,659,370 GBP

    The uneven ways that civil liberties, work, labour and health have all been impacted over the last 18 months, as we have turned to digital technologies to sustain previous ways of life, have not only shown us the extent of inequalities across all societies, cut through as they are by gender, ethnicity, age, opportunity, class and geolocation; they have also led many organisations and businesses across all three sectors to question the values they previously supported. Capitalising on this moment of reflection across industry, the public and third sectors, we explore the possibility of imagining and building a future that takes different core values and practices as central, and works in very different ways. As the roles of organisations and businesses across industry, the public and third sectors change, what is now taken up as core values and ethos will be crucial in defining the future.

    INCLUDE+ will build a knowledge community around in/equalities in digital society, comprising industry, academia, the public and third sectors. Responding to the Equitable Digital Society theme, we ask how we can design, co-create and realise digital services and infrastructures that support inclusion and equality in ways that enable all people to thrive. Focusing on the three connected strands of wellbeing, precarity and civic culture, we address structural inequalities as they emerge through our research, investigating them through whole-system approaches that include the generation of outputs comprising new systems, services and practices to be taken up by organisations. More than this, our knowledge community will be underpinned by empirical, co-curated and participatory-led research that will produce real interventions into those structural inequalities. These interventions will be taken up by organisations, responded to and considered, enabling the wider knowledge community to critically assess them in relation to the values they purport to promote.

    Fed by secondments and supported through smaller exploratory and escalator funds, our knowledge community will grow not only through traditional networking activities such as workshops, annual conferences, academic outputs and further funding, but also through the development of interdisciplinary methods, knowledge-exchange practices and mentorship, which the secondment package will promote. In so doing, we structure our Network+ around participatory research practices, people development and knowledge exchange, aiming to grow our network through the development and growth of people and good practice.

    INCLUDE+ is led by a highly experienced cross-disciplinary team incorporating Management and Business Studies, Computing, Social Sciences, Media and Communication, and Legal Studies. Each investigator brings vibrant international networks; active research projects feeding the Network+; and long experience of impact generation across policy and research. With support from organisations such as the International Labour Organisation, the Law Commission, the Cabinet Office and the Equality and Human Rights Commission, as well as the existing DE community, we will develop from and with existing research, and extend this work and its impact beyond it. Our partner organisations cut across industry, the public and third sectors and include (for example) Lego; NHS AI Lab; Space2; mHabitat; Leeds, Cambridgeshire and Swansea Councils; PeopleDotCom; Ditchley; 5Rights; EAMA; DataKind and IBM.

    We have designed the Network+ to enable a whole-system approach that is genuinely exciting and innovative, not just because of its scalability, transference and scope, but also because of its commitment to people development, knowledge exchange and interdisciplinary practice, which will also shape future research.
