
Amazon Web Services (UK)
6 Projects, page 1 of 2
Project (2021 - 2023)
Partners: Imperial College London, Future Health Works Ltd, Cisco Systems (United Kingdom), Karl Storz Endoscopy Ltd, Amazon Web Services (UK)
Funder: UK Research and Innovation | Project Code: EP/W004755/1 | Funder Contribution: 301,430 GBP

This project is about devising and implementing a smart operating room environment, powered by trustable, human-understanding artificial intelligence, able to continually adapt and learn the best way to optimise safety, efficacy, teamwork, economy, and clinical outcomes. We call this concept MAESTRO. A fitting analogy for MAESTRO is that of an orchestra conductor, a 'maestro', who oversees, overhears and directs a group of people on a common task, towards a common goal: a masterful musical performance. Although the music score is identical for all orchestras, there is no doubt that they all perform it in different ways, and some significantly better than others. Although the quality and personality of the orchestra's musicians are very important, it is widely accepted that the role of the maestro is crucial, and extends beyond the duration of the musical performance to rehearsals and an understanding of the context behind the music score. Thus, while it is possible for orchestras to perform without conductors, most cannot function without one. Our proposed MAESTRO AI-powered operating room of the future revolves around four key elements: (a) the holistic sensing of patient, staff, operating room environment and equipment through an array of diverse sensor devices; (b) human-centric artificial intelligence, able to continually understand situations and actions developing in the operating room and to intervene when necessary; (c) the use of advanced human-machine user interfaces for augmenting task performance; and (d) a secure device interconnectivity platform, allowing the full integration of all the above key elements. As in our orchestra analogy, our envisioned MAESTRO directs the OR staff and surgical devices before, during and after a surgical procedure by: (1) sensing surgical procedures in all their aspects, including those which are currently neglected, such as the physiological responses of staff (e.g., heart rate, blood pressure, sweating, pupil dilation), focus of attention and brain activity, as well as harmful events that may escape the attention of the clinical team; (2) overseeing individual and team performance in real time, throughout the operation and across different types of surgeries and different teams; (3) guiding and assisting the surgical team via automated checkpoints, virtual and augmented visualisations, warnings, individualised and broadcast alerts, automation, semi-automation, robotics, and other aids and factors that can affect performance in the operating room; and (4) augmenting and optimising individual and collective operational capabilities, skills, and task ergonomics, through novel human-machine interaction and interfacing modalities. The project is designed to have a significant societal, economic and technological impact, and to establish the NHS as a leading healthcare paradigm worldwide. MAESTRO leverages the expertise of top researchers in the areas of robotics, sensing, artificial intelligence, human factors, health policies and patient safety. It is co-designed in collaboration with top clinicians, one of the largest NHS Trusts in England, patient groups, performing artists, and several small and medium-sized enterprises and large multinational industries operating in the areas of artificial intelligence, medical devices, digital health, large networks, cloud services and cyber security.
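The sensing-and-intervention loop implied by elements (a), (b) and (d) above can be sketched as a toy publish/subscribe bus that routes staff physiology readings to a monitoring component. All class names, the `surgeon_hr` stream and the 120 bpm threshold are illustrative assumptions, not project APIs:

```python
# Toy sketch of a MAESTRO-style sensing/alerting loop; names and
# thresholds are invented for illustration only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorReading:
    source: str      # e.g. "surgeon_hr" (a hypothetical staff heart-rate monitor)
    value: float

class ORBus:
    """Toy interconnectivity layer: routes readings to subscribers."""
    def __init__(self) -> None:
        self.subscribers: List[Callable[[SensorReading], None]] = []

    def subscribe(self, handler: Callable[[SensorReading], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, reading: SensorReading) -> None:
        for handler in self.subscribers:
            handler(reading)

alerts: List[str] = []

def stress_monitor(reading: SensorReading) -> None:
    # Stand-in for the human-centric AI: flag an elevated heart rate.
    if reading.source == "surgeon_hr" and reading.value > 120:
        alerts.append(f"alert: elevated heart rate ({reading.value:.0f} bpm)")

bus = ORBus()
bus.subscribe(stress_monitor)
for hr in (82, 95, 131):
    bus.publish(SensorReading("surgeon_hr", hr))
```

After the loop, `alerts` holds one entry, for the 131 bpm reading; a real system would add many sensor modalities and far more careful decision logic.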
For further information contact us at helpdesk@openaire.eu
Project (2018 - 2024)
Partners: UCL, Facebook (United States), Methods Group, BT Group (United Kingdom), GridPP, Hewlett-Packard (United Kingdom), HP Research Laboratories, Amazon Web Services (UK)
Funder: UK Research and Innovation | Project Code: EP/R006865/1 | Funder Contribution: 6,146,080 GBP

The smooth functioning of society is critically dependent not only on the correctness of programs, particularly of programs controlling critical and high-sensitivity core components of individual systems, but also upon correct and robust interaction between diverse information-processing ecosystems of large, complex, dynamic, highly distributed systems. Failures are common, unpredictable, highly disruptive, and span multiple organizations. The scale of systems' interdependence will increase by orders of magnitude in the next few years. Indeed, by 2020, with developments in Cloud, the Internet of Things, and Big Data, we may be faced with a world of 25 million apps, 31 billion connected devices, 1.3 trillion tags/sensors, and a data store of 50 trillion gigabytes (data: IDC, ICT Outlook: Recovering Into a New World, #DR2010_GS2_JG, March 2010). Robust interaction between systems will be critical to everyone and every aspect of society. Although the correctness and security of complete systems in this world cannot be verified, we can hope to ensure that specific systems, such as verified safety-, security-, or identity-critical modules, are correctly interfaced.
The recent success of program verification notwithstanding, there remains little prospect of verifying such ecosystems in their entirety: the scale and complexity are just too great, as are the social and managerial coordination challenges. Even defining what it means to verify something that will play an undetermined role in a larger system presents a serious challenge. It is perhaps evident that the most critical aspect of the operation of these information-processing ecosystems lies in their interaction: even perfectly specified and implemented individual systems may be used in contexts for which they were not intended, leading to unreliable, insecure communications between them. We contend that the interfaces supporting such interactions are therefore the critical mechanism for ensuring systems behave as intended. However, the verification and modelling techniques that have been so effective in ensuring the reliability of low-level features of programs, protocols, and policies (and so of the software that drives large systems) are, essentially, not applied to reasoning about such large-scale systems and their interfaces. We intend to address this deficiency by researching the technical, organizational, and social challenges of specifying and verifying interfaces in system ecosystems. In so doing, we will drive the use of verification techniques and improve the reliability of large systems. Complex systems ecosystems and their interfaces are some of the most intricate and critical information ecosystems in existence today, and are highly dynamic and constantly evolving. We aim to understand how the interfaces between the components constituting these ecosystems work, and to verify them against their intended use.
This research will be undertaken through a collection of themes covering systems topics where interfaces are crucially important, including critical code, communications and security protocols, distributed systems and networks, security policies, business ecosystems, and even extending to the physical architecture of buildings and networks. These themes are representative of the problem of specifying and reasoning about the correctness of interfaces at different levels of abstraction and criticality. Interfaces at each level of abstraction and criticality can be studied independently, but we believe that it will be possible to develop a quite general, uniform account of specifying and reasoning about them. It is unlikely that any one level of abstraction will suggest all of the answers: we expect that the work of the themes will evolve and interact in complex ways.
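As a toy illustration of what specifying and verifying an interface can mean at the code level, the sketch below attaches a runtime-checked contract (a precondition and a postcondition) to a component boundary. The `contract` decorator and the `admit` component are hypothetical, invented for this example:

```python
# Runtime interface contract: a hypothetical sketch, not a real project tool.
def contract(pre, post):
    """Wrap a function so its interface specification is checked at call time."""
    def wrap(f):
        def checked(*args):
            assert pre(*args), f"precondition of {f.__name__} violated"
            result = f(*args)
            assert post(result), f"postcondition of {f.__name__} violated"
            return result
        return checked
    return wrap

# Interface spec for an invented rate-limiter component: it must be given a
# non-negative request count and must return a non-negative admitted count.
@contract(pre=lambda n: n >= 0, post=lambda r: r >= 0)
def admit(requests: int) -> int:
    return min(requests, 100)   # admit at most 100 requests

print(admit(250))  # prints 100: within the specification
```

Calling `admit(-1)` raises an `AssertionError` at the interface, before any component logic runs; the point is that the specification lives at the boundary between systems, which is exactly where the project locates the verification problem.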
Project (2020 - 2021)
Partners: University of Birmingham, University of Edinburgh, The Alan Turing Institute, Birmingham Open Media (BOM), KCCA, Uber Kenya Limited, Amazon Web Services (UK), PA Consulting, African Population and Health Research Center
Funder: UK Research and Innovation | Project Code: EP/T030100/1 | Funder Contribution: 132,245 GBP

Air quality in most East African cities has declined dramatically over recent decades, and air pollution is now the leading environmental risk factor for human health. There is a critical lack of data to assess air quality in East Africa, and therefore to quantify its effect upon human health. Air quality networks in East Africa are still in their early days, with long-term, systematic measurement of air pollutants available at fewer than a handful of sites. Large spatial and temporal gaps in the data exist. From a historical perspective, very little is known of air pollution concentrations before 2010. The lack of historical data makes it extremely difficult to assess the deleterious effects of air pollution upon human health. It also poses challenges for assessing the efficacy of air quality interventions. Hence, informed decisions about infrastructure that take air quality into account are difficult to make. This proposal forms a new network to co-create strategy and protocols to bring together data that relate to air pollution in East African urban areas. It targets the capitals of Ethiopia (Addis Ababa), Kenya (Nairobi) and Uganda (Kampala).
New data science techniques will be developed to synthesize disparate data streams into spatially and temporally coherent outputs, which can be used to understand historic, contemporary and future air quality. The proposal will provide a road map to harness the power of new data analytics and big data technologies. To design this road map, three high-intensity workshops and interspersed virtual meetings will be undertaken in Stage 1. Each workshop will tackle a key knowledge gap or development challenge:
- Workshop 1: Parameterizing the data problem in East Africa for assessing the causes and effects of air pollution (Kampala)
- Workshop 2: Big data approaches to improve East African air quality prediction (Addis Ababa)
- Workshop 3: Creating greater capacity and capability in analytic air quality science (Nairobi)
The Stage 1 research outcomes will enable the development of tailored mitigation strategies for improving air quality. The methodologies developed in the proposal will be translatable and scalable throughout urban East Africa. Hence, the proposal will help realise multiple sustainable development goals (SDGs), including SDG3 (good health and well-being), SDG11 (sustainable cities and communities), and SDG17 (partnerships for the goals). To ensure the project reaches its maximum potential, it includes an extensive array of research translation activities: workshops with academic and non-academic stakeholders; a professionally designed website, which will hold both academic and non-academic outputs, including open-source academic papers and presentations; and briefing notes directed at a range of external stakeholders, from top-down governance bodies to bottom-up grassroots organizations. Project partners from business, academia, governance and public engagement with science are involved and will attend the workshops.
They are Uber, Amazon Web Services, PA Consulting, Kampala Capital City Authority, African Population Health Research Centre, Birmingham Open Media, GCRF Multi-Hazard Urban Disaster Risk Transitions Hub, and the Alan Turing Institute. They offer an additional £102,951 of in-kind contributions to the project. Their incorporation widens the available skillsets and will help deliver long-term impact in the East African region.
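One small instance of the data-synthesis step described above, turning irregular, disparate sensor streams into a temporally coherent output, is aligning readings from several stations onto a common hourly grid. The stations, timestamps and PM2.5 values below are invented for illustration:

```python
# Fuse two irregularly sampled sensor streams onto a common hourly grid.
# Station names and readings are hypothetical.

def interpolate(times, values, t):
    """Linear interpolation of a (time, value) series at time t (hours)."""
    for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return None  # t lies outside the observed window

# Irregularly sampled PM2.5 (ug/m3) from two invented Nairobi stations.
station_a = ([0.0, 2.5, 6.0], [40.0, 55.0, 48.0])
station_b = ([1.0, 4.0, 5.5], [62.0, 50.0, 58.0])

grid = [2.0, 3.0, 4.0, 5.0]  # common hourly grid
fused = []
for t in grid:
    obs = [interpolate(ts, vs, t) for ts, vs in (station_a, station_b)]
    obs = [v for v in obs if v is not None]
    fused.append(sum(obs) / len(obs))  # simple mean across stations
```

Real syntheses would weight stations by distance and quality and propagate uncertainty, but the shape of the problem, from ragged streams to a coherent grid, is the same.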
Project (2019 - 2027)
Partners: KCL, Amazon Web Services (UK), TNA, UNSW, GreenShoot Labs, hiveonline, Norton Rose LLP, Five AI Limited, Ocado Limited, The Alan Turing Institute, Mayor's Office for Policing and Crime, Ernst and Young, Ericsson, Royal Mail, IBM, BT Group (United Kingdom), BL, Vodafone, Bruno Kessler Foundation (FBK), Thales Group, Samsung Electronics Research Institute, Codeplay Software, ContactEngine, Association of Commonwealth Universities
Funder: UK Research and Innovation | Project Code: EP/S023356/1 | Funder Contribution: 6,898,910 GBP

The UK is world leading in Artificial Intelligence (AI), and a 2017 government report estimated that AI technologies could add £630 billion to the UK economy by 2035. However, we have seen increasing concern about the potential dangers of AI, and global recognition of the need for safe and trusted AI systems. Indeed, the latest UK Industrial Strategy recognises that there is a shortage of highly skilled individuals in the workforce who can harness AI technologies and realise the full potential of AI. The UKRI Centre for Doctoral Training (CDT) on Safe and Trusted AI will train a new generation of scientists and engineers who are experts in model-based AI approaches and their use in developing AI systems that are safe (meaning we can provide guarantees about their behaviour) and trusted (meaning we can have confidence in the decisions they make and their reasons for making them). Techniques in AI can be broadly divided into data-driven and model-based. While data-driven techniques (such as machine learning) use data to learn patterns or behaviours, or to make predictions, model-based approaches use explicit models to represent and reason about knowledge.
Model-based AI is thus particularly well-suited to ensuring safety and trust: models provide a shared vocabulary on which to base understanding; models can be verified, and solutions based on models can be guaranteed to be correct and safe; models can be used to enhance decision-making transparency by providing human-understandable explanations; and models allow user collaboration and interaction with AI systems. In sophisticated applications, the outputs of data-driven AI may be input to further model-driven reasoning; for example, a self-driving car might use data-driven techniques to identify a busy roundabout, and then use an explicit model of how people behave on the road to reason about the actions it should take. While much current attention is focussed on recent advancements in data-driven AI, such as those from deep learning, it is crucial that we also develop the UK skills base in complementary model-based approaches to AI, which are needed for the development of safe and trusted AI systems. The scientists and engineers trained by the CDT will be experts in a range of model-based AI techniques, the synergies between them, their use in ensuring safe and trusted AI, and their integration with data-driven approaches. Importantly, because AI is increasingly pervasive in all spheres of human activity, and may increasingly be tied to regulation and legislation, the next generation of AI researchers must not only be experts on core AI technologies, but must also be able to consider the wider implications of AI for society, its impact on industry, and the relevance of safe and trusted AI to legislation and regulation. Core technical training will be complemented with the skills and knowledge needed to appreciate the implications of AI (drawing on Social Science, Law and Philosophy) and with exposure to diverse application domains (such as Telecommunications and Security).
Students will be trained in responsible research and innovation methods, and will engage with the public throughout their training, to help ensure the societal relevance of their research. Entrepreneurship training will help them to maximise the impact of their work, and the CDT will work with a range of industrial partners, from both the private and public sectors, to ensure relevance to industry and application domains and to expose our students to multiple perspectives, techniques, applications and challenges. This CDT is ideally equipped to deliver this vision. King's and Imperial are each renowned for their expertise in model-driven AI and provide one of the largest groupings of model-based AI researchers in the UK, including some of the world's leaders in this area. This is complemented with expertise in related technical areas and in the applications and implications of AI.
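The self-driving example in the description above can be sketched as a two-stage pipeline: a data-driven perception stage feeding an explicit, inspectable behaviour model. Both the classifier stand-in and the rule table are invented for illustration:

```python
# Data-driven perception + model-based decision, as a toy pipeline.
# The classifier stub and the behaviour rules are hypothetical.

def perceive(frame: dict) -> str:
    """Stand-in for a learned classifier: returns a scene label."""
    return "busy_roundabout" if frame.get("vehicles", 0) > 3 else "clear_road"

# Explicit behaviour model: each rule can be read, verified and explained,
# which is what makes the model-based layer amenable to safety guarantees.
BEHAVIOUR_MODEL = {
    "busy_roundabout": "yield_to_traffic_on_roundabout",
    "clear_road": "proceed_at_speed_limit",
}

def decide(frame: dict) -> str:
    scene = perceive(frame)        # data-driven stage
    return BEHAVIOUR_MODEL[scene]  # model-based stage

print(decide({"vehicles": 7}))  # prints yield_to_traffic_on_roundabout
```

The decision table here plays the role of the "explicit model of how people behave on the road": unlike the opaque perception stage, it can be audited rule by rule.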
Project (2018 - 2024)
Partners: University of Cambridge, Imperial College London, Amazon Web Services (UK), Inria, ARM (United Kingdom), University of Toronto, IBM, Aarhus University, KAIST, Max Planck Institutes, GCHQ, Google, Facebook UK
Funder: UK Research and Innovation | Project Code: EP/R034567/1 | Funder Contribution: 1,579,790 GBP

Modern society faces a fundamental problem: the reliability of the complex, evolving software systems on which it critically depends cannot be guaranteed by established, non-mathematical techniques such as informal prose specification and ad-hoc testing. Modern companies move fast, leaving little time for code analysis and testing; concurrent and distributed programs cannot be adequately assessed via traditional testing methods; users of mobile applications neglect to apply software fixes; and malicious users increasingly exploit programming errors, causing major security disruptions. Trustworthy, reliable software is becoming harder to achieve, whilst new business and cyber-security challenges make it of escalating importance. Developers cope with complexity using abstraction: the breaking up of systems into components and layers connected via software interfaces. These interfaces are described using specifications: for example, documentation in English; test suites with varying degrees of rigour; static typing embedded in programming languages; and formal specifications written in various logics.
In computer science, despite widespread agreement on the importance of abstraction, specifications are often seen as an afterthought and a hindrance to software development, and are rarely justified. Formal specification as part of the industrial software design process is in its infancy. My over-arching research vision is to bring scientific, mathematical method to the specification and verification of modern software systems. A fundamental unifying theme of my current work is my unique emphasis on what it means for a formal specification to be appropriate for the task in hand, properly evaluated, and useful for real-world applications. Specifications should be validated, with proper evidence that they describe what they should. This validation can come in many forms, from formal verification through systematic testing to precise argumentation that a formal specification accurately captures an English standard. Specifications should be useful, identifying compositional building blocks that are intuitive and helpful to clients both now and in the future. Specifications should be just right, providing a clear logical boundary between implementations and client programs. VeTSpec has four related objectives, exploring different strengths of program specification, real-world program library specification and mechanised language specification, in each case determining what it means for the specification to be appropriate, properly evaluated and useful for real-world applications.
Objective A: Tractable reasoning about concurrency and distribution is a long-standing, difficult problem. I will develop the fundamental theory for the verified specification of concurrent programs and distributed systems, focussing on safety properties for programs based on primitive atomic commands, safety properties for programs based on the more complex atomic transactions used in software transactional memory and distributed databases, and progress properties.
Objective B: JavaScript is the most widespread dynamic language, used by 94.8% of websites. Its dynamic nature and complex semantics make it a difficult target for verified specification. I will develop logic-based analysis tools for the specification, verification and testing of JavaScript programs, intertwining theoretical results with properly engineered tool development.
Objective C: The mechanised specification of real-world programming languages is well established. Such specifications are difficult to maintain, and their use is not fully explored. I will provide a maintainable mechanised specification of JavaScript, together with systematic test generation from this specification.
Objective D: I will explore the fundamental, conceptual questions associated with the ambitious VeTSpec goal of bringing scientific, mathematical method to the specification of modern software systems.
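In the spirit of "validated specification" as described above, the sketch below states a declarative spec for a sorting routine and checks an implementation against it by systematic random testing. This is a pure-Python illustration, not the project's tooling:

```python
# A declarative specification, validated by systematic random testing.
import random

def spec_sorted(inp, out):
    """Spec: output is ordered and is a permutation of the input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

def my_sort(xs):
    """Implementation under test (here just the built-in sort)."""
    return sorted(xs)

random.seed(0)
for _ in range(100):  # systematic testing against the spec
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
    assert spec_sorted(xs, my_sort(xs))
```

Note that the spec is independent of any implementation: it provides exactly the "clear logical boundary" between implementation and client that the abstract argues a good specification should give, and the random-testing loop is one of the validation forms (systematic testing) the abstract lists.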