
Five AI Limited

7 Projects, page 1 of 2
  • Funder: UK Research and Innovation
    Project Code: EP/W002981/1
    Funder Contribution: 3,087,060 GBP

    The world is currently experiencing an unprecedented boom in machine learning (ML) and artificial intelligence (AI). The determining reason behind this rapidly growing adoption of ML/AI is the embrace of deep neural networks (DNNs). Neural networks had been around for decades, but the advent of faster processing in the form of GPUs, together with storage for huge amounts of "big data", allowed for the training of deeper networks, which showed startling performance increases on a variety of tasks across a variety of disciplines. However, the limitations of deep learning are becoming increasingly evident. Despite deep neural networks performing exceptionally well on a range of metrics, they have also been shown to be vulnerable to adversarial examples. This was first demonstrated in the field of computer vision: certain images are classified incorrectly (often with high confidence) despite there being a minimal perceptual difference from correctly classified inputs (a minimal code sketch follows this abstract). Adversarial examples have since been found in many other applications of deep learning, such as speech understanding and models of code. The ease with which these adversarial examples can be found raises doubts about deep neural networks being used in safety-critical applications such as autonomous vehicles or medical diagnosis, since a network could inexplicably classify a natural input incorrectly even though it is almost identical to examples it has classified correctly before. Moreover, it opens the possibility of malicious agents attacking systems that use neural networks: strikingly, Tencent Keen Security Lab recently demonstrated that the neural network underlying Tesla Autopilot can be fooled by an adversarially crafted marker on the ground into swerving into the opposite lane.

    The Fellowship will create a new Centre of Excellence at Oxford aiming to make deep learning reliable, robust and deployable, creating a new capability within the UK's AI/ML research landscape. The solution will involve developing fundamental algorithms to make training more robust, together with algorithms that give accurate uncertainty estimates for a deep network's predictions. However, it is important that the solution also takes efficiency into account. As systems become deployed in the real world, in some cases they will be exposed to an ever-changing data stream. For instance, as of May 2019, 500 hours of video data are uploaded to YouTube every minute [2], and 2.5 quintillion bytes of data are produced by humans every day; most astonishingly, 90 percent of the world's data has been created in the last two years alone. The challenge now is to train networks with (almost) a trillion parameters on the quintillions of bytes of data being produced continuously. We might not want to store all this data, and even if we could store it, it might not be computationally feasible to train on it in a single pass. Hence this proposal would be incomplete unless it also proposed research on uncertainty estimation and robustness in the context of both continual learning and sparsification.

    The overarching objective for this Fellowship is to retain Prof Phil Torr within the UK and within academia. His research area of Computer Vision, and in particular deep learning, is of increasing interest to companies as well as overseas academic institutions. The prestige and long-term funding of a Turing AI World Leader Researcher Fellowship would not only secure Prof Torr's continued commitment to UK academic research, but would also enable Oxford to build a Centre of Excellence for Robust and Trustworthy Deep Learning around him, and enable him to take a greater leadership role within Oxford, the UK and internationally.
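    To make the vulnerability concrete, the sketch below generates an adversarial example with the fast gradient sign method (FGSM), one standard way such inputs are found. It is only a minimal sketch assuming a generic PyTorch image classifier whose inputs lie in [0, 1]; the model, inputs and epsilon value are placeholders and not taken from the project.

        # Minimal FGSM sketch (illustrative only; assumes a PyTorch classifier
        # taking a batched image tensor scaled to [0, 1] and integer class labels).
        import torch
        import torch.nn.functional as F

        def fgsm_example(model, image, label, epsilon=0.03):
            # Perturb a correctly classified image so that it is misclassified,
            # while keeping the perturbation within epsilon in the L-infinity norm.
            image = image.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(image), label)
            loss.backward()
            # Step in the direction that maximally increases the loss.
            adversarial = image + epsilon * image.grad.sign()
            return adversarial.clamp(0.0, 1.0).detach()

    Because epsilon is small, the perturbed image is nearly indistinguishable from the original, which is exactly the "minimal perceptual difference" described above.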

  • Funder: UK Research and Innovation
    Project Code: EP/S005056/1
    Funder Contribution: 1,170,740 GBP

    As automated vehicles (AVs) are developed for driving in increasingly complex and diverse traffic environments, it becomes increasingly difficult to test comprehensively that the AVs always behave in ways that are safe and acceptable to human road users. There is wide consensus that a key part of the solution to this problem will be the use of virtual traffic simulations, in which simulated versions of an AV under development can meet simulated surrounding traffic. Such simulations could in theory cover vast ranges of possible scenarios, including both routine and more safety-critical interactions. However, the current understanding and models of human road user behaviour are not good enough to permit realistic simulations of traffic interactions at the level of detail needed for such testing to be meaningful. This fellowship aims to develop the missing simulation models of human behaviour, to ensure that development of the future automated transport system can be carried out in a responsible, human-centric way.

    Behaviour of car drivers and pedestrians will be observed both in real traffic and in controlled studies in driving and pedestrian simulators, in some cases complementing behavioural data with neurophysiological (EEG) data, since several candidate component models make specific predictions about brain activity. The fellowship will then build on existing models of driver and pedestrian behaviour in routine and safety-critical situations, and extend these with state-of-the-art neuroscientific models of specific phenomena such as perceptual judgments, beliefs about others' intentions, and communication, to create an integrated cognitive modelling framework allowing simulations of traffic interactions across a variety of targeted scenarios (a toy illustration of such a component model follows this abstract). Such cognitive interaction models, based on well-understood underlying mechanisms, will be one main contribution of the fellowship. Some researchers have suggested another type of model altogether, obtained directly by applying machine learning (ML) methods to large data sets of human road user behaviour, i.e. without an ambition to correctly model underlying mechanisms. This fellowship hypothesises that reliable virtual testing of AVs will require both types of modelling approach, and methods for combining them will be researched. Not least because of their "black box" nature, ML models need to be investigated and benchmarked, for example to determine their ability to generalise to rare, safety-critical events.

    The multi-disciplinary research, building on and extending the fellow's past experience in vehicle engineering, cognitive neuroscience, and machine learning, will be carried out at the Institute for Transport Studies, University of Leeds, with support also from the Schools of Psychology and Computing. The fellowship has direct support from industry, both in advisory capacities and as project partners actively sharing data and methods as well as providing first proof-of-concept uptake of the developed models into industrial environments for simulated testing.
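    As a flavour of the kind of mechanistic component model referred to above, the toy sketch below implements a simple evidence-accumulation (drift-diffusion) process for a perceptual judgment, e.g. a pedestrian deciding whether a gap in traffic is large enough to cross. The parameters and scenario are invented for illustration and are not taken from the fellowship's models.

        # Toy drift-diffusion sketch of a crossing decision (illustrative only).
        import random

        def crossing_decision(drift=0.15, noise=1.0, threshold=3.0, dt=0.05, max_t=10.0):
            # Accumulate noisy evidence for "the gap is big enough" until one of
            # two decision boundaries is reached, returning (choice, time).
            evidence, t = 0.0, 0.0
            while t < max_t:
                evidence += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
                t += dt
                if evidence >= threshold:
                    return "cross", t
                if evidence <= -threshold:
                    return "wait", t
            return "wait", t  # no decision within the time limit

        print(crossing_decision())

    Models of this family make quantitative predictions about both choices and decision times, which is why they can be confronted with the behavioural and EEG data mentioned above.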

  • Funder: UK Research and Innovation
    Project Code: EP/S023356/1
    Funder Contribution: 6,898,910 GBP

    The UK is world leading in Artificial Intelligence (AI), and a 2017 government report estimated that AI technologies could add £630 billion to the UK economy by 2035. However, there is increasing concern about the potential dangers of AI, and global recognition of the need for safe and trusted AI systems. Indeed, the latest UK Industrial Strategy recognises that there is a shortage of highly skilled individuals in the workforce who can harness AI technologies and realise the full potential of AI. The UKRI Centre for Doctoral Training (CDT) on Safe and Trusted AI will train a new generation of scientists and engineers who are experts in model-based AI approaches and their use in developing AI systems that are safe (meaning we can provide guarantees about their behaviour) and trusted (meaning we can have confidence in the decisions they make and their reasons for making them).

    Techniques in AI can be broadly divided into data-driven and model-based. While data-driven techniques (such as machine learning) use data to learn patterns or behaviours, or to make predictions, model-based approaches use explicit models to represent and reason about knowledge. Model-based AI is thus particularly well suited to ensuring safety and trust: models provide a shared vocabulary on which to base understanding; models can be verified, and solutions based on models can be guaranteed to be correct and safe; models can be used to enhance decision-making transparency by providing human-understandable explanations; and models allow user collaboration and interaction with AI systems. In sophisticated applications, the outputs of data-driven AI may be input to further model-based reasoning; for example, a self-driving car might use data-driven techniques to identify a busy roundabout, and then use an explicit model of how people behave on the road to reason about the actions it should take (a minimal sketch of this pattern follows this abstract). While much current attention is focused on recent advances in data-driven AI, such as those from deep learning, it is crucial that we also develop the UK skills base in complementary model-based approaches, which are needed for the development of safe and trusted AI systems.

    The scientists and engineers trained by the CDT will be experts in a range of model-based AI techniques, the synergies between them, their use in ensuring safe and trusted AI, and their integration with data-driven approaches. Importantly, because AI is increasingly pervasive in all spheres of human activity, and may increasingly be tied to regulation and legislation, the next generation of AI researchers must not only be experts on core AI technologies, but must also be able to consider the wider implications of AI for society, its impact on industry, and the relevance of safe and trusted AI to legislation and regulation. Core technical training will therefore be complemented with the skills and knowledge needed to appreciate the implications of AI (including Social Science, Law and Philosophy) and with exposure to diverse application domains (such as Telecommunications and Security). Students will be trained in responsible research and innovation methods, and will engage with the public throughout their training, to help ensure the societal relevance of their research. Entrepreneurship training will help them maximise the impact of their work, and the CDT will work with a range of industrial partners, from both the private and public sectors, to ensure relevance to industry and application domains and to expose our students to multiple perspectives, techniques, applications and challenges.

    This CDT is ideally equipped to deliver this vision. King's and Imperial are each renowned for their expertise in model-based AI and together provide one of the largest groupings of model-based AI researchers in the UK, including some of the world's leaders in this area. This is complemented with expertise in related technical areas and in the applications and implications of AI.
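    A minimal sketch of the roundabout pattern mentioned above is given below: the output of a (hypothetical) learned detector is handed to explicit, hand-written rules that stand in for a model-based reasoner. The scenario, field names, thresholds and rules are invented for illustration and are not part of the CDT's work.

        # Illustrative only: data-driven perception feeding model-based reasoning.
        from dataclasses import dataclass

        @dataclass
        class Perception:
            # Hypothetical output of a learned detector for a roundabout scene.
            roundabout_ahead: bool
            vehicles_on_roundabout: int

        def decide_action(p: Perception) -> str:
            # Explicit rules: because they are written down, they can be verified
            # and used to explain the decision, which is the property the CDT
            # highlights for model-based AI.
            if not p.roundabout_ahead:
                return "continue"
            if p.vehicles_on_roundabout > 0:
                return "yield"  # give way to traffic already on the roundabout
            return "enter"

        print(decide_action(Perception(roundabout_ahead=True, vehicles_on_roundabout=2)))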

  • Funder: UK Research and Innovation
    Project Code: EP/T026952/1
    Funder Contribution: 807,165 GBP

    AI applications have become pervasive: from mobile phones and home appliances to stock markets, autonomous cars, robots and drones. As AI takes over a wider range of tasks, we are gradually approaching the point at which security laws, or policies, ultimately akin to Isaac Asimov's "3 laws of robotics", will need to be established for all working AI systems. A homonym of Asimov's first name, the project AISEC ("Artificial Intelligence Secure and Explainable by Construction") aims to build a sustainable, general-purpose, multidomain methodology and development environment for policy-to-property, secure and explainable by construction development of complex AI systems. We will create and deploy a novel framework for documenting, implementing and developing policies for complex deep learning systems, using types as a unifying language to embed security and safety contracts directly into the programs that implement AI. The project will produce a development tool, AISEC, with infrastructure (user interface, verifier, compiler) to cater for different domain experts: from lawyers working with security experts, to verification experts and system engineers designing complex AI systems. AISEC will be built, tested and used in collaboration with industrial partners in two key AI application areas: autonomous vehicles and natural language interfaces. AISEC will catalyse a step change from pervasive use of deep learning in AI to pervasive use of methods for deep understanding of intended policies and latent properties of complex AI systems, and deep verification of such systems.
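    AISEC itself proposes types as the language for such contracts; the loose sketch below only gestures at the idea using a Python run-time check, and every name, bound and function in it is hypothetical rather than part of the project.

        # Loose illustration of a safety contract attached to a program component.
        from typing import Callable, Sequence

        def bounded_output(low: float, high: float):
            # Decorator expressing the contract "the output stays within [low, high]".
            def wrap(fn: Callable[[Sequence[float]], float]) -> Callable[[Sequence[float]], float]:
                def checked(x: Sequence[float]) -> float:
                    y = fn(x)
                    assert low <= y <= high, f"contract violated: {y} not in [{low}, {high}]"
                    return y
                return checked
            return wrap

        @bounded_output(-1.0, 1.0)
        def steering_command(features: Sequence[float]) -> float:
            # Stand-in for a learned component producing a steering value.
            return max(-1.0, min(1.0, sum(features) / (len(features) or 1)))

        print(steering_command([0.2, -0.1, 0.4]))

    In a dependently or refinement-typed language the same contract could be checked at compile time rather than at run time, which is closer to the "secure and explainable by construction" development the project describes.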

  • Funder: UK Research and Innovation
    Project Code: EP/S024050/1
    Funder Contribution: 5,532,020 GBP

    A growing consensus identifies autonomous systems as core to future UK prosperity, but only if the present skills shortage is addressed. The AIMS CDT was founded in 2014 to train future leaders in autonomous systems, and has established a strong track record in attracting excellent applicants, building cohorts of research students and taking Oxford's world-leading research on autonomy through to industrial impact. We seek the renewal of the CDT to cement its successes in sustainable urban development (including transport and finance), and to extend to applications in extreme and challenging environments and smart health, while strengthening training on the ethical and societal impacts of autonomy.

    Need for Training: Autonomous systems have been the subject of a recent report from the Royal Society, and an independent review from Professor Dame Wendy Hall and Jérôme Pesenti. Both reports emphatically underline the economic importance of AI to the UK, estimating that "AI could add an additional USD $814 billion (£630bn) to the UK economy by 2035". Both reports also highlight the urgency of training many more skilled experts in autonomy: the summary of the Royal Society's report states that "further support is needed to build advanced skills in machine learning. There is already high demand for people with advanced skills, and additional resources to increase this talent pool are critically needed." In contrast with pure Artificial Intelligence CDTs, AIMS places emphasis on the challenges of building end-to-end autonomous systems: such systems require not just Machine Learning, but also the disciplines of Robotics and Vision, Cyber-Physical Systems, Control and Verification. Through this cross-disciplinary training, the AIMS CDT is in a unique position to deliver positive economic and societal impacts for autonomous systems by 1) growing its existing strengths in sustainable urban development, including autonomous vehicles and quantitative finance, and 2) expanding its scope to the two new application pillars of extreme and challenging environments and smart health. AIMS itself provides evidence for the strong and increasing demand for training in these areas, with an increase in application numbers from 49 to 190 over the last five years. The increase in applications is mirrored by the increase in interest from industrial partners, which has more than doubled since 2014. Our partners span all application areas of AIMS, and their contributions, which include training, internships and co-supervision opportunities, will immerse our students in a variety of research challenges linked with real-world problems.

    Training programme: AIMS provides, and will continue to provide, broad cohort training in autonomous intelligent systems: theoretical foundations, systems research, industry-initiated projects and transferable skills. It covers a comprehensive range of topics centred around a hub of courses in Machine Learning; subsequent spokes provide training in Robotics and Vision, Control and Verification, and Cyber-Physical Systems. The cohort-focused training programme will equip our students with core technical skills via weekly courses, research skills via mini and long projects, and transferable skills, together with opportunities for public engagement and training on entrepreneurship and IP. The growing societal impacts of autonomous systems demand that future AIMS students receive explicit training in responsible and ethical research and innovation, which will be provided by ORBIT. Additionally, courses on AI ethics, safety, governance and economic impacts will be delivered by Oxford's world-leading Future of Humanity Institute, Oxford Uehiro Centre for Practical Ethics and Oxford Martin Programme on Technology and Employment.
