
Samsung
4 Projects
Project 2025 - 2028
Partners: Analog Devices Inc (UK), Nokia Corporation, Ericsson Silicon Valley, Virgin Media, University of Glasgow, Samsung
Funder: UK Research and Innovation | Project Code: EP/Z533609/1 | Funder Contribution: 960,575 GBP

Re-configurability of radio frequency (RF) and millimeter-wave (mmWave) systems is expected to become the bedrock of 6G wireless communications, yet the enabling technologies that can support such reconfigurability are still emerging. The project aims to develop active intelligent reflecting surfaces (IRSs) with integrated amplifying capability for 6G wireless communication. IRSs can redirect incoming signals along specific, desirable paths, mitigating blockages and interference in complex wireless environments. However, the bulk materials enabling such reconfigurability are technologically immature: traditional materials suffer high insertion loss and a low tuning range, particularly at mmWave frequencies and beyond. In this project, the research team aims to develop an IRS technology with no or very low loss and latency. This will be achieved by combining the attractive features of resonant tunnelling diodes (RTDs), such as their low-power operation and ability to act as reflection signal amplifiers, with transition metal oxides (TMOs), which can act as DC-controllable, ultra-fast switches and phase shifters, to yield a meta-atom. The meta-atom formed in this way will be able both to alter the phase and amplitude of the incident signal and to compensate, through RTD amplification, for the loss incurred as the signal traverses the IRS.

The project has four main objectives. The first objective (O1) is to develop TMO-based switches to control the amplitude of signals incident on the IRS. The team will develop TMO-based switches using either VO2 or TiO2, covering material design, growth, and characterization of binary and mixed/doped metal oxides. They will employ both thermal and plasma-assisted atomic layer deposition to engineer materials with controlled stoichiometry and defect levels. The second objective (O2) is to develop TMO-based phase shifters to control the phase of the signal incident on the IRS. The team will investigate phase shifting of a propagating wave interlaced with sub-skin-depth metal TMO/insulator structures. They will examine the fundamental limits of the 'single-bit' insulator/TMO/insulator stack and its performance as a function of the TMO type, its switching mechanism and thickness, the characteristics of the dielectrics and biasing lines, and the frequency of operation. The third objective (O3) is to develop RTD reflection amplifiers to compensate for losses in the IRS circuitry and to offset the high path loss at terahertz (THz) frequencies. The team will use the RTD's negative differential resistance to amplify the input signal before it is reflected back. Microwave RTD low-noise reflection amplifiers have already been demonstrated, featuring very low power consumption and 10 dB gain at 5.7 GHz. The feasibility of such amplifiers at K- and Ka-band frequencies, with DC power consumption at the 100 µW level and a high gain of 32 dB, has also been demonstrated recently.
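The reflection gain described under O3 arises because a load with negative differential resistance gives a reflection coefficient whose magnitude exceeds unity. The minimal sketch below illustrates this mechanism for a hypothetical one-port negative-resistance load on a 50 Ω line; the resistance, parasitic reactance and frequency are illustrative assumptions, not measured RTD parameters from the project.

```python
# A minimal sketch (not the project's design): reflection gain of a one-port
# load with negative differential resistance, the mechanism behind RTD
# reflection amplifiers. All component values below are illustrative assumptions.
import numpy as np

Z0 = 50.0                       # system impedance, ohms (assumed)
f = 5.7e9                       # frequency, Hz (band quoted in the abstract)

R_ndr = -40.0                   # negative differential resistance, ohms (assumed)
L_par = 0.1e-9                  # series parasitic inductance, H (assumed)
Z_load = R_ndr + 1j * 2 * np.pi * f * L_par

gamma = (Z_load - Z0) / (Z_load + Z0)   # reflection coefficient at the load
gain_db = 20 * np.log10(abs(gamma))     # |gamma| > 1 means the reflected wave is amplified

print(f"|Gamma| = {abs(gamma):.2f} -> reflection gain = {gain_db:.1f} dB")
```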
The project's ultimate goal (O4) is to combine the results of objectives 1, 2, and 3 to create an IRS capable of controlling the amplitude and phase of incident signals with no or very low loss and low latency. The project's outcomes will be significant for the development of 6G wireless communication technology. The research team will generate new knowledge of the underlying processes and physics for engineering TMOs and for integrating them with RF and mmWave/THz systems. The project will open new opportunities for introducing IRSs into communication systems for 6G and beyond.
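To make the phase-control goal concrete, the sketch below computes the per-element phase profile a reflecting surface would need in order to redirect an incident beam towards a chosen direction, using the standard reflectarray phase-gradient relation. The frequency, element spacing, array size and angles are assumptions for illustration, not the project's specifications.

```python
# A minimal sketch of the standard reflectarray phase-gradient rule an IRS can use
# to redirect a beam from incidence angle theta_i to reflection angle theta_r.
# Frequency, spacing, array size and angles are illustrative assumptions.
import numpy as np

c = 3e8
f = 28e9                          # illustrative mmWave frequency, Hz (assumed)
lam = c / f
d = lam / 2                       # element spacing (assumed half-wavelength)
N = 16                            # elements along one axis (assumed)

theta_i = np.deg2rad(20.0)        # incidence angle (assumed)
theta_r = np.deg2rad(-35.0)       # desired reflection angle (assumed)

k = 2 * np.pi / lam
n = np.arange(N)
# Phase each meta-atom must add so the reflected wavefront points at theta_r.
phase = np.mod(-k * n * d * (np.sin(theta_r) - np.sin(theta_i)), 2 * np.pi)

print(np.rad2deg(phase).round(1))  # required phase shift per element, degrees
```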
Project 2022 - 2027
Partners: Imperial College London, Cisco Systems UK, Samsung, ARM Ltd, ARM (United Kingdom), Telefonica I+D (Spain), Information Commissioners Office, Samsung (South Korea), Cisco Systems (United Kingdom), Telefonica Research and Development
Funder: UK Research and Innovation | Project Code: EP/W005271/1 | Funder Contribution: 1,283,040 GBP

Vision: In this fellowship, I aim to address a major challenge in the adoption of user-centred privacy-enhancing technologies: can we leverage novel architectures to provide private, trusted, personalised, and dynamically configurable models on consumer devices to cater for heterogeneous environments and user requirements? Importantly, such properties must provide assurances of data integrity and model authenticity/trustworthiness, while respecting the privacy of the individuals taking part in training and improving those models. Innovation and adoption in this space require collaboration between device manufacturers, platform providers, network operators, regulators, and users.

The objectives of this fellowship will take us far beyond the status-quo, one-size-fits-all solutions, providing a framework for personalised, trustworthy, and confidential edge computing, with the ability to respect dynamic policies, in particular when dealing with sensitive models and data from consumer Internet of Things (IoT) devices. I aim to address these challenges by designing and evaluating an ecosystem in which analytics from, and interaction with, consumer IoT devices can happen with trust in the model and its authenticity, while enabling auditing and personalisation, hence pushing today's boundaries on all-or-nothing privacy and enabling new economic models. This approach requires designing for capabilities beyond the current trusted memory and processing limitations of the devices, and a cooperative dialogue and ecosystem involving service providers, ISPs, regulators, device manufacturers, and end users.

By designing our framework around the latest architectural and security features in edge devices, before they become commercially available, we provision for Model Privacy and a User-Centred IoT ecosystem, where service providers can trust the authenticity, attestability, and trustworthiness of the valuable models running on user devices, without users having to reveal sensitive personal information to cloud-based centralised systems. This approach will enable advanced and sensitive edge-based analytics to be performed without jeopardising individuals' privacy. Importantly, we aim to integrate mechanisms for data authenticity and attestation into the proposed framework, to enable trust in models and the data used by them. Such privacy-preserving technologies have the capacity to enable new forms of sensitive analytics without sharing raw data, thereby providing legal balancing capabilities that might enable certain sensitive (or currently unlawful) data analyses.
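One building block behind the model authenticity and attestation goals is simply being able to check, on the device, that a model artefact matches what its provider published. The sketch below shows that ingredient with a plain SHA-256 digest comparison; real attestation would additionally involve hardware roots of trust and signed measurements, and the file name and digest used here are hypothetical.

```python
# A minimal sketch of model-integrity checking, assuming a provider publishes a
# reference SHA-256 digest of the model artefact. Real attestation (as targeted
# by the fellowship) would also rely on hardware roots of trust and signed evidence.
import hashlib
import hmac

def model_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a model file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_authentic(path: str, expected_digest: str) -> bool:
    """True if the on-device artefact matches the provider's published digest."""
    return hmac.compare_digest(model_digest(path), expected_digest)

# Hypothetical usage: the file name and digest are placeholders, not real values.
# ok = is_authentic("keyword_spotter.tflite", "3b0c44298fc1c149afbf4c8996fb924...")
```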
Project 2020 - 2026
Partners: Nielson, Continental Teves AG & Co. oHG, Plexalis Ltd, University of Oxford, British Broadcasting Corporation (BBC), Samsung, Intelligent Ultrasound, Toshiba (Japan), Continental (Germany), Samsung (South Korea)
Funder: UK Research and Innovation | Project Code: EP/T028572/1 | Funder Contribution: 5,912,100 GBP

With the advent of deep learning and the availability of big data, it is now possible to train machine learning algorithms for a multitude of visual tasks, such as tagging personal image collections in the cloud, recognizing faces, and 3D shape scanning with phones. However, each of these tasks currently requires training a neural network on a very large image dataset specifically collected and labelled for that task. The resulting networks are good experts for the target task, but they only understand the 'closed world' experienced during training and can 'say' nothing useful about other content; nor can they be applied to other tasks without retraining, explain their decisions, or recognise their limitations. Furthermore, current visual algorithms are usually 'single modal': they 'close their ears' to the other modalities (audio, text) that may be readily available.

The core objective of the Programme is to develop the next generation of audio-visual algorithms without these limitations. We will carry out fundamental research to develop a Visual Transformer capable of visual analysis with the flexibility and interpretability of a human visual system, aided by the other 'senses' - audio and text. It will be able to continually learn from raw data streams without requiring the traditional 'strong supervision' of a new dataset for each new task, and to deliver and distill semantic and geometric information over a multitude of data types (for example, videos with audio, very large-scale image and video datasets, and medical images with text records). The Visual Transformer will be a key component of next-generation AI, able to address multiple downstream audio-visual tasks, significantly superseding the current limitations of computer vision systems and enabling new and far-reaching applications.

A second objective addresses transfer and translation. We seek impact in a variety of other academic disciplines and in industry, which today greatly under-utilise the power of the latest computer vision ideas. We will target these disciplines to enable them to leapfrog from today's practice, dominated by manual review and highly interactive frame-by-frame analysis, to a new era where automated visual analytics of very large datasets becomes the norm. In short, our goal is to ensure that the newly developed methods are used by industry and academic researchers in other areas, and turned into products for societal and economic benefit. To this end, open-source software, datasets, and demonstrators will be disseminated on the project website. The ubiquity of digital images and videos means that every UK citizen may potentially benefit from the Programme's research in different ways.
One example is smart audio-visual glasses that can pay attention to a person talking by using their lip movements to mask out other ambient sounds. A second is an app that can answer visual questions (or retrieve matches) for text queries over large-scale audio-visual collections, such as a person's entire personal video library. A third is AI-guided medical screening that can help a minimally trained healthcare professional perform medical scans.
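Learning from raw, paired audio-visual streams without labels is often approached with cross-modal contrastive objectives, where matching video and audio clips are pulled together and mismatched ones pushed apart. The numpy sketch below shows one generic, symmetric InfoNCE-style loss of that kind; it is an illustration of the general technique under assumed batch shapes, not the Programme's Visual Transformer or training method.

```python
# A minimal numpy sketch of a symmetric cross-modal contrastive (InfoNCE-style)
# loss over paired video/audio embeddings. Generic illustration only; it is not
# the Programme's Visual Transformer.
import numpy as np

def _logsumexp(x: np.ndarray) -> np.ndarray:
    """Row-wise log-sum-exp with the max trick for numerical stability."""
    m = x.max(axis=1, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=1, keepdims=True))

def audio_visual_nce(video_emb: np.ndarray, audio_emb: np.ndarray,
                     temperature: float = 0.07) -> float:
    """Contrastive loss: matching (video, audio) pairs sit on the diagonal."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    logits = v @ a.T / temperature                 # scaled cosine similarities

    idx = np.arange(len(logits))
    log_p_v2a = logits - _logsumexp(logits)        # video -> audio direction
    log_p_a2v = logits.T - _logsumexp(logits.T)    # audio -> video direction
    return float(-(log_p_v2a[idx, idx].mean() + log_p_a2v[idx, idx].mean()) / 2)

# Illustrative usage with random embeddings (batch of 8 clips, 128-d features).
rng = np.random.default_rng(0)
loss = audio_visual_nce(rng.normal(size=(8, 128)), rng.normal(size=(8, 128)))
print(f"contrastive loss: {loss:.3f}")
```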
Project 2024 - 2029
Partners: Mind Foundry Ltd, Siemens Digital Industries Software - TX, Cadence Design Systems Ltd, Park Systems UK Limited, Keysight Technologies (United States), BAE Systems (UK), JEOL (United Kingdom), THALES UK LIMITED, Ansys UK Ltd, Synopsys (Northern Europe Ltd.), Samsung, Thermo Fisher Scientific, Broadex Technologies UK Ltd, AMD (Advanced Micro Devices) UK, Arc Instruments, MathWorks (United Kingdom), ST Microelectronics Limited (UK), Leonardo, University of Edinburgh, STFC - LABORATORIES, PragmatIC (United Kingdom), Siemens (Germany), Embecosm (United Kingdom), Intel (United States), Tessolve, Cirrus Logic (UK)
Funder: UK Research and Innovation | Project Code: EP/Y029763/1 | Funder Contribution: 10,274,300 GBP

Artificial intelligence (AI) is undergoing an era of explosive growth. With increasingly capable AI agents such as chatGPT, AlphaFold, Gato and DALL-E capturing the public imagination, the potential impact of AI on modern society is becoming ever clearer for all to see. APRIL is a project that seeks to bring the benefits of AI to the electronics industry of the UK. Specifically, we aspire to develop AI tools that cut development times for everything from new, fundamental materials for electronic devices to complicated microchip designs and system architectures, leading to faster, cheaper, greener and more power-efficient electronics.

Imagine a future where extremely complex and intricate material structures, far more complex than what a human could design alone, are optimised by powerful algorithms (such as an AlphaFold for semiconductor materials). Or consider intelligent machines with domain-specialist knowledge (think of a Gato-like system trained on exactly the right milieu of skills) experimenting day and night with manufacturing techniques to build the perfect electronic components. Or what if we had algorithms trained to design circuits by interacting with an engineer in natural language (like a chatGPT with specialist knowledge)? Similar systems could take care of the most tedious parts of testing and verifying increasingly complex designs, such as mobile phone chipsets or aircraft avionics software, or of modelling and simulating electronics (both potentially achievable using semi-automated AI coders such as Google's "PaLM" model). This is precisely the cocktail of technologies that APRIL seeks to develop.

In this future, AI - with its ability to find relevant information, perform simple tasks when instructed to do so, and operate at incredible speed - would work under the supervision of experienced engineers, assisting them in creating electronics suited to an ever-increasing palette of requirements, from low-power systems, to chips manufactured to be recyclable, to ultra-secure systems for handling the most sensitive and private data.
To achieve this, APRIL brings together a large consortium of universities, industry and government bodies, working together to develop: i) the new technologies of the future; ii) the tools that will make these technologies a reality; and, very importantly, iii) the people with the necessary skills (for building as well as using such new tools) to ensure that the UK remains a capable and technologically advanced player in the global electronics industry.
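As a toy illustration of the kind of automated design-space exploration APRIL aims to make far more capable, the sketch below runs a plain random search over component values for a first-order RC low-pass filter to hit a target cutoff frequency. It is a deliberately simple stand-in under assumed values and ranges; it does not describe APRIL's actual tools or methods.

```python
# A deliberately simple sketch of automated design-space exploration: a random
# search picks R and C for a first-order RC low-pass filter to hit a target
# cutoff frequency. All values and ranges are illustrative assumptions.
import math
import random

target_fc = 2.0e6                            # desired -3 dB cutoff, Hz (assumed)

def cutoff(R: float, C: float) -> float:
    """Cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * R * C)

best = None
random.seed(0)
for _ in range(10_000):
    R = random.uniform(100.0, 100e3)         # ohms, within an assumed catalogue range
    C = random.uniform(1e-12, 1e-9)          # farads, within an assumed catalogue range
    err = abs(cutoff(R, C) - target_fc) / target_fc
    if best is None or err < best[0]:
        best = (err, R, C)

err, R, C = best
print(f"R = {R:.0f} ohm, C = {C*1e12:.1f} pF, cutoff = {cutoff(R, C)/1e6:.2f} MHz")
```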
For further information contact us at helpdesk@openaire.eu