
SEEBYTE LTD (SBT)
Country: United Kingdom
14 Projects, page 1 of 3
  • Funder: UK Research and Innovation
    Project Code: EP/N003446/1
    Funder Contribution: 1,418,010 GBP

    Over the last three decades, our lives have been revolutionized by the availability of inexpensive CMOS and CCD cameras, whose ubiquity has changed key aspects of security, communications, data handling, healthcare, commerce and leisure for almost all sections of society, regardless of wealth or geographical location. For example, it is estimated that over half of all adults in the UK own a smartphone with imaging/video capability - a statistic considered unthinkable less than 10 years ago.

    The next revolution in imaging will almost certainly be spearheaded by sparse-photon and three-dimensional imaging, ultimately using the effects of quantum entanglement. Such a revolution will necessarily require fast timing of single-photon detection, in the form of arrayed detectors or single-pixel cameras. Fast timing will permit effective time-of-flight depth profiling at remote distances, and the effects of quantum entanglement could be exploited in critical niche examples such as imaging below the diffraction limit, wavelength transmutation and quantum-secure imaging (an illustrative sketch of the time-of-flight calculation follows this entry). These revolutionary changes represent a paradigm shift in terms of functionality, but present significant challenges in algorithm development and data processing, as well as in data fusion with other imaging platforms, for example multispectral and conventional video. This Fellowship will allow me to bridge the gap between the enabling quantum technology and the image processing community in order to improve the scope and overall performance of next-generation imaging systems based on quantum technology.

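    The abstract above refers to time-of-flight depth profiling from fast-timed single-photon detections. As a loose illustration only (not code or parameters from the project), the sketch below converts a single-pixel photon arrival-time histogram into a range estimate via d = c*t/2; the 100 ps bin width and the histogram values are assumptions made up for the example.

        import numpy as np

        # Illustrative time-of-flight range estimate from a photon timing histogram.
        # All numbers are invented for the example; real systems calibrate timing
        # offsets, subtract background and use statistical (not peak-only) estimators.

        C = 299_792_458.0          # speed of light in m/s
        BIN_WIDTH_S = 100e-12      # timing-bin width: 100 picoseconds (assumed)

        def range_from_histogram(counts, bin_width_s=BIN_WIDTH_S):
            """Estimate one-way range from a photon arrival-time histogram.

            The round-trip time is taken from the peak bin, so range = c * t / 2.
            """
            counts = np.asarray(counts, dtype=float)
            peak_bin = int(np.argmax(counts))          # bin with most photon returns
            round_trip_time = peak_bin * bin_width_s   # seconds
            return C * round_trip_time / 2.0           # metres

        # Example: a noisy background histogram with a return peak in bin 700.
        rng = np.random.default_rng(0)
        hist = rng.poisson(2.0, size=1024)   # background counts
        hist[700] += 150                     # signal photons from the target
        print(f"estimated range: {range_from_histogram(hist):.2f} m")
        # bin 700 * 100 ps = 70 ns round trip -> roughly 10.5 m
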
  • Funder: UK Research and Innovation
    Project Code: EP/T026111/1
    Funder Contribution: 254,575 GBP

    There is a silent but steady revolution happening in all sectors of the economy, from agriculture through manufacturing to services. In virtually all activities in these sectors, processes are constantly monitored and improved via data collection and analysis. While there has been tremendous progress in data collection through a panoply of new sensor technologies, data analysis has proved to be a much more challenging task. Indeed, the data generated by sensors often comes in quantities so large that most of it ends up being discarded. Moreover, sensors frequently collect different types of data about the same phenomenon, so-called multimodal data, yet it is hard to determine how the different types of data relate to each other or, in particular, what one sensing modality tells us about another. In this project, we address the challenge of making sense of multimodal data, that is, data that refers to the same phenomenon but reveals different aspects of it and is usually presented in different formats. For example, several modalities can be used to diagnose cancer, including blood tests, imaging technologies like magnetic resonance (MR) and computed tomography (CT), genetic data, and family history information. Each of these modalities is typically insufficient to perform an accurate diagnosis but, when considered together, they usually lead to an undeniable conclusion.

    Our starting point is the realization that different sensing modalities have different costs, where "cost" can be financial, refer to safety or societal issues, or both. For instance, in the above example of cancer diagnosis, CT imaging involves exposing patients to X-ray radiation which, ironically, can itself provoke cancer. MR imaging, on the other hand, exposes patients to strong magnetic fields, a procedure that is generally safe. A pertinent question is then whether we can perform both MR and CT imaging, but use a lower dose of radiation in CT (obtaining a poor-resolution CT) and afterwards improve the resolution of the CT by leveraging information from the MR. This, of course, requires learning what type of information can be transferred between different modalities. Another example scenario is autonomous driving, in which sensors like radar, LiDAR, or infrared cameras, although much more expensive than conventional cameras, collect information that is critical to driving safely. In this case, is it possible to use cheaper, lower-resolution sensors and enhance them with information from conventional cameras? These examples also show that many of the scenarios in which we collect multimodal data have robustness requirements, namely precision of diagnosis in cancer detection and safety in autonomous driving.

    Our goal is therefore to develop data processing algorithms that effectively capture common information across multimodal data, leverage that shared structure to improve reconstruction, prediction, or classification of the costlier (or all) modalities, and are verifiable and robust. We do this by combining learning-based approaches with model-based approaches. In recent years, learning-based approaches, notably deep learning methods, have reached unprecedented performance by extracting information from large datasets. Unfortunately, they are vulnerable to so-called generalization errors, which occur when the data to which they are applied differs significantly from the data used during learning. Model-based methods, on the other hand, tend to be more robust, but generally have poorer performance. The approaches we propose to explore use learning-based techniques to determine correspondences across modalities and extract relevant common information, and then integrate that common information into model-based schemes. Their ultimate goal is to compensate for cost and quality imbalances across the modalities while, at the same time, providing robustness and verifiability (a loose illustration of cross-modal enhancement follows this entry).

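    The abstract above describes enhancing a cheap or low-dose modality (such as low-dose CT) with information from a co-registered, higher-quality one (such as MR). As a loose, classical illustration of cross-modal information transfer - not the learning-plus-model-based scheme the project proposes - the sketch below applies a guided filter, in which one image steers the smoothing of the other; the window radius, regularisation and synthetic images are arbitrary choices for the example.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(guide, target, radius=8, eps=1e-3):
            """Classical guided filter: denoise/enhance `target` using the edge
            structure of a co-registered `guide` image (both 2-D float arrays)."""
            size = 2 * radius + 1
            mean = lambda x: uniform_filter(x, size=size)   # local box average

            mean_g, mean_t = mean(guide), mean(target)
            cov_gt = mean(guide * target) - mean_g * mean_t
            var_g = mean(guide * guide) - mean_g * mean_g

            a = cov_gt / (var_g + eps)      # local linear model: target ~ a*guide + b
            b = mean_t - a * mean_g
            return mean(a) * guide + mean(b)

        # Example: a clean "guide" modality with a sharp edge, and a noisy,
        # low-quality "target" modality of the same scene.
        rng = np.random.default_rng(0)
        guide = np.zeros((128, 128)); guide[:, 64:] = 1.0
        target = guide + 0.3 * rng.standard_normal(guide.shape)
        enhanced = guided_filter(guide, target)
        print("noise std before:", np.std(target - guide).round(3),
              "after:", np.std(enhanced - guide).round(3))
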
  • Funder: European Commission
    Project Code: 608096
  • Funder: UK Research and Innovation
    Project Code: EP/V05676X/1
    Funder Contribution: 1,129,920 GBP

    The offshore energy and defence sectors share a vision of the future in which people are taken out of harsh, extreme environments and replaced by teams of smart robots able to do the 'dirty and dangerous jobs', collaborating seamlessly as a team with each other and with the human operators and experts onshore. In this new world, remote data collection, fusion and interpretation become central, together with the ability to generate transparent, safe, actionable decisions from these data. We propose the HUME project (HUman-machine teaming for Maritime Environments), whose vision is to develop a coherent framework that enables humans and machines to work seamlessly as a team by establishing and maintaining a single shared view of the world and of each other's intents through transparent interaction, robust to highly dynamic and unpredictable maritime environments. The HUME project's ambitious research programme will address fundamental questions in machine-machine and human-machine collaboration, robot perception, and explainable autonomy and AI. The Prosperity Partnership would build on a 20-year strategic relationship between SeeByte and HWU: SeeByte was originally a spin-out of Heriot-Watt University in 2001 and is now a world leader in maritime autonomy in the Oil & Gas and Defence sectors. This grant would facilitate a shift to lower-TRL research and development, providing seeding for early-stage research that can have a broad, longer-term and more disruptive impact. The proposed work aims to establish a durable model through which SeeByte and HWU can remain connected and foster long-term research relationships on projects of interest as they emerge in this rapidly changing field.

  • Funder: UK Research and Innovation
    Project Code: EP/K014277/1
    Funder Contribution: 3,837,580 GBP

    Sensors have long played a vital role in battle awareness for all our armed forces, ranging from advanced imaging technologies, such as radar and sonar, to acoustic and electronic surveillance. Sensors are the "eyes and ears" of the military, providing tactical information and assisting in the identification and assessment of threats. Integral to achieving these goals is signal processing. Indeed, through modern signal processing we have seen the basic radar transformed into a highly sophisticated sensing system with waveform agility and adaptive beam patterns, capable of high-resolution imaging and the detection and discrimination of multiple moving targets.

    Today, the modern defence world aspires to a network of interconnected sensors providing persistent and wide-area surveillance of scenes of interest. This requires the collection, dissemination and fusion of data from a range of sensors of widely varying complexity and scale - from satellite imaging to mobile phones. In order to achieve such interconnected sensing, and to avoid the dangers of data overload, it is necessary to re-examine the full signal processing chain from sensor to final decision. The need to reconcile more computationally demanding algorithms, and a potentially massive increase in data, with fundamental resource limitations in both computation and bandwidth raises new mathematical and computational challenges. This has led in recent years to the exploration of a number of new techniques, such as compressed sensing, adaptive sensor management and distributed processing, which minimize the amount of data that is acquired or transmitted through the sensor network while maximizing its relevance (a textbook compressed sensing sketch follows this entry). While there have been a number of targeted research programs to explore these ideas, such as the US "Integrated Sensing and Processing" program and its "Analog to Information" program, the field is still generally in its infancy. This project will study the processing of multi-sensor systems in a coherent programme of work, from efficient sampling, through distributed data processing and fusion, to efficient implementations. Underpinning all of this, we will investigate the significant issues involved in implementing complex algorithms on smaller, lighter and lower-power computing platforms. Exemplar challenges will be used throughout the project, covering all major sensing domains - radar/radio frequency, sonar/acoustics, and electro-optics/infrared - to demonstrate the performance of the innovations we develop.

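    The abstract above cites compressed sensing as one way to reduce the data a sensor network must acquire or transmit. The sketch below is a textbook illustration of that idea rather than anything from the project: a sparse signal is recovered from several times fewer random measurements than samples using iterative soft-thresholding (ISTA); the problem sizes, sparsity level and regularisation weight are arbitrary.

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=300):
            """Recover a sparse x from y = A @ x by iterative soft-thresholding,
            i.e. minimise 0.5*||A x - y||^2 + lam*||x||_1."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L with L = ||A||_2^2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)               # gradient of the data term
                z = x - step * grad
                x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrink
            return x

        # Example: a length-400 signal with 10 nonzeros, observed through only
        # 100 random Gaussian measurements (4x fewer measurements than samples).
        rng = np.random.default_rng(0)
        n, m, k = 400, 100, 10
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x_true
        x_hat = ista(A, y)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
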
