
IBM Corporation (International)


36 Projects, page 1 of 8
  • Funder: UK Research and Innovation Project Code: EP/R009147/1
    Funder Contribution: 771,657 GBP

    Optical lithography is a process that utilises light to define a specific pattern within a material. Standard optical lithography is capable of patterning materials in two dimensions, and the possible feature size scales with the wavelength of the light. Research into this process and its associated techniques has been one of the main drivers of the technological revolution: it is partly responsible for the increase in areal density within computer hard drives and the doubling of processor power every 18 months (Moore's Law). As we progress through the 21st century it is likely that 3D architectures on the nanoscale will become important in developing advanced materials for future data processing and storage technologies. Two-photon lithography is a 3D fabrication methodology that has recently been commercialised and is having a huge impact upon science, allowing the fabrication of bespoke 3D geometries on a length-scale of 200nm horizontally and 500nm vertically. Commercial two-photon lithography has made the fabrication of 3D systems on the several-hundred-nanometre scale accessible to scientists in a variety of fields, allowing the realisation of swimming micro-robots for targeted drug delivery, bioscaffolds and a range of photonic and mechanical metamaterials. A significant limitation of two-photon lithography is the asymmetry between the lateral and vertical resolution, which limits both the absolute size and the type of geometry that can be realised. In this proposal, we are going to utilise our world-leading expertise in non-linear microscopy to modify a commercial two-photon lithography system and obtain enhanced resolution. We will utilise techniques that have already significantly improved the resolution in fluorescence microscopy in order to achieve a 100nm isotropic resolution.
The newly built system will be used by our team to fabricate two types of 3D nanoscale magnetic materials, in geometries and on length-scales that are difficult to achieve using other fabrication methodologies. Our work in this area will pave the way for next-generation 3D memory technologies such as magnetic racetrack memory and help us to understand magnetic charge transport in novel magnetic materials. In addition, we will be working with project partners in the regenerative medicine and photonics communities in order to realise a number of novel 3D nanostructured materials. Firstly, we will work with stem cell researchers in order to fabricate artificial tissues that will be used in stem cell differentiation experiments. Our work here will provide a fascinating insight into the role of nanoscale topography in stem cell differentiation and may eventually have applications in tissue/organ growth. Secondly, we will work with academics studying photonic crystals - artificial materials that are capable of blocking electromagnetic radiation within a certain range of the spectrum. The majority of 3D photonic crystals made to date can only attenuate electromagnetic waves outside the visible range of the spectrum, limiting applications in optoelectronics. Our work here will allow the fabrication and measurement of photonic crystals that can be used with visible and infra-red light. This work may pave the way to next-generation three-dimensional optical circuits that can be utilised by telecommunication industries. Overall, this project will build an internationally unique instrument and utilise it to fabricate a range of advanced materials. This will put the U.K. at the forefront of 3D lithography technologies and the associated biomedical, magnetic and photonic materials that will be realised using our newly built instrument.
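The figures quoted above give a sense of the gain on offer. A back-of-the-envelope calculation (illustrative arithmetic only, treating each writing voxel as a simple box rather than the roughly ellipsoidal volume produced in practice) shows both the resolution asymmetry and the feature-volume reduction that a 100nm isotropic resolution would bring:

```python
# Voxel arithmetic from the resolutions quoted in the abstract.
# The box-shaped voxel is an illustrative approximation.
lateral_nm, vertical_nm = 200, 500   # commercial two-photon lithography
iso_nm = 100                         # target isotropic resolution

aspect_ratio = vertical_nm / lateral_nm                    # vertical/lateral asymmetry
commercial_voxel = lateral_nm * lateral_nm * vertical_nm   # nm^3
target_voxel = iso_nm ** 3                                 # nm^3
volume_gain = commercial_voxel / target_voxel              # feature-volume reduction
```

On these numbers the commercial voxel is 2.5 times taller than it is wide, and the proposed isotropic voxel is 20 times smaller by volume.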

  • Funder: UK Research and Innovation Project Code: EP/M000923/1
    Funder Contribution: 1,476,200 GBP

    With more than 300 papers published on the topic, the Condensed Matter group in Leeds is well known for its work on spintronics - a subject defined by the exploitation of the magnetic moment of electrons instead of charge. Recently the group has appointed two new members of staff, bringing us expertise in organic spintronics (Cespedes) and nanomagnetism (Moore). As a result, we are one of the first groups to develop high frequency equipment for molecular spintronics in order to research eco-friendly microwave devices. We are also exploring ways of switching magnetisation using the strain developed by an electric field - important for future storage applications. Although we have links among all members of the group, this Platform provides an excellent opportunity to take a strategic look at our activity. Our broad research strategy will concern the general theme of spintronic metamaterials. Metamaterials are artificial in that the functional properties are not a feature of the naturally occurring materials that form the building blocks, but emerge through design and engineering of material combinations. The artificial aspect is often introduced through nanostructuring. An early example arises in optics, where sub-wavelength features give rise to new properties such as photonic band-gap crystals. Magnetic metamaterials stood at the dawn of spintronics - a multilayer composed of alternating magnetic and non-magnetic metals displays giant magnetoresistance. These properties have been exploited to great advantage in computing and communication. We aim to move from common magnetoresistive devices and spin transport physics into microwave nanodevices that manipulate the interactions of electrons with phonons, magnons and other quasiparticles in hybrid structures.
Building on our recognised strengths of thin film growth, characterisation and magnetotransport, we are proposing a programme of engineering materials in combinations that yield fruitful emergent properties - spintronic metamaterials. Our group has a broad background that includes the ability to structure materials at the nanoscale so that cooperative behaviour arises, e.g. combining superconductors with skyrmion spin textures, or injecting pure spin currents from magnets into organics. We will apply this capability to questions in areas identified as strategic, such as quantum effects for new technology, beyond-CMOS electronics, energy efficient electronics and new tools for healthcare. We shall pursue this in a way that is very different from a traditional responsive-mode research project. We have identified areas that are scientifically and nationally important and where we can make impact in both academic and technological settings. We will not specify exactly which experiments will be performed, only the type of experiment that is possible. We will use the flexibility of platform funding to develop the independence of researchers beyond that achievable in a normal grant. As an example, there is a controversy at present about the role of heat and magnetic proximity effects in spin currents and their possibilities in non-dissipative, low power consumption electronics. With platform funding we can send a researcher to visit the relevant labs and attend the relevant workshops; they would then be in a good position to recommend the best course of action. The researcher would lead those experiments with full support for necessary resources - including, and encouraging where appropriate, the contribution of PhD students and other PDRAs. This general approach can be applied across our whole platform programme to any emerging problems in the field.
This is career-enhancing because researchers at this stage of their careers can usually only gain this level of autonomy if they are independent Research Fellows. This background will fast-track them for Research Fellowships or good positions in industry or top-level institutions looking for individuals with initiative and vision.

  • Funder: UK Research and Innovation Project Code: EP/N014391/1
    Funder Contribution: 2,008,950 GBP

    Our Centre brings together a world-leading team of mathematicians, statisticians and clinicians with a range of industrial partners, patients and other stakeholders to focus on the development of new methods for managing and treating chronic health conditions using predictive mathematical models. This unique approach is underpinned by the expertise and breadth of experience of the Centre's team and innovative approaches to both the research and translational aspects. At present, many chronic disorders are diagnosed and managed based upon easily identifiable phenomena in clinically collected data. For example, features of the electrical activity of the heart or brain are used to diagnose arrhythmias and epilepsy. Sampling hormone levels in the blood is used for a range of endocrine conditions, and psychological testing is used in dementia and schizophrenia. However, it is becoming increasingly understood that these clinical observables are not static, but rather a reflection of a highly dynamic and evolving system at a single snapshot in time. The qualitative nature of these criteria, combined with observational data that are incomplete and change over time, results in the potential for non-optimal decision-making. As our population ages, the number of people living with a chronic disorder is forecast to rise dramatically, increasing an already unsustainable financial burden of healthcare costs on society and potentially causing a substantial reduction in quality of life for the many affected individuals. Critical to averting this are early and accurate diagnoses, optimal use of available medications, and new methods of surgery. Our Centre will facilitate these through developing the mathematical and statistical tools necessary to inform clinical decision making on a patient-by-patient basis. The basis of this approach is patient-specific mathematical models, the parameters of which are determined directly from clinical data obtained from the patient.
As an example of this, our recent research in the field of epilepsy has revealed that seizures may emerge from the interplay between the activity in specific regions of the brain, and the network structures formed between those regions. This hypothesis has been tested in a cohort of people with epilepsy and we identified differences in their brain networks, compared to healthy volunteers. Mathematical analysis of these networks demonstrated that they had a significantly increased propensity to generate seizures, in silico, which we proposed as a novel biomarker of epilepsy. To validate this, an early phase clinical trial at King's Health Partners in London has recently commenced, the success of which could ultimately lead to a revolution in diagnosis of epilepsy by enabling diagnosis from markers that are present even in the absence of seizures; reducing time spent in clinic and increasing accuracy of diagnosis. Indeed it may even make diagnosis in the GP clinic a reality. However, epilepsy is just the tip of the iceberg! Patient-specific mathematical models have the potential to revolutionise a wide range of clinical conditions. For example, early diagnosis of dementia could enable much more effective use of existing medication and result in enhanced quality and quantity of life for millions of people. For other conditions, such as cortisolism and diabetes where a range of treatment options exist, identifying the optimal medication, and the pattern of its delivery, based upon the profile of the individual will enable us to maximise efficacy, whilst minimising unwanted side effects.
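The "propensity to generate seizures, in silico" idea can be sketched with a deliberately simple toy model: nodes switch stochastically between background and seizure-like activity, and seizing neighbours raise a node's escape probability, so the network structure itself shapes how much seizure-like activity the system sustains. Everything below (the dynamics, parameters and example networks) is a hypothetical stand-in for illustration, not the Centre's actual patient-specific models:

```python
# Toy network model: how much time a network spends in seizure-like
# activity, as a crude analogue of an in-silico propensity biomarker.
# Dynamics and parameters are illustrative assumptions only.
import random

def propensity(adj, steps=5000, p_escape=0.01, coupling=0.05, p_recover=0.2, seed=1):
    """Mean fraction of nodes in the seizure-like state over a long run."""
    rng = random.Random(seed)
    n = len(adj)
    state = [0] * n  # 0 = background activity, 1 = seizure-like activity
    total = 0.0
    for _ in range(steps):
        nxt = state[:]
        for i in range(n):
            drive = sum(state[j] for j in range(n) if adj[i][j])
            if state[i] == 0:
                # seizing neighbours raise the escape probability
                if rng.random() < min(1.0, p_escape + coupling * drive):
                    nxt[i] = 1
            elif rng.random() < p_recover:
                nxt[i] = 0
        state = nxt
        total += sum(state) / n
    return total / steps

def ring(n):      # sparse network: each node coupled to two neighbours
    return [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]

def complete(n):  # densely connected network: all-to-all coupling
    return [[1 if i != j else 0 for j in range(n)] for i in range(n)]

# Identical node dynamics, different networks: the denser network
# sustains far more seizure-like activity.
dense_propensity = propensity(complete(10))
sparse_propensity = propensity(ring(10))
```

In the real setting the model parameters would be inferred from a patient's own recordings, and the comparison would be between networks derived from people with epilepsy and those from healthy volunteers, rather than between these toy graphs.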

  • Funder: UK Research and Innovation Project Code: EP/M00421X/1
    Funder Contribution: 717,372 GBP

    Our concern in this proposal is for Base of the Pyramid (BoP) users, that is, those who are the most socio-economically disadvantaged. For these communities, there are several challenges to the digital utopia that governments and industry are regularly heralding. These range from low technological and textual literacy and a paucity of relevant, appropriate content to a lack of affordable, high-bandwidth data connections. With the ubiquity of mobile phones, it is clear that now, and in the future, these platforms will be the most influential ICT solutions for these users in the poorest regions of the world. Understandably, a good proportion of the work in Human Computer Interaction for Development (HCI4D) and ICT for Development (ICTD) has focused on the technologically lowest common denominators - for example "dumbphones" and "feature" phones, the precursors to smartphones - to reach as many people as possible. In contrast, this proposal addresses the need to look ahead to a future that promises widespread availability of increasingly sophisticated devices. The most likely future in the next 5-10 years is that BoP users will have access to handsets that developed-world users now take for granted. This trend is exemplified by the affordability of so-called "low-end smartphones." The GSMA - the global industry body for mobile service providers and one of our partners in this project - predicts that this trend will continue worldwide, with these devices already retailing for as little as £30. These devices are equipped with rich sets of sensors, connectivity facilities and output channels (from audio-visual to touch-output). While there is plentiful research on how to use and extend these platforms for more "natural" interaction (e.g., creating mobile pointing and gestural interfaces), the work has largely been from a "first world" perspective.
That is, the techniques have been designed to fit a future, in terms of resource availability, cultural practice and literacy, that is out of joint with that lying ahead for BoP users. Our aim is to radically innovate for key future interaction opportunities, drawing on a network of organisations and individuals deeply connected to BoP users, along with BoP end-users themselves. These stakeholders have helped shape the proposal and will be integral to the work itself. The programme will be comprehensive and integrative, involving three driver regions in Kenya, South Africa and India, each allowing us to consider needs from three perspectives: urban, suburban and rural. In solving pressing problems of effective interaction for BoP users we will also seek new basic premises of HCI design in the wider developed world. In our view, the established information interaction techniques (like copy/paste) derive from desktop, textual and knowledge-work framings of interaction. Mobile interaction articulates an alternative framework - sociality, personal narrative and highly context-oriented practices of friendship, family and community. With the emergence of smartphones and their remarkable processing powers, the temptation to make them mini-PCs, with all the interaction principles to match, has led many HCI researchers to avoid designing for those social practices, blurring the distinction between the mobile and the PC. Given that most of those who have access to these devices are living in cultures where knowledge work is the norm, this tends to be accepted - sociality is often achieved by by-passing the device itself and engaging with 'apps.' The "living lab" of our BoP communities, where exposure to and suitability of desktop UIs is very low, provides an exciting resource that draws attention to how users seek to appropriate mobile devices for social ends in and through the device itself.
This in turn can provide the basis for uncovering new and better basic HCI principles that allow these ends to be more readily achieved.

  • Funder: UK Research and Innovation Project Code: EP/R034567/1
    Funder Contribution: 1,579,790 GBP

    Modern society faces a fundamental problem: the reliability of complex, evolving software systems on which it critically depends cannot be guaranteed by the established, non-mathematical techniques, such as informal prose specification and ad hoc testing. Modern companies are moving fast, leaving little time for code analysis and testing; concurrent and distributed programs cannot be adequately assessed via traditional testing methods; users of mobile applications neglect to apply software fixes; and malicious users increasingly exploit programming errors, causing major security disruptions. Trustworthy, reliable software is becoming harder to achieve, whilst new business and cyber-security challenges make it of escalating importance. Developers cope with complexity using abstraction: the breaking up of systems into components and layers connected via software interfaces. These interfaces are described using specifications: for example, documentation in English; test suites with varying degrees of rigour; static typing embedded in programming languages; and formal specifications written in various logics. In computer science, despite widespread agreement on the importance of abstraction, specifications are often seen as an afterthought and a hindrance to software development, and are rarely justified. Formal specification as part of the industrial software design process is in its infancy. My over-arching research vision is to bring scientific, mathematical method to the specification and verification of modern software systems. A fundamental unifying theme of my current work is my unique emphasis on what it means for a formal specification to be appropriate for the task in hand, properly evaluated and useful for real-world applications. Specifications should be validated, with proper evidence that they describe what they should.
This validation can come in many forms, from formal verification through systematic testing to precise argumentation that a formal specification accurately captures an English standard. Specifications should be useful, identifying compositional building blocks that are intuitive and helpful to clients both now and in the future. Specifications should be just right, providing a clear logical boundary between implementations and client programs. VeTSpec has four related objectives, exploring different strengths of program specification, real-world program library specification and mechanised language specification, in each case determining what it means for the specification to be appropriate, properly evaluated and useful for real-world applications. Objective A: Tractable reasoning about concurrency and distribution is a long-standing, difficult problem. I will develop the fundamental theory for the verified specification of concurrent programs and distributed systems, focussing on safety properties for programs based on primitive atomic commands, safety properties for programs based on the more complex atomic transactions used in software transactional memory and distributed databases, and progress properties. Objective B: JavaScript is the most widespread dynamic language, used by 94.8% of websites. Its dynamic nature and complex semantics make it a difficult target for verified specification. I will develop logic-based analysis tools for the specification, verification and testing of JavaScript programs, intertwining theoretical results with properly engineered tool development. Objective C: The mechanised specification of real-world programming languages is well-established. Such specifications are difficult to maintain and their use is not fully explored. I will provide a maintainable mechanised specification of JavaScript, together with systematic test generation from this specification.
Objective D: I will explore fundamental, conceptual questions associated with the ambitious VeTSpec goal to bring scientific, mathematical method to the specification of modern software systems.
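One lightweight, executable reading of "a clear logical boundary between implementations and client programs" is a function whose pre- and postconditions are stated separately from its body, so the client relies only on the stated contract and never on implementation detail. The decorator and names below are illustrative assumptions for this sketch; VeTSpec itself targets program logics and mechanised semantics, not runtime checks:

```python
# Sketch: a specification as an executable contract separating the
# client-visible guarantee from the hidden implementation.
# 'specified' and its usage are illustrative, not VeTSpec's formalism.
import functools

def specified(pre, post):
    """Attach an executable precondition and postcondition to a function."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args):
            assert pre(*args), f"precondition of {f.__name__} violated"
            result = f(*args)
            assert post(result, *args), f"postcondition of {f.__name__} violated"
            return result
        return checked
    return wrap

@specified(pre=lambda xs, x: all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)),
           post=lambda found, xs, x: found == (x in xs))
def binary_search(xs, x):
    """Implementation detail hidden behind the spec: any correct search would do."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < x:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(xs) and xs[lo] == x
```

A client calling `binary_search` needs only the contract: sorted input in, correct membership answer out. The implementation can be swapped (linear scan, interpolation search) without the boundary moving, which is precisely the compositionality the objectives above aim to make rigorous.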

