
Tech Against Terrorism (UK)
Project, 2025 - 2028
Partners: FSU, University of Copenhagen, The Alan Turing Institute, KCL, University of Edinburgh, American Civil Liberties Union, Tech Against Terrorism (UK), University of Chicago, UNSW Sydney, New York University, Ada Lovelace Institute, Meta, Home Office, Bellingcat
Funder: UK Research and Innovation
Project Code: MR/Z00036X/1
Funder Contribution: 594,294 GBP

Terrorist risks and violent extremist threats are increasingly identified and governed through new forms of data analytics and automated decision-making (ADM) made possible by advances in machine learning (ML) and artificial intelligence (AI). Private actors, including social media platforms, airlines and financial institutions, are collaborating with states, global bodies and international organisations (IOs) to implement ambitious data-led global counterterrorism projects. The UN Security Council, for example, has called on all states to intensify the exchange of information about suspected terrorists by building and sharing watchlists and analysing biometric data, bolstering the use of ML to identify 'future terrorists' in advance. Social media platforms are using AI to detect extremist content online and regulate global data flows on an unprecedented scale. Passenger data from the aviation industry is analysed to identify suspicious 'patterns of behaviour' and control the movements of risky travellers. Financial data is mined by banks to spot suspicious transactions and terrorist 'associations'. These changes are setting new and far-reaching global security infrastructure projects in motion. Yet the implications of these shifts for how international law is practiced, how global security threats are known, and how powerful actors are held accountable remain uncertain. The data infrastructures underlying global governance have been largely neglected in legal scholarship. And whilst the potential problems that AI poses, such as discrimination and privacy violations, are becoming clearer, solutions remain elusive, especially in the security domain, where secrecy is key and the inner workings of algorithms are 'black-boxed' even more than usual. Regulatory theorists argue that we urgently need to 'expand our frame of rights discourse to encompass our socio-technical architecture' to respond to the accountability challenges of AI (Yeung 2019). Studying global security infrastructures in action might help us reimagine how data, security and rights could be reconnected in our digital present.

This project rethinks global security law and governance from the 'infrastructure space' it is creating. It focuses on three areas: (i) digital bordering infrastructures for controlling the cross-border movements of 'risky' people (Work Package 1); (ii) platform infrastructures for moderating terrorist and violent extremist content online (Work Package 2); and (iii) counterterrorism watchlisting infrastructures (Work Package 3). The project contends that the most far-reaching changes to global security governance are not being written in the language of international law or created through the formal powers of states and IOs, but built through new socio-technical infrastructures and the data-driven security expertise they enable.
I use the concept of 'infra-legalities' (that is, the co-productive effects of data infrastructure, law and regulation) to analyse these shifts and develop a novel approach for studying international law and regulation in the age of algorithmic global governance. Infrastructure is often disregarded as an invisible substrate on which powerful actors act, yet it helps create and shape power, knowledge and governance. Drawing on Science and Technology Studies, computer science, critical data studies and critical security studies, this project performs what Bowker and Star (1999) call an 'infrastructural inversion' by mapping the seemingly mundane governance work of AI-driven global security infrastructures. By 'following the data' and tracing the socio-technical relations, norms, knowledge practices and power asymmetries that security infrastructures enact, a different method of studying global governance can emerge. Studying the infra-legalities of global security opens space for addressing key challenges and shaping policy debates on security, responsibility and accountability in the age of AI and automation.