Ethical Issues in Covert, Security and Surveillance Research: Volume 8


Table of contents (16 chapters)
Abstract

In light of the many crises and catastrophes faced in the modern world, policymakers frequently claim to be ‘following the science’ or being ‘governed by the data’. Yet conflicts based on inequalities continue to fuel dissatisfaction with the decisions and actions of authorities. Research into public security may require surveillance and covert observation, both of which are subject to major ethical challenges. Any neat distinction between covert and overt research is difficult to maintain, given the variety of definitions used for all the terms addressed here. Covert research may be ethically justified and is not necessarily deceptive. In any case, deception may be ethical if engaged in for the ‘right’ reasons. Modern research sites and innovative research methods may enhance opportunities for covert work. In all surveillance and covert work, care must be taken over how consent is managed, how observed subjects are protected, and how harm to all involved is minimised.

Abstract

This short chapter is an introduction to my 2018 book, The Ethics of Surveillance: An Introduction (Macnish, 2018). It is provided at the start of this PRO-RES collection of essays because it anticipates and supplements the range of issues covered in the collection and lays out some of the fundamental considerations necessary to ensure that, if surveillance must be conducted, it is done as ethically as possible.

When is surveillance justified? We can largely agree that there are cases in which surveillance seems, at least prima facie, to be morally correct: police tracking a suspected mass murderer, domestic state security tracking a spy network, or a spouse uncovering a partner’s infidelity. At the same time, there are other cases in which surveillance seems clearly not to be justified: the mass surveillance practices of the East German Stasi, an employer watching over an employee to ensure that they do not spend too long in the toilet, or a voyeur watching the subject of his lust undress night after night.

As an introductory text, my book does not seek to provide a list of necessary and sufficient conditions for ethical surveillance. What it does provide is an overview of the current thinking in surveillance ethics, looking at a range of proposed arguments about these questions, and how those arguments might play out in a variety of applied settings. It hence provides a useful and accessible volume for policymakers wishing to rapidly get up to speed on developments in surveillance and the accompanying ethical discussions.

Abstract

This chapter, based on a sociological approach, addresses the ethical issues of surveillance research from the perspective of the profound transformations that science and innovation are undergoing as part of a broader shift from modern to post-modern society, a shift that also affects other major social institutions (such as government, religion, the family and public administration). The change occurring in the science and technology system is characterised by diminishing authority, uncertainty about internal mechanisms and standards, and declining, increasingly difficult access to resources. Such changes, also related to globalisation and new digital technologies, have transformed the way research is conducted and disseminated. Research is now more open and its results are more easily accessible to citizens.

Scientific research is also coming under increased public scrutiny while, at the same time, public distrust of and disaffection towards science are rising. In such a context, it is more important than ever to make sure that research activities are not compromised by fraudulent and unethical practices. The legitimate expectations of citizens to enjoy their rights, including the ability to protect their private sphere, are growing. Scientific and technological development is deeply interrelated with the widespread awareness of these rights and the possibility of exercising them, but it also produces new risks, while a widespread sense of insecurity increases. The digital revolution, while improving people’s quality of life, at the same time offers new opportunities for crime and terrorism, which in turn have produced a demand to strengthen security systems through increasingly advanced and intrusive surveillance technologies. Misconduct in the field of surveillance may not only undermine the quality of research but also further impair society’s trust in research and science, as well as in the State and its institutions.

Abstract

The received wisdom underlying many guides to ethical research is that information is private, and research is consequently seen as a trespass on the private sphere. Privacy demands control; control requires consent; consent protects privacy. This is not wrong in every case, but it is over-generalised. This distorted perspective leads to some striking misinterpretations of the rights of research participants and the duties of researchers. Privacy is not the same thing as data protection; consent is not adequate as a defence of privacy; seeking consent is not always required or appropriate. Beyond that, the misinterpretation can lead to conduct which is unethical, limiting the scope of research activity, obstructing the flow of information in a free society, and failing to recognise what researchers’ real duties are.

Abstract

Covert research has a mixed reputation within the scientific community. Some are unsure of its moral worth; others would proscribe it entirely. This reputation stems largely from a lack of knowledge about the reasons for choosing the covert method. In this chapter, these reasons will be reconstructed in detail, and all the elements that allow one to judge the ethicality of covert research will be laid out for the reader. In particular, the chapter will answer the following questions: What harms can covert research cause to the subjects participating in it? Is covert research necessarily deceptive? In which cases is it ethically permissible for a researcher to deceive? What is the scientific added value of covert research, that is, what does covert research discover that overt research does not? What are the risks to researchers acting undercover? Finally, some suggestions will be offered to research ethics reviewers to help in their appraisal of covert research.

Abstract

Large-scale data analytics have raised a number of ethical concerns. Many of these were introduced in a seminal paper by boyd and Crawford and have been developed since by others (boyd & Crawford, 2012; Lagoze, 2014; Martin, 2015; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). One such concern, frequently recognised but under-analysed, is the focus on correlations in data rather than on the causal relationship between data and results. Advocates of this approach dismiss the need for an understanding of causation, holding instead that the correlation of data is sufficient to meet our needs. In crude terms, this position holds that we no longer need to know why X + Y = Z; merely acknowledging that the pattern exists is enough.

In this chapter, the author explores the ethical implications and challenges surrounding a focus on correlation over causation. In particular, the author focusses on questions of the legitimacy of data collection, the embedding of persistent bias, and the implications of future predictions. Such concerns are vital for understanding the ethical implications of, for example, the collection and use of ‘big data’ or covert access to ‘secondary’ information that is ostensibly ‘publicly available’. The author’s conclusion is that, by failing to consider causation, the short-term benefits of speed and cost may be countered by ethically problematic scenarios in both the short and long term.

Abstract

Advances in Big Data, artificial intelligence and data-driven innovation bring enormous benefits for society as a whole and for different sectors. By contrast, their misuse can lead to data workflows that bypass the intent of privacy and data protection law, as well as of ethical mandates. This may be referred to as the ‘creep factor’ of Big Data, and it needs to be tackled right away, especially considering that we are moving towards the ‘datafication’ of society, in which devices to capture, collect, store and process data are becoming ever cheaper and faster, while computational power continues to increase. If using Big Data in truly anonymised ways, within an ethically sound and societally focussed framework, can act as an enabler of sustainable development, using Big Data outside such a framework poses a number of threats, potential hurdles and multiple ethical challenges. Examples include the impact on privacy, including group privacy, caused by new surveillance tools and data-gathering techniques, as well as high-tech profiling, automated decision-making and discriminatory practices.

In our society, everything can be given a score, and critical, life-changing opportunities are increasingly determined by such scoring systems, often obtained through secret predictive algorithms applied to data to determine who has value. It is therefore essential to guarantee the fairness and accuracy of such scoring systems and to ensure that the decisions relying upon them are made in a legal and ethical manner, avoiding the risk of stigmatisation capable of affecting individuals’ opportunities. Likewise, it is necessary to prevent so-called ‘social cooling’: the long-term negative side effects of data-driven innovation, in particular of such scoring systems and of the reputation economy. It is reflected, for instance, in self-censorship, risk aversion and a reluctance to exercise free speech, generated by increasingly intrusive Big Data practices that lack an ethical foundation.

Another key ethical dimension pertains to human-data interaction in Internet of Things (IoT) environments, which increases the volume of data collected, the speed of processing and the variety of data sources. It is urgent to investigate further aspects such as the ‘ownership’ of data and other hurdles, especially considering that the regulatory landscape is developing at a much slower pace than IoT and the evolution of Big Data technologies. These are only some examples of the issues and consequences that Big Data raises, and they require adequate measures in response to the ‘data trust deficit’, moving not towards the prohibition of data collection but rather towards the identification and prohibition of misuse, unfair behaviours and unfair treatment once governments and companies hold such data. At the same time, the debate should further investigate ‘data altruism’, examining how the increasing amounts of data in our society can be put to concrete use for the public good, and how best to implement this.

Abstract

A policy of surveillance which interferes with the fundamental right to a private life requires credible justification and a supportive evidence base. The authority for such interference should be clearly detailed in law, overseen by a transparent process and not left to the vagaries of administrative discretion. If a state surveils those it governs and claims the interference to be in the public interest, then the evidence base on which that claim stands and the operative conception of the public interest should be subject to critical examination. Unfortunately, there is an inconsistency in the regulatory burden associated with access to confidential patient information for non-health-related surveillance purposes and access for health-related surveillance or research purposes. This inconsistency represents a systemic weakness in the capacity to inform or challenge an evidence-based policy of non-health-related surveillance; it is unjustified and undermines the qualities recognised as necessary to maintain a trustworthy, confidential public health service. Taking the withdrawn Memorandum of Understanding (MoU) between NHS Digital and the Home Office as a worked example, this chapter demonstrates how the capacity of the law to constrain the arbitrary or unwarranted exercise of power through judicial review is not sufficient to level the playing field. The authors recommend ‘levelling up’ procedural oversight and extending to non-health-related surveillance purposes independent mechanisms equivalent to those adopted for establishing the operative conception of the public interest in the context of health research.

Abstract

Since the European Union’s (EU) Charter of Fundamental Rights became binding in 2009, data protection has attained the status of a fundamental right (Article 8) throughout the EU. This chapter discusses the relevance of data protection in the context of security. It shows that data protection has been of particular relevance in the German context, not only against the backdrop of rapidly evolving information technology but also of historical experiences with political regimes that collected information in order to oppress citizens.

Abstract

‘Dual use research’ is research with results that can potentially cause harm as well as bring benefits. Harm can be to people, animals or the environment. For most research, harms can be difficult to predict and quantify, so in this sense almost all research could be seen as having dual use potential. This chapter will present a framework for reviewing dual use research by justifying why the responsibility for approving and conducting research does not sit with Research Ethics Committees (RECs) alone. By mapping out the wider research landscape, it will be argued that both responsibility and accountability for dual use research sit on the shoulders of broader governance structures that reflect the philosophical and political aspirations of society as a whole. RECs are certainly still important for identifying potential ‘dual use research of concern’, and perhaps for teasing out some of the details that may be hidden within research plans or projects, but in a well-functioning system they should never be the sole gatekeepers that determine which research should, and should not, be allowed to proceed.

Abstract

In recent years, there has been a growing dialogue around community-based and systems-based approaches to security risk management through the introduction of top-down and bottom-up knowledge acquisition. In essence, this concerns knowledge elicited from academic or security subject-matter experts on the one hand, and from practitioner experts or field workers themselves on the other, and how far these disparate sources of knowledge converge or diverge. In many ways, this represents a classic tension between organisational and procedural perspectives on knowledge management (i.e. top-down) and more pragmatic and experience-focussed perspectives (i.e. bottom-up).

This chapter considers these approaches and argues that a more consistent approach needs to address the conflict between procedures and experience, help convert field experience into knowledge, and ultimately provide effective training that is relevant to those heading out into demanding work situations. Whichever approach is taken, ethics and method are intricately bound together, and the security of both staff and at-risk populations depends upon correctly managing the balance between systems and communities.

Abstract

In many security domains, the ‘human in the system’ is often a critical line of defence in identifying, preventing and responding to any threats (Saikayasit, Stedmon, & Lawson, 2015). Traditionally, such security domains have focussed on mainstream public safety within crowded spaces and at border controls, through to identifying suspicious behaviours and hostile reconnaissance and implementing counter-terrorism initiatives. More recently, with growing insecurity around the world, organisations have looked to improve their security risk management frameworks, developing concepts which originated in the health and safety field to deal with more pressing risks such as terrorist acts, abduction and piracy (Paul, 2018). In these instances, security is usually the specific responsibility of frontline personnel with defined roles and responsibilities operating in accordance with organisational protocols (Saikayasit, Stedmon, Lawson, & Fussey, 2012; Stedmon, Saikayasit, Lawson, & Fussey, 2013). However, understanding the knowledge that frontline security workers might possess and use requires sensitive investigation in equally sensitive security domains.

This chapter considers how to investigate knowledge elicitation in these sensitive security domains, and the underlying ethics of research design that support and protect both the investigation itself and end-users alike. The chapter also discusses the criteria used for ensuring trustworthiness and assesses the relative merits of the range of methods adopted.

Abstract

This chapter focusses on the ethical issues raised by different types of surveillance and the varied ways in which surveillance can be covert. Three case studies are presented which highlight different types of surveillance and different ethical concerns. The first case concerns the use of undercover police to infiltrate political activist groups over a 40-year period in the UK. The second case study examines a joint operation by US and Australian law enforcement agencies: the FBI’s Operation Trojan Shield and the AFP’s Operation Ironside. This involved distributing encrypted phone handsets to serious criminal organisations; the handsets included a ‘backdoor’ that secretly sent encrypted copies of all messages to law enforcement. The third case study analyses the use of emotional artificial intelligence systems in educational digital learning platforms for children, where technology companies collect, store and use intrusive personal data in an opaque manner. The authors discuss similarities and differences in the ethical questions raised by these cases, for example, the involvement of the state versus private corporations, the kinds of information gathered and how it is used.

DOI: 10.1108/S2398-6018202108
Publication date: 2021-12-09
Book series: Advances in Research Ethics and Integrity
Series copyright holder: Emerald Publishing Limited
ISBN: 978-1-80262-414-4
eISBN: 978-1-80262-411-3
Book series ISSN: 2398-6018