Imminent dystopia? Media coverage of algorithmic surveillance at Berlin-Südkreuz

: Facial-recognition software continues to create heated controversy, as illustrated by a year-long pilot run at the Berlin-Südkreuz train station. The test run at one of Berlin’s main arteries was a catalyst for media attention, spurring heated discourse on the efficiency and legitimacy of surveillance technology. Drawing on a critical discourse analysis and (post-)panoptic theory, this paper investigates how the relationship between the public and the state is represented, how automated surveillance technology is linguistically framed and which problematisations were associated with the technology deployed during the 2017 pilot.


INTRODUCTION
Mass-casualty terrorism, migration sparked by raging conflicts and humanitarian crises, and transnational corporate crime give rise to another era of unpredictability. Amid these global challenges, national governments are tasked with providing for the defence and security of the state, its citizens, institutions, and economy. In a quest to live up to this challenge, recent technological advancements seem to offer promising solutions and are often justified as a means to regain control. Among the most popular tools in this context are surveillance technologies, which are certainly not novel. Yet, recent strides towards automation open up unforeseen possibilities.
Facial recognition software, for instance, enables the identification of individuals from a picture or video. While 'facial recognition' has become a catch-all term, it should be noted that facial recognition is only one of many applications of algorithmic surveillance. Against this backdrop, this paper examines (1) how the deployed technology is linguistically represented, (2) the relationship between the state and the public and (3) the problematisations associated with both.

This exploratory discourse analysis draws on thirty-one news articles, commentary pieces and blog posts from a variety of national, regional and online-only outlets. These sources include more critical stances towards the issue (Süddeutsche Zeitung, Spiegel Online, Netzpolitik.org, Deutsche Welle (DW), Zeit Online), a comparatively moderate position (Berliner Zeitung, Der Tagesspiegel, Morgenpost, Welt) and four with a popular scientific focus (Spektrum, Heise Online, Computer Bild, Wissen.de). Additionally, the analysis included posts on blogs that are more or less loosely centred around the topics of data protection, privacy and security (datenschutz notizen, Datenschutzbeauftragter INFO, digitalcourage.org, IT-Security@Work, law blog, TEXperimenTales) and contributions by online outlets with a focus on digital technologies (tarnkappe.info, Gründerszene). Other articles were published in regional publications (Märkische Allgemeine Zeitung, QIEZ). These sources were selected on the assumption that they might each present the Südkreuz project in different ways and with different foci; a blog on information security might offer a different perspective than a regional newspaper. Some outlets heavily covered the unfolding of the pilot project and were included in the analysis with more than one text. Although the analysis spans a variety of outlets, the results show that the pilot was generally portrayed critically and represented in a similar fashion across outlets.
Although the selected sources only represent a small proportion of the many news reports, feature articles, editorials, columns, opinion pieces and blog posts that were published on this topic, they offer an insight into the linguistic framings that characterised the discourse. Thus, this first explorative study offers a baseline for further investigations into the controversy around the Berlin-Südkreuz pilot project.

SURVEILLANCE TECHNOLOGY AND THE STATE
In Germany, as in other countries, the government is the driving force behind the adoption and development of surveillance technologies. The advancements in automated or "smart" surveillance technologies are still recent; thus, no common term has been established. This is partly due to the many new applications, e.g., the prediction of criminal behaviour or traffic jams, facial recognition, and the move out of local databases into networked systems (Galič et al., 2016; Roßnagel et al., 2011). The terms commonly employed in this context include "smart CCTV", "second generation CCTV" and "algorithmic surveillance" (Musik, 2011). I will use the term "algorithmic surveillance", as it best captures the nature of these systems, which use algorithms to interpret, combine and aggregate data.
The Ministry of the Interior, as well as federal and national police, are responsible for the protection of internal security and the provision of policing. Surveillance technologies tend to be justified as resources that enable the state to live up to its responsibility to provide security, that is, to prevent or reduce harm. In Germany, surveillance tools are increasingly developed and adapted as policing tools. The German Ministry of Education and Research is heavily investing in their development, as are various federal policing institutions across Germany, which run in-house research projects (Möllers & Hälterlein, 2013, p. 60). Additionally, the EU research projects P-REACT and INDECT explore how surveillance systems may be employed to detect criminal activity (European Commission, 2016; 2017).
The state is expanding the legal framework to enable algorithmic surveillance. The adoption of biometric databases through the 'e-Pass' was a first step toward the large-scale acquisition of biometric data (Oepen, 2013). Since May 2017, federal and national security agencies can access the database (Reuter, 2017a). In March 2017, a law ("Videoüberwachungsverbesserungsgesetz") was passed to extend the deployment of video surveillance and the possibilities for usage and transmission (Reuter, 2017a).
Nonetheless, the algorithmic surveillance software at Berlin-Südkreuz is most probably, if not certainly, in conflict with the current legal framework (Reinsch, 2017). Under German law, individuals are granted the right to informational self-determination, which refers to "the capacity of the individual to determine in principle the disclosure and use of his/her personal data" (BVerfGE 65, 1). This ruling is the "constitutional anchor for data protection" (Hornung & Schnabel, 2009, p. 4) and internationally unparalleled.
Nonetheless, the infrastructure is being expanded for larger-scale public surveillance. In Germany, 900 train stations are already equipped with about 6,000 CCTV cameras (Deutscher Bundestag, 2019). The pilot project at Berlin-Südkreuz, which I will outline in the next section, is aimed at exploring the capabilities of the newest technological options (Stöcker, 2017; Käppner, 2017).

A PANACEA? THE PILOT PROJECT AT BERLIN-SÜDKREUZ
At Südkreuz, three different areas were marked with blue stickers and signs to inform passers-by about the deployed software. One camera was pointed at an entranceway, another at an escalator and the third at an exit (Morgenpost, 2017). With each camera, a different software application was tested. The Ministry of the Interior first declined to disclose the manufacturers but then announced that the software applications employed were supplied by the multinational corporation Dell, the much smaller German security provider ELBEX and another German software company, L-1 Identity Solutions AG (Kurz, 2017). Facial recognition applications can identify a person using digital images or video material.
Generally, there are two approaches. The first one draws on mapping facial features, or landmarks, e.g., jaw, eyes, or nose, which are analysed in relation to each other and then compared to images for a match. The second approach calculates the "essence" of a face: the resulting value is different for each individual and can thus be compared (see Galbally et al., 2014; Gates, 2011).
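To make the distinction between the two approaches concrete, the following minimal Python sketch is a hypothetical illustration: it compares faces once via scale-invariant ratios between landmark distances and once via the distance between embedding vectors. The coordinates, vectors, thresholds and function names are invented for illustration and do not correspond to the systems tested at Südkreuz.

```python
# A minimal, illustrative sketch of the two matching approaches described above.
# All values are invented; real systems rely on trained detection and embedding models.
import numpy as np

def landmark_match(landmarks_a: np.ndarray, landmarks_b: np.ndarray, tol: float = 0.05) -> bool:
    """First approach: compare geometric relations between facial landmarks
    (e.g., eyes, nose, jaw), here reduced to scale-invariant distance ratios."""
    def ratios(pts: np.ndarray) -> np.ndarray:
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        upper = dists[np.triu_indices(len(pts), k=1)]
        return upper / upper.max()  # ratios are unaffected by image scale
    return bool(np.allclose(ratios(landmarks_a), ratios(landmarks_b), atol=tol))

def embedding_match(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Second approach: compare a compact numerical 'essence' of each face
    (an embedding vector); small distances are treated as the same person."""
    return bool(np.linalg.norm(emb_a - emb_b) < threshold)

# Illustrative use with made-up values:
face1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8], [0.5, 1.6]])  # eyes, nose, chin
face2 = face1 * 1.2                                                  # same face, larger in the frame
print(landmark_match(face1, face2))                                   # True: ratios are scale-invariant
print(embedding_match(np.array([0.1, 0.9, 0.3]), np.array([0.12, 0.88, 0.31])))  # True: vectors are close
```

In practice, the "essence" of a face is produced by a trained model rather than supplied by hand, but the comparison step, a distance check against a threshold, remains the same in spirit.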
Three hundred volunteers were recruited to test the different products (Käppner, 2017). A template was extracted from each participant's photograph, building a database (Lobe, 2017). Each volunteer carried a location-tracking transponder, which helped to identify whether the employed software successfully picked up the individual passing through and matched them against the database.
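The role of the transponders in checking the software's output can be made concrete with a small, purely illustrative simulation. The volunteer identifiers, pass counts and per-system hit probabilities below are invented, and the assumption that a combined figure counts a pass as a hit when any one of the three tested systems matches correctly is a hypothetical simplification, not a documented detail of the trial.

```python
# A minimal, illustrative sketch of the evaluation logic enabled by the transponders.
# It assumes (hypothetically) that every pass through a camera zone is logged with the
# transponder-reported volunteer ID and with each system's match result.
import random

random.seed(42)
VOLUNTEERS = [f"vol_{i:03d}" for i in range(300)]   # enrolled templates
N_PASSES = 2000
# Invented probabilities that each of the three systems returns the correct match on a pass:
P_CORRECT = [0.12, 0.40, 0.65]

def simulate_pass(true_id: str) -> dict:
    """One pass: the transponder provides ground truth; each system either matches the
    correct template or fails (None)."""
    matches = [true_id if random.random() < p else None for p in P_CORRECT]
    return {"truth": true_id, "matches": matches}

log = [simulate_pass(random.choice(VOLUNTEERS)) for _ in range(N_PASSES)]

for i in range(3):
    rate = sum(e["matches"][i] == e["truth"] for e in log) / N_PASSES
    print(f"system {i + 1} hit rate: {rate:.1%}")

combined = sum(any(m == e["truth"] for m in e["matches"]) for e in log) / N_PASSES
print(f"combined (any system correct): {combined:.1%}")
```

In such a setup, the combined rate is necessarily at least as high as the best individual system's rate, which is one way to read the later controversy over the reported accuracy figures.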
For their cooperation, each participant was compensated with a 25 Euro Amazon gift card. The individuals who crossed through most often were incentivised with additional prizes (e.g., Apple watches). The selection of incentives sparked some controversy (Horchert, 2017).
In this context, it is noteworthy that identifying specific individuals within a crowd always implies a reference group of individuals from whom they are singled out. Thus, the distinction between participants and non-participants is precarious: essentially every individual who passes through, volunteer or not, is picked up by the cameras and is thus a participant. Moreover, questions of informed consent emerged shortly after the project was rolled out. As it turned out, the volunteers were not informed about the scope of data that the transmitters could collect, which included not only location but also other factors, e.g., speed and temperature (Kühl, 2017).
The goal of the project was to test whether state-of-the-art algorithmic surveillance software works efficiently. In the long run, the idea is to employ systems that spot people in distress, stray, potentially dangerous objects and the suspicious behaviour of potential criminals (Bundesministerium des Inneren, 2017; 2018). As for this specific pilot project, the Ministry of the Interior did not specify beforehand what would constitute "efficiency" and thus a successful pilot (Reuter, 2017b). In the end, the Ministry of the Interior deemed the 2017 pilot successful (Bundesministerium des Inneren, 2018). According to the official test report, the employed systems identified participants with an accuracy of 80% (Bundesministerium des Inneren, 2018). The Ministry's claim sparked widespread criticism, as the accuracy rates of the various software applications employed during the trial's first phase ranged from a meagre 65.8% down to 12%. Only the combination of the three different systems produced higher accuracy rates (Chaos Computer Club, 2018).

Despite the controversy, the Ministry of the Interior proceeded with the second phase of the Berlin-Südkreuz pilot project in 2019 (Vogt, 2019). In January 2020, the Ministry of the Interior announced that although the results of the pilot project seemed promising, facial recognition software would not immediately be adopted at German train stations and airports. Instead, the Ministry made plans to expand video surveillance technology (CCTV) at train stations and in other public gathering spaces (Tagesschau, 2020). Although this turn of events does not indicate a significant change of policy agenda, the Ministry's hesitation towards the implementation of facial recognition software might be a response to the widespread public criticism. In the next section, I will give an insight into the media coverage that the controversial trial's first phase sparked.

DISCOURSE ANALYSIS
First, I will show how different authors present the project at Berlin-Südkreuz and point out the linguistic and rhetorical features, taking a close look at how they convey truth-claims and how they present power structures. For a better overview, I have structured this section according to coding categories, which consist of (1) the relationship between the public and the state, (2) the representation of automated surveillance technology and (3) the problematisations associated with both.

DISCURSIVE IDENTITIES: THE PUBLIC AND THE STATE
First, the identities that are constructed in and through media discourse are quite insightful. A Süddeutsche Zeitung title reads "they see us" (Moorstedt, 2017). A Berliner Zeitung author alludes to the opacity of the algorithmic surveillance employed, calling the project "trials […] in hiding" (Neumann, 2017). Other headlines read suggestively "police seeking volunteers for total surveillance" (Poschmann, 2017) and "go ahead, scan me" (Rabenstein, 2017). One author proclaims that the pilot project marks a "high point of audacity in the relationship between the German state and its citizens" and adds "he [Thomas de Maizière] must not get through with this" (Stöcker, 2017). In Süddeutsche Zeitung, Käppner refers to a "technology of control" (Käppner, 2017), while many others allude to the "surveillance state" (Reuter, 2017a; Stürzl, 2018), playing on similar notions of the state-citizen relationship.
A distinct boundary is drawn between the protagonists: those under surveillance ("us"), presumably the public or citizens, and those who are in control, the authorities or "they" (e.g., Hermes, 2017; Stürzl, 2018). Although subtler, "technology of control" likewise implies that there is one party in control and one that is being controlled (Käppner, 2017). These linguistic acts construct two discursive identities. This is referred to as antagonism, constituting an opposing, even hostile, relationship between two subjects. Each subject is attributed a specific identity, with one decidedly dominating the other (Fontanille, 2006). In critical discourse analysis (CDA), these instances are also referred to as oppositions, as in the creation of opposition through linguistic frames (Evans, 2013).
Across the articles and blog posts, it is difficult to pinpoint the exact agency of the antagonist(s). It remains particularly unclear who "they" are, presumably because responsibility is indistinctly distributed across different institutions. Thus, authors variously refer to the state, the Ministry of the Interior, the federal criminal police office and/or Deutsche Bahn (Lobe, 2017; Moorstedt, 2017; Morgenpost, 2017; Stöcker, 2017). The opposition will appeal to readers, who will most likely feel drawn to identify with the protagonist "us", the public, the citizens. The proclamation "he [Thomas de Maizière] must not get through with this" is an appeal for solidarity, a call for collective action (Stöcker, 2017). These antagonisms, as a linguistic twist, imply asymmetrical power relations and create opposition through language.

THE UNOBSERVABLE OBSERVER
A prominent aspect of linguistic representation is the variety of terms that are used to describe the technology employed at Südkreuz. Therefore, I examined naming, i.e., the analysis of nouns as the "units of language that name things in the world" (Evans, 2013). Through naming, existence is presupposed: if we call something a "technology of control" (Käppner, 2017), we presuppose that it exists (Evans, 2013).
"Intelligent" (Borchers, 2017;Conrad, 2017;Horchert, 2017;Kurpjuweit, 2017;Lobe, 2017;Moorstedt, 2017) on the other hand is an adjective that is often employed in this context to communicate the innovative nature of the system. In this case, a system that does not only collect but also interpret, combine and aggregate data. Ultimately, these adjectives do not necessarily draw a positive picture of the employed technology. The ideological potencies of these adjectives are striking, especially considering that the authors seem to struggle to find a suitable term to capture the employed technology.
In fact, a lack of fitting terminology is characteristic of autonomous systems. They can hardly be captured in words, as the technology disappears from the front end (cameras, control rooms) into the back end (algorithms) (see Galič et al., 2016; Roßnagel et al., 2011). Presumably, the many different applications and functions of automated surveillance technology add to these difficulties: there are software applications for motion analysis, facial recognition, object tracking, classification and prediction. Referring to "the system" or "intelligent software" is a way of linguistically capturing these facets. There are also attempts to put the material hardware components into words, referring to what we can observe: "intelligent cameras" (Moorstedt, 2017; Poschmann, 2017; Stürzl, 2018) or "computers" (Moorstedt, 2017).

Another linguistic twist in this context are personifications, which are "metaphorical representation, common to literary texts, whereby nonhuman objects are ascribed human attributes or qualities" (Baker & Ellece, 2011, p. 60). Examples include the observation that "systems are not faultless but they can learn at a frightening speed" (Moorstedt, 2013), that there are now "objects that stare at us" (Moorstedt, 2017) and that there is an "all-seeing, always alert digital guard" (Stöcker, 2017). With the trend towards algorithmic surveillance, the technological focus shifts away from cameras and their human counterparts in the control room. What can be grasped under the term algorithmic surveillance describes the move towards autonomous, computer-based surveillance, where algorithms take over the formerly human task of analysis and interpretation (see Norris & Armstrong, 1999). The "unobservable observer" is characterised by subtle front ends and black-boxed algorithms. Those who come in touch with the system can hardly make sense of the technology. The diffusion and automation of surveillance technology, and with it a sense of mystification and alienation, is communicated through language.
The employed adjectives and personifications leave the impression that the technology has assumed agency; control over these surveillance systems seems like an illusion, conveying a sense of urgency.

PROBLEMATISATIONS: DISCIPLINE AND CONTROL
The ubiquitous, intangible nature of the surveillance systems in question may be key to the speculative register in which this discourse is held. The discourse is characterised by modalities, which do not necessarily refer to reality but to contingencies or possibilities. They express information "about what could be or must be the case, as opposed to being about what actually is the case" (Swanson, 2008, p. 1193).
One fear is central to the debate and frequently found throughout the media coverage: that discipline and control will be transferred to an automated process. Most authors at least touched upon the (in)capabilities of algorithms to classify facial expressions, movements and interactions, and to enable authorities to exercise discipline and control based on these interpretations, an approach commonly referred to as predictive policing (Perry et al., 2013).
Süddeutsche Zeitung author Moorstedt questions the capabilities of a computerised interpretation of our world. The author remarks, "a hug in front of an ICE 3 that is almost leaving the station could look like a brawl to the computer. Those who run on the platform, trying to catch the train, will possibly be marked as on the run" (Moorstedt, 2017). In a blog post, one author calls for putting a stop to a trial that turns Berlin-Südkreuz into a "bewilderment train station" (Demuth, 2017). In Spiegel Online, the author speculates about the emergence of "a magic system of artificial intelligence and real-time data collection, which one day will predict who will do evil next" (Stöcker, 2017). The author refers to predictive policing, the algorithmic capability to detect and predict potential criminal activity. In the Süddeutsche Zeitung article, the fear of predictive policing through algorithms is expressed through rhetorical questions, which add dramatic quality, emotionally engaging the reader: "What will life look like in times of intelligent cameras, where one is not only always watched but also always evaluated?" (Moorstedt, 2017). The author answers promptly: "One ought to behave as unsuspicious as possible" (Moorstedt, 2017). This rhetorical twist raises the reader's curiosity; the answer is phrased like an ominous wake-up call. Playing along similar lines, the Süddeutsche Zeitung reader is reminded that "everyone is initially suspicious" (Kühl, 2017).

Some interpretations go even further: "Algorithmic pattern recognition raises the question of who defines criminality and if police power is impermissibly delegated to machines" (Lobe, 2017). The author suggests that algorithms could define criminality, traditionally a responsibility of the judiciary, which interprets the law, or the legislative, which passes it. "Interpretation of criminality" could also refer to a situational interpretation of the legitimacy of acts, an executive task. Interestingly, the author speaks about the delegation of "police power" instead of sheer police work, which would be a more fitting term for merely interpretative algorithmic tasks. Accordingly, the algorithm is not only staged as a computerised process of police supervision. The authors convey that algorithms could not only be used to support law enforcement but could ultimately become law enforcement. This is carried to the extreme, evoking Kafkaesque or Orwellian dystopias and the proclamation that "dystopia threatens to become reality" (Moorstedt, 2017).
Some of the headlines read "Orwell and Kafka meet at the train station" (Stöcker, 2017) and "Big Brother at the train station" (Morgenpost, 2017). Along the same lines, one author asserts that "Big Brother is installed at the train station" (Prantl, 2017). In Morgenpost, the totalitarian visions are phrased more subtly: regarding the recent expansion of surveillance technologies in Germany, the reader is soberly reminded that "facial recognition software already opens up unforeseen opportunities in many dictatorships" (Morgenpost, 2017). These linguistic frames, which suggest dystopian visions in which those in control use algorithmic surveillance to exercise totalitarian control, privilege one understanding of reality over another. The reader is left with unsettling speculations about a future of algorithmic discipline and control.
In these articles, the value judgements elicit emotion, while the authors speculate about the possibilities of the technology employed at Berlin-Südkreuz in modalities. The oppositions convey asymmetrical power-relations: there is one party who is controlled and one who exercises control.
The various terms that are applied in this context attempt to capture the pervasive, diffuse nature of algorithmic surveillance. The added adjectives convey associations of autonomous, threatening technology, and the personifications add to this picture: technology has seemingly assumed agency. The problematisations, mainly expressed through modalities, point at uncertainties about the future. The main themes are speculations about predictive policing, doubts about the ability of algorithms to appropriately interpret behaviour, and the associated worry that it will become necessary to anticipate how one's behaviour will be read so as not to raise suspicion. This is further escalated into visions of algorithmic law enforcement and dystopian futures.
This analysis can give us some insight into the arguments, or truth-claims, that are put forward in this context. The critical tone that I found in varying degrees throughout all articles and blog posts does not, however, imply that there is a societal opposition to the adoption of automated surveillance technology; it merely offers a glimpse into some discursive frames, wider social practices and the negotiation processes that the pilot project spurs. The next section details how (post-)panoptic theory can be utilised to illuminate the topic of algorithmic surveillance technology.

MOVING BEYOND THE PANOPTICON
In the following paragraphs, I want to situate this case, and algorithmic surveillance more generally, within post-panoptic social theory, drawing on the conceptual threads that Shoshana Zuboff (1988) derived from her empirical work. To this end, I will briefly retrace the panoptic journey from its origins to post-panoptic theory. The Panopticon originated as Jeremy Bentham's architectural design for a prison: a circular building of cells arranged around a central watchtower. Occupants cannot see each other, as they are divided by walls. Yet, they can always be watched from within the control tower. The central tower is equipped with lights that keep the occupants from knowing whether they are being watched or not (Galič et al., 2016, pp. 12-13).
This idea of spatial, passive control was later theoretically refined by Foucault in Discipline and Punish (Foucault, 1995). He used the Panopticon as a metaphor to analyse mechanisms of social control and their relation to power and knowledge. Foucault notes how the Panopticon allows power to become anonymous, as occupants can be efficiently controlled without necessarily being watched. Those "subjected to a field of visibility […] simultaneously play both roles"; they become the principle of their own subjection (Foucault, 1995, pp. 202-203). With the emergence of the internet, surveillance lost the Panopticon's physical and spatial characteristics. Surveillance is turned into a networked part of the infrastructure. The physical, if hypothetical, prison guard becomes abstract; the metaphor, flawed.
Many scholars have made important contributions to the study of contemporary distributed forms of surveillance. Noteworthy theoretical frameworks come from Deleuze, Kallinikos, and Zuboff, among others (Deleuze, 1992; Kallinikos, 2004, 2007; Zuboff, 1988). These authors, however, all work with different takes on moving beyond the Panopticon.
In her book In the Age of the Smart Machine, Zuboff makes an astonishing empirical and theoretical contribution to surveillance as a means of managerial control. Zuboff (1988) studied the transformation of blue- and white-collar work through the application of information technology within corporations.
Surprisingly, her ideas are still relevant today, almost 30 years later. Yet, many of Zuboff's conceptualisations need to be adapted if we want to think about algorithmic surveillance, which in many ways goes far beyond the domain of her studies. Zuboff (1988) considers the rationale behind the adoption of surveillance within an organisation. She remarks that the burden of authority created "the yearning for omniscience in the face of uncertainty, the conformity-inducing power of involuntary display" (Zuboff, 1988, p. 324). Correspondingly, the narrative of increasing uncertainty in times of globalised threat seems to be a key motivator for the adoption of surveillance technologies like the one deployed at Berlin-Südkreuz. Of course, Zuboff made this observation with reference to the exertion of managerial control amid the uncertainties of process optimisation. The scale and context are different, yet the prospect of regaining control might still appeal to authorities.
She also invokes the panoptic schema, which she describes as "mechanisms or instruments that render visible, record, differentiate and compare […] whenever one is dealing with a multiplicity of individuals on whom […] a particular form or behaviour must be imposed" (Zuboff, 1988, p. 322).

In Zuboff's (1988) study, foremen were watching their workers, and different managerial levels were using the data to check on the lower levels. Zuboff advocated for horizontal visibility as vertical visibility expands, granting data access to those on the same organisational level (Zuboff, 1988, p. 350). Yet, there is no horizontal visibility in the pilot project at Berlin-Südkreuz. Algorithmic surveillance produces the "unobservable observer". Unlike other products of digitalisation, e.g., mobile applications, there is no accessible front end, no window into the system that enables the user to make sense of it. In this context, one could take a post-panoptic stance and argue that the diffusion of the internet works both ways: the extensive online media coverage shows that the many [publics] are watching the few [e.g., state authorities] just as much as the few are watching the many. Boyne (2000) makes this point in his piece Post-Panopticism, in which he attempts to redress panopticism. This argument holds some merit.
However, the reluctance of those responsible for the pilot project to give out information illustrates that two-way visibility does not necessarily result in an eye-level relationship between the state and the public(s) (Kurz, 2017). Not only could everyone be unknowingly watched, but it is also difficult to draw a boundary between those who are watching and those who are not. As large interoperable information infrastructures emerge, data is no longer context-bound. It can not only be accessed but can also leave its context, becoming aggregated and intertwined (Kallinikos, 2010). The project at Berlin-Südkreuz is the product of a cross-institutional, state-corporate partnership. The construction of the discursive identities, with the citizens as the protagonists and differing ideas about who the antagonist is, is exemplary of the diffuseness and cross-contextuality that characterise contemporary algorithmic surveillance.
Ideally, managerial control in the relationship between the observer and the observed is mutually beneficial. The data generated through workplace surveillance could be used to assign promotions, bonuses or, failing that, coaching (Zuboff, 1988, p. 324). Algorithmic surveillance in public spaces, by contrast, benefits those who are being observed only hypothetically. The ease of moving around anonymously, in relative privacy in a public space, is certainly gone, while it remains questionable how algorithmic surveillance can prevent crime and thus benefit both those in control and those being controlled by increasing security. London, for instance, has a very tight-knit surveillance infrastructure. Yet, horrible attacks like the acid attack of 23 September 2017 keep happening (Sharman & Roberts, 2017). How could algorithmic control enable authorities to prevent crime? Zuboff (1988) observes this fundamental challenge as well.
She notes that "the panopticon also enabled managers to see more of the processes and behaviours that affected their areas, without necessarily making it any easier to influence or control those events" (Zuboff, 1988, p. 348).
We need to critically question if, and how, the technology-focused, top-down ideas of the Panopticon apply to contemporary surveillance technologies. They are hardly applicable to diffuse, automated, computerised systems. The emergence of plural agency, anticipatory functionalities and obscured spatial boundaries are just some instances that show that the conception of the monolithic Panopticon is not always productive. This case illustrates that post-panoptic theorists such as Zuboff (1988) can still provide us with helpful conceptual lenses through which to consider contemporary algorithmic surveillance technology. The next challenge will be to find new ways to approach the emerging social lifeworld of what some already term the "surveillance society" (Galič et al., 2016).

Amidst these developments, it is important to remember that the implementation of surveillance technology is a social practice. It is not only an issue of privacy; it is also an issue of democracy in itself and pertains to the fundamental right to self-determination. All the social problems that this software ought to solve, such as transnational corporate crime and violent acts, require social intervention. This discourse exhibits a sombre tone: the safety benefit is hypothetical, while the feeling of surveillance is tangible. This goes to show that technology never exists in isolation; it is always embedded in the social world. Social processes, and discourses as negotiation, are relevant to technological developments (MacKenzie & Wajcman, 1999, p. 23).

CONCLUSION
Finally, this small glimpse at the discourse on the pilot project at Berlin-Südkreuz, and the themes that dominate it, shows that valuable insights for future research and exploration can be gained from the study of discourse. This case study also provides a baseline against which future cases could be compared. For instance, it would be compelling to research how media portrayals change over time and vary across different regions and nations. This discourse also offers a window onto underlying socio-technical imaginaries. To this end, it would be worthwhile to investigate how the media representation of this project compares against expert and policy discourses. A close look at the truth-claims that other actors put forward, e.g., the state or manufacturers, can offer perspectives on the social construction and negotiation of the issue.
This could give us a valuable insight into the negotiation of the cultural, political and social conditions under which the next generation of surveillance technology is developed.
The technology at hand is one in the making; public discourse is not only important, it is a necessity. Technology must not be developed in the isolation of state research facilities and private corporations. Citizens must be granted input on questions that concern them so fundamentally. This controversial pilot project illustrates that it is crucial to take a substantive approach to questions of science and technology. A comprehensive participation process would add new knowledge and improve decision quality.