Journal of Risk Research
https://doi.org/10.1080/13669877.2024.2350720

Possible harms of artificial intelligence and the EU AI act: fundamental rights and risk
Isabel Kusche
University of Bamberg, Germany

ABSTRACT
Various actors employ the notion of risk when they discuss the future role of Artificial Intelligence (AI) in society – sometimes as a general pointer to possible unwanted consequences of the underlying technologies, sometimes oriented towards a political regulation of AI risks. Mostly discussed within a legal or ethical framework, we still lack a perspective on AI risks based on sociological risk research. Building on systems-theoretical thinking about risk and society, this article analyses the potential and limits of a risk-based regulation of AI, in particular with regard to the notion of harm to fundamental rights. Drawing on the AI Act, its earlier drafts and related documents, the paper analyses how this regulatory framework delineates harms of AI and which implications the chosen delineation has for the regulation. The results show that fundamental rights are invoked as legal rules, as values and as a foundation for trustworthiness of AI in parallel to being identified as at risk from AI. The attempt to frame all possible harms in terms of fundamental rights creates communicative paradoxes. It opens the door to a political classification of high-risk AI systems as well as a future standard-setting that is removed from systematic concerns about fundamental rights and values. The additional notion of systemic risk, addressing possible risks from general-purpose AI models, further reveals the problems with delineating harms of AI. In sum, the AI Act is unlikely to achieve what it aims to do, namely the creation of conditions for trustworthy AI.

ARTICLE HISTORY
Received 21 August 2023
Accepted 25 April 2024

KEYWORDS
Artificial intelligence; risk; AI act; fundamental rights; trustworthiness; values

Introduction
A few years ago, Curran (2018) diagnosed a lack of political consideration for and public debate
of the risks of digital economies in general and the rise of artificial intelligence in particular,
drawing on Beck’s risk society approach. The situation has changed in the meantime, with European
legislation aiming to regulate digital services (Digital Services Act), digital markets (Digital Markets
Act) and the application of artificial intelligence (AI Act) in order to prevent harm and not leave
socio-technological trajectories to the profit-oriented decisions of digital platforms and corpora-
tions. The regulation of AI at the EU level in particular draws on the notion of risk. Starting with
the Ethics Guidelines on Trustworthy AI by the High-Level Expert Group on AI (2019) and the
White Paper on AI by the European Commission (2020), the discussion about regulating AI has
revolved around the acknowledgement of risks arising from it. The explanatory memorandum of
the draft AI Act explicitly proposed a risk-based regulatory framework, distinguishing between
three risk classes, namely AI systems posing unacceptable risks, high risks, and low or minimal
risks (European Commission 2021, 12). The final version of the AI Act stipulates that AI systems
posing unacceptable risks shall be banned while high-risk systems shall fulfil a number of require-
ments. They shall be monitored by a risk management system that covers their entire lifecycle
(Art. 9), trained and tested with data sets appropriate for their purpose (Art. 10), accompanied
by technical documentation (Art. 11) and information to deployers about possible risks (Art.
13), allow for automatic record-keeping of system events (Art. 12), be subject to human oversight
when in use (Art. 14) and undergo a conformity assessment procedure before they are placed
on the market or put into service (Art. 16). In parallel, general purpose AI models, which can be
fine-tuned to fulfil tasks in various areas of application, are classified into those posing systemic
risks and those that do not (Art. 52a), with differential obligations for both.
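For orientation, the tiered structure just described can be summarized in a short sketch (Python, purely illustrative; the article numbers are those cited above, and the mapping is a simplification rather than a legal taxonomy).

```python
# Illustrative sketch of the AI Act's tiered risk architecture as described above.
# The mapping is a simplification for orientation only, not a legal taxonomy.

RISK_CLASSES = {
    "unacceptable": "prohibited (banned outright)",
    "high": "permitted subject to mandatory requirements (see HIGH_RISK_OBLIGATIONS)",
    "low_or_minimal": "permitted, largely voluntary measures",
}

# Obligations attached to high-risk AI systems, keyed by the articles cited above.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9": "risk management system covering the entire lifecycle",
    "Art. 10": "training and testing with data sets appropriate for the purpose",
    "Art. 11": "technical documentation",
    "Art. 12": "automatic record-keeping of system events",
    "Art. 13": "information to deployers about possible risks",
    "Art. 14": "human oversight when in use",
    "Art. 16": "conformity assessment before placing on the market or putting into service",
}

# General-purpose AI models are classified separately (Art. 52a):
GPAI_CLASSES = ("with systemic risk", "without systemic risk")

if __name__ == "__main__":
    for article, obligation in HIGH_RISK_OBLIGATIONS.items():
        print(f"{article}: {obligation}")
```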
An approach that draws on Beck’s notion of risk society could criticize limitations in the
existing political consideration of AI risks and insist on the priority of other concerns, such as
a disruption of labour markets. Yet, this article aims at a different type of reflection on AI and
risk. It contributes to explaining the specific shape that the debate on AI and its risks has taken
in Europe and the resulting limitations. It turns to a tradition in the sociology of risk that
stresses how the notion of risk is tied to differences in observation and attribution (Luhmann
2005). This systems-theoretical approach to risks focuses on the diverging ways in which different
actors observe risks. These different perspectives take centre stage when it comes to regulation.
Contrasting them will reveal how the European regulation of AI attempts to extend the con-
sideration of risks much further than in the case of other technologies but ends up with com-
municative paradoxes as a result.
There is no objective way to assess and measure risks. Although this is an insight that has
been illustrated for various risk objects (Boholm and Corvellec 2011; Christoffersen 2018;
Hilgartner 1992), it is particularly striking with regard to AI. Its regulation aims to consider not
only risks to the safety and health of individuals or to the environment but complements them
with the notion of risks to fundamental rights. Annex III of the AI Act, listing high-risk AI sys-
tems, includes for example AI systems that are supposed to determine access to educational
institutions, assess students or select applicants in recruitment processes. In all these instances,
the concern is not the health or safety of the individuals who are affected by these decisions
but their fundamental rights. As recital 4 of the AI Act states, ‘artificial intelligence may generate
risks and cause harm to public interests and fundamental rights that are protected by Union
law. Such harm might be material or immaterial, including physical, psychological, societal or
economic harm’. With the addition of systemic risks the AI Act extends the scope of risk-based
regulation even further to include in principle harm of any kind. This understanding of risk
challenges fundamental theoretical notions in (sociological) risk research. The Luhmannian strand
of systems theory seems particularly well equipped to meet this challenge, not because of its
notion of systems but thanks to the theory of communication at the heart of it. In stressing
the role of attribution in the communication about risks, it contributes to a better understanding
of the implications and potential problems that the risk-based regulation of AI brings about.
The next section provides a brief overview of the general debate on regulating AI, then
introduces key systems-theoretical concepts for a sociology of risk and applies them to the case
of AI. It demonstrates particular challenges that AI poses for a risk-based regulation and con-
cludes that they imply an indeterminate expansion of possible harms. After a brief section on
the material and method employed, the article subsequently discusses how the European
approach to risk-based AI regulation suggests delineating harms. It identifies references to legal
rules, basic values and trustworthiness as complementary but ultimately paradoxical attempts
at delineation, which rely on fundamental rights as a common denominator. Finally, the article
highlights the resulting ambiguities that are likely to hamper effective risk-based regulation.
Regulating AI and the communication of (AI) risk


In recent years, strands of research and public activism have converged towards highlighting
ways in which AI systems pose risks – to individuals, certain groups or even democratic society
as we know it (Bender et al. 2021; Burt, Leong, and Shirrell 2018; EDRi 2021; Nemitz 2018). At
the same time, the spread of AI systems has often been treated as inevitable and a keystone
of future economic growth (Bareis and Katzenbach 2022; Greene, Hoffmann, and Stark 2019).
Despite an abundance of science- and industry-led ethical guidelines (Hagendorff 2020; Jobin,
Ienca, and Vayena 2019), the uncertainty about the social implications of AI has only grown,
and the turn to ethics was increasingly suspected to be a strategy to avoid legal regulation
(Rességuier and Rodrigues 2020).
The EU in particular has asserted its willingness to go beyond ethical guidelines and develop
a regulatory framework for governing AI. The AI Act is its main building block although other
laws and governance instruments also play a role in the governance of AI (Veale, Matus, and
Gorwa 2023). The drafts of the AI Act predominantly drew attention from a legal perspective,
with the discussion focusing on the potential and likely shortcomings of the proposed regulation
(Hacker 2021; Laux, Wachter, and Mittelstadt 2023, 2024). In contrast, the AI Act has hardly been
discussed from the point of view of sociological risk research even though the notion of risk
plays a prominent part in it.
Although the most common label for Luhmann’s work is systems theory, his conception of
risk is anchored in a theory of communication. A previous overview of the systems-theoretical
approach to risk (Japp and Kusche 2008) focused on three key insights, based on the analytical
distinction between dimensions of meaning in communication: the time, the factual and the
social dimension. The time dimension points to risk as an inherently modern idea. The notion
of risk replaces older descriptions of the relation between present and future by stressing a
fundamental uncertainty about this relation as well as its dependence on decision-making
(Luhmann 1992). While theoretical approaches to risk that draw on Foucault emphasise how
decision-making is based on calculation and the notion of probabilities (O’Malley 2009), the
even more basic observation is that events are not attributed to fate, nature, divine intervention
or any other force outside of human reach. More and more events are attributed to human
decisions; anticipating this self-attribution and the attribution by others renders the distinction between past, present and future a ubiquitous aspect of communication (Luhmann
2005, 37–49).
In the factual dimension, dependence on decision-making implies that the opposite of risk
is danger, with ‘risk’ denoting that possible harms will be attributed to one’s own decisions and
‘danger’ standing for harms attributed to decisions of others (Christoffersen 2018, 1236–39;
Luhmann 2005, 21–27). Concurrently, the social dimension is shaped by the inevitable distinction
between decision-makers and those affected by decisions they were no part of (Christoffersen
2018, 1242–43; Luhmann 2005, 101–9). Divergences between those who decide and those who
are affected by these decisions become increasingly relevant for communication once more and
more aspects of the future are understood as the result of present decision-making. A potential
for conflicts is ineradicable as no amount of information about safety features or thorough testing
can overcome the distinction between self-attributed and other-attributed anticipated harms.
Turning to AI as an example, its treatment with regard to the time dimension stands out as
overwhelmingly focused on the disruptive effects of AI. The European Commission’s (2020) White
Paper is only one instance of a blending that promotes the development and uptake of AI as
the basis of future prosperity on one hand and warns against the consequences of not adopting
AI or not adopting it in the right way on the other. As Schiølin (2020, 545) noted with regard
to the related notion of a fourth industrial revolution, ‘the future is taken as given, and it is
society’s job to adapt or to face being made redundant’. However, this ‘future essentialism’
(Schiølin 2020) is complemented by the call for regulating AI as part of the adaptation process.
In the factual dimension, new regulation indicates that political actors, in particular but
not exclusively at the EU level, aim to make decisions on matters on which they have not
decided so far. Warnings against possible dangers of AI are based on the distinction between
risk and danger, with the latter arising when others are the ones deciding. These others are
not named directly but alluded to, for example in the statement that ‘[t]he EU will continue
to cooperate with like-minded countries, but also with global players, on AI, based on an
approach based on EU rules and values’ (European Commission 2020, 8). The others, posing
dangers, are thus private companies or countries developing AI whose values do not align
with European values.
With regard to the social dimension, those affected by decisions about the development
and deployment of AI applications encompass potentially everyone, not only in Europe but
globally, considering the announced inevitability of a future in which AI changes everything.
The impact is supposed to be overwhelmingly positive, but there is no one who is a priori
excluded from possible negative effects. Consequently, the White Paper acknowledges that both
citizens and companies are concerned about various uncertainties (European Commission
2020, 9).
In all three dimensions, a risk-based regulation of AI poses particular challenges. In the
run-up to the first draft of the AI Act, concerns centred on predictive AI with its potential to
assist decision-making and reduce its contingent character in favour of automated calculations
(Campolo and Crawford 2020; Alon-Barkat and Busuioc 2023). Contingency is displaced to the
opaque choices made with regard to the training data and the algorithms employed (Denton
et al. 2021), resulting in various biases that skew the outputs (Aradau and Blanke 2017;
Eubanks 2019; Zajko 2022) and an extrapolation of past injustices into the future (Crawford
2021, 123–49). Accordingly, the European Commission’s White Paper (2020) listed ‘loss of
privacy, limitations to the right of freedom of expression, human dignity, discrimination for
instance in access to employment’ (European Commission 2020, 10) as possible harms of AI,
separating them as immaterial risks to fundamental rights from material risks to safety and
health. The European Commission (2021) carried this distinction over into Rec. 1 of its draft
of the AI Act.
While the European institutions were still working on the AI Act, the release of ChatGPT and
subsequently other generative AI systems called this understanding of AI risks into question
(Helberger and Diakopoulos 2023). The amendments of the European Parliament (2023) added
democracy, the rule of law and the environment to the list of entities potentially at risk from
AI. Rec. 1 of the final AI Act includes these additions under the umbrella of fundamental rights
enshrined in the Charter of Fundamental Rights of the EU. Since the Charter addresses health
and safety only in the context of work, they continue to be listed separately although they are
also fundamental rights. The AI Act thus frames all possible harms of AI systems as harms to a
fundamental right and confirms a risk-based approach to regulation (recital 14). As a result of
the trilogue negotiations between the European institutions, however, it adds rules regarding
general purpose AI models and refers to systemic risks some of them may pose. Although the
notion of systemic risks has been discussed in (sociological) risk research (Centeno et al. 2015;
Renn et al. 2019), the rare links made to the topic of AI so far have only considered the use of predictive
AI, for example in fostering sustainability (Galaz et al. 2021). The consideration of systemic risk
in regulating AI, triggered by the sudden prominence of generative AI and general purpose
models (Helberger and Diakopoulos 2023), has no obvious connection to this systemic risk
literature.
Compared to previous cases of regulating the risks of new technologies, like nuclear energy
or genetic engineering, which focused on hazards to the health and safety of human beings
and the environment, the notion of risks to fundamental rights expands the scope of potential
effects to be considered in a risk assessment enormously. The last-minute addition of systemic
risks broadens it even further.
Against this backdrop, the paper focuses on the following research questions:

• How does the regulatory framework for AI adopted by the EU delineate harms of AI,
based on the notion of risks to fundamental rights, and what are the consequences?
• What consequences does the additional notion of systemic risks have for the delineation
and the regulatory framework?

Material and method


To answer the research questions, I analysed the documents at the centre of the EU’s regulatory
efforts with regard to AI qualitatively, using the dimensions of meaning in communication as overarching
categories for a close reading of how a delineation of harms is attempted. The key documents,
apart from the agreed AI Act itself, are the original draft, proposed by the European Commission
on 21 April 2021, including its Annexes, its revision by the Council of the EU, published on 6
December 2022, and the amended draft by the European Parliament from 14 June 2023.
Furthermore, I used the leaked document of the EU institutions’ trilogue agreement, which a
journalist had made public in January 2024 and which contains a four-column table comparing
the three drafts and the agreed text for the AI Act. Since the goal was not a legal but a socio-
logical analysis, I was less interested in specific wordings than in possible changes with regard
to the basic understanding of risks that the texts implied. Other documents were included in
the analysis because the accompanying memorandum of the original draft referred to them,
namely the 2020 White Paper of the European Commission and the 2019 Ethics Guidelines for
Trustworthy AI by the High-Level Expert Group on AI.

The delineation of AI-related harms


Time dimension: legal rules
The general function of law is the reduction of uncertainty at the level of expectations by
guaranteeing the temporal stability of certain norms even in cases of their violation (Luhmann 2004, 142–47). Accordingly, in the case of AI, law is expected to 'ensure legal certainty to facil-
itate investment and innovation in AI’ (European Commission 2021, 3). The AI Act is presented
by the Commission as
a balanced and proportionate horizontal regulatory approach to AI that is limited to the minimum nec-
essary requirements to address the risks and problems linked to AI, without unduly constraining or hindering
technological development or otherwise disproportionately increasing the cost of placing AI solutions on
the market. (European Commission 2021, 3)

In opting for a predominantly risk-based regulation of AI – as opposed to, for example, outright bans or regulatory sandboxing (Kaminski 2023), which play subordinate roles in the proposal – the Act additionally emphasises the time dimension. It is supposed to provide 'flexible
mechanisms that enable it to be dynamically adapted as the technology evolves and new
concerning situations emerge’ (European Commission 2021, 3). Thus, the proposal acknowledges
the difference between present and future and the resulting uncertainty even with regard to
the object of regulation. The primary promise is legal certainty, compatible with flexibility and
future adaptation.
However, legal certainty takes on a double meaning in the case of AI. As the explanatory
memorandum of the original draft states, one objective of the regulatory framework is to ‘ensure
that AI systems placed on the Union market and used are safe and respect existing law on
fundamental rights and Union values’ (European Commission 2021, 3). Fundamental rights are
thus invoked as a seemingly time-invariant benchmark. Concurrently, however, fundamental rights function as the blanket term for everything that AI systems can potentially harm.
Uncertainty with regard to whether AI systems will violate or adversely impact them in the
future is the reason for new regulation in the first place.
Considering the function of law, the notion of AI harms as adverse impact on fundamental
rights results in a communicative paradox. In dealing with the uncertainty of the future by
defining normative expectations backed by the legal system, the regulation explicitly confirms
that the regulated AI systems render this future even in legal respects inherently more uncertain
than it was before. The regulation does not simply reinforce existing law but explicitly acknowl-
edges that some basic normative expectations are in danger of becoming untenable due to AI.
Of course, the seeming paradox is less paradoxical once the different levels of abstraction
at which the respective laws operate are considered. Fundamental rights are one level removed
from laws prohibiting or prescribing certain actions. They are normative expectations stated in
constitutions and equivalent documents at the supra- or transnational level, such as the Charter
of Fundamental Rights of the European Union. In fact, they are not actually rules but principles:
‘Fundamental rights do not make ‘if-then statements’ but impose aims on their addressees’
(Engel 2001b, 152–53). As such, their stabilizing effect in relating present and future is relatively
weak at the level of action. When constitutional courts decide cases, they apply the propor-
tionality principle to decide whether a legal rule is interfering with a fundamental
right. Interferences with fundamental rights are acceptable when the legislated rule serves the
legislative end in a way that is not out of proportion and there is no less intrusive measure
(Engel 2001a, 188).
In referring to fundamental rights as the entities that might be harmed or 'adversely impacted' by AI, the regulation conceives these principles as time-invariant benchmarks in need of additional
legal protection. Yet even before the advent of AI, there was no legal norm preventing all
restrictions of fundamental rights. It was a matter of court decisions at what point a fundamental
right was actually harmed. Within the framework of a risk-based regulation, it is supposed to
be a matter of risk assessment, which – although it may be part of a legal procedure – would
imply that, in the factual dimension, it is not identical to a pure balancing exercise of competing
interests and principles.

Factual dimension: values


The terms ‘fundamental rights’ and ‘values’ are often used almost interchangeably or discussed
in parallel in the context of constitutions (Luhmann 2004, 442–43). For example, the Charter of
Fundamental Rights of the European Union ‘organizes fundamental rights around six key con-
cepts – dignity, freedoms, equality, solidarity, citizens’ rights, and justice – that can be understood
as the values providing a foundation for fundamental rights and that those rights articulate’
(Facchi and Riva 2021, 3). Similarly, the objective to ensure that AI systems ‘respect existing law
on fundamental rights and Union values’ (European Commission 2021, 3) closely couples rights
and values.
Constitutional courts often have to make decisions about fundamental rights that amount
to their case-specific ranking, a ‘balancing exercise’ (Engel 2001a, 191) that is due to the
impossibility of deducing concrete normative judgements from general normative principles. The
political system processes its own balancing exercises since the commitments of political pro-
grammes refer to values as well (Luhmann 2004, 121). Politically, values denote preferences
considered legitimate and thus not only personal but recognized by a collective that subscribes
to those values (Luhmann 2000, 178). Yet, values cannot guide concrete decisions since any
decision involves more than one value. Political proposals for collectively binding decisions
pick suitable values as justification without explicitly rejecting other, conflicting values.
Opposition to proposals can draw on those values to justify their criticism (Boltanski and
Thévenot 2006; Luhmann 2000, 178–83). Values deferred in taking a particular decision can
justify another decision at a later point in time, with the added justification that they took a
backseat for too long. Decision-making about basic values would take the form of political
decisions by legislative majority, prioritising some values until a later political decision, perhaps
as a result of a change in government or a change of times, revises the order of values to be
considered.
Against this backdrop, the delineation of harms of AI in terms of fundamental rights results
in a second paradox. If fundamental rights and the underlying basic values are at risk, the
political system is at risk that in the future it will be unable to make collectively binding deci-
sions promoting these values or at least that such decisions will be inconsequential. The risk
to basic values translates into the risk that a prioritization of some values in decision-making
at a certain point in time will no longer be a mere deferment of other values but an irreversible
ranking of priorities.
At first sight, this is not so different from the regulation of new technologies in the past. If
a major accident happened at a nuclear power plant, despite the regulatory measures in place,
and whole countries were contaminated by radioactivity as a result, subsequent political deci-
sions prioritising the health of the population would also be more or less inconsequential,
depending on how severe the contamination was. Yet, risk regulation in the past typically separated
the (scientific) assessment of risks from their (political) evaluation. The risk of a major accident
would be calculated or, based on expert opinion, estimated first; a political evaluation would
subsequently decide whether the risk was worth taking, against the backdrop of an implicit
and reversible ordering of values (Tierney 1999, 219–22).
The separation was of course never natural but a result of social convention and power
asymmetries between experts and laypersons, as pointed out by social science research on risk
regulation (Jasanoff 1999; Wynne 2001). Nor was the separation always strict in practice, as the
contested prohibition of using growth hormones in cattle and of importing hormone-treated
beef by the European Community in the 1990s exemplifies. The ensuing dispute between the
EC and the US before the WTO centred on differing interpretations of when scientific evidence
is sufficient and what an appropriate risk assessment looks like in accordance with WTO rules,
since a purely political prioritization of certain values over free trade would have violated those
rules (Brown Weiss, Jackson, and Bernasconi-Osterwalder 2008, Part III).
In contrast, the AI Act upholds and abolishes the separation between (scientific) assessment
of risks and (political) evaluation of risks at the same time. It is ostensibly risk-based and dis-
tinguishes risk classes into which AI systems are supposed to be sorted depending on how
much risk they pose for (European) values. The resemblance of this sorting to a formal risk
analysis is however superficial. It cannot be distinguished from a political evaluation because
the risks to be considered are risks to (political) values. Therefore, the sorting is easily recog-
nizable as political, too.
Art. 6 of the AI Act introduces classification rules for high-risk AI systems. On one hand, it
refers to a list of areas in which AI systems are considered to pose a high risk in Annex III. On
the other hand, the article enumerates criteria that lead to the exclusion of an AI system from
the high-risk category despite its intended use in one of the areas listed in Annex III. These
criteria were introduced during the negotiation process and were not part of the original draft
by the European Commission. Both with and without them, the conceptualization of what
constitutes high risk is unusual in several regards. Firstly, it transforms the question of potential
harm to fundamental rights into a discrete, binary variable: either such a potential exists or it
does not; there is no attempt to quantify it in any way independent of the categorization as
high-risk. Secondly, it gives no justification regarding the AI systems listed in Annex III and their
selection. The general areas, which are further specified in terms of types of application, are
biometrics insofar as their specific use is not completely banned by Art. 5, critical infrastructure,
education and vocational training, employment and workers management, access to essential
private and public services, law enforcement, migration and asylum, and the administration of
justice and democratic processes.
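As noted above, the resulting classification is binary. A minimal sketch (in Python, an illustrative reconstruction of the logic described in this section, not the legal test itself; the exclusion criteria appear only as an abstract placeholder predicate) makes this explicit: membership in an Annex III area, minus the exclusion criteria added during the negotiations, yields 'high-risk' or not, with no intermediate quantification.

```python
# Illustrative sketch of the Art. 6 / Annex III classification logic as described
# in the text: intended use in a listed area, unless an exclusion criterion applies,
# yields a binary outcome.

ANNEX_III_AREAS = {
    "biometrics (insofar as not banned by Art. 5)",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "access to essential private and public services",
    "law enforcement",
    "migration and asylum",
    "administration of justice and democratic processes",
}

def is_high_risk(intended_area: str, meets_exclusion_criteria: bool) -> bool:
    """Binary classification: no degrees of risk, no quantification.

    `meets_exclusion_criteria` stands in for the Art. 6 criteria that remove a
    system from the high-risk category despite its intended use in a listed area.
    """
    return intended_area in ANNEX_III_AREAS and not meets_exclusion_criteria

# Example: an AI system used to assess students is high-risk unless excluded.
print(is_high_risk("education and vocational training", meets_exclusion_criteria=False))  # True
```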
None of the items in Annex III is implausible, but there is no rule from which their inclu-
sion can be derived, which is also why negotiations between EU Commission, Parliament
and member states revolved around this list (Bertuzzi 2022; ‘CDT Europe’s AI Bulletin: March
2023’ 2023; ‘MEPs Seal the Deal on Artificial Intelligence Act’ 2023). Moreover, Art. 7(1) of
the AI Act permits amendments to Annex III by the Commission if the AI systems to be
added are intended to be used in any of the areas of application listed in the Annex and
if they pose a risk that is ‘equivalent to or greater than the risk of harm or of adverse impact
posed by the high-risk AI systems already referred to in Annex III.’ There is no indication of
a method for determining equivalence of risk, and there is certainly no rule that would
specify the link between fundamental rights and the items on the list in Annex III. The cat-
egory of high risk is apparently not defined by some type of technical risk assessment but
the result of political judgments that are implicitly linked to values. Paradoxically therefore,
whether an application poses a risk (to values) is a matter of values (and their ranking rel-
ative to one another).
The paradox is resolved by delegating the specification of benchmarks and indicators for
risk that is at least equivalent to the risk of AI systems in the areas listed in Annex III to the
European Commission, and thus eventually to standardization bodies (Joint Research Centre
2023; Laux, Wachter, and Mittelstadt 2024). As Veale and Borgesius (2021) have pointed out,
these bodies have no experience with fundamental rights. The standard-setting process is
therefore likely to prioritize some values over others implicitly and thus invisibly. Although the
resulting implicit ranking of values would be reversible in principle, it may be irreversible in
practice once the standards are established.
The regulation of general purpose AI models, which the original draft of the AI Act by the
European Commission did not consider, renders the implied values invisible in a similar way.
The AI Act distinguishes between AI systems, which can be high-risk, and AI models, which can
be general purpose and may be combined with further components to constitute an AI system
(Rec. 60a). This distinction leaves the risk-based classification of AI systems seemingly intact.
However, the new Title VIIIA introduces an additional classification for general purpose AI models
with (or without) systemic risk, depending on whether such a model has ‘high impact capabil-
ities’. It is presented as a technical matter of applying indicators, benchmarks and other meth-
odologies to identify or presume such capabilities, which the European Commission is tasked
to amend and supplement (Art. 52a; Rec. 60n).
If systemic risk is presumed as a result of meeting the threshold, a provider can still ‘demon-
strate that because of its specific characteristics, a general purpose AI model exceptionally does
not present systemic risks’ (Rec. 60o).
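A minimal sketch of this two-step logic – presumption of systemic risk via 'high impact capabilities', rebuttable by the provider – is given below; it is an illustrative reading of the provisions as paraphrased above, with the capability test itself left as a placeholder, since the Act delegates the indicators and benchmarks to the Commission.

```python
# Illustrative sketch of the classification of general-purpose AI models as described
# above: systemic risk is presumed for models with 'high impact capabilities'
# (determined via indicators, benchmarks and thresholds the Commission may amend),
# unless the provider demonstrates that the model exceptionally does not present
# systemic risks (Rec. 60o, as quoted above).

def poses_systemic_risk(has_high_impact_capabilities: bool,
                        provider_demonstrates_exception: bool) -> bool:
    # Presumption via capabilities (placeholder for the indicator/benchmark tests) ...
    presumed = has_high_impact_capabilities
    # ... which the provider can rebut.
    return presumed and not provider_demonstrates_exception

print(poses_systemic_risk(True, provider_demonstrates_exception=False))  # True
print(poses_systemic_risk(True, provider_demonstrates_exception=True))   # False
```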
What systemic risks are is illustrated by a non-exhaustive list of examples in Rec. 60m. They range from 'disruptions of critical sectors and serious consequences to public
health and safety’ to ‘foreseeable negative effects on democratic processes, public and eco-
nomic security’ and ‘dissemination of illegal, false, or discriminatory content’. It would not be
difficult to rephrase the examples in terms of adverse impact on fundamental rights. What
the notion of systemic risks seems to add is an emphasis on scale. Its inclusion acknowledges
that the scale of potential impact is not adequately captured as a sum of individual violations
of fundamental rights. Yet each given example implies additional value judgements that are necessary to determine whether a consequence is serious, negative or generally undesirable. Those judgements will eventually be made when drawing up Codes of Practice (Rec.
60s), a process in which all providers of general-purpose AI models could participate, as well
as civil society organizations and other relevant stakeholders. Although potentially less arcane
than standardization bodies, the process is unlikely to consider the problem of ordering
values in any systematic way.
Social dimension: trustworthiness


In the social dimension, the notion of trustworthiness comes into play to delineate harms. Trust
is another way to deal with the uncertainty of the future, and it is closely related to risk in
some respects: It also anticipates the possibility of unwanted future outcomes; yet someone
who trusts acts as if the future is not uncertain with regard to them and as if the unwanted
future outcomes will not transpire (Luhmann 2017, 23). Anybody who trusts assumes the risk
of misplacing this trust and facing the unwanted outcome after all (Sztompka 2003, 43–45).
Trust inevitably turns those affected by decisions of others into decision-makers themselves to
the extent to which they trusted these others. Fostering trustworthiness to instil trust is there-
fore a way to bridge the distinction between decision-makers and those affected.
Describing the general objective of the regulation, Rec. 1 of the AI Act clarifies that it is
supposed ‘to promote the uptake of human centric and trustworthy artificial intelligence’.
Subsequent recitals reiterate the emphasis on trustworthy AI. Rec. 62 refers to trustworthiness
in combination with the notion of risk:1 ‘In order to ensure a high level of trustworthiness of
high-risk AI systems, those systems should be subject to a conformity assessment prior to their
placing on the market or putting into service’. It treats the category of high risk as an objective
class, and regulatory measures are supposed to render the members of this class trustworthy in
the eyes of indeterminate others, despite being high-risk. A supposedly objective problem –
high-risk – is thus transformed into an assessment and a decision made, for example, by customers
of companies using AI (European Commission 2021, 10). Yet, the high-risk systems listed in Annex
III predominantly concern relationships in which one party has little choice. Whether, for example,
employees being evaluated by an AI system trust that system or not is inconsequential for the
options available to them. Harms arising from the operation of such systems are dangers, not
risks to them (Christoffersen 2018, 1238; Luhmann 2005, 21–27). Moreover, if the harms are
immaterial, those affected may not even notice the negative impact that the AI application has
on them.
The emphasis on trust is already apparent in the 2020 White Paper. It promised an ‘ecosystem
of trust’ (European Commission 2020, 3) as the outcome of a regulatory framework for AI. An
element of this ecosystem are the ‘Ethics Guidelines for Trustworthy AI’ (High-Level Expert Group
on AI 2019), prepared by an independent expert body that the European Commission estab-
lished in 2018 and referenced in the AI Act (Rec. 4). The Guidelines distinguish three components
of trustworthy AI: lawfulness, adherence to ethical principles and values, and robustness from
a technical and social perspective. The eventual goal is to provide a ‘foundation upon which
all those affected by AI systems can trust that their design, development and use are lawful,
ethical and robust’ (High-Level Expert Group on AI 2019, 5). This statement seemingly specifies
how AI systems would need to behave in order to be trustworthy, which is a prerequisite for
relying on trust in dealing with uncertainty (Luhmann 2005, 123).
However, the degree of specification is very uneven. It is feasible to check lawfulness, although
it may require legal experts and courts to do so. Adherence to ethical principles, in contrast,
is less clear. The Guidelines note that an ethics code cannot replace ethical reasoning and stress
that ‘ensuring Trustworthy AI requires us to build and maintain an ethical culture and mind-set
through public debate, education and practical learning’ (High-Level Expert Group on AI 2019,
9). At the same time, they point to fundamental rights as the basis of AI ethics, referring to
the EU Treaties, the EU Charter and international human rights law. The Guidelines explicitly
take note of a dual character of fundamental rights: legally enforceable rights on the one hand,
‘rights of everyone, rooted in the inherent moral status of human beings’ (High-Level Expert
Group on AI 2019, 10) on the other hand. In the second sense, fundamental rights are subjective
rights, pertaining to everyone. The reasoning thus does not actually refer to ethical positions
in practical philosophy but considers fundamental rights as the condensed ethics relevant for
European politics and regulation. The Guidelines also abstain from a separate discussion of
robustness, which is justified by the close intertwinement of ethical and robust AI. After briefly
mentioning the keywords safety, security and reliability, the text turns to the deduction of the
ethical principles from fundamental rights.
Consequently, the idea of trustworthy AI boils down to an orientation towards fundamental
rights. According to this logic, users of AI are supposed to trust an application, provided it
respects their subjective rights. Yet, the risk that AI presumably poses is an adverse impact on
these same rights. This leads to the third paradox: The attempt to delineate the possible harms
of AI by distinguishing trustworthy from untrustworthy systems results in a situation where the
proposed reason to trust, namely that AI systems respect fundamental rights, is undermined
by the reason trust is needed in the first place, which is that AI systems may adversely affect
fundamental rights.
The AI Act actually concedes that trustworthiness does not equal low risk. Article 67 covers
the possibility that ‘although a high-risk AI system is in compliance with this Regulation, it
presents a risk to the health or safety of persons, fundamental rights or to other aspects of
public interest protection’ (Art. 67, 1). It is the same paradox, posed for the side of the
decision-makers: The regulation supposed to ensure that there is no adverse impact on funda-
mental rights explicitly states that the rules it introduces cannot reliably achieve this goal.
Ultimately, it is up to the market surveillance authority of a Member State to evaluate the risk
an AI system poses (although not its trustworthiness), a task for which it is ill-equipped since
it predominantly depends on the information contained in the providers’ notifications (Veale
and Borgesius 2021, 111).

Discussion
The risk-based approach of the AI Act in retrospect validates Luhmann’s (1993, 19) rejection of
the distinction between risk and safety as a basis for the sociology of risk. His original concern
was the asymptotic character of safety, which is a legitimate goal and generally preferred state
that remains unreachable in absolute terms. Once the scope of risks is widened to fundamental
rights, as the AI Act does, safety is revealed to be a value (Luhmann 2005, 19). It stands beside
other values like equality or autonomy, which in the form of fundamental rights are considered
to be at risk as a result of (some) AI systems.
The envisioned risk-based EU regulation of AI is based on the notion of harm as adverse
impact on fundamental rights. This attempt at delineation is firstly supposed to clarify which
aspects of the future we can count on to resemble the present, despite the declared disruptive
potential of AI (time dimension). It is secondly supposed to point regulatory attention to high-risk
AI systems and distinguish them from systems for which voluntary instead of mandatory mea-
sures are sufficient (factual dimension). It thirdly proposes respect for fundamental rights per-
taining to subjects as the criterion that, when fulfilled, should enable subjects to trust AI systems
(social dimension). Consequently, fundamental rights are invoked as the key reference point for
decisions in the legal system, decisions in the political system and everyday decision-making
of individuals who are somehow affected by AI.
The first reference to fundamental rights points to their role in the legal system, where they
are codified but function more as principles that have to be adequately considered than as
prescriptive rules that have to be followed. Legal decisions therefore always have to take into
account fundamental rights, but no legal norm prevents restrictions to them in principle. Beyond
specific court decisions, the legal system offers no orientation as to how to assess whether a
fundamental right is harmed by an AI application.
The second reference to fundamental rights draws on them as basic values that AI systems
should respect, embody or be designed for. Yet, values mostly appear in political communication,
where they stand for changing priorities of decision-making. Values denote generally preferable
states and outcomes, and the potential for value conflicts becomes real whenever actors actually
attempt to orient their actions and decisions in terms of values. In the proposed regulation values
are supposed to be the object of protection; yet they are also the reference point that implicitly
or explicitly justifies decisions about including types of AI systems in the category of high risk.
The third reference to fundamental rights treats compliance with them, which the regulation
supposedly ensures, as indicator or motive for trustworthiness. This is tautological since potential
harms to fundamental rights were identified as the problem in the first place. The focus on
trustworthiness of AI suggests that there are decisions to be made by those affected although
their being affected is independent of their (dis-)trust. Concurrently, for the decision-makers
deploying AI the risk remains that an AI application compliant with the regulations of the AI
Act – and thus trustworthy – may still be deemed to present a risk that is unacceptable.
The idea that a risk assessment is able to identify a class of technological systems that is
sufficiently likely to cause sufficient harm of any kind – including economic and societal – to
necessitate special regulation, which in turn reduces the likelihood and/or extent of harm, gets
caught up in paradoxes. It presupposes some delineation as to the harms considered but can
only offer the notion of fundamental rights and basic values, which is and has always been
subject to interpretation by the legal system on one hand and the political system on the other.
To be sure, the proposed AI Act demands a number of specific measures from those who wish
to sell or employ AI systems; it will have practical consequences. Yet, it will not do what it claims
to do. It will likely create an increased need for legal and administrative decisions about specific
cases, but these decisions cannot rest on more than what they would have rested on without
the AI Act, namely a consistent legal argument balancing fundamental rights on one hand and
a temporary prioritisation of some values over others on the other hand. Concurrently, it will
create a number of new obligations for companies developing and deploying AI systems, with
an unclear impact on the eventual risks these systems pose. The addition of systemic risks at a
late stage of drafting the AI Act indicates the ambition to comprehensively map the risks and yet
hints at its eventual futility within a legal framework. The particular challenge posed by compre-
hensiveness is the necessity to bridge the gap between an abstract thinking about risks and the
concretization of rules that are applicable to specific AI systems. Here the AI Act opens the door
to decision-making that, although not necessarily arbitrary, has nothing to do with the estimation
of risks and everything to do with negotiations between interests that hugely differ in terms of power.

Conclusion
This article does not offer a comprehensive account of the AI Act and its legal and political impli-
cations. The proposed perspective is selective insofar as it focuses on the content of the legislation
from the point of view of sociological risk research, for which the regulation’s key notion of risks
to fundamental rights is unusual. It contributes to risk research by analysing the implications of
thinking about harms from technologies in terms of fundamental rights. Concurrently, it offers a
preliminary analysis of some shortcomings of the AI Act that are the result of framing this thinking
as an assessment of risks. Written after the agreement about the text of the AI Act but before it
enters into force, the analysis is limited to what the AI Act itself, its earlier drafts and key docu-
ments of reference communicate. Future research should focus on the Codes of Conduct and the
standards that are supposed to be developed to offer guidance to providers and deployers of AI
systems, in particular on the extent to which they remain coupled to or become decoupled from
the ambitious attempt to expand the notion of risk within a legal framework.

Note
1. The notion of trustworthy technology is not new. With regard to nanotechnology, for example, Myskja (2011, 49) suggested, however, that the focus needed to be on 'the body of scientific practitioners and practices that is to be trusted'.
Disclosure statement
No potential conflict of interest was reported by the author(s).

ORCID
Isabel Kusche http://orcid.org/0000-0002-2596-0564

References
Alon-Barkat, Saar, and Madalina Busuioc. 2023. “Human–AI Interactions in Public Sector Decision Making: ‘Automation
Bias’ and ‘Selective Adherence’ to Algorithmic Advice.” Journal of Public Administration Research and Theory 33
(1): 153–169. https://doi.org/10.1093/jopart/muac007.
Aradau, C., and T. Blanke. 2017. “Politics of Prediction: Security and the Time/Space of Governmentality in the
Age of Big Data.” European Journal of Social Theory 20 (3): 373–391. https://doi.org/10.1177/1368431016667623.
Bareis, Jascha, and Christian Katzenbach. 2022. “Talking AI into Being: The Narratives and Imaginaries of National
AI Strategies and Their Performative Politics.” Science, Technology, & Human Values 47 (5): 855–881. https://doi.
org/10.1177/01622439211030007.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), 610–623. https://doi.org/10.1145/3442188.3445922.
Bertuzzi, Luca. 2022. “Leading MEPs Exclude General-Purpose AI from High-Risk Categories - for Now”. www.
Euractiv.Com. December 12, 2022. https://www.euractiv.com/section/artificial-intelligence/news/leading-mep
s-exclude-general-purpose-ai-from-high-risk-categories-for-now/.
Boholm, Åsa, and Hervé Corvellec. 2011. “A Relational Theory of Risk.” Journal of Risk Research 14 (2): 175–190.
https://doi.org/10.1080/13669877.2010.515313.
Boltanski, Luc, and Laurent Thévenot. 2006. “On Justification: Economies of Worth.” In Princeton Studies in Cultural
Sociology. Princeton: Princeton University Press.
Brown Weiss, Edith, John H. Jackson, and Nathalie Bernasconi-Osterwalder, eds. 2008. Reconciling Environment and
Trade. 2nd ed. Leiden: Brill Nijhoff. https://brill.com/edcollbook/title/14212.
Burt, Andrew, Brenda Leong, and Stuart Shirrell. 2018. Beyond Explainability: A Practical Guide to Managing Risk in
Machine Learning Models. Future of Privacy Forum. https://fpf.org/wp-content/uploads/2018/06/
Beyond-Explainability.pdf.
Campolo, Alexander, and Kate Crawford. 2020. “Enchanted Determinism: Power without Responsibility in Artificial
Intelligence.” Engaging Science, Technology, and Society 6 (January): 1–19. https://doi.org/10.17351/ests2020.277.
‘CDT Europe’s AI Bulletin: March 2023’. 2023. Center for Democracy and Technology (blog). March 12, 2023. https://
cdt.org/insights/cdt-europes-ai-bulletin-march-2023/.
Centeno, Miguel A., Manish Nag, Thayer S. Patterson, Andrew Shaver, and A. Jason Windawi. 2015. “The Emergence
of Global Systemic Risk.” Annual Review of Sociology 41 (1): 65–85. https://doi.org/10.1146/annurev-soc-
073014-112317.
Christoffersen, Mikkel Gabriel. 2018. “Risk, Danger, and Trust: Refining the Relational Theory of Risk.” Journal of
Risk Research 21 (10): 1233–1247. https://doi.org/10.1080/13669877.2017.1301538.
Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale
University Press.
Curran, Dean. 2018. “Risk, Innovation, and Democracy in the Digital Economy.” European Journal of Social Theory
21 (2): 207–226. https://doi.org/10.1177/1368431017710907.
Denton, Emily, Alex Hanna, Razvan Amironesei, Andrew Smart, and Hilary Nicole. 2021. “On the Genealogy of
Machine Learning Datasets: A Critical History of ImageNet." Big Data & Society 8 (2): 20539517211035955. https://doi.org/10.1177/20539517211035955.
EDRi. 2021. “Beyond Debiasing. Regulating AI and Its Inequalities”. https://edri.org/wp-content/uploads/2021/09/
EDRi_Beyond-Debiasing-Report_Online.pdf.
Engel, Christoph. 2001a. “Delineating the Proper Scope of Government: A Proper Task for a Constitutional Court?”
Journal of Institutional and Theoretical Economics 157 (1): 187–219. https://doi.org/10.1628/0932456012974675.
Engel, Christoph. 2001b. “The European Charter of Fundamental Rights A Changed Political Opportunity Structure
and Its Normative Consequences.” European Law Journal 7 (2): 151–170. https://doi.org/10.1111/1468-0386.00125.
Eubanks, Virginia. 2019. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. First Picador
ed. New York: Picador St. Martin’s Press.
European Commission. 2020. “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust”.
Text. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-
trust_en.
European Commission. 2021. Proposal for a Regulation of the European Parliament and of the Council Laying down
Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative
Acts. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex:52021PC0206.
European Parliament. 2023. “Artificial Intelligence Act. Amendments Adopted by the European Parliament on 14
June 2023 on the Proposal for a Regulation of the European Parliament and of the Council.” https://www.
europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html.
Facchi, Alessandra, and Nicola Riva. 2021. “European Values in the Charter of Fundamental Rights: An Introduction.”
Ratio Juris 34 (1): 3–5. https://doi.org/10.1111/raju.12308.
Galaz, Victor, Miguel A. Centeno, Peter W. Callahan, Amar Causevic, Thayer Patterson, Irina Brass, Seth Baum, et al.
2021. “Artificial Intelligence, Systemic Risks, and Sustainability.” Technology in Society 67 (November): 101741.
https://doi.org/10.1016/j.techsoc.2021.101741.
Greene, Daniel, Anna Lauren Hoffmann, and Luke Stark. 2019. “Better, Nicer, Clearer, Fairer: A Critical Assessment
of the Movement for Ethical Artificial Intelligence and Machine Learning”. Hawaii International Conference on
System Sciences, 10. https://doi.org/10.24251/HICSS.2019.258.
Hacker, Philipp. 2021. “A Legal Framework for AI Training Data—from First Principles to the Artificial Intelligence
Act.” Law, Innovation and Technology 13 (2): 257–301. https://doi.org/10.1080/17579961.2021.1977219.
Hagendorff, Thilo. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30 (1): 99–120.
https://doi.org/10.1007/s11023-020-09517-8.
Helberger, Natali, and Nicholas Diakopoulos. 2023. “ChatGPT and the AI Act.” Internet Policy Review 12 (1): 1–6.
https://policyreview.info/essay/chatgpt-and-ai-act. https://doi.org/10.14763/2023.1.1682.
High-Level Expert Group on AI. 2019. “Ethics Guidelines for Trustworthy AI”. Text. https://digital-strategy.ec.europa.
eu/en/library/ethics-guidelines-trustworthy-ai.
Hilgartner, Stephen. 1992. “The Social Construction of Risk Objects: Or, How to Pry Open Networks of Risk.” In
Organizations, Uncertainties, and Risk, edited by James F. Short and Lee Clarke, 39–53. Boulder, CO.: Westview
Press.
Japp, Klaus P., and Isabel Kusche. 2008. “Systems Theory and Risk.” In Social Theories of Risk and Uncertainty. An
Introduction, edited by Jens O. Zinn, 76–103. Malden, MA: Blackwell.
Jasanoff, Sheila. 1999. “The Songlines of Risk.” Environmental Values 8 (2): 135–152. https://doi.org/10.1177/096327199900800202.
Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine
Intelligence 1 (9): 389–399. https://doi.org/10.1038/s42256-019-0088-2.
Joint Research Centre. 2023. “Analysis of the Preliminary AI Standardisation Work Plan in Support of the AI Act.”
LU: European Commission. https://data.europa.eu/doi/10.2760/5847.
Kaminski, Margot E. 2023. “Regulating the Risks of AI.” Boston University Law Review 103 (5): 1347–1411.
Laux, Johann, Sandra Wachter, and Brent Mittelstadt. 2023. “Trustworthy Artificial Intelligence and the European
Union AI Act: On the Conflation of Trustworthiness and Acceptability of Risk.” Regulation & Governance 18 (1):
3–32. https://doi.org/10.1111/rego.12512.
Laux, Johann, Sandra Wachter, and Brent Mittelstadt. 2024. “Three Pathways for Standardisation and Ethical
Disclosure by Default under the European Union Artificial Intelligence Act.” Computer Law & Security Review
53: 1–11.
Luhmann, Niklas. 1992. “Die Beschreibung Der Zukunft.” In Beobachtungen Der Moderne, 29–47. Opladen:
Westdeutscher Verlag.
Luhmann, Niklas. 1993. Risk: A Sociological Theory. Berlin: de Gruyter. http://www.gbv.de/dms/hbz/toc/ht004524434.PDF.
Luhmann, Niklas. 2000. Die Politik Der Gesellschaft. Frankfurt a.M.: Suhrkamp.
Luhmann, Niklas. 2004. Law as a Social System. Oxford Socio-Legal Studies. Oxford; New York: Oxford University
Press.
Luhmann, Niklas. 2005. Risk: A Sociological Theory. 1st paperback ed. New Brunswick, N.J: Aldine Transaction.
Luhmann, Niklas. 2017. Trust and Power. English ed. Malden, MA: Polity.
‘MEPs Seal the Deal on Artificial Intelligence Act’. 2023. www.Euractiv.Com. April 27 2023. https://www.euractiv.
com/section/artificial-intelligence/news/meps-seal-the-deal-on-artificial-intelligence-act/.
Myskja, Bjørn K. 2011. “Trustworthy Nanotechnology: Risk, Engagement and Responsibility.” Nanoethics 5 (1):
49–56. https://doi.org/10.1007/s11569-011-0116-0.
Nemitz, Paul. 2018. “Constitutional Democracy and Technology in the Age of Artificial Intelligence.” Philosophical
Transactions. Series A, Mathematical, Physical, and Engineering Sciences 376 (2133): 20180089. https://doi.
org/10.1098/rsta.2018.0089.
O’Malley, Pat. 2009. “‘Uncertainty Makes Us Free’. Liberalism, Risk and Individual Security.” BEHEMOTH - A Journal
on Civilisation 2 (3): 24–38. https://doi.org/10.6094/behemoth.2009.2.3.705.
Renn, Ortwin, Klaus Lucas, Armin Haas, and Carlo Jaeger. 2019. “Things Are Different Today: The Challenge of Global
Systemic Risks.” Journal of Risk Research 22 (4): 401–415. https://doi.org/10.1080/13669877.2017.1409252.
Rességuier, Anaïs, and Rowena Rodrigues. 2020. “AI Ethics Should Not Remain Toothless! A Call to Bring Back the
Teeth of Ethics." Big Data & Society 7 (2): 2053951720942541. https://doi.org/10.1177/2053951720942541.
Schiølin, Kasper. 2020. “Revolutionary Dreams: Future Essentialism and the Sociotechnical Imaginary of the Fourth
Industrial Revolution in Denmark.” Social Studies of Science 50 (4): 542–566. https://doi.org/10.1177/0306312719867768.
Sztompka, Piotr. 2003. Trust: A Sociological Theory. Cambridge: Cambridge University Press.
Tierney, Kathleen J. 1999. “Toward a Critical Sociology of Risk.” Sociological Forum 14 (2): 215–242. https://doi.
org/10.1023/A:1021414628203.
Veale, Michael, and Frederik Zuiderveen Borgesius. 2021. “Demystifying the Draft EU Artificial Intelligence Act.”
Computer Law Review International 22 (4): 97–112. https://doi.org/10.9785/cri-2021-220402.
Veale, Michael, Kira Matus, and Robert Gorwa. 2023. “AI and Global Governance: Modalities, Rationales, Tensions.”
Annual Review of Law and Social Science 19 (1): 255–275. https://doi.org/10.31235/osf.io/ubxgk.
Wynne, Brian. 2001. “Creating Public Alienation: Expert Cultures of Risk and Ethics on GMOs.” Science as Culture
10 (4): 445–481. https://doi.org/10.1080/09505430120093586.
Zajko, Mike. 2022. “Artificial Intelligence, Algorithms, and Social Inequality: Sociological Contributions to
Contemporary Debates.” Sociology Compass 16 (3): E12962. https://doi.org/10.1111/soc4.12962.
