AI Risks and EU Regulation Analysis
Isabel Kusche
To cite this article: Isabel Kusche (11 May 2024): Possible harms of artificial intelligence
and the EU AI act: fundamental rights and risk, Journal of Risk Research, DOI:
10.1080/13669877.2024.2350720
Introduction
A few years ago, Curran (2018) diagnosed a lack of political consideration for and public debate
of the risks of digital economies in general and the rise of artificial intelligence in particular,
drawing on Beck’s risk society approach. The situation has changed in the meantime, with European
legislation aiming to regulate digital services (Digital Services Act), digital markets (Digital Markets
Act) and the application of artificial intelligence (AI Act) in order to prevent harm and not leave
socio-technological trajectories to the profit-oriented decisions of digital platforms and corpora-
tions. The regulation of AI at the EU level in particular draws on the notion of risk. Starting with
the Ethics Guidelines on Trustworthy AI by the High-Level Expert Group on AI (2019) and the
White Paper on AI by the European Commission (2020), the discussion about regulating AI has
revolved around the acknowledgement of risks arising from it. The explanatory memorandum of
the draft AI Act explicitly proposed a risk-based regulatory framework, distinguishing between
three risk classes, namely AI systems posing unacceptable risks, high risks, and low or minimal
risks (European Commission 2021, 12). The final version of the AI Act stipulates that AI systems
posing unacceptable risks shall be banned while high-risk systems shall fulfil a number of require-
ments. They shall be monitored by a risk management system that covers their entire lifecycle
(Art. 9), trained and tested with data sets appropriate for their purpose (Art. 10), accompanied by technical documentation (Art. 11) and information to deployers about possible risks (Art.
13), allow for automatic record-keeping of system events (Art. 12), be subject to human oversight
when in use (Art. 14) and undergo a conformity assessment procedure before they are placed
on the market or put into service (Art. 16). In parallel, general purpose AI models, which can be
fine-tuned to fulfil tasks in various areas of application, are classified into those that pose systemic risks and those that do not (Art. 52a), with differential obligations for both.
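To make this tiered structure concrete, the following minimal Python sketch models the three risk classes and the requirements the Act attaches to high-risk systems. It is an illustration only: the class, field and dictionary names are choices of this sketch, not terminology defined in the Act.

```python
# Illustrative sketch only: a rough data model of the AI Act's risk tiers and the
# requirements listed above for high-risk systems (Arts. 9-16). The names and
# structure are choices of this sketch, not terminology defined by the Act.
from enum import Enum, auto
from dataclasses import dataclass


class RiskTier(Enum):
    UNACCEPTABLE = auto()    # banned outright
    HIGH = auto()            # permitted, subject to requirements
    LOW_OR_MINIMAL = auto()  # largely left to voluntary measures


# Requirements attached to the HIGH tier, keyed by the articles cited above.
HIGH_RISK_REQUIREMENTS = {
    "Art. 9": "risk management system covering the entire lifecycle",
    "Art. 10": "training and testing data appropriate for the purpose",
    "Art. 11": "technical documentation",
    "Art. 12": "automatic record-keeping of system events",
    "Art. 13": "information to deployers about possible risks",
    "Art. 14": "human oversight when in use",
    "Art. 16": "conformity assessment before market placement",
}


@dataclass
class AISystem:
    name: str
    tier: RiskTier

    def requirements(self) -> dict:
        """Return the requirements triggered by the system's risk tier."""
        if self.tier is RiskTier.UNACCEPTABLE:
            raise ValueError(f"{self.name}: prohibited, cannot be placed on the market")
        return HIGH_RISK_REQUIREMENTS if self.tier is RiskTier.HIGH else {}


print(AISystem("recruitment screening tool", RiskTier.HIGH).requirements()["Art. 14"])
```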
An approach that draws on Beck’s notion of risk society could criticize limitations in the
existing political consideration of AI risks and insist on the priority of other concerns, such as
a disruption of labour markets. Yet, this article aims at a different type of reflection on AI and
risk. It contributes to explaining the specific shape that the debate on AI and its risks has taken
in Europe and the resulting limitations. It turns to a tradition in the sociology of risk that
stresses how the notion of risk is tied to differences in observation and attribution (Luhmann
2005). This systems-theoretical approach to risks focuses on the diverging ways in which different
actors observe risks. These different perspectives take centre stage when it comes to regulation.
Contrasting them will reveal how the European regulation of AI attempts to extend the con-
sideration of risks much further than in the case of other technologies but ends up with com-
municative paradoxes as a result.
There is no objective way to assess and measure risks. Although this is an insight that has
been illustrated for various risk objects (Boholm and Corvellec 2011; Christoffersen 2018;
Hilgartner 1992), it is particularly striking with regard to AI. Its regulation aims to consider not only risks to the safety and health of individuals or to the environment but also risks to fundamental rights. Annex III of the AI Act, listing high-risk AI sys-
tems, includes for example AI systems that are supposed to determine access to educational
institutions, assess students or select applicants in recruitment processes. In all these instances,
the concern is not the health or safety of the individuals who are affected by these decisions
but their fundamental rights. As recital 4 of the AI Act states, ‘artificial intelligence may generate
risks and cause harm to public interests and fundamental rights that are protected by Union
law. Such harm might be material or immaterial, including physical, psychological, societal or
economic harm’. With the addition of systemic risks the AI Act extends the scope of risk-based
regulation even further to include in principle harm of any kind. This understanding of risk
challenges fundamental theoretical notions in (sociological) risk research. The Luhmannian strand
of systems theory seems particularly well equipped to meet this challenge, not because of its
notion of systems but thanks to the theory of communication at the heart of it. In stressing
the role of attribution in the communication about risks, it contributes to a better understanding
of the implications and potential problems that the risk-based regulation of AI brings about.
The next section provides a brief overview of the general debate on regulating AI, then
introduces key systems-theoretical concepts for a sociology of risk and applies them to the case
of AI. It demonstrates particular challenges that AI poses for a risk-based regulation and con-
cludes that they imply an indeterminate expansion of possible harms. After a brief section on
the material and method employed, the article subsequently discusses how the European
approach to risk-based AI regulation suggests delineating harms. It identifies references to legal
rules, basic values and trustworthiness as complementary but ultimately paradoxical attempts
at delineation, which rely on fundamental rights as a common denominator. Finally, the article
highlights the resulting ambiguities that are likely to hamper effective risk-based regulation.
In the factual dimension, new regulation indicates that political actors, in particular but
not exclusively at the EU level, aim to make decisions on matters on which they have not
decided so far. Warnings against possible dangers of AI are based on the distinction between
risk and danger, with the latter arising when others are the ones deciding. These others are
not named directly but alluded to, for example in the statement that ‘[t]he EU will continue
to cooperate with like-minded countries, but also with global players, on AI, based on an
approach based on EU rules and values’ (European Commission 2020, 8). The others, posing
dangers, are thus private companies or countries developing AI whose values do not align
with European values.
With regard to the social dimension, those affected by decisions about the development
and deployment of AI applications encompass potentially everyone, not only in Europe but
globally, considering the announced inevitability of a future in which AI changes everything.
The impact is supposed to be overwhelmingly positive, but there is no one who is a priori
excluded from possible negative effects. Consequently, the White Paper acknowledges that both
citizens and companies are concerned about various uncertainties (European Commission
2020, 9).
In all three dimensions, a risk-based regulation of AI poses particular challenges. In the
run-up to the first draft of the AI Act, concerns centred on predictive AI with its potential to
assist decision-making and reduce its contingent character in favour of automated calculations
(Campolo and Crawford 2020; Alon-Barkat and Busuioc 2023). Contingency is displaced to the
opaque choices made with regard to the training data and the algorithms employed (Denton
et al. 2021), resulting in various biases that skew the outputs (Aradau and Blanke 2017;
Eubanks 2019; Zajko 2022) and an extrapolation of past injustices into the future (Crawford
2021, 123–49). Accordingly, the European Commission’s White Paper (2020) listed ‘loss of
privacy, limitations to the right of freedom of expression, human dignity, discrimination for
instance in access to employment’ (European Commission 2020, 10) as possible harms of AI,
separating them as immaterial risks to fundamental rights from material risks to safety and
health. The European Commission (2021) carried this distinction over into Rec. 1 of its draft
of the AI Act.
While the European institutions were still working on the AI Act, the release of ChatGPT and
subsequently other generative AI systems called this understanding of AI risks into question
(Helberger and Diakopoulos 2023). The amendments of the European Parliament (2023) added
democracy, the rule of law and the environment to the list of entities potentially at risk from
AI. Rec. 1 of the final AI Act includes these additions under the umbrella of fundamental rights
enshrined in the Charter of Fundamental Rights of the EU. Since the Charter addresses health
and safety only in the context of work, they continue to be listed separately although they are
also fundamental rights. The AI Act thus frames all possible harms of AI systems as harms to a
fundamental right and confirms a risk-based approach to regulation (recital 14). As a result of
the trilogue negotiations between the European institutions it however adds rules regarding
general purpose AI models and refers to systemic risks some of them may pose. Although the
notion of systemic risks has been discussed in (sociological) risk research (Centeno et al. 2015;
Renn et al. 2019), the rare links made to the topic of AI so far have only considered the use of predictive
AI, for example in fostering sustainability (Galaz et al. 2021). The consideration of systemic risk
in regulating AI, triggered by the sudden prominence of generative AI and general purpose
models (Helberger and Diakopoulos 2023), has no obvious connection to this systemic risk
literature.
Compared to previous cases of regulating the risks of new technologies, like nuclear energy
or genetic engineering, which focused on hazards to the health and safety of human beings
and the environment, the notion of risks to fundamental rights expands the scope of potential
effects to be considered in a risk assessment enormously. The last-minute addition of systemic
risks broadens it even further.
Against this backdrop, the paper focuses on the following research questions:
• How does the regulatory framework for AI adopted by the EU delineate harms of AI,
based on the notion of risks to fundamental rights, and what are the consequences?
• What consequences does the additional notion of systemic risks have for the delineation
and the regulatory framework?
Thévenot 2006; Luhmann 2000, 178–83). Values deferred in taking a particular decision can
justify another decision at a later point in time, with the added justification that they took a
backseat for too long. Decision-making about basic values would take the form of political
decisions by legislative majority, prioritising some values until a later political decision, perhaps
as a result of a change in government or a change of times, revises the order of values to be
considered.
Against this backdrop, the delineation of harms of AI in terms of fundamental rights results
in a second paradox. If fundamental rights and the underlying basic values are at risk, the
political system is at risk that in the future it will be unable to make collectively binding deci-
sions promoting these values or at least that such decisions will be inconsequential. The risk
to basic values translates into the risk that a prioritization of some values in decision-making
at a certain point in time will no longer be a mere deferment of other values but an irreversible
ranking of priorities.
At first sight, this is not so different from the regulation of new technologies in the past. If
a major accident happened at a nuclear power plant, despite the regulatory measures in place,
and whole countries were contaminated by radioactivity as a result, subsequent political deci-
sions prioritising the health of the population would also be more or less inconsequential,
depending on how severe the contamination is. Yet, risk regulation in the past typically separated
the (scientific) assessment of risks from their (political) evaluation. The risk of a major accident
would be calculated or, based on expert opinion, estimated first; a political evaluation would
subsequently decide whether the risk was worth taking, against the backdrop of an implicit
and reversible ordering of values (Tierney 1999, 219–22).
The separation was of course never natural but a result of social convention and power
asymmetries between experts and laypersons, as pointed out by social science research on risk
regulation (Jasanoff 1999; Wynne 2001). Nor was the separation always strict in practice, as the
contested prohibition of using growth hormones in cattle and of importing hormone-treated
beef by the European Community in the 1990s exemplifies. The ensuing dispute between the
EC and the US before the WTO centred on differing interpretations of when scientific evidence
is sufficient and what an appropriate risk assessment looks like in accordance with WTO rules,
since a purely political prioritization of certain values over free trade would have violated those
rules (Brown Weiss, Jackson, and Bernasconi-Osterwalder 2008, Part III).
In contrast, the AI Act upholds and abolishes the separation between (scientific) assessment
of risks and (political) evaluation of risks at the same time. It is ostensibly risk-based and dis-
tinguishes risk classes into which AI systems are supposed to be sorted depending on how
much risk they pose for (European) values. The resemblance of this sorting to a formal risk
analysis is however superficial. It cannot be distinguished from a political evaluation because
the risks to be considered are risks to (political) values. Therefore, the sorting is easily recog-
nizable as political, too.
Art. 6 of the AI Act introduces classification rules for high-risk AI systems. On the one hand, it
refers to a list of areas in which AI systems are considered to pose a high risk in Annex III. On
the other hand, the article enumerates criteria that lead to the exclusion of an AI system from
the high-risk category despite its intended use in one of the areas listed in Annex III. These
criteria were introduced during the negotiation process and were not part of the original draft
by the European Commission. Both with and without them, the conceptualization of what
constitutes high risk is unusual in several regards. Firstly, it transforms the question of potential
harm to fundamental rights into a discrete, binary variable: either such a potential exists or it
does not; there is no attempt to quantify it in any way independent of the categorization as
high-risk. Secondly, it gives no justification regarding the AI systems listed in Annex III and their
selection. The general areas, which are further specified in terms of types of application, are
biometrics insofar as their specific use is not completely banned by Art. 5, critical infrastructure,
education and vocational training, employment and workers management, access to essential
private and public services, law enforcement, migration and asylum, and the administration of
justice and democratic processes.
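To make the binary character of this classification explicit, the following short Python sketch caricatures the Art. 6 test under simplifying assumptions: the exclusion check is a hypothetical stand-in, and none of the wording corresponds to the Act's legal text. Its point is that the decision is a membership test, with no quantification of risk anywhere.

```python
# Illustrative only: Art. 6 in caricature. An AI system is high-risk iff its
# intended use falls in an Annex III area and no exclusion criterion applies.
# There is no score, probability, or magnitude anywhere in this decision.
ANNEX_III_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "access to essential private and public services",
    "law enforcement",
    "migration and asylum",
    "administration of justice and democratic processes",
}


def meets_exclusion_criteria(system_description: str) -> bool:
    """Hypothetical stand-in for the exclusion criteria added during the
    negotiations (e.g. systems limited to narrow procedural tasks)."""
    return "narrow procedural task" in system_description


def is_high_risk(intended_area: str, system_description: str) -> bool:
    # A discrete, binary outcome: either the potential for harm to fundamental
    # rights is presumed (area listed in Annex III) or it is not.
    if intended_area not in ANNEX_III_AREAS:
        return False
    return not meets_exclusion_criteria(system_description)


print(is_high_risk("employment and workers management",
                   "CV-ranking tool used to select applicants"))  # True
```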
None of the items in Annex III is implausible, but there is no rule from which their inclu-
sion can be derived, which is also why negotiations between EU Commission, Parliament
and member states revolved around this list (Bertuzzi 2022; ‘CDT Europe’s AI Bulletin: March
2023’ 2023; ‘MEPs Seal the Deal on Artificial Intelligence Act’ 2023). Moreover, Art. 7(1) of
the AI Act permits amendments to Annex III by the Commission if the AI systems to be
added are intended to be used in any of the areas of application listed in the Annex and
if they pose a risk that is ‘equivalent to or greater than the risk of harm or of adverse impact
posed by the high-risk AI systems already referred to in Annex III.’ There is no indication of
a method for determining equivalence of risk, and there is certainly no rule that would
specify the link between fundamental rights and the items on the list in Annex III. The cat-
egory of high risk is apparently not defined by some type of technical risk assessment but
the result of political judgments that are implicitly linked to values. Paradoxically therefore,
whether an application poses a risk (to values) is a matter of values (and their ranking rel-
ative to one another).
The paradox is resolved by delegating the specification of benchmarks and indicators for
risk that is at least equivalent to the risk of AI systems in the areas listed in Annex III to the
European Commission, and thus eventually to standardization bodies (Joint Research Centre
2023; Laux, Wachter, and Mittelstadt 2024). As Veale and Borgesius (2021) have pointed out,
these bodies have no experience with fundamental rights. The standard-setting process is
therefore likely to prioritize some values over others implicitly and thus invisibly. Although the
resulting implicit ranking of values would be reversible in principle, it may be irreversible in
practice once the standards are established.
The regulation of general purpose AI models, which the original draft of the AI Act by the
European Commission did not consider, renders the implied values invisible in a similar way.
The AI Act distinguishes between AI systems, which can be high-risk, and AI models, which can
be general purpose and may be combined with further components to constitute an AI system
(Rec. 60a). This distinction leaves the risk-based classification of AI systems seemingly intact.
However, the new Title VIIIA introduces an additional classification for general purpose AI models
with (or without) systemic risk, depending on whether such a model has ‘high impact capabil-
ities’. It is presented as a technical matter of applying indicators, benchmarks and other meth-
odologies to identify or presume such capabilities, which the European Commission is tasked
to amend and supplement (Art. 52a; Rec. 60n).
If systemic risk is presumed as a result of meeting the threshold, a provider can still ‘demon-
strate that because of its specific characteristics, a general purpose AI model exceptionally does
not present systemic risks’ (Rec. 60o).
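The two-step structure of this classification, a presumption triggered by indicators that the provider may then try to rebut, can be sketched as follows. The indicator and threshold are placeholders invented for illustration (the Act delegates the concrete benchmarks and methodologies to the Commission), and none of the names is taken from the legal text.

```python
# Illustrative sketch of the two-step logic described above: a general purpose AI
# model is presumed to have "high impact capabilities" when it crosses some
# indicator threshold, and the provider can then try to rebut the presumption.
# The indicator name and threshold below are placeholders, not the Act's benchmarks.
from dataclasses import dataclass


@dataclass
class GPAIModel:
    name: str
    capability_indicator: float          # placeholder metric
    rebuttal_accepted: bool = False      # provider's demonstration per Rec. 60o


CAPABILITY_THRESHOLD = 1.0               # arbitrary illustrative value


def presumed_high_impact(model: GPAIModel) -> bool:
    return model.capability_indicator >= CAPABILITY_THRESHOLD


def poses_systemic_risk(model: GPAIModel) -> bool:
    # Presumption via threshold, unless the provider has successfully demonstrated
    # that the model "exceptionally does not present systemic risks".
    return presumed_high_impact(model) and not model.rebuttal_accepted


model = GPAIModel("frontier-model-x", capability_indicator=1.4)
print(poses_systemic_risk(model))  # True: presumed systemic risk, no rebuttal
```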
What systemic risks are is illustrated by a non-exhaustive list of examples in Rec. 60m. They range from ‘disruptions of critical sectors and serious consequences to public
health and safety’ to ‘foreseeable negative effects on democratic processes, public and eco-
nomic security’ and ‘dissemination of illegal, false, or discriminatory content’. It would not be
difficult to rephrase the examples in terms of adverse impact on fundamental rights. What
the notion of systemic risks seems to add is an emphasis on scale. Its inclusion acknowledges
that the scale of potential impact is not adequately captured as a sum of individual violations of fundamental rights. Yet each given example implies additional value judgements that are necessary to determine whether a consequence is serious, negative or generally undesirable. Those judgements will eventually be made when drawing up Codes of Practice (Rec.
60s), a process in which all providers of general-purpose AI models could participate, as well
as civil society organizations and other relevant stakeholders. Although potentially less arcane
than standardization bodies, the process is unlikely to consider the problem of ordering
values in any systematic way.
robustness, which is justified by the close intertwinement of ethical and robust AI. After briefly
mentioning the keywords safety, security and reliability, the text turns to the deduction of the
ethical principles from fundamental rights.
Consequently, the idea of trustworthy AI boils down to an orientation towards fundamental
rights. According to this logic, users of AI are supposed to trust an application, provided it
respects their subjective rights. Yet, the risk that AI presumably poses is an adverse impact on
these same rights. This leads to the third paradox: The attempt to delineate the possible harms
of AI by distinguishing trustworthy from untrustworthy systems results in a situation where the
proposed reason to trust, namely that AI systems respect fundamental rights, is undermined
by the reason trust is needed in the first place, which is that AI systems may adversely affect
fundamental rights.
The AI Act actually concedes that trustworthiness does not equal low risk. Article 67 covers
the possibility that ‘although a high-risk AI system is in compliance with this Regulation, it
presents a risk to the health or safety of persons, fundamental rights or to other aspects of
public interest protection’ (Art. 67, 1). It is the same paradox, posed for the side of the
decision-makers: The regulation supposed to ensure that there is no adverse impact on funda-
mental rights explicitly states that the rules it introduces cannot reliably achieve this goal.
Ultimately, it is up to the market surveillance authority of a Member State to evaluate the risk
an AI system poses (although not its trustworthiness), a task for which it is ill-equipped since
it predominantly depends on the information contained in the providers’ notifications (Veale
and Borgesius 2021, 111).
Discussion
The risk-based approach of the AI Act in retrospect validates Luhmann’s (1993, 19) rejection of
the distinction between risk and safety as a basis for the sociology of risk. His original concern
was the asymptotic character of safety, which is a legitimate goal and generally preferred state
that remains unreachable in absolute terms. Once the scope of risks is widened to fundamental
rights, as the AI Act does, safety is revealed to be a value (Luhmann 2005, 19). It stands beside
other values like equality or autonomy, which in the form of fundamental rights are considered
to be at risk as a result of (some) AI systems.
The envisioned risk-based EU regulation of AI is based on the notion of harm as adverse
impact on fundamental rights. This attempt at delineation is firstly supposed to clarify which
aspects of the future we can count on to resemble the present, despite the declared disruptive
potential of AI (time dimension). It is secondly supposed to point regulatory attention to high-risk
AI systems and distinguish them from systems for which voluntary instead of mandatory mea-
sures are sufficient (factual dimension). It thirdly proposes respect for fundamental rights per-
taining to subjects as the criterion that, when fulfilled, should enable subjects to trust AI systems
(social dimension). Consequently, fundamental rights are invoked as the key reference point for
decisions in the legal system, decisions in the political system and everyday decision-making
of individuals who are somehow affected by AI.
The first reference to fundamental rights points to their role in the legal system, where they
are codified but function more as principles that have to be adequately considered than as
prescriptive rules that have to be followed. Legal decisions therefore always have to take into
account fundamental rights, but no legal norm prevents restrictions to them in principle. Beyond
specific court decisions, the legal system offers no orientation as to how to assess whether a
fundamental right is harmed by an AI application.
The second reference to fundamental rights draws on them as basic values that AI systems
should respect, embody or be designed for. Yet, values mostly appear in political communication,
where they stand for changing priorities of decision-making. Values denote generally preferable
states and outcomes, and the potential for value conflicts becomes real whenever actors actually
attempt to orient their actions and decisions in terms of values. In the proposed regulation values
are supposed to be the object of protection; yet they are also the reference point that implicitly
or explicitly justifies decisions about including types of AI systems in the category of high risk.
The third reference to fundamental rights treats compliance with them, which the regulation
supposedly ensures, as an indicator of or a motive for trustworthiness. This is tautological since potential
harms to fundamental rights were identified as the problem in the first place. The focus on
trustworthiness of AI suggests that there are decisions to be made by those affected although
their being affected is independent from their (dis-)trust. Concurrently, for the decision-makers
deploying AI the risk remains that an AI application compliant with the regulations of the AI
Act – and thus trustworthy – may still be deemed to present a risk that is unacceptable.
The idea that a risk assessment is able to identify a class of technological systems that is
sufficiently likely to cause sufficient harm of any kind – including economic and societal – to
necessitate special regulation, which in turn reduces the likelihood and/or extent of harm, gets
caught up in paradoxes. It presupposes some delineation as to the harms considered but can
only offer the notion of fundamental rights and basic values, which is and has always been
subject to interpretation by the legal system on one hand and the political system on the other.
To be sure, the proposed AI Act demands a number of specific measures from those who wish
to sell or employ AI systems; it will have practical consequences. Yet, it will not do what it claims
to do. It will likely create an increased need for legal and administrative decisions about specific
cases, but these decisions cannot rest on more than what they would have rested on without
the AI Act, namely a consistent legal argument balancing fundamental rights on one hand and
a temporary prioritisation of some values over others on the other hand. Concurrently, it will
create a number of new obligations for companies developing and deploying AI systems, with
an unclear impact on the eventual risks these systems pose. The addition of systemic risks at a
late stage of drafting the AI Act indicates the ambition to comprehensively map the risks and yet
hints at its eventual futility within a legal framework. The particular challenge posed by compre-
hensiveness is the necessity to bridge the gap between an abstract thinking about risks and the
concretization of rules that are applicable to specific AI systems. Here the AI Act opens the door
to decision-making that, although not necessarily arbitrary, has nothing to do with the estimation
of risks and everything to do with negotiations between interests that hugely differ in terms of power.
Conclusion
This article does not offer a comprehensive account of the AI Act and its legal and political impli-
cations. The proposed perspective is selective insofar as it focuses on the content of the legislation
from the point of view of sociological risk research, for which the regulation’s key notion of risks
to fundamental rights is unusual. It contributes to risk research by analysing the implications of
thinking about harms from technologies in terms of fundamental rights. Concurrently, it offers a
preliminary analysis of some shortcomings of the AI Act that are the result of framing this thinking
as an assessment of risks. Written after the agreement about the text of the AI Act but before it
enters into force, the analysis is limited to what the AI Act itself, its earlier drafts and key docu-
ments of reference communicate. Future research should focus on the Codes of Conduct and the
standards that are supposed to be developed to offer guidance to providers and deployers of AI
systems, in particular on the extent to which they remain coupled to or become decoupled from
the ambitious attempt to expand the notion of risk within a legal framework.
Note
1. The notion of trustworthy technology is not new. With regard to nanotechnology, for example, Myskja
(2011, 49) suggested, however, that the focus needed to be on ‘the body of scientific practitioners and practices that is to be trusted’.
Disclosure statement
No potential conflict of interest was reported by the author(s).
ORCID
Isabel Kusche http://orcid.org/0000-0002-2596-0564
References
Alon-Barkat, Saar, and Madalina Busuioc. 2023. “Human–AI Interactions in Public Sector Decision Making: ‘Automation
Bias’ and ‘Selective Adherence’ to Algorithmic Advice.” Journal of Public Administration Research and Theory 33
(1): 153–169. https://doi.org/10.1093/jopart/muac007.
Aradau, C., and T. Blanke. 2017. “Politics of Prediction: Security and the Time/Space of Governmentality in the
Age of Big Data.” European Journal of Social Theory 20 (3): 373–391. https://doi.org/10.1177/1368431016667623.
Bareis, Jascha, and Christian Katzenbach. 2022. “Talking AI into Being: The Narratives and Imaginaries of National
AI Strategies and Their Performative Politics.” Science, Technology, & Human Values 47 (5): 855–881. https://doi.
org/10.1177/01622439211030007.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of
Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922.
Bertuzzi, Luca. 2022. “Leading MEPs Exclude General-Purpose AI from High-Risk Categories - for Now”. www.
Euractiv.Com. December 12, 2022. https://www.euractiv.com/section/artificial-intelligence/news/leading-mep
s-exclude-general-purpose-ai-from-high-risk-categories-for-now/.
Boholm, Åsa, and Hervé Corvellec. 2011. “A Relational Theory of Risk.” Journal of Risk Research 14 (2): 175–190.
https://doi.org/10.1080/13669877.2010.515313.
Boltanski, Luc, and Laurent Thévenot. 2006. On Justification: Economies of Worth. Princeton Studies in Cultural Sociology. Princeton: Princeton University Press.
Brown Weiss, Edith, John H. Jackson, and Nathalie Bernasconi-Osterwalder, eds. 2008. Reconciling Environment and Trade. 2nd ed. Leiden: Brill Nijhoff. https://brill.com/edcollbook/title/14212.
Burt, Andrew, Brenda Leong, and Stuart Shirrell. 2018. Beyond Explainability: A Practical Guide to Managing Risk in
Machine Learning Models. Future of Privacy Forum. https://fpf.org/wp-content/uploads/2018/06/
Beyond-Explainability.pdf.
Campolo, Alexander, and Kate Crawford. 2020. “Enchanted Determinism: Power without Responsibility in Artificial
Intelligence.” Engaging Science, Technology, and Society 6 (January): 1–19. https://doi.org/10.17351/ests2020.277.
‘CDT Europe’s AI Bulletin: March 2023’. 2023. Center for Democracy and Technology (blog). March 12, 2023. https://
cdt.org/insights/cdt-europes-ai-bulletin-march-2023/.
Centeno, Miguel A., Manish Nag, Thayer S. Patterson, Andrew Shaver, and A. Jason Windawi. 2015. “The Emergence
of Global Systemic Risk.” Annual Review of Sociology 41 (1): 65–85. https://doi.org/10.1146/annurev-soc-
073014-112317.
Christoffersen, Mikkel Gabriel. 2018. “Risk, Danger, and Trust: Refining the Relational Theory of Risk.” Journal of
Risk Research 21 (10): 1233–1247. https://doi.org/10.1080/13669877.2017.1301538.
Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale
University Press.
Curran, Dean. 2018. “Risk, Innovation, and Democracy in the Digital Economy.” European Journal of Social Theory
21 (2): 207–226. https://doi.org/10.1177/1368431017710907.
Denton, Emily, Alex Hanna, Razvan Amironesei, Andrew Smart, and Hilary Nicole. 2021. “On the Genealogy of
Machine Learning Datasets: A Critical History of ImageNet.” Big Data & Society 8 (2): 205395172110359. https://
doi.org/10.1177/20539517211035955.
EDRi. 2021. “Beyond Debiasing. Regulating AI and Its Inequalities”. https://edri.org/wp-content/uploads/2021/09/
EDRi_Beyond-Debiasing-Report_Online.pdf.
Engel, Christoph. 2001a. “Delineating the Proper Scope of Government: A Proper Task for a Constitutional Court?”
Journal of Institutional and Theoretical Economics 157 (1): 187–219. https://doi.org/10.1628/0932456012974675.
Engel, Christoph. 2001b. “The European Charter of Fundamental Rights: A Changed Political Opportunity Structure and Its Normative Consequences.” European Law Journal 7 (2): 151–170. https://doi.org/10.1111/1468-0386.00125.
Eubanks, Virginia. 2019. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. First Picador
ed. New York: Picador St. Martin’s Press.
European Commission. 2020. “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.” https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en.
European Commission. 2021. Proposal for a Regulation of the European Parliament and of the Council Laying down
Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative
Acts. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex:52021PC0206.
European Parliament. 2023. “Artificial Intelligence Act. Amendments Adopted by the European Parliament on 14
June 2023 on the Proposal for a Regulation of the European Parliament and of the Council.” https://www.
europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html.
Facchi, Alessandra, and Nicola Riva. 2021. “European Values in the Charter of Fundamental Rights: An Introduction.”
Ratio Juris 34 (1): 3–5. https://doi.org/10.1111/raju.12308.
Galaz, Victor, Miguel A. Centeno, Peter W. Callahan, Amar Causevic, Thayer Patterson, Irina Brass, Seth Baum, et al.
2021. “Artificial Intelligence, Systemic Risks, and Sustainability.” Technology in Society 67 (November): 101741.
https://doi.org/10.1016/j.techsoc.2021.101741.
Greene, Daniel, Anna Lauren Hoffmann, and Luke Stark. 2019. “Better, Nicer, Clearer, Fairer: A Critical Assessment
of the Movement for Ethical Artificial Intelligence and Machine Learning”. Hawaii International Conference on
System Sciences, 10. https://doi.org/10.24251/HICSS.2019.258.
Hacker, Philipp. 2021. “A Legal Framework for AI Training Data—from First Principles to the Artificial Intelligence
Act.” Law, Innovation and Technology 13 (2): 257–301. https://doi.org/10.1080/17579961.2021.1977219.
Hagendorff, Thilo. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30 (1): 99–120.
https://doi.org/10.1007/s11023-020-09517-8.
Helberger, Natali, and Nicholas Diakopoulos. 2023. “ChatGPT and the AI Act.” Internet Policy Review 12 (1): 1–6.
https://policyreview.info/essay/chatgpt-and-ai-act. https://doi.org/10.14763/2023.1.1682.
High-Level Expert Group on AI. 2019. “Ethics Guidelines for Trustworthy AI.” https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
Hilgartner, Stephen. 1992. “The Social Construction of Risk Objects: Or, How to Pry Open Networks of Risk.” In
Organizations, Uncertainties, and Risk, edited by James F. Short and Lee Clarke, 39–53. Boulder, CO.: Westview
Press.
Japp, Klaus P., and Isabel Kusche. 2008. “Systems Theory and Risk.” In Social Theories of Risk and Uncertainty. An
Introduction, edited by Jens O. Zinn. 76–103. Malden, MA: Blackwell.
Jasanoff, Sheila. 1999. “The Songlines of Risk.” Environmental Values 8 (2): 135–152. https://doi.org/10.1177/096327199900800202.
Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine
Intelligence 1 (9): 389–399. https://doi.org/10.1038/s42256-019-0088-2.
Joint Research Centre. 2023. “Analysis of the Preliminary AI Standardisation Work Plan in Support of the AI Act.”
LU: European Commission. https://data.europa.eu/doi/10.2760/5847.
Kaminski, Margot E. 2023. “Regulating the Risks of AI.” Boston University Law Review 103 (5): 1347–1411.
Laux, Johann, Sandra Wachter, and Brent Mittelstadt. 2023. “Trustworthy Artificial Intelligence and the European
Union AI Act: On the Conflation of Trustworthiness and Acceptability of Risk.” Regulation & Governance 18 (1):
3–32. https://doi.org/10.1111/rego.12512.
Laux, Johann, Sandra Wachter, and Brent Mittelstadt. 2024. “Three Pathways for Standardisation and Ethical
Disclosure by Default under the European Union Artificial Intelligence Act.” Computer Law & Security Review
53: 1–11.
Luhmann, Niklas. 1992. “Die Beschreibung der Zukunft.” In Beobachtungen der Moderne, 29–47. Opladen: Westdeutscher Verlag.
Luhmann, Niklas. 1993. Risk: A Sociological Theory. Berlin: de Gruyter. http://www.gbv.de/dms/hbz/toc/ht004524434.PDF.
Luhmann, Niklas. 2000. Die Politik der Gesellschaft. Frankfurt a.M.: Suhrkamp.
Luhmann, Niklas. 2004. Law as a Social System. Oxford Socio-Legal Studies. Oxford; New York: Oxford University Press.
Luhmann, Niklas. 2005. Risk: A Sociological Theory. 1st paperback ed. New Brunswick, NJ: Aldine Transaction.
Luhmann, Niklas. 2017. Trust and Power. English ed. Malden, MA: Polity.
‘MEPs Seal the Deal on Artificial Intelligence Act’. 2023. www.Euractiv.Com. April 27, 2023. https://www.euractiv.
com/section/artificial-intelligence/news/meps-seal-the-deal-on-artificial-intelligence-act/.
Myskja, Bjørn K. 2011. “Trustworthy Nanotechnology: Risk, Engagement and Responsibility.” Nanoethics 5 (1):
49–56. https://doi.org/10.1007/s11569-011-0116-0.
Nemitz, Paul. 2018. “Constitutional Democracy and Technology in the Age of Artificial Intelligence.” Philosophical
Transactions. Series A, Mathematical, Physical, and Engineering Sciences 376 (2133): 20180089. https://doi.
org/10.1098/rsta.2018.0089.
O’Malley, Pat. 2009. “‘Uncertainty Makes Us Free’. Liberalism, Risk and Individual Security.” BEHEMOTH - A Journal
on Civilisation 2 (3): 24–38. https://doi.org/10.6094/behemoth.2009.2.3.705.
Renn, Ortwin, Klaus Lucas, Armin Haas, and Carlo Jaeger. 2019. “Things Are Different Today: The Challenge of Global
Systemic Risks.” Journal of Risk Research 22 (4): 401–415. https://doi.org/10.1080/13669877.2017.1409252.
Rességuier, Anaïs, and Rowena Rodrigues. 2020. “AI Ethics Should Not Remain Toothless! A Call to Bring Back the
Teeth of Ethics.” Big Data & Society 7 (2): 205395172094254. https://doi.org/10.1177/2053951720942541.
Schiølin, Kasper. 2020. “Revolutionary Dreams: Future Essentialism and the Sociotechnical Imaginary of the Fourth
Industrial Revolution in Denmark.” Social Studies of Science 50 (4): 542–566. https://doi.org/10.1177/0306312719867768.
Sztompka, Piotr. 2003. Trust: A Sociological Theory. Cambridge: Cambridge University Press.
Tierney, Kathleen J. 1999. “Toward a Critical Sociology of Risk.” Sociological Forum 14 (2): 215–242. https://doi.
org/10.1023/A:1021414628203.
Veale, Michael, and Frederik Zuiderveen Borgesius. 2021. “Demystifying the Draft EU Artificial Intelligence Act.”
Computer Law Review International 22 (4): 97–112. https://doi.org/10.9785/cri-2021-220402.
Veale, Michael, Kira Matus, and Robert Gorwa. 2023. “AI and Global Governance: Modalities, Rationales, Tensions.”
Annual Review of Law and Social Science 19 (1): 255–275. https://doi.org/10.31235/osf.io/ubxgk.
Wynne, Brian. 2001. “Creating Public Alienation: Expert Cultures of Risk and Ethics on GMOs.” Science as Culture
10 (4): 445–481. https://doi.org/10.1080/09505430120093586.
Zajko, Mike. 2022. “Artificial Intelligence, Algorithms, and Social Inequality: Sociological Contributions to
Contemporary Debates.” Sociology Compass 16 (3): E12962. https://doi.org/10.1111/soc4.12962.