0% found this document useful (0 votes)
31 views8 pages

The Eu Ai Act and The Wager On Trustworthy AI

The document discusses the ethical, societal, and regulatory challenges posed by AI systems, emphasizing the need for transparency, accountability, and interdisciplinary collaboration in their development. It highlights the EU AI Act as a significant regulatory framework aimed at ensuring the responsible deployment of AI technologies across member states. The article also calls for enhanced education on AI ethics and societal impacts to foster public trust and understanding of these systems.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
31 views8 pages

The Eu Ai Act and The Wager On Trustworthy AI

The document discusses the ethical, societal, and regulatory challenges posed by AI systems, emphasizing the need for transparency, accountability, and interdisciplinary collaboration in their development. It highlights the EU AI Act as a significant regulatory framework aimed at ensuring the responsible deployment of AI technologies across member states. The article also calls for enhanced education on AI ethics and societal impacts to foster public trust and understanding of these systems.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 8

research and advances

DOI:10.1145/ 3665322
internal principles of operation are
As the impact of AI is difficult to assess by a unknown, leading to severe safety and
regulation problems. Once trained,
single group, policymakers should prioritize deep-learning systems perform well,
societal and environmental well-being and but they are subject to surprising vul-
seek advice from interdisciplinary groups nerabilities when confronted with ad-
versarial images.9
focusing on ethical aspects, responsibility, and The decisions may be explicated
transparency in the development of algorithms. after the fact, but these systems carry
the risk of wrong decisions affecting
BY ALEJANDRO BELLOGÍN, OLIVER GRAU, STEFAN LARSSON, the well-being of people. They may be
GERHARD SCHIMPF, BISWA SENGUPTA, AND GÜRKAN SOLMAZ discriminated against, disadvantaged,
or seriously injured. Examples include

The EU AI
suggestions on how to select a job ap-
plicant, proper medical treatment for
a patient, or how to navigate autono-
mous cars through heavy traffic. In
such situations, several ethical, legal,

Act and the


and general societal challenges arise.
At the forefront is the question of who
is responsible for a decision made by
an AI system: Do we leave the decision

Wager on
to the AI system, or does a human de-
cide in partnership with an AI system?
Are there reliable, trustworthy, and
understandable explanations for the
decisions in each case? Unfortunately,

Trustworthy
the inner workings of many AI systems
remain hidden—even from experts.
Given the critical role AI systems play
in modern society, this seems in many
cases unacceptable. But how can we

AI
make complex, self-learning systems
explainable? And to what extent is this
lack of explanation or broader trans-
parency contributing to a watchful and
responsible introduction of AI systems
that have evidenced benefits?

key insights
˽ Public trust, transparency, and
interdisciplinary research are pivotal
in the responsible deployment of AI
systems are increasingly
A R T I F ICI A L I N T EL L IGENCE (A I) systems.
supplementing or taking over tasks previously ˽ The EU AI Act passed by the European
Parliament will now be implemented
performed by humans. On the one hand, this relates in 27 member states of the EU. It is the
to low-risk tasks, such as recommending books first major law aimed at regulating AI
across sectors, with a focus on risk
ILLUSTRATION BY JACEY TEC

or movies, or recommending purchases based on management, transparency, ethical


governance, and human oversight.
previous buying behavior. But it also includes crucial ˽ AI systems categorized as high risk will
decision making by highly autonomous systems. Many be subject to stringent regulations to
ensure they do not compromise human
current systems are opaque in the sense that their rights or safety.

58 COM MUNICATIO NS O F TH E ACM | D EC EM BER 2024 | VO L . 67 | NO. 1 2


DEC E MB E R 2 0 2 4 | VO L. 6 7 | N O. 1 2 | C OM M U N IC AT ION S OF T HE ACM 59
research and advances

A deeper look at the technical de- insight to carry out legally required in-
tails of AI and technical innovations dependent safety certifications.
on their way, such as autonomous Creating and supporting sustain-
systems, shows an obvious need for able solutions. In light of the UN sus-
technical expertise in the practical
technical and societal aspects of AI Many current tainability development goals, we rec-
ommend advancing multidisciplinary
in the decision-making process. On
the other hand, a purely technological
systems are research methodologies that integrate
social sciences and humanities along-
perspective may result in regulations opaque in the side engineering sciences. Social sci-
that cause more significant societal
problems. This article highlights ac-
sense that their ences, such as sociology and anthro-
pology, can provide crucial insights
curate and realistic technology de- internal principles into how people understand, interact
scriptions that take into account the
risk factors as required, for example,
of operation are with, and trust AI systems. This un-
derstanding is vital for designing tech-
by the risk pyramid of the EU AI Act unknown, leading nologies that are socially acceptable,
that entered into force in August 2024.
To strike such a balance for the pub- to severe safety beneficial, and promote sustainable
development. Humanities disciplines,
lic interest, policymakers should pri- and regulation like philosophy, can offer valuable per-

problems.
oritize societal and environmental spectives on ethics, fairness, and the
well-being and seek advice from in- potential impact of AI on human val-
terdisciplinary groups, as the impact ues. This combined approach can lead
of AI and autonomous systems is very to developing sustainable and energy-
difficult to assess by a single group. efficient autonomous systems that
This more holistic system view is com- align with societal well-being.
plementary to previous statements Prioritizing societal well-being and
focusing on ethical aspects, responsi- equal opportunities. We recommend
bility, and transparency in the devel- that the legislative processes, espe-
opment of algorithms,1 specifically on cially in adapting existing laws and the
algorithmic systems involving AI and new design of liabilities, take an in-
machine learning (ML).3,15,22 terdisciplinary approach and consult
Many members of the public, par- the scientific and technical expertise
ticularly in Europe, exhibit skepticism in trusted AI. This should ideally lead
toward AI and autonomous systems, to equal opportunities and fairness in
which often translates into a lack of new business development consider-
confidence or a cautious “wait-and- ing new autonomous systems and pre-
see” approach.23 For this technology venting monopolies.
to develop to its beneficial potential, Promoting education on science,
we need a framework of rules within technology, social impact, and ethics.
which all players can operate responsi- To foster responsible and beneficial
bly. For the future of AI systems—spe- use of AI, we propose enhancing educa-
cifically in the public spheres, where tional curricula in secondary schools,
people express their personal expecta- universities, and technical fields to in-
tions and worries about the potential clude fundamental knowledge about
consequences of AI being used with- AI ethics and its impact on society.
out proper oversight—certain aspects Incorporating ethical and social scien-
must be taken into account. The fol- tific aspects into computer science (CS)
lowing points are crucial for guiding curricula, as exemplified by Stanford
the formulation of policies and regu- University’s approach, will encourage
lations related to AI and essential for students to consider “embedded” ethi-
the research and development com- cal, legal, or social implications while
munity: solving problems. Similarly, in Europe,
Supporting research and develop- some institutions teach CS students to
ment in AI and autonomous systems. relate the ACM Code of Ethics for Pro-
We recommend advanced research fessional Conduct1 to their tasks, fos-
on the governance of implemented AI tering a sense of responsibility in their
and automated systems, for example, future AI-related endeavors.
transportation. Special care must be The overall level of expertise in all
taken at an early stage to contribute levels of our society about how AI works
and adhere to transparent standards and operates represents a critical suc-
for hardware and software that provide cess factor that will ultimately lead to

60 COM MUNICATIO NS O F TH E AC M | D EC EM BER 2024 | VO L . 67 | NO. 1 2


research and advances

confidence and acceptance of benefi- rates with the EU Commission as a if not impossible, to fully explain their
cial uses of these technologies in our stakeholder and representative of the outcomes. This skepticism raises criti-
daily lives. Policymakers, developers, European CS community, providing cal questions about the feasibility of
and adopting users of AI systems need technical input on relevant initiatives. achieving truly transparent AI systems.
to be literate about these technologies While the Commission looks broadly Therefore, ways to establish an in-
and find answers at the intersection of at an assessment of AI from a general dividual trust in AI must be sought.
technology, society, and policymak- point of view to preserve the values of However, more than detailed expla-
ing. Furthermore, we should weigh the the European member states, a more nations of individual outcomes will
risks of autonomous systems against comprehensive judgment will result if be required for the public. In Knowles
the benefits to allay public fears. the predictive assessments of all the and Richards,14 the authors call for
The points mentioned here high- actors, that is, owners, designers, de- building a public regulatory ecosys-
light the need for an interdisciplinary velopers, and researchers, are taken tem based on traceable documenta-
and holistic approach to beneficial us- into account.1,3,5,22 This process led to tion and auditable AI, with a slightly
age of AI. They set the foundation for a the proposal of an AI Act, first pub- different emphasis than the one on
broader involvement of the public on lished by the European Commission individual transparency and informa-
one hand and the subsequent develop- in April 2021, and the final version in tion for all.
ment of the EU AI Act. To be informed force starting August 2024, which this Robustness and verification. Given
about the endeavors of a supranational article discusses later. the complexity, more work needs to be
governmental organization such as done by interdisciplinary teams bring-
the EU, striving to establish consen- Essentials for Achieving ing together the social sciences and
sus across 27 member states regarding Trustworthy AI Systems humanities expertise with computer
the legal regulation of AI, is likely to Implementers of AI and autonomous scientists, software engineers, legal
capture the attention of a diverse in- systems must be aware of what we, as scholars, and political scientists in in-
ternational readership. This audience responsible citizens, can accept and vestigating what meaningful control
includes academics in the field of AI what is ethical, and put laws and regu- and verification procedures for AI sys-
ethics, explainable AI, and risk man- lations in place to safeguard against tems might look like in the future.
agement as well as professionals who future tragedies. Trustworthy AI Safety, risk issues, and ethical deci-
may be called upon to provide techni- should, for example, according to the sions. In the domain of autonomous
cal expertise to lawmakers in other European Commission’s High-Level vehicles, looking at the state of the
parts of the world. Expert Group on AI, respect all appli- art to avoid collisions, autonomous
cable laws and regulations and a se- cars have been trained not only to re-
Background: EU Policies on ries of requirements for the particular spect traffic rules rigorously but also to
AI and Ethics Guidelines sector. Specific assessment lists aim ‘drive cautiously’, that is, negotiate and
Considered one of the ‘lighthouse’ to help verify the application of each not enforce the right of way. Even in the
projects, public trust in autonomous essential requirement. The following case of unavoidable and dilemmatic
systems is a crucial issue, well in line list of essentials is taken from the EU situations, legislation is underway to
with recent awareness in the gover- document “Building Trust in Human- respect the ethical dilemma, a.k.a. the
nance over AI12,16,21 expressed in the Centric Artificial Intelligence,” which Trolley Dilemma, investigated in Awad
joint agreement of the EU Commis- results from the work of a European et al.4 and Goodall.11
sion and EU Council’s proposal for High-Level Expert Group on ethics.c In the context of public expecta-
a new European AI Act,a as well as Additional perspectives are covered in tions, it is important to understand
the High-Level Expert Group called a report by the Alan Turing Institute.18 that there is no universally “right”
in by the EU Commission in 2019.8 Developing trust in autonomous answer when it comes to making de-
The High-level Expert Group’s Ethics systems in the public sphere. Human cisions in dilemma situations. Pri-
Guidelines echo several critical issues agency and oversight. The essentials marily, an algorithm should not be
on human-centered and transparent described above in the direction of an constrained to making predefined
approaches pointed to several prin- explainable and trustworthy AI may be decisions. Nevertheless, ongoing dis-
cipled documents.13 suitable to convince professionals who cussions about this topic persist in
The EU Commission takes a three- knowingly interact with AI systems.6,7 society. Furthermore, the lack of ac-
step approach: setting out the essen- It would be similarly essential to en- ceptance for autonomous driving can
tial requirements for trustworthy AI, sure trust in these systems among the be attributed to the fact that humans
launching a large-scale pilot phase public. However, it is important to note are allowed to make mistakes, where-
for feedback from stakeholders, and that explainability in AI, particularly in as there seems to be zero tolerance for
working on international consen- deep neural networks (DNNs), remains any mistakes made by AI.
sus-building for human-centric AI.b a significant scientific challenge. Some Cybersecurity. In the cybersecurity
Among others, the ACM Europe Tech- scientists argue that the inherent com- domain, other than attacks through
nology Policy Council (TPC)2 collabo- plexity and the high-dimensional na- the Internet, there are also AI-specific
ture of these models make it difficult, attacks, such as adversarial learning,
a See https://bit.ly/4gP6j5d which researchers have successfully
b See https://bit.ly/4eL8kNK c See https://bit.ly/3XTZ5nL demonstrated from the Tencent Keen

DEC E MB E R 2 0 2 4 | VO L. 6 7 | N O. 1 2 | C OM M U N IC AT ION S OF T HE ACM 61


research and advances

Security Lab.9 AI systems such as au- which depends largely on how the fol- individual application requests.
tonomous vehicles must demonstrably lowing areas of concern are rated and ˲ Canada. Canada has established
be able to defend themselves and go prioritized by policymakers: the Pan-Canadian Artificial Intelli-
into a safe mode in case of doubt. ˲ Global governance and interna- gence Strategy, which aims to promote
Physical security. There might be tional cooperation the responsible development and use
physical attacks, such as throwing a ˲ Maximizing beneficial AI research of AI. The strategy includes funding for
paint bag against the cameras to blind and development research, development, and innova-
an autonomous system or using a laser ˲ Impact on the workforce tion in AI, as well as ethical guidelines
pointer against the LiDAR. In cases ˲ Accountability, transparency, and for its use.
like these, error handling must be able explainability ˲ United Kingdom. The U.K. has es-
to bring the system into a safe mode. ˲ Surveillance, privacy, and civil lib- tablished the AI Council, which aims to
Data privacy. People have the erties promote the responsible use of AI and
right to determine if they want to be ˲ Fairness, ethics, and human rights advise the government on AI regula-
“filmed” and whether they want their ˲ Manipulation tion. The council has published guide-
location, date, and time to be recorded ˲ Implications for health lines on ethical use. The approach so
and shared. To build trust, autono- ˲ National security far aims to ensure consumers “have
mous systems manufacturers must ad- ˲ Artificial general intelligence and confidence in the proper functioning
here to the data-protection principles superintelligence of the system.”
in the GDPR to ensure that no privacy Looking at the major players, we ˲ The G7. During its summit meet-
rights are being violated. see: ing on May 20, 2023 in Hiroshima, the
Trust and human factors. Different ˲ United States. The White House G7 issued a statement about what it
levels of trust and comfort may arise has published a ‘Blueprint for an AI called the ‘Hiroshima AI Process’.
through explanation, for example, if Bill of Rights’, a set of five principles “We recognize the need to imme-
an autonomous car explains its ma- and associated practices to help guide diately take stock of the opportunities
neuvers to its passengers and road us- the design, use, and deployment of au- and challenges of generative AI, which
ers outside the vehicle. tomated systems to protect the rights is increasingly prominent across coun-
Trust and legal systems. The decisive of the American public in the age of tries and sectors, and encourage in-
question is who or what caused the er- AI. However, there is currently no fed- ternational organizations to consider
ror: The human at the wheel? A flawed eral AI regulation in the U.S., but some analysis on the impact of policy devel-
system? A defective sensor? Complete states have taken steps to regulate par- opments and Global Partnership on AI
digitization makes it possible to an- ticular use cases and the use of AI in (GPAI) to conduct practical projects.
swer these questions. To do this, how- specific industries. For example, Cali- In this respect, we task relevant min-
ever, extensive data must be stored. fornia passed a law requiring compa- isters to establish the Hiroshima AI
Open legal questions that need to be nies to disclose the use of automated process, through a G7 working group,
clarified in this context include who decision making in employment and in an inclusive manner and in coopera-
owns these data, who has access to the housing. Overall, the strategy is busi- tion with the OECD and GPAI, for dis-
data, and whether this is compatible ness oriented. After the appearance of cussions on generative AI by the end of
with privacy protection. ChatGPT, the U.S. Senate Committee 2024. These discussions could include
Public administration. The answers on the Judiciary’s Subcommittee on topics such as governance, safeguard
to the above must be found because Privacy, Technology and the Law held of intellectual property rights includ-
they represent significant citizen con- several hearings with leading AI aca- ing copy rights, promotion of transpar-
cerns. In our capacity as members of demics to evaluate the risks of genera- ency, response to foreign information
the ACM Europe TPC, we contribute to tive AI. In October 2023, the Executive manipulation, including disinforma-
the work by the EU Commission and Order on Safe, Secure, and Trustworthy tion, and responsible utilization of
EU Parliament to establish harmo- Development and Use of Artificial In- these technologies.”10 In October 2023,
nized rules for the use of AI. Our com- telligence, signed by President Biden, this was followed by the publication of
ments from the perspective of autono- arose from a desire to address both the AI guidelines for a ‘Hiroshima Process’
mous systems can be found in Saucedo potential benefits and risks of AI. for advanced AI systems and a code of
et al.20 ˲ China. China has been actively conduct for developer organizations.
investing in AI and has taken steps to In the EU, preparations for AI regu-
AI Legislation in the EU: The AI Act regulate its use, including developing lation began in April 2021, when the
AI policy work is underway globally national AI standards and guidelines EU Commission presented the Arti-
in most industrial countries. Partner- for ethical use. The country has also ficial Intelligence Act, which sets out
ing with PricewaterhouseCoopers, the established a national AI development horizontal rules for the development,
Future of Life Institute offers a dash- plan that sets out its goals and objec- commodification, and use of AI-driven
board on its website24 with a wealth of tives for the industry. China has signif- products, services, and systems within
information and references to docu- icantly restricted the use of generative the territory of the EU. It should be
ments. According to their analysis, the AI. ChatGPT is blocked within the Chi- noted that the EU AI legislation does
approach to govern AI varies greatly nese network, and access to domestic not regulate AI technology per se, but
between soft and hard law efforts, alternatives is granted solely through rather the effect of AI products on the

62 COMM UNICATIO NS O F THE ACM | D EC EM BER 2024 | VO L . 67 | NO. 1 2


research and advances

lives of EU citizens. There is no inten- Figure. The EU Risk Pyramid.


tion to intervene in the development of
AI products, but there is a claim to help
shape their use in the EU. The regula-
tion provides core AI rules that apply to
all industries.
The EU AI Act introduces a sophisti- Unacceptable Prohibited
Risk AI practices
cated ‘product safety framework’ con-
structed around four risk categories
Regulated
as evidenced in the figure. It imposes High Risk
AI Systems
requirements for market entrance and
certification of high-risk AI systems
Limited Risk Transparency
through a mandatory CE-marking pro-
cedure. To ensure equitable outcomes,
this pre-market conformity regime Minimal Risk No Obligation
also applies to ML training, testing,
and validation datasets. The Act seeks
to codify the high standards of the EU
trustworthy AI paradigm, which re-
quires AI to be legally, ethically, and Table 1. Contents of the EU AI Act.
technically robust while respecting
democratic values and human rights, Chapter I: General Provisions Outlines the proposal’s scope and how it would affect the market
including privacy and the rule of law. once in place.
This is claimed to be the first law Chapter II: Prohibited AI Defines AI systems that violate fundamental rights and are
worldwide to regulate AI in all areas of Practices categorized at an unacceptable level of risk.

life, except the military sector. The leg- Chapter III: High-Risk Covers the specific rules for classifying AI systems as high risk, the
AI Systems connected requirements and obligations for Providers and Depolyers
islative process reached a milestone in and other parties.
December 2023, when the EU Commis- Chapter IV: Transparency Lists transparency obligations for systems that interact with
sion, the EU Council, and Parliament Obligations for Providers and humans, detect emotions, or determine social categories based on
managed to reach an agreement in the Deployers of Certain AI Systems biometric data, or generate or manipulate content (for example,
and GPAI Models: ‘deep fakes’).
so-called "trilogue." After subsequent
Chapter V: General Purpose Classification Rules, obligations for poviders of general-purpose AI
approval from votes in Parliament and AI Models Models, and GPAI Models with systemic risk.
the Council, the regulation came into Chapter VI: Measures in AI regulatory sandboxes, testing of high-risk AI systems in real
force in August 2024, shifting the at- Support of Innovation world conditions.
tention to member states to set up su- Chapter VII: Governance Establishing the Act’s governance systems, including the AI
pervisory bodies, the standardization Office and the AI Board, and monitoring functions of the European
Commission and national authorities.
bodies to develop harmonized stan-
Chapter VIII: EU Database EU database for high-risk AI systems listed in Annex III.
dards for high-risk AI compliance, and for High-Risk AI Systems
for the new AI Office to develop guide- Chapter IX: Post-Market Sharing of information on serious incidents: Supervision,
lines. Monitoring, Information Sharing, investigation, enforcement, and monitoring with respect to providers
Market Surveillance of general-purpose AI models.
Who is affected by the new regula-
tion? Companies that plan to provide Chapter X: Codes of Conduct Guidelines from the Commission on the implementation of this
and Guidelines regulation.
or deploy AI systems in the EU (the
Chapter XI: Delegation of Power Exercise of the delegation and committee procedure.
“providers and deployers” accord- and Committee Procedure
ing to the wording of the Act) are the Chapter XII: Confidentiality Administrative fines on union institutions, agencies and bodies. Fines
primary addresses bound by the pro- and Penalties for providers of general-purpose AI models.
visions of the AI Act. They apply re- Chapter XIII: Final Provisions Amendments to several articles in other legislation.
gardless of where the systems were
developed or are operated from—or
when the operation of the systems im- Given the need for more awareness ments since the initial proposal to the
pacts EU citizens. It will take courage outside the EU, companies are well ad- AI Act as of July 2023 may be found in
and creativity to legislate this convo- vised to start early to learn what is in Zenner.25
luted, interdisciplinary issue and will the EU AI Act and what is needed to The risk pyramid of the AI Act. The
require non-EU, namely U.S. and Chi- meet the compliance criteria. main guiding point of the AI Act is the
nese companies, to adhere to values- The essence of the EU AI Act. The risk pyramid with a core focus on high-
based EU standards before their AI AI act contains the following sections, risk applications. The risk levels, as de-
products and services gain access to called titles.d,13 A collection of all pub- picted previously in Figure 1, are sum-
the European market of 450 million licly available documents and amend- marized below.
consumers. Consequently, the pro- Unacceptable risk. This category
posal has an extraterritorial effect. d See https://bit.ly/4dEOJOh delineates which uses of AI systems

DEC E MB E R 2 0 2 4 | VO L. 6 7 | N O. 1 2 | C OM M U N IC AT ION S OF T HE ACM 63


research and advances

carry an unacceptable level of risk to biometric data of an individual be- mary about the content used for train-
society and individuals and are thus longs to a group with some predefined ing the GPAI model.
prohibited under the law. These pro- characteristic to take a specific ac- ˲ Free and open-license GPAI mod-
hibited use cases include AI systems tion; emotion recognition; and deep- els whose parameters, including
that entail social scoring, subliminal fake systems. weights, model architecture, and mod-
techniques, biometric identification Minimal risk. The proposal’s lan- el usage are publicly available, allowing
in public spaces, and exploiting peo- guage describes minimally risky AI for access, usage, modification, and
ple’s vulnerabilities. In these uses, the systems as all other systems not cov- distribution of the model, only have to
AI Act describes when and how excep- ered by its safeguards and regula- comply with the latter two obligations
tions may be made, such as in emer- tions. There are no requirements for above. This exception does not apply to
gencies related to law enforcement systems in this category. Of course, GPAI models with systemic risks.
and national security. businesses with multiple kinds of AI The GPAI models are presumed to
High risk. Requirements related to systems must ensure compliance with carry “systemic risk” when the cumu-
high-risk systems, such as compliance each appropriately. lative amount of computation used for
with risk-mitigation requirements Handling general-purpose AI with its training is greater than 1025 floating
like documentation, data safeguards, or without systemic risk. As a result point operations per second (FLOPS),
transparency, and human over- of the increased general capabilities or through evaluation or the Commis-
sight, are at the crux of this proposed of several new AI models during the sion’s decision have been found to
regulation. The list of high-risk AI sys- spring of 2023, and the broad adop- have the high-impact capabilities that
tems that must deploy additional safe- tion of ChatGPT, there were intense implicate this classification. If their
guards is lengthy and can be found in public debates and a delay of the EU model meets this criterion, providers
Art. 6, Annex III of the Act. Parliament’s proposal for the AI Act. must notify the Commission within
Explainability plays a crucial role The proposal, from June 2023, came to two weeks. The provider may present
in ensuring that AI systems are trans- include rules that the earlier propos- arguments that, despite meeting the
parent and trustworthy, particularly als did not, on “foundation models” criteria, their model does not present
in domains where the risk of harm- (see definition in Art. 3) and respon- systemic risks.
ful decisions is high—for example, sibilities linked to providers of gen- We consider it quite a leap to assume
in the medical domain, where a false erative AI (see, for example Zenner25). that more compute in training a model
negative may be as harmful as a false These proved to be part of the most necessarily equals risks for negative
positive. The EU AI Act requires that intensely negotiated aspects of the AI impact on public health, safety, public
AI systems provide information on Act, which solidified into a set of ob- security, and so on. The Commission is
their decision-making process so that ligations for all providers of general- also quite autonomously mandated to
individuals can understand the basis purpose AI (GPAI), that also included change how “systemic risk” is allocat-
for the AI system’s outputs and that a second tier with additional obliga- ed and can amend the criteria listed in
they are not used to manipulate be- tions for GPAI models (see Chapter V) Annex XIII, which may be both mean-
havior. Additionally, the requirement “having a significant impact on the ingful in terms of how AI evolves, but
for human oversight and control over Union market due to their reach, or also open for legal unpredictability.
high-risk AI systems is based on the due to actual or reasonably foresee- In addition to the obligations for
principle that there must be a human able negative effects on public health, GPAI above, providers of GPAI models
in the loop to make decisions that have safety, public security, fundamental with systemic risk must:
significant consequences for individu- rights, or the society as a whole, that ˲ Perform model evaluations, in-
als’ rights and safety.19 The EU AI Act can be propagated at scale across the cluding conducting and documenting
aims to ensure that AI systems are de- value chain” (Art. 3(65)). adversarial testing to identify and miti-
veloped and deployed responsibly and In brief, all providers of GPAI mod- gate systemic risk.
transparently, considering the poten- els must: ˲ Assess and mitigate possible sys-
tial impact on individuals’ rights and ˲ Draw up technical documentation, temic risks, including their sources.
safety. Harmonized standards, under including training and testing process ˲ Track, document, and report seri-
development, are likely to play an im- and evaluation results, to be available, ous incidents and possible corrective
portant role for the compliance of upon request, for the AI Office and na- measures to the AI Office and relevant
high-risk AI systems. tional supervisory authorities. national competent authorities with-
Limited risk. Limited-risk AI sys- ˲ Draw up information and docu- out undue delay.
tems have much fewer obligations to mentation to supply to downstream ˲ Ensure an adequate level of cyber-
providers, and users must follow com- providers that intend to integrate the security protection.
pared to their high-risk counterparts. GPAI model into their own AI system In response to the complexities of
AI systems of limited risk must fol- so that the latter understands capabili- AI regulation, the EU has established
low certain transparency obligations ties and limitations and is enabled to an AI Office to facilitate coordination
outlined in Title IV of the proposal. comply. on cross-border cases. However, the
Examples of systems that fall into this ˲ Put in place a policy to comply with resolution of intra-authority disputes
category include biometric categori- EU law on copyright. remains the responsibility of the Com-
zation, or establishing whether the ˲ Publish a suffciently detailed sum- mission.

64 COMM UNICATIO NS O F THE AC M | D EC EM BER 2024 | VO L . 67 | NO. 1 2


research and advances

opportunities and challenges toward responsible


Table 2. Management system.
AI. Science Direct 58, (2020), 82–115. https://bit.
ly/4gRKeCY.
8. Building trust in human-centric artificial intelligence.
I. Define use case Risk Self-Assessment EU Commission (2019); https://bit.ly/3zTMzwr.
9. Doctorow, C. Small stickers on the ground trick Tesla
Limited Risk autopilot into steering into opposing traffic lane. Boing
˲ Accessible disclosure of concrete user information Boing (Mar. 31, 2019); https://bit.ly/47VQbux
High Risk (Annex III and Annex VIII) 10. G7 Meeting Hiroshima. (May 2023); https://bit.
˲ Risk management ly/3U8BM8E
II. Evaluation of risk level and 11. Goodall, N.J. Machine ethics and automated vehicles.
˲ Data and data governance Road Vehicle Automation, G. Meyer and S. Beiker
compliance requirements
˲ Human oversight (eds.). Springer (2014), 93–102; https://bit.ly/3TQn2Ll.
˲ Technical documentation 12. IEEE Global Initiative on Ethics of Autonomous and
˲ Transparency and provision of information to users Intelligent Systems. Ethically Aligned Design: A Vision
˲ Accuracy, robustness, and cybersecurity for Prioritizing Human Well-being with Autonomous
and Intelligent Systems, First Edition. (2019); https://
Internal Control (see AI Act Annex VI) bit.ly/47RhbLR.
III. Compliance Assessment 13. Jobin, A., Ienca, M., and Vayena, E. The global
External Control / Quality Management (see AI Act Annex VII)
landscape of AI ethics guidelines. Nature Machine
Intelligence 1, (2019), 389–399.
14. Knowles, B. and Richards, J. The sanction of
authority: Promoting public trust in AI FAccT ‘21.
Assessment and How to ency, the nature and quality of the
5 In Proceedings of the 2021 ACM Conf. on Fairness,
Accountability, and Transparency (March 2021),
Cope with the EU AI Act training data, and its documentation 262–271
For anyone wishing to put an AI sys- because of a later evaluation by exter- 15. Larsson, S. and Heintz, F. Transparency in artificial
intelligence. Internet Policy Rev 9, 2 (2020); https://
tem into operation in the EU, the AI nal reviewers. This also includes the bit.ly/3XQODO0.
act serves as a reminder for develop- establishment of a risk-management 16. Larsson, S. On the governance of artificial intelligence
through ethics guidelines. Asian J. Law and Society 7,
ers to always prioritize the well-being system (see Table 2). 3 (2020), 437–451.
of individuals and society as a whole. ˲ Continuous investment in re- 17. Larus, J. et al. When Computers Decide: European
Recommendations on Machine-Learned Automated
They must first assess the risk and, search and development, especially in Decision Making. ACM (2018). https://bit.ly/3BtDkDV.
depending on the risk class, comply rapidly evolving methods of AI explain- 18. Leslie, D. Understanding artificial intelligence ethics
and safety: A guide for the responsible design
with requirements relating to trans- ability, see Balasubramanian6 and Bar- and implementation of AI systems in the public
sector. The Alan Turing Institute (2019); https://bit.
parency and security. It is expected redo Arrieta et al.7 Once an AI system is ly/4dH2AUE.
that it will be particularly challenging explainable, it may positively contrib- 19. Middelton, S. et al. Trust, regulation, and human-
in-the-loop AI within the European region.
for high-risk applications to obtain ute to trustworthiness and form a step Communications 65, 6 (April 2022), 64–68.
approval for the EU market. There will toward acceptance and approval. 20. Saucedo, A. et al. ACM Europe TPC Comments on
Proposed AI Regulations. ACM (2021); https://bit.
be a grace period until the the various ˲ Collaborate with other companies,
ly/3XN060S.
obligations or bans become applica- potentially supervisory authorities, 21. Shneiderman, B. Responsible AI: Bridging from ethics
to practice. Communications 64, 8 (Aug.2021), 32–35.
ble. Nevertheless, developers should and organizations in the industry to 22. Villani, C. For a meaningful artificial intelligence.
analyze the respective compliance re- share information and best practices Comitè d’Ètica de La UPC (2018); https://bit.
ly/3XRxIuC.
quirements at an early stage to adapt for compliance. This can help reduce 23. Wood, M. Self-driving cars might never be able to
the development process accordingly. costs and ensure that all parties are on drive themselves, Marketplace (2021); https://bit.
ly/4dw15s5.
The strategy includes the following the same page when it comes to com- 24. Yelizarova, A. Global AI policy. Future of Life (Dec. 16,
key elements: pliance. 2021); https://bit.ly/4dtV27u.
25. Zenner, K. The implementation and enforcement of
˲ Informing and training employ- the EU AI Act: The documents. Digitizing Europe (Jul.
ees about the regulations and their Acknowledgments 28, 2024); https://bit.ly/3BFRth4.

obligations under the law. These can- This work was undertaken while work-
Alejandro Bellogín is an associate professor in the
not be understood without addressing ing for the ACM Europe Technology department of Computer Engineering, Universidad
the EU’s rationale for this law and the Policy Committee (TPC) on autono- Autónoma de Madrid, Spain.

expectations of EU citizens regarding mous systems. We are grateful for sup- Oliver Grau is an ACM Senior Member, Hannover,
Germany.
trustworthy AI. Researchers and de- port of and discussions with Chris
Stefan Larsson is an associate professor in the
velopers must understand that auto- Hankin, chair of the TPC. Further in- department of Technology and Society LTH, Lund
mated and algorithmic decision mak- formation may be found on the TPC University, Lund, Sweden.

ing should be based on the principles website2 and in prior publications.3,17 Gerhard Schimpf is a senior manager at SMF Team
Consulting, Pforzheim, Germany.
and values enshrined in the Charter of
Biswa Sengupta is a technical fellow at University
Fundamental Rights (such as human References College London, London, U.K.
1. ACM Code of Ethics and Professional Conduct. ACM
dignity, equality, justice and equity, (2018); https://bit.ly/4eNL1Tv. Gürkan Solmaz is a senior researcher at NEC
non-discrimination, informed con- 2. ACM Europe Technology Policy Committee; https:// Laboratories Europe, Heidelberg, Germany.
bit.ly/3XL8LAY
sent, private and family life, and data 3. ACM Principles for Algorithmic Transparency and © 2024 Copyright held by the owner/author(s).
protections), and the principles and Accountability, Association for Computing Machinery Publication rights licensed to ACM.
(2017); https://bit.ly/4eSHbJ8
values of Union law (such as non-stig- 4. Awad, E. et al. The moral machine experiment. Nature
matization, and individual and social 563, (2018), 59–64; https://go.nature.com/47S6g4w
5. Baeza-Yates, R. et al. ACM Technology Policy Council
responsibility). Support from an in- Statement on Principles for Responsible Algorithmic
terdisciplinary working group should Systems. ACM (2022). https://bit.ly/3TUTnRo.
6. Balasubramanian, V. Toward explainable deep Watch the authors discuss
therefore be planned for. learning. Commun. ACM 65, 11 (Nov. 2022), 68–69; this work in the exclusive
˲ During the design of the systems, https://bit.ly/3Y9SwPa. Communications video.
7. Barredo Arrieta, A. et al. Explainable artificial https://cacm.acm.org/videos/
attention should be paid to transpar- intelligence (XAI): Concepts, taxonomies, the-eu-ai-act

DEC E MB E R 2 0 2 4 | VO L. 6 7 | N O. 1 2 | C OM M U N IC AT ION S OF T HE ACM 65

You might also like