Responsible and Accountable Algorithmization
Introduction
Algorithms are increasingly popular in the public sector in countries all around the
world: they are used to provide services (Pencheva, Esteve & Mikhaylov, 2018) but
also to, for instance, support decision-making (Van der Voort et al., 2019) and predict recidivism (Kleinberg et al., 2017). Furthermore, the police use algorithms to
predict crime patterns (Meijer & Wessels, 2019), tax departments use algorithms to
detect tax fraud (Zouridis, Van Eck & Bovens, 2020) and local governments use
algorithms to make garbage collection more efficient (Ramalho, Rossetti & Cacho,
2017). The current wave of algorithms uses relatively new techniques, such as
machine-learning and deep learning, to transform organizational processes
(Brynjolfsson & McAfee, 2014; Burrell, 2016).
The ‘magic’ of these new technologies is appealing to governments since they
promise to bring us more effective processes, better informed decisions and more insights into complex realities through informative and seamless interfaces. At the same time, the ‘magic’ of these new technologies is also risky, since the use of algorithms can produce bias and even discriminatory practices; it can result in errors in the implementation of policies and it can also hamper the interactions between
governments and citizens. Therefore, various authors stress that we need to step
back and reflect on how algorithms can be applied to realize desirable outcomes
(O’Neil, 2016; Eubanks, 2018; Gerards, 2019). For instance, various municipal governments in the Netherlands used a machine-learning algorithm (Systeem Risicoindicatie, SyRI) to detect welfare fraud amongst citizens. While at the beginning there was widespread support for this system among government officials, in
early 2020 the system was judged ‘discriminatory’ and ‘not transparent’, not only
by civil rights activists, but also by the District Court. Vulnerable citizens, often those from a migrant background, were unfairly profiled as suspects of welfare fraud.
The example of SyRI in the Netherlands illustrates an issue of wider significance: there is an urgent societal need not only to focus on issues of effectiveness and efficiency but also to identify how governments can avoid negative unintended consequences of the use of algorithms – such as bias and problems of fairness – to
maintain the trust of citizens (Hoffman, 2019). How can we expect citizens to trust
government if this system works with an algorithm that is both opaque and discriminatory? Various concerns have been raised regarding the impacts of these systems not only on privacy but also on discrimination against different groups in society and the neglect of human contact. It is still unclear how government organizations will deal with issues such as interpreting big data in a non-discriminatory manner, handling privacy fairly, using means proportional to the objectives and ensuring human contact (Dencik et al., 2019).
We argue that these concerns go beyond the mere implementation of algorithms; they also relate to how organizations transform and change to enable the use of algorithms. In this chapter, we will label this process algorithmization: the process by which an organization transforms its working routines around the use of algorithms for its actions and decisions. We highlight that an analysis of algorithmization requires a focus not only on the technology but also on its organizational implementation – in terms of the expertise of employees, information resources, organizational structure, organizational policy and monitoring & evaluation – to understand why the use of algorithms does or does not produce citizen trust.
In this chapter we link algorithmization to a crucial concept in contemporary
governance: citizen trust in government. Trust in government is regarded as an essential element in developed societies. It has been found, for example, that if government institutions are not trusted by the citizens they serve, they are unable to
function properly (Fukuyama, 1995; Inglehart, 1999; Levi & Stoker, 2000). Given
the rapid algorithmization of government, we need to identify desirable forms of
algorithmization in the public sector. Two preconditions for maintaining citizen
trust have been proposed in the literature: (1) incorporating values in the design of
algorithms as a precondition for organizational responsibility (e.g. Friedman, Kahn
& Borning, 2008; Van den Hoven, 2013) and (2) demonstrating the correct usage
of algorithms to the public as a precondition for accountability (e.g. Diakopoulos,
2016). In this chapter we argue that algorithmization in the public sector can only
sustain citizen trust when it is based on both preconditions.
To this end, we first discuss what we understand by citizen trust and we outline why trust is so important in the public sector. We then offer a discussion of algorithmization as an organizational process and we stress that this organizational process, rather than the technology in itself, demands our attention if we want to strengthen citizen trust. The next sections discuss how two preconditions – responsible and accountable algorithmization – can contribute to citizen trust. We end this chapter by presenting a model for responsible and accountable algorithmization in the public sector.
Responsible algorithmization
A first route for strengthening citizen trust in algorithmization is provided by the
notion of responsibility. Responsibility is one of the key concepts in ethical theory and its roots can be traced back to Aristotle. He emphasized that moral responsibility grows out of an ability to reason, an awareness of action and consequences, and a willingness to act free from external compulsion (Roberts, 1989). Responsibility refers to the idea that persons have moral obligations and duties to others – concerning, for instance, their well-being and their health – and to larger ethical and moral traditions such as freedom and empowerment, and that persons thus need to consider these obligations and duties in their individual decisions and actions.
More recently, the notion of responsibility has been applied to public organizations. Political scientist Herman van Gunsteren (1976) highlights that we should
strive to embed responsibility in (complex) organizations rather than emphasize
the need to control the activities of all individual members of organizations. His
approach stresses the need to acknowledge the limitations of bureaucratic control
mechanisms such as hierarchy and formal rules and to place more emphasis on the
responsibility of individuals in the organization. According to Van Gunsteren, the
notion of responsibility is needed when rigid moral norms – embedded in organizational standard procedures – cannot keep up with rapidly changing circumstances and more flexibility is required to apply sound moral judgement. At the
same time, this does not mean that ‘anything goes’: judgement needs to be made
based on ethically respectable values and reasonable perceptions of relevant facts
(Van Gunsteren, 2015: 318). ‘Responsibility forums’ where individual judgement
needs to be explained can play a key role in strengthening the responsibility of
individuals in organizations.
Furthermore, the notion of responsibility has been used to think about innovation. In the context of the European Union, the notion of responsible innovation
has come to play an important role in funding for technology development. The
key idea is that innovation traditionally focuses only on gains in efficiency and
effectiveness. Other considerations are often regarded as barriers to the innovation
process. The notion of responsible innovation reconceptualizes these barriers and
stresses that other values need to play a role in the way the innovation process is
structured and implemented. In our understanding of responsible algorithmization,
we build upon the broader notion of responsible innovation, which Richard
Owen, Phil Macnaghten and Jack Stilgoe define as ‘a collective duty of care, first to
rethink what we want from innovation and then how we can make its pathways
responsive in the face of uncertainty’ (2012: 757–758).
These general notions about responsibility in organizations and responsible
innovation can be used as a basis for ideas about the responsible use of algorithms
in public organizations. Key elements in this conceptualization are (1) ethical judgement, (2) based on values and (3) perceptions of relevant facts, (4) to enact a duty of care (5) through responsive pathways. Based on these elements, we formulate the following conceptualization of responsible algorithmization: apply value-sensitivity not only to the design of the technology but also to the other elements of algorithmization (expertise, information resources, organizational structure, organizational policy). The questions that can help guide organizations towards responsible algorithmization are presented in Table 4.1.
How will responsible algorithmization affect citizen trust in government? In the above, we identified value-sensitivity as a core component of responsible algorithmization. This relates directly to the benevolence and integrity dimensions of trust in government. Benevolence refers to citizens’ expectations that government acts benignly and in the interest of citizens; integrity refers to expectations of honesty and truthfulness (Mayer, Davis & Schoorman, 1995; Grimmelikhuijsen & Knies, 2017). If algorithmization is done with consideration of relevant values – e.g. algorithms are used in a fair and unbiased manner – this will likely positively affect citizens’ perceptions of the honesty and benevolence of government. Conversely, when citizens, for instance, feel a welfare fraud algorithm targets them unfairly, this is likely to cause a decline in perceived honesty and benevolence.
The different organizational components – organizational policy and monitoring & evaluation – are also likely to contribute to citizen trust in government competence. The basic argument here is that well-considered choices in terms of how the algorithm is to be used in the organization, together with consistent monitoring and evaluation of desired and undesired outcomes, contribute to citizens’ perceptions of government competence.
Accountable algorithmization
A second route to strengthening citizen trust in algorithmization is provided by the notion of accountability. This notion builds upon the concept of public accountability, which political scientist Mark Bovens defines as ‘a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences’ (2007: 450). Based on this general definition, we can provide a specific definition of algorithmic accountability (see Wieringa, 2020, for an in-depth analysis).
Conclusion
In this chapter, we have discussed how ‘algorithmization’ has become a potent
force, changing traditional bureaucracies into algocracies. The new generation of algorithms now finding its way to governments across the globe is more than a change of technology: these algorithms trigger a range of organizational changes that eventually transform bureaucratic decision-making. In this context, machine-learning algorithms are often portrayed as problematic for accountable and responsible decision-making. We argue that both accountable and responsible algorithmization are needed to sustain citizen trust in the use of these algorithms.
The argument we developed in this chapter is summarized in Table 4.3.
The two preconditions of value-sensitivity and transparency are a starting point
for realizing responsible and accountable algorithmization. However, this does not
resolve all issues. A first issue is that algorithmization is not limited to the use of merely one technological system in an organization. There are entire ecosystems of algorithms that use data from various sources and that are implemented in networks of organizations (Cicirelli et al., 2019). The ‘problem of many hands’ is compounded by these developments and it is not easy to indicate who is responsible and accountable. In that sense, further work on the assessment questions formulated here is needed to test and re-develop them for ecosystems of algorithms.
A second issue that demands more attention is the fact that machine-learning
algorithms change over time. This raises questions about the dynamics of values:
value-sensitivity may be ensured at the start but a machine-learning algorithm can,
over time, develop patterns that conflict with key values. In addition, transparency
may be ensured at the start, but after a process of machine learning the algorithm can have become opaque because its decision rules have changed following learning
ing processes. This means that the questions formulated here may need to be
applied iteratively to ensure that responsibility and accountability persist over time.
Further research is needed to provide an understanding of these dynamics of
responsible and accountable algorithmization.
In sum, this chapter provides a basic understanding of the importance of responsibility and accountability in producing citizen trust in algorithmization. The key
message is that organizations should not only look at designers of algorithms and
expect that they will bring the solution: public organizations need to take action
to organize responsible and accountable use of algorithms in their organizational
processes.
References
Bovens, M. 2007. Analysing and assessing accountability: A conceptual framework. European
Law Journal, 13 (4): 447–468. doi: 10.1111/j.1468-0386.2007.00378.x
Brynjolfsson, E. and A. McAfee. 2014. The second machine age:Work, progress, and prosperity in
a time of brilliant technologies. New York: W.W. Norton & Company.
Burrell, J. 2016. How the machine ‘thinks’: Understanding opacity in machine learning
algorithms. Big Data & Society, 3 (1): 1–12. doi: 10.1177/2053951715622512
Cicirelli, F., A. Guerrieri, C. Mastroianni, G. Spezzano and A. Vinci. (Eds.). 2019. The internet
of things for smart urban ecosystems. Cham: Springer.
Dencik, L., J. Redden, A. Hintz and H. Warne. 2019. The ‘golden view’: Data-driven governance in the scoring society. Internet Policy Review, 8 (2): 1–24. doi: 10.14763/2019.2.1413
Diakopoulos, N. 2016. Accountability in algorithmic decision making. Communications of the
ACM, 59 (2): 56–62. doi: 10.1145/2844110
Eubanks, V. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor.
New York: St. Martin's Press.
Friedman, B., P.H. Kahn and A. Borning. 2008. Value sensitive design and information systems. In The handbook of information and computer ethics, edited by K. Himma and H. Tavani, 69–102. New Jersey: John Wiley & Sons Inc.
Fukuyama, F. 1995. Trust:The social virtues and the creation of prosperity. New York: Simon and
Schuster.
Gerards, J. 2019. The fundamental rights challenges of algorithms. Netherlands Quarterly of
Human Rights, 37 (3): 205–209. doi: 10.1177/0924051919861773
Grimmelikhuijsen, S. 2012. Transparency and trust: An experimental study of online disclosure and
trust in government. Doctoral dissertation. Utrecht: Utrecht University.
Grimmelikhuijsen, S. and E. Knies. 2017. Validating a scale for citizen trust in government
organizations. International Review of Administrative Sciences, 83 (3): 583–601. doi:
10.1177/0020852315585950
Grimmelikhuijsen, S., F. Herkes, I. Leistikow, J. Verkroost, F. de Vries and W.G. Zijlstra. 2019.
Can decision transparency increase citizen trust in regulatory agencies? Evidence from a
representative survey experiment. Regulation & Governance. doi: 10.1111/rego.12278
Grootelaar, H.A. and K. van den Bos. 2018. How litigants in Dutch courtrooms come to
trust judges: the role of perceived procedural justice, outcome favorability, and other
sociolegal moderators. Law & Society Review, 52 (1): 234–268. doi: 10.1111/lasr.12315
Hardin, R. 1999. Do we want trust in government? In Democracy and trust, edited by
M.E. Warren, 22–41. Cambridge: Cambridge University Press.
Hardin, R. 2002. Trust and trustworthiness. New York: Russell Sage Foundation.
Hetherington, M.J. 1998. The political relevance of political trust. The American Political
Science Review, 92 (4): 791–808. doi: 10.2307/2586304
Hoffman, A.L. 2019. Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22 (7): 900–915. doi: 10.1080/1369118X.2019.1573912
Hood, C. 2010. Accountability and transparency: Siamese twins, matching parts, awkward
couple? West European Politics, 33 (5): 989–1009. doi: 10.1080/01402382.2010.486122
Inglehart, R. 1999. Trust, well-being and democracy. In Democracy and trust, edited by
M.E. Warren, 88–120. Cambridge: Cambridge University Press.
Janssen, M. and J. Van den Hoven. 2015. Big and open linked data (BOLD) in government: A challenge to transparency and privacy? Government Information Quarterly, 32 (4):
363–368. doi: 10.1016/j.giq.2015.11.007
Kleinberg, J., H. Lakkaraju, J. Leskovec, J. Ludwig and S. Mullainathan. 2017. Human decisions and machine predictions. The Quarterly Journal of Economics, 133 (1): 237–293. doi:
10.1093/qje/qjx032
Kroll, J.A., S. Barocas, E.W. Felten, J.R. Reidenberg, D.G. Robinson and H. Yu. 2017.
Accountable algorithms. University of Pennsylvania Law Review, 165 (3): 633–706.
Lepri, B., N. Oliver, E. Letouzé, A. Pentland and P. Vinck. 2018. Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31 (4): 611–627. doi:
10.1007/s13347-017-0279-x
Levi, M. and L. Stoker. 2000. Political trust and trustworthiness. Annual Review of Political
Science, 3: 475–507. doi: 10.1146/annurev.polisci.3.1.475
Lorenz, L. 2019. The algocracy: Understanding and explaining how public organizations are shaped
by algorithmic systems. MSc Thesis, Utrecht University.
Mayer, R., J.H. Davis and F.D. Schoorman. 1995. An integrative model of organizational
trust. Academy of Management Review, 20 (3): 709–734. doi: 10.5465/amr.1995.9508080335
McEvily, B. and M. Tortoriello. 2011. Measuring trust in organisational research: Review
and recommendations. Journal of Trust Research, 1 (1): 23–63. doi: 10.1080/
21515581.2011.552424
McKnight, D.H., V. Choudhury and C. Kacmar. 2002. Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13 (3):
334–359. doi: 10.1287/isre.13.3.334.81
Meijer, A. 2014. Transparency. In The Oxford handbook of public accountability, edited by
M. Bovens, R.E. Goodin and T. Schillemans, 507–524. Oxford: Oxford University Press.
Meijer, A. and M. Wessels. 2019. Predictive policing: Review of benefits and drawbacks.
International Journal of Public Administration, 42 (12): 1–9. doi: 10.1080/01900692.2019.1575664
Mingers, J. and G. Walsham. 2010. Toward ethical information systems: The contribution of discourse ethics. MIS Quarterly, 34 (4): 833–854. doi: 10.5555/2017496.2917505
O’Neil, C. 2016. Weapons of math destruction. How big data increases inequality and threatens
democracy. New York: Crown.
Owen, R., P. Macnaghten and J. Stilgoe. 2012. Responsible research and innovation: From
science in society to science for society, with society. Science and Public Policy, 39 (6),
751–760. doi: 10.1093/scipol/scs093
Pencheva, I., M. Esteve and S.J. Mikhaylov. 2018. Big data and AI: A transformational shift
for government: So, what next for research? Public Policy and Administration, 35 (1): 24–44.
doi: 10.1177/0952076718780537
Porumbescu, G.A. and S. Grimmelikhuijsen. 2018. Linking decision-making procedures to
decision acceptance and citizen voice: Evidence from two studies. The American Review of
Public Administration, 48 (8): 902–914. doi: 10.1177/0275074017734642
Ramalho, M.A., R.J. Rossetti and N. Cacho. 2017. Towards an architecture for smart garbage
collection in urban settings. Presented at the 2017 International Smart Cities Conference (ISC2),
Wuxi, September 1–6.
Roberts, J. 1989. Aristotle on responsibility for action and character. Ancient Philosophy, 9 (1):
23–36. doi: 10.5840/ancientphil19899123
Rousseau, D.M., S.B. Sitkin, R.S. Burt and C. Camerer. 1998. Not so different after all: a
cross-discipline view of trust. Academy of Management Review, 23 (3): 393–404. doi:
10.5465/amr.1998.926617
Tyler, T.R. 2006. Why people obey the law. Princeton: Princeton University Press.
Tyler, T.R. and P. Degoey. 1996. Trust in organizational authorities: The influence of motive attributions on willingness to accept decisions. In Trust in organizations: Frontiers of theory and research, edited by R. Kramer and T. Tyler, 331–356. Thousand Oaks, CA: Sage
Publications.
Van den Bos, K., R. Vermunt and H.A. Wilke. 1997. Procedural and distributive justice: What is fair depends more on what comes first than on what comes next. Journal of Personality and Social Psychology, 72 (1): 95.
Van Gunsteren, H.R. 1976. The quest for control: A critique of the rational-central-rule approach in
public affairs. London: John Wiley & Sons.
Van Gunsteren, H.R. 2015. The ethical context of bureaucracy and performance analysis. In The public sector: Challenge for coordination and learning, edited by F.-X. Kaufmann, chapter 15. Berlin: Walter de Gruyter.
Van den Hoven, J. 2013. Value sensitive design and responsible innovation. In Responsible
innovation, edited by R. Owen, J. Bessant and M. Heintz, 75–84. Chichester, UK: Wiley.
Van der Voort, H.G., A.J. Klievink, M. Arnaboldi and A.J. Meijer. 2019. Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision-making? Government Information Quarterly, 36 (1): 27–38. doi: 10.1016/j.giq.2018.10.011
Wieringa, M. 2020. What to account for when accounting for algorithms. A systematic literature
review on algorithmic accountability. Paper presented at ACM Conference on Fairness,
Accountability, and Transparency (ACM FAT*), Barcelona. ACM, New York, NY, USA,
January 27–30.
Zouridis, S., M. van Eck and M. Bovens. 2020. Automated discretion. In Discretion and the
quest for controlled freedom, edited by T. Evans and P. Hupe, chapter 20. London: Palgrave
Macmillan.
Zuurmond, A. 1994. De infocratie. Een theoretische en empirische heroriëntatie op Weber's ideaaltype
in het informatietijdperk. Den Haag: Phaedrus.
Zuurmond, A. 1998. From bureaucracy to infocracy: Are democratic institutions lagging
behind? In Public administration in an information age: A handbook, edited by I.Th.M. Snellen
and W.B.H.J. van de Donk, chapter 16. Amsterdam: IOS Press.