@theStephLocke @rencontres_R
Practical AI & data science
ethics
The ethical implications of our work can be staggering, but how do we
balance commercial needs, ethical requirements, and productivity? Taking
a pragmatic approach, starting with simple checklists and evolving
towards automation and structured processes, I look at how we can
make our work more robust from an ethical perspective.
Steph Locke
CEO @ Nightingale HQ
Data & AI specialist
Microsoft MVP, 5+ years
Microsoft MCT
T: @theStephLocke
Li: /stephanielocke
steph@nightingalehq.ai
Step 0
Get alignment
It’s important to ensure that the team and stakeholders are
committed to an ethical approach and have outlined key areas of
focus.
Responsible < Ethical
Responsible AI minimises the risk of unintended negative consequences.
Ethical AI avoids intended negative consequences.
Education
• Communicate about ethics internally
• Run workshops to help understand implications
scu.edu/ethics-in-technology-practice
• Share stories about where data scientists have failed to be ethical
5 concepts of fairness
• group unaware - the same cutoff point / decision boundary for everyone
• group thresholds - different cutoff points per group to admit different
volumes
• demographic parity - cutoff points chosen so that those selected
mirror the overall demographics
• equal opportunity - the same true positive rate holds across
groups
• equal accuracy - the same overall accuracy rate holds across
groups
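Two of the concepts above can be made concrete with small helper functions. A minimal Python sketch of demographic parity (selection rates match across groups) and equal opportunity (true positive rates match across groups), using made-up predictions for two hypothetical groups A and B:

```python
# Toy illustration of two fairness checks: demographic parity and
# equal opportunity. The data is invented purely for illustration.

def selection_rate(preds):
    """Fraction of cases predicted positive."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of actual positives that were predicted positive."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

# predictions and ground-truth labels, split by a hypothetical group attribute
group_a = {"preds": [1, 1, 0, 1, 0, 0], "labels": [1, 1, 0, 0, 1, 0]}
group_b = {"preds": [1, 0, 0, 0, 1, 0], "labels": [1, 0, 1, 0, 1, 0]}

for name, g in [("A", group_a), ("B", group_b)]:
    print(name,
          round(selection_rate(g["preds"]), 2),
          round(true_positive_rate(g["preds"], g["labels"]), 2))
```

In this toy data the two groups share the same true positive rate (equal opportunity holds) but have different selection rates (demographic parity does not), which is exactly why the five concepts can pull in different directions.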
Step 1
Make others think before you
start
Spend time working on the process around designing a data science
/ AI product idea or feature, to ensure ethical considerations are
addressed up-front
Checklists
• Responsible AI checklist
• Fairness
• Inclusiveness
• Reliability & safety
• Privacy & security
• Transparency
• Accountability
https://www.microsoft.com/en-gb/ai/responsible-ai
Impact assessment
• Who / what will use the solution?
• What are the desired outcomes if things go right?
• What happens if something goes wrong?
• Are there different potential errors with different impacts/risks?
• Are there secondary groups impacted?
• What KPIs could this solution impact?
• What behaviours are we trying to drive?
Frameworks
• ainowinstitute.org/aiareport2018.pdf
• www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework
Compliance
• Legal requirements
• Contractual requirements
• Privacy
• Regulators
• Consultation processes
Tools
microsoft/HAXPlaybook: an interactive tool for generating interaction scenarios to test when designing user-facing AI systems (github.com)
Step 2
Work robustly
Start implementing workflow practices and automations to ensure
“ethical hygiene” and alignment to identified outcomes & risks
Synthetic data
https://cran.r-project.org/package=synthpop
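synthpop is an R package; as a language-agnostic illustration of the underlying idea, here is a deliberately naive Python sketch. It samples each column independently from the observed values, which breaks cross-column relationships; synthpop instead models each column conditionally on the previous ones to preserve them. All data here is invented.

```python
# Naive synthetic-data sketch: replace real records with draws from the
# observed per-column distributions, so analysts can develop against
# realistic-looking data without seeing real individuals.
# Caveat: independent column sampling destroys correlations between
# columns; proper tools (e.g. synthpop in R) model columns sequentially.
import random

real = [
    {"age": 34, "income": 28000},
    {"age": 51, "income": 41000},
    {"age": 29, "income": 23000},
    {"age": 45, "income": 39000},
]

def naive_synthesise(rows, n, seed=0):
    """Draw n synthetic rows, sampling each column independently."""
    rng = random.Random(seed)
    cols = rows[0].keys()
    return [{c: rng.choice([r[c] for r in rows]) for c in cols}
            for _ in range(n)]

synthetic = naive_synthesise(real, n=4)
```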
EDA
https://github.com/ropenscilabs/proxy-bias-vignette/blob/master/EthicalMachineLearning.ipynb
MLOps
https://ml-ops.org/content/mlops-principles
Testing
• Functions for fairness concepts
• Analysis by groups
• Pen portrait / “Typical” entries
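One way to turn "analysis by groups" into an automated test is to fail the build when any group's accuracy trails the overall accuracy by more than a tolerance. A Python sketch; the function name, the sample data, and the 10% tolerance are illustrative assumptions, not from the talk:

```python
# Group-wise accuracy check, written so it can run inside a test suite.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def check_group_accuracy(preds, labels, groups, tolerance=0.1):
    """Return the groups whose accuracy trails the overall accuracy by
    more than `tolerance`; an empty list means the check passes."""
    overall = accuracy(preds, labels)
    failing = []
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        g_acc = accuracy([preds[i] for i in idx], [labels[i] for i in idx])
        if overall - g_acc > tolerance:
            failing.append(g)
    return failing

# hypothetical predictions for two groups: group "B" underperforms
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
labels = [1, 1, 0, 0, 0, 1, 1, 0]
groups = ["A"] * 4 + ["B"] * 4
```

The same pattern extends naturally to the fairness functions above (per-group selection rates or true positive rates instead of accuracy).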
Testing
https://github.com/kozodoi/fairness
Testing
https://pair-code.github.io/what-if-tool/uci.html
Testing
https://fairlearn.org/
Step 3
Maintain vigilance
Ensure continued monitoring, stakeholder engagement, and
ongoing compliance. Reflect learnings back into processes and
internal education.
Monitoring
• Protected characteristics
• Spot checks and counterfactuals
• Data drift
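One simple, widely used drift check is the population stability index (PSI), which compares the binned distribution of a feature in production against the training baseline. A stdlib-only Python sketch; the bin edges, the sample data, and the "PSI > 0.2 means significant drift" rule of thumb are illustrative assumptions:

```python
# Population stability index: sum over bins of (p - q) * ln(p / q),
# where p and q are the bin proportions in the baseline and current data.
import math

def psi(expected, actual, bin_edges):
    def proportions(values):
        counts = [0] * (len(bin_edges) + 1)
        for v in values:
            i = sum(v > edge for edge in bin_edges)  # which bin v falls in
            counts[i] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # feature values at training time
current  = [7, 8, 9, 10, 9, 8, 7, 10, 9, 8]  # values seen in production
drift = psi(baseline, current, bin_edges=[3, 6])
```

Running the same check per protected group, rather than only on the whole population, connects drift monitoring back to the fairness concerns above.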
Security
• Adversarial behaviours
• Robustness
• Dependencies
• Retro-engineering
• Lessons from security – Red and Blue Teams?
Engagement
• Continued feedback and engagement from people affected by AI
use
• Mine customer service channels etc. for ways AI may have caused problems
• Include AI in risk registers and other internal compliance
monitoring solutions
• Improve AI literacy
Practical AI & data science
ethics
Step 0: Get alignment
Step 1: Make others think before you start
Step 2: Work robustly
Step 3: Maintain vigilance
