
Of Monsters and Men

A Guide for Ethical Technological Innovation in a World of Giants (2019)

In this report, I explore the unintended consequences of artificial intelligence, stakeholders' responses to discriminatory design, and the strengths and limitations of current ethics frameworks in addressing these dilemmas.

Artificial intelligence, the simulation of natural thought within man-made machines, is as impressive as it is intimidating.

On one hand, the automation of manual tasks has allowed for unprecedented efficiency, accuracy, and standardization in today's interconnected world. But the same process that allows for rapid digital improvement, an internal feedback loop called machine learning, can cause rapid harm if not properly overseen. And because algorithms leave no visible trace, these violations go largely unnoticed. Instead, errors and bias are masked by claims of superhuman precision.

Violating Ethics

Needing Ethics

It's a no-brainer: we need ethical design. But in order to develop toolkits and best practices, we also need to know what values we're espousing. And while ethics codes are necessary for setting expectations, their systematization in many ways reduces a gut-level concept to a discrete list of rules that cannot, no matter how long, capture its full essence. Ethics, then, is a Catch-22: it requires definition yet transcends it.

The least we can do, though, is try.

Defining Ethics

Amidst the current craze to define responsible innovation, the best solution, counterintuitively, might be to turn to the ancient. Ethics frameworks have been around for centuries. These classical principles, far removed from the realities of artificial intelligence, may nevertheless ground, and provide common language for, a uniquely modern phenomenon.

I analyzed age-old frameworks, virtue and legal traditions among them, within modern realms...

... and arrived at an inconvenient truth: ethics in theory does not mean ethics in practice. Ultimately, we need more than a defined set of principles. We need practices that can embed them.

Embedding Ethics

Because innovation occurs on a multidimensional plane, one with moving parts and messy outcomes, ethical protocol should mirror the complexity of its context. For that, companies must consider every element of the production process.

Ultimately, ethics must exist in the people who make things, in the process of making, and in the things that are made. And although messy, this tangled framework offers a certain convenience: a positive development in one area will presumably ripple to the others.

Reframing Ethics

There's a certain awe to artificial intelligence, something both distinct from and eerily similar to human aptitude. But in glorifying our technologies, we overlook their biggest flaw: they are just as nuanced, flawed, and idiosyncratic as the data they're trained on. In all their artificiality, these devices are really just an extension of the natural world. Technology, then, becomes embedded in, and influenced by, the hierarchies of our society.

And so, we must recognize our own power in this complex equation.  Computers may be making their own conclusions, but we — the creators — have ultimate responsibility for their design, their deployment, and their destiny.
