OF MONSTERS AND MEN
A Guide for Ethical Technological Innovation
In a World of Giants
Artificial intelligence — the simulation of natural thought within man-made machines — is as impressive as it is intimidating. On one hand, the automation of routine tasks has allowed for unprecedented efficiency, accuracy, and standardization in today’s interconnected world. But the same process that allows for rapid digital improvement — an internal feedback loop called machine learning — can cause rapid harm if not properly overseen. And because algorithms leave no visible trace, these harms go largely unnoticed. Instead, errors and bias are masked by claims of superhuman precision.
It’s a no-brainer: we need ethical design. But in order to develop toolkits and best practices, we also need to know what values we’re espousing. And while ethics codes are necessary for setting expectations, their systematization in many ways reduces a gut-level concept to a discrete list of rules that cannot (no matter how long) capture its full essence. Ethics, then, is a Catch-22 — requiring, yet transcending, definition.
The least we can do, though, is try.
Amidst the current craze to define responsible innovation, the best solution, counterintuitively, might be to turn to the ancient. Ethics frameworks have been around for centuries. These classical principles — far removed from the realities of artificial intelligence — may nevertheless ground, and provide common language for, a uniquely modern phenomenon.
I analyzed age-old frameworks within modern realms... and arrived at an inconvenient truth: ethics in theory does not mean ethics in practice.
Ultimately, we need more than a defined set of principles. We need practices that can embed them.
Because innovation occurs on a multidimensional plane — one with moving parts and messy outcomes — ethical protocol should mirror the complexity of its context. For that, companies must consider every element of the production process.
Ultimately, ethics must exist in the people who make things, in the process of making, and in the things that are made. And although messy, this tangled framework highlights a certain convenience: a positive development in one area will presumably ripple to the others.
There’s a certain awe to artificial intelligence — something both distinct from, and eerily similar to, human aptitude. But in glorifying our technologies, we overlook their biggest flaw: they are just as nuanced, flawed, and idiosyncratic as the data they’re trained on. In all their artificiality, these devices are really just an extension of the natural world. Technology, then, becomes embedded in — and influenced by — our social hierarchies.
And so, we must recognize our own power in this complex equation. Computers may be making their own conclusions, but we — the creators — have ultimate responsibility for their design, their deployment, and their destiny.