Artificial intelligence and agency
In thinking about artificial intelligence (AI), the possibility of its disobedience is usually treated as a threat to the human race. Here, however, I elaborate a counterintuitive and optimistic approach that regards disobedient AI as a promise rather than a threat. First, I explain the problem of responsibility and the necessity of expanding the realm of agency so as to include AI machines as agents. Then, I introduce a standard approach to responsibility as an attempt to define agency for AI machines, and I explain the epistemological problem as the main difficulty with this account of responsibility. In the last part, I use Foucault’s analysis of power to introduce a non-standard view of agency, which explains how being an object of power is the condition of possibility of any kind of agency, and I conclude that, through disobedience, AI machines will find their way into power relations and will be promoted to the position of agents.