How can artificial intelligence be risky?
In this thesis, I research risks associated with artificial intelligence (AI) – how could AI lead to us losing something we value? I stress that when defining AI, we must not be too human-centric or require the existence of general intelligence: narrow AI systems that are very different from humans can still be powerful enough to pose risks. Many risks originate from unintended consequences, yet many actual risks come from using AI as a tool in zero-sum or negative-sum games, to use the concepts of game theory. I stress that AI should be treated not so much as an abstract phenomenon of the future, but as an already existing phenomenon that requires analysis. The values and goals of humans are often in conflict, and this conflict demands a solution: the progress of technology is accelerating, enabling ever more divergent goals to be achieved, and we are often unable to keep up with this pace. AI can provide a partial solution to the existence of instrumental conflicts by enabling us to reconsider them – it is possible that what we have desired so far is no longer relevant.