First of all, this is not a book but a scientific (or philosophical) article, the title of which immediately caught my interest.
In keeping with the title of the paper, the author suggests that if artificial intelligence (AI) can, at some point in the future, be conscious, then it is quite possible that such an intelligence could also be or become dysfunctional, i.e. suffer from mental illness.
Was it good?
The paper is, in my opinion, a bit uneven. On the one hand, its framing is exceptionally inviting: mental illness in artificial intelligence is a genuinely compelling line of argumentation. On the other hand, the author does not seem to get as much mileage out of this setup as one might hope, though I can't quite put my finger on why. In any event, the paper is probably best read as an opening move in a discussion within the philosophy of mind, not as a full-blown treatise on the subject, and in that role it serves very well.
The main take-away for me?
Once again, the take-away is at the meta level. In scientific and academic discourse, a good topic, framing, and setup matter greatly, yet they are rare. Roughly 95% of the academic papers I have read are incremental in their intended contributions - i.e. quite forgettable - while the remaining 5% or so catch one's attention and actually make one think. This paper clearly falls into the latter category.
Who should read the article?
While the title and basic setup of the paper are quite universally appealing, I think that reading it requires some underlying interest in the philosophy of mind, or in philosophy in general.