Hello. Some people, like Stephen Hawking, say that the artificial intelligence we are developing could be very dangerous, while others, like Michio Kaku, from what I understand, don’t seem very worried about it.
I understand that just because a computer can beat a human at Go, or chess, or recognizing street signs, that doesn’t mean it’s a thinking entity with goals or any other skills. But we will reach that point eventually.
And the idea of a computer outmatching a human mind in every possible way has kind of given me an existential crisis. I would like to go to college and get a Ph.D. before I die, but I feel like anything I could hope to accomplish might already have been done by an AI by that point. I’d do it anyway, but perhaps it would feel hollow.
Sorry if this isn’t a proper question; I’m basically ignorant of this subject, but thank you for any opinions.
submitted by /u/Professor-Wheatbox
May 23, 2017 at 03:16AM