Alan Jacobs



How to prevent the coming inhuman future - by Erik Hoel:

There are a handful of obvious goals we should have for humanity’s longterm future, but the most ignored is simply making sure that humanity remains human. […] 

… what counts as moral worth surely changes across times, and might be very different in the future. That’s why some longtermists seek to “future-proof” ethics. However, whether or not we should lend moral worth to the future is a function of whether or not we find it recognizable, that is, whether or not the future is human or inhuman. This stands as an axiomatic moral principle in its own right, irreducible to other goals of longtermism. It is axiomatic because as future civilizations depart significantly from baseline humans our abilities to make judgements about good or bad outcomes will become increasingly uncertain, until eventually our current ethical views become incommensurate. What is the murder of an individual to some futuristic brain-wide planetary mind? What is the murder of a digital consciousness that can make infinite copies of itself? Neither are anything at all, not even a sneeze — it is as absurd as applying our ethical notions to lions. Just like Wittgenstein’s example of a talking lion being an oxymoron (since a talking lion would be incomprehensible to us humans), it is oxymoronic to use our current human ethics to answer ethical questions about inhuman societies. There’s simply nothing interesting we can say about them.