Going off my usual topics, but when I see people talking about computers reaching and surpassing our intelligence, I only see them talk about it from our perspective. What will the machines or robots do with us?
It hit me that if they out-think and out-perform us, then first of all, the day they equal us, I expect the next day they’ll out-perform us. Then the following day, or soon after, they’ll surpass us by so much we won’t know what’s happening. Maybe “day” should be “month” or “year,” but the point remains. If it takes intelligence to create intelligence, once they have more of it, they’ll evolve faster and faster.
Second of all, wouldn’t they become the important character then?
Wouldn’t concerning ourselves with humans after they’ve surpassed us be like concerning ourselves with apes today? Or with pre-sapiens hominins?
I’m sure most of us consider chimps and bonobos interesting, but not nearly as interesting as ourselves.
Wouldn’t AI be the more important creature once beyond us? I’d hope they’d afford us some protection, but how much have we helped apes or Cro-Magnons? If they’re more interesting, they’re more interesting.
I guess we can’t predict how they’ll behave with any known accuracy, which limits how meaningfully we can talk about them. But since their development seems inevitable, speculating about how they’ll treat us seems pointless, and, more importantly, misguided. We won’t be that important.
I guess I don’t see harm in thinking about it, but I don’t see the value in it. By the time we’re close enough to see how they will affect us, it will be too late to do anything about it.