Re: Google’s Artificial Intelligence Getting ‘Greedy,’ ‘Aggressive’
On Feb 16, 2017, at 4:40 AM, grarpamp <grarpamp AT gmail.com> wrote:
>> "Why the Future Doesn’t Need Us"
> The closing from above...
> "Whether we are to succeed or fail,
> to survive or fall victim to these technologies,
> is not yet decided"
> Having claimed and settled all the unexplored land
> mass over the last couple hundred years, we can't
> just run and migrate away from conflict.
> Though we have not yet managed to nuke ourselves
> in conflict since then, probably because, well, MAD is mad.
> After how many years since the last "all in" war is it safe to say
> we learned to at least not launch complete death at each other...
> 100, 250, 500?
> If we make it to enlightened free global living, AI bot tech,
> sustainability, solar, etc., and it works, well there's that,
> probably for a good long while.
> However it is absolutely certain that Earth itself
> will fail, taking everything down with it...
> So there are really only two choices...
> 1) Undertake everything we do by its contribution
> toward getting us off the rock.
> 2) Call our own bet, nuke ourselves today, and give the
> next blob that evolves up out of the oceans a good run at it.
> Both meanwhile praying it isn't some space rocks
> or aliens that do the job for good.
> If you ever get beyond safe stellar distance (maybe),
> you've got a universe of time and space to deal with.
> Transcending any of its forecast ends doesn't
> look too easy at the moment. But you've probably
> bought yourself a lot more time to think on it.
> Who's giving odds on any of this, what are they,
> and why?
Humanity is likely fucked. It all comes down to where the great filter in Fermi's paradox lies: before us, or after us? With the discovery of so many exoplanets, and the obvious implication that there are tons of planets out there in the goldilocks zone, it's hard to imagine the filter being before us... It isn't hard at all to imagine humanity fucking blowing itself up, destroying itself in a pandemic, just continuing to literally burn the earth up thinking there won't be consequences, or otherwise letting our tech get the best of us... This seems more likely when you start thinking about time scales, how young we are, and how insanely fast we've begun progressing.
If we do somehow make it off Earth and out of the solar system, I think it's safe to assume we will no longer be human. Elon Musk made a kind of trite little quote which may actually turn out to be true (he said this after reading the Bostrom book I mentioned):
"Hope we're not just the biological boot loader for digital superintelligence."