Ray Kurzweil is a computer scientist, futurist, top Google engineer, and arguably the greatest prophet of AI to ever span the mainstream academic and tech worlds.
At a recent conference Kurzweil predicted that by 2029 AI will pass the Turing test, and by 2045 it will reach a “singularity.” If you’re not familiar with AI, both concepts probably require a little explaining.
The Turing test, proposed by Alan Turing in 1950, asks whether a machine can carry on a conversation indistinguishable from a human’s; passing it is widely taken as a sign that an artificial intelligence has reached human levels of intelligence.
The "singularity" is the point at which AI becomes so powerful that it acquires superhuman intelligence, and is capable of growing and expanding on its own. This is “runaway” AI where we lose control and AI begins to train itself and act as a truly sentient, independent entity.
This sounds frightening, but Luskin says that Kurzweil isn’t worried:
In Kurzweil’s future, “as medicine continues to merge with AI, it will progress exponentially” and potentially help us solve “every possible human disease.” If Kurzweil is right, by 2029 AI will give humanity the gift of “longevity escape velocity,” where AI-based medicine adds months to our lives faster than time is going by.

There are skeptics. The power AI would give to whoever could seize it would be enormous, and, human nature being what it is, the people most likely to seize it are precisely the people we wouldn’t want to have it.
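Kurzweil’s “escape velocity” claim is, at bottom, simple arithmetic: if medicine hands back more than twelve months of remaining life expectancy for every calendar year that goes by, your expected remaining lifespan grows instead of shrinking. Here is a back-of-the-envelope sketch in Python; the starting expectancy and the rates of progress are hypothetical numbers chosen purely for illustration, not Kurzweil’s figures.

```python
def remaining_expectancy(years_left: float,
                         months_gained_per_year: float,
                         years_elapsed: int) -> float:
    """Remaining life expectancy after `years_elapsed` calendar years.

    Each passing year uses up 12 months of remaining expectancy, but medical
    progress hands back `months_gained_per_year` months. "Escape velocity" is
    simply the point where the gain exceeds 12.
    """
    for _ in range(years_elapsed):
        years_left += (months_gained_per_year - 12) / 12
    return years_left

# Below escape velocity: 6 months gained per year, so expectancy still shrinks.
print(remaining_expectancy(25.0, 6, 10))   # 20.0 years left after a decade
# Above escape velocity: 15 months gained per year, so expectancy grows.
print(remaining_expectancy(25.0, 15, 10))  # 27.5 years left after a decade
```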
While Kurzweil promised that AI will effectively cure aging, he cautioned that this doesn’t mean we’ll live forever, since we could still die in a freak accident.
But even here AI might come to our rescue, with AI-guided autonomous vehicles reducing crash fatalities by 99 percent. AI will further yield breakthroughs in manufacturing, energy, farming, and education that could help us end poverty.
In the coming decades, he predicts that everyone will live in what we currently consider “luxury.”
We’ll also be living in the luxury of our minds. In the coming decades, he expects our brains will “merge with the technology” so we can “master all skills that every human being has created.”
For those hesitant to plug technology into their skulls, Kurzweil claims that using AI to enhance our brains will be no different, ethically speaking, from using a smartphone. At that point, he proclaimed, AI will be “evolving from within us, not separate from us.”
In other words, under Kurzweil’s transhumanist vision of the future, AI promises us superhuman capabilities complete with heaven on earth and eternal life — what science historian Michael Keas has termed the “AI enlightenment myth.” While Kurzweil framed everything in terms of scientific advancement, it’s easy to envision how this could inspire new religions.
Moreover, some experts predict what they call "model collapse."
In short, AI works because humans are genuinely creative beings, and AIs are built by training on gigantic amounts of diverse, creative, human-made data, from which they begin to think and reason like a human. Until now this has been possible because human beings have created almost everything we see on the Internet.

Some experts in AI have warned that we’re at the edge of available training data for AI — essentially, we’re hitting the limits of what we can feed AI to make it smart. Once AI runs out of training data, what will it do? Will it implode?
As AIs scour the entire Internet, they can trust that virtually everything they find was originally made by intelligent and creative beings (i.e., humans). Train AI on that stuff, and it begins to appear intelligent and creative (even if it really isn’t).
But what will happen as humans become more reliant on AI, and more and more of the Internet becomes populated with AI-generated material? If AI continues to train on whatever it finds on the Internet, but the web is increasingly an AI-generated landscape, then AI will end up training on itself.
We know what happens when AIs train on themselves rather than the products of real intelligent humans — and it isn’t pretty. This is model collapse.
As one expert put it, “After we’ve scraped the web of all human training data,” then “it starts to scrape AI-generated data” because “that’s all you have.” That’s when you get model collapse, and we might be getting close to it.
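To see why training on your own outputs goes wrong, consider a toy simulation; this is a minimal sketch under simple assumptions (a plain Gaussian standing in for a model, and dropping rare outputs standing in for the way generative models under-represent unusual cases), not any cited expert’s code. Each generation fits its “model” to data sampled from the previous generation’s model rather than from the original human-made data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: stand-in for rich, varied, human-made data.
data = rng.normal(loc=0.0, scale=1.0, size=1_000)

for generation in range(1, 11):
    # Fit the simplest possible "model": just a mean and a standard deviation.
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")

    # The next generation trains only on what the current model generates,
    # and (as generative models tend to do) under-represents rare cases by
    # dropping outputs far from the mean.
    synthetic = rng.normal(loc=mu, scale=sigma, size=1_000)
    data = synthetic[np.abs(synthetic - mu) < 2.0 * sigma]
```

Run it and the printed standard deviation, a stand-in for the diversity of the data, shrinks generation after generation: the variety of the original “human” data drains away as the model feeds on itself. That narrowing is the essence of model collapse.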
2029 is only a few years off, so we should soon know whether Kurzweil or the skeptics are correct. Meanwhile, there’s a lot more on this at the links given above. Check them out if you’re interested in your technological future.