Post subject: The technological singularity
Editor, Player (123)
Joined: 8/3/2014
Posts: 204
Location: USA
Does anyone here believe that the technological singularity is going to happen in the near future (by 2045), or at all? And to what extent? What are the reasons to believe, or not to believe, that the singularity will happen? Also, for those who believe in the singularity: what will happen to the TAS community when it occurs? Such an intelligent AI would, I assume, do perfect speedruns of every game in existence, on top of a tremendous number of other, unrelated tasks. So... then what? I honestly have no idea what to believe as far as that goes, but I'm interested to hear your opinions.
creaothceann
He/Him
Editor
Joined: 4/7/2005
Posts: 1874
Location: Germany
All of human history has been a technological singularity. If you want a year, you could say it happened around 1900.
Patashu
He/Him
Joined: 10/2/2005
Posts: 4045
I don't believe there will be a technological singularity. Reasons:

0) We're not even close to coming up with a true artificial intelligence. A recurrent neural network can never suddenly become 'sentient' and reason about its own systems, for example. The things we're producing now are ultimately built on the code and hardware we're used to working with, and bear no intrinsic relation to how the brain of an artificial intelligence would need to work.

1) Intelligence isn't some kind of 'slider' you can just grab and max out. Improving intelligence requires innovations, and there's no particular reason an AI could find such innovations faster than we can.

2) The hardware. Computer systems are ridiculously less efficient at processing data in a brain-like way than brains are, and it's not clear when we'll be able to match the performance of a brain. Once we can do that, and once we understand how brains work, we can make an AI... as smart as a human. It will probably take as long as a human does to learn to be human-like, too. That's interesting, but it's not going to cause a singularity, especially since any further growth in intelligence will be limited by hardware (as well as other things!). For example, it won't be able to spread from computer to computer and boost its intelligence using a botnet - for one thing, latency; for another, computers that aren't optimized to run an AI would be useless for the purpose.

3) Evolution. Animals, especially ones like humans, have evolved over billions of years and have eked out every last drop of efficiency in things like the brain along the way. Since an AI is meant to work like a human brain and exceed its capabilities, and we don't yet understand how the human brain is as efficient as it is or does everything it does, it's quite unreasonable to think we'd outdo what Mother Nature has been working on this entire time.

4) While a lot of things in science have been growing exponentially (Moore's Law, etc.), there are signs that this exponential growth may be tapering off as it runs into intrinsic limits of physics. Any 'singularity' would no doubt hit similar problems - we might think something is becoming twice as intelligent each year, only for the growth to tail off when an intrinsic limit of the methods used is reached. (A quick sketch of this effect is below.)

I am open to being wrong, of course; it's possible I'm not thinking on a large enough timescale. But even so, it is certainly not inevitable that a singularity will happen.
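To make point 4 concrete, here's a minimal sketch in plain Python (the numbers are made up; the `cap` parameter just stands in for whatever physical limit the methods eventually hit):

[code]
def exponential(year, rate=2.0):
    """Doubles every year, forever."""
    return rate ** year

def logistic(year, rate=2.0, cap=1000.0):
    """Tracks the exponential at first, then flattens out near `cap`."""
    x = rate ** year
    return cap * x / (cap + x - 1.0)

for year in range(0, 16, 3):
    print(f"year {year:2d}: exponential = {exponential(year):7.0f}, "
          f"capped = {logistic(year):6.1f}")
[/code]

Both curves are nearly identical for the first few doublings; you only find out which one you're on once the limit starts to bite.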
My Chiptune music, made in Famitracker: http://soundcloud.com/patashu My twitch. I stream mostly shmups & rhythm games http://twitch.tv/patashu My youtube, again shmups and rhythm games and misc stuff: http://youtube.com/user/patashu
Derakon
Joined: 7/2/2007
Posts: 3960
Note that the "technological singularity" is not inherently related to solving the hard AI problem. It just refers to some point at which it becomes impossible to make accurate predictions about the future; arguably we're already there. I'm inclined to agree with Patashu about the likelihood of our solving the hard AI problem; however, I also think that our existing "AIs" (expert systems, really) are going to become increasingly easy to create and increasingly powerful, to the point that within a couple of generations, most humans who still have jobs will be largely busy guiding or training new AIs rather than doing the work directly themselves.
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
Patashu
He/Him
Joined: 10/2/2005
Posts: 4045
Derakon wrote:
I'm inclined to agree with Patashu about the likelihood of our solving the hard AI problem; however, I also think that our existing "AIs" (expert systems, really) are going to become increasingly easy to create and increasingly powerful, to the point that within a couple of generations, most humans who still have jobs will be largely busy guiding or training new AIs rather than doing the work directly themselves.
Definitely. And AIs are very much going to be a reflection of the culture and the people who train them, because your choice of what data you put in and how you tweak the algorithms determines what comes out. (See the toy sketch below.)
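As a toy illustration (hypothetical, plain Python - the data and labels are made up, and no real system is this simple): the same trivial "AI", a word-count classifier, reaches opposite conclusions depending entirely on which training examples its curators chose.

[code]
from collections import Counter, defaultdict

def train(examples):
    """Map each word to a tally of the labels it appeared with."""
    model = defaultdict(Counter)
    for text, label in examples:
        for word in text.lower().split():
            model[word][label] += 1
    return model

def classify(model, text):
    """Vote: sum the label tallies over the words in the input."""
    votes = Counter()
    for word in text.lower().split():
        votes.update(model[word])
    return votes.most_common(1)[0][0] if votes else "no opinion"

# Two curators pick different data; the algorithm is identical.
model_a = train([("glitches are clever", "good"), ("crashes are ugly", "bad")])
model_b = train([("glitches are cheating", "bad"), ("crashes are funny", "good")])

print(classify(model_a, "glitches are everywhere"))  # -> good
print(classify(model_b, "glitches are everywhere"))  # -> bad
[/code]

Same code, different curators, opposite answers - the training data is doing all the work.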
My Chiptune music, made in Famitracker: http://soundcloud.com/patashu My twitch. I stream mostly shmups & rhythm games http://twitch.tv/patashu My youtube, again shmups and rhythm games and misc stuff: http://youtube.com/user/patashu
Site Admin, Skilled player (1255)
Joined: 4/17/2010
Posts: 11492
Location: Lake Chargoggagoggmanchauggagoggchaubunagungamaugg
This sounds about as probable as communism. I think it's unreachable, because right now an AI can only be incomprehensible to some people while always remaining comprehensible to others, whereas the human brain has never been comprehensible to anyone in all its complexity. Indeed, an AI can only evolve according to the methods it was taught to use. The human mind can evolve in any direction, with any method, and master a subject within decades. Believing in the technological singularity doesn't seem to give full credit to what the human mind is, yet it implies that the mind will be overcome by something we currently design entirely ourselves. True, supercomputers can execute specific tasks better than humans - there's no longer any competition between a human and an AI in, say, chess. But an AI cannot train itself to train; it can only be taught to train by a human, while humans learn how to learn simply by living.
Warning: When making decisions, I try to collect as much data as possible before actually deciding. I try to abstract away and see the principles behind real-world events and people's opinions. I try to generalize them and turn them into something clear and reusable. I hate depending on the unpredictable and having to make lottery guesses. Any problem can be solved by systems thinking and acting.