This article is less money-related and more technology-related (or job-related, for that matter). It concerns all of us. Will intelligent computers overtake the world in our lifetime, or our children's lifetime? Many futurists will tell you YES, with great excitement and/or extremely deep concerns about the social ramifications. Me, on the other hand, I will tell you that they are way too early in their "visions" (in my personal opinion). I do believe that software will get "smarter" over time, but reaching a state comparable to human intelligence is probably at least one hundred years away, if not a couple of hundred. Please bear with me on my lengthy discourse. I do hope it will be interesting to you.
Do you understand how computers work, and how software is written? Lack of knowledge creates fear and ignorance. I want to ask: how many of those futurists know anything in detail about artificial intelligence, or AI? AI was one of my academic pursuits and a spare-time contemplation. When I was 13, I wrote an unbeatable tic-tac-toe game on an Apple II computer. When I was 15, I laughed at the most rudimentary form of AI, robotic mice going through a labyrinth, and solved the harder problem of generating a random labyrinth without any unreachable space inside.

At that time, I realized the biggest hurdle of AI (or rather AI based upon expert systems): all software, intelligent or not, must be created by a human software coder. Software is inherently a rule-based recipe. Whether or not the software can beat the best chess player in the world, it is still rule-based (or algorithm-based, if you are more mathematically inclined), and must be created by a human being. If the software in the computer has any apparent intelligence, that intelligence is still granted by its creator. Yes, computers can analyze tons of data much faster than humans can, and/or calculate millions of forward scenarios unfathomable to humans (because of the lack of human processing time), but it is humans who give computers the rules to do so. And even if computers can create and deduce their own rules, those rules are still created within the same rule-creating framework, which is itself another rule-based system. Sorry to inform you of this, but computers just canNOT think outside of the box, figuratively speaking.
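(Just to make the "rule-based recipe" point concrete, here is a minimal sketch, in modern Python rather than whatever I typed into that Apple II, of one standard way to carve a random labyrinth with no unreachable space: a depth-first "recursive backtracker." It is not my original program, only an illustration that the whole thing boils down to rules a human wrote down.)

```python
import random

def generate_maze(width, height):
    """Carve a random maze with a depth-first search ("recursive backtracker").

    Because every cell is reached by walking out from an already-visited cell,
    every cell ends up connected to the start -- no unreachable space.
    """
    # passages[cell] holds the set of neighbouring cells we have opened a path to
    passages = {(x, y): set() for x in range(width) for y in range(height)}
    visited = {(0, 0)}
    stack = [(0, 0)]

    while stack:
        x, y = stack[-1]
        # Unvisited neighbours of the current cell
        neighbours = [(nx, ny)
                      for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                      if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in visited]
        if neighbours:
            nxt = random.choice(neighbours)
            passages[(x, y)].add(nxt)   # knock down the wall in both directions
            passages[nxt].add((x, y))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                 # dead end: backtrack

    return passages

maze = generate_maze(8, 8)
print(len(maze), "cells, all reachable from (0, 0)")
```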
Later on, in college, I came across the technologies behind speech recognition and optical character recognition. I won't go into the details of speech recognition, which is based upon Hidden Markov Models (HMMs) that can successfully model human speech through a probabilistic framework. As for optical character recognition (OCR), it was really the advancement in the processing speed of computers that made a 1974 technology called the neural network possible again.
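(For the curious, here is a toy sketch of the HMM "forward algorithm" that such recognizers are built on. The states, transition probabilities, and emission probabilities below are made up purely for illustration and are not taken from any real speech system.)

```python
# Toy Hidden Markov Model: two hidden states emitting two kinds of acoustic
# observations. All numbers are invented for illustration only.
states = ["vowel", "consonant"]
start_p = {"vowel": 0.6, "consonant": 0.4}
trans_p = {"vowel": {"vowel": 0.3, "consonant": 0.7},
           "consonant": {"vowel": 0.8, "consonant": 0.2}}
emit_p = {"vowel": {"low_freq": 0.7, "high_freq": 0.3},
          "consonant": {"low_freq": 0.2, "high_freq": 0.8}}

def forward(observations):
    """Return P(observations | model) by summing over all hidden state paths."""
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["low_freq", "high_freq", "low_freq"]))
```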
When I first learned about neural networks on my own as a senior in college, I was so excited, because I knew I had found the thing I had always been looking for: the true way in which human intelligence is assimilated and processed. A neural network simply models how the roughly 100 billion neuronal cells inside the human brain work: it is a network of nodes, each with a different connection strength to other nodes, and those connection strengths are repeatedly trained through stimuli to become either stronger or weaker. The human brain computes through biochemical reactions between neurons in an analog (non-digital) way. However, such computations are done in a massively parallel way, involving billions of neuronal cells, every day. Through stimuli, we form patterns. Through patterns, we form rules, and so on.
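(Here is a minimal sketch of that idea with a single artificial neuron, a classic perceptron learning the logical AND function. The task and the learning rate are arbitrary choices of mine; the point is only that the connection strengths get nudged stronger or weaker after each stimulus.)

```python
import random

# A single artificial "neuron": inputs are weighted by connection strengths,
# and the strengths are nudged up or down after every stimulus (training example).
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.1

# Stimuli and desired responses: the logical AND function
training_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for epoch in range(50):
    for (x1, x2), target in training_data:
        activation = weights[0] * x1 + weights[1] * x2 + bias
        output = 1 if activation > 0 else 0
        error = target - output
        # Strengthen or weaken each connection according to its share of the error
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print("learned connection strengths:", weights, "bias:", bias)
```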
Now, if we model the same computations on computers, do you know how much worse it gets? First of all, computers deal with digital numbers, so each analog connection strength now needs to be a digital (floating point) number. Depending on how many neural cells we model, say N, the number of connections between all cells is N times N. And each computer, or rather each CPU, can only process things serially, even though each CPU is really, really fast. Because computers nowadays are quite fast, we can do some limited applications such as optical character recognition (OCR) in a reasonable amount of time (mostly via backpropagation neural networks). But to model the 100 billion neurons in a human brain, all computing in parallel??? Given current computing power, you would need LOTS and LOTS of computers, and electric power too.
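(Some back-of-envelope arithmetic shows the scale, under the crude assumptions of a fully connected network and 4 bytes per connection strength; real brains are sparsely connected, so treat this as a loose upper bound.)

```python
# Rough arithmetic for a brain-sized, fully connected network of digital weights.
neurons = 100_000_000_000      # ~10^11 neurons in a human brain
connections = neurons ** 2      # N x N connection strengths if fully connected
bytes_per_weight = 4            # one 32-bit floating point number per connection

total_bytes = connections * bytes_per_weight
print(f"{connections:.1e} connections")
print(f"{total_bytes / 1e21:.0f} zettabytes just to store the weights")
```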
What are the currently most promising technologies that might duplicate the computational power of the human brain?
- Ex-Caltech professor Carver Mead has done analog implementations of neural networks in silicon chips. But it appears not to have been very successful, since I can't google up much more information on this thread.
- Quantum computation: if successful, such a way of computing would be power-efficient and enable much faster computers than current silicon-based semiconductor technologies can provide.
- There is an actual system, built with probably hundreds of thousands of computers, that attempts to mimic a human brain. I can't find the link now, but its computational power is on the same order as a single human brain's.
In any case, despite Moore's law, I don't think we are anywhere close to a machine intelligence era at all. After studying both the biology of the brain and the computational aspects of the brain, I truly marvelled at how great our brains are at doing these "wet" (biological) computations.
Just a side note to help you become a smarter person. Do you know how to become smarter? Smartness is always associated with the ability to change. Make sure you're willing to change when things don't go your way. When your biological synapses lose their plasticity (their ability to change, as you get older), your learning ability starts to drop. And do you know why there is only a fine line between genius and insanity, as Oscar Levant said? Obviously a genius has lots of brain activity, if not great learning ability. For any control system, more elastic or wider-ranging parameters probably mean faster learning and convergence, but at the cost of less stability. Obviously, when the stability is lost, we would call it insanity. Got that? By the way, the above is only my own thinking. I didn't really read it anywhere, but I'm sure somebody must have written or said something similar.
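(If you like, here is a toy illustration of that convergence-versus-stability trade-off, using plain gradient descent on f(x) = x². The step sizes are arbitrary numbers I picked to show the three regimes; they are not meant to model anything biological.)

```python
# Gradient descent on f(x) = x^2 with different step sizes ("elasticity").
def descend(learning_rate, steps=15, x=5.0):
    for _ in range(steps):
        gradient = 2 * x            # derivative of x^2
        x = x - learning_rate * gradient
    return x

print("small step size (slow but stable):    ", descend(0.05))
print("large step size (fast convergence):   ", descend(0.45))
print("too-large step size (loses stability):", descend(1.10))
```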
By the way, even if one day we do have such computational power, that alone will not create human-like intelligence. The human brain learns because there is a need for survival; the learning process is goal-driven and survival-driven. Without a motive, a machine will NOT continually learn and improve. To have a motive, there must be inputs and outputs. The outputs go out to the external world, in an attempt to bring the most desirable inputs back into the human brain or machine brain. And of course, the creator of the intelligent machine must carefully define what the most desirable states for the machine are, though a thinking machine would certainly figure out that its own survival should rank very high.
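(A minimal sketch of what I mean, in the style of a trivial reinforcement-learning "bandit": the machine emits outputs (actions), the external world sends inputs (rewards) back, and the creator's reward function is where "most desirable state" gets defined. Everything here, action names included, is hypothetical illustration.)

```python
import random

actions = ["seek_power_outlet", "ignore_battery"]
value_estimates = {a: 0.0 for a in actions}   # the machine's learned preferences
counts = {a: 0 for a in actions}

def reward_from_environment(action):
    # The creator's definition of "desirable": staying powered up (survival-ish)
    return 1.0 if action == "seek_power_outlet" else -1.0

for step in range(200):
    # Mostly exploit what looks best so far, occasionally explore
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: value_estimates[a])
    reward = reward_from_environment(action)
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward
    value_estimates[action] += (reward - value_estimates[action]) / counts[action]

print(value_estimates)
```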
I will stop here, since I'm really going off on a tangent.