When will humanity reach the Singularity, that now-famous point in time when artificial intelligence becomes greater than human intelligence? It is aptly called the Singularity by proponents like Ray Kurzweil: like the singularity at the center of a black hole, we have no idea what happens once we reach it. However, the debate today is not about what happens after the Singularity, but about when it will happen. _BigThink_

In the video below, Kurzweil discusses some of his ideas about the coming singularity, including timelines and cautionary notes.
Microsoft co-founder and billionaire Paul Allen recently expressed skepticism about Kurzweil's timeline for the singularity in a Technology Review article.
Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

Allen goes on to discuss the "complexity brake": the braking effect that the limitations of the human brain (and the limitations of human understanding of the human brain) will apply to any endeavour that begins to accelerate in complexity too quickly.
While we suppose this kind of singularity might one day occur, we don't think it is near.
...Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these "laws" will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this. _Technology Review_, Paul Allen
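To see concretely what is being argued over, consider a minimal sketch (in Python, using invented placeholder numbers rather than real benchmark data) of the kind of extrapolation the Law of Accelerating Returns performs: fit an exponential to past price-performance and project it forward. The projection encodes only the shape of the past curve; as Allen says, nothing in it guarantees that the regime which produced that shape will persist.

```python
import numpy as np

# Hypothetical price-performance history (computations per dollar).
# These values are invented for illustration, not real measurements.
years = np.array([1990, 1995, 2000, 2005, 2010])
perf = np.array([1e2, 1e3, 1e4, 1e5, 1e6])

# Fit a straight line in log space, i.e. an exponential trend in raw units.
slope, intercept = np.polyfit(years, np.log10(perf), 1)

def extrapolate(year):
    """Project the fitted exponential trend to a future year."""
    return 10 ** (slope * year + intercept)

print(f"projected price-performance in 2045: {extrapolate(2045):.2e}")
# The forecast is pure curve extension: it contains no mechanism that
# would tell us whether hardware and software progress will continue.
```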
Allen's argument is remarkably similar to arguments previously put forward by Al Fin neuroscientists and cognitivists. The actual way the human brain works is something that is very poorly understood, even by the best neuroscientists and cognitivists. If that is true, then the understanding of the brain among artificial intelligence researchers tends to be orders of magnitude poorer. If these are the people who are supposed to come up with the super-human intelligence and the "uploading of human brains" technology that posthuman wannabes are counting on, good luck!
But now, Ray Kurzweil has chosen the same forum to respond to Paul Allen's objections:
Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follows remarkably predictable paths.

Kurzweil's attitude seems to be: "Because difficult problems have arisen and been solved in the past, we can expect that all difficult problems that arise in the future will also be solved." Perhaps I am being unfair to Kurzweil here, but his reasoning appears to be fallacious in a rather facile manner.
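Kurzweil's statistical analogy, at least, is easy to demonstrate in isolation, whatever one thinks of its application to technology forecasting. Here is a minimal Python sketch of the point: each random walker is individually unpredictable, yet an ensemble statistic lands right on the theoretical value.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 100_000, 1_000

# Each particle takes 1,000 unit steps of +1 or -1: individually unpredictable.
steps = rng.choice([-1, 1], size=(n_particles, n_steps))
positions = steps.sum(axis=1)

# Any single particle's endpoint is anyone's guess...
print("one particle's endpoint:", positions[0])

# ...but the ensemble's mean squared displacement is predictable:
# random-walk theory says E[x^2] = n_steps for unit steps.
print("measured mean square:  ", (positions ** 2).mean())  # ~1000
print("theoretical value:     ", n_steps)
```

Whether technology projects actually average out the way gas particles do is, of course, exactly the point in dispute.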
...Allen writes that "these 'laws' work until they don't." Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it's true that this specific trend continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm.
...Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems.
...How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons.
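The arithmetic behind Kurzweil's redundancy argument is easy to lay out. The figures below are rough orders of magnitude taken from the debate above (tens of megabytes of compressed genomic design information; roughly 100 trillion connections), with an assumed byte cost per connection added purely for illustration:

```python
# Rough order-of-magnitude figures from the debate above; the bytes-per-
# connection value is an illustrative assumption, not a measurement.
genome_design_bytes = 50e6      # ~tens of MB of compressed design information
connections = 100e12            # ~100 trillion connections in the brain
bytes_per_connection = 4        # assumed cost to specify one connection

explicit_spec_bytes = connections * bytes_per_connection
redundancy_factor = explicit_spec_bytes / genome_design_bytes

print(f"explicit wiring specification: {explicit_spec_bytes:.1e} bytes")
print(f"genome design budget:          {genome_design_bytes:.1e} bytes")
print(f"implied redundancy factor:     {redundancy_factor:.1e}")
# ~10^7: each byte of genomic design must unfold, through repeated
# developmental patterns, into millions of bytes of wiring detail.
```

On Kurzweil's account, that factor of roughly ten million is exactly what the massively repeated pattern-recognition circuitry supplies.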
...Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain "bottom up" without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. _Technology Review_, Ray Kurzweil
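The neuron-level understanding Kurzweil refers to is commonly explored with simplified models. As a hedged illustration (this is a textbook leaky integrate-and-fire neuron, not Kurzweil's actual method), a few lines can simulate one neuron type; the hard part the debate turns on is how such units compose into functional modules.

```python
# Textbook leaky integrate-and-fire neuron: the "model one neuron type"
# step, shown only as an illustration of scale, not Kurzweil's proposal.
dt, tau = 1e-3, 20e-3                             # time step, membrane constant (s)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials (mV)

def simulate(drive_mv, steps=1000):
    """Simulate `steps` time steps with a constant input drive (mV)."""
    v, spike_times = v_rest, []
    for t in range(steps):
        # Membrane potential decays toward rest while integrating the drive.
        v += (dt / tau) * (v_rest - v + drive_mv)
        if v >= v_thresh:        # threshold crossing emits a spike
            spike_times.append(t * dt)
            v = v_reset          # reset after spiking
    return spike_times

print(f"{len(simulate(20.0))} spikes in 1 s of simulated time")
```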
Al Fin neuroscientists and cognitivists warn Kurzweil and other singularity enthusiasts not to confuse the cerebellum with the cerebrum, in terms of complexity. They further warn Kurzweil not to assume that a machine intelligence researcher can simply program a machine to emulate neurons and neuronal networks to a certain level of fidelity, and then vastly expand that model to the point that it achieves human-level intelligence. That is a dead-end trap, and discovering as much will end up wasting many billions of dollars of research funds in North America, Europe, and elsewhere.
This debate has barely entered its opening phase. Paul Allen is ahead in terms of a realistic appraisal of the difficulties to come. Ray Kurzweil scores points on the strength of his endless optimism and his proven record of skillful reductionist analyses and solutions to previous problems.
Simply put, the singularity is not nearly as near as Mr. Kurzweil predicts. But the problem should not be considered impossible. Clearly, we will need a much smarter breed of human before we can see our way clear to the singularity. As smart as Mr. Kurzweil is, and as rich as Mr. Allen is, we are going to need something more from the humans who eventually birth the singularity.
Originally written for Al Fin, the Next Level