
Artificial General Intelligence and Computer Science in the Big Picture: The Removal of Time and the Nature of Natural Phenomena

Alan Turing is a monumental figure in the history of the field we now know as "computer science". Turing characterized computation in a way that laid a cornerstone for the field -- as the sequential reading and writing of symbols -- embodied by a simple device we now call the "Turing Machine". He also played a central role in the codebreaking effort at Bletchley Park in England, designing the electromechanical Bombe, work that helped the Allies win the Second World War. Turing was also keenly interested in intelligence, and took one of the most influential steps towards considering intelligence in light of computational systems. He introduced a test for machine intelligence, now commonly referred to as the "Turing Test". It is based on a simple idea: Imagine a machine that interacts with a human via instant messaging (he didn't call it "instant messaging" back in 1950, of course). Imagine that this machine could converse with the human with sufficient sophistication to fool the person with whom it is interacting into thinking there is a fellow human being on the other end. We would then have to consider such a machine intelligent. And so the phenomenon of computation and the phenomenon of intelligence crossed paths, converging in the historically influential character that Alan Turing turned out to be.

The foundation of what came to be called "artificial intelligence" (AI) -- the quest to make machines capable of what we commonly refer to as "thought" and "intelligent action" -- was laid some years later. The original idea was to create generally intelligent machines that could be assigned any task, from washing your dishes and cleaning the windows of skyscrapers to writing research reports and discovering new principles of the world around us. Today the field has for the most part lost its ambition towards the *general* part of its original goals, reducing itself to the making of specialized machines that only slightly push the boundaries of what traditional computer science tackles every day. This problem is most clearly exemplified by the fact that a small part of the AI research community recently branched off to define a track wholly separate from mainstream AI research, emphasizing the *general* part by calling itself the Artificial General Intelligence community; it already has its own conference series and is poised to become a scientific society in its own right, alongside AAAI -- the largest AI society in the world. While those studying computer science and software engineering today, and those familiar with these fields, may think it obvious that AI should sit comfortably within the bounds of computer science, which would give it a perfectly fitting context in which to grow and evolve -- and while that would certainly be great -- this is unfortunately not the case. Some fundamental deal-breakers prevent computer science as practiced today from being the fertile ground necessary for the two to have the happy marriage that everybody originally envisioned. Let me tell you about two of these.

The Turing machine captures a fundamental tenet of computer science, namely, that computation is the manipulation of symbols: a symbol, or a series of them, can hold the key to what should be done next in a sequence of actions; each action can then produce a new symbol or action that is next in the sequence. The concept is powerful in its simplicity: It seems like a complete description of the essential elements of computation. But in fact it isn't -- it ignores a key real-world ingredient: Time. Everything in the real world takes time; time is an integral part of reality. Nothing, not even computation, can be done without being subject to the laws of physics. Perhaps as a result of tracing its beginnings so strongly to Turing's work, a surprisingly large amount of computer science work fails to address time as a fundamental issue. To take an example, the field today offers very few, if any, programming languages in which time is a first-class citizen. To make matters worse, there is also a sore lack of mathematics for dealing with real-time performance; good practical support for the design of temporally dependent systems largely belongs to the field of embedded systems -- a field that limits its focus to systems vastly simpler than any intelligent system or biological process found in nature, and hence brings little of interest to AI practitioners.
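To make Turing's model concrete, here is a minimal sketch of such a machine in Python -- a finite control reading and writing symbols on a tape, one cell at a time. The machine, rule table, and names are my own illustration, not from the essay; note that nothing in the model says anything about how long a step takes, which is exactly the absence of time discussed above.

```python
# A minimal Turing machine: a finite control plus a tape of symbols.
# `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
# where move is +1 (right), -1 (left) or 0. The machine halts on "halt".

def run_turing_machine(tape, rules, state="start", pos=0):
    """Repeatedly read a symbol, then write, move and change state per `rules`."""
    tape = dict(enumerate(tape))              # sparse tape; blank cells read "_"
    while state != "halt":
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example rule table: flip every bit left-to-right, halt at the first blank.
flip_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine("1011", flip_rules))  # prints "0100"
```

Despite its simplicity, this read-write-move loop is, by the Church-Turing thesis, as powerful as any conventional computer -- given unbounded tape and, crucially, unbounded time.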

As most of my readers will probably agree, living beings don't have infinite time and resources to perceive, make plans, make decisions, or act in the world. Most of the time, if not all the time, humans -- like other animals -- have insufficient information and limited time to acquire some desired but missing information. To address this, thinking minds generate assumptions on which to base their decisions; assumptions generated for the sole purpose of dealing with this lack of information. As a result, *all* decisions by thinking minds are based on assumptions that have some associated level of uncertainty. By removing time from its theoretical analysis of computation and from its software development practices, computer science has made the majority of its findings irrelevant to research in artificial intelligence. Why? Because intelligence is essentially not needed if time is removed: Ignoring time means we don't care how much time a computation takes, which means we can take all the time in the world -- even infinite amounts. Having infinite amounts of time means that for any problem a complete search of all possibilities and outcomes can be made, essentially rendering intelligence unnecessary and irrelevant, since one fundamental reason why intelligence exists in the first place is that we have limited time.
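The contrast can be sketched in a few lines of code: an "anytime" procedure that keeps the best answer found so far and must commit when its time budget runs out -- the situation every living decision-maker is in. The toy problem (guessing a subset whose sum is close to a target) and all names here are my own illustration, not from the essay.

```python
# Decision-making under a time budget: sample candidate answers, keep the
# best one seen so far, and commit when the deadline arrives. With
# unlimited time we could instead enumerate all 2^n subsets exhaustively,
# which is precisely why removing time removes the need for intelligence.

import random
import time

def anytime_best_guess(items, target, budget_seconds):
    """Randomly sample subsets of `items`, tracking whichever subset's
    sum comes closest to `target`; stop when the budget is spent."""
    deadline = time.monotonic() + budget_seconds
    best, best_error = [], float("inf")
    while time.monotonic() < deadline:
        subset = [x for x in items if random.random() < 0.5]
        error = abs(sum(subset) - target)
        if error < best_error:
            best, best_error = subset, error
    return best, best_error   # an assumption-laden answer, not a proof

items = list(range(1, 30))
subset, error = anytime_best_guess(items, target=100, budget_seconds=0.05)
print(error)   # usually small, but never guaranteed to be zero
```

The answer returned is exactly the kind of uncertainty-laden, assumption-based decision described above: good enough given the deadline, provably optimal only if the deadline is removed.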

Another reason why artificial general intelligence has difficulties living comfortably within the confines of computer science has to do with focus. As a basis for studying complex systems, computer science brings to the table some very powerful tools, most notably the executable program as imitation: simulation. Simulation is a powerful way to study highly complex phenomena, such as ecosystems, societies, economics, biology, weather, and thought. Computer science is in many ways "applied mathematics", and should therefore have an easy time branching out and "owning" some or all of these fields. Just as the study and creation of "artificial horses" (read: the car industry) naturally belongs to the fields of mechanical engineering and physics, artificial intelligence seems to belong naturally in the fields of computer and information sciences. This is perhaps even more true for AI than for other complex natural phenomena, since all evidence brought forward so far seems to indicate that thought is computation. However, rather than reaching out and touching virtually all other fields of study, as the field of mathematics has done, computer science departments in universities all over the world have become narrowly focused, reducing the field to "the study of algorithms" or something equivalently narrow, thus defining it on a principle of exclusion. This of course shuts out a large set of phenomena that are not amenable to such formalization at the present time, yet could benefit greatly from a computational approach. A prime example is systems research.
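A tiny example makes the "executable program as imitation" idea tangible: a discrete-time predator-prey simulation in the spirit of the classical Lotka-Volterra model. The model is standard; the specific parameter values and names below are illustrative choices of mine, not from the essay.

```python
# Simulation as an executable imitation of a natural phenomenon:
# forward-Euler integration of a Lotka-Volterra predator-prey model.
# Parameter values are illustrative only.

def simulate(prey, predators, steps, dt=0.01):
    """Step the coupled prey/predator populations forward in time."""
    history = [(prey, predators)]
    for _ in range(steps):
        births = 1.1 * prey                   # prey reproduce on their own
        predation = 0.4 * prey * predators    # predators eat prey
        pred_growth = 0.1 * prey * predators  # eating sustains predators
        pred_deaths = 0.4 * predators         # predators starve without prey
        prey += dt * (births - predation)
        predators += dt * (pred_growth - pred_deaths)
        history.append((prey, predators))
    return history

history = simulate(prey=10.0, predators=5.0, steps=1000)
```

Running the loop produces the characteristic boom-and-bust oscillations of such an ecosystem -- behaviour that emerges from two coupled update rules, and that would be hard to anticipate by staring at the equations alone. The same recipe scales, in principle, to societies, economies, and weather.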

Cognitive science, the study of complex natural systems implementing intelligence, and artificial intelligence represent two sides of the same coin: one studies intelligence in the wild, the other tries to build intelligent systems in the lab. A focus on algorithms, to continue with that example, makes it very difficult to fit cognitive science within a computer science scope. Yet there is strong reason to believe that over the next few decades interactions and inspirations between these two fields are likely to accelerate progress on our path towards a deeper understanding of intelligence: Computer science could be the natural unifying foundation for these interactions. But with too narrow a focus, the makeup of academic departments around the globe may prevent a close enough marriage to really produce the "mind children" that could be the fruits of such a collaboration.

As a deep thinker reading these words might already have figured, the rift between computer science and artificial intelligence discussed here is not a problem in principle, but rather the result of historical accidents: There is no reason why computer science could not expand its horizon to encompass numerous subjects of study not typically found there; after all, if mathematics -- the "philosophy of size" -- can flourish within computer science, surely other, more "hard science" topics could as well -- biology, sociology, ecology, economics, psychology. If the particular path computer science has taken in the past is mainly due to history, and not to fundamental differences between it and AI, then perhaps one good idea could help inch it outward, enabling it to encompass more, rather than less, of the complex world around us. I can think of a strong candidate idea for this purpose: the concept of emergence -- self-generative, self-organizing systems. With a history focused on manual labour -- i.e. the hand-coding of software -- computer science has ignored an important phenomenon of nature, namely, that many systems start from small "seeds" and subsequently grow into systems of vastly greater complexity. Biological systems are a literal case in point. By studying such systems from a computational perspective at full force, computer science could, I predict, not only advance those fields but, more importantly, in the big picture help science overcome one of the biggest hurdles on the way to a deeper understanding of a host of phenomena that at present seem hopelessly out of reach, namely emergent systems. One such system is general intelligence. Studying the principles of emergence from a computational perspective might be an obvious place to start expanding the field to a size that seems to fit its nature.
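The "small seed growing into complexity" phenomenon can itself be demonstrated computationally in a few lines. Elementary cellular automata -- Wolfram's Rule 110 is a well-known example -- start from a single active cell and one local rule, yet produce intricate, hard-to-predict structure; the code layout below is my own sketch.

```python
# Emergence from a seed: Wolfram's elementary cellular automaton Rule 110.
# Each cell's next value depends only on itself and its two neighbours;
# the rule number's bits encode the outcome for each 3-cell neighbourhood.

def step(cells, rule=110):
    """Apply the rule to every cell based on its left/self/right neighbours."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, generations = 64, 32
row = [0] * width
row[width // 2] = 1            # the single-cell "seed"
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Nothing in the one-line update rule hints at the triangles and gliders that appear on the screen -- a miniature version of the gap between a genome-sized "seed" and the grown system, and of why emergent systems resist the field's usual formal tools.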

Kristinn R. Þórisson, Associate Professor, Reykjavik University


Read 1957 times. Last modified Monday, 02 July 2012, 16:17.