Questionnaire Experts Comments: Future Progress in Artificial Intelligence
Comments
Bart Selman: | Recent successes in AI, such as IBM's Watson (Jeopardy), autonomous robotic systems (e.g. self-driving cars), and, in general, progress in machine learning are both exciting and somewhat concerning. The combination of big data, the continued rise in distributed processing power and sensor technology, and AI techniques appears to be accelerating the progress in our field. Scientists and policy makers will have to consider how to integrate these advances into society in a positive manner. |
Benjamin Kuipers: | While computational modeling of the phenomenon of mind (i.e., AI) is still promising after approximately 60 years, the role of the "unknown unknowns" remains the single most important factor when assessing the plausibility and timing of HLMI. Consider the analogy with Hilbert's vision for the mechanization of mathematics, and the difference between its plausibility in 1900 and in 1935. I don't consider Goedel's Theorem itself to be a barrier to AI, but it stands as an example of how a barrier to apparently unstoppable progress can arise from an unexpected direction, acting by deepening our understanding of the intrinsic difficulty of the problem. |
Anonymous: | My interest in AI is directed towards understanding intelligence, so that people can improve their own human intelligence. I have no interest or opinion about the above questions. |
Wolfram Burgard: | High-level machine intelligence will allow us to build extremely smart systems that will assist us in numerous ways. I have no fears in this regard and definitely would like to see their number grow. Especially in the areas of transportation, manufacturing and home automation, there are plenty of aspects where the world could be safer, more efficient and more enjoyable. |
Francisco Herrera: | In the coming years, we will witness the integration of intelligent technologies: new cognitive architectures that, combined with current hardware processing capabilities and mass access to information, will enable the design of new models moving toward the overall goal of machine intelligence. High-level machine intelligence should place artificial intelligence where the fathers of the field dreamed it would be in the middle of the last century, allowing a balance between humans and machines in solving problems and advancing society. |
Include as "Stephen H. Muggleton": | The present fear of Artificial Intelligence is largely driven by sales of books and films. By contrast, the many long-term challenges to the existence of humanity require all the intelligence we can engineer. Overall, addressing the development of HMLI is likely to drive a deeper scientific understanding of human nature, the phenomenon of Mind, and the nature of Life. |
yes: | Our best model for above average human intelligence is above average humans. Even exceptionally brilliant people are limited by the amount of experience they can absorb. I am not convinced that super fast computers with state-of-the-art algorithms can be better at being people than people are. |
if you'd like.: | Achieving HLMI can be done either by "reverse engineering" the human brain or by invention. I'm not sure which method will ultimately be successful, so we should try both. Maybe the two approaches will complement each other. Probably the biggest social problem arising from HLMI will be how to distribute the wealth produced by the machines that replace human labor (and to whom). And, just as important: what will humans do with all their free time? |
Anonymous: | There is no stopping this; but, if in the wrong hands, when the singularity is achieved, what will become of humankind? |
Anonymous: | Humans have a breadth of capabilities, which is practically infinite. A human can learn and can objectively do basically an infinite number of things (independent of emotions), e.g., read, walk, speak multiple languages, scramble eggs, prepare a meal, play tennis, dance, prove theorems, listen to music, tell jokes, purchase a house, select a restaurant, create research problems, help others, plan a vacation, ride a bike, put on a necklace, pick up flowers, knit, hold and turn newspaper pages, clean, generate shopping lists, shop, etc. Machines one day may be capable of performing some human tasks. But it is hard to imagine that a single machine or even multiple machines will be able to match the full range of human capabilities. |
Anonymous: | I enjoyed each question, although it was very difficult to choose a single numeric value such as 90% or 97% rather than a linguistic term such as “high” or “very high” for each question. |
Stuart Russell: | Current progress is rapid. Economic and military pressures will add momentum. Because of the "general" nature of intelligence, the issue of managing and controlling developments and applications will be harder than it is for, say, engineered biosystems. I think the field needs to take itself seriously. It is time to grow up. |
Anonymous: | Rather than including neural networks specifically, there should have been a tick box for machine learning in general. The impact is difficult to foresee: it depends on how society reacts. One outcome could be a society with a few super rich individuals (the owners of the technology) in which the majority has to make do with very little. At the other extreme, if benefits are shared more equally, the outcome could be more positive, at least concerning material wealth. However, the lack of meaningful employment may cause problems even in that case if society does not adjust. |
Tom Dietterich: | Computers are already superhuman along many dimensions, but these primarily involve either pure inference (no autonomous action) or limited autonomous action but only at superhuman rates for limited tasks (real time control of aircraft and cars, fast response in weapons systems). It isn't obvious to me that there will be a market for autonomous systems that take actions on time scales similar to humans. More generally, I don't see what the incentives will be to create HLMI with broad capabilities for action. I can imagine broad coverage of inference capabilities (integrating all of the world's information to inform human decision making), but extremely limited permission to take autonomous actions. The chief danger might be that HLMI could effectively take actions in the world by manipulating the behavior of humans. But this will require the collusion of multiple HLMIs owned by different people/organizations, so I doubt it will happen. |
Dr. C.D. Spyropoulos: | Based on my experience with current digital technology, I believe that human professions can be replaced by intelligent machines only to a limited extent. This limitation concerns very specific tasks within human professions (e.g. deciding whether a web site is pornographic or not, recognising a human face among many available human faces, specifying the most probable location to drill and explore ore deposits, human-like dialogue systems restricted to specific thematic areas, etc.). I do not believe that intelligent machines (based on digital technology) will be able to synthesize or evolve, at the same time, the many different human abilities needed to achieve the complex and varied tasks a human can deal with, or to work across different professions. A new era might appear when biological computers come onto the market. With such computers, biological intelligent machines might be much better able to act, think and behave like humans. I cannot imagine how such computers will be controlled and used only for the benefit of humans, and I cannot foresee whether the results will be extremely good or bad. It rather frightens me. |
Anonymous: | I am a computational linguist with an engineering background. |
Anonymous: | I believe that only the field of automatic learning is promising. After many decades it will mature with the development of new, presently unknown, methods. Then maybe AI machines will surpass the performance of humans. |
Anonymous: | As with every activity of humans, there is risk associated with the further development of AI. However, potential benefits significantly outweigh potential risks of future AI. |
Anonymous: | Regarding question A.1, I'd have liked to indicate something like "an overlay of techniques, encompassing logic, swarms AND evolution" though I wouldn't relate that to "integrated cognitive architectures" and that's why I didn't tick it. |
Argyris Dentsoras: | 1. I think that any progress on HLMI is inevitably connected to the adequacy, completeness and reliability of the available models of human cognition and the human brain. Our knowledge of these two major issues is continuously being enriched and, as a consequence, results in better models, a fact that is beneficial for HLMI. 2. I suppose that HLMI should cover not only mental activities but also physical ones (I have robotics in mind). This implies that advances in engineering play a crucial role in supporting the progress of HLMI. The current situation in engineering research is promising because it rapidly absorbs achievements from the sciences and provides efficient solutions to complex problems. 3. A problem that should be efficiently solved is that of computational/processing power for HLMI. I have the impression that the complexity, diversity and multiplicity of the environment that human beings act within demand not only sophisticated representation models but also systems with high processing capacity, subject to restrictions of time and space (the human brain has a very high processing capacity within a very small space). 4. Despite the obvious difficulty of the goal of HLMI per se, as a researcher who applies methods of artificial and computational intelligence in engineering design, I am optimistic enough about the final outcome (whenever that will be). |
Anonymous: | Previous experience has shown that expectations for AI have been overly ambitious. |
Marios Daoutis: | Even though I'm optimistic about the development of AI (or HLMI, as it is called in this questionnaire), in my opinion structural changes may be necessary on several levels (i.e., project initiation and interdisciplinary cooperation, distribution of funding, as well as dissemination of research results) in order to improve the way AI research is conducted now. Eventually this will increase our AI research capacity to a level appropriate for achieving HLMI in the foreseeable future (i.e., the next 50-100 years). |
Anonymous: | There is something missing in the definition of HLMI, I think: does a system specifically designed and programmed for a profession qualify, or will it have to be generically designed and programmed so that it can learn how to exercise a profession from human-style education? If the former, then it is not entirely clear to me that the target has not been reached already, depending on how you interpret "most" and how you segment the spectrum of human professional activity into individual professions. For example, if "researcher" or "scientist" is one profession this makes the "most" target easier; if broken down into hundreds of professions such as "nuclear fission researcher" then the "most" target gets harder. |
J. Mark Bishop: | I have reservations about the sample population selected for this survey; unless drawn from a broad constituency of engineers and technologists [who actually work in AI] I suspect an over-optimistic bias. Furthermore, as HLMI is defined as a system "that can carry out most human professions at least as well as a typical human", question (1) appears ill-formed, as (a) "profession" is not specified and (b) even if a respondent does entertain the intuition that HLMI in [potentially several] independent, tightly specified professional domains might be reached by year X, she might not concede that HLMI can be achieved across *all* [or even most] unlabelled professional domains *simultaneously*. Furthermore, the solution to the issue of generality is extremely unlikely to be a simple linear summation of the behaviour of small, independent HLMI systems, but will most likely require a complex level of additional "holistic" meta-analysis and understanding; a level I cannot seriously envisage a machine instantiating algorithmically (cf. Dreyfus, Searle etc.) .. 'A good AI researcher is not just proficient at programming, he also knows all about Tolkien, is shy with women, enjoys role-playing and heavy metal; and it is not clear that such domains are independent; all need to come together to make a fully-fledged AInik' .. and this forces an answer to questions (2) and (3) of (never) and (zero), even if one does consider that HLMI might be reached [and occasionally exceeded] in some constrained domains of [professional] human expertise (e.g. "space invaders" qua "military drone pilots"). |
Fabio Bonsignorio: | There will be steep incremental progress in Cognition, AI and Robotics (more or less the same 'thing' for me). These systems will be cheaper than today's robots, but their price, technical complexity and difficulty of use for end users will still limit their possible uses. The potential benefits from this 'incremental progress' are enormous. Some kind of 'rigid', but 'good enough' AI might be possible. Yet current approaches have hit critical bottlenecks: more task complexity leads to exponentially increasing system complexity, thus reducing robustness, flexibility and adaptability. Robotics and AI are shifting their paradigms. Robotics is moving from what we may label a 'Cartesian' and 'clock-like' mechatronics-plus-machine-learning paradigm to a radically new one, based on the reverse engineering of animal intelligence/cognition. The new approach borrows principles and methods from AI, neuroscience, artificial life and synthetic biology. A relevant example of this novel approach is given by the RoboCom Flagship proposal, see www.robotcompanions.eu. Animal-like novel robotic systems might potentially have a tremendous impact, making possible a new robotics science and technology and thus enabling a geological shift in human society. Incidentally, this will make possible what you brand HLMI. |
George Deane: | HLMI is not to be confused with conscious machines. We still don't know what the essential features of consciousness are. I would like to see further research on whether consciousness can arise out of a program (i.e. can/do semantics arise out of a purely syntactical system?) or whether consciousness requires a particular substrate/physical property. The creation of strong AI will hinge on this. The creation of life and the creation of tools that will improve human life have vastly different ethical implications. The fastest means of acceleration appears to me to be to refine our tools; i.e. augment intelligence and improve computation. |
Dinesh Pothineni: | AGI - it is not going to be an easy path, considering the wide divide between the various cognitive architectures being proposed by researchers, but we'll get there eventually. Going by Nash equilibrium, the most likely outcome of constructing such superior intelligence favors co-evolution of Homo sapiens with machines. Handing the keys to AI while we ride in the back seat is not an option for us. This will probably start with minor augmentations, leading to a major shift in the shape and form factors of physical bodies. Not everyone will like this, of course. Surviving this cultural divide and pushing this ambitious ideology to a mainstream audience without causing any major catastrophic loss is going to be the key. |
Anonymous: | Referring to questions 1 & 2: I would prefer to reply to a question concerning the most probable research 'scenario' for reaching the HLMI level, composed from the approaches listed and 'major breakthroughs' situated on the time axis (a similar idea has actually been implemented in another survey on less general AI topics). To make question 4 less subjective, different potential disruptive events and drivers could be listed, and quantitative probability assessments derived from such 'elementary replies'. Nevertheless, I congratulate you on the idea and I am looking forward to knowing the results of this survey ;) |
Mate Toth: | It's non-linear, so unpredictable; from this viewpoint the above questions are kind of meaningless, but of course I don't know. |
Rolf Pfeifer: | I'm not sure what the phrase "most human professions" means. Are we only talking about performance here? What about "professions" that humans don't do at all? And novel "professions" are continuously created. The "professions" landscape will have completely changed by the time something like HLMI arrives, if it ever does. Anyhow, it's fun to think about these issues, but I find it extremely hard to provide informed answers that go beyond mere guessing, especially when the questions are fuzzy. --Rolf Pfeifer |
yes: | Given the limits of human cognition (limited short-term memory, low learning rates, presence of a large number of biases, difficulty of thinking abstractly, probabilistically and with numbers), I believe that the development of HLMI is most likely to have positive effects -- i.e. to act as a prosthesis to human intelligence. There are many examples where human experts' predictions have turned out wrong or where their decisions have been sub-optimal (finances/economy, pollution, politics), in spite of their high confidence. At the least, a HLMI would have the decency to recognize that predictions cannot be made or, when decisions have to be made, that they can be made only with low confidence. |
Tom Ziemke: | It seems to me that most of the questionnaire only makes sense for people who actually believe in the possibility of HLMI. But obviously not all of us do. So for question A1, one of the possible answers really should be something like "I doubt that the aim will ever be achieved" and maybe in addition also the stronger statement "I am convinced that the aim will never be achieved". I also find it a bit odd that in questions A3 and A4 you are trying to force non-believers to assume the future existence of HLMI - isn't that likely to make the interpretation of the answers very difficult? |
Eray Özkural: | I do not forecast a catastrophic outcome from developments in the AI field. Human-level AI is presently a subject that eludes even most CS researchers. However, once we experimentally show the feasibility of such systems, there will be countless fields of use. There are myriad potential applications of AI in science and engineering. AI will fundamentally change the way we use computers. Until now, computers could not really produce any useful information on their own, except in the ways in which we programmed them to; most computer use is limited to storing, editing and transmitting information. With AI, however, the computer will be able to invent solutions, make designs and conduct experiments; thus AI will revolutionize science and engineering. There will be so many positive applications of AI technology that we will soon forget the imaginary threats posed by evil military robots. Anyhow, there is little that is ethical about the military, so perhaps we can increase our natural intelligence in the meantime to construct a peaceful world that has a better chance of survival. The time is near when we will witness human-like cognitive skills in AI programs. Once this happens, I suspect, with positive results from many teams, there will be rapid progress. A technological society is inevitable; we must embrace the change. Thus, let us proceed from the bronze age of computing into the next one. Let us 'dreamform' our world and transcend our present limits through technology. |
Sam Freed (Sussex): | Personally, I wouldn't be in the field of human-like AI if I were not strongly optimistic about the foreseeable future. |
Leslie Smith: | I think machine and human intelligence really are qualitatively different. Human intelligence is really based on survival, honed ecologically over millennia. Machine intelligence is based on electronics (currently, at any rate), and even if modelled on the brain, is designed for different purposes. And for all that we are beginning to understand human intelligence, our models are very basic, whether neural models of optimisation or pattern recognition, or more abstract models that essentially build logical systems. Creativity, volition and awareness are nowhere to be seen. If we really are serious about machines that really do think, we need to consider difficult questions like the neural construction of thought, and that may lead us to first-person science, which would mean a major change in the scientific method itself. |
Anonymous: | I would like to call your attention to Prof. Peter Gacs' response to Q3 in http://lesswrong.com/r/discussion/lw/9hq/qa_with_experts_on_risks_from_ai_4/ : [AIs will overwhelmingly outperform humans] but the contest will never be direct, and therefore the victory will never have to be acknowledged. Whatever tasks the machines are taking over will always be considered tasks that are just not worthy of humans. For a recent example, consider the pooh-poohing of Watson's victory over Ken Jennings by Michael Dyer in the same discussion. The goalpost can always be moved, and I agree with Prof. Gacs that it will continue to be moved. |
Colin T. Schmidt: | According to my knowledge today, I think Dialogical Human Intelligence is very hard to copy. HLMI will have to be, in my view, something different from today's human dialogical intelligence, which does not mean it would be less useful in the future context. If humanity is changing this much, though, we must admit it is still in control, whatever its new form will be. |
Anonymous: | The performance of an HLMI depends on the specific characteristics of a given profession. For example, if an HLMI has to assemble cars then the achieved performance can be 100% or higher, but if an HLMI has to train people then the performance is a variable reflecting the humans' capabilities for learning, emotional states, cognitive load, situations and context. Also, the impact of HLMI on humans' everyday life can be positive and supportive, as well as negative and destructive; it depends on the intended application and the purposes of its use. |
Anonymous: | Sorry to say that but as long as Hubert Dreyfus' objections are not acknowledged and taken into account, Artificial Intelligence will go nowhere worthy. |
David Weinbaum (Weaver): | 1. The probability of the emergence of AGI critically depends on large-scale collaboration and coordination of different research programs and approaches. No single discipline or approach will achieve HLMI. 2. One approach which I think is very important, and which seems to be missing from the main effort, has to do with complexity and the complex adaptive systems approach. 3. It seems that a scenario based on convergence (a symbiosis of some kind) of humans and machines (e.g. cyborgs) is a more plausible path to super-human intelligence than most of the approaches trying to build a super-human intelligent machine. 4. Philosophical investigation of the conceptual framework underlying the understanding of general intelligence may yield important insights and fertilize other research efforts. For example: (a) the principles guiding the evolution and development of information-processing systems; (b) shifting the emphasis from systems that solve problems to systems that identify problems. It seems that GI is better framed in terms of identifying problems than in terms of solving problems, where narrow AI approaches seem to perform better. This may require novel ontologies. |
Anonymous: | AGI is a hyper-amplifier, and society is very fragmented today philosophically as to what is important in the first place. Many would argue that working 100,000-hour weeks to grow the GDP of virtual goods exponentially, as is projected with WBE, may be considered to miss the point of existence, yet in recent years the hyper-darwinists have been very effective at convincing the anti-darwinists and Platonists to support goals toward such ends. We need a regrounding and digestion of our cosmology, and a firmer understanding of our own physical relationship with the fabric of reality, before we can confidently allow our existence to be subsumed by our creations. |
Leon Kester: | It is very worrying to see that human emotion and genetically determined unscientific thinking are currently disrupting the development of a broadly accepted scientific theory of ethics, substantially increasing the probability of existential catastrophe. |
Bridget Cooper: | I am not sure of the value of all this guessing - and the questions are full of assumptions. It is perhaps more important to get the fundamentals right, e.g. how learning happens (this seems important for an intelligent being), and then perhaps it will become more realisable. I am not sure what it could learn once it has become better than a human, since we only learn from others; it would have to have found life elsewhere. The chance of it fulfilling professions dominated by females is much less likely given the current state of play, so maybe only male jobs will be lost, if and when it emerges. |
Robert Wenzel: | It is really hard to give a prediction of when AGI will happen - nearly impossible. I attended several conferences in the 90s and now the AGI conference in Oxford in 2012, I developed a system called QuickCog, and I have seen a lot of other teams trying to create an AGI. When I consider the development from then until today, I don't really see any significant progress. So I think AGI could in general be created more or less accidentally. Furthermore, I think it will be created by an outsider. I hope I will see it :) |
Knud Thomsen: | A major breakthrough will certainly only be achieved when one succeeds in combining the diverse results and strengths of a wide field including psychology, cognitive science, AI and engineering. My personal bet is that the Ouroboros Model offers such an all-encompassing approach. Concerning the associated risks, I would rate them as comparable to the ones intrinsic to other great technologies, e.g. nuclear power. We have the capacity to basically wipe out human culture with atomic bombs; happily, so far we have managed to avoid this. At least in part this can be attributed to the rational self-interest of the involved parties: any big atomic war would harm, even kill, all adversaries and more, including, in particular, the "winner". I would expect a super-"intelligent" artificial agent to adhere to similar reasoning: destroying its very origin, basis and effective reality would most probably not be prudent. Actually, I would stress that HLMI is better conceived as High-Level Machine Rationality, HLMR, as this is an easier-to-define "local" concept and a more quantifiable term than "intelligence", which inevitably involves a relative, arbitrary and subjective element of judgment by external (human) agents (see paper at AGI-12). |
Pei Wang: | AGI will not necessarily reach or surpass "the performance of any human in most professions". If "intelligence" is not identified with concrete problem-solving capabilities, but with general principles (such as adaptation and relative rationality) under which the problems are solved, then AGI systems and human beings will end up solving different problems, and show similarity at a more abstract level. Since the experience of a human and an AGI system will be very different, the behavior and performance of the two will not be identical. That is why I believe AGI can be achieved, but not in the form as many people assume today. |
Hiroshi Yamakawa (FUJITSU LABORATORIES LTD.): | Recent neuroscience has made great advances, especially in in vivo behavioral experiments using optogenetics and the like. In spite of the increased data, neuroscientists still need functional models to explain the experimental data. I therefore feel that several breakthroughs in measurement technique are needed before the amount of experimental data itself can begin to speak about high-level intelligent function through whole brain emulation (WBE). From the viewpoint of brain-inspired computing, we have another chance by uncovering brain functions not yet achieved by computers, which relate to two fundamental AI problems: the frame problem and the symbol-grounding problem. Up to now, basic algorithms for more than half of the brain sub-regions related to high-level cognition and control have already been revealed. For example, the basal ganglia can be explained by reinforcement learning algorithms and the cerebellum by feed-forward control theory. These known brain functions are explained by relatively simple algorithms, mainly at the neural-circuit level, and they do not need exotic technology such as quantum computing. I believe that the remaining mysteries of brain function may be related to the neocortex, hippocampus and so on. I guess the principles behind these mysteries will be revealed between 2015 and 2019, through combinations of or small improvements to existing algorithms, because neuroscience findings are now increasing explosively and we can narrow the hypotheses sufficiently. After the functional principles are revealed, many researchers will try to implement those algorithms in practical ways. Then high-level machine intelligence will be realized between 2017 and 2025. |
Anonymous: | Intelligence should be separated from competence. You may get a smart AI much sooner (< 10 yrs) that simply is not yet an expert in all fields. |
Peter Eckersley: | Given that artificial intelligence research has some chance of succeeding in the next few decades, we should try to maximise the odds that if intelligent computers join us on this planet, they do so peacefully, and are able to sustain values such as pluralism, curiosity, and altruism. Unfortunately futurological puzzles like this, as with all predictions in politics, technology, and economics, will be extraordinarily difficult for us to answer well. |
JMV: | With my background in Good Old-Fashioned AI and software engineering, here are my comments. The questions are oversimplified regarding the variety of intelligences that can be imitated: it is obvious that car driving is easier to achieve than, say, musicianship or mathematical theory-making. |
Bill Hibbard: | Without an energetic political movement to prevent it, the default is that AI will be a tool for military and economic competition among humans. So AI will be negative towards some humans. The main problem isn't that humanity will be wiped out, but that AI will enable a small group of humans to dominate the rest. It all depends on getting the politics right. |
Anonymous: | Don't see any persuasive argument against the development of AGI. Don't see anyone with a persuasive argument that their plan for developing AGI is the right one. No one still seems to know where to start on the problem. Confident estimates that AGI is 20/30 years away seem unbacked by sound argument. Would quibble slightly with the definition of 'high level machine intelligence' used here. A machine that could carry out most human professions at least as well as a typical human might be much easier to make than an AGI: it seems conceivable that a machine embodying a combination of narrow-AI techniques might satisfy this criterion without being generally intelligent. (e.g. containing a Watson-like approach to medical diagnosis, legal work, etc.) |
Anonymous: | Assigning a distribution to question 2 has become more difficult in light of Armstrong's recent work. |
Richard Sutton: | The questions betray naïveté in the questioner's view of humanity by 1) apparently assuming human performance is un-augmented human performance, 2) assuming that there is a unitary "impact on humanity", positive or negative, as opposed to a myriad different senses in different people of what is good and bad, and 3) neglecting the "compared to what" problem in question A4. The questions invite us not to think carefully. |
Michael Bukatin: | The current research and discussion on Friendly AI is very abstract. In particular, we are not really using software tools to help us understand the issues involved in Friendly AI, and we are not developing any detailed plans to do so. And this is the area where we really need computers to help us think. We also need to think more about how we might arrive at some consensus regarding ethics, not so much among ourselves as between us and the developing AGI, and we need to focus more on how we are going to develop such ethics jointly with the AGI, so that it is as much its creation as it is ours. The idea of trying to control or manipulate an entity which is much smarter than a human does not seem ethical, feasible, or wise. What we might try to aim for is a respectful interaction. |
Ted Goertzel: | I believe there is great uncertainty about timing but that it will come and that the risks are often exaggerated. |
Anonymous: | AGI is/will be the most important topic of this century and it is time that everybody starts to realize that and provide some major funding to both the development of these systems and the proper control of their impact. Both endeavors are equally worthwhile. |
Anonymous: | Great conf. Happy to participate. |
David Davenport: | I believe humans are general-purpose learners and so, in principle, machines will not be able to do anything more. Of course, machines will probably not share our slow and unreliable biological implementation medium, and so will be able to outperform us through sheer speed, memory and communication ability. Such machine intelligences will appear, but it is difficult to predict when, given the current lack of a "sensible" consensus regarding the right research direction (probably neuroscience has the best chance of showing the way). Whether HLMI is ultimately "good" or "bad" for mankind is similarly difficult to decide. Unlikely as it may be, human nature might change and we could live together harmoniously. Or, more likely (and assuming it doesn't wipe itself out beforehand), humanity will be replaced by the (superior) machines, which is probably a good thing -- an evolutionary step equivalent to the move from Neanderthal to Homo sapiens! |
Caspar Bowden: | Perhaps there is a distinction to be made about "one that can carry out most human professions at least as well as a typical human". Does this presuppose the need to make android robots? An AI could plausibly be a run-of-the-mill accountant or doctor or lawyer, but obviously not a tennis player unless embodied. An AI could also plausibly perform such professions in a "typical" way without intellectual creativity (but with a quasi-human personality). Another way to formulate the question is to ask "how long before an AI *correctly* believes it experiences subjectivity" - because on that date it would cease to be haunted by "the hard problem" and simply become curious about it like everybody else, whilst getting on with the business of augmenting its intelligence. The ethical stance of AIs may depend on that belief. How should one assess the zombieness of an AI which might have reasons to claim to experience subjectivity when it does not, or not to admit subjectivity which it in fact has? It also likely makes a difference whether AIs are hard to distinguish from people (e.g. Blade-Runner-replicant bodies but with artificial brains). If people treat AIs ethically as if they had subjectivity, this may well affect the ethical stance of AIs towards people (and the AIs' own beliefs about whether they *do* experience subjectivity). |
Tijn van der Zant: | My expertise is developing robotic brains. I've seen dozens of projects, and several increments in the speed of development. This increasing development speed is what makes me believe that we will see such HLMI this century. My main concern is: what is their phylogeny? HLMI coming from military applications will be much less likely to be beneficial for humanity or even biological life. HLMI coming from domestic service robots will want to assist humans, even if we are considered less intelligent/capable than the HLMI. So the questions should not be: Will we reach it, and if we do, when will we reach it? Instead the question should be: What type of research should we do to increase the probability that this (HLMI) technology will help us and not wipe us out? Just as most people, even most 'experts', in the 80s could not imagine the internet of 2010, nowadays most 'experts' have too narrow or too uninformed a view to be knowledgeable about what will happen 30 years from now. Being an expert in robotics (Dr. in AI, founder of RoboCup@Home), I'm most likely also not capable of imagining what will happen in 30 years. Still, the following question holds: What type of research should we do to increase the probability that this (HLMI) technology will help us and not wipe us out? |
Anonymous: | Looking at the way embodied practices are gaining recognition for their role in shaping cognition in the field of AI, the prospect of achieving some form of HLMI by 2050 appears sound. But, of course, much mystery remains at the neural level, and integrating the conceptual clarities/ambiguities taken for granted so far with upcoming neural-level findings will demand much labor. |
Roman Yampolskiy: | The actual date of HLMIs arrival is completely irrelevant. It could happen in 20 years or in 500 years. What we need to worry about is the impact. |
Anonymous: | Instead of worrying about how HLMI could impact humanity, researchers should first find some hints that they are heading in the right direction. |
Yes: | We underestimate the uncertainty involved in predicting AI, from both ends - AI could be developed much sooner or much later than we generally expect. |
Jean-Daniel Dessimoz: | - The most important next step should consist in widely disseminating scientifically and technically sound definitions of core cognitive elements (in particular complexity, knowledge, learning, expertise, intelligence, experience) along with their metric units. - On this basis, progress would be better guided, and contributions from all sources would be better focused onto the most relevant areas. - The Theory/Model for Cognitive Sciences ("MCS") proposes definitions and metrics as advocated above, and formally builds on a few more basic, classical notions: information (metric unit: bit) and time (unit: second). Indirectly, it also relies on the additional, more general notions of reality and modeling. - Robotics and, more generally, automation (notably including cognitics) provide two kinds of critical advantages: 1. infrastructure and testbeds for turning elements of theories into experimental operations, for validation and benchmarks; 2. deployment of artificial cognitive systems on a wide scale, for the benefit of mankind. |
Anonymous: | The same way most people abandoned their fantasies of vacations on the moon, they will hopefully also renounce their dreams of an army of robot slaves and HLMI will not be a research priority for very long. |
Claudius Gros: | This questionnaire neglects one of the two central aspects of human-level AI. It treats the level of cognitive information processing, not considering the motivational problem. It is a common fallacy in AI to assume that intelligence and logic alone are sufficient for an AI to set its own goals, or to assume that a high-level AI will just efficiently follow the orders of a human operator. Essentially none of the longer-term perspectives, including the singularity scenario, treats this issue. In the end, how can you judge the possible impact of human-level AI when no prediction whatsoever can be made of what the robots will eventually do with their time? |
Anonymous: | The formulation "machine intelligence that greatly surpasses the performance of any human in most professions" is open to interpretation. I take it to mean problem-solving and communication ability, and perhaps not appearance and other human-like features. "Prediction is very difficult, especially about the future," said Niels Bohr, who lobbied for peaceful atomic policies. Unpredictable events can substantially speed up, or even slow down, the process. |
Anonymous: | Future technology is always a double-edged sword, so there's nothing special about AI and the impact it would have. Boring questionnaire. |