List of Accepted Paper Presentations: Abstracts |
A1 |
Aaron Sloman, U Birmingham |
Abstract: Despite the huge practical importance of developments in AI, there have always been researchers (including Turing) less interested in using AI systems to do useful things and more interested in the potential of AI as *science* and *philosophy*; in particular,
the potential to advance knowledge by providing new explanations of natural
intelligence and new answers to ancient philosophical questions about what
minds are. A particularly deep collection of questions concerns explaining
how biological evolution produced so many different forms of intelligence, in
humans and non-human animals, and in humans at different stages of
development, in different physical and cultural contexts. Current AI still
cannot match the amazing mathematical discoveries by ancient mathematicians
still in widespread use all over this planet by scientists, mathematicians
and engineers. The discovery processes are quite unlike
statistical/probabilistic learning for reasons spelled out in Kant's
philosophy of mathematics. |
A2 |
Hajo Greif, TU Munich |
Abstract: Given the personal
acquaintance between Alan M. Turing and W. Ross Ashby and the partial
proximity of their research fields, a comparative view of Turing’s and
Ashby’s work on modelling “the action of the brain” (letter from Turing to
Ashby, 1946) will help to shed light on the seemingly strict
symbolic/embodied dichotomy: while it is clear that Turing was committed to formal, computational methods of modelling and Ashby to material, analogue ones,
there is no straightforward mapping of these approaches onto symbol-based AI
and embodiment-centered views respectively. Instead, it will be demonstrated
that both approaches, starting from a formal core, were at least partly
concerned with biological and embodied phenomena, albeit in revealingly
distinct ways. |
A3 |
Hyungrae Noh, U Iowa |
Abstract: The receptor notion is
that a neural or computational structure has the role of indicating because
it is regularly and reliably activated by some distal conditions (Ramsey
2007). The receptor notion has been widely accepted by naturalistic theories
of content as a form that combines Shannon-information theory (1948) with
teleosemantics (Dretske 1981; Skyrms 2010; Shea et al. 2017). … cognitive systems may turn out to be
as deeply embedded in, hence dependent on, their specific environments as are
other biological systems (Millikan 2003). In other words, biological signals
(both in the correlational information conveying sense and the mapping
functional sense) are affordances, like food or shelter: things that are what they are only relative to an animal who would use them for a specific
purpose (Millikan 2012). |
A4 |
Ioannis Votsis, New College
& LSE, London |
Abstract: What is computation?
At the heart of this question appears to lie a paradox. On the one hand,
computation looks like the kind of thing virtually any physical system does.
After all, physics ensures that some states are followed by other states in a
rule-like manner. This view has come to be known as ‘pancomputationalism’. On
the other hand, computation looks like the kind of thing that only emerged in
recent human history. On this view, very few physical systems compute, namely
those that were technologically designed to do so. We may call this
‘oligocomputationalism’. This talk aims to resolve the apparent paradox by
putting forward two non-rivalling notions of computation: one that
underwrites pancomputationalism and another that underwrites
oligocomputationalism. It is argued that each notion is legitimate because it
captures different uses of the term ‘computation’. |
A5 |
Jiri Wiedermann and Jan van Leeuwen, Czech Academy of Sciences |
Abstract: AI research is
continually challenged to explain cognitive processes as being computational.
Whereas existing notions of computing seem to have their limits in this respect, we
contend that the recent, epistemic approach to computations may hold the key
to understanding cognition from this perspective. Here, computations are seen
as processes generating knowledge over a suitable knowledge domain, within
the framework of a suitable knowledge theory. This machine-independent
understanding of computations allows us to explain a variety of higher
cognitive functions such as accountability, self-awareness, introspection,
knowledge understanding, free will, creativity, anticipation, and curiosity in
computational terms, as well as to understand the mechanisms behind the
development of intelligence. The argumentation does not depend on any
technological analogies. |
A6 |
Matthew Childers, U Iowa |
Abstract: The “Symbol Grounding
Problem” (SGP) concerns how an artificial agent (AA) can autonomously derive
the meaning of the symbols it successfully manipulates syntactically. While
conditions for a successful solution have been proposed (the “Z/B-condition”),
few have considered a teleosemantic solution which meets the Z/B-condition. I
argue that a teleosemantic solution is problematic because orthodox
teleosemantics construes representation in terms of the evolutionary etiology of
the biological capacities of representational agents and systems. I assess
the strengths of three non-etiological theories of function (propensity,
modal, and causal-role theories) and show that they all fail. In turn, I
outline avenues for a teleosemantic solution to the SGP afforded by
artificial evolution and genetic programming research. Yet these also fall
afoul of the Z/B-condition. |
A7 |
Paul Schweizer, U Edinburgh |
Abstract: The paper explores two
related variations on the ‘animat’ theme. Animats are hybrid devices with
both artificial and biological components. Traditionally, ‘components’ have
been construed in concrete terms, as physical parts or constituent material
structures. Many interesting issues arise within this context of hybrid
physical organization. However, within the context of
functional/computational theories of mentality, demarcations based purely on
material structure are far too narrow. It is abstract functional structure
which does the key work in characterizing the respective ‘components’ of
thinking agents, while the ‘stuff’ of material implementation is of secondary
importance. Thus the paper extends the received animat paradigm, and explores
some intriguing consequences of expanding the conception of bio-machine
hybrids to include abstract functional structure, and not just concrete
physical parts. In particular, I extend the animat theme to encompass cases
of mental bio-synthetic hybrids. |
A8 |
Blay Whitby, U Sussex |
Abstract: A much neglected area
in Artificial Intelligence ethics concerns the increasing use of simulated
emotional responses. Though these are not ‘genuine’ or in any way equivalent
to human emotional responses, there is ample evidence that humans can easily
be manipulated by simulated emotional displays made by machines – even
when they know it to be simulated and the level of simulation is relatively
crude. The technology of artificial emotions is no longer experimental and it
is time to analyze what ethical limits should be applied. Since there is no
applicable legislation and very close to zero ethical guidance on the ethics of artificial emotions, this is an area which deserves far more attention
from those interested in AI ethics. |
A9 |
Al Baker and Simon Wells, Aberdeen U |
Abstract: Even today, artificial
intelligences of varying complexity and sophistication are used for a broad
range of persuasive purposes. My FitBit persuades me to run that extra half mile, while Amazon and Steam persuade me to make new purchases on the basis of previous ones. These examples are innocuous enough, but it is easy to imagine potentially more troubling uses of persuasive AI. Should an AI be permitted to persuade a criminal defendant to plead guilty? Persuade me to pursue a career? Who to vote for? Who to date? We discuss the morally relevant differences between persuasion by
artificial and human intelligences, how to determine what principles
governing human to human persuasion should govern AI persuasion, what
additional principles may be necessary, and how those principles can help us
decide under what circumstances AI to human persuasion is impermissible. |
A10 |
Geoff Keeling, U Bristol |
Abstract: Driverless cars will
be on our roads soon. Many people argue that driverless cars will sometimes
encounter collisions where (i) harm to at least one person is unavoidable or
very likely and (ii) a choice about how to allocate harm or expected harm
between different persons is required. How should we program driverless
cars to allocate harm in these collisions? Derek Leben proposes a Rawlsian
answer to this question. In this paper, I argue that we have good moral
reasons to reject Leben’s answer. |
A11 |
Michael Prinzing, U North
Carolina |
Abstract: There is a non-trivial
chance that sometime, in perhaps the not too distant future, someone
somewhere will build an artificial general intelligence (AI). If that’s true,
it seems not unlikely that AI will eventually surpass human-level cognitive
proficiency, and may even become “superintelligent”. The advent of
superintelligence has great potential—for good or ill. It is therefore
imperative that we find a way to guarantee—before one
arrives—that any superintelligence we build will remain friendly.
Programming an AI to pursue goals that we find congenial will be an extremely
difficult challenge. This paper proposes a novel solution to this puzzle:
program the AI to love humanity. For friendly superintelligence, I suggest, all
you need is love. |
A12 |
Sander
Beckers, Cornell U |
Abstract: The ethical concerns
regarding the development of an Artificial Intelligence have received a lot
of attention lately. Even if we have good reason to believe that it is very
unlikely, the mere possibility of an AI causing extreme human suffering is
problematic enough to warrant serious consideration. In this paper I argue
that a similar concern arises when we look at this problem from the
perspective of the AI. Even if we have good reason to believe that it is very
unlikely, the mere possibility of humanity causing extreme suffering to an AI
is problematic enough to warrant serious consideration. |
A13 |
Tom Everitt, Australian
National University |
Abstract: How can we maintain
control over agents that are smarter than ourselves? We argue that we need to
ensure that we build agents that have goals aligned with ours; that are
corrigible and won't resist shutdown or corrections; and that strive to preserve
these properties in the face of accidents, adversaries, and potential
self-modifications. |
A14 |
Torben Swoboda, U Bayreuth |
Abstract: Sparrow has argued
that autonomous weapon systems cause a responsibility gap, because the
actions of an AWS are not predictable. In this paper I distinguish between
local and non-local behaviour and argue that non-local behaviour is
sufficient for attributing responsibility. An AWS can be instantiated by
supervised learning. In such a case the programmer is aware of the non-local
behaviour of the system. This implies that the programmer is blameworthy and
liable for the damages caused whenever the AWS wrongfully causes harm. I then
formulate a consequentialist criterion which excuses the programmer from
being held responsible. Lastly, I list challenges that remain, which is why we should remain sceptical about deploying AWS. |
B1 |
Anna Strasser, Berlin School
of Mind & Brain |
Abstract: Standard notions in
philosophy of mind characterize socio-cognitive abilities as if they are
unique to sophisticated adult human beings. But soon we are going to be
sharing a large part of our lives with various kinds of artificial agents.
That is why I will explore in this paper how we can expand these restrictive
notions in order to account for other types of agents. Current minimal
notions such as minimal mindreading and a minimal sense of commitment present
a promising starting point since they show how these notions can be expanded
to infants and non-human animals. Considering developments in Artificial
Intelligence I will discuss in what sense we can expand our conception of
sociality to artificial agents. |
B2 |
Bryony Pierce, U Bristol |
Abstract: For reasons for action
to be grounded, facts concerning those reasons must obtain in virtue of
something more fundamental, in a relation of non-causally dependent
justification. I argue that, in a robot or other entity with artificial
intelligence, grounding would have to be in something external: the
qualitative character of the affective responses of its programmers or other
human beings. I explore a number
of senses of grounding and discuss the distinction between semantic and
affective content in the context of grounding reasons for action. |
B3 |
Chuanfei Chin, National University of Singapore |
Abstract: What new challenges
are emerging in philosophical debates on artificial consciousness? Earlier
debates focused on thought-experiments such as Block’s Chinese Nation and
Searle’s Chinese Room, which seemed to show that phenomenal consciousness cannot
arise, or cannot be produced, in non-biological machines. These debates on
the possibility of artificial consciousness have been transformed as we make
more use of empirical methods, models, and evidence. I shall argue that this
naturalistic approach leads to a new set of philosophical challenges. These
challenges centre on the multiplicity of neurofunctional structures which
underlie ‘what it is like’ to be in a conscious state. When we uncover more
than one kind of phenomenal consciousness, how should we conceptualise
artificial consciousness? By addressing this challenge, we can classify
different theories of artificial consciousness in the AI literature and
clarify the moral status of any conscious machines. |
B4 |
David Longinotti, University of
Maryland |
Abstract: Non-living machines
can’t be agents, nor can they be conscious. An action is homeostatic for its
agent and must begin in something that moves to maintain itself: living
matter. Behavior motivated by feelings is homeostatic, so the source of qualia
is a living substance. This can also be inferred scientifically. Various lines of empirical evidence indicate that feelings are a distinct form of energy
generated in specialized neurons. Phenomenal energy would not be objectively
observable if it were spent as and where it is produced. This expenditure is
thermodynamically necessary if, by converting action potentials to feelings,
the source of the feelings averts an increase in its entropy. Thus, the
source of qualia is a living, self-maintaining substance. |
B5 |
Rene Mogensen, Birmingham City U |
Abstract: Geraint A. Wiggins
proposed a formalised framework for ‘computational creativity’, based on
Margaret Boden’s view of ‘creativity’ defined as searches in conceptual
spaces. I argue that the epistemological basis for well-defined ‘conceptual
spaces’ is problematic: instead of Wiggins’s well-defined types or sets, such
theoretical spaces can represent emergent traces of creative activity. To
address this problem, I have revised the framework to include dynamic
conceptual spaces, along with formalisations of memory and motivations, which
allow iteration in a time-based framework that can be aligned with
experiential learning models (e.g., John Dewey’s). My critical revision of
the framework, applied to the special case of improvising computer systems,
achieves a more detailed specification and better understanding of
computational creativity. |
B6 |
Sankalp Bhatnagar, Shahar Avin,
Stephen Cave, Marta Halina, Aiden Loe, Seán Ó HÉigeartaigh, Huw Price, Henry
Shevlin and Jose Hernandez-Orallo, U of Cambridge |
Abstract: New types of
artificial intelligence (AI), from cognitive assistants to social robots, are
challenging meaningful comparison with other kinds of intelligence. How can
such intelligent systems be catalogued, evaluated, and contrasted, so that representations
and projections offer more meaningful insights? AI and the future of
cognition research can be catalyzed by an alternative framework and
collaborative open repository for collecting and exhibiting information about all kinds of intelligence, including humans, non-human animals, AI systems,
hybrids and collectives thereof. After presenting this initiative, we review
related efforts and offer the results of a pilot survey on the motivations,
applications and dimensions of such a framework, aimed at identifying and
refining its requirements and possibilities. |
B7 |
Yoshihiro Maruyama, U of Oxford |
Abstract: The frame problem is a
fundamental challenge in AI, and the Lucas-Penrose argument indicates a
limitation of AI if it is successful. We discuss both of them from a unified
Gödelian point of view. We give an informational reformulation of the frame
problem, which turns out to be tightly linked with the nature of Gödelian
incompleteness. We then revisit the Lucas-Penrose argument, giving a version
of it which shows the impossibility of information physics. It then turns out
through a finer analysis that if the Lucas-Penrose argument is accepted then
information physics is impossible too. Our arguments indicate that the frame
problem and the Lucas-Penrose argument share a common Gödelian structure at a certain level of abstraction, and that what is crucial for both is the finitarity condition on frames and computations, without which the limitations can readily
be overcome. |
B8 |
Abhishek Mishra, National
University of Singapore |
Abstract: While there has been
much discussion about the moral status of humans, non-human animals and even
other natural entities, discussion about the moral status of digital agents
has been limited. This paper proposes a way of reasoning about how we should
act towards digital agents under moral uncertainty by considering the
particular case of how we should act towards simulations run by an artificial
superintelligence (ASI). By placing the problem of simulations within the
larger problem of AI-safety (how to ensure a desirable post-ASI outcome) as
well as debates about the grounds of moral status, this paper formalizes it
into a decision problem. The paper ends by suggesting future steps to solve
this decision problem, and how the approach might be generalized. |
B9 |
Ron Chrisley, U Sussex |
Abstract: I argue that for
auto-epistemic knowledge-based systems, a further constraint beyond
consistency, which I call *epistemic consistency*, must be met. I distinguish two versions of the
constraint: *propositional* epistemic consistency requires that there be no
sentences in an agent’s knowledge base that constitute an epistemic blindspot
for that agent. Maintaining this constraint requires generalising from the
notion of an epistemic blindspot to the concept of epistemic blindspot sets;
I show how enforcing this requirement can prevent fallacious reasoning of a
form found in some well-known paradoxes. The other version, *inferential*
epistemic consistency, forbids certain epistemically problematic inferences. I argue that the intuitive notion of
the validity of a rule of inference can only be retained if inferential
epistemic consistency is enforced. |
B10 |
Daniel Kokotajlo and Ramana
Kumar, U North Carolina |
Abstract: We articulate two
projects that decision theorists can engage in: Roughly, they are (a) trying
to discover the norms that govern instrumental reasoning, and (b) trying to
decide which decision procedures to install in our AIs. We are agnostic about
the relationship between the two projects, but we argue that the two most
popular answers to (a), CDT and EDT, are clearly not good answers to (b). Our
overall goal is to argue that project (b) exists, that it is immensely
important, and that decision theorists can productively contribute to it.
Indeed, perhaps some decision theorists already are; this is what we take the
Machine Intelligence Research Institute to be doing. |
B11 |
J Mark Bishop, John Howroyd and
Andrew Martin, Goldsmiths, U London |
Abstract: In this paper we
demonstrate [for the first time] progress towards developing a swarm
intelligence algorithm - based on interacting communicating processes - that
is Turing complete. This is a relatively important result – not least because the core principles underlying the interacting processes are (a) analogous to the behaviour of certain [tandem running] ant species (in nest/resource location tasks) and (b) based on communications NOT computations (although, of course, they can be described [and simulated] computationally); the latter feature positions our work in a different class to both Siegelmann and Sontag’s Turing Complete RNN (Recurrent Neural Network) and the Google/DeepMind team’s (Graves, Wayne & Danihelka) 2014 NTM (Neural Turing Machine), both of which remain implicitly – if not explicitly – grounded in computational processes (summations, multiplications, activation-functions etc.). |
B12 |
Raül Fabra Boluda, Cesar Ferri, Jose Hernandez-Orallo,
Fernando Martínez-Plumed and M.J. Ramírez, U Sevilla |
Abstract: As we are increasingly surrounded by machine learning (ML) models making decisions for governments, companies and individuals, there is growing concern that we lack a rich explanatory and predictive account of the behaviour of these ML models relative to the users' interests (goals) and (pre-)conceptions (narratives). We argue that
the recent research trends in finding better characterisations of what a ML
model does are leading to the view of ML models as complex behavioural
systems. Consequently, we argue that a more contextual abstraction is
necessary, as done in system theory and psychology, very much like a mind
modelling problem. We bring some research evidence of how this transition can
take place, suggesting that more machine learning is the answer. |
B13 |
Shlomo Danziger, Hebrew U of
Jerusalem |
Abstract: Turing's Imitation
Game (IG) is usually understood as a test for machines' intelligence. I offer
an alternative interpretation, according to which Turing holds an
externalist-like view of intelligence; and I discuss some ramifications this
view may have for current AI development and cognitive research. Turing, I
argue, conditioned the determination that a machine is intelligent upon two
criteria: one technological and one sociolinguistic. The technological
criterion, tested by the IG, requires that the machine be designed so that
its behavior is indistinguishable from human intellectual behavior. But the
IG does not test if the machine is intelligent; that requires also the
fulfillment of the sociolinguistic criterion – that the machine be
perceived by society as a potentially intelligent entity. To Turing,
intelligence is constituted by the way a system is perceived by humans, and
not just by its internal properties. |
B14 |
Tobias Wängberg, Mikael Böörs,
Elliot Catt, Tom Everitt and Marcus Hutter, Australian National University |
Abstract: The off-switch game is
a game theoretic model of a highly intelligent robot interacting with a
human. In the original paper by Hadfield-Menell et al. (2016), the analysis
is not fully game-theoretic as the human is modelled as an irrational player,
and the robot's best action is only calculated under unrealistic normality
and soft-max assumptions. Wängberg et al. (2017) make the analysis fully game-theoretic by modelling the human as a rational player with a random utility
function. As a consequence, the robot's best action can be calculated for
arbitrary belief and irrationality assumptions. |