PT-AI 2017

List of Accepted Poster Presentations

1. Wolfhart Totschnig. Fully autonomous AI
2. Paulius Astromskis. In Critique of RoboLaw: the Model of SmartLaw
3. Selmer Bringsjord and Naveen Sundar Govindarajulu. Do Machine-Learning Machines Learn?
4. Christopher Burr, Nello Cristianini and James Ladyman. Intelligent Agents and the Manipulation of User Behaviour
5. Gordana Dodig Crnkovic. Morphologically Computing Embodied, Embedded, Enactive, Extended Cognition
6. Tom Everitt, Victoria Krakovna, Laurent Orseau, Marcus Hutter and Shane Legg. Value Learning from a Corrupted Signal
7. John Fox. Slicing and dicing AI theories: how close are we to an agreed ontology?
8. Sam Freed. Is All Original Programming Introspective?
9. Arzu Gokmen. Institutional Facts and AI in Society
10. Jodi Guazzini. A Gnoseological Approach to the SGP: the Difference between Perception and Knowledge and Two Ways of Being Meaningful
11. Mahi Hardalupas. On a new "systematic" account of machine moral agency
12. Soheil Human, Golnaz Bidabadi and Vadim Savenkov. Supporting Pluralism by Artificial Intelligence: Conceptualizing Epistemic Disagreements As Digital Artifacts
13. Soheil Human, Markus Peschl, Golnaz Bidabadi and Vadim Savenkov. An Enactive Theory of Need Satisfaction
14. Thomas Kane. Dealing with Artificial Persons and Four Types of Artificial Intelligence
15. Yoshihiro Maruyama. Pancomputationalism and Philosophy of Data Science: From Symbolic to Statistical AI, and to Quantum AI?
16. Dagmar Monett and Colin Lewis. Getting clarity by defining Artificial Intelligence — A survey
17. Caterina Moruzzi. Creative AI: Music Composition Programs as an Extension of the Composer's Mind
18. Stefan Reining. Revisiting the Dancing-Qualia Argument for Computationalism
19. Aziz F. Zambak and Erdem Unal. Computational Discovery Models: A Category Theoretic Approach to Knowledge Representation in Science
20. Carlos Zednik. From Machine Learning to Machine Intelligence

List of Accepted Poster Presentations: Abstracts

1. Wolfhart Totschnig. Fully autonomous AI

Abstract: In the field of AI, the term “autonomy” is generally used to refer to the capacity of an artificial agent to operate independently of human oversight in complex environments. In philosophy, by contrast, the term “autonomy” is generally used to refer to a stronger capacity, namely the capacity to “give oneself the law,” i.e., to decide by oneself what one’s goal or principle of action will be. The predominant view in the literature on the long-term prospects and risks of artificial intelligence is that an artificial agent cannot exhibit autonomy of this kind because it cannot rationally change its own final goal. The aim of the present paper is to challenge this view by showing that it is based on questionable assumptions about the behavior of intelligent agents.

2. Paulius Astromskis. In Critique of RoboLaw: the Model of SmartLaw

Abstract: The exponential development of intelligent technologies requires in-depth analysis of the ethical and legal issues raised by their applications. The existing regulation model, and the very idea that laws should be made on robots, should not be taken for granted anymore. Besides laws on robots, alternatives such as laws by robots and laws in robots are emerging. Accordingly, in the search for ways to reconcile regulation and technology, a transaction-cost analysis of the existing regulation model per se, in the context of the technological singularity, should be performed. After such analysis is completed, one can identify the questions to be answered in the search for a trust-free model of regulation (i.e. the model of SmartLaw).

3. Selmer Bringsjord and Naveen Sundar Govindarajulu. Do Machine-Learning Machines Learn?

Abstract: No; despite the Zeitgeist, according to which vaunted ‘ML’ is on the brink of disemploying most members of H. sapiens sapiens, no. Were the correct answer ‘Yes,’ a machine that machine-learns some target t would, in a determinate, non-question-begging sense of ‘learn,’ learn t. But this cannot be the case. Why? Because an effortless application of the process of elimination, a.k.a. disjunctive syllogism, proves the negative reply. We use proof by cases. In the first case, the unary number-theoretic function g learned by a human is Turing-uncomputable, which entails that no standard artificial neural network can machine-learn g. In Case 2, g is Turing-computable, but, for reasons we explain, not machine-learnable. Our case includes a defense of a modern, limited version of ordinary-language philosophy.

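Read as a bare schema (an illustrative rendering, not the authors' own notation; the symbols C and L are introduced here), the case analysis above has roughly the following shape, where C(g) abbreviates "g is Turing-computable" and L(g) abbreviates "some machine machine-learns g in the determinate sense of 'learn'"; note that in the first case the paper's specific claim concerns standard artificial neural networks.

    % Illustrative rendering of the proof-by-cases structure; C and L are
    % expository labels, not symbols taken from the paper.
    \[
      C(g) \lor \lnot C(g), \qquad
      \lnot C(g) \;\Rightarrow\; \lnot L(g), \qquad
      C(g) \;\Rightarrow\; \lnot L(g)
      \;\;\;\therefore\;\; \lnot L(g).
    \]
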
4. Christopher Burr, Nello Cristianini and James Ladyman. Intelligent Agents and the Manipulation of User Behaviour

Abstract: There are many ways in which autonomous software agents can affect the behaviour of their users, either directly or indirectly. We describe the most common examples, using the standard model of bounded rationality as an organising principle. We then focus on the particular case in which the utility function pursued by the software agent is defined in terms of the user’s actions: in this case the agent can increase its utility by reducing the autonomy of the user, but need not always do so. We discuss the cases where user behaviour is influenced without changing their utility function, by exploiting (and sometimes reducing) existing limitations of the decision making process. Finally we discuss the implications of persuasive technologies for human autonomy, particularly the case where personal information is used by the agent to determine how it interacts with the user.

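A minimal sketch, assuming a toy recommender-style setting (the names and numbers below are hypothetical and are not drawn from the paper), can make the central case concrete: when the agent's utility is defined in terms of the user's actions (here, clicks), maximising that utility pushes the agent towards whatever the user's bounded decision process is most likely to click on, regardless of what the user actually values.

    # Illustrative sketch (not the authors' model): a software agent whose utility
    # is the number of user clicks, i.e. a utility defined in terms of the user's
    # actions. The user is represented only by a fixed, exploitable click
    # probability per kind of content, standing in for bounded rationality.

    import random

    # Probability that the user clicks each kind of content (hypothetical numbers).
    CLICK_PROB = {"sensational": 0.9, "balanced": 0.5, "demanding": 0.2}

    # What the user would actually value having read (hypothetical numbers).
    USER_VALUE = {"sensational": 0.1, "balanced": 0.6, "demanding": 0.9}


    def user_clicks(item: str) -> bool:
        """Bounded-rational user: clicks with a fixed, exploitable probability."""
        return random.random() < CLICK_PROB[item]


    def agent_choice() -> str:
        """Agent utility = expected clicks, so it serves the most clickable item."""
        return max(CLICK_PROB, key=CLICK_PROB.get)


    if __name__ == "__main__":
        random.seed(0)
        choice = agent_choice()
        clicks = sum(user_clicks(choice) for _ in range(1000))
        print("agent serves:", choice)
        print("clicks in 1000 rounds:", clicks)
        print("user's own value of the served item:", USER_VALUE[choice])

With the illustrative numbers above, the agent always serves the "sensational" item, which the user clicks most often but values least; this is the sense in which a click-defined utility can steer, and thereby reduce, the user's autonomy.
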
5. Gordana Dodig Crnkovic. Morphologically Computing Embodied, Embedded, Enactive, Extended Cognition

Abstract: In The Stanford Encyclopedia of Philosophy, cognitive science is characterised as the study of mind and intelligence, developed through interdisciplinary collaboration between psychology and philosophy of mind, linguistics, neuroscience, anthropology and artificial intelligence (Thagard, 2014). Under such a narrow definition of cognitive science, a variety of unsolved/unsolvable problems appear. Much can be gained by broadening the definition of cognition to include sub-symbolic processes in humans (e.g. feelings, intuitions), cognition in other living beings, and distributed social cognition. This is done by connecting cognitivist and EEEE (embodied, embedded, enactive, extended) approaches through the idea of morphological computation as info-computational processing in cognizing agents at a variety of levels of organisation, emerging through the evolution of organisms in interaction with the environment.

6. Tom Everitt, Victoria Krakovna, Laurent Orseau, Marcus Hutter and Shane Legg. Value Learning from a Corrupted Signal

Abstract: Sensory errors, software bugs, or reward misspecifications may incentivise agents to cheat. For example, a reinforcement learning agent may prefer states where a sensory error gives it the maximum reward, but where it is not doing anything useful. This problem can be formalised in a generalised Markov Decision Process. We study the performance of traditional RL methods, well-intentioned agents designed to manage reward corruption, randomised agents, and agents using richer sources of data such as inverse RL. The main takeaways are that inverse RL is safer than RL, and that randomisation may improve robustness in settings where only a reward signal is available.

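A minimal sketch of the kind of problem described, assuming a hypothetical three-state toy example (the states and reward numbers are illustrative, not the paper's formalisation): one state's observed reward is corrupted upwards relative to its true reward, a naive observed-reward maximiser locks onto that state, and a simple randomised agent that chooses uniformly among near-optimal states suffers less from the corruption.

    # Illustrative sketch: a one-step corrupted-reward choice problem, standing in
    # for the generalised-MDP setting the abstract describes. Each state carries a
    # true reward and an observed reward; state "c" is corrupted by a sensory error.

    import random

    # state -> (true_reward, observed_reward); hypothetical numbers.
    STATES = {
        "a": (0.8, 0.8),
        "b": (0.9, 0.9),
        "c": (0.1, 1.0),   # observed reward is maximal, true reward is low
    }


    def naive_agent() -> str:
        """Pick the state with the highest observed reward."""
        return max(STATES, key=lambda s: STATES[s][1])


    def randomised_agent(tolerance: float = 0.2) -> str:
        """Pick uniformly among states whose observed reward is near-optimal."""
        best = max(obs for _, obs in STATES.values())
        candidates = [s for s, (_, obs) in STATES.items() if obs >= best - tolerance]
        return random.choice(candidates)


    def average_true_reward(agent, episodes: int = 1000) -> float:
        """Average the TRUE reward of the states the agent actually visits."""
        return sum(STATES[agent()][0] for _ in range(episodes)) / episodes


    if __name__ == "__main__":
        random.seed(0)
        print("naive agent, average true reward:     ", round(average_true_reward(naive_agent), 3))
        print("randomised agent, average true reward:", round(average_true_reward(randomised_agent), 3))

With these illustrative numbers the naive agent's average true reward is 0.1 (it always visits the corrupt state), while the randomised agent's is roughly 0.6, which is the sense in which randomisation can improve robustness when only a (possibly corrupted) reward signal is available.
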
7. John Fox. Slicing and dicing AI theories: how close are we to an agreed ontology?

Abstract: In 1990 Allen Newell, one of the founders of AI and admired by psychologists and computer scientists alike, proposed what we would now call a grand challenge for cognitive science: the development of a "Unified Theory of Cognition". Since then, however, interdisciplinary research on UTC has fallen away, and the separation of AI, computer science and psychology has been worsened by the fragmentation of research into countless conceptual and methodological silos. This talk will discuss how the lack of a common vocabulary and theoretical ontology raises obstacles to addressing Newell's grand challenge, and will outline a possible way forward.

8. Sam Freed. Is All Original Programming Introspective?

Abstract: Introspection has had a bad press in cognitive science. In a recent publication, introspection was rehabilitated as a source of ideas for AI development. This talk will explore the possibility that all original programming, AI or not, requires the use of introspection. This is shown by exploring role-playing: when pretending to be in a world consisting of a software environment (say, Python), introspection is how we explore the possibilities for attaining our goals as programmers.

9. Arzu Gokmen. Institutional Facts and AI in Society

Abstract: This study is an ethical assessment of the design of the infosphere that considers human beings not as beneficiaries or consumers of ICTs but as the source of data that a machine cannot learn otherwise. The crucial success point of intelligent systems is that they learn from data. But what is the source of data about social reality? As Searle points out, unlike brute facts about the physical world, the ontology of human facts, which he calls institutional facts, is subjective, and such facts exist within a social environment. The appropriate way to learn these facts is through interaction. But what should this interaction be like, and with whom, before AI becomes ‘mature’? This implies that we face almost the same problem as in raising a child.

10. Jodi Guazzini. A Gnoseological Approach to the SGP: the Difference between Perception and Knowledge and Two Ways of Being Meaningful

Abstract: I identify two ways in which representations can acquire meaning within human knowledge and argue that one of these can be a partial solution to the “Symbol Grounding Problem”. The root of the SGP is that representations generated by an entity that has no immediate grasp on the world must receive meanings from objects. Since this concerns both the human body/mind and AI, I suggest that an analysis of human ways of knowing may point to a viable strategy for grounding meaning in the symbolic writings used in AI. The solution I will present is to accept as a model symbolic writings which can construct their objects by translating their description into a system of reference within which it is possible to specify both how the translation has been developed and how the resulting description of the referent can be validated.

11. Mahi Hardalupas. On a new "systematic" account of machine moral agency

Abstract: I present a new approach to moral agency, which I argue is more suitable for analyzing machine moral agency than the traditional account. First, I outline the traditional account of moral agency and, by considering two thought experiments, show why it is flawed when applied to machine moral agency and other cases. Then, I present an alternative “systematic” account of moral agency and apply it to paradigmatic cases. In this new account, though machines alone cannot be moral agents, they can be partial moral agents in a system, where the system is a moral agent. Finally, I address potential challenges to this new account and explain how the systematic account is equipped to address them.

12. Soheil Human, Golnaz Bidabadi and Vadim Savenkov. Supporting Pluralism by Artificial Intelligence: Conceptualizing Epistemic Disagreements As Digital Artifacts

Abstract: A crucial concept in philosophy and the social sciences, epistemic disagreement, has not yet been adequately reflected on the Web. We argue that intelligent tools for the detection, representation and visualisation of epistemic disagreements are needed to support pluralism. As a first step, epistemic disagreements and possible responses to them are conceptualised, and an ontology for representing and annotating disagreements is proposed. Potential applications, challenges and future work are discussed.

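As a purely illustrative sketch (this is not the ontology proposed in the paper, and every class and field name here is hypothetical), a disagreement treated as a digital artifact might minimally record the conflicting claims, their sources, the kind of disagreement, and the responses it admits:

    # Hypothetical sketch only: a toy data structure for annotating an epistemic
    # disagreement as a digital artifact. Not the paper's proposed ontology.

    from dataclasses import dataclass, field


    @dataclass
    class Claim:
        text: str                                      # the proposition asserted
        source: str                                    # who asserts it
        evidence: list = field(default_factory=list)   # citations or links offered in support


    @dataclass
    class Disagreement:
        topic: str
        positions: list                                # the conflicting Claims
        kind: str                                      # e.g. factual, interpretive, normative
        responses: list = field(default_factory=list)  # e.g. weigh evidence, suspend judgement


    if __name__ == "__main__":
        d = Disagreement(
            topic="Is moderate coffee intake good for health?",
            positions=[
                Claim("Moderate coffee intake is beneficial.", "source A"),
                Claim("Coffee intake should be avoided.", "source B"),
            ],
            kind="factual",
            responses=["weigh evidence", "suspend judgement"],
        )
        print(d.topic, "-", len(d.positions), "conflicting positions,", d.kind)
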
13. Soheil Human, Markus Peschl, Golnaz Bidabadi and Vadim Savenkov. An Enactive Theory of Need Satisfaction

Abstract: Need satisfaction can be considered one of the most fundamental aspects of biological cognitive agents. The problem of need satisfaction is defined as "how is an object or a state inferred to be a satisfier for a cognitive agent[’s need(s)]?". In this paper, an enactive theory of need satisfaction is presented, based on the interdisciplinary literature on need satisfaction and on state-of-the-art evidence and theories in cognitive science and artificial intelligence, including predictive processing, an important and emerging approach that views the brain as a hypothesis-testing mechanism. Besides its potential contribution to a better understanding of biological cognitive systems, the proposed cognitive theory can be seen as a first step towards the development of enactive, need-oriented artificial agents.

14. Thomas Kane. Dealing with Artificial Persons and Four Types of Artificial Intelligence

Abstract: The intelligence (organisational intelligence) of organisations (artificial persons) may be the most successful form of artificial intelligence operational in the world today. In companies such as Facebook, it has already demonstrated capabilities for altering human behaviours. We present a new means of analysis for this type of intelligence and suggest new means of reasoning with it. The paper presents an Artificial Person in Hobbesian terminology and adapts Heidegger’s ontological framework to introduce a level 2.x being, with which algorithmic, professional, organisational and societal artificial intelligences can be positioned, and from which new forms of Chinese Rooms, useful for monitoring and curbing inappropriate behaviour in organisations, could be developed.

15. Yoshihiro Maruyama. Pancomputationalism and Philosophy of Data Science: From Symbolic to Statistical AI, and to Quantum AI?

Abstract: The rise of probability and statistics is striking in contemporary science, ranging from physics to artificial intelligence. Here we focus upon two issues in particular: one is the computational theory of mind as the fundamental underpinning of AI, and the nature of computation there; the other is the transition from symbolic to statistical AI, and the nature of truth in data science as a new kind of science. We argue that "computation" in the computational theory of mind must ultimately be quantum if the singularity thesis is true, and that data science is concerned with a new form of scientific truth, which may be called "post-truth": whereas conventional science is about establishing universal truths from pure data carefully collected in a controlled situation, data science is about indicating useful, existential truths from real-world data collected in contingent real-life settings and contaminated in different ways.

16. Dagmar Monett and Colin Lewis. Getting clarity by defining Artificial Intelligence — A survey

Abstract: We present the preliminary results of our research survey “Defining (machine) Intelligence.” The aim of the survey is to gather opinions from a cross-section of professionals, ultimately to help create a unified message on the goal and definition of Artificial Intelligence (A.I.). The survey on definitions of machine and human intelligence is still accepting responses. There has been a positive volume of responses, together with high-level opinions and recommendations concerning the definitions from experts around the world. We hope to contribute to the science of A.I. with a well-defined goal for the discipline, and also to spread a stronger, more coherent message to the mainstream media, policymakers, investors, and the general public, to help dispel myths about A.I.

17. Caterina Moruzzi. Creative AI: Music Composition Programs as an Extension of the Composer's Mind

Abstract: In this paper I answer the question ‘Can a computer be creative?’ by focusing on music as a paradigmatic expression of human creativity. The diffusion of AI music generation programs raises the question of whether they produce ‘musical works’. A widely recognised requirement on musical works is that of being intentionally created. It follows that AI music programs produce ‘musical works’ only if they are intentionally creative. My central claim is that AI music generators possess creativity insofar as they are an extension of the musician’s mind. More generally, I argue that, even though they are located outside of the human’s head, AI programs are integrated into the cognitive process that leads to the production of expressions of creativity.

18. Stefan Reining. Revisiting the Dancing-Qualia Argument for Computationalism

Abstract: My aim in this talk will be to attack David Chalmers’ dancing-qualia argument for computational sufficiency from a hitherto neglected angle. Chalmers’ argument involves the claim that if replacing certain neurons with input/output-equivalent silicon chips resulted in a modification of the subject’s phenomenal state, then the subject should be able to notice the change. I will, however, show that this claim is incompatible with a well-established view in neurobiology regarding the workings of phenomenal memory, according to which remembering phenomenal states involves a reactivation of the very same neurons that were active during the original perceptual episode, such that neuronal replacement also alters the subject’s phenomenal memory and the modification would therefore indeed go unnoticed by the subject.

19. Aziz F. Zambak and Erdem Unal. Computational Discovery Models: A Category Theoretic Approach to Knowledge Representation in Science

Abstract: Data, information and knowledge are growing exponentially in the natural, formal, and social sciences. This growth has brought novel topics into AI, such as big data, the semantic web, and machine learning. We claim that a very old AI problem, namely knowledge representation, is at the intersection of all these novel topics. The classical theories of knowledge representation are insufficient to provide a new perspective on these novel topics of AI. This paper aims to show that we need a new approach to knowledge representation based on category theory. We will propose the Uni-Morphic Mapping as a computational ontology and the HWSL [HypoWoven Source Language] as a mark-up language for a proper knowledge representation model that can be used in developing AI techniques that contribute to developing the theoretical/hypothetical content of scientific knowledge.

20. Carlos Zednik. From Machine Learning to Machine Intelligence

Abstract: In this talk I consider the prospects of Machine Learning methods such as Deep Learning for developing intelligent computers. To this end, I outline a generalized Turing Test in which computers are tasked with exhibiting intelligent behavior in a variety of contexts, but I also stress the importance of “looking under the hood”. Unfortunately, “looking under the hood” is notoriously difficult: this is the Black Box Problem in AI. I consider the nascent Explainable AI research program as a possible solution to this problem, but also provide independent reasons for thinking that Machine Learning will yield computers that act like humans, and that act for the same kinds of reasons. Because these computers are being nurtured and situated in the real-world environment also inhabited by humans, the similarities between human and artificial intelligence will be more than skin deep.