11 July: Gestures and Dialog, July 17-19, Lloyd Building Rm 107
Dublin Computational Linguistics Research Seminar: Index of July 2006
GESTURES and DIALOG from BIELEFELD and STRALSUND
Researchers at the University of Bielefeld, a Socrates Partner of
Trinity College, and from Stralsund will be presenting recent research
as detailed below, July 17-19. All are invited to participate. The
visit is sponsored by Socrates, the Centre for Computing and Language
Studies, and the School of Computer Science and Statistics of Trinity
College Dublin.
Monday, July 17, 12pm-1pm
Who models conversational gestures - why not linguists?
Dafydd Gibbon
Tuesday, July 18, 12pm-1pm
Pointing and Reference. A Doubtful Alliance
Hannes Rieser
Tuesday, July 18, 3pm-4pm
Finding information the way you want, or: How to find the meaning of a
gesture?
Thorsten Trippel
Wednesday, July 19, 11am-1pm
Grammar-Based Metonymy Resolution
Josef Meyer-Fujara[1], Hannes Rieser[2]
Location: Lloyd Building Room 107
(very near the Pearse St. Dart Station)
Abstracts below.
Bielefeld-Dublin SOCRATES/ERASMUS workshop, Dublin, July 2006
Who models conversational gestures - why not linguists?
Dafydd Gibbon
Despite early work by American structuralists such as Birdwhistell,
Hockett, and others, linguists and scholars in related fields such as
the philosophy of language have mainly just paid lip-service to the
role of gesture in face-to-face communication, as a source of
"markers" for speech acts and turn-taking. Conventionalised sign
languages in acoustically hostile situations (the deaf; factory
environments; the stock exchange) have received much more systematic
attention than conversational gesture. In contrast, scholars in fields
such as the psychology and sociology of language, applied behaviour
theory ("body language") and of course in human language technologies
have given the domain much more thought. There are notable exceptions,
particularly in centres such as Bielefeld, Dublin and Gothenburg, and
publications in this area are increasing in number.
My contention is that it is a shame to waste much of a century and a
half of work in philology and linguistics when it comes to the
description, modelling and explanation of conversational gesture. I
contend that it is high time to apply sophisticated and relatively
consensual linguistic models of language structure and function to
this domain, ranging from phonetics and phonology (particularly
autosegmental and articulatory phonology approaches) through the other
standard levels of linguistic description to linguistic semantics and
pragmatics. The talk demonstrates aspects of applying linguistic
procedure to the descriptive and computational modelling of gesture,
as developed in Bielefeld research projects, particularly in the
ModeLex project on conversational gesture in corpus-based
computational lexicography.
Pointing and Reference. A Doubtful Alliance
Hannes Rieser
SFB 360
"Situated Artificial Communicators", Bielefeld University
Hannes.Rieser@Uni-Bielefeld.de
In my talk the focus is on the denotational function of demonstrations
affiliated with referring expressions and, to a lesser extent, on
graspings. The referring expressions are mostly
definite descriptions like "the yellow bolt" or simple demonstratives
accompanied by a pointing gesture. However, both pointings and
graspings matter because the empirical data on which the talk is based
deal with object identification games, i.e. task-oriented dialogues of
a special sort, where a description giver singles out an object with
some description plus a demonstration/pointing and the
object-identifier tries to identify the object. He frequently does so
using a clarification question or a check-back such as "This one?"
grasping at the same time a particular object, bolt, nut, disc or
whatever. This shows that grasping and reference are linked just as
pointings and denotation are. In addition, object identification games
are also prototypical examples of situated communication, the
investigation of which has been one of the main objectives of research
at Bielefeld University.
So much for setting the general context. At the beginning I point out
that the research reported is linked up with the concept of Embodied
Communication, especially the VR-agent Max developed in Ipke
Wachsmuth's research group. Next, a short overview of Cognitive
Science approaches to gesture is provided. There is a brief encounter
with Peirce, Quine, Wittgenstein, Davidson and Kaplan concerning
pointing and reference. Two sorts of empirical data generated from
object identification games are presented; phenomenological data and
analytical tracking data. Based on annotations and ratings, different
functions of demonstration such as pointing to objects, regions and
directions are distinguished with respect to the phenomenological
data. Tracking
data differ from phenomenological data in various respects. Above all,
they seem to forbid a simple equation pointing = referring. Concerning
the phenomenological data, a multi-modal interface integrating the
information coming from the gesture channel and the verbal channel is
laid out, where special emphasis is put on the underspecification
properties of multi-modal content. Drawing on recent SDRT (Segmented
Discourse Representation Theory), it is
shown how the information from the multi-modal interface can in
principle be integrated into dialogue theory. Finally, an appraisal
of the interdisciplinary work combining informatics and linguistics
is given.
Finding information the way you want, or: How to find the meaning of a
gesture?
Thorsten Trippel
Lexicons are often seen as a primary information source to understand
unknown words and concepts. However, traditional dictionaries and
lexicon databases, usually based on the concept of a word-form having
one and only one meaning, have not succeeded in providing access to
the unknown from the known; sometimes they instead require starting
from the unknown to access the known. Going to a country where one
does not understand the language,
how should one find the translation if one does not know how to spell
the word? If someone describes in a lexicon the meaning of a gesture,
how can this meaning be accessed? If someone needs a word of a
particular word-class that has a distinct meaning, how can this be
found? Such unknown bits of information contained in a lexicon can
hardly be found with either semasiological or onomasiological
dictionaries.
One way of making the information of a lexicon more generally
accessible is to use all the information bits included in the lexicon
and make them searchable. This search does not rely on full-text
search but models the relations between individual atoms of
information, taking into account that there is ambiguity in many
aspects. This ambiguity includes words having several meanings,
word-classes containing thousands of words, and transcriptions that
can correspond to different word-forms. The resulting lexicon graph
can be traversed,
and traditional lexicons can be seen as a selection of a sub-graph of
this lexicon.
The lexicon graph hence allows one to start with a (representation of
a) gesture and find its possible meanings, in the same way as one
finds the definition(s) of a lemma, the translation of a spoken
language word, etc. The lexicon graph allows
completely different ways of accessing lexicon data, starting from
what a searcher knows and providing information previously unknown to
him or her.
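As an illustration of this idea (a sketch only, not the author's
implementation), a lexicon graph can be modelled as labelled relations
stored in both directions, so that a search may start from any known
atom, including a gesture representation. All entry names and relation
labels below are invented for the example:

```python
from collections import defaultdict

class LexiconGraph:
    """Toy lexicon graph: nodes are atoms of information (word-forms,
    meanings, word-classes, gesture representations); edges are
    labelled relations, stored bidirectionally so lookup works from
    either end."""

    def __init__(self):
        # node -> relation label -> set of related nodes
        self.edges = defaultdict(lambda: defaultdict(set))

    def relate(self, source, label, target):
        # store both directions, so the "unknown" side is reachable
        # from whichever side the searcher happens to know
        self.edges[source][label].add(target)
        self.edges[target]["inverse:" + label].add(source)

    def neighbours(self, node, label):
        return sorted(self.edges[node].get(label, set()))

# Hypothetical entries: a gesture is linked to meanings exactly as a
# lemma would be, so the same traversal answers both kinds of query.
g = LexiconGraph()
g.relate("lemma:bolt", "has-meaning", "meaning:threaded fastener")
g.relate("gesture:thumbs-up", "has-meaning", "meaning:approval")
g.relate("gesture:thumbs-up", "has-meaning", "meaning:hitchhiking request")
g.relate("meaning:approval", "in-word-class", "class:interjection")

# Start from the gesture and find its possible meanings:
print(g.neighbours("gesture:thumbs-up", "has-meaning"))
# ['meaning:approval', 'meaning:hitchhiking request']

# Start from a meaning and go back to the forms that express it:
print(g.neighbours("meaning:approval", "inverse:has-meaning"))
# ['gesture:thumbs-up']
```

A traditional semasiological dictionary then corresponds to the
sub-graph reachable via one fixed relation (word-form to meaning),
while the full graph supports traversal from any atom.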
Grammar-Based Metonymy Resolution
Josef Meyer-Fujara[1], Hannes Rieser[2]
[1]Fachhochschule Stralsund, Zur Schwedenschanze 15, D-18435 Stralsund
josef.meyer-fujara@fh-stralsund.de
[2]Universität Bielefeld, Postfach 10
01 31, D-33501 Bielefeld rieser@lili.uni-bielefeld.de
Our work originated in an analysis of SFB corpora concerning
construction dialogues where depiction metonymy was a striking
phenomenon, such as in Wir bauen jetzt ein Flugzeug (We are going to
build an airplane), where it is not the intention to build a
real-world airplane but an object depicting an airplane. Examples of
metonymy based on different relations are, e.g., Germany (Pointing to
label "Angela Merkel") votes against the US (pointing to label "George
W. Bush") and This is now 8000 EUR (pointing to a Jaeger LeCoultre
wrist watch). We give a formal treatment of metonymy resolution,
based on a GB-based grammar and an intensional predicate calculus
called lf. Interpretation is with respect to a specific model and can
take into account both the literal and the metonymical meaning. The
model deals with several aspects of context, among them indicated and
available objects as well as modal bases. Transition from the literal
to the metonymical interpretation is considered to be pragmatic in
nature and treated as a Gricean default implicature. It is
triggered by observed violation of conversation maxims such as
Quality or Relation and described by an inductively defined operator
that takes parse trees annotated with lf-formulae to other such trees
in strict observance of compositionality. The approach shown is valid
irrespective of the metonymical relation involved.
We give examples of violation of the quality maxim as well as of the
relevance maxim. We also show how, in the treatment of NPs,
different "scopes" of metonymical reading can be dealt with such as in
The arrogant schnitzel complains ("narrow scope") versus The half-done
schnitzel complains ("wide scope"). We finally explain the treatment
of metonymical meaning of adjectives and the intertwining with
anaphora resolution. Our work may also be used for updating
information states in DRT-accounts. Other approaches put forward
recently address conventionalised as well as "weird" metonymies and
consider their resolution e.g. as instances of underspecification
phenomena or as (default or "metonymical") inference triggered by
cognitive mechanisms still to be clarified. Problems to be addressed
in future work comprise application of particular default theories and
the interaction with anaphora resolution and bridging inferences. The
ultimate locus for metonymy theories seems to be constraint-based
syntax interfacing with dynamic semantics operating on
underspecification theory.
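The mechanism of an inductively defined operator over annotated parse
trees can be sketched in miniature (this toy stands in for the paper's
GB-based grammar and lf calculus; node shapes, sort names, and the
"depicting" shift are all invented for the illustration). A leaf whose
literal sort is unavailable in the context (a Quality-style maxim
violation) is shifted to the depiction reading, and each parent
meaning is recomputed from its possibly rewritten children, keeping
the rewrite compositional:

```python
# Hypothetical node shape: (category, predicate, sort, children).
def resolve(node, available_sorts):
    """Bottom-up rewrite of an annotated parse tree: shift sortally
    unavailable leaves to a metonymical (depiction) reading, then
    recompute every parent meaning from its children only."""
    cat, pred, sort, children = node
    if not children:
        if sort not in available_sorts:
            # default implicature: the literal object is unavailable,
            # so shift to "object depicting a <pred>"
            return (cat, f"depicting({pred})", "model", [])
        return node
    kids = [resolve(c, available_sorts) for c in children]
    # compositionality: the parent's meaning is built from the
    # (possibly rewritten) children, nothing else is touched
    new_pred = f"{cat}[{', '.join(k[1] for k in kids)}]"
    return (cat, new_pred, sort, kids)

# "We are going to build an airplane" in a toy-construction setting:
# no real vehicles are among the indicated objects, only model parts.
tree = ("VP", None, "event",
        [("V", "build", "action", []),
         ("NP", None, "thing",
          [("N", "airplane", "vehicle", [])])])

resolved = resolve(tree, {"action", "event", "thing", "model"})
print(resolved[1])  # VP[build, NP[depicting(airplane)]]
```

Only the offending leaf is replaced; because the rewrite is defined
inductively over the tree, it applies irrespective of which
metonymical relation triggered the shift.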