Technical Report BUCS-TR-2011-013, Computer Science Department, Boston University, May 15, 2011
Handshape is a key linguistic component of signs, and thus, handshape recognition is essential to algorithms for sign language recognition and retrieval. In this work, linguistic constraints on the relationship between start and end handshapes are leveraged to improve handshape recognition accuracy. A Bayesian network formulation is proposed for learning and exploiting these constraints, while taking into consideration inter-signer variations in the production of particular handshapes. A Variational Bayes formulation is employed for supervised learning of the model parameters. A non-rigid image alignment algorithm, which yields improved robustness to variability in handshape appearance, is proposed for computing image observation likelihoods in the model. The resulting handshape inference algorithm is evaluated using a dataset of 1500 lexical signs in American Sign Language (ASL), where each lexical sign is produced by three native ASL signers.
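As a rough illustration of how such start/end constraints can be exploited at inference time (this is not the paper's Variational Bayes model): given a co-occurrence prior P(start, end) learned from annotated signs and per-class image observation likelihoods from some appearance model, the jointly most probable handshape pair follows from Bayes' rule. The function and array names below are hypothetical.

```python
import numpy as np

def handshape_posterior(prior, lik_start, lik_end):
    """Posterior over (start, end) handshape pairs.

    prior     : (K, K) array, P(start=i, end=j) learned from annotated signs
    lik_start : (K,) array, p(start-frame image | handshape i)
    lik_end   : (K,) array, p(end-frame image | handshape j)
    """
    joint = prior * np.outer(lik_start, lik_end)  # unnormalized Bayes rule
    return joint / joint.sum()

# Usage (hypothetical inputs): picking the jointly most probable pair
# lets an ambiguous start handshape be disambiguated by the end one.
# post = handshape_posterior(prior, lik_s, lik_e)
# start_hat, end_hat = np.unravel_index(post.argmax(), post.shape)
```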
Papers by Carol Neidle
Our current method estimates the 3D hand configuration to distinguish among 77 hand configurations linguistically relevant for ASL. Constraining the problem in this way makes recognition of 3D hand configuration more tractable and provides the information specifically needed for sign recognition. Further improvements are obtained by incorporation of statistical information about linguistic dependencies among handshapes within a sign derived from an annotated corpus of almost 10,000 sign tokens.
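One plausible way to derive such dependency statistics from an annotated corpus is a smoothed co-occurrence table over the 77 configurations; the add-alpha smoothing below is an assumption, not a detail taken from the paper.

```python
import numpy as np

K = 77  # linguistically relevant ASL hand configurations

def dependency_prior(pairs, alpha=1.0):
    """Estimate P(start, end) from (start, end) handshape index pairs
    observed in annotated sign tokens, with add-alpha smoothing so
    unseen combinations keep nonzero probability."""
    counts = np.full((K, K), alpha)
    for start, end in pairs:
        counts[start, end] += 1
    return counts / counts.sum()
```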
markings—with differentiation of temporal phases (onset, core, offset, where appropriate), analysis of their characteristic properties, and extraction of corresponding features; (3) a 2-level learning framework to combine low- and high-level features of differing spatio-temporal scales. This new approach achieves significantly better tracking and recognition results than our previous methods.
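The sketch below illustrates only the general idea of pairing low-level, frame-scale features with higher-level statistics pooled over a longer temporal window; the actual 2-level framework is a learned model, and the window size and pooling functions here are assumptions.

```python
import numpy as np

def two_level_features(frames, win=15):
    """Augment each frame's low-level feature vector (frames: (T, D))
    with mean and range statistics pooled over a centered temporal
    window, exposing two spatio-temporal scales to the learner."""
    T, D = frames.shape
    out = np.zeros((T, 3 * D))
    for t in range(T):
        lo, hi = max(0, t - win // 2), min(T, t + win // 2 + 1)
        w = frames[lo:hi]
        out[t] = np.concatenate([frames[t], w.mean(axis=0),
                                 w.max(axis=0) - w.min(axis=0)])
    return out
```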
computer of essential linguistic information conveyed through facial expressions and head movements. We have tracked head position and facial deformations, and used computational learning to discern specific grammatical markings. Our ability to detect, identify, and temporally localize the occurrence of such markings in ASL videos has recently been improved by incorporation of (1) new techniques for deformable model-based 3D tracking of head position and facial expressions, which provide significantly better tracking accuracy and recover quickly from temporary loss of track due to occlusion; and (2) a computational learning approach incorporating 2-level Conditional Random Fields (CRFs), suited to the multi-scale spatio-temporal characteristics of the data, which analyzes not only low-level appearance characteristics, but also the patterns that enable identification of significant gestural components, such as periodic head movements and raised or lowered eyebrows. Here we summarize our linguistically motivated computational approach and the results for detection and recognition of nonmanual grammatical markings; demonstrate our data visualizations and discuss their relevance for linguistic research; and describe work underway to enable such visualizations to be produced over large corpora and shared publicly on the Web.
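For readers unfamiliar with CRF-based labeling: once per-frame scores and label-transition scores are in hand, the most likely sequence of marker labels is recovered by Viterbi decoding. The sketch below shows a plain linear-chain decode, not the 2-level CRF used in this work.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely label sequence for a linear-chain model.

    emissions   : (T, S) per-frame label scores (log-potentials)
    transitions : (S, S) scores for moving from label p to label c
    """
    T, S = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions   # cand[p, c]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emissions[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]  # one label index per frame
```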
for generating animations of novel sentences of American Sign Language (ASL). Drawing from a collection of recordings that have been categorized into various types of non-manual expressions (NMEs), we define a method for selecting an exemplar recording of a given type using a centroid-based selection procedure, with multivariate dynamic time warping (DTW) as the distance function. Through intra- and inter-signer methods of evaluation, we demonstrate the efficacy of this technique, and we note the potential usefulness of the DTW visualizations generated in this study for linguistic researchers collecting and analyzing sign language corpora.
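Reading "centroid-based selection" as choosing the medoid (an assumption), the procedure reduces to computing pairwise multivariate DTW distances and picking the recording with the smallest total distance to the rest:

```python
import numpy as np

def dtw(a, b):
    """Multivariate DTW distance between sequences a: (Ta, D), b: (Tb, D)."""
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

def select_exemplar(recordings):
    """Return the index of the recording minimizing total DTW
    distance to all other recordings of the same NME type."""
    n = len(recordings)
    totals = [sum(dtw(recordings[i], recordings[j])
                  for j in range(n) if j != i) for i in range(n)]
    return int(np.argmin(totals))
```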
identification. In order to test the discriminative potential of the hand motion analysis, we performed sign recognition based exclusively on hand trajectories while holding the handshape constant. To facilitate this evaluation, we captured a collection of videos involving signs with a constant handshape produced by multiple subjects; and we automatically annotated the 3D motion trajectories. 3D hand locations are normalized in accordance with invariant properties of ASL movements. We trained time-series learning-based models for different signs of constant handshape in our dataset using the normalized 3D motion trajectories. Results show significant computer-based sign recognition accuracy across subjects and across a diverse set of signs. Our framework demonstrates the discriminative power and importance of 3D hand motion trajectories for sign recognition, given known handshapes.
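The paper normalizes 3D hand locations using invariant properties of ASL movements; as a generic stand-in for that step (the specific normalization below is an assumption), one can remove starting position and overall scale before fitting or comparing trajectory models.

```python
import numpy as np

def normalize_trajectory(traj):
    """Normalize a 3D hand trajectory (T, 3): translate so it starts
    at the origin and rescale so its maximum extent is 1, removing
    position and body-size effects while preserving movement shape."""
    traj = np.asarray(traj, dtype=float)
    traj = traj - traj[0]
    extent = np.linalg.norm(traj, axis=1).max()
    return traj / extent if extent > 0 else traj
```

Normalized trajectories can then be compared with, e.g., a DTW distance like the one sketched above, or used to train per-sign time-series models.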
Signed languages provide illuminating evidence about functional projections of a kind unavailable in the study of spoken languages. Along with manual signing, crucial information is expressed by specific movements of the face and upper body. The authors argue that such nonmanual markings are often direct expressions of abstract syntactic features. The distribution and intensity of these markings provide information about the location of functional heads and the boundaries of functional projections. The authors show how evidence from ASL is useful for evaluating a number of recent theoretical proposals on, among other things, the status of syntactic agreement projections and constraints on phrase structure and the directionality of movement.