
    Andrea Tacchetti

    The human brain can rapidly parse a constant stream of visual input. The majority of visual neuroscience studies, however, focus on responses to static, still-frame images. Here we use magnetoencephalography (MEG) decoding and a computational model to study invariant action recognition in videos. We created a well-controlled, naturalistic dataset to study action recognition across different views and actors. We find that, like objects, actions can also be read out from MEG data in under 200 ms (after the subject has viewed only 5 frames of video). Actions can also be decoded across actor and viewpoint, showing that this early representation is invariant. Finally, we developed an extension of the HMAX model (traditionally used to perform size- and position-invariant object recognition in images), inspired by Hubel and Wiesel's findings of simple and complex cells in primary visual cortex as well as a recent computational theory of feedforward invariant systems, to reco...
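    To make the decoding analysis concrete, here is a minimal sketch of a time-resolved MEG decoding pipeline of the kind described above, assuming the epochs are already available as a trials x sensors x time NumPy array. The function name decode_over_time, the array shapes, and the synthetic data are illustrative placeholders, not the analysis code from the study.

```python
# Minimal sketch of time-resolved MEG decoding (illustrative, not the study's code):
# train a linear classifier on the sensor pattern at each time point and report
# cross-validated accuracy, to see how early stimulus identity becomes readable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(epochs, labels, cv=5):
    """epochs: (n_trials, n_sensors, n_times) array; labels: (n_trials,) class ids."""
    n_trials, n_sensors, n_times = epochs.shape
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        # Cross-validated accuracy using only the sensor pattern at time point t.
        accuracy[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
    return accuracy

# Synthetic stand-in for real recordings: 60 trials, 306 sensors, 120 time points.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 306, 120))
y = np.repeat(np.arange(5), 12)          # five action classes, 12 trials each
print("peak decoding accuracy:", decode_over_time(X, y).max())
```

    Decoding across actor or viewpoint would follow the same pattern, except that the classifier is trained on trials from one condition and tested on trials from the held-out condition instead of using standard cross-validation.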
    In this paper, the important practical issues of tuning and implementing an Extended Kalman Filter for a sensorless hybrid stepper motor drive working with long cables are considered. A method to tune the filter using one set of data acquired from the real system is proposed. From this dataset, the system parameters and the Extended Kalman Filter's covariance matrices are estimated. The hardware and software implementation of the Extended Kalman Filter in the drive is also described, with specific emphasis on the code optimisation steps that are necessary to execute the filter at the desired sampling rate. Moreover, the developed drive's data acquisition capabilities and the experimental testbench used in the tuning and validation of the filter are discussed. Experimental results prove the effectiveness of the tuning method and implementation.
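    As a reference for readers unfamiliar with the filter itself, below is a minimal, generic Extended Kalman Filter predict/update step. It is a sketch, not the drive firmware described in the paper: the model functions f and h, their Jacobians, and the covariance matrices Q and R (the quantities the tuning procedure would estimate) are placeholders supplied by the application.

```python
# Generic Extended Kalman Filter predict/update step (a sketch, not the drive
# implementation from the paper). f and h are the nonlinear state-transition and
# measurement models, F_jac and H_jac their Jacobians, Q and R the process and
# measurement noise covariances.
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    # Predict: propagate state estimate and covariance through the model.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the new measurement z.
    H = H_jac(x_pred)
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

    In a drive running at a fixed sampling rate, this step executes once per control period, which is why the code optimisation discussed in the paper is critical for real-time operation.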
    Representations that are invariant to translation, scale and other transformations can considerably reduce the sample complexity of learning, allowing recognition of new object classes from very few examples - a hallmark of human recognition. Empirical estimates of one-dimensional projections of the distribution induced by a group of affine transformations are proven to represent a unique and invariant signature associated with an image. We show how projections yielding invariant signatures for future images can be learned automatically, and updated continuously, during unsupervised visual experience. A module performing filtering and pooling, like simple and complex cells as proposed by Hubel and Wiesel, can compute such estimates. Under this view, a pooling stage estimates a one-dimensional probability distribution. Invariance from observations through a restricted window is equivalent to a sparsity property w.r.t. a transformation, which yields templates that are a) Gabor for optimal simultaneous invariance to translation and scale or b) very specific for complex, class-dependent transformations such as rotation in depth of faces. Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts, and are invariant to complex transformations that may only be locally affine. The theory applies to several existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects which is invariant to transformations, stable, and discriminative for recognition - this representation may be learned in an unsupervised way from natural visual experience.
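    The filtering-and-pooling module can be illustrated with a small sketch. The code below projects a signal onto translated copies of a template (the simple-cell-like filtering step) and pools the resulting one-dimensional projections into a histogram, an empirical estimate of their distribution. Under the assumptions made here (1-D signals, circular shifts over the full translation group), the histogram is unchanged when the input itself is translated. Names and parameters are illustrative, not from the paper.

```python
# Sketch of a filtering-and-pooling (simple/complex cell) module: dot products
# with transformed copies of a template, pooled into a histogram that estimates
# the 1-D distribution of projections. Illustrative only, not the authors' code.
import numpy as np

def invariant_signature(image, template, shifts, n_bins=16):
    """image, template: unit-norm 1-D arrays; shifts: translations of the template."""
    projections = []
    for s in shifts:
        shifted = np.roll(template, s)        # transformed template (circular shift)
        projections.append(image @ shifted)   # simple-cell-like dot product
    # Complex-cell-like pooling: histogram of projections approximates their
    # 1-D distribution; shifting the image only permutes the projections.
    hist, _ = np.histogram(projections, bins=n_bins, range=(-1.0, 1.0))
    return hist / hist.sum()

rng = np.random.default_rng(1)
template = rng.standard_normal(64); template /= np.linalg.norm(template)
image = rng.standard_normal(64); image /= np.linalg.norm(image)
shifts = range(64)
sig = invariant_signature(image, template, shifts)
sig_shifted = invariant_signature(np.roll(image, 7), template, shifts)
print(np.allclose(sig, sig_shifted))  # True: signature is translation invariant
```

    Stacking such modules, each pooling over a restricted window, gives a hierarchical architecture of the kind the abstract describes, with invariance built up layer by layer.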