Kobayashi et al., 2022 - Google Patents
Motion illusion-like patterns extracted from photo and art images using predictive deep neural networks
- Document ID
- 279877162529068654
- Author
- Kobayashi T
- Kitaoka A
- Kosaka M
- Tanaka K
- Watanabe E
- Publication year
- 2022
- Publication venue
- Scientific Reports
Snippet
In our previous study, we successfully reproduced the illusory motion perceived in the rotating snakes illusion using deep neural networks incorporating predictive coding theory. In the present study, we further examined the properties of the network using a set of 1500 …
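The snippet only summarizes the approach, so the following is a minimal sketch of the general idea rather than the authors' published pipeline: it assumes a hypothetical predictive-coding model exposed as `predict_next_frames` (a placeholder here), and it reads out motion-like vectors by running OpenCV's dense Farneback optical flow between two consecutive predicted frames of a static input image; the file name in the usage stub is likewise hypothetical.

```python
import numpy as np
import cv2  # opencv-python


def predict_next_frames(image):
    """Placeholder for a predictive-coding network (e.g. a PredNet-like model)
    that is shown a static image repeatedly and returns two consecutive
    predicted frames as single-channel uint8 arrays. Swap in a real model here.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Identical predictions -> zero flow, i.e. no illusory motion detected.
    return gray.copy(), gray.copy()


def motion_like_vectors(image_path):
    """Estimate motion-like vectors for a static image by comparing two
    predicted frames with dense (Farneback) optical flow."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    pred_t, pred_t1 = predict_next_frames(image)
    # Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags. Returns an (H, W, 2) array of (dx, dy).
    return cv2.calcOpticalFlowFarneback(pred_t, pred_t1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)


if __name__ == "__main__":
    flow = motion_like_vectors("rotating_snakes.png")  # hypothetical file name
    print("mean flow magnitude:", float(np.linalg.norm(flow, axis=2).mean()))
```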
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
          - G06F17/30861—Retrieval from the Internet, e.g. browsers
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T11/00—2D [Two Dimensional] image generation
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T5/00—Image enhancement or restoration, e.g. from bit-mapped to bit-mapped creating a similar image
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06Q—DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
      - G06Q50/00—Systems or methods specially adapted for a specific business sector, e.g. utilities or tourism
        - G06Q50/01—Social networking
Similar Documents
Publication | Title |
---|---|
Suzuki et al. | A deep-dream virtual reality platform for studying altered perceptual phenomenology |
Ozcelik et al. | Natural scene reconstruction from fMRI signals using generative latent diffusion |
US20240137614A1 (en) | Applications, systems and methods to monitor, filter and/or alter output of a computing device |
Visconti di Oleggio Castello et al. | The neural representation of personally familiar and unfamiliar faces in the distributed system for face perception |
Tadin et al. | Perceptual consequences of centre–surround antagonism in visual motion processing |
Taubert et al. | Different coding strategies for the perception of stable and changeable facial attributes |
Dado et al. | Hyperrealistic neural decoding for reconstructing faces from fMRI activations via the GAN latent space |
Murray et al. | The representation of perceived angular size in human primary visual cortex |
Kobayashi et al. | Motion illusion-like patterns extracted from photo and art images using predictive deep neural networks |
Parks et al. | Augmented saliency model using automatic 3d head pose detection and learned gaze following in natural scenes |
Haskins et al. | Active vision in immersive, 360 real-world environments |
Yokoyama et al. | Perception of direct gaze does not require focus of attention |
Dalrymple et al. | Machine learning accurately classifies age of toddlers based on eye tracking |
Dobs et al. | Near-optimal integration of facial form and motion |
Rideaux et al. | But still it moves: static image statistics underlie how we see motion |
Hayes et al. | Deep saliency models learn low-, mid-, and high-level features to predict scene attention |
Masarwa et al. | Larger images are better remembered during naturalistic encoding |
Tamura et al. | Dynamic visual cues for differentiating mirror and glass |
Al-Wasity et al. | Hyperalignment of motor cortical areas based on motor imagery during action observation |
Kroczek et al. | Angry facial expressions bias towards aversive actions |
Zhang et al. | Multi-view emotional expressions dataset using 2D pose estimation |
Wang et al. | Large-scale calcium imaging reveals a systematic V4 map for encoding natural scenes |
Luo et al. | Facial expression aftereffect revealed by adaption to emotion-invisible dynamic bubbled faces |
Perez et al. | Prior experience alters the appearance of blurry object borders |
Morimoto et al. | Material surface properties modulate vection strength |