Perceptual interfaces and distributed agents supporting ubiquitous computing services
Soldatos et al., 2005 - Google Patents
- Document ID
- 16467734433489800424
- Author
- Soldatos J
- Polymenakos L
- Pnevmatikakis A
- Talantzis F
- Stamatis K
- Carras M
- Publication year
- 2005
- Publication venue
- Proceedings of the Eurescom Summit
Snippet
Ubiquitous computing constitutes a visionary yet constantly evolving computing paradigm, which is supported by a rich set of sensors, as well as a variety of middleware components. Sophisticated ubiquitous computing applications provide context-awareness through …
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
- G06K9/00288—Classification, e.g. identification
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
- H04N5/225—Television cameras; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
- H04N5/232—Devices for controlling television cameras, e.g. remote control; Control of cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
- H04N5/23219—Control of camera operation based on recognized human faces, facial parts, facial expressions or other parts of the human body
Similar Documents
| Publication | Title |
|---|---|
| US11676369B2 (en) | Context based target framing in a teleconferencing environment |
| Stiefelhagen et al. | Modeling focus of attention for meeting indexing based on multiple cues |
| US6894714B2 (en) | Method and apparatus for predicting events in video conferencing and other applications |
| Al-Allaf | Review of face detection systems based artificial neural networks algorithms |
| McCowan et al. | Automatic analysis of multimodal group actions in meetings |
| Ba et al. | Multiperson visual focus of attention from head pose and meeting contextual cues |
| CN102473264B (en) | Method and apparatus for controlling image display according to viewer factors and reactions |
| Karpouzis et al. | Modeling naturalistic affective states via facial, vocal, and bodily expressions recognition |
| Wojek et al. | Activity recognition and room-level tracking in an office environment |
| Gatica-Perez | Analyzing group interactions in conversations: a review |
| Coutrot et al. | An audiovisual attention model for natural conversation scenes |
| CN109241336A (en) | Music recommendation method and device |
| US10937428B2 (en) | Pose-invariant visual speech recognition using a single view input |
| Jayagopi et al. | Predicting two facets of social verticality in meetings from five-minute time slices and nonverbal cues |
| Soldatos et al. | Perceptual interfaces and distributed agents supporting ubiquitous computing services |
| Rybski et al. | Cameo: Camera assisted meeting event observer |
| Azodolmolky et al. | Middleware for in-door ambient intelligence: the polyomaton system |
| Neumann et al. | Integration of audiovisual sensors and technologies in a smart room |
| Gatica-Perez et al. | Nonverbal behavior analysis |
| Howell et al. | Active vision techniques for visually mediated interaction |
| Yu et al. | Towards smart meeting: Enabling technologies and a real-world application |
| Al-Hames et al. | Automatic multi-modal meeting camera selection for video-conferences and meeting browsers |
| WO2010125488A2 (en) | Prompting communication between remote users |
| Hunt et al. | Emotion Recognition in Images and Video with Python For Autism Assessment |
| Jyoti et al. | Salient face prediction without bells and whistles |