A Multimodal Deep Log-Based User Experience (UX) Platform for UX Evaluation
Figure 1. Lean UX learning loop.
Figure 2. Proposed platform overview.
Figure 3. Lean UX platform architecture.
Figure 4. Lean UX model.
Figure 5. Hybrid-level fusion for affect computing.
Figure 6. Self-reported feedback form.
Figure 7. Survey workflow: triangulation of UX metrics with self-reporting.
Figure 8. The workflow of the sentiment and emotion analyzer.
Figure 9. Filter-based feature selection process.
Figure 10. User interface of the UX toolkit.
Figure 11. Overall workflow of the proposed platform.
Figure 12. How it works.
Figure 13. Multimodal data synchronization testing per time window.
Figure 14. Average recognition accuracy for each dataset.
Figure 15. Average accuracy of the classifier using different features on different frequency bands.
Figure 16. The average pupil size of each trial.
Figure A1. Project creation step-wise process.
Figure A2. Dashboard of Lean UX: list of created projects for UX evaluation.
Figure A3. Task creation and evaluation process, task by task.
Figure A4. Momentary UX evaluation: real-time data collection and UX metric measurement.
Figure A5. Task-wise result view.
Figure A6. Project-wise result view.
Abstract
1. Introduction
2. Related Work
2.1. Self-Reported Measurement
2.2. Observational Measurement
2.3. Physiological Measurement
3. Lean UX Platform Architecture
4. Lean UX Platform Implementation Details
4.1. Data Layer
4.1.1. Multimodal Data Acquisition and Synchronization
4.1.2. Data Persistence
4.2. UX Measurements Layer
4.2.1. User Interaction Metrics
4.2.2. Emotion and Stress Metrics
- Physiological-based Emotion Recognition: We use biometric measurements to understand the user's emotional engagement while interacting with the system. We combine multimodal data from various sensors, such as eye tracking for visual attention and EEG for rapid detection of emotions, motivation, engagement (arousal), cognitive workload, and frustration level. In later releases, we will add modules that measure emotional arousal and stress via galvanic skin response (GSR), which captures changes in skin conductivity, and EMG/ECG for detecting muscle activity, motion, stress, and arousal. In this study, we implemented the eye tracking and EEG modules (a band-wise EEG feature sketch is given after this list).
- Video-based Emotion Recognition: The video-based emotion recognition metric consists of two methods and sub-modules: facial expression analysis [41] and body language analysis. Automatic facial expression analysis (AFEA) plays an important role in producing deeper insight into human emotional reactions (valence), such as fear, happiness, sadness, surprise, anger, disgust, or neutrality. For AFEA, we used an inexpensive webcam to capture video of the participant in order to reduce the overall financial cost. Our AFEA first detects the face in a given video frame or image by applying the Viola-Jones cascaded classifier algorithm. Second, facial landmark features (e.g., eyes, brows, mouth, nose) are detected to build the face model. Finally, the face model is fed into a classifier that outputs emotion and facial expression labels [41] (a face-detection sketch is given after this list). Non-verbal gestures (i.e., body language) play a significant part in the communication process and can yield critical insight into one's experience while interacting with a computing system. We will use a depth camera to recognize emotions from user body language in an upcoming release of the Lean UX platform.
- Audio-based Emotion Recognition: We used an automatic method of measuring human emotions, such as anger, sadness, and happiness, by analyzing the user's voice collected through a microphone while using the system [71]. The trained model is built on emotion audio data collected from lab students via microphone recordings with manually labeled audio clips, the Berlin Emotional Speech database (EMO-DB) [72], and the SEMAINE corpus [73]. The model classifies audio coming into the platform as one of seven basic emotions: fear, happiness, sadness, surprise, anger, disgust, or neutrality. A voice activity detection (VAD) technique based on short-time energy (STE) and short-time zero-crossing rate (STZCR) [74,75] removes background noise and eliminates the silent parts of the audio signal. The speech signal is divided into frames; STE measures the energy within each frame for voice segmentation, and STZCR is calculated from the rate of sign change of the speech signal within a particular time window. These two features are used to extract the speech segments for emotion recognition and to discard the unwanted frames (a VAD sketch is given after this list). The output of VAD is passed to audio feature extraction, which extracts features such as pitch, log-energy, the Teager energy operator (TEO), and the zero-crossing rate (ZCR). Subsequently, we employ feature-level fusion using a set of rules to choose the right emotions, as in a previous study [75].
- Multimodal Data Fusion: The primary goal of multimodal fusion is to enhance the accuracy of prediction classifiers [76], which underlines the importance of a multimodal fusion framework that can effectively extract emotions from different modalities in a human-centric environment. The benefit of using multimodal data from different devices is to gain deeper insight into human emotions and motivations. The platform fuses the different emotion measurements, such as audio, video, physiological signals, and eye tracking, using a mixed-method approach to acquire a complete overview of the user's actual emotional experience, as shown in Figure 5 (a decision-level fusion sketch is given after this list).
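As a rough illustration of the EEG features used in the physiological channel (the results below report an EEG differential entropy, DE, feature), the following is a minimal sketch, not the platform's implementation: each channel is band-pass filtered into the standard EEG bands and DE is computed per band under a Gaussian assumption. The band edges, filter order, and sampling rate are illustrative assumptions.

```python
# Minimal sketch (not the platform's implementation): band-wise differential
# entropy (DE) features from multichannel EEG. Band edges, filter order, and
# sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def de_features(eeg, fs=128.0):
    """eeg: (n_channels, n_samples) array. Returns {band: DE per channel}."""
    feats = {}
    for band, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        # DE of a Gaussian signal: 0.5 * ln(2 * pi * e * variance)
        feats[band] = 0.5 * np.log(2 * np.pi * np.e * np.var(filtered, axis=1))
    return feats

# Example with synthetic data: 14 channels, 10 s at 128 Hz.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(de_features(rng.standard_normal((14, 1280)))["alpha"].shape)  # (14,)
```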
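The AFEA front end described above can be sketched with OpenCV's Haar-cascade implementation of the Viola-Jones detector; the landmark extraction and emotion classifier steps are hypothetical placeholders here, since the platform's trained models are not reproduced.

```python
# Minimal sketch of the AFEA front end: Viola-Jones face detection on webcam
# frames via OpenCV's Haar cascade. The landmark/classifier stage is only a
# hypothetical placeholder comment.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return (x, y, w, h) bounding boxes for faces in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

cap = cv2.VideoCapture(0)  # inexpensive webcam, as used in the study
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_faces(frame):
        face_roi = frame[y:y + h, x:x + w]
        # Hypothetical next steps (not implemented here):
        #   points = landmarks(face_roi)            # eyes, brows, mouth, nose
        #   label  = emotion_classifier.predict(points)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("AFEA sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```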
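The STE/STZCR voice activity detection step can likewise be sketched as follows; frame sizes and thresholds are assumed values, not the platform's tuned parameters.

```python
# Minimal sketch of frame-based voice activity detection using short-time
# energy (STE) and short-time zero-crossing rate (STZCR). Frame sizes and
# thresholds are illustrative assumptions.
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

def short_time_energy(frames):
    return np.sum(frames.astype(np.float64) ** 2, axis=1)

def short_time_zcr(frames):
    # Fraction of adjacent samples whose signs differ within each frame.
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def vad_mask(x, energy_ratio=0.05, zcr_max=0.25):
    """Mark frames as speech when energy is high and zero-crossing rate is low."""
    frames = frame_signal(np.asarray(x, dtype=np.float64))
    ste = short_time_energy(frames)
    zcr = short_time_zcr(frames)
    return (ste > energy_ratio * ste.max()) & (zcr < zcr_max)

# Example: 1 s of noise with a louder low-frequency "speech" burst in the middle.
if __name__ == "__main__":
    t = np.arange(16000) / 16000.0
    sig = 0.01 * np.random.randn(16000)
    sig[6000:10000] += 0.5 * np.sin(2 * np.pi * 220 * t[6000:10000])
    print(vad_mask(sig).sum(), "speech frames detected")
```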
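Finally, as one simple form of decision-level fusion (the platform itself uses a hybrid, rule-based scheme, Figure 5), the sketch below combines per-modality emotion probabilities with a weighted average; the weights and probability values are illustrative assumptions.

```python
# Minimal sketch of decision-level fusion: each modality produces a probability
# distribution over the seven emotion labels, and the fused decision is a
# weighted average. Weights and probabilities below are illustrative.
import numpy as np

LABELS = ["fear", "happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

def fuse_decisions(modality_probs, weights=None):
    """modality_probs: {modality_name: length-7 probability vector}."""
    names = list(modality_probs)
    probs = np.stack([np.asarray(modality_probs[m], dtype=float) for m in names])
    w = np.ones(len(names)) if weights is None else np.asarray(
        [weights[m] for m in names], dtype=float)
    fused = (w[:, None] * probs).sum(axis=0) / w.sum()
    return LABELS[int(fused.argmax())], fused

# Example: the video channel is weighted most heavily.
label, _ = fuse_decisions(
    {"video": [0.05, 0.60, 0.05, 0.10, 0.10, 0.05, 0.05],
     "audio": [0.10, 0.40, 0.10, 0.10, 0.20, 0.05, 0.05],
     "eeg":   [0.10, 0.50, 0.10, 0.10, 0.10, 0.05, 0.05]},
    weights={"video": 0.5, "audio": 0.25, "eeg": 0.25})
print(label)  # happiness
```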
4.2.3. Self-Reported Metrics
4.3. Analytics Layer
4.4. Visualization Server (UX Toolkit)
5. Execution Scenarios as Case Studies of Mining Minds Evaluation
6. Results and Evaluation
6.1. Multimodal Data Acquisition and Data Synchronization Process
6.2. Emotion and Stress Metrics
6.3. Self-Reported Metric
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A
Question ID | Bipolar Word (WL) | Bipolar Word (WR) |
---|---|---|
1 | annoying | enjoyable |
2 | not understandable | understandable |
3 | dull | creative |
4 | difficult to learn | easy to learn |
5 | inferior | valuable |
6 | boring | exciting |
7 | not interesting | interesting |
8 | unpredictable | predictable |
9 | slow | fast |
10 | inventive | conventional |
11 | obstructive | supportive |
12 | bad | good |
13 | complicated | easy |
14 | unlikable | pleasing |
15 | usual | leading edge |
16 | unpleasant | pleasant |
17 | not secure | secure |
18 | motivating | demotivating |
19 | does not meet expectations | meets expectations |
20 | inefficient | efficient |
21 | confusing | clear |
22 | impractical | practical |
23 | cluttered | organized |
24 | unattractive | attractive |
25 | unfriendly | friendly |
26 | conservative | innovative |
27 | technical | human |
28 | isolating | connective |
29 | unprofessional | professional |
30 | cheap | premium |
31 | alienating | integrating |
32 | separates me | brings me closer |
33 | unpresentable | presentable |
34 | cautious | bold |
35 | undemanding | challenging |
36 | ordinary | novel |
37 | rejecting | inviting |
38 | repelling | appealing |
39 | disagreeable | likeable |
Appendix B
References
- Hassenzahl, M.; Tractinsky, N. User experience—A research agenda. Behav. Inf. Technol. 2006, 25, 91–97. [Google Scholar] [CrossRef]
- Liang, Y.; Liu, Y.; Loh, H.T. Exploring Online Reviews for User Experience Modeling. In DS 75–7: Proceedings of the 19th International Conference on Engineering Design (ICED13), Design for Harmonies, Vol. 7: Human Behaviour in Design, Seoul, Korea, 19–22 August 2013; Sungkyunkwan University: Seoul, Korea, 2013. [Google Scholar]
- Kula, I.; Atkinson, R.K.; Branaghan, R.J.; Roscoe, R.D. Assessing User Experience via Biometric Sensor Affect Detection. In End-User Considerations in Educational Technology Design; IGI Global: Hershey, PA, USA, 2017; p. 123. [Google Scholar]
- Law, E.L.-C.; van Schaik, P. Modelling User Experience–An Agenda for Research and Practice; Oxford University Press: Oxford, UK, 2010; ISBN 0953-5438. [Google Scholar]
- Roto, V.; Law, E.; Vermeeren, A.; Hoonhout, J. User Experience White Paper: Bringing Clarity to the Concept of User Experience. Result from the Dagstuhl Seminar on Demarcating User Experience, 15–18 September 2010; published online 2011; pp. 6–15. [Google Scholar]
- Laugwitz, B.; Held, T.; Schrepp, M. Construction and Evaluation of a User Experience Questionnaire. In Symposium of the Austrian HCI and Usability Engineering Group; Springer: Berlin/Heidelberg, Germany, 2008; pp. 63–76. [Google Scholar]
- All About UX. Available online: http://www.allaboutux.org/all-methods (accessed on 29 March 2017).
- Bolger, N.; Davis, A.; Rafaeli, E. Diary methods: Capturing life as it is lived. Annu. Rev. Psychol. 2003, 54, 579–616. [Google Scholar] [CrossRef] [PubMed]
- Karapanos, E.; Zimmerman, J.; Forlizzi, J.; Martens, J.-B. User Experience over Time: An Initial Framework. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; ACM: New York, NY, USA, 2009; pp. 729–738. [Google Scholar]
- Fallman, D.; Waterworth, J. Dealing with User Experience and Affective Evaluation in HCI Design: A Repertory Grid Approach. In Proceedings of the Conference on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 2–7. [Google Scholar]
- Scollon, C.N.; Prieto, C.-K.; Diener, E. Experience Sampling: Promises and Pitfalls, Strength and Weaknesses. In Assessing Well-Being; Springer: Dordrecht, The Netherlands, 2009; pp. 157–180. [Google Scholar]
- Vermeeren, A.P.; Law, E.L.-C.; Roto, V.; Obrist, M.; Hoonhout, J.; Väänänen-Vainio-Mattila, K. User Experience Evaluation Methods: Current State and Development Needs. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, Reykjavik, Iceland, 16–20 October 2010; ACM: New York, NY, USA, 2010; pp. 521–530. [Google Scholar]
- Schubert, E. Continuous Measurement of Self-Report Emotional Response to Music; Oxford University Press: Oxford, UK, 2001. [Google Scholar]
- Tähti, M.; Arhippainen, L. A Proposal of collecting Emotions and Experiences. Interact. Exp. HCI 2004, 2, 195–198. [Google Scholar]
- Russell, J.A.; Weiss, A.; Mendelsohn, G.A. Affect grid: A single-item scale of pleasure and arousal. J. Personal. Soc. Psychol. 1989, 57, 493–502. [Google Scholar] [CrossRef]
- Van Gog, T.; Paas, F.; Van Merriënboer, J.J.; Witte, P. Uncovering the problem-solving process: Cued retrospective reporting versus concurrent and retrospective reporting. J. Exp. Psychol. Appl. 2005, 11, 237. [Google Scholar] [CrossRef] [PubMed]
- Goodman, E.; Kuniavsky, M.; Moed, A. Observing the User Experience: A Practitioner’s Guide to User Research. IEEE Trans. Prof. Commun. 2013, 56, 260–261. [Google Scholar] [CrossRef]
- Kuniavsky, M. Observing the User Experience: A Practitioner’s Guide to User Research; Morgan Kaufmann: Burlington, MA, USA, 2003; ISBN 0-08-049756-X. [Google Scholar]
- Fu, B.; Noy, N.F.; Storey, M.-A. Eye tracking the user experience–An evaluation of ontology visualization techniques. Semant. Web J. 2017, 8, 23–41. [Google Scholar] [CrossRef]
- Qu, Q.-X.; Zhang, L.; Chao, W.-Y.; Duffy, V. User Experience Design Based on Eye-Tracking Technology: A Case Study on Smartphone APPs. In Advances in Applied Digital Human Modeling and Simulation; Springer: Cham, Switzerland, 2017; pp. 303–315. [Google Scholar]
- Bojko, A. Eye Tracking the User Experience: A Practical Guide to Research; Rosenfeld Media: New York, NY, USA, 2013; ISBN 1-933820-91-8. [Google Scholar]
- Zheng, W.-L.; Zhu, J.-Y.; Lu, B.-L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2017. [Google Scholar] [CrossRef]
- Li, X.; Yan, J.-Z.; Chen, J.-H. Channel Division Based Multiple Classifiers Fusion for Emotion Recognition Using EEG signals. In ITM Web of Conferences; EDP Sciences: Les Ulis, France, 2017; Volume 11, p. 07006. [Google Scholar]
- Liu, Y.-J.; Yu, M.; Zhao, G.; Song, J.; Ge, Y.; Shi, Y. Real-time movie-induced discrete emotion recognition from EEG signals. IEEE Trans. Affect. Comput. 2017. [Google Scholar] [CrossRef]
- Mundell, C.; Vielma, J.P.; Zaman, T. Predicting Performance Under Stressful Conditions Using Galvanic Skin Response. arXiv 2016, arXiv:1606.01836. [Google Scholar]
- Nourbakhsh, N.; Chen, F.; Wang, Y.; Calvo, R.A. Detecting Users’ Cognitive Load by Galvanic Skin Response with Affective Interference. ACM Trans. Interact. Intell. Syst. 2017, 7, 12. [Google Scholar] [CrossRef]
- Greene, S.; Thapliyal, H.; Caban-Holt, A. A survey of affective computing for stress detection: Evaluating technologies in stress detection for better health. IEEE Consum. Electron. Mag. 2016, 5, 44–56. [Google Scholar] [CrossRef]
- Basu, S.; Bag, A.; Aftabuddin, M.; Mahadevappa, M.; Mukherjee, J.; Guha, R. Effects of Emotion on Physiological Signals. In Proceedings of the 2016 IEEE Annual India Conference (INDICON), Bangalore, India, 16–18 December 2016; pp. 1–6. [Google Scholar]
- Schubert, E. Measuring emotion continuously: Validity and reliability of the two-dimensional emotion-space. Aust. J. Psychol. 1999, 51, 154–165. [Google Scholar] [CrossRef]
- Izard, C.E. The Differential Emotions Scale: DES IV-A; [A Method of Measuring the Meaning of Subjective Experience of Discrete Emotions]; University of Delaware: Newark, DE, USA, 1993. [Google Scholar]
- Sacharin, V.; Schlegel, K.; Scherer, K.R. Geneva Emotion Wheel Rating Study. Available online: https://archive-ouverte.unige.ch/unige:97849 (accessed on 29 March 2017).
- Desmet, P. Measuring emotion: Development and Application of an Instrument to Measure Emotional Responses to Products. In Funology; Springer: Dordrecht, The Netherlands, 2003; pp. 111–123. [Google Scholar]
- Laurans, G.; Desmet, P.M.A.; Karlsson, M.A.; van Erp, J. Using Self-Confrontation to Study User Experience: A New Approach to the Dynamic Measurement of Emotions while Interacting with Products. In Design & Emotion; Chalmers University of Technology: Gothenburg, Sweden, 2006; Volume 2006. [Google Scholar]
- Desmet, P.; Overbeeke, K.; Tax, S. Designing products with added emotional value: Development and application of an approach for research through design. Des. J. 2001, 4, 32–47. [Google Scholar] [CrossRef]
- Hassenzahl, M.; Burmester, M.; Koller, F. AttrakDiff: A Questionnaire to Measure Perceived Hedonic and Pragmatic Quality. In Mensch & Computer; Springer: Berlin, Germany, 2003; pp. 187–196. [Google Scholar]
- Norman, K.L.; Shneiderman, B.; Harper, B.; Slaughter, L. Questionnaire for User Interaction Satisfaction; University of Maryland: College Park, MD, USA, 1998. [Google Scholar]
- Kirakowski, J.; Corbett, M. SUMI: The software usability measurement inventory. Br. J. Educ. Technol. 1993, 24, 210–212. [Google Scholar] [CrossRef]
- Brooke, J. SUS-A quick and dirty usability scale. Usability Eval. Ind. 1996, 189, 4–7. [Google Scholar]
- Lavie, T.; Tractinsky, N. Assessing dimensions of perceived visual aesthetics of web sites. Int. J. Hum. Comput. Stud. 2004, 60, 269–298. [Google Scholar] [CrossRef]
- Paas, F.G.; Van Merriënboer, J.J. The efficiency of instructional conditions: An approach to combine mental effort and performance measures. Hum. Factors 1993, 35, 737–743. [Google Scholar] [CrossRef]
- Siddiqi, M.H.; Alam, M.G.R.; Hong, C.S.; Khan, A.M.; Choo, H. A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition. PLoS ONE 2016, 11, e0162702. [Google Scholar] [CrossRef] [PubMed]
- El Ayadi, M.; Kamel, M.S.; Karray, F. Survey on speech emotion recognition: Features, classification schemes, and databases. Pattern Recognit. 2011, 44, 572–587. [Google Scholar] [CrossRef]
- Plaza, B. Google Analytics for measuring website performance. Tour. Manag. 2011, 32, 477–481. [Google Scholar] [CrossRef]
- Scherr, S.A.; Elberzhager, F.; Holl, K. An Automated Feedback-Based Approach to Support Mobile App Development. In Proceedings of the 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Vienna, Austria, 30 August–1 September 2017; pp. 44–51. [Google Scholar]
- Den Uyl, M.J.; Van Kuilenburg, H. The FaceReader: Online Facial Expression Recognition. In Proceedings of Measuring Behavior 2005, 5th International Conference on Methods and Techniques in Behavioral Research, Wageningen, The Netherlands, 30 August–2 September 2005; Volume 30, pp. 589–590. [Google Scholar]
- Zaman, B.; Shrimpton-Smith, T. The FaceReader: Measuring Instant Fun of Use. In Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles, Oslo, Norway; ACM: New York, NY, USA, 2006; pp. 457–460. [Google Scholar]
- Whitehill, J.; Bartlett, M.; Movellan, J. Automatic Facial Expression Recognition for Intelligent Tutoring Systems. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008; pp. 1–6. [Google Scholar]
- Noroozi, F.; Marjanovic, M.; Njegus, A.; Escalera, S.; Anbarjafari, G. Audio-visual emotion recognition in video clips. IEEE Trans. Affect. Comput. 2017. [Google Scholar] [CrossRef]
- Clifton, B. Advanced Web Metrics with Google Analytics; John Wiley & Sons: Hoboken, NJ, USA, 2012; ISBN 1-118-23958-X. [Google Scholar]
- Miller, S.A. Piwik Web Analytics Essentials; Packt Publishing Ltd.: Birmingham, UK, 2012; ISBN 1-84951-849-1. [Google Scholar]
- Liu, X.; Zhu, S.; Wang, W.; Liu, J. Alde: Privacy Risk Analysis of Analytics Libraries in the Android Ecosystem. In International Conference on Security and Privacy in Communication Systems; Springer: Cham, Switzerland, 2016; pp. 655–672. [Google Scholar]
- Alepuz, I.; Cabrejas, J.; Monserrat, J.F.; Perez, A.G.; Pajares, G.; Gimenez, R. Use of Mobile Network Analytics for Application Performance Design. In Proceedings of the 2017 Network Traffic Measurement and Analysis Conference (TMA), Dublin, Ireland, 21–23 June 2017; pp. 1–6. [Google Scholar]
- Girard, J.M.; Cohn, J.F. A primer on observational measurement. Assessment 2016, 23, 404–413. [Google Scholar] [CrossRef] [PubMed]
- Zheng, W.-L.; Dong, B.-N.; Lu, B.-L. Multimodal Emotion Recognition Using EEG and Eye Tracking Data. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 26–30 August 2014; pp. 5040–5043. [Google Scholar]
- Bergstrom, J.R.; Schall, A. Eye Tracking in User Experience Design; Elsevier: Waltham, MA, USA, 2014; ISBN 0-12-416709-8. [Google Scholar]
- Tzafilkou, K.; Protogeros, N. Diagnosing user perception and acceptance using eye tracking in web-based end-user development. Comput. Hum. Behav. 2017, 72, 23–37. [Google Scholar] [CrossRef]
- Sanfilippo, F. A multi-sensor fusion framework for improving situational awareness in demanding maritime training. Reliab. Eng. Syst. Saf. 2017, 161, 12–24. [Google Scholar] [CrossRef]
- Sivaji, A.; Ahmad, W.F.W. Benefits of Complementing Eye-Tracking Analysis with Think-Aloud Protocol in a Multilingual Country with High Power Distance. In Current Trends in Eye Tracking Research; Springer: Cham, Switzerland, 2014; pp. 267–278. [Google Scholar]
- Vrana, S.R. The psychophysiology of disgust: Differentiating negative emotional contexts with facial EMG. Psychophysiology 1993, 30, 279–286. [Google Scholar] [CrossRef] [PubMed]
- Bacic, D. Understanding Business Dashboard Design User Impact: Triangulation Approach Using Eye-Tracking, Facial Expression, Galvanic Skin Response and EEG Sensors. Available online: http://aisel.aisnet.org/amcis2017/HumanCI/Presentations/21/ (accessed on 15 May 2018).
- Klein, L. UX for Lean Startups: Faster, Smarter User Experience Research and Design; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2013; ISBN 1-4493-3504-7. [Google Scholar]
- Meneweger, T.; Wurhofer, D.; Obrist, M.; Beck, E.; Tscheligi, M. Characteristics of Narrative Textual Data Linked to User Experiences. In Proceedings of the CHI’14 Extended Abstracts on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; ACM: New York, NY, USA, 2014; pp. 2605–2610. [Google Scholar]
- Banos, O.; Amin, M.B.; Khan, W.A.; Afzal, M.; Hussain, M.; Kang, B.H.; Lee, S. The Mining Minds digital health and wellness framework. Biomed. Eng. Online 2016, 15, 76. [Google Scholar] [CrossRef] [PubMed]
- Amin, M.B.; Banos, O.; Khan, W.A.; Muhammad Bilal, H.S.; Gong, J.; Bui, D.-M.; Cho, S.H.; Hussain, S.; Ali, T.; Akhtar, U. On curating multimodal sensory data for health and wellness platforms. Sensors 2016, 16, 980. [Google Scholar] [CrossRef] [PubMed]
- Lin, K.-Y.; Chien, C.-F.; Kerh, R. UNISON framework of data-driven innovation for extracting user experience of product design of wearable devices. Comput. Ind. Eng. 2016, 99, 487–502. [Google Scholar] [CrossRef]
- Node.js. Available online: https://nodejs.org/en/ (accessed on 29 March 2017).
- Hussain, J.; Khan, W.A.; Afzal, M.; Hussain, M.; Kang, B.H.; Lee, S. Adaptive User Interface and User Experience Based Authoring Tool for Recommendation Systems. In International Conference on Ubiquitous Computing and Ambient Intelligence; Springer: Cham, Switzerland, 2014; pp. 136–142. [Google Scholar]
- Hussain, J.; Lee, S. Identifying User Experience (UX) Dimensions from UX Literature Reviews. Available online: http://www.riss.kr/search/detail/DetailView.do?p_mat_type=1a0202e37d52c72d&control_no=f631e21b1c0c2bd1b36097776a77e665 (accessed on 15 May 2018).
- Hussain, J.; Hassan, A.U.; Bilal, H.S.M.; Ali, R.; Afzal, M.; Hussain, S.; Bang, J.; Banos, O.; Lee, S. Model-based adaptive user interface based on context and user experience evaluation. J. Multimodal User Interfaces 2018, 12, 1–16. [Google Scholar] [CrossRef]
- Albert, W.; Tullis, T. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics; Newnes: Oxford, UK, 2013; ISBN 0-12-415792-0. [Google Scholar]
- Banos, O.; Villalonga, C.; Bang, J.; Hur, T.; Kang, D.; Park, S.; Le-Ba, V.; Amin, M.B.; Razzaq, M.A.; Khan, W.A. Human Behavior Analysis by Means of Multimodal Context Mining. Sensors 2016, 16, 1264. [Google Scholar] [CrossRef] [PubMed]
- Ververidis, D.; Kotropoulos, C. A State of the Art Review on Emotional Speech Databases. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.420.6988&rep=rep1&type=pdf (accessed on 15 May 2018).
- McKeown, G.; Valstar, M.F.; Cowie, R.; Pantic, M. The SEMAINE Corpus of Emotionally Coloured Character Interactions. In Proceedings of the 2010 IEEE International Conference on Multimedia and Expo (ICME), Suntec City, Singapore, 19–23 July 2010; pp. 1079–1084. [Google Scholar]
- Yang, X.; Tan, B.; Ding, J.; Zhang, J.; Gong, J. Comparative Study on Voice Activity Detection Algorithm. In Proceedings of the 2010 International Conference on Electrical and Control Engineering (ICECE), Wuhan, China, 25–27 June 2010; pp. 599–602. [Google Scholar]
- Ooi, C.S.; Seng, K.P.; Ang, L.-M.; Chew, L.W. A new approach of audio emotion recognition. Expert Syst. Appl. 2014, 41, 5858–5869. [Google Scholar] [CrossRef]
- Zeng, Z.; Pantic, M.; Roisman, G.I.; Huang, T.S. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 39–58. [Google Scholar] [CrossRef] [PubMed]
- D’mello, S.K.; Kory, J. A review and meta-analysis of multimodal affect detection systems. ACM Comput. Surv. CSUR 2015, 47, 43. [Google Scholar] [CrossRef]
- Patwardhan, A.S. Multimodal Mixed Emotion Detection. In Proceedings of the 2017 2nd International Conference on Communication and Electronics Systems (ICCES), 2017; pp. 139–143. [Google Scholar]
- Poria, S.; Cambria, E.; Howard, N.; Huang, G.-B.; Hussain, A. Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing 2016, 174, 50–59. [Google Scholar] [CrossRef]
- Wöllmer, M.; Weninger, F.; Knaup, T.; Schuller, B.; Sun, C.; Sagae, K.; Morency, L.-P. Youtube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intell. Syst. 2013, 28, 46–53. [Google Scholar] [CrossRef]
- Mansoorizadeh, M.; Charkari, N.M. Multimodal information fusion application to human emotion recognition from face and speech. Multimed. Tools Appl. 2010, 49, 277–297. [Google Scholar] [CrossRef]
- Sarkar, C.; Bhatia, S.; Agarwal, A.; Li, J. Feature Analysis for Computational Personality Recognition Using Youtube Personality Data Set. In Proceedings of the 2014 ACM Multi Media on Workshop on Computational Personality Recognition, Orlando, FL, USA, 7 November 2014; ACM: New York, NY, USA, 2014; pp. 11–14. [Google Scholar]
- Poria, S.; Cambria, E.; Hussain, A.; Huang, G.-B. Towards an intelligent framework for multimodal affective data analysis. Neural Netw. 2015, 63, 104–116. [Google Scholar] [CrossRef] [PubMed]
- Wang, S.; Zhu, Y.; Wu, G.; Ji, Q. Hybrid video emotional tagging using users’ EEG and video content. Multimed. Tools Appl. 2014, 72, 1257–1283. [Google Scholar] [CrossRef]
- Dobrišek, S.; Gajšek, R.; Mihelič, F.; Pavešić, N.; Štruc, V. Towards efficient multi-modal emotion recognition. Int. J. Adv. Robot. Syst. 2013, 10, 53. [Google Scholar] [CrossRef]
- Jick, T.D. Mixing qualitative and quantitative methods: Triangulation in action. Adm. Sci. Q. 1979, 24, 602–611. [Google Scholar] [CrossRef]
- Ali, R.; Afzal, M.; Hussain, M.; Ali, M.; Siddiqi, M.H.; Lee, S.; Kang, B.H. Multimodal hybrid reasoning methodology for personalized wellbeing services. Comput. Biol. Med. 2016, 69, 10–28. [Google Scholar] [CrossRef] [PubMed]
- Sauro, J.; Dumas, J.S. Comparison of Three One-Question, Post-Task Usability Questionnaires. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; ACM: New York, NY, USA, 2009; pp. 1599–1608. [Google Scholar]
- Yousefpour, A.; Ibrahim, R.; Hamed, H.N.A. Ordinal-based and frequency-based integration of feature selection methods for sentiment analysis. Expert Syst. Appl. 2017, 75, 80–93. [Google Scholar] [CrossRef]
- Xia, R.; Zong, C.; Li, S. Ensemble of feature sets and classification algorithms for sentiment classification. Inf. Sci. 2011, 181, 1138–1152. [Google Scholar] [CrossRef]
- Taylor, A.; Marcus, M.; Santorini, B. The Penn Treebank: An Overview. In Treebanks; Springer: Dordrecht, The Netherlands, 2003; pp. 5–22. [Google Scholar]
- Lucey, P.; Cohn, J.F.; Kanade, T.; Saragih, J.; Ambadar, Z.; Matthews, I. The Extended Cohn-Kanade Dataset (ck+): A Complete Dataset for Action Unit and Emotion-Specified Expression. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, CA, USA, 13–18 June 2010; pp. 94–101. [Google Scholar]
- Lyons, M.J.; Akamatsu, S.; Kamachi, M.; Gyoba, J.; Budynek, J. The Japanese Female Facial Expression (JAFFE) Database. In Proceedings of the Third International Conference on Automatic Face and Gesture Recognition, Nara, Japan, 14–16 April 1998; pp. 14–16. [Google Scholar]
- Krumhuber, E.G.; Manstead, A.S. Can Duchenne smiles be feigned? New evidence on felt and false smiles. Emotion 2009, 9, 807. [Google Scholar] [CrossRef] [PubMed]
- Lee, K.-C.; Ho, J.; Kriegman, D.J. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 684–698. [Google Scholar] [PubMed]
- Thomaz, C.E. FEI Face Database. Available online: http://fei.edu.br/~cetfacedatabase.html (accessed on 2 October 2012).
Rule ID | Condition (IF) | Action (THEN) |
---|---|---|
R1 | IF emotional_state = "anger" AND cognitive_state = "stress" AND usability.tasksuccess = "failure" | T1, WL1, WL13 |
R2 | IF emotional_state = "anger" AND cognitive_state = "confuse" AND usability.tasksuccess = "failure" | T1, WL1, WL21 |
R3 | IF emotional_state = "disgust" AND cognitive_state = "confuse" AND usability.tasksuccess = "failure" | T1, WL19, WL21 |
Rn | IF emotional_state = "happy" AND usability.tasksuccess = "complete" | T1, WR14, WR9 |
No. of API Calls | Missing Data Packets | Error Rate |
---|---|---|
20,000 | 2 | 0.010 |
40,000 | 5 | 0.012 |
60,000 | 9 | 0.015 |
80,000 | 12 | 0.015 |
120,000 | 21 | 0.017 |
Average | | 0.03 |
Expression | Happy | Anger | Sad | Surprise | Fear | Disgust | Neutral |
---|---|---|---|---|---|---|---|
Happy | 99 | 0 | 0 | 1 | 0 | 0 | 0 |
Anger | 0 | 98 | 0 | 1 | 0 | 1 | 0 |
Sad | 0 | 0 | 98 | 0 | 1 | 0 | 1 |
Surprise | 0 | 1 | 1 | 96 | 0 | 2 | 0 |
Fear | 0 | 1 | 1 | 1 | 95 | 1 | 1 |
Disgust | 0 | 1 | 1 | 0 | 1 | 97 | 0 |
Neutral | 0 | 0 | 1 | 0 | 0 | 0 | 99 |
Overall Accuracy | 97.429% |
Expression | Happy | Anger | Sad | Surprise | Fear | Disgust | Neutral |
---|---|---|---|---|---|---|---|
Happy | 83 | 10 | 0 | 7 | 0 | 0 | 0 |
Anger | 2 | 92 | 0 | 1 | 0 | 5 | 0 |
Sad | 0 | 0 | 87 | 0 | 2 | 0 | 11 |
Surprise | 6 | 3 | 0 | 89 | 0 | 2 | 0 |
Fear | 0 | 1 | 1 | 8 | 87 | 1 | 2 |
Disgust | 0 | 7 | 2 | 6 | 2 | 80 | 3 |
Neutral | 0 | 0 | 10 | 0 | 2 | 0 | 88 |
Overall Accuracy | 86.571% |
Subject | Facial Expression | Audio-Based | Textual | EEG (DE) | Eye Tracking | Fusion (Feature Level) | Fusion (Decision Level) |
---|---|---|---|---|---|---|---|
1 | 95 | 84 | 91 | 68 | 80 | 96 | 96 |
2 | 92 | 82 | 89 | 63 | 82 | 97 | 98 |
3 | 100 | 80 | 94 | 64 | 83 | 98 | 99 |
4 | 98 | 83 | 89 | 62 | 89 | 93 | 98 |
5 | 98 | 84 | 93 | 76 | 90 | 92 | 93 |
6 | 90 | 83 | 94 | 70 | 81 | 97 | 98 |
7 | 94 | 84 | 94 | 72 | 87 | 91 | 93 |
8 | 93 | 83 | 91 | 69 | 85 | 94 | 94 |
9 | 93 | 80 | 92 | 64 | 80 | 95 | 93 |
10 | 98 | 82 | 92 | 70 | 87 | 98 | 96 |
Average | 95.1 | 82.5 | 91.9 | 67.8 | 84.4 | 95.1 | 95.8 |
Dataset | Classifier | # of Features | Accuracy (%) |
---|---|---|---|
Movie | SVM | 3625 ± 1209 | 93 |
Movie | NB | 2400 ± 1375 | 92 |
Movie | DT | 3816 ± 1254 | 88 |
Movie | Ensemble | 3779 ± 1314 | 94 |
Movie | Average | 3405 | 91.75 |
Book | SVM | 2199 ± 1066 | 87 |
Book | NB | 2612 ± 1074 | 86 |
Book | DT | 2031 ± 1048 | 83 |
Book | Ensemble | 2956 ± 1021 | 89 |
Book | Average | 2449 | 86.25 |
Electronic | SVM | 1323 ± 474 | 85 |
Electronic | NB | 1002 ± 1090 | 89 |
Electronic | DT | 1938 ± 625 | 87 |
Electronic | Ensemble | 1760 ± 855 | 86 |
Electronic | Average | 1505 | 86.75 |
Kitchen | SVM | 1843 ± 770 | 89 |
Kitchen | NB | 1566 ± 470 | 86 |
Kitchen | DT | 1600 ± 787 | 89 |
Kitchen | Ensemble | 1969 ± 877 | 90 |
Kitchen | Average | 1744 | 88.5 |
Music | SVM | 642 ± 296 | 89 |
Music | NB | 819 ± 276 | 87 |
Music | DT | 855 ± 267 | 86 |
Music | Ensemble | 362 ± 155 | 88 |
Music | Average | 669 | 87.5 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).