Authors:
Milena Fernandes 1; Roberto Filho 1; Iwens Sene-Junior 2; Stefan Sarkadi 3; Alison R. Panisson 1 and Analúcia Morales 1
Affiliations:
1 Department of Computing, Federal University of Santa Catarina, Santa Catarina, Brazil;
2 Institute of Informatics, Federal University of Goiás, Goiânia, Brazil;
3 Department of Informatics, King's College London, London, U.K.
Keyword(s):
Interpretable ML, Stress, AI for Healthcare.
Abstract:
In recent years, several scientific studies have shown that occupational stress has a significant impact on workers, particularly those in the healthcare sector. This stress results from an imbalance between work conditions, the worker's ability to perform their tasks, and the social support they receive from colleagues and management. Researchers have explored occupational stress as part of a broader study on affective systems in healthcare, investigating the use of biomarkers and machine learning approaches to identify early signs of stress and prevent Burnout Syndrome. In this paper, a set of machine learning (ML) algorithms was evaluated on statistical biomarker data from the AffectiveRoad database to determine whether explanations can help identify stress more objectively. This research integrates explainability and machine learning to aid in identifying various levels of stress, a combination that had not previously been evaluated in the domain of occupational stress. Random Forest was the best-performing model for this task, followed by k-Nearest Neighbors and a Neural Network. Explainers were then applied to the Random Forest model, highlighting feature importance, partial dependencies between features, and a summary of the impact of features on outputs based on their values.
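The workflow the abstract describes (train a Random Forest on biomarker statistics, then derive global explanations from it) can be sketched as follows. This is a minimal illustration using synthetic stand-in data and invented feature names (`hr_mean`, `eda_mean`, `breathing_rate`), not the AffectiveRoad dataset or the authors' actual pipeline; it shows two of the explanation views mentioned, feature importance and partial dependence, via scikit-learn.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Stand-in biomarker statistics (hypothetical features, for illustration only).
X = rng.normal(size=(n, 3))
feature_names = ["hr_mean", "eda_mean", "breathing_rate"]
# Synthetic stress label driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Global explanation 1: impurity-based feature importances.
for name, imp in zip(feature_names, model.feature_importances_):
    print(f"{name}: {imp:.3f}")

# Global explanation 2: partial dependence of the prediction on one feature,
# i.e., the model's average response as that feature varies.
pd_result = partial_dependence(model, X_train, features=[0], kind="average")
print("partial dependence curve shape:", pd_result["average"].shape)
```

A SHAP summary plot (the "impact of features on outputs based on their values") would typically be produced with a separate explainability library on top of the same fitted model.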