Pain Recognition With Physiological Signals Using Multi-Level Context Information
A Major Project report submitted in partial fulfillment of the requirements for the award of the degree of
Bachelor of Technology
in
Artificial Intelligence and Machine Learning
Submitted by
R.Shiva
(21H51A7343)
SK.Fareed
(21H51A7344)
G.Madhu
(21H51A7349)
2024-2025
CMR COLLEGE OF ENGINEERING & TECHNOLOGY
KANDLAKOYA, MEDCHAL ROAD, HYDERABAD – 501401
CERTIFICATE
This is to certify that the Major Project Phase-1 report entitled "Pain
Recognition With Physiological Signals Using Multi-Level Context
Information" being submitted by R. Shiva (21H51A7343),
SK. Fareed (21H51A7344), and G. Madhu (21H51A7349) in partial fulfillment for the
award of Bachelor of Technology in Artificial Intelligence and Machine
Learning is a record of bonafide work carried out by them under my guidance and
supervision. The results embodied in this project report have not been submitted to
any other University or Institute for the award of any Degree.
With great pleasure we take this opportunity to express our heartfelt gratitude to all
the people who helped in making this project work a grand success.
We are grateful to Dr. Y. Ambica, Assistant Professor, Department of
Artificial Intelligence and Machine Learning (CSE-AI&ML), for the valuable technical suggestions and guidance
during the execution of this project work.
We would like to thank Dr. S. Kirubakaran, Head of the Department of Artificial
Intelligence and Machine Learning, CMR College of Engineering and Technology, who was a
major driving force behind the successful completion of our project work.
We are very grateful to Dr. Ghanta Devadasu, Dean-Academics, CMR College of
Engineering and Technology, for his constant support and motivation in carrying out the project
work successfully.
We extend our heartfelt gratitude to Dr. Seshu Kumar Avadhanam, Principal, CMR
College of Engineering and Technology, for his unwavering support and guidance in the
successful completion of our project; his encouragement has been invaluable throughout this
endeavor.
We are highly indebted to Major Dr. V A Narayana, Director, CMR College of
Engineering and Technology, for giving permission to carry out this project in a successful and
fruitful way.
We express our sincere thanks to Shri. Ch. Gopal Reddy, Secretary & Correspondent,
CMR Group of Institutions, and Shri. Ch. Abhinav Reddy, CEO, CMR Group of Institutions, for
their continuous care and support.
Finally, we extend thanks to our parents, who stood behind us at different stages of this
project. We sincerely acknowledge and thank all those who supported us directly and indirectly in
the completion of this project work.
R. Shiva 21H51A7343
SK.Fareed 21H51A7344
G.Madhu 21H51A7349
TABLE OF CONTENTS
CHAPTER NO.  TITLE  PAGE NO.
LIST OF FIGURES  ii
LIST OF TABLES  iii
ABSTRACT  iv
1  INTRODUCTION  1
1.1  Problem Statement  2
1.2  Research Objective  3
1.3  Project Scope and Limitations  4
2  BACKGROUND WORK  6
2.1  Automatic recognition methods supporting pain assessment  7
2.1.1  Introduction  7
2.1.2  Merits, Demerits and Challenges  7
2.1.3  Implementation of Intelligent Tutoring System  8
2.2  The BioVid Heat Pain Database: data for the advancement  9
2.2.1  Introduction  9
2.2.2  Merits, Demerits and Challenges  9
2.2.3  Implementation of Existing Method  9
2.3  Affect recognition from naturalistic movement data  10
2.3.1  Introduction  10
2.3.2  Merits, Demerits and Challenges  10
2.3.3  Implementation of Existing Method  11
2.4  Centered continuous pain intensity assessment  11
2.4.1  Introduction  11
2.4.2  Merits, Demerits and Challenges  12
2.4.3  Implementation of Existing Method  12
3  PROPOSED SYSTEM  21
List of Figures
FIGURE NO.  TITLE  PAGE NO.
2.1  Flow of Data Diagram  16
List of Tables
TABLE NO.  TITLE  PAGE NO.
3.1  Literature Survey  15
3.3  Comparison of Existing Systems  38
ABSTRACT
In the medical field, automatic pain detection is crucial. Previous research has shown that
automated pain identification algorithms preferentially use physiological signal
characteristics with traditional models. These techniques work well; however, they depend
largely on medical knowledge to extract physiological signal features. This work proposes a
deep learning strategy based on physiological signals that performs both feature extraction
and classification, without requiring a medical background.
Our experimental findings cover the pain detection tasks Pain 0 vs. Pain 1, Pain 0 vs. Pain
2, Pain 0 vs. Pain 3, and Pain 0 vs. Pain 4 for Part A of the BioVid Heat Pain database. In a
Leave-One-Subject-Out cross-validation analysis, the classification task between Pain 0 and
Pain 4 yields an average accuracy of 84.8 ± 13.3% for 87 subjects and 87.8 ± 11.4% for 67
subjects. The suggested approach exploits deep learning's superior performance over
traditional techniques while handling physiological inputs.
The base proposal uses a multilevel (two-level) feature selection algorithm, CNN + BI-LSTM.
In the extension work, we add a third level of feature optimization by
combining CNN + BI-LSTM + BI-GRU. In this way, BI-LSTM selects the CNN-optimized
features, and BI-GRU selects the BI-LSTM-optimized features. Three-level feature
optimization and selection contributes to increased accuracy.
CHAPTER 1
INTRODUCTION
Traditional pain assessment relies on subjective self-reports and observer ratings, which are
difficult to obtain from non-communicative patients and often lack objectivity. To address these
issues, there is a pressing need for an automated pain recognition system that leverages
physiological signals to ensure more accurate and universal pain assessment in the medical field.
The primary objective of this study is to develop an automated, deep learning-based pain
detection system that uses physiological signals for both feature extraction and classification,
eliminating the need for medical expertise in feature extraction. The study introduces the
concept of multi-level contextual information, which considers multidimensional features of
physiological signals to distinguish between pain and non-pain states.
The research aims to leverage advanced deep learning models, such as CNN + BI-LSTM and
extended versions like CNN + BI-LSTM + BI-GRU, to optimize feature extraction and
improve accuracy. Through multi-level feature optimization, the proposed system seeks to
outperform traditional methods while utilizing only physiological inputs for pain recognition.
The research evaluates the system's performance using two primary datasets—Part A of the
BioVid Heat Pain database and the EmoPain 2021 dataset—and focuses on several
classification tasks (e.g., Pain 0 vs. Pain 1, Pain 0 vs. Pain 2, Pain 0 vs. Pain 3, and Pain 0 vs.
Pain 4). The goal is to demonstrate that multi-level context information and advanced deep
learning models can achieve higher accuracy compared to traditional uni-level methods,
offering a reliable, automated solution for pain detection in medical diagnostics.
LIMITATIONS:
1. Dependence on Specific Physiological Signals:
The model primarily relies on EDA and ECG signals, which may not fully capture
pain perception in all individuals. Other signals like Facial Expressions, EMG
(Electromyography), and EEG (Electroencephalogram) could provide additional
insights.
2. Signal Noise and Artifacts:
Physiological signals are prone to noise and artifacts, which can reduce model
reliability. External factors such as stress, anxiety, or environmental conditions may
impact signal quality.
3. Dataset Constraints:
The BioVid Heat Pain Database and EmoPain 2021 focus on controlled experiments,
which may not fully reflect real-world clinical settings. The model's performance in
diverse, unstructured environments remains uncertain.
4. Computational Complexity:
The proposed CNN + Bi-LSTM + Bi-GRU model is computationally intensive,
requiring high processing power and memory. Real-time pain detection on low-power
embedded systems (e.g., wearable devices) may be challenging.
5. Lack of Personalization:
Pain perception varies significantly between individuals based on age, gender, and
other individual factors, which a single generalized model may not capture.
CHAPTER 2
BACKGROUND WORK
2.1 Automatic recognition methods supporting pain assessment:
2.1.1 Introduction: Pain is a subjective experience that plays a vital role in healing but
becomes a significant issue when chronic, impacting individuals and society. Effective pain
assessment is crucial but challenging, especially for non-communicative patients. Traditional
methods often lack objectivity, leading to under- or overtreatment. Emerging automated pain
recognition systems, using behaviors like facial expressions and physiological signals, offer
the potential for continuous and unbiased monitoring, promising better pain management and
improved clinical outcomes.
2.1.2 Merits, Demerits and Challenges:
Merits:
Promising Approaches:
o Progress has been made with multimodal systems combining different data
types.
o Utilization of weak and ordinal ground truth data with minimal annotated data
has shown success.
o Learning models are being personalized, and temporal contexts are being
leveraged for better accuracy.
Improved Multi-Modal System Insights:
o Combining modalities such as audio and visual data has demonstrated potential
for enhancing sensitivity and specificity.
Usefulness in Clinical Scenarios:
o Efforts are directed toward better clinical adaptation by addressing practical challenges.
Demerits:
Challenges:
Knowledge Gaps:
o Limited understanding of the physiology of pain and its measurable responses.
o Need to identify factors influencing pain responses and explore the interaction
between emotions and pain.
Data and Validation:
o Insufficient availability of data from real clinical scenarios.
o Requirement for datasets with multimodal annotations and mechanisms to
control false signals.
Algorithm and Hardware Improvements:
o Addressing inter-individual differences in pain responses.
o Enhancing the detection of low-intensity pain signals.
o Developing systems to manage artifacts and interference (e.g., lighting
changes, occlusions, motion).
o Tackling small datasets and data sparsity while ensuring model reliability.
Real-Time Processing and Emotional Blending:
o Overcoming dependence on long-term data collection, which is impractical in
clinical settings.
o Differentiating pain signals from accompanying emotions using advanced
algorithms.
2.2 The BioVid Heat Pain Database: data for the advancement of automated pain recognition
2.2.2 Merits, Demerits and Challenges:
Merits:
Has potential for increased reliability and objectivity compared to traditional verbal
and visual scales.
Facilitates automated pain recognition, reducing human intervention.
Demerits:
Existing coding systems are costly and time-intensive.
Challenges:
Achieving adequate theoretical testing quality for automated systems.
2.3 Affect recognition from naturalistic movement data
2.3.1 Introduction: The AffectMove 2021 Challenge was the first Affective Movement
Recognition (AffectMove) challenge, designed to bring together datasets of affective bodily
behavior from real-life scenarios to encourage research in affective computing. Despite the
relevance of movement-based models, automatic detection of naturalistic affective body
expressions lags behind other modalities. This challenge aimed to utilize existing body
movement datasets to tackle the problem of recognizing complex and naturalistic affective
behaviors, with a focus on three specific tasks based on real-life contexts and sensor data.
2.3.2 Merits, Demerits and Challenges:
Demerits:
The research in this area is still lagging compared to other modalities of affective
computing.
Complexity of datasets from real-life scenarios creates challenges for analysis and
generalization.
The challenge relies heavily on multimodal sensor data, which can complicate analysis
Challenges:
Developing effective methods for recognizing naturalistic and complex affective behaviors.
2.3.3 Implementation of Existing Method:
Experimental Competitions: Teams competed to solve at least one of the three challenges
with methods focusing on automatic emotion detection from bodily movement data.
2.4 Centered continuous pain intensity assessment
2.4.1 Introduction
The study introduces methods for developing a personalized system to continuously assess
pain intensity using biophysiological channels. The focus is on estimating individual
differences and retrieving the most relevant data using meta-information, personality traits,
and machine learning techniques.
The goal is to create specialized classifiers that are efficient, accurate, and require shorter
training times compared to classifiers trained on complete data. This study also explores the
real-time application of these systems while addressing the challenges of incremental data
processing.
2.4.3 Implementation of Existing Method:
2. Feature Extraction
Extract features from physiological signals using algorithms like:
o Time-domain analysis (e.g., mean, standard deviation).
o Frequency-domain analysis (e.g., power spectral density).
o Nonlinear methods (e.g., entropy, fractal analysis).
Combine extracted features with meta-information and personality traits to create a
comprehensive feature set.
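As a concrete illustration, the following sketch computes one representative feature from each domain for a single physiological signal window using NumPy and SciPy; the function name extract_features and the assumed 512 Hz sampling rate are illustrative only, not part of the original study.
import numpy as np
from scipy.signal import welch
from scipy.stats import entropy

def extract_features(signal, fs=512):
    # Hypothetical helper: basic multi-domain features from one signal window
    feats = {}
    feats["mean"] = np.mean(signal)                      # time-domain statistic
    feats["std"] = np.std(signal)                        # time-domain statistic
    freqs, psd = welch(signal, fs=fs, nperseg=min(256, len(signal)))
    feats["total_power"] = np.trapz(psd, freqs)          # frequency-domain: area under the power spectral density
    hist, _ = np.histogram(signal, bins=32, density=True)
    feats["entropy"] = entropy(hist + 1e-12)             # nonlinear: Shannon entropy of the amplitude distribution
    return feats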
3. Personalized Classifier Design
Segmentation of Data:
o Partition data based on individual differences using meta-information and
personality traits.
Specialized Classifiers:
o Train machine learning models (e.g., Support Vector Machines, Random
Forest, or deep learning models like CNNs) on segmented datasets.
o These classifiers adapt to specific individuals, improving personalization and
accuracy.
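A minimal sketch of this segmentation idea with scikit-learn is given below; the DataFrame layout, the group column age_band, and the label column pain_level are assumptions chosen for illustration rather than the study's actual data schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def train_personalized_classifiers(df, group_col="age_band", label_col="pain_level"):
    # Train one specialized classifier per meta-information segment (illustrative only)
    classifiers = {}
    for group, subset in df.groupby(group_col):
        X = subset.drop(columns=[group_col, label_col])
        y = subset[label_col]
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(X, y)
        classifiers[group] = clf   # one model adapted to this segment of individuals
    return classifiers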
4. Training and Optimization
Incremental Learning:
o Train the model with initial data and update it incrementally as new data
becomes available.
o Use methods like online learning or transfer learning for adaptation.
Optimization Techniques:
o Use meta-information to select relevant features for each individual.
o Apply hyperparameter tuning (e.g., grid search or Bayesian optimization) to
improve model performance.
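The sketch below illustrates both ideas with scikit-learn on toy stand-in data: grid search over an assumed parameter grid, followed by batch-wise incremental updates with an estimator that supports partial_fit. None of the specific values are taken from this project.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier

# Toy stand-in data; the real inputs would be the segmented physiological feature set
X, y = np.random.rand(200, 10), np.random.randint(0, 5, 200)

# Hyperparameter tuning via grid search (assumed parameter grid)
param_grid = {"n_estimators": [50, 100], "max_depth": [None, 5]}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=3)
search.fit(X, y)
print("Best parameters:", search.best_params_)

# Incremental (online) learning: the model is updated batch by batch as new data arrives
online_clf = SGDClassifier()
for start in range(0, len(X), 50):
    online_clf.partial_fit(X[start:start + 50], y[start:start + 50], classes=np.arange(5))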
5. Real-Time Data Processing
Implement a real-time data pipeline using technologies like:
o Edge Computing: Preprocess and analyze signals on local devices for real-
time feedback.
This implementation method ensures a personalized, efficient, and real-time system for
continuous pain intensity assessment, addressing the challenges and leveraging the merits
highlighted in the study.
Uni-level machine learning algorithms like SVM, Random Forest, and Linear Regression are
unable to achieve satisfactory recognition accuracy for pain assessment. Their inability to
model temporal dependencies and integrate multiple data streams (visual, voice,
physiological) hinders their practical application. Multilevel deep learning methods, such as
CNN + BI-LSTM, provide superior performance by addressing these limitations through
advanced feature extraction and temporal data modeling.
SDLC Stages
1. Requirement Gathering: Goals and system features will be identified and defined.
2. Analysis: The system structure will be planned, feasibility risks will be evaluated, and
a risk management strategy will be established.
3. Designing: Relevant design diagrams, business rules, and system prototypes will be
developed.
4. Coding: The actual system development phase using the selected multilevel deep
learning architectures.
5. Testing: The developed system's accuracy and performance will be validated,
optimized, and iteratively improved.
Technical Workflow
1. CNN (Convolutional Neural Networks): Captures spatial features from pain-related data like
images, videos, or other visual signals.
2. BI-LSTM (Bi-directional LSTM): Handles temporal sequences and captures changes in pain
responses over time. Combining these two enables the model to integrate spatial and temporal
patterns.
Challenges Addressed
1. Generalization Issues: Traditional machine learning models fail to generalize across real-
world physiological response conditions.
2. Signal Variability: Physiological signal artifacts and emotion blending affect accuracy.
Multilevel methods like CNN + BI-LSTM address these by improving feature extraction and
classification across time-based signals.
• Physiological Response Features: such as biopotentials and movement data from wearable
devices.
The proposed solution will follow the systematic SDLC stages, progressing from requirement
gathering to testing and maintenance. This timeline ensures that the system is thoroughly
tested and optimized at every stage before final deployment.
CHAPTER 3
PROPOSED SYSTEM
RandomForest: A powerful ensemble learning method that builds multiple decision trees during
training and aggregates their predictions for classification (majority vote) or regression
(average). It is widely used in domains such as finance, healthcare, and image
analysis, offering robustness and high accuracy.
CNN+BiLSTM: This hybrid model integrates Convolutional Neural Networks (CNN) with Bidirectional
Long Short-Term Memory (BiLSTM) networks, combining spatial feature extraction with the modeling of
temporal dependencies, making it ideal for natural language processing (NLP) and time
series forecasting.
CNN+BiLSTM+BiGRU: An advanced deep learning architecture that combines CNN, BiLSTM, and
BiGRU layers, adding a third level of feature optimization in which the BiGRU refines the
features selected by the BiLSTM.
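A minimal Keras sketch of how such a three-level architecture could be assembled is shown below. The layer sizes, the RepeatVector step, and the function name buildExtensionModel are illustrative assumptions rather than the exact configuration used in this project; a model built this way would play the role of the extension model used later in the prediction module.
from keras.models import Sequential
from keras.layers import (Conv1D, MaxPooling1D, Flatten, RepeatVector,
                          Bidirectional, LSTM, GRU, Dropout, Dense)

def buildExtensionModel(timesteps, features, num_classes):
    model = Sequential()
    # Level 1: CNN layers extract local patterns from the physiological signal windows
    model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(timesteps, features)))
    model.add(Conv1D(filters=64, kernel_size=2, activation='relu'))
    model.add(MaxPooling1D(pool_size=1))
    model.add(Flatten())
    model.add(RepeatVector(2))
    # Level 2: Bi-LSTM refines the CNN features and models temporal dependencies
    model.add(Bidirectional(LSTM(32, activation='relu', return_sequences=True)))
    # Level 3: Bi-GRU further optimizes the Bi-LSTM features
    model.add(Bidirectional(GRU(32, activation='relu')))
    model.add(Dropout(0.2))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model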
3.3 DESIGNING
The design stage takes as its initial input the requirements identified in the approved
requirements document. For each requirement, a set of one or more design elements will
be produced as a result of interviews, workshops, and/or prototype efforts. Design
elements describe the desired software features in detail, and generally include
functional hierarchy diagrams, screen layout diagrams, tables of business rules, business
process diagrams, pseudo code, and a complete entity-relationship diagram with a full
data dictionary. These design elements are intended to describe the software in sufficient
detail that skilled programmers may develop the software with minimal additional input.
When the design document is finalized and accepted, the RTM is updated to show that
each design element is formally associated with a specific requirement. The outputs of
the design stage are the design document, an updated RTM, and an updated project plan.
UML Diagram: The Unified Modelling Language allows the software engineer to express an analysis model
using a modelling notation that is governed by a set of syntactic, semantic, and pragmatic
rules.
Class Diagram: A class diagram is a key part of object-oriented modeling. It shows the structure of a
system and relationships between objects. Each class is represented as a box with three
parts:
1. Name – Class identifier
2. Attributes – Properties of the class
3. Methods – Functions of the class
Class diagrams help in designing, coding, and organizing software efficiently.
Class diagram elements (PainRecognition.py):
Attributes: x, y; X_train, X_test; y_train, y_test
Methods: uploadDataset(), preprocessDataset(), calculateMetrics(), RandomForest()
Use case Diagram: A use case diagram shows how users interact with a system. It highlights different types
of users and their actions. This diagram is often used with textual use cases and other
diagrams to explain system behavior.
Use cases: uploadDataset(), preprocessDataset(), RandomForest()
Collaboration diagram: A collaboration diagram shows how objects interact using sequenced messages. It
combines details from class, sequence, and use case diagrams to represent both
structure and behavior of a system.
Component Diagram: A component diagram in UML shows how different parts of a system connect and
work together. It illustrates the structure of complex systems by linking components
through assembly connectors, representing the service consumer-provider
relationship.
Deployment Diagram: A deployment diagram in UML shows how software and hardware components
work together. It represents nodes (e.g., servers) and artifacts (e.g., applications) and
how they connect. Nodes are shown as boxes, with artifacts inside them, and can
have sub-nodes for complex systems.
Diagram activities: Upload Dataset, Preprocess Dataset, Calculate Metrics, Dataset Split, Predict
Data Flow Diagram: A Data Flow Diagram (DFD) shows how data moves through a system, illustrating inputs,
processes, and outputs. It provides a clear view of business functions and can be detailed as
needed. DFDs use simple symbols to represent data flow and help in analyzing and
automating processes. (DFD external entity: User)
SYSTEM IMPLEMENTATION:
Data Exploration: This module is responsible for loading and analyzing data within the system.
It examines the BioVid_coords dataset to understand its structure and contents, processes the
data using Pandas and NumPy for reshaping and eliminating unnecessary columns, normalizes
the training dataset using an appropriate scaling technique to ensure consistency, visualizes
data patterns with Seaborn and Matplotlib to gain insights, applies label encoding to transform
categorical attributes into numerical representations, and selects key features to enhance model
accuracy and efficiency.
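A minimal sketch of these preprocessing steps is shown below; the file path and the column name "label" are illustrative assumptions, not the actual dataset layout.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

# Illustrative preprocessing sketch (assumed file path and label column name)
df = pd.read_csv("Dataset/BioVid_coords.csv")
df.fillna(0, inplace=True)                                            # replace missing values
df["label"] = LabelEncoder().fit_transform(df["label"])               # label encoding of the categorical target
features = MinMaxScaler().fit_transform(df.drop(columns=["label"]))   # normalize the feature columns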
Data Splitting: Breaking the dataset into train and test segments to evaluate model performance.
CODE:
from tkinter import *
from tkinter import simpledialog
import tkinter
from tkinter import filedialog
import os
import pickle
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from keras.models import Sequential, load_model
from keras.layers import Conv1D, MaxPooling1D, Flatten, RepeatVector, Bidirectional, LSTM, Dropout, Dense
from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical
main = tkinter.Tk()
main.title("Pain Recognition")  # designing main screen
main.geometry("1300x1200")
text = Text(main, height=25, width=140)  # output console used by the functions below
text.pack()
def uploadDataset():
    global filename, dataset, labels, values, unique
    filename = filedialog.askopenfilename(initialdir="Dataset")
    text.delete('1.0', END)
    text.insert(END, 'Dataset loaded\n\n')
    dataset = pd.read_csv(filename)
    dataset.fillna(0, inplace=True)
    text.insert(END, str(dataset))
    data = dataset.values
    plt.plot(data)
    plt.xlabel("Number of Records")
    plt.ylabel("Signals")
    plt.title("EEG Signal from all Subjects")
    plt.show()
def processDataset():
    global dataset, X, Y
    global X_train, X_test, y_train, y_test, pca, scaler, labels
    text.delete('1.0', END)
    dataset = dataset.values
    X = dataset[:, :-1]               # assuming the last column holds the pain label
    Y = dataset[:, -1].astype(int)
    scaler = MinMaxScaler()
    X = scaler.fit_transform(X)       # normalize features to the 0-1 range
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
def trainRF():
    global X_train, y_train, X_test, y_test
    global algorithm, predict, test_labels, labels
    text.delete('1.0', END)
    rf = RandomForestClassifier(n_estimators=40, criterion='gini', max_features="log2",
                                min_weight_fraction_leaf=0.3)
    rf.fit(X_train, y_train)
    predict = rf.predict(X_test)  # perform prediction on test data
    calculateMetrics("Existing Random Forest", predict, y_test, labels)  # calculate accuracy and other metrics
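# NOTE: calculateMetrics() is called above but its definition is not part of this report excerpt.
# The following is a minimal illustrative sketch (not the authors' original code); it assumes the
# sklearn.metrics imports and the global `text` widget defined earlier.
def calculateMetrics(algorithm, predict, y_test, labels):
    acc = accuracy_score(y_test, predict) * 100
    p = precision_score(y_test, predict, average='macro') * 100
    r = recall_score(y_test, predict, average='macro') * 100
    f = f1_score(y_test, predict, average='macro') * 100
    text.insert(END, algorithm + " Accuracy  : " + str(acc) + "\n")
    text.insert(END, algorithm + " Precision : " + str(p) + "\n")
    text.insert(END, algorithm + " Recall    : " + str(r) + "\n")
    text.insert(END, algorithm + " FScore    : " + str(f) + "\n\n")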
def trainCNNBILSTM():
    global X_train, y_train, X_test, y_test
    global algorithm, predict, labels
    text.delete('1.0', END)
    X_train = np.reshape(X_train, (X_train.shape[0], 34, 4))
    X_test = np.reshape(X_test, (X_test.shape[0], 34, 4))
    y_train = to_categorical(y_train)
    y_test = to_categorical(y_test)
    # create CNN sequential object
    propose_model = Sequential()
    # CNN1D layer with 32 filters for data filtration and kernel size 3
    propose_model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(X_train.shape[1], X_train.shape[2])))
    # further CNN layers with 64 and 128 filters
    propose_model.add(Conv1D(filters=64, kernel_size=2, activation='relu'))
    propose_model.add(Conv1D(filters=128, kernel_size=2, activation='relu'))
    # max pooling layer to collect relevant features from the CNN layers
    propose_model.add(MaxPooling1D(pool_size=1))
    propose_model.add(Flatten())
    propose_model.add(RepeatVector(2))
    # BI-LSTM layers with 32 and 64 units to optimize the CNN features
    propose_model.add(Bidirectional(LSTM(32, activation='relu', return_sequences=True)))
    propose_model.add(Bidirectional(LSTM(64, activation='relu')))
    # dropout layer to discard irrelevant features
    propose_model.add(Dropout(0.2))
    # dense and output prediction layers
    propose_model.add(Dense(units=100, activation='relu'))
    propose_model.add(Dense(units=y_train.shape[1], activation='softmax'))
    # compile and train the model
    propose_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    if os.path.exists("model/propose_weights.hdf5") == False:
        model_check_point = ModelCheckpoint(filepath='model/propose_weights.hdf5', verbose=1, save_best_only=True)
        hist = propose_model.fit(X_train, y_train, batch_size=32, epochs=10, validation_data=(X_test, y_test),
                                 callbacks=[model_check_point], verbose=1)
        f = open('model/propose_history.pckl', 'wb')
        pickle.dump(hist.history, f)
        f.close()
    else:
        propose_model = load_model("model/propose_weights.hdf5")
    # perform prediction on test data
    predict = propose_model.predict(X_test)
    predict = np.argmax(predict, axis=1)
def predict():
    global X_train, y_train, X_test, y_test
    global algorithm, predict, labels, extension_model
    text.delete('1.0', END)
    testData = pd.read_csv("Dataset/testData.csv")  # reading test data
    testData.fillna(0, inplace=True)
    temp = testData.values
    testData = testData.values
    test = scaler.transform(testData)  # normalizing values
    test = np.reshape(test, (test.shape[0], 34, 4))
    predict = extension_model.predict(test)  # performing prediction on test data using the extension model object
    for i in range(len(predict)):
        y_pred = np.argmax(predict[i])
        text.insert(END, "Test Data = " + str(temp[i]) + " Predicted Pain Type ====> " + labels[y_pred] + "\n")
CHAPTER 4
RESULTS AND DISCUSSION
4.1 Comparison of Existing Solutions:
Method/Technique: Support Vector Machines (SVM)
Description: A traditional machine learning model that classifies signals based on extracted features.
Strengths: Effective for linear separation; well-established in feature-based analysis.
Weaknesses: Relies on manually extracted features; poor performance with noisy signals.
Performance: Moderate accuracy; varies depending on feature extraction.

Method/Technique: Random Forest
Description: Ensemble method that uses decision trees for classification based on hand-crafted features.
Strengths: Robust to overfitting; handles nonlinearities well.
Weaknesses: Still dependent on manual feature extraction; computationally intensive.
Performance: Moderate accuracy with varying subject responses.
Discussion
Data Collection
Source of Data:
The data was collected from publicly available datasets containing physiological signals such
as heart rate, skin conductivity, and facial expressions. These signals were captured under
varying pain conditions.
Preprocessing: The raw signals were cleaned and normalized to remove noise and inconsistencies
before being used for training and testing.
Performance Metrics
To evaluate the effectiveness of the models, the following metrics were used:
Accuracy:
Measures the percentage of correctly classified instances out of the total instances.
The CNN + Bi-LSTM hybrid model achieved the highest accuracy (84.8% to 87.8%).
Sensitivity (Recall):
Indicates the ability of the model to correctly identify true positives
(pain conditions).
The hybrid model showed sensitivity ranging from 81% to 86%,
demonstrating its effectiveness in identifying pain signals.
Specificity:
Measures the ability to correctly identify true negatives (no-pain
conditions).
The hybrid model recorded specificity values between 85% and 88%,
showing it is reliable in excluding false positives.
Validation Method:
LOSO Cross-Validation (Leave-One-Subject-Out) was employed to
assess the models.
This approach tests the model’s ability to generalize by training on all
but one subject and testing on the excluded subject.
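A minimal sketch of LOSO evaluation with scikit-learn's LeaveOneGroupOut is shown below; the toy data, the subject grouping, and the Random Forest classifier are placeholders for illustration only.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in data: rows are signal windows, `subjects` holds the subject ID of each window
X = np.random.rand(300, 20)
y = np.random.randint(0, 2, 300)            # e.g. Pain 0 vs. Pain 4
subjects = np.repeat(np.arange(30), 10)     # 30 subjects, 10 windows each

logo = LeaveOneGroupOut()
scores = cross_val_score(RandomForestClassifier(), X, y, groups=subjects, cv=logo)
print("LOSO mean accuracy: %.1f%% +/- %.1f%%" % (100 * scores.mean(), 100 * scores.std()))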
Key Observations from Metrics:
Traditional Models (SVM, Random Forest):
Achieved moderate performance with 65% to 75% accuracy, limited
by dependence on manually extracted features.
Deep Learning Models (CNN, Bi-LSTM):
Showed significant improvement with accuracies of 75% to 82%,
leveraging automatic feature extraction.
Best Performer:
The CNN + Bi-LSTM hybrid model excelled in all metrics,
showcasing its ability to combine spatial and temporal feature learning
for superior results.
These metrics underscore the importance of hybrid architectures and
robust validation in developing effective solutions for pain detection.
CHAPTER 5
CONCLUSION
The study demonstrates that the CNN + Bi-LSTM hybrid model is the most robust and
effective solution for real-time pain detection from physiological signals. This approach
leverages multi-level spatial and temporal feature extraction to outperform traditional
machine learning models and standalone deep learning methods.
The findings highlight the importance of combining CNNs and Bi-LSTMs in pain
detection tasks to provide both spatial and temporal context, resulting in higher accuracy
and better generalization. Future improvements could focus on model optimization to
ensure real-time implementation on edge devices.
CHAPTER 6
REFERENCES
1. https://www.researchgate.net/publication/336447252_Automatic_Recognition_Methods_Supporting_Pain_Assessment_A_Survey
2. https://ieeexplore.ieee.org/document/9666322
3. https://www.researchgate.net/publication/319316942_Analysis_of_Facial_Expressiveness_During_Experimentally_Induced_Heat_Pain