Selecting the best location to establish a new business site is critical to achieving success, and is therefore one of the most important aspects of any business plan. Multi-criteria decision-making methods such as the Analytic Hierarchy Process (AHP) have been used to elicit information that supports the decision of business site selection. However, AHP often involves multiple decision makers, each with their own opinions and biases, and different decision makers will hold different views on the importance of the criteria and sub-criteria in the AHP model. In this study, three aggregation methods that can be used to carefully combine the judgements of multiple decision makers into a single group judgement are discussed. The single group judgement then serves as input to the AHP model in order to achieve the goal of selecting the most suitable business location. The case study for this paper is the selection of a location for a telecommunication payment point. From this case study, a conclusion is drawn on the best aggregation method for selecting a location to set up a business of the telecommunication nature.
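One aggregation method commonly compared in such studies is the element-wise geometric mean of the individual pairwise-comparison matrices (aggregation of individual judgements). A minimal sketch, assuming reciprocal AHP matrices; the function names and the sample matrices below are illustrative, not taken from the paper:

```python
import numpy as np

def aggregate_judgements(matrices):
    """Aggregate individual pairwise-comparison matrices into one
    group matrix using the element-wise geometric mean."""
    stacked = np.stack(matrices)
    return np.exp(np.log(stacked).mean(axis=0))

def priority_vector(matrix):
    """Approximate AHP priorities by normalizing the geometric
    mean of each row (the row-geometric-mean method)."""
    row_gm = np.exp(np.log(matrix).mean(axis=1))
    return row_gm / row_gm.sum()

# Two decision makers comparing three location criteria.
dm1 = np.array([[1.0, 3.0, 5.0], [1/3, 1.0, 2.0], [1/5, 1/2, 1.0]])
dm2 = np.array([[1.0, 2.0, 4.0], [1/2, 1.0, 3.0], [1/4, 1/3, 1.0]])
group = aggregate_judgements([dm1, dm2])
weights = priority_vector(group)
```

A useful property of the geometric mean is that the aggregated matrix stays reciprocal, which arithmetic averaging does not guarantee.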
The location of a business site is one of the main factors that can determine the success of the business. Many criteria are taken into consideration when selecting the location of a business site, so decision makers need to reach an agreement when evaluating the criteria. Decision making involving multiple criteria is a complex task, and over the years many multi-criteria decision-making (MCDM) methods have been researched and developed. In this paper, a model combining the Fuzzy Analytic Hierarchy Process (FAHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) for site selection is discussed. This model is used to rank six utility payment points in Selangor, Malaysia to determine the effect of the business site on sales performance.
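The TOPSIS half of such a model can be sketched compactly, assuming criterion weights have already been produced by the FAHP stage; the matrices and weights below are illustrative, not the paper's data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    `matrix` holds one row per alternative (e.g. payment point) and
    one column per criterion; `benefit[j]` is True when criterion j
    is better-when-larger."""
    m = np.asarray(matrix, dtype=float)
    v = (m / np.linalg.norm(m, axis=0)) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)

# Two candidate sites scored on two benefit criteria; in the combined
# model the weights would come from FAHP.
scores = topsis([[1.0, 1.0], [2.0, 2.0]], [0.5, 0.5], [True, True])
```

Alternatives are ranked by descending closeness score, with 1.0 meaning the alternative coincides with the ideal solution.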
Hand gestures and deep learning strategies can be used to control a virtual robotic arm for real-time applications. The robotic arm is portable, can be easily programmed to perform work similar to that of a human hand, and is controlled using deep learning techniques. Deep hand is a combination of virtual reality and deep learning techniques. The active spatio-temporal features and the corresponding pose parameters for various hand movements are estimated, in order to determine the unknown pose parameters of hand gestures using various deep learning algorithms. A novel framework estimates hand gestures using a deep convolutional neural network (CNN) and a deep belief network (DBN), and a comparison in terms of accuracy and recognition rate is evaluated in this paper. By analyzing the movement of a human hand and its fingers, a robotic arm can be controlled with a high recognition rate and a low error rate.
In this paper, an automated system for grading the severity level of Diabetic Retinopathy (DR) disease based on fundus images is presented. Features are extracted using the fast discrete curvelet transform. These features are applied to a hierarchical support vector machine (SVM) classifier to obtain four grading levels, namely normal, mild, moderate and severe. These grading levels are determined based on the number of anomalies, such as microaneurysms, hard exudates and haemorrhages, present in the fundus image. The performance of the proposed system is evaluated using fundus images from the Messidor database. Experimental results show that the proposed system can achieve an accuracy rate of 86.23%.
Jobstreet Salary Guide 2017
Deep learning is a powerful representation-learning technique and can be used to learn features within text. The learned features are useful for solving Natural Language Processing problems. In this paper we review key literature related to deep learning and its applications in text analysis.
Location analytics has been employed to capture insights in business, retail, disaster planning, public safety, energy conservation, and many more domains. Despite its success across these domains, obtaining an optimal set of features or criteria for analysis remains a challenge. Feature selection therefore plays an important role in obtaining the optimal features, as it determines which factors are valuable and significant enough to be included in the final analytical dataset. In this light, feature selection was proposed to optimize the geospatial features used to predict sales and to recommend locations for establishing new outlets. In this study, sales data from a telecommunication company was used. The paper ends with the results of empirical experiments and a recommendation of location characteristics that optimize yearly sales.
We present a fuzzy-based decision embedded in rule-based methods that spontaneously adjusts pixel intensity for every frame in a given video by evaluating both the hue type and the intensity level prior to the feature extraction step. The term fuzzy is used to answer questions such as "How low or high must the frame brightness be for the frame to be categorized as bright or dark?" and vice versa. Compared with normal background subtraction and hard decisions, the designed fuzzy mechanism intends to demonstrate an enhancement, unravelling the illumination discrepancies that have long been the nemesis of video analysis. We illustrate the improvement in the post-processing phase by applying the results to a gait recognition task. The results are measured and compared using the Correct Classification Rate (CCR) of each approach. More than one thousand hand-held recorded videos and static surveillance videos were acquired as experiment samples.
The success of a business depends heavily on the business site. Various criteria affect the selection of the business site; hence decision makers need to come to a consensus when evaluating these criteria. Decision making involving multiple criteria is a complex task, so many Multi-Criteria Decision Making (MCDM) methods have been developed, of which the Analytic Hierarchy Process (AHP) is one of the most commonly used. In this study, an AHP model for site selection is validated on four utility payment points in Selangor to determine the ranking of these payment points in terms of their sales.
Objectives: The objective of this paper is to classify multimodal human actions of the Berkeley Multimodal Human Action Database (MHAD). Methods/Statistical analysis: Actions from the accelerometer and motion capture modalities are utilized in this study. Features extracted include statistical measures such as minimum, maximum, mean, median, standard deviation, kurtosis and skewness. Feature-extraction-level fusion is applied to form a feature vector comprising the two modalities. Feature selection is implemented using Particle Swarm Optimization (PSO), Tabu, and Ranker. Classification is performed with Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbour (k-NN) and Best First Tree (BFT). Findings: The classification model that gave the highest accuracy is the Support Vector Machine with Radial Basis Function kernel, with a correct classification rate (CCR) of 97.6% for the accelerometer modality (Acc), 99.8% for the motion capture system modality (Mocap), and 99.8% for the fusion modality (FusionMA). In the feature selection process, Ranker selected every single extracted feature (162 features for Acc, 1161 features for Mocap and 1323 features for FusionMA) and produced an average CCR of 97.4%. In comparison, PSO (68 features for Acc, 350 features for Mocap and 412 features for FusionMA) produced an average CCR of 97.1%, and Tabu (54 features for Acc, 199 features for Mocap and 323 features for FusionMA) produced an average CCR of 97.2%. Although Ranker gave the best result, the difference in the average CCR is not significant; thus, PSO and Tabu may be more suitable in this case, as the reduced feature set can result in computational speedup and reduced complexity. Application/Improvements: The extracted statistical features are able to produce high accuracy in the classification of multimodal human actions. The feature-extraction-level fusion combining the two modalities performs better than a single modality in classification.
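The statistical measures listed above, and the feature-extraction-level fusion by concatenation, can be sketched in a few lines. This uses simple population-moment versions of skewness and kurtosis; the sample signals are illustrative, not MHAD data:

```python
import numpy as np

def statistical_features(signal):
    """Statistical measures of a 1-D sensor stream: min, max, mean,
    median, standard deviation, kurtosis (excess), skewness."""
    s = np.asarray(signal, dtype=float)
    mu, sd = s.mean(), s.std()
    skew = ((s - mu) ** 3).mean() / sd ** 3
    kurt = ((s - mu) ** 4).mean() / sd ** 4 - 3.0
    return np.array([s.min(), s.max(), mu, np.median(s), sd, kurt, skew])

# Feature-extraction-level fusion: concatenate per-modality vectors.
acc = statistical_features([0.1, 0.4, 0.2, 0.9, 0.5])
mocap = statistical_features([1.2, 1.1, 1.5, 1.4, 1.0])
fused = np.concatenate([acc, mocap])
```

In a full pipeline these vectors would be computed per axis and per window before feature selection and classification.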
Crowdsourced analytics has become an increasing trend among companies exploring data-driven decision making, and is a viable choice for such companies, especially in terms of cost. One push factor for increased adoption would be the identification of key metrics which can be measured throughout the process of a crowdsourced analytics project, for the purposes of post-project evaluation and future planning. In this paper, we review generic measures for crowdsourcing projects, and from these identify key measures useful for a crowdsourced analytics project.
With the rising popularity of social media such as Facebook, Twitter, Instagram and many more, sentiment classification for social media has become a hot research topic. Many studies have been conducted on Twitter as it is one of the most widely used social media platforms. Previous studies have approached the problem as a tweet-level classification task where each tweet is classified as positive, negative or neutral. However, an overall sentiment might not be useful to business organizations that use Twitter for monitoring consumer opinion of their products/services; it is more useful to determine specifically which tweets users are happy or unhappy about. This paper proposes the discovery of Twitter user-level interestingness based on relationships such as retweets, reply-mentions and pure-mentions using Google's PageRank algorithm. We conducted experiments and compared the results with hand-marked results by seven annotators.
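The core of PageRank over such relationship graphs is a short power iteration. A minimal sketch on a toy retweet graph; the adjacency convention and the data are illustrative, not the paper's dataset:

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on a directed adjacency matrix,
    where adj[i, j] = 1 means user i links to user j (e.g. i
    retweeted or mentioned j). Dangling users spread rank uniformly."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    trans = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (trans.T @ r)
    return r

# Users 0 and 1 both retweet user 2, so user 2 should rank highest.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
ranks = pagerank(adj)
```

The returned vector sums to one, so user-level interestingness can be read off directly as a probability-like score.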
Human action recognition from low quality video remains a challenging task for the action recognition community. Recent state-of-the-art methods such as space-time interest points (STIP) use shape and motion features to characterize actions. However, STIP features are over-reliant on video quality and lack robust object semantics. This paper harnesses the robustness of deeply learned object features from off-the-shelf convolutional neural network (CNN) models to improve action recognition under low quality conditions. A two-channel framework is proposed that aggregates shape and motion features extracted using the STIP detector with frame-level object features obtained from the final few layers (i.e. FC6, FC7, softmax) of a state-of-the-art image-trained CNN model. Experimental results on low quality versions of two publicly available datasets, UCF-11 and HMDB51, showed that using CNN object features together with conventional shape and motion features can greatly improve the performance of action recognition in low quality videos.
This work aims to develop an information retrieval application based on augmented reality (AR) technologies to enhance visitors' experience in a museum exhibition. The purpose of developing this application is to give museum visitors a customized interactive experience through a handheld smartphone. The application recognizes objects of interest through feeds from the smartphone's camera in real time, retrieves information about those objects, and overlays the information over the object. This is achieved with vision-based AR utilizing 3D object tracking, thus eliminating the use of markers, which could prove unreliable due to obfuscation or damage.
This study reports the classification of subdural and extradural hematomas in brain CT images. The major difference between subdural and extradural hematomas lies in their shapes; therefore, eight shape descriptors are proposed to describe the characteristics of the two types of hematoma. The images first undergo a pre-processing step consisting of two-level contrast enhancement separated by a parenchyma extraction process. Next, k-means clustering is performed to gather all Regions of Interest (ROIs) into one cluster. Prior to classification, shape features are extracted from each ROI. Finally, for classification, fuzzy k-Nearest Neighbour (fuzzy k-NN) and Linear Discriminant Analysis (LDA) are employed to classify the regions into subdural hematoma, extradural hematoma or normal regions. Experimental results suggest that fuzzy k-NN produces the best accuracy, achieving over 93% correct classification on a set of 109 subdural and 247 extradural hematoma regions, as well as 629 normal regions.
We present a heuristic method to automatically adjust the pixel intensity of each video frame by analyzing its colour type and brightness level before initiating the silhouette extraction phase. As this is performed in the pre-processing phase, our proposed method aims to show an improvement over normal background subtraction for videos containing illumination inconsistencies. We introduce two modules: a prior processing module and an illumination modeling module. The prior processing module consists of resizing and smoothing operations on the relevant frames in order to accommodate the subsequent module. The illumination modeling module manipulates pixel values in each frame to improve silhouette extraction for videos containing illumination inconsistencies. The proposed method is tested on 1072 videos, including videos from the external KTH database.
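The per-frame intensity adjustment idea can be illustrated with a much simpler stand-in than the paper's two modules: scale each greyscale frame so its mean brightness hits a fixed target. This mean-scaling rule is an assumption for illustration, not the authors' method:

```python
import numpy as np

def adjust_frame_intensity(frame, target_mean=128.0):
    """Scale a greyscale frame so its mean brightness matches a
    target level, clipping back into the valid 8-bit range. This
    stabilizes downstream background subtraction across frames
    with inconsistent illumination."""
    frame = np.asarray(frame, dtype=float)
    if frame.mean() == 0:
        return frame  # avoid division by zero on an all-black frame
    return np.clip(frame * (target_mean / frame.mean()), 0.0, 255.0)
```

A real pipeline would condition the adjustment on the frame's hue type and brightness category rather than using one fixed target.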
Human action recognition is a well researched problem, which is considerably more challenging when video quality is poor. In this paper, we investigate human action recognition in low quality videos by leveraging the robustness of textural features to better characterize actions, instead of relying on shape and motion features, which may fail under noisy conditions. To accommodate videos, texture descriptors are extended to three orthogonal planes (TOP) to extract spatio-temporal features. Extensive experiments were conducted on low quality versions of the KTH and HMDB51 datasets to evaluate the performance of our proposed approaches against standard baselines. Experimental results and further analysis demonstrate the usefulness of textural features in improving the capability of recognizing human actions from low quality videos.
With the rising popularity of social media platforms such as Twitter, sentiment classification for social media has become a hot research topic. Many studies have been conducted on Twitter as it is one of the most widely used social media platforms. Previous studies have approached the problem as a tweet-level classification task where each tweet is classified as positive, negative or neutral. However, an overall sentiment might not be useful to business organizations that use Twitter for monitoring consumer opinion of their products or services; it is more useful to determine specifically which tweets users are happy or unhappy about. This paper proposes the discovery of Twitter user-level interestingness based on relationships such as Retweets, Reply-Mentions and Pure-Mentions using Google's PageRank algorithm. We conducted experiments on tweets related to telecommunications companies and compared the results with hand-marked results by seven annotators.
The objective of this paper is to analyse the gait of subjects suffering from Parkinson's Disease (PD) and to differentiate their gait from that of normal people. The data is obtained from a medical gait database known as Gaitpdb [1]. The data set contains 73 control subjects and 93 subjects with PD. In our study, we first obtained the gait features using statistical analysis, which includes the minimum, maximum, median, kurtosis, mean, skewness, standard deviation and average absolute deviation of the gait signal. Next, selection of the extracted features is performed using PSO search, Tabu search and Ranker. Finally, the selected features undergo classification using BFT, BPANN, k-NN, SVM with a linear kernel, SVM with a polynomial kernel and SVM with an RBF kernel. From the experimental results, the proposed model achieved average correct classification rates of 66.43%, 89.97%, 87.00%, 88.47%, 86.80% and 87.53% respectively.
This paper describes the baseline corpus of a new multimodal biometric database, the MMU GASPFA (Gait–Speech–Face) database. The corpus in GASPFA is acquired using commercial off the shelf (COTS) equipment including digital video cameras, a digital voice recorder, a digital camera, a Kinect camera and accelerometer-equipped smart phones. The corpus consists of frontal face images from the digital camera, speech utterances recorded using the digital voice recorder, gait videos with their associated data recorded using both the digital video cameras and the Kinect camera simultaneously, as well as accelerometer readings from the smart phones. A total of 82 participants had their biometric data recorded. MMU GASPFA is able to support both multimodal biometric authentication and gait action recognition. This paper describes the acquisition setup and protocols used in MMU GASPFA, as well as the content of the corpus. Baseline results from a subset of the participants are presented for validation purposes.
► The MMU GASPFA database contains audio, video and accelerometer data for 82 subjects.
► Data were recorded using commercial off the shelf sensors in a real live setting.
► Gait data were recorded using both digital video and video with depth and accelerometers.
► Gait data were recorded under 11 conditions, including novel usage of traditional clothing.
► Gait actions such as standstill, normal walk, brisk walk and encumbered walk are recorded.
In recent years, attacks on password databases have been carried out at an increasing rate, with significant success. Thus, a new approach is needed to prove one's claim to identity instead of relying on a password. In this paper, we investigate the use of biometric match scores for the purpose of verification. Our work was performed using the BSSR1 multimodal match score biometric dataset, which contains match scores from face and fingerprint biometric systems. We investigated the use of match scores as a feature vector, and performed Simple Sum and Product Rule fusion of match scores. The results we obtained demonstrate that match scores can be used for highly accurate verification.
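The two fusion rules named above are simple to state once the per-modality scores are normalized to a common range. A minimal sketch; the score values below are illustrative, not drawn from BSSR1:

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def simple_sum(face, finger):
    """Simple Sum rule: add the normalized scores per claim."""
    return np.asarray(face, dtype=float) + np.asarray(finger, dtype=float)

def product_rule(face, finger):
    """Product rule: multiply the normalized scores per claim."""
    return np.asarray(face, dtype=float) * np.asarray(finger, dtype=float)

# Illustrative scores: the first claim is genuine, the rest impostors.
face = min_max_normalize([120.0, 35.0, 40.0])
finger = min_max_normalize([0.91, 0.20, 0.15])
fused = simple_sum(face, finger)
```

A verification decision then reduces to thresholding the fused score, with the threshold chosen on a development set.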
The modeling of dengue fever cases is an important task to help public health officers to plan and prepare their resources to prevent dengue fever outbreak. In this paper, we present the time-series modeling of accumulated dengue fever cases acquired from the Malaysian Open Data Government Portal. Evaluation of the forecast for future dengue fever outbreak shows promising results, as evidence is presented for the trend and seasonal nature of dengue fever outbreaks in Malaysia.
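A seasonal-naive baseline illustrates the kind of trend-and-seasonality forecast the abstract refers to; the function and data here are illustrative assumptions, as the paper's actual model is not specified in this summary:

```python
def seasonal_naive_forecast(series, period, horizon):
    """Forecast each future point as the observation one full season
    earlier -- a standard baseline for seasonal counts such as
    weekly accumulated dengue cases."""
    history = list(series)
    return [history[-period + (h % period)] for h in range(horizon)]
```

Any proposed time-series model for outbreak forecasting should at least beat this baseline on held-out weeks before being trusted for planning.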
Shape, motion and texture features have recently gained much popularity in their use for human action recognition. While many of these descriptors have been shown to work well against challenging variations such as appearance, pose and illumination, the problem of low video quality is relatively unexplored. In this paper, we propose a new idea of jointly employing these three features within a standard bag-of-features framework to recognize actions in low quality videos. The performance of these features was extensively evaluated and analyzed under three spatial downsampling and three temporal downsampling modes. Experiments conducted on the KTH and Weizmann datasets with several combinations of features and settings showed the importance of all three features (HOG, HOF, LBP-TOP), and how low quality videos can benefit from the robustness of textural features.
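The bag-of-features pooling step shared by all three descriptors can be sketched as nearest-codeword quantization followed by histogramming. The codebook and descriptors below are toy values; a real pipeline would learn the codebook with k-means over training descriptors:

```python
import numpy as np

def bag_of_features(descriptors, codebook):
    """Quantize local descriptors (e.g. HOG/HOF/LBP-TOP vectors) to
    their nearest codeword and return a normalized histogram -- the
    fixed-length video-level representation fed to the classifier."""
    d = np.asarray(descriptors, dtype=float)   # (n_descriptors, dim)
    c = np.asarray(codebook, dtype=float)      # (n_codewords, dim)
    dists = np.linalg.norm(d[:, None, :] - c[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    hist = np.bincount(nearest, minlength=len(c)).astype(float)
    return hist / hist.sum()

hist = bag_of_features([[0, 1], [9, 9], [10, 11], [1, 0]],
                       [[0, 0], [10, 10]])
```

Joint use of the three features amounts to concatenating (or otherwise combining) such histograms, one per descriptor type.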
This paper presents the dataset collected from student interactions with INQPRO, a computer-based scientific inquiry learning environment. The dataset contains records of 100 students and is divided into two portions. The first portion comprises (i) raw log data, capturing the student's name, the interfaces visited, the interface components the student interacted with, the actions performed by the student, and the values asserted at a particular interface component; and (ii) transformed log data, a restructured and refined version of the raw log data that takes the form of attribute-value pair records. The second portion of the dataset consists of pretest-posttest results. This paper begins with an overview of INQPRO and a discussion of how student-computer interactions were captured. Subsequently, the process of pre-processing and transforming the raw log data into relational database tables is presented. Two applications of the INQPRO dataset are presented: the first discusses how students' levels of scientific inquiry skills can be extracted from the dataset, while the second demonstrates how the dataset supports the prediction of conceptual change occurrence. The paper ends by highlighting potential future research using this dataset, including techniques to elicit clusters of students and the provision of adaptive pedagogical interventions as the student interacts with INQPRO. In conclusion, this dataset attempts to contribute to the research community through (i) time and cost savings in acquiring field data, and (ii) serving as a benchmark dataset to evaluate and compare different predictive models.
This is a preprint draft; the full-text paper is available at
http://onlinelibrary.wiley.com/doi/10.1111/bjet.12331/abstract
while a sample of the dataset can be found at http://pesona.mmu.edu.my/~cyting/INQPRO/dataset.zip and the full data can be acquired by contacting the first author at cyting@mmu.edu.my.
Research and development of exercise recognition applications have predominantly focused on motion-related exercises, with little emphasis on weight lifting. At the same time, while such applications support the posting of completed exercise sessions on social networks, the veracity of the post is entirely determined by the user of the application. In this paper, we present the building blocks for a weight lifting application. It recognizes and counts the repetitions of a weight lifting exercise, and subsequently posts the result on the user's behalf, thus ensuring the veracity of the post. Our empirical results demonstrate the potential of such an application.
Social Networks such as Facebook, Twitter, Google+ and LinkedIn have millions of users. These networks are constantly evolving, and they are a good source of information, both explicit and implicit. Social Network Analysis mainly focuses on the social networking aspect, with an emphasis on mapping relationships and patterns of interaction between users and content. One common research topic is centrality measures, where useful information about the connected people in the social network is represented as a graph. In this paper, we employed two link-based ranking algorithms to analyze the ranking of users: HITS (Hyperlink-Induced Topic Search) and PageRank. We constructed a Twitter user retweet-relationship graph using 21 days' worth of data. Lastly, we compared the ranking sequence of the users against their follower counts relative to the average, and against whether they are verified Twitter accounts. From the results obtained, both HITS and PageRank showed a similar trend and, more importantly, highlighted the importance of the direction of the edges in this work.
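One of the two ranking algorithms, PageRank, can be sketched as a short power iteration on a directed graph. The tiny "retweet" graph below is a made-up illustration (an edge u → v meaning u retweeted v, so rank flows to v), not the 21-day Twitter data used in the paper, and the damping factor 0.85 is the commonly used default rather than a value from this work.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict mapping each node to the
    list of nodes it links (retweets) to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new = {v: (1.0 - damping) / n for v in nodes}
        for u in nodes:
            out = graph[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank evenly over all nodes
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

retweets = {            # hypothetical "who retweeted whom" graph
    "alice": ["carol"],
    "bob":   ["carol"],
    "carol": ["dave"],
    "dave":  [],
}
ranks = pagerank(retweets)
```

Because edges are directed, a heavily retweeted account like "carol" outranks the accounts doing the retweeting, which is the directionality effect the abstract highlights.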
This paper describes the acquisition setup and development of a new gait database, MMUGait DB. The database was captured in side and oblique views; 82 subjects participated under normal walking conditions and 19 subjects walked under 11 covariate factors. The database includes the sarong and kain samping, traditional costumes of the ethnic Malays in South East Asia, as changes of apparel. Classification experiments were carried out on MMUGait DB and the baseline results are presented for validation purposes.
Incorporating affect into conceptual change modeling for a computer-based scientific inquiry learning environment is difficult. The challenges stem mainly from three perspectives: first, identifying the appropriate variables of affect that influence conceptual change; second, determining the causal dependencies between the variables of affect and the variables of conceptual change; third, assessing the evolving states of affect as a student interacts with computer-based learning activities. This research work employed Bayesian Networks to tackle these challenges. Three Bayesian Network models of conceptual change were proposed and integrated into INQPRO, a scientific inquiry learning environment developed in this research work. The first model has only nodes of conceptual change, while the second and third models also have nodes of affect. Two phases of empirical study were conducted involving a total of 143 students, and the findings suggested that the third model, which has nodes of affect, outperformed the models without them.
N-tier application design has become very common in the IT industry. Each individual layer, such as the application and data layers, has its own main functionality. This design is very helpful in securing the application from unauthorized access and in protecting it from attacks on the data layer. The data layer is the core of a company's business, as all the company's important information is stored there, normally on a secured offline server with limited local network access. The application layer acts as the medium that exchanges data between the client layer and the data layer over a network. As such, the application layer has been increasingly targeted for intrusion and attacks. This paper introduces a method to minimize the security risks in n-tier application design. The method focuses mainly on how to secure the application layer from attacks such as Denial of Service (DoS) and spoofing attacks. This is achieved through data protection measures, such as random encryption key generation and data encryption, at both the client application and the application layer.
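The "random encryption key generation" idea at the client/application boundary can be sketched with standard-library primitives: each session gets fresh random key material, and every message carries an authentication tag so the application layer can reject spoofed or tampered requests. This is an illustrative assumption about the design, not the paper's protocol; a real deployment would also encrypt the payload with an authenticated cipher (e.g. AES-GCM from a crypto library), which is not shown here.

```python
import hashlib
import hmac
import secrets

def new_session_keys():
    """Generate independent random 256-bit keys per session:
    one for encryption (cipher step not shown), one for authentication."""
    return secrets.token_bytes(32), secrets.token_bytes(32)

def tag_message(auth_key: bytes, payload: bytes) -> bytes:
    """Prepend an HMAC-SHA256 tag so the receiver can verify integrity."""
    return hmac.new(auth_key, payload, hashlib.sha256).digest() + payload

def verify_message(auth_key: bytes, tagged: bytes):
    """Return the payload if the tag checks out, else None."""
    tag, payload = tagged[:32], tagged[32:]
    expected = hmac.new(auth_key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

enc_key, auth_key = new_session_keys()   # enc_key would key the cipher (not shown)
msg = tag_message(auth_key, b"balance-query")
ok = verify_message(auth_key, msg)                        # the payload
bad = verify_message(auth_key, b"\x00" * 32 + b"forged")  # None: spoofed tag
```

`hmac.compare_digest` is used instead of `==` so that tag verification runs in constant time, which avoids leaking information through timing.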
Smart devices in the form of mobile phones and tablets are becoming increasingly ubiquitous. These devices are easily available and come equipped with powerful processors and sensors. These attributes suggest that smart devices could be used successfully as experimental devices, particularly for the acquisition and recording of data. This paper examines the viability of using smart devices as experimental devices, taking into consideration Human-Computer Interaction and performance issues.
Security threats to computer workstations and servers have been receiving full attention from both cyber security companies and researchers. Researchers and security companies employ honeypots as a platform to capture both an attacker's profile and the behaviour of destructive programs (i.e., viruses, malware, Trojans). However, little attention has been given to security monitoring for smart mobile devices, which include smart phones and tablet PCs. Therefore, this paper proposes a conceptual framework for deploying honeypots in smart mobile devices. The proposed conceptual framework for mobile honeypots can run in two modes. In addition to conventional methods of capturing patterns of attacks, the conceptual framework also incorporates user behavioural modelling for a better understanding of specific user behaviour in cyber security.
This paper describes the baseline corpus of a new multimodal biometric database, the MMU GASPFA (Gait-Speech-Face) database. The corpus in GASPFA is acquired using commercial off the shelf (COTS) equipment including digital video cameras, digital voice recorder, digital camera, Kinect camera and accelerometer equipped smart phones. The corpus consists of frontal face images from the digital camera, speech utterances recorded using the digital voice recorder, gait videos with their associated data recorded using both the digital video cameras and Kinect camera simultaneously as well as accelerometer readings from the smart phones. A total of 82 participants had their biometric data recorded. MMU GASPFA is able to support both multimodal biometric authentication as well as gait action recognition. This paper describes the acquisition setup and protocols used in MMU GASPFA, as well as the content of the corpus. Baseline results from a subset of the participants are presented for validation purposes.
Research Interests:
Java, UML, and NetBeans
I had the opportunity to share my thoughts on Automatic Machine Learning during Google Cloud Next'19 Extended KL. Here's the slide deck. And here's the answer to some of the questions posed by the audience.
1. Will this take away my job? Nope. Subject matter expertise and imagination in solving a problem are still very much important.
2. Should I start my junior DS folks on this? Nope again. They need to be coached to know not just what they're doing but why they're making certain choices in solving a problem. That's invaluable when communicating insights to stakeholders down the line.
3. So when should I use this? As a first try at solving a problem, and when one is under time pressure for hackathons, academic papers (:D) or pitches, AutoML is a useful tool.
This is the slide deck for my presentation at confeRence 2018, an R user group conference hosted in Malaysia. It describes the state of RStudio Cloud circa October 2018.
A quick walkthrough on the Azure ML platform
Crowd-sourced analytics has become an increasing trend among companies exploring data-driven decision making. It is a viable choice for such companies, especially in terms of cost. One push factor for increased adoption would be the identification of key metrics that can be measured throughout the crowd-sourced analytics process, for the purposes of post-project evaluation and future planning. In this paper, we review generic measures for crowdsourcing projects, and from these identify the key measures useful for a crowd-sourced analytics project.
The modeling of dengue fever cases is an important task that helps public health officers plan and prepare their resources to prevent dengue fever outbreaks. In this paper, we present the time-series modeling of accumulated dengue fever cases acquired from the Malaysian Open Data Government Portal. Evaluation of the forecasts of future dengue fever outbreaks shows promising results, as evidence is presented for the trend and seasonal nature of dengue fever outbreaks in Malaysia.
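The trend-plus-seasonality view of such a series can be sketched very simply: estimate a per-period seasonal index as the average deviation from the overall level, then forecast each future slot as level plus its index. This is an illustrative toy (the counts below are made up, quarterly rather than the portal's actual granularity), not the model used in the paper.

```python
def seasonal_indices(series, period):
    """Average deviation from the overall mean for each position in the cycle."""
    mean = sum(series) / len(series)
    return [sum(series[k::period]) / len(series[k::period]) - mean
            for k in range(period)]

def seasonal_mean_forecast(series, period, steps):
    """Forecast each future slot as overall level + its seasonal index."""
    mean = sum(series) / len(series)
    idx = seasonal_indices(series, period)
    n = len(series)
    return [mean + idx[(n + h) % period] for h in range(steps)]

# Two years of hypothetical quarterly case counts with a recurring Q4 peak:
cases = [100, 120, 110, 180, 104, 124, 114, 184]
fc = seasonal_mean_forecast(cases, period=4, steps=4)
```

The forecast reproduces the recurring fourth-quarter peak, which is the kind of seasonal evidence the abstract refers to; a full treatment would also model the trend component explicitly.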
Hays' 2017 Salary Guide
Something different from Hays, at the Asian level
Universiti Teknologi Brunei is organizing a conference (CIIS 2016) this coming November and my colleague is chairing a special session on data mining, SoDM 2016. Please refer to http://www.itb.edu.bn/academics/sci/ciis2016/sodm2016.html

Accepted papers will be published by Springer in Advances in Intelligent Systems and Computing (ISSN: 2194-5357; indexed by ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and SpringerLink).

You are very much welcome to submit your papers to this special session. Please also help disseminate this info to your contacts. Do contact me if you have any questions.

There are special registration rates for papers published, as follows:

Regular papers: USD 250
Student papers (i.e., the first author is a student): USD 125
Three workshops will be organized in conjunction with IVIC 2015. Learn hands-on skills from experts in the following workshops:
Workshop 1
Big Data Mining on OpenSource Platform
Workshop 2
Exploring Scilab for the Internet of Things and Possibility of Big Data Analysis
Workshop 3
What’s the big deal about Big Data?
How to succeed in a Big Data Analytics project