
Samira Sadaoui

Our study explores offensive and hate speech detection for the Arabic language, as previous studies are minimal. Based on two-class, three-class, and six-class Arabic-Twitter datasets, we develop single and ensemble CNN and BiLSTM classifiers that we train with non-contextual (FastText-SkipGram) and contextual (Multilingual BERT and AraBERT) word-embedding models. For each hate/offensive classification task, we conduct a battery of experiments to evaluate the performance of single and ensemble classifiers on the testing datasets. The average-based ensemble approach was found to be the best performing, returning F-scores of 91%, 84%, and 80% for the two-class, three-class, and six-class prediction tasks, respectively. We also perform an error analysis of the best ensemble model for each task.
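The average-based ensemble simply averages the per-class probabilities emitted by the single classifiers before taking the argmax. A minimal NumPy sketch, with made-up probability outputs standing in for the trained CNN and BiLSTM models:

```python
import numpy as np

# Hypothetical class-probability outputs from two single classifiers
# (e.g., a CNN and a BiLSTM) for four tweets over three classes:
# 0 = normal, 1 = offensive, 2 = hate. Values are illustrative only.
cnn_probs = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.1, 0.3, 0.6],
    [0.4, 0.4, 0.2],
])
bilstm_probs = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.2, 0.2, 0.6],
    [0.2, 0.5, 0.3],
])

def average_ensemble(*prob_matrices):
    """Average the per-class probabilities of several classifiers,
    then predict the class with the highest mean probability."""
    mean_probs = np.mean(prob_matrices, axis=0)
    return mean_probs.argmax(axis=1)

predictions = average_ensemble(cnn_probs, bilstm_probs)
print(predictions)  # one predicted label per tweet
```

Averaging also resolves ties that a single model leaves open (the last tweet above is a 0.4/0.4 tie for the CNN alone).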
Constraint optimization consists of looking for an optimal solution that maximizes a given objective function while meeting a set of constraints. In this study, we propose a new algorithm based on mushroom reproduction for solving constraint optimization problems. Our algorithm, which we call Mushroom Reproduction Optimization (MRO), is inspired by the natural reproduction and growth mechanisms of mushrooms. This process includes the discovery of rich areas with good living conditions that allow spores to grow and develop their own colonies. Given that constraint optimization problems often suffer from a high computation-time cost, we thoroughly assess MRO's performance on well-known constrained engineering and real-world problems. The experimental results confirm the high performance of MRO, compared to other known meta-heuristics, in dealing with complex optimization problems.
Shill Bidding (SB) is still a predominant auction fraud because it is the toughest to identify due to its resemblance to standard bidding behavior. To reduce losses on the buyers' side, we devise an example-incremental classification model that can detect fraudsters from incoming auction transactions. Thousands of auctions occur every day on a commercial site, and to process this continuous, rapid data flow, we introduce a chunk-based incremental classification algorithm, which also tackles the imbalanced and non-linear learning issues. We train the algorithm incrementally with several training SB chunks and concurrently assess the performance and speed of the newly learned models using unseen SB chunks.
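Chunk-based incremental learning means each model update sees only the newest chunk, never the full history. The sketch below uses a toy online perceptron on synthetic two-feature data (the paper's real SB features and algorithm are not reproduced here); each training chunk refines the weights, and an unseen chunk assesses the current model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a stream of shill-bidding feature chunks:
# two features, label 1 = suspicious, 0 = normal (synthetic data).
def make_chunk(n=200):
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(n, 2))
    return X, y

# A minimal online perceptron: each call to partial_fit refines the
# weights on one chunk without revisiting earlier chunks.
class ChunkPerceptron:
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def partial_fit(self, X, y):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ self.w + self.b > 0 else 0
            err = yi - pred
            self.w += self.lr * err * xi
            self.b += self.lr * err

    def score(self, X, y):
        preds = (X @ self.w + self.b > 0).astype(int)
        return (preds == y).mean()

clf = ChunkPerceptron(n_features=2)
for _ in range(5):                      # five training chunks arrive over time
    X_train, y_train = make_chunk()
    clf.partial_fit(X_train, y_train)
    X_test, y_test = make_chunk()       # assess on an unseen chunk
    print(round(clf.score(X_test, y_test), 2))
```

In practice any classifier exposing a `partial_fit`-style update (e.g., linear models with stochastic gradient training) fits this chunk loop.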
We present a new nature-inspired approach based on the Focus Group Optimization Algorithm (FGOA) for solving Constraint Satisfaction Problems (CSPs). CSPs are NP-complete problems, meaning that solving them by classical systematic search methods requires exponential time, in theory. Appropriate alternatives are approximation methods such as metaheuristic algorithms, which have shown successful results when solving combinatorial problems. FGOA is a new metaheuristic inspired by a human collaborative problem-solving approach. In this paper, the steps of applying FGOA to CSPs are elaborated. More precisely, a new diversification method is devised to enable the algorithm to efficiently find solutions to CSPs by escaping local optima. To assess the performance of the proposed Discrete FGOA (DFGOA) in practice, we conducted several experiments on randomly generated hard-to-solve CSP instances (those near the phase transition) using the RB model. The results clearly show the ability of DFGOA to successfully find solutions to these problems in a very reasonable amount of time.
Shill Bidding (SB) is a serious auction fraud committed by clever scammers. The challenge of labeling multi-dimensional SB training data hinders research on SB classification. To safeguard individuals from shill bidders, in this study we explore Semi-Supervised Classification (SSC), which is the most suitable method for our fraud detection problem since SSC can learn efficiently from a few labeled data. To label a portion of the SB data, we propose an anomaly detection method that we combine with hierarchical clustering. We carry out several experiments to determine statistically the minimal sufficient amount of labeled data required to achieve the highest accuracy. We also investigate the misclassified bidders to see where the misclassification occurs. The empirical analysis demonstrates that SSC reduces the laborious effort of labeling SB data.
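One way to label a portion of the data by combining clustering with anomaly detection is to cluster the bidders and flag the smallest, most isolated cluster as suspicious. The toy sketch below uses a dependency-free 2-means in place of the paper's hierarchical clustering, on synthetic bidder features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bidder features: a large "normal" group and a small,
# distant group that an anomaly-detection step would flag as shills.
normal = rng.normal(0.0, 0.5, size=(95, 2))
outliers = rng.normal(4.0, 0.3, size=(5, 2))
X = np.vstack([normal, outliers])

def label_smallest_cluster(X, iters=20):
    """Toy stand-in for clustering-based labeling: cluster the data,
    then mark members of the smallest cluster as suspicious (label 1).
    The paper uses hierarchical clustering; plain 2-means is used here
    only to keep the sketch dependency-free. Deterministic init: the
    first and last rows serve as the two seed centers."""
    centers = np.array([X[0], X[-1]], dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(2):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    sizes = np.bincount(assign, minlength=2)
    return (assign == sizes.argmin()).astype(int)

labels = label_smallest_cluster(X)
print(labels.sum())  # number of bidders labeled suspicious
```

The resulting partial labels then seed the semi-supervised classifier, with the bulk of the data left unlabeled.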
Given the magnitude of monetary transactions at auction sites, they are very attractive to fraudsters and scam artists. Shill bidding (SB) is a severe fraud in e-auctions, which occurs during the bidding period and is driven by modern-day technology and clever scammers. SB does not produce any obvious evidence, and it often goes unnoticed by the victims. The lack of training datasets for SB and the difficulty of identifying the behavior of sophisticated fraudsters hinder research on SB detection. To safeguard consumers from dishonest bidders, we investigate, for the first time, semi-supervised classification (SSC), the most suitable approach to solving fraud classification problems. In this study, we first introduce two new SB patterns, and then, based on a total of nine SB patterns, we build an SB dataset from commercial auctions and bidder history data. SSC requires the labeling of a few SB data samples, and to this end, we propose an anomaly detection method based on data clustering. We address the skewed class distribution with a hybrid data sampling method. Our experiments in training several SSC models show that using primarily unlabeled SB data with a few labeled SB data improves predictive performance when compared to that of supervised models.
Online auctions have become one of the most convenient venues for fraud due to the large amount of money traded every day. Shill bidding is the predominant form of auction fraud, and it is also the most difficult to detect because it so closely resembles normal bidding behavior. Furthermore, shill bidding does not leave behind any apparent evidence, and it is relatively easy to use against innocent buyers. Our goal is to develop a classification model capable of efficiently differentiating between legitimate bidders and shill bidders. For our study, we employ an actual training dataset, but the data are unlabeled. First, we properly label the shill bidding samples by combining a robust hierarchical clustering technique and a semi-automated labeling approach. Since shill bidding datasets are imbalanced, we assess advanced over-sampling, under-sampling, and hybrid-sampling methods and compare their performance across several classification algorithms. The optimal shill bidding classifier displays high detection rates and low misclassification rates for fraudulent activities.
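Hybrid sampling rebalances a dataset by under-sampling the majority class and over-sampling the minority class at the same time. A minimal random-sampling sketch (the study evaluates more advanced methods; this only illustrates the mechanics):

```python
import numpy as np

rng = np.random.default_rng(2)

# Imbalanced toy labels: 90 normal (0) vs 10 shill (1) bidders.
y = np.array([0] * 90 + [1] * 10)
X = rng.normal(size=(100, 3))

def hybrid_sample(X, y, target=40):
    """Hedged sketch of hybrid sampling: randomly under-sample the
    majority class down to `target` rows and over-sample the minority
    class (with replacement) up to `target` rows. The paper assesses
    more advanced samplers; this shows only the basic idea."""
    maj_idx = np.where(y == 0)[0]
    min_idx = np.where(y == 1)[0]
    keep_maj = rng.choice(maj_idx, size=target, replace=False)
    keep_min = rng.choice(min_idx, size=target, replace=True)
    idx = np.concatenate([keep_maj, keep_min])
    rng.shuffle(idx)
    return X[idx], y[idx]

Xb, yb = hybrid_sample(X, y)
print(np.bincount(yb))  # balanced class counts
```

Synthetic-sample generators (SMOTE-style) replace the with-replacement duplication step but keep the same overall shape.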
E-auctions are vulnerable to Shill Bidding (SB), the toughest fraud to detect due to its resemblance to usual bidding behavior. To avoid financial losses for genuine buyers, we develop an SB detection model based on multi-class ensemble learning. For our study, we utilize a real SB dataset, but since the data are unlabeled, we combine a robust data clustering technique and a labeling approach to categorize the training data into three classes. To solve the issue of imbalanced SB data, we use an advanced multi-class over-sampling method. Lastly, we compare the predictive performance of ensemble classifiers trained with balanced and imbalanced SB data. Combining data sampling with ensemble learning improved the classifier accuracy, which is significant in fraud detection problems.
We introduce a new nature-inspired optimization algorithm, namely Mushroom Reproduction Optimization (MRO), inspired and motivated by the reproduction and growth mechanisms of mushrooms in nature. MRO follows the process by which spores discover rich areas (containing good living conditions) to grow and develop their own colonies. We thoroughly assess MRO's performance on numerous unimodal and multimodal benchmark functions as well as engineering problem instances. Moreover, to further investigate the performance of the proposed MRO algorithm, we conduct a statistical evaluation and comparison with well-known meta-heuristic algorithms. The experimental results confirm the high performance of MRO in dealing with complex optimization problems by discovering solutions of better quality.
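The colony-and-spore idea can be sketched as a population search in which each colony releases spores around itself and the richest landing sites found the next generation of colonies. This is a loose illustration on the sphere benchmark function, not the published MRO algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    """Classic unimodal benchmark: minimum 0 at the origin."""
    return float((x ** 2).sum())

def mro_like_search(f, dim=5, colonies=20, spores=5, iters=100, radius=1.0):
    """Loose sketch of the MRO idea (not the published algorithm):
    each colony releases spores near itself; spores that land in
    richer areas (lower objective value) found the next colonies."""
    pop = rng.uniform(-5, 5, size=(colonies, dim))
    best = min(pop, key=f)
    for t in range(iters):
        r = radius * (1 - t / iters)          # shrink the spread over time
        candidates = [pop]
        for _ in range(spores):
            candidates.append(pop + rng.normal(0, r, size=pop.shape))
        allp = np.vstack(candidates)
        order = np.argsort([f(x) for x in allp])
        pop = allp[order[:colonies]]          # survivors found new colonies
        if f(pop[0]) < f(best):
            best = pop[0]
    return best

best = mro_like_search(sphere)
print(round(sphere(best), 4))  # objective value of the best colony found
```

Keeping the parent population among the candidates makes the search elitist, so the best value found never degrades across iterations.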
With the growing recognition of the importance of Project Management (PM), new solutions are still being researched to improve PM practices in environments where there is a restriction on the project types. PM is becoming more widespread in business and academia, but without enough information about the course of action to be taken to achieve success. This is principally true in the IT sector, where the impact of new technologies is felt faster than in any other area. This study reviews the actual state of IT project management based on an online survey that we conducted with worldwide companies. Our aim is to provide insights and recommendations on how to increase the projects' success rate based on the results of the survey analysis.
Constraint Satisfaction Problems (CSPs) are known NP-complete problems requiring systematic search methods of exponential time cost to solve. To overcome this limitation, an alternative is to use metaheuristics. However, these techniques often suffer from premature convergence, mainly due to a lack of adequate diversity among the potential solutions. To address this challenge, we update the Discrete Firefly Algorithm (DFA) with Chaos Theory. We call this proposed algorithm the Chaotic Discrete Firefly Algorithm (CDFA). To assess the performance of the proposed CDFA in practice, we conducted several experiments on CSP instances randomly generated based on the RB model. The results of the experiments demonstrate the efficiency of CDFA in dealing with CSPs.
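Chaos-based variants typically replace a pseudo-random parameter schedule with the orbit of a chaotic map, most often the logistic map at mu = 4. A sketch of such a schedule (which exact parameter CDFA perturbs is not specified here, so a generic randomization weight is assumed):

```python
def logistic_map(x, mu=4.0):
    """One step of the logistic map; for mu = 4 the orbit is chaotic
    on (0, 1), the usual choice in chaotic metaheuristics."""
    return mu * x * (1 - x)

# Replace a pseudo-random parameter schedule with a chaotic one:
# here, a hypothetical firefly randomization weight alpha_t.
# The seed must avoid the map's fixed points (0 and 0.75) and
# pre-images of 0 such as 0.5.
alpha = 0.7
schedule = []
for _ in range(10):
    alpha = logistic_map(alpha)
    schedule.append(round(alpha, 4))
print(schedule)
```

The orbit visits the whole interval without settling into a cycle, which is exactly the diversity injection that counters premature convergence.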
Shill Bidding (SB) has been recognized as the predominant online auction fraud and also the most difficult to detect due to its similarity to normal bidding behavior. Previously, we produced a high-quality SB dataset based on actual auctions and effectively labeled the instances as normal or suspicious. To overcome the serious problem of imbalanced SB datasets, in this study we investigate over- and under-sampling techniques through several instance-based classification algorithms. Thousands of auctions occur on eBay every day, and auction data may be sent continuously to the optimal fraud classifier to detect potential SB activities. Consequently, instance-based classification is appropriate for our particular fraud detection problem. According to the experimental results, incremental classification returns high performance for both over- and under-sampled SB datasets. Still, over-sampling slightly outperforms under-sampling for both the normal and suspicious classes across all the classifiers.
Online auctions have created a very attractive environment for dishonest moneymakers who can commit different types of fraud. Shill Bidding (SB) is the most predominant auction fraud and also the most difficult to detect because of its similarity to usual bidding behavior. Based on a newly produced SB dataset, in this study we devise a fraud classification model that is able to efficiently differentiate between honest and malicious bidders. First, we label the SB data by combining a hierarchical clustering technique and a semi-automated labeling approach. To solve the imbalanced learning problem, we apply several advanced data sampling methods and compare their performance using the SVM model. As a result, we develop an optimal SB classifier that exhibits a very satisfactory detection rate and a low misclassification rate.
E-auctions have attracted serious fraud, such as Shill Bidding (SB), due to the large amounts of money involved and the anonymity of users. SB is difficult to detect given its similarity to normal bidding behavior. To this end, we develop an efficient SVM-based fraud classifier that enables auction companies to distinguish between legitimate and shill bidders. We introduce a robust approach to build the optimal SB classifier offline. To produce SB training data, we combine hierarchical clustering and our own labelling strategy, and then utilize a hybrid data sampling method to solve the issue of highly imbalanced SB datasets. To avert financial loss in new auctions, the SB classifier is to be launched at the end of the bidding period and before auction finalization. Based on commercial auction data, we conduct experiments for offline and online SB detection. The classification results exhibit good detection accuracy and a low misclassification rate for shill bidders.
In the last three decades, we have seen a significant increase in the trading of goods and services through online auctions. However, this business has created an attractive environment for malicious moneymakers who can commit different types of fraudulent activities, such as Shill Bidding (SB). The latter is predominant across many auctions, but this type of fraud is difficult to detect due to its similarity to normal bidding behaviour. The unavailability of SB datasets makes the development of SB detection and classification models burdensome. Furthermore, to implement efficient SB detection models, we should produce SB data from actual auctions of commercial sites. In this study, we first scraped a large number of eBay auctions of a popular product. After preprocessing the raw auction data, we built a high-quality SB dataset based on the most reliable SB strategies. The aim of our research is to share the preprocessed auction dataset as well as the SB training (unlabelled) dataset so that researchers can apply various machine learning techniques using authentic auction and fraud data.
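Turning raw scraped auction records into SB features means computing, per bidder, measurements that match known shill patterns. An illustrative example of one such feature, the fraction of a bidder's bids placed in the first quarter of the auction (hypothetical records and pattern definition, not the dataset's actual nine patterns):

```python
# Hedged sketch of deriving one shill-bidding feature from raw
# scraped auction rows. "Early bidding" here is the fraction of a
# bidder's bids placed in the first quarter of the auction duration.

records = [
    # (auction_id, bidder, minutes_elapsed); auction duration = 100 min
    ("a1", "u1", 5), ("a1", "u1", 10), ("a1", "u2", 90),
    ("a1", "u1", 20), ("a1", "u2", 95),
]
DURATION = 100

def early_bidding_ratio(records, bidder, duration):
    """Share of this bidder's bids falling in the first quarter
    of the auction; shills often bid early to stir up interest."""
    times = [t for (_, b, t) in records if b == bidder]
    if not times:
        return 0.0
    early = sum(1 for t in times if t <= duration / 4)
    return early / len(times)

print(early_bidding_ratio(records, "u1", DURATION))
print(early_bidding_ratio(records, "u2", DURATION))
```

Each pattern becomes one column of the (unlabelled) training dataset, one row per bidder per auction.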
We have conducted an online survey to review worldwide project management practices in the IT sector. Through this study, our goal is to identify factors that influence the success rate of IT projects and elaborate a new project... more
We have conducted an online survey to review worldwide project management practices in the IT sector. Through this study, our goal is to identify factors that influence the success rate of IT projects and elaborate a new project management approach that will help businesses in the project management discipline.
Exploitation and exploration are the two main search strategies of every metaheuristic algorithm, and the ratio between them has a significant impact on the performance of these algorithms when dealing with optimization problems. In this study, we introduce an entire fuzzy system to efficiently and dynamically tune the firefly algorithm's parameters in order to keep exploration and exploitation in balance at each search step. This prevents the firefly algorithm from being stuck in local optima, a challenging issue in metaheuristic algorithms. To evaluate the quality of the solution returned by the fuzzy-based firefly algorithm, we conduct extensive experiments on a set of high- and low-dimensional benchmark functions as well as two constrained engineering problems. In this regard, we compare the improved firefly algorithm with the standard one and other well-known metaheuristic algorithms. The experimental results demonstrate the superiority of the fuzzy-based firefly algorithm over the standard one and its comparability to other metaheuristic algorithms.
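A fuzzy controller of this kind maps a crisp input, such as population diversity, through overlapping membership functions and a small rule base to a crisp parameter value. The Sugeno-style sketch below is illustrative; the rule base and the tuned parameter (the randomization weight alpha) are assumptions, not the paper's actual fuzzy system:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_alpha(diversity):
    """Hedged sketch of a Sugeno-style fuzzy rule base: when population
    diversity is LOW, raise the firefly randomization weight alpha to
    favor exploration; when HIGH, lower it to favor exploitation.
    `diversity` is assumed normalized to [0, 1]."""
    low = tri(diversity, -0.5, 0.0, 0.5)
    med = tri(diversity, 0.0, 0.5, 1.0)
    high = tri(diversity, 0.5, 1.0, 1.5)
    # Rule consequents: LOW -> alpha 0.9, MED -> 0.5, HIGH -> 0.1
    num = low * 0.9 + med * 0.5 + high * 0.1
    den = low + med + high
    return num / den

print(fuzzy_alpha(0.0), fuzzy_alpha(0.5), fuzzy_alpha(1.0))
```

Because the memberships overlap, alpha varies smoothly between the rule consequents instead of jumping at hard thresholds.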
Organizing e-auctions to purchase the electricity required for anticipated peak-load periods is a new option for utility companies. To meet the extra demand load, we develop an electricity combinatorial reverse auction (CRA) for the purpose of procuring power from diverse energy sources. In this new, smart electricity market, suppliers of different scales can participate, and homeowners may even take an active role. In our CRA, an item, which is subject to several trading constraints, denotes a time slot that has two conflicting attributes, electricity quantity and price. To secure electricity, we design our auction with two bidding rounds: round one is exclusively for variable energy, and round two allows storage and non-intermittent renewable energy to bid on the remaining items. Our electricity auction leads to a complex winner determination (WD) task that we represent as a resource procurement optimization problem. We solve this problem using multi-objective genetic algorithms in order to find the trade-off solution that best lowers the price and increases the quantity. This solution consists of multiple winning suppliers, their prices, quantities, and schedules. We validate our WD approach on large-scale simulated datasets. We first assess the time efficiency of our WD method, and we then compare it to well-known heuristic and exact WD techniques. To gain an exact idea of the accuracy of WD, we implement two famous exact algorithms for our constrained combinatorial procurement problem.
This study introduces an advanced Combinatorial Reverse Auction (CRA), multi-unit, multi-attribute, and multi-objective, which is subject to buyer and seller trading constraints. Conflicting objectives may occur since the buyer can maximize some attributes and minimize others. To address the Winner Determination (WD) problem for this type of CRA, we propose an optimization approach based on genetic algorithms, which we integrate with our own variants of diversity and elitism strategies to improve solution quality. Moreover, by maximizing the buyer's revenue, our approach is able to return the best solution for our complex WD problem. We conduct a case study as well as simulated testing to illustrate the importance of the diversity and elitism schemes. We also validate the proposed WD method through simulated experiments by generating large instances of our CRA problem. The experimental results demonstrate, on the one hand, the performance of our WD method in terms of several quality measures, like solution quality, run-time complexity, and the trade-off between convergence and diversity, and on the other hand, its significant superiority to well-known heuristic and exact WD techniques that have been implemented for much simpler CRAs.
Exploration and exploitation are two strategies used to search the problem space in Evolutionary Algorithms (EAs). Significantly increasing the performance of these optimization techniques, in terms of solution optimality, requires striking the right balance between exploration and exploitation. Firefly is one of the most favored EAs. In this study, we introduce an entire fuzzy system to dynamically tune the firefly parameters in order to keep exploration and exploitation in balance at each search step. A serious concern with EAs is getting stuck in local optima. The proposed fuzzy controller helps the firefly algorithm converge to the optimal solution and escape from local optima. To evaluate the efficiency of the fuzzy-based firefly algorithm, we conduct experiments on a set of high-dimensional benchmark functions. The goal here is to compare the new firefly method with the standard firefly and well-known nature-inspired optimization algorithms. The results of the experiments show the superiority of the proposed fuzzy firefly algorithm over the standard one.
Online auctioning has attracted serious fraud given the huge amount of money involved and the anonymity of users. In the auction fraud detection domain, class imbalance, meaning fewer fraud instances are present in bidding transactions, negatively impacts classification performance because the classifier is biased towards the majority class, i.e., normal bidding behavior. A well-designed approach to handling the imbalanced learning problem is data sampling, which has been found to improve classification efficiency. In this study, we utilize a hybrid method of data over-sampling and under-sampling to more effectively address the issue of highly imbalanced auction fraud datasets. We deploy a set of well-known binary classifiers to understand how the class imbalance affects the classification results. We choose the most relevant performance metrics to deal with both imbalanced data and fraud bidding data.
In the context of Multi-Attribute and Reverse Auctions (MARAs), two significant problems need to be addressed: 1) specifying precisely the buyer's requirements about the attributes of the auctioned product, and 2) determining the winner accordingly. Buyers are more comfortable expressing their preferences qualitatively, and there should be an option allowing them to describe their constraints. Both constraints and preferences may be non-conditional or conditional. However, for the sake of efficiency, it is more suitable for MARAs to process quantitative requirements. Hence, the remaining challenge is to provide buyers with more facility and comfort while keeping the auctions efficient. To meet this challenge, we develop a MARA system based on MAUT. The proposed system takes advantage of the efficiency of MAUT by transforming the qualitative requirements into quantitative ones. Another benefit of our system is the complete automation of the bid evaluation, since it is a really difficult task for buyers to determine quantitatively all the weights and utility functions of attributes, especially when there is a large number of attributes. The weights and utility functions are produced from the qualitative preferences. Our MARA looks for the outcome that satisfies all the constraints and best satisfies the preferences. We demonstrate the feasibility of our system through a 10-attribute reverse auction involving many constraints and qualitative preferences.
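MAUT evaluates each bid as a weighted sum of per-attribute utilities, which is what makes fully automated bid evaluation possible once the qualitative preferences have been converted. A small illustrative sketch with made-up weights, utility functions, and a hard constraint:

```python
# Hedged sketch of MAUT-based bid evaluation (illustrative weights,
# utility functions, and constraint, not the paper's elicited ones).
# Each bid is scored as a weighted sum of per-attribute utilities, and
# the best-scoring bid that satisfies all hard constraints wins.

bids = [
    {"seller": "A", "price": 900, "delivery_days": 10, "warranty_months": 12},
    {"seller": "B", "price": 950, "delivery_days": 5,  "warranty_months": 24},
    {"seller": "C", "price": 700, "delivery_days": 30, "warranty_months": 6},
]

weights = {"price": 0.5, "delivery_days": 0.3, "warranty_months": 0.2}

# Utility functions map raw attribute values onto [0, 1].
utilities = {
    "price": lambda v: max(0.0, 1 - v / 1000),        # cheaper is better
    "delivery_days": lambda v: max(0.0, 1 - v / 30),  # faster is better
    "warranty_months": lambda v: min(1.0, v / 24),    # longer is better
}

def satisfies_constraints(bid):
    return bid["delivery_days"] <= 20      # example hard constraint

def score(bid):
    return sum(w * utilities[attr](bid[attr]) for attr, w in weights.items())

feasible = [b for b in bids if satisfies_constraints(b)]
winner = max(feasible, key=score)
print(winner["seller"], round(score(winner), 3))
```

Note how the hard constraint eliminates the cheapest bid before scoring: constraints filter, preferences rank.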
Winner determination is one of the main challenges in combinatorial auctions. However, not much work has been done to solve this problem in the case of reverse auctions using evolutionary techniques. This has motivated us to propose an improvement of a genetic-algorithm-based method we previously proposed, to address two important issues in the context of combinatorial reverse auctions: determining the winner(s) in a reasonable processing time, and reducing the procurement cost. In order to evaluate the performance of our proposed method in practice, we conduct several experiments on combinatorial reverse auction instances. The results we report in this paper clearly demonstrate the efficiency of our new method in terms of processing time and procurement cost.
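A genetic algorithm for combinatorial reverse auction winner determination can encode each candidate allocation as a bit string over the submitted bids and minimize procurement cost under a cover-all-items penalty. The sketch below is a generic GA on a toy instance, not the improved method from the paper:

```python
import random

random.seed(4)

# Toy combinatorial reverse auction: 6 items to procure; each bid
# offers a bundle of items at a price. The winner-determination task
# picks a set of bids covering all items at minimum total cost.
ITEMS = set(range(6))
BIDS = [
    ({0, 1}, 60), ({2, 3}, 55), ({4, 5}, 50),
    ({0, 1, 2}, 90), ({3, 4, 5}, 80), ({1, 3, 5}, 70), ({0, 2, 4}, 75),
]

def fitness(chrom):
    """Total cost plus a heavy penalty per uncovered item (minimized)."""
    covered, cost = set(), 0
    for gene, (bundle, price) in zip(chrom, BIDS):
        if gene:
            covered |= bundle
            cost += price
    return cost + 1000 * len(ITEMS - covered)

def ga(pop_size=30, gens=60, pmut=0.1):
    pop = [[random.randint(0, 1) for _ in BIDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]           # simple elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(BIDS))
            child = a[:cut] + b[cut:]          # one-point crossover
            children.append([1 - g if random.random() < pmut else g
                             for g in child])  # bit-flip mutation
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print(fitness(best))  # procurement cost of the best allocation found
```

The penalty term steers the population toward full covers first, after which selection pressure works purely on cost.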
Key Performance Indicators (KPIs) are used to inspect the performance and progress of businesses. This study introduces a new, integrated approach to efficiently manage KPIs in the context of decentralized information and to address the visual and managerial gaps existing in companies. The proposed Business Indicator Management (BIM) system is essential for any business to meet its needs in terms of information availability and agility as well as the time efficiency and quality of the decision-making task. Thanks to BIM, executives are able to obtain real-time information and analysis of the actual situation of their businesses, thus increasing their productivity. Today, no company yet manages KPIs in this way. Based on a detailed case study with a large-scale corporation, we thoroughly assess the effectiveness of BIM in terms of system usability, data agility, and decision-making efficiency.
Utility companies can organize e-auctions to procure electricity from other suppliers during peak-load periods. For this purpose, we develop an efficient Combinatorial Reverse Auction (CRA) to purchase power from diverse sources, residents and plants. Our auction is different from what has been implemented in the electricity markets. In our CRA, which is subject to trading constraints, an item denotes a time slot that has two conflicting attributes, energy volume and its price. To ensure energy security, we design our auction with two bidding rounds: the first one is for variable-energy suppliers and the second one for other sources, like controllable load and renewable energy. Determining the winner of CRAs is a computationally hard problem. We view this problem as a resource allocation optimization that we solve with multi-objective genetic algorithms to find the best solution. The latter represents the best combination of suppliers that lowers the price and increases the energy.
This study introduces a new type of Combinatorial Reverse Auction (CRA), with multi-unit, multi-attribute products and multiple objectives, subject to buyer and seller constraints. In this advanced CRA, buyers may maximize some attributes and minimize others. To address the Winner Determination (WD) problem in the presence of multiple conflicting objectives, we propose an optimization approach based on genetic algorithms. To improve the quality of the winning solution, we incorporate our own variants of the diversity and elitism strategies. We illustrate the WD process based on a real case study. Afterwards, we validate the proposed approach through artificial datasets by generating large instances of our multi-objective CRA problem. The experimental results demonstrate, on the one hand, the performance of our WD method in terms of three quality metrics, and on the other hand, its significant superiority to well-known heuristic and exact WD techniques that have been defined for simpler CRAs. (BEST PAPER AWARD)
Monitoring the progress of auctions for fraudulent bidding activities is crucial for detecting and stopping fraud at runtime, before fraudsters succeed. To this end, we introduce a stage-based framework to monitor multiple live auctions for In-Auction Fraud (IAF). Creating a stage-based fraud-monitoring system differs from what has been proposed in the very limited prior work on runtime IAF detection. More precisely, we launch the IAF monitoring operation at several time points in each running auction, depending on its duration. At each time point, our framework first detects IAF by evaluating each bidder's stage activities against the most reliable set of IAF patterns, and then takes appropriate actions against dishonest bidders. We build the framework on a dynamic agent architecture in which monitoring agents are created and deleted according to the status of their corresponding auctions (initialized, completed, or cancelled). This dynamic software architecture addresses the scalability and time-efficiency issues of IAF monitoring, since hundreds of live auctions run simultaneously in commercial auction houses. Every time an auction completes or terminates, the participants' fraud scores are updated dynamically, so each bidder in each live auction is observed and his fraud score maintained. We validate the IAF monitoring service with commercial auction data, conducting three experiments to detect and react to shill bidding using datasets from auctions of two valuable items, Palm PDA and XBOX. We observe each auction at three time points and verify, for each, the shill patterns most likely to occur at the corresponding stage.
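The per-stage scoring and the dynamic score update can be illustrated as follows. The stage weights, pattern scores, and the running-mean update rule are assumptions for the sketch, not the framework's actual formulas:

```python
# Hypothetical stage weights: later stages carry more evidential weight
STAGE_WEIGHTS = {"early": 0.2, "middle": 0.3, "late": 0.5}

def stage_score(pattern_scores):
    """Average the per-pattern evidence observed in one stage (each in [0, 1])."""
    return sum(pattern_scores) / len(pattern_scores)

def auction_fraud_score(stage_patterns):
    """Combine the stage scores with stage-specific weights."""
    return sum(STAGE_WEIGHTS[s] * stage_score(p)
               for s, p in stage_patterns.items())

def update_bidder_score(history, new_score):
    """Running mean over the bidder's completed auctions."""
    history.append(new_score)
    return sum(history) / len(history)

# One bidder's pattern evidence at the three monitoring time points
obs = {"early": [0.1, 0.3], "middle": [0.6, 0.4], "late": [0.9, 0.7]}
score = auction_fraud_score(obs)   # 0.2*0.2 + 0.3*0.5 + 0.5*0.8 = 0.59
```

Each monitoring agent would evaluate such a score at its auction's checkpoints and, on completion or cancellation, fold it into the bidder's global score.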
We present in this paper a new method based on branch and bound for solving the incremental satisfiability (SAT) problem. More precisely, the goal of the method is to maintain, in an incremental manner, the satisfiability of a given Boolean formula in Conjunctive Normal Form (CNF) ...
Auctioning multi-dimensional items is a key challenge that requires rigorous tools. This study proposes a multi-round, first-score, semi-sealed multi-attribute reverse auction system. A fundamental concern in multi-attribute auctions is acquiring a useful description of the buyer's individual requirements: hard constraints and qualitative preferences. To capture real requirements, we express dependencies among attributes; indeed, our system enables buyers to elicit conditional constraints as well as conditional preferences. However, determining the winner under diverse criteria can be very time-consuming, so it is more practical for our auction to process quantitative data. The challenge here is to give buyers more expressive facilities while keeping the auctions efficient. To meet it, our system maps the qualitative preferences into a multi-criteria decision rule. It also fully automates winner determination, since it is very difficult for buyers to estimate attribute weights quantitatively and to define attribute value functions. Our procurement auction seeks the outcome that satisfies all the constraints and best matches the preferences. We demonstrate the feasibility and measure the time performance of the proposed system through a 10-attribute auction. Finally, we assess user acceptance of our requirements-specification and winner-selection tool.
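An additive multi-criteria decision rule of the kind described, a weighted sum of normalized attribute values, can be sketched as below. The attribute names, weights, and value functions are hypothetical stand-ins for what the system would derive from the buyer's qualitative preferences:

```python
# Illustrative weights and value functions (normalized to [0, 1])
WEIGHTS = {"price": 0.5, "delivery_days": 0.3, "warranty_months": 0.2}

VALUE_FNS = {
    "price":           lambda p: max(0.0, 1 - p / 1000),  # cheaper is better
    "delivery_days":   lambda d: max(0.0, 1 - d / 30),    # faster is better
    "warranty_months": lambda w: min(1.0, w / 24),        # longer is better
}

def bid_score(bid):
    """Additive decision rule: weighted sum of normalized attribute values."""
    return sum(WEIGHTS[a] * VALUE_FNS[a](v) for a, v in bid.items())

bid_a = {"price": 400, "delivery_days": 10, "warranty_months": 12}
bid_b = {"price": 600, "delivery_days": 5,  "warranty_months": 24}
winner = max([bid_a, bid_b], key=bid_score)
```

Automating the choice of weights and value functions is precisely the burden the system lifts from the buyer.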
Multi-Attribute Reverse Auctions (MARAs) are excellent protocols to automate negotiation among sellers. Eliciting the buyer's preferences and determining the winner are both challenging problems for MARAs. To solve them, we propose two algorithms, MAUT* and CP-net*, which improve on the Multi-Attribute Utility Theory (MAUT) and constrained CP-nets, respectively. Buyers can now express conditional, qualitative, and quantitative preferences over the item attributes. To evaluate the runtime performance of the proposed algorithms, we conduct an experimental study on several problem instances. The results favor MAUT* in most cases.
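A conditional preference, the kind of statement CP-nets capture, makes the preferred value of one attribute depend on another. A toy illustration (the attributes and the ranking rule are invented for the example, not taken from CP-net* itself):

```python
# Conditional preference: the preferred color depends on the chosen category
def preferred_color(category):
    return "black" if category == "laptop" else "silver"

def rank(outcomes):
    """Outcomes matching the conditional preference sort first (stable sort)."""
    return sorted(outcomes,
                  key=lambda o: o["color"] != preferred_color(o["category"]))

ranked = rank([{"category": "laptop", "color": "silver"},
               {"category": "laptop", "color": "black"}])
```

MAUT-style approaches instead compile such statements into numeric utilities, which is what makes MAUT* faster to evaluate in most cases.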
Winner(s) determination in online reverse auctions is a very appealing e-commerce application. This is a combinatorial optimization problem where the goal is to find an optimal solution that meets a set of requirements while minimizing a given procurement cost. The problem is hard to tackle, especially when multiple attributes of instances of items are considered together with additional constraints such as sellers' stocks and discount rates. The challenge is to determine the optimal solution in a reasonable computation time. Solving this problem with a systematic method guarantees the optimality of the returned solution but comes with an exponential time cost. On the other hand, approximation techniques such as evolutionary algorithms are faster but trade solution quality for running time. In this paper, we conduct a comparative study of several exact and evolutionary techniques that have been proposed to solve various instances of the combinatorial reverse auction problem. In particular, we show that a recent method based on genetic algorithms outperforms the other methods in terms of time efficiency while returning a near-optimal solution in most cases.
In this paper, a new framework for constraint and preference representation and reasoning is proposed, including the related definitions, algorithms, and implementations. A Conditional Preference Network (CP-Net) is a widely used graphical model for expressing preferences among outcomes. While it allows users to describe their preferences over variable values, the CP-Net does not express preferences over the variables themselves, leaving the order over outcomes incomplete. Due to this limitation, an extension of CP-Nets called Tradeoffs-enhanced Conditional Preference Networks (TCP-Nets) has been proposed to represent the relative importance of variables. Nonetheless, no research work has reported an implementation of TCP-Nets as a solver. Moreover, TCP-Nets deal only with preferences (soft constraints); hard constraints are not explicitly considered. This is a real limitation for the wide variety of real-life problems involving both constraints and preferences, which motivated us to propose a new model integrating TCP-Nets with the well-known Constraint Satisfaction Problem (CSP) framework. The new model, called Constrained TCP-Net (CTCP-Net), has been implemented as a three-layer system in Java and provides a GUI for users to freely describe their problem as a set of constraints and preferences; the system then solves the problem and returns the solutions in a reasonable time. Finally, this work provides valuable information, both theoretical and practical, for researchers interested in CSPs and graphical models for preferences.
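The core idea of combining hard constraints with preference ordering can be sketched as: enumerate assignments, discard those violating the constraints, then order the survivors by preference. The domains, constraint, and ranking below are toy assumptions, and a real CTCP-Net solver would not enumerate exhaustively:

```python
from itertools import product

# Toy variables and domains
DOMAINS = {"cpu": ["fast", "slow"], "ram": ["16GB", "8GB"]}

def satisfies(a):
    """Hard constraint: a fast CPU requires 16GB of RAM."""
    return not (a["cpu"] == "fast" and a["ram"] == "8GB")

def pref_rank(a):
    """Preference order (0 = most preferred): fast over slow matters more
    than 16GB over 8GB, mimicking a TCP-Net relative-importance statement."""
    rank = 0 if a["cpu"] == "fast" else 2
    rank += 0 if a["ram"] == "16GB" else 1
    return rank

assignments = [dict(zip(DOMAINS, vals)) for vals in product(*DOMAINS.values())]
solutions = sorted((a for a in assignments if satisfies(a)), key=pref_rank)
```

The first element of `solutions` is the most-preferred assignment that also satisfies every hard constraint, which is the output a CTCP-Net solver aims for.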
In spite of the many advantages of online auctioning, serious frauds menace auction users' interests, and monitoring auctions for fraud is becoming crucial. We propose a generic framework that covers real-time monitoring of multiple live auctions. The monitoring is performed at different auction times depending on the fraud types and auction duration. We divide the real-time monitoring functionality into three tasks: detecting fraud, reacting to fraud, and updating bidders' clusters. The first task examines, at runtime, the bidding activities in ongoing auctions by applying fraud-detection mechanisms. The second determines how to react to suspicious activities by taking appropriate runtime actions against fraudsters and infected auctions. Finally, every time an auction ends, successfully or not, participants' fraud scores and clusters are updated dynamically. Using simulated auction data, we conduct an experiment to monitor live auctions for shill bidding, considered the most severe fraud in online auctions and the most difficult to detect. More precisely, we monitor each live auction at three time points and, for each, verify the shill patterns most likely to occur.
Winner(s) determination in combinatorial reverse auctions is a very appealing application in e-commerce, but it is very challenging, especially when multiple attributes of multiple instances of items are considered. The difficulty is to return the optimal solution to this hard optimization problem in a reasonable computation time. In this paper, we make the problem more interesting by considering all-units discounts on attributes and solving it using genetic algorithms. We also account for the availability of item instances in sellers' stock. To evaluate the performance of our proposed method, we conducted several experiments on randomly generated instances. The results clearly demonstrate the efficiency of our method in determining the winner(s) with an optimal procurement cost in an efficient processing time.
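An all-units discount means that once the ordered quantity reaches a seller's threshold, the discounted unit price applies to every unit, not just the units above the threshold, while stock caps the order size. A minimal cost function under invented seller data:

```python
# Hypothetical sellers: unit price, available stock, and an all-units
# discount that triggers at a quantity threshold
SELLERS = {
    "s1": {"price": 10.0, "stock": 50, "threshold": 20, "discount": 0.10},
    "s2": {"price":  9.0, "stock": 15, "threshold": 10, "discount": 0.05},
}

def cost(seller_id, qty):
    s = SELLERS[seller_id]
    if qty > s["stock"]:
        raise ValueError("order exceeds seller stock")
    unit = s["price"]
    if qty >= s["threshold"]:
        unit *= 1 - s["discount"]     # discount applies to ALL units
    return qty * unit

total = cost("s1", 25) + cost("s2", 15)   # 25*9.00 + 15*8.55 = 353.25
```

The discount threshold makes the cost function non-linear in quantity, which is part of what pushes the winner-determination problem toward heuristic solvers such as genetic algorithms.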
SVM has been given top consideration for addressing the challenging problem of imbalanced-data learning. Here, we conduct an empirical classification analysis of new UCI datasets with different imbalance ratios, sizes, and complexities. The experimentation compares the classification results of SVM with those of two other popular classifiers, Naive Bayes and the decision tree C4.5, to explore their pros and cons. To make the comparative experiments more comprehensive and to better assess the learning performance of each classifier, we employ four performance metrics in total: Sensitivity, Specificity, G-mean, and time-based efficiency. For each benchmark dataset, we perform an empirical search for the learning model through numerous training runs of the three classifiers under different parameter settings and performance measurements. This paper reports the most significant results, i.e., the highest performance achieved by each classifier on each dataset. In summary, SVM outperforms the other two classifiers in terms of Sensitivity (or Specificity) on all the datasets, and is more accurate in terms of G-mean when classifying large datasets.
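The G-mean is the geometric mean of Sensitivity and Specificity, which is why it is informative on imbalanced data: a classifier that ignores the minority class scores zero even when its plain accuracy looks high. A small sketch from confusion-matrix counts (the counts themselves are invented):

```python
import math

def gmean(tp, fn, tn, fp):
    """Geometric mean of sensitivity (recall on the positive/minority class)
    and specificity (recall on the negative/majority class)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)

# A classifier that misses every minority example scores 0,
# even though its accuracy on this 95%-majority sample is 95%
degenerate = gmean(tp=0, fn=5, tn=95, fp=0)

# A classifier catching most of both classes scores high
balanced = gmean(tp=4, fn=1, tn=90, fp=5)
```

This is the property that makes G-mean a better headline metric than accuracy for the imbalanced UCI datasets studied here.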
Trust management is becoming crucial in open systems because they may contain malicious and untrustworthy service providers. Trust management in multi-agent systems (used to model open systems) has gained a huge amount of attention from researchers in recent years. In our previous work, we proposed a generic agent trust-management framework, called ScubAA, based on the theory of Human Plausible Reasoning. ScubAA first recommends to the trustor (e.g., a user) a personalized ranked list of the most trusted trustees (e.g., service providers) within the context of the trustor's request, and then forwards the request to those trusted trustees only. In this article, we are particularly interested in comparing ScubAA, from a theoretical perspective, with four other trust-management systems selected from the vast literature on trust management. This comparison highlights the significant factors that agent trust-management systems utilize in their trust-evaluation process. It also shows that ScubAA is able to consider more trust evidence and thus compute a more accurate value of trust. Indeed, ScubAA introduces a single unified framework that considers various important aspects of trust management, such as the trustor's feedback, the history of the trustor's interactions, the context of the trustor's request, third-party references from trustors as well as from trustees, and the structure of the society of trustor and trustee agents.
Deep learning has been adopted successfully for hate speech detection, but only minimally for the Arabic language. Moreover, the word-embedding models' effect on a neural network's performance has not been adequately examined in the literature. Through 2-class, 3-class, and 6-class classification tasks, we investigate the impact of both word-embedding models and neural network architectures on predictive accuracy. We first train several word-embedding models on a large-scale Arabic text corpus. Next, based on a reliable dataset of Arabic hate and offensive speech, we train several neural networks for each detection task using the pre-trained word embeddings. This yields a large number of learned models, allowing an exhaustive comparison. The experiments demonstrate the superiority of the skip-gram models and CNN networks across the three detection tasks.
This research explores Cost-Sensitive Learning (CSL) in the fraud-detection domain to decrease the fraud class's incorrect predictions and increase its accuracy. Notably, we concentrate on shill-bidding fraud, which is challenging to detect because the behavior of shill bidders is similar to that of legitimate bidders. We investigate CSL within the Semi-Supervised Classification (SSC) framework to address the scarcity of labeled fraud data. Our paper is the first attempt to integrate CSL with SSC for fraud detection. We adopt a meta-CSL approach to manage the costs of misclassification errors, while the SSC algorithms are trained with imbalanced data. Using an actual shill-bidding dataset, we assess the performance of several hybrid CSL and SSC models and statistically compare their misclassification-error and accuracy rates. The most efficient CSL+SSC model detected 99% of fraudsters with the lowest total cost.
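The core of cost-sensitive evaluation is a cost matrix that charges errors asymmetrically: missing a fraudster (false negative) costs far more than flagging a legitimate bidder (false positive). The cost values below are illustrative, not the paper's calibrated costs:

```python
# Illustrative cost matrix, keyed by (true label, predicted label)
COST = {
    ("fraud", "legit"): 10.0,   # false negative: a fraudster slips through
    ("legit", "fraud"):  1.0,   # false positive: a legitimate bidder flagged
    ("fraud", "fraud"):  0.0,
    ("legit", "legit"):  0.0,
}

def total_cost(y_true, y_pred):
    """Total misclassification cost over a set of predictions."""
    return sum(COST[(t, p)] for t, p in zip(y_true, y_pred))

y_true = ["fraud", "legit", "legit", "fraud"]
y_pred = ["fraud", "fraud", "legit", "legit"]
# one false positive (1.0) + one false negative (10.0) = 11.0
```

Minimizing this total cost, rather than the raw error count, is what steers a cost-sensitive model toward catching the rare fraud class even at the price of extra false alarms.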