Prof. Mona Nasr
  • Cairo, Egypt
  • 002-01001779619
  • Prof. Nasr is a Professor at the Information Systems Department, Faculty of Computers and Artificial Intelligence, Helwan...
The Internet of Things (IoT) and cloud computing are evolving technologies in the information technology field. Merging the pervasive IoT technology with cloud computing is an innovative solution for better analytics and decision-making. Deployed IoT devices offload different types of data to the cloud, while cloud computing converges the infrastructure, links up the servers, analyzes the information obtained from the IoT devices, reinforces processing power, and offers huge storage capacity. However, this merging is prone to various cyber threats that affect the IoT-Cloud environment. Mutual authentication is the first line of defense against cyber-attacks, as the IoT-Cloud participants have to verify each other's authenticity and generate a session key for securing the exchanged traffic. The constrained nature of IoT devices must be taken into consideration when designing these mechanisms. We previously proposed a novel lightweight protocol (Light-AHAKA) for authenticating IoT-Cloud elements and establishing a key agreement for encrypting the exchanged sensitive data. In this paper, the formal verification of Light-AHAKA is presented to prove the correctness of the proposed protocol and to ensure that it is free from design flaws before the deployment phase. The verification is performed with two different approaches: the strand space model and the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool.
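As a rough sketch of the kind of lightweight mutual authentication and key agreement the abstract refers to (this is a generic pre-shared-key challenge-response, not the Light-AHAKA design itself; the key value, nonce sizes, and HMAC-based derivation below are illustrative assumptions):

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared long-term key between an IoT device and the cloud.
PSK = b"pre-shared-device-key"

def respond(challenge: bytes, key: bytes = PSK) -> bytes:
    """Prove knowledge of the shared key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth_and_key_agreement() -> bytes:
    # Each side issues a fresh nonce (challenge) to the other.
    n_device = secrets.token_bytes(16)
    n_cloud = secrets.token_bytes(16)

    # Cloud authenticates to device, and device authenticates to cloud,
    # by answering the other side's challenge under the shared key.
    cloud_proof = respond(n_device)
    device_proof = respond(n_cloud)
    assert hmac.compare_digest(cloud_proof, respond(n_device))
    assert hmac.compare_digest(device_proof, respond(n_cloud))

    # Both sides can now derive the same session key from the two nonces.
    return hmac.new(PSK, n_device + n_cloud, hashlib.sha256).digest()

key = mutual_auth_and_key_agreement()
```

Only hashing and HMAC are used, which keeps the computational cost low on a constrained device; a real protocol would also need replay protection and identity binding, which are omitted here.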
Internet of Things (IoT) is a pervasive technology that grants authorized users the ability to communicate with sensors and devices. This technology connects millions of devices, exchanges sensitive information with users, and offloads classified information to the cloud, and it is evolving to encompass time-critical applications. In IoT time-critical applications, legitimate users may need to access real-time data directly from the IoT devices rather than requesting data stored in the cloud. These IoT devices are prone to distinct threats and security breaches. Authentication mechanisms are essential to control access to IoT devices in cloud computing, as authorized users and IoT devices should verify each other's authenticity and generate a session key for securing the exchanged traffic. As many IoT devices are resource-constrained, traditional security mechanisms are not appropriate for them, because they need considerable computational power and consume excessive energy. Cryptographic researchers are therefore working to develop lightweight security mechanisms for resource-constrained IoT systems. In this paper, we propose a novel lightweight protocol (Light-AHAKA) for authenticating IoT-cloud elements and establishing a key agreement for encrypting the exchanged sensitive data. A security analysis of Light-AHAKA is carried out to demonstrate the protocol's immunity to different security attacks.
Advanced machine learning approaches are capable of recognizing highly complex patterns in massive datasets. We provide a technical survey of machine learning (ML) and deep learning (DL) approaches for genome analysis and their rapidly growing applications to cancer, such as cancer diagnosis and subtyping from omics input data. The survey discusses effective approaches in genomic regulation, pathogenicity, and variant calling, and presents ML's potential benefits across the several technological platforms involved in diagnosis, prognosis, and treatment. We concentrate on the most up-to-date knowledge of cancer classification models and targeted therapy, describe how genetic mutations influence the responsiveness of targeted therapy, and highlight the related issues in this era of precision medicine. Finally, we discuss the limitations of the different approaches and promising directions for upcoming research in targeted therapy.
This paper discusses, in a broad manner, one of the modern technologies in practical use: facial recognition. This technology, considered a means of technological security for keeping personal data away from the hands of snoopers and spies, has occupied the minds of developers, who have ensured its continuous development in recent years.
Huge amounts of unstructured text are obtained daily from various sources such as emails, tweets, social media posts, customer comments, reviews, and reports in many different fields. Unstructured text data can be analyzed to obtain useful information, used according to the purpose of the analysis and the domain the data was obtained from. Because of the huge amount of data, manual analysis of these texts is not possible, so automatic analysis is required. Topic analysis is the Natural Language Processing (NLP) technique that organizes and understands large collections of text data by identifying topics and finding patterns and semantics. There are two common approaches to topic analysis, topic modeling and topic classification; each has different algorithms, which will be discussed.
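A minimal illustration of the topic-classification approach mentioned above: assigning a text to the topic whose keyword lexicon it overlaps most. The topic names and lexicons here are invented for the example; a real system would learn topics (topic modeling) or train a supervised classifier (topic classification).

```python
import re
from collections import Counter

# Hypothetical hand-built keyword lexicons for two topics.
TOPICS = {
    "billing": {"invoice", "refund", "charge", "payment"},
    "support": {"crash", "error", "login", "bug"},
}

def tokenize(text):
    """Lowercase and split into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def classify_topic(text):
    """Assign the topic whose lexicon overlaps the text the most."""
    counts = Counter(tokenize(text))
    scores = {t: sum(counts[w] for w in words) for t, words in TOPICS.items()}
    return max(scores, key=scores.get)

print(classify_topic("I was charged twice, please issue a refund"))  # billing
print(classify_topic("the app shows an error on login"))             # support
```

This keyword-overlap scheme is the simplest possible baseline; the algorithms the paper surveys (e.g. LDA for topic modeling) replace the fixed lexicons with learned word-topic distributions.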
Online social networks (OSNs) have become essential ways for users to share information, feelings, and thoughts and to communicate with others. Twitter and Facebook are among the most common OSNs. Users' behaviors on social networks help researchers detect and understand their online behaviors and personality traits. Personality detection is one of the new challenges in social networks. Machine learning techniques are used to build models for understanding personality, detecting personality traits, and classifying users into different kinds from user-generated content, based on different features and measures of psychological models such as the PEN (Psychoticism, Extraversion, and Neuroticism) model, the DISC (Dominance, Influence, Steadiness, and Compliance) model, and the Big Five model (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism), which is the most accepted model of personality. This survey discusses the existing works on psychological personality classification.
Colon cancer, also referred to as colorectal cancer, is a kind of cancer that starts with damage to the colon, the large intestine in the last section of the digestive tract. Elderly people typically suffer from colon cancer, but it may occur at any age. It normally starts as small, noncancerous (benign) masses of cells named polyps that form within the colon. Over time these polyps can turn into advanced malignant tumors that attack the human body, and some of them become colon cancers. So far, no concrete causes have been identified, and complete treatment is very difficult for doctors in the medical field. Colon cancer often has no symptoms in its early stage; detected at this stage it is curable, but a diagnosis in the final stage (stage IV), when the cancer has had the opportunity to spread to different parts of the body, is difficult to treat successfully, and the patient's chances of survival are much lower. False diagnosis of colorectal cancer means wrong treatment: patients with long-term infections who actually suffer from colon cancer can die as a result. In addition, cancer treatment needs much time and a lot of money. This paper provides a comparative study of the methodologies and algorithms used in colon cancer diagnosis and detection, which can help in proposing a prediction of colon cancer risk levels using the Convolutional Neural Network (CNN) deep learning algorithm.
Internet of Things (IoT) is a fundamental concept of a new technology that will be promising and significant in various fields. IoT is a vision that allows things or objects equipped with sensors, actuators, and processors to talk and communicate with each other over the internet to achieve a meaningful goal. Unfortunately, one of the major challenges that affect IoT is data quality and uncertainty: as data volume increases, noise, inconsistency, and redundancy increase within the data and cause paramount issues for IoT technologies. Since IoT consists of a massive quantity of heterogeneous networked embedded devices that generate big data, computing and analyzing such massive data is very complex. This paper therefore introduces a new model named NRDD-DBSCAN, based on the DBSCAN algorithm and resilient distributed datasets (RDDs), to detect outliers that affect the data quality of IoT technologies. NRDD-DBSCAN has been applied to three datasets of different dimensionality (2-D, 3-D, and 25-D), and the results were promising. Finally, comparisons between NRDD-DBSCAN and previous approaches, such as the RDD-DBSCAN model and the DBSCAN algorithm, show that NRDD-DBSCAN solves the low-dimensionality limitation of RDD-DBSCAN as well as the fact that plain DBSCAN cannot handle IoT-scale data. In conclusion, the proposed NRDD-DBSCAN model can detect the outliers that exist in datasets of N dimensions using resilient distributed datasets (RDDs), and it can enhance the quality of the data in IoT applications and technologies.
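Since NRDD-DBSCAN builds on DBSCAN, a compact pure-Python sketch of plain DBSCAN (not the distributed NRDD variant) shows how density-based clustering flags low-density points as outliers; the `eps` and `min_pts` values and the toy data are illustrative.

```python
import math

def dbscan(points, eps=1.0, min_pts=3):
    """Label each point with a cluster id; -1 marks an outlier (noise)."""
    labels = [None] * len(points)

    def neighbors(i):
        # All points within eps of point i (including i itself).
        return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1           # provisionally noise
            continue
        labels[i] = cluster          # i is a core point: start a cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point rescued from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:   # j is also a core point: keep expanding
                seeds.extend(jn)
        cluster += 1
    return labels

# Two dense groups of sensor readings plus one far-away reading (an outlier).
data = [(0, 0), (0, 1), (1, 0), (1, 1),
        (10, 10), (10, 11), (11, 10), (11, 11),
        (50, 50)]
result = dbscan(data, eps=1.5, min_pts=3)
print(result)
```

The distributed variants (RDD-DBSCAN, NRDD-DBSCAN) partition the data across Spark RDDs and merge per-partition clusters; the core density logic above stays the same.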
E-payment is the key function of any e-business, and it is rising exponentially in today's business world as e-business grows. E-payments have made it easier for people to get by, saving them a lot of money and time. With various forms and devices, payments are more convenient: press on your mobile phone and pay for your orders. To obtain better results, e-business must be linked to e-payments. E-payments offer many systems and opportunities in the field of e-business, but they face many risks and challenges that need to be highlighted in order to find solutions to them. This paper presents an overview of e-payment opportunities, challenges, and risks, especially fraud, as it is one of the most critical threats to the e-payments field and causes huge losses. The paper also discusses the different types of e-payments, their benefits, and the future of e-payment.
Solving an optimization task in any domain is a very challenging problem, especially when dealing with nonlinear problems and non-convex functions. Many meta-heuristic algorithms are very efficient at solving nonlinear functions. A meta-heuristic algorithm is a problem-independent technique that can be applied to a broad range of problems. In this experiment, several evolutionary algorithms are tested, evaluated, and compared with each other: the Genetic Algorithm, Differential Evolution, Particle Swarm Optimization, the Grey Wolf Optimizer, and Simulated Annealing. Their performance is evaluated from many points of view, such as how the algorithm performs throughout the generations and how close the algorithm's result is to the optimal result. Other points of evaluation are discussed in depth in later sections.
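As one concrete instance of the meta-heuristics compared here, Simulated Annealing can be sketched in a few lines on a 1-D non-convex test function. The step size, cooling rate, starting point, and iteration count below are arbitrary demo choices, not the experiment's settings.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=10.0, cooling=0.99, iters=5000):
    """Minimize f starting from x0; worse moves are accepted with a
    probability exp(-delta / t) that shrinks as the temperature cools."""
    random.seed(0)                      # reproducible run for the demo
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # Always accept improvements; sometimes accept worse moves to
        # escape local minima.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# 1-D Rastrigin: non-convex with many local minima; global minimum at x = 0.
rastrigin_1d = lambda x: 10 + x * x - 10 * math.cos(2 * math.pi * x)
x, fx = simulated_annealing(rastrigin_1d, x0=4.3)
print(round(x, 3), round(fx, 4))
```

The acceptance rule is what distinguishes annealing from plain hill climbing: early on, high temperature lets the search jump between basins; late on, it behaves like greedy descent.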
This paper explains how to detect the 2D pose of multiple people in an image. We use Part Affinity Fields (PAFs), a non-parametric representation, for part association, confidence maps for part detection, multi-person parsing using PAFs, and simultaneous detection and association. This method achieves high accuracy and performance regardless of the number of people in the image. The architecture placed first in the inaugural COCO 2016 keypoints challenge and exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, in both performance and efficiency.
Climate change is a major challenge for agriculture and food security and may cause sudden reductions in agricultural productivity. Egypt is considered one of the countries worst affected by climate change, including rising temperatures, erratic rainfall, sandstorms, and extreme heat. The production of strategic crops will suffer significant reductions of 10% to 60% by the middle of the century (2050) due to temperature increases. An Intelligent Decision Support System (IDSS) provides a good way to minimize the effects of climate change in an intelligent manner and supplies the solutions required to successfully penetrate this highly competitive and challenging regulatory environment. These solutions depend on GIS to determine a location and its properties, a climate prediction model, and a knowledge-base model of the agricultural domain (plants, insects, and diseases) that uses domain-expert knowledge extraction, data mining, machine learning, and fuzzy logic to obtain fast and highly accurate solutions.
The Deep Web is an important topic of research. Owing to the complicated structure of deep web pages, extracting their content is a very challenging issue. In this paper, a framework for efficiently discovering deep web data records is proposed. The proposed framework is able to crawl and fetch relevant pages related to a user's text query. To retrieve the relevant pages, this paper proposes a similarity method based on an improved weighting function (ITF-IDF). The framework utilizes the web page's visual features to obtain data records rather than analyzing the HTML source code. To accurately retrieve the data records, an approach called the layout tree is exploited. The proposed framework uses a Noise Filter (NSFilter) algorithm to eliminate noise such as headers, footers, ads, and unnecessary content. Data records are defined as visual blocks with a similar layout. To cluster the visual blocks with similar layouts, this paper proposes a method based on appearance similarity and the similar shape and coordinate (SSC) feature. The experimental results illustrate that the proposed framework outperforms previous data extraction works.
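To illustrate the relevance-ranking step, here is a sketch using standard TF-IDF weighting with cosine similarity. The paper's improved ITF-IDF function is not specified in the abstract, so plain TF-IDF is used as a stand-in; the documents are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weight each term by its frequency in the document and its rarity
    in the corpus (standard TF-IDF, not the paper's ITF-IDF variant)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    idf = {w: math.log(n / df[w]) + 1 for w in df}  # +1 keeps ubiquitous terms nonzero
    return [{w: c * idf[w] for w, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

docs = ["deep web data extraction",
        "extracting data records from deep web pages",
        "weather forecast for tomorrow"]
vecs = tfidf_vectors(docs)
query_sim = [cosine(vecs[0], v) for v in vecs]
print([round(s, 3) for s in query_sim])
```

Treating the first document as the query, the on-topic page scores well above the unrelated one, which is the behavior a crawler needs when deciding which fetched pages to keep.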
Abnormal human body temperature is a natural, widely used indicator of illness. Infrared thermography (IRT) is a fast, non-invasive, non-contact, and passive alternative to ordinary medical thermometers for monitoring and observing human body temperature. In addition, IRT is able to map body surface heat remotely. The last five decades have witnessed steady growth in the use of thermal imaging cameras to obtain relations between thermal physiology and surface temperature. IRT has been used effectively in the diagnosis and detection of breast cancer, diabetic neuropathy, and peripheral vascular disorders, and it has been employed to detect issues related to gynecology, dermatology, the heart, neonatal physiology, and brain imaging. With the advent of modern infrared cameras and data acquisition and processing techniques, real-time high-resolution thermographic images are now possible, which is likely to spur further research in this field. The emergent technology known as the Internet of Things (IoT) has guided practitioners, physicians, and researchers to design innovative solutions in different environments, particularly in medicine and healthcare, using smart sensors, computer networks, and a remote server. This paper proposes an IoT-enabled medical system that diagnoses and detects several medical anomalies remotely, in real time, and simultaneously, based on a combination of IoT and thermal infrared imaging techniques. It will detect and diagnose any abnormality and alert the user through IoT, remotely and in real time.
Diabetes is a chronic defect and disturbance resulting from a metabolic breakdown in carbohydrate metabolism, and it has become a globally serious health problem. In general, detecting diabetes in its early stages can have a significant impact on the treatment of diabetic patients and head off its related side effects. Machine learning is an emerging technology that provides highly valuable prognosis and a deeper understanding of different clusterings of diseases such as diabetes. Because there is a lack of effective analysis tools to discover hidden relationships and trends in data, health information technology has emerged rapidly in the healthcare sector, utilizing Business Intelligence (BI), a data-driven Decision Support System. In this study, we propose a high-precision diagnostic analysis using the k-means clustering technique. In the first stage, noisy, uncertain, and inconsistent data is detected and removed from the dataset through preprocessing, to prepare the data for the clustering model. Then, we apply the k-means technique to a community-health dataset of diabetes-related indicators to separate diabetic patients from healthy ones with highly accurate and reliable results.
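A minimal pure-Python sketch of the k-means step described above (Lloyd's algorithm: alternate assigning points to the nearest centroid and recomputing centroids). The toy "health indicator" points, k, and iteration count are illustrative assumptions, not the study's dataset.

```python
import math
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Partition points into k clusters by alternating assignment
    and centroid update (Lloyd's algorithm)."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Update step: each centroid moves to its cluster's mean.
        centroids = [
            tuple(sum(d) / len(c) for d in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Toy 2-D indicators (e.g. scaled glucose and BMI): a low-risk group
# and a high-risk group.
data = [(0.1, 0.2), (0.15, 0.25), (0.2, 0.1),
        (0.8, 0.9), (0.85, 0.8), (0.9, 0.95)]
centroids, clusters = kmeans(data, k=2)
print(centroids)
```

In practice the preprocessing the abstract mentions (removing noisy and inconsistent rows, scaling features) matters as much as the clustering itself, since k-means is sensitive to outliers and feature scales.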
Data mining techniques are very popular in disease analysis. Health analysis has been improved by technical advances in computation, automation, and data mining, and data mining is now being used in a vast range of areas. The medical field is rich with data, but much of it goes untapped for want of proper analysis. The most serious challenge facing this area is therefore the quality of service provided, which means making the diagnosis correctly and in a timely manner and providing appropriate medications to patients. Thus health information technology has emerged rapidly in the healthcare sector, utilizing Business Intelligence (BI), a data-driven Decision Support System. The various data mining techniques are used and compared in this analysis.
Using Business Intelligence in the cloud is considered a key factor for success in various fields: in 2018, about 66 percent of organizations successful in BI were already using the cloud. 86% of Cloud BI adopters choose Amazon AWS as their first choice, 82% choose Microsoft Azure, 66% choose Google Cloud, and 36% identify IBM Bluemix as their preferred provider of cloud BI services. In recent years, both Business Intelligence and cloud computing have undergone dramatic changes and advancements, and this paper introduces the newest capabilities these developments bring, covering the latest technologies in the field of Cloud (SaaS) BI. The paper also shows that many of the current problems in Cloud (SaaS) BI can be solved by enhancing performance and increasing the use and acceptance of this technology. Many of the key characteristics of Business Intelligence systems tend to complement those of cloud computing systems, and vice versa. Therefore, when integrated properly, these two technologies can strengthen each other's advantages and eliminate each other's weaknesses.
This paper describes a Mobile Agents paradigm for tracking and tracing the effects of the Denial of Service security threat in a Mobile Agent System; an implementation of this paradigm has been developed entirely in the Java programming language. The proposed paradigm considers a range of techniques that provide a high degree of security during the mobile agent system's life cycle in its environment.
The paper highlights two main design objectives. The first is the importance of including various supportive types of agents within a system, e.g., police agents, service agents, etc. The second is the evaluation analysis and the number of checks to be done to trace mobile agents if the provided services are denied along their path: evaluation analysis for detecting tolerance differences between the agent's calculated route before and during its journey, storing agent transactions, storing snapshots of agent state information, periodically checking agent status and task completeness, and lastly a guard agent checking the changed variables of the migrated agent. While tracing and monitoring a mobile agent, the initiator node may destroy it and continue with another. The new paradigm presented in this paper detects and eliminates, with high probability, any degree of tampering within a reasonable amount of time, and also provides scalability of security administration.
Organizations build their customer information data warehouses aiming to enhance customer service processes, which depend on different data mining techniques. Most data mining techniques face a common problem: which is the most important attribute to set as the start node. To overcome this problem, K-MIAS is proposed as a methodology for selecting the K most important attributes that distinguish different customer types. The K-MIAS methodology consists of three phases. The first phase is data preparation, which prepares the data for the computations. The second phase is the K-MIAS algorithm, which ranks the quantification levels of each attribute with respect to all attributes in order to select the K most important ones. The third phase visualizes the data, which helps in understanding it and clarifying the results. In this paper, the K-MIAS methodology is tested on a dataset consisting of 1000 instances from a trainees' questionnaire, and it selects the K most important attributes successfully, with interesting remarks and findings.
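The abstract does not detail the K-MIAS ranking itself, so as a stand-in, the general idea of "select the K attributes that best distinguish customer types" can be illustrated with information gain, a standard attribute-ranking measure (it is explicitly not the K-MIAS algorithm); the toy records and attribute names are invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """How much knowing this attribute reduces uncertainty about the class."""
    total = entropy(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_value.values())
    return total - remainder

def top_k_attributes(rows, labels, k):
    """Rank attributes by information gain and keep the best k."""
    ranked = sorted(rows[0].keys(),
                    key=lambda a: information_gain(rows, labels, a),
                    reverse=True)
    return ranked[:k]

# Toy customer records: "plan" fully predicts the type, "region" is noise.
rows = [{"plan": "gold", "region": "n"}, {"plan": "gold", "region": "s"},
        {"plan": "basic", "region": "n"}, {"plan": "basic", "region": "s"}]
labels = ["vip", "vip", "std", "std"]
print(top_k_attributes(rows, labels, k=1))  # ['plan']
```

Any ranking of this shape gives a principled "start node" for downstream techniques such as decision-tree induction, which is the problem the paper's methodology addresses.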
Human Resources Management (HRM) has become one of the essential interests of managers and decision makers in almost all types of businesses, who adopt plans for correctly discovering highly qualified employees. Accordingly, managements have become interested in the performance of these employees, especially in ensuring that the appropriate person is allocated to the right job at the right time. From here, the role of data mining (DM), whose objective is the discovery of knowledge from huge amounts of data, has been growing. In this paper, DM techniques were utilized to build a classification model for predicting employees' performance using a real dataset collected from the Ministry of Egyptian Civil Aviation (MOCA) through a questionnaire prepared for and distributed to 145 employees. Three main DM techniques were used for building the classification model and identifying the most effective factors that positively affect performance: the Decision Tree (DT), Naïve Bayes, and the Support Vector Machine (SVM). To get a highly accurate model, several experiments were executed with these techniques as implemented in the WEKA tool, enabling decision makers and human resources professionals to predict and enhance the performance of their employees.
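One of the three techniques, Naïve Bayes, can be sketched for categorical HR features in a few lines. The toy records, feature names, and smoothing choice are illustrative assumptions, not the MOCA dataset or WEKA's implementation.

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Count class frequencies and per-class feature-value frequencies."""
    priors = Counter(labels)
    cond = defaultdict(Counter)            # (class, attr) -> value counts
    for row, y in zip(rows, labels):
        for attr, val in row.items():
            cond[(y, attr)][val] += 1
    return priors, cond

def predict_nb(priors, cond, row):
    """Pick the class maximizing P(class) * prod P(value | class),
    with add-one smoothing so unseen values never zero out a class."""
    n = sum(priors.values())
    best, best_p = None, -1.0
    for y, cy in priors.items():
        p = cy / n
        for attr, val in row.items():
            counts = cond[(y, attr)]
            p *= (counts[val] + 1) / (sum(counts.values()) + len(counts) + 1)
        if p > best_p:
            best, best_p = y, p
    return best

# Toy HR records: overtime and training as predictors of a rating.
rows = [{"overtime": "yes", "training": "yes"},
        {"overtime": "yes", "training": "no"},
        {"overtime": "no", "training": "no"},
        {"overtime": "no", "training": "no"}]
labels = ["high", "high", "low", "low"]
model = train_nb(rows, labels)
print(predict_nb(*model, {"overtime": "yes", "training": "yes"}))  # high
```

The conditional-independence assumption that gives Naïve Bayes its name is what keeps training to simple counting, which is why it serves as a fast baseline alongside DT and SVM in studies like this one.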
The demand for healthcare IT and its analytics has increased in the last few years, to improve quality of care (e.g., ensuring that patients receive the correct medication) and thereby the efficiency of clinical quality, safety, and operations.

The medical field is rich with information: there is a variety and abundance of data, but it goes untapped for want of a correct and effective way to extract the right knowledge. The most serious challenge facing this area is therefore the quality of service provided, which means making the diagnosis properly and in a timely manner and providing appropriate medications to patients, because poor diagnosing can lead to serious, unacceptable consequences. Because there is a lack of effective analysis tools to discover hidden relationships and trends in data, health information technology has emerged rapidly in the healthcare sector, utilizing Business Intelligence (BI), a data-driven Decision Support System. BI has developed from the 1990s to now and has gradually become one of the most important information systems applied in any sector; it makes it possible to deal with huge amounts of data and extract useful knowledge to support decision making. Data mining (DM) is a data processing technology that can be regarded as part of a BI system, but it can also be considered an independent, integrated technology that can treat mass data and extract hidden relationships from it.

This introduction highlights the importance of applying business intelligence using data mining techniques to help medical professionals in the healthcare sector rapidly diagnose and predict diseases in any patient, and beyond that to detect the disease's complications in the patient, which will decrease the country's overall expenditure. Briefly, this is the central research idea that motivates this research.
With the new era of Information and Communication Technology, collaborative learning is considered an e-learning approach in which learners are able to interact socially with one another, as well as with instructors. In essence, learners work together to expand their knowledge of a particular subject or skill. Nowadays, social networks can be used as an e-learning platform: one would typically log in and collaborate with other learners on a specific topic, using the social network as the common working space. In this paper, a new approach is presented for collaborative learning through social networks, and the behavior of the proposed approach is described. The proposed collaborative e-learning approach consists of three modules. The first is a mobile application used for collecting data about the user and his/her interests, which may be considered a data-entry process. The second module obtains location information using the GPS of the mobile device. The third module is a matching algorithm, which performs two main functions: matching the interests of the users and displaying the results based on these interests, and sorting the results by the nearest user, in addition to routing information.
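The matching algorithm's two functions (interest matching, then nearest-first ordering via GPS) can be sketched as below. All user records, field names, and coordinates are invented for illustration; the paper's actual mobile schema is not reproduced here.

```python
import math

# Illustrative user records; names and fields are assumptions, not the paper's schema.
users = [
    {"name": "A", "interests": {"python", "ml"},    "lat": 30.05, "lon": 31.23},
    {"name": "B", "interests": {"ml", "databases"}, "lat": 30.06, "lon": 31.25},
    {"name": "C", "interests": {"history"},         "lat": 29.95, "lon": 31.10},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match(me, candidates):
    """Keep users sharing at least one interest, ordered nearest first."""
    hits = [u for u in candidates if u["interests"] & me["interests"]]
    hits.sort(key=lambda u: haversine_km(me["lat"], me["lon"], u["lat"], u["lon"]))
    return [u["name"] for u in hits]

me = {"name": "Me", "interests": {"ml"}, "lat": 30.05, "lon": 31.24}
print(match(me, users))  # -> ['A', 'B']: C shares no interest, A is nearer than B
```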
This paper describes how to add recommended resources in blended learning systems. Blended learning is a term increasingly used to describe the way e-learning is combined with traditional classroom methods and independent study to create a new, hybrid teaching methodology. It represents a much greater change in basic technique than simply adding computers to classrooms; in many cases, it represents a fundamental change in the way teachers and students approach the learning experience. This paper describes how an algorithm can be used to recommend resources in the blended learning field.
Recommender systems are built to help us easily find the most relevant information on the internet. Unlike search engines, recommender systems bring information to the user without any manual search effort, by exploiting similarities between users and/or items. There are many methods for building a recommender system, and these methods can be applied to many specific domains such as shopping, movies, and music. Since each application domain has its own specific needs, the recommendation method differs accordingly. As a specific application domain, news recommender systems aim to give users the most relevant news article recommendations according to their personal interests and preferences. News recommendation has specific challenges compared to other domains, so while general methods are used in news recommendation, researchers also need new methods to make proper news recommendations. The proposed framework for building automatic news recommendations is composed of two modules: an offline module that preprocesses data to build reader and content models, and an online module that uses these models on-the-fly to recognize the reader's needs and goals and predict a recommendation list. The recommended objects are obtained using a range of recommendation strategies based mainly on content-based filtering and collaborative filtering, each applied separately or in combination.
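The content-based filtering side of such a framework can be sketched as follows: build a reader profile from already-read articles, then rank unread ones by cosine similarity. The articles, IDs, and term vectors are toy assumptions, not the paper's models.

```python
import math
from collections import Counter

# Toy article corpus; IDs and text are illustrative only.
articles = {
    "a1": "election results government vote",
    "a2": "football match league goal",
    "a3": "government policy vote reform",
}

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(read_ids, k=1):
    """Content-based filtering: a reader profile is the sum of the
    vectors of read articles; unread articles are ranked by similarity."""
    profile = Counter()
    for rid in read_ids:
        profile += tf_vector(articles[rid])
    scored = [(cosine(profile, tf_vector(txt)), aid)
              for aid, txt in articles.items() if aid not in read_ids]
    scored.sort(reverse=True)
    return [aid for _, aid in scored[:k]]

print(recommend({"a1"}))  # a3 shares "government"/"vote" with a1 -> ['a3']
```

A collaborative-filtering module would replace the term vectors with per-user rating vectors but keep the same similarity-and-rank shape.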
Data preprocessing is a crucial step through which data can be cleaned of any quality defects, which include duplicates, missing values, irrelevant features, outliers, and other problems. This paper presents a multi-dimensional information quality framework that enhances the accuracy of business intelligence applications by eliminating quality issues in the input data. The results show that our framework enhances the quality of the data and works effectively.
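Three of the quality dimensions just listed (duplicates, missing values, outliers) can be sketched on toy records. The field names, values, and z-score threshold below are illustrative assumptions, not the framework's actual rules.

```python
import statistics

# Toy records; field names are illustrative. None marks a missing value.
rows = [
    {"id": 1, "age": 25, "salary": 3000},
    {"id": 1, "age": 25, "salary": 3000},   # exact duplicate
    {"id": 2, "age": None, "salary": 3200},
    {"id": 3, "age": 31, "salary": 90000},  # salary outlier
    {"id": 4, "age": 28, "salary": 3100},
]

def preprocess(records, numeric_field="salary", z_cut=1.5):
    # 1. Catch duplicates: keep the first occurrence of identical rows.
    seen, unique = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            unique.append(dict(r))
    # 2. Fill missing ages with the rounded mean of the observed ones.
    ages = [r["age"] for r in unique if r["age"] is not None]
    mean_age = round(statistics.mean(ages))
    for r in unique:
        if r["age"] is None:
            r["age"] = mean_age
    # 3. Flag outliers by z-score on the chosen numeric field.
    vals = [r[numeric_field] for r in unique]
    mu, sd = statistics.mean(vals), statistics.pstdev(vals)
    for r in unique:
        r["outlier"] = abs(r[numeric_field] - mu) / sd > z_cut if sd else False
    return unique

clean = preprocess(rows)
print(len(clean), [r["id"] for r in clean if r["outlier"]])  # 4 [3]
```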
Many decisions are made daily based on simple mental processing. This way of decision making is suitable for simple personal daily issues, but when decisions concern general and sensitive sectors it is unacceptable. Nowadays, Geospatial Information Systems (GIS) help in making more accurate decisions in many sectors by drawing maps and visualizing data built on accurate information, so that one can clearly judge which option is best for a particular situation. This paper proposes a framework that integrates Geospatial Information Systems with hybrid cloud computing so that they work together for greater benefit: applying the concept of cloud computing overcomes the flaws of desktop GIS, including the huge startup cost and limited storage capacity, and provides location-independent accessibility, so the GIS can be accessed from anywhere at any time. The hybrid cloud was chosen for integration with GIS to gain the elasticity and security needed for dealing with both private and public data. This integration is presented in three dimensions. The first is an architecture of seven segments that illustrates the main structure of the hybrid cloud GIS across a mix of private and public environments. The second is the types of participants and their workflow within the two environments. The last dimension is a case study applying this integration in the health sector in Egypt.
These days most people use social media sites such as Facebook and Twitter to review, buy, and complain about products or services. Accordingly, most companies have moved from traditional CRM to social CRM (SCRM) so that they can retain current customers, compete with others, and attract new customers. Starting from the importance of customer reviews of products and services for companies, we began working on this paper. A sentiment analysis model was used to extract customers' opinions about a product or service, and manual analysis was then performed on the negative and positive reviews. The result of this research is a set of reports that help business decision makers enhance SCRM.
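The sentiment step can be sketched with a tiny lexicon-based scorer. This is only an illustration: the word lists and reviews are invented, and the paper used a trained sentiment model rather than a fixed lexicon.

```python
# Minimal lexicon-based sentiment sketch; the word lists are illustrative
# assumptions, not a real sentiment lexicon.
POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"bad", "broken", "slow", "hate"}

def sentiment(review):
    """Count positive minus negative words and map the sign to a label."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = ["Great phone, love the camera", "Battery is bad and support is slow"]
print([sentiment(r) for r in reviews])  # -> ['positive', 'negative']
```

A learned model replaces the fixed word lists with weights fitted to labelled reviews, but the score-then-threshold shape is the same.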
Student advising is an important and time-consuming effort in academic life. Academic advising has been implemented to fill the gap between students and academic routine by moving advising, complaining, evaluating, and suggesting from traditional ways to an automated one. The researcher surveyed the existing literature and found that many institutions have implemented computerized solutions to enhance their overall advising experience. In this paper the researcher introduces an automated mechanism for academic advising in the university system, presenting an overview of the development and implementation of a new model of an e-Academic Advising System as a web-based application. In the proposed model, staff and advisors can follow up on student complaints and suggestions; registered students can complain, evaluate, and make suggestions on any subject; and the head of the department receives KPI reports to follow up on the department. A need therefore arises for a system that can detect students' problems and provide them with suitable feedback. The aim of this paper is to implement a system that facilitates and assists academic advisors in their efforts to provide quality, accurate, and consistent advising services to their students, and to explore the design and implementation of a computerized tool to facilitate this process. The paper discusses the methodologies used in developing the Academic Advising System, shows that academic advising is a process more than a final product or system, and provides a technical vision for the system. The e-Academic Advising web application was developed and implemented with "Ruby on Rails," a web framework that runs on the Ruby programming language, and "PostgreSQL" as the database engine.
The incredible rise of online networks shows that these networks are complex and involve massive data, giving very strong interest to the set of techniques developed for mining them. The clique problem is a well-known NP-hard problem in graph mining, and one of its fundamental applications is community detection, which helps to understand and model the network structure, a fundamental problem in several fields. In the literature, the exponentially increasing computation time of this problem makes the quality of existing solutions limited and infeasible for massive graphs. Furthermore, most of the proposed approaches are able to detect only disjoint communities. In this paper, we present a new clique-based approach for fast and efficient overlapping community detection. The work overcomes the shortfalls of the clique percolation method (CPM), one of the most popular and commonly used methods in this area. The shortfalls stem from the brute-force algorithm for enumerating maximal cliques and from missing out many vertices, which leads to poor node coverage. The proposed work overcomes these shortfalls by producing the NMC method for enumerating maximal cliques and then detecting overlapping communities at three different community scales, based on three different depth levels, to assure high node coverage and detect the largest communities. The clustering coefficient and cluster density are used to measure quality. The work also provides experimental results on benchmark real-world networks to demonstrate the efficiency of the proposed algorithm and to compare it with CPM: the proposed algorithm is able to quickly discover maximal cliques and detect overlapping communities, with interesting remarks and findings.
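The maximal-clique enumeration that underlies CPM is typically the classic Bron-Kerbosch recursion. The sketch below shows that baseline on an invented toy graph; it is not the proposed NMC method, whose details the abstract does not give.

```python
def bron_kerbosch(R, P, X, adj, out):
    """Classic Bron-Kerbosch enumeration of maximal cliques (no pivoting).
    R: current clique, P: candidate vertices, X: already-processed vertices."""
    if not P and not X:
        out.append(sorted(R))  # R cannot be extended: it is maximal
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

# Toy undirected graph as an adjacency-set dict (illustrative, not a benchmark).
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)]
adj = {v: set() for e in edges for v in e}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(cliques))  # -> [[1, 2, 3], [3, 4, 5]]
```

CPM then merges cliques that share k-1 vertices into communities; the exponential cost the abstract criticizes lives in the enumeration step shown here.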
Electronic services have become an important, integral part of information systems, supported by the term e-government. Many traditional business systems are now shifting to electronic systems, amid the tremendous information stored inside them. There is much research on business information systems and their importance and advantages, but transforming business information systems to gain benefit, especially in government services, is more difficult. This paper discusses the factors affecting the transformation of a business information system, taking the information systems of the State Council of Egypt as a case study of transformation to an electronic inquiries system.
In recent years, cloud computing has become a mainstream technology in the IT industry, offering software, platform, and infrastructure as a service over the internet on a global scale by centralizing storage, memory, and bandwidth. This new technology creates opportunities for new business operations and benefits, but using cloud computing also involves various risk issues. This paper attempts to identify cloud computing approaches, highlights its business opportunities, and helps cloud computing users analyze the risks and produce different approaches to resolving them. The paper is targeted at business and IT leaders considering a move to the cloud for some or all of their business applications.
Because the health sector is one of the most important and sensitive sectors, decisions related to it are always critical. They must therefore be built on accurate information, so that the positives and negatives of each option can be weighed and all the alternatives considered to determine which option is best for a particular situation. The health sector ultimately faces basic challenges of operations, logistics, resource allocation, customers, and management. As information can help overcome these hurdles, this paper examines the relation between spatial analysis based on cloud computing and improvement of the health sector. It presents the vital role of spatial analysis and health geography in improving health-sector services, as well as the significance of cloud-based GIS. Finally, it presents the use of cloud-based GIS for the distribution of hospitals in Egypt as a case study.
The incredible rise of online networks shows that these networks are complex and involve massive data, giving very strong interest to the set of techniques developed for mining them. One of the fundamental applications is community detection, which helps to understand and model the network structure and has been a fundamental problem in several fields. Community detection is of great importance in sociology, biology, and computer science, disciplines where systems are often represented as graphs. The problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. Beyond understanding and modeling network structure, community detection can be useful in applications such as rating prediction, link and top-N recommendation, and trend analysis, and can be the main provider engine for recommender systems. Several community detection approaches have been proposed and applied to many domains in the literature. This paper presents a comparative study of two existing community detection approaches and discusses some of their vital applications for online networks.
This paper aims to estimate the effectiveness of reusing a learning object (LO) by having specialized reviewers evaluate the aspects that most affect its reuse. In this study we propose a metric that aims to give very accurate results about the effectiveness of reusing an LO, achieved by letting a group of reviewers cooperate in reviewing every LO, with each reviewer reviewing only his or her specialized area. The reviewers in this metric are categorized into three groups (academic, technical, and students) while the evaluated elements are categorized into eight categories: gender, re-tasking & repurposing, accessibility, appropriateness, content quality, metadata, motivation, and usability. The results of the proposed metric are compared against the most famous learning-object metrics to indicate its reliability and accuracy relative to the other metrics.
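Aggregating per-group category scores into one reusability value can be sketched as a weighted mean. The group weights, score scale, and category assignments below are illustrative assumptions; the paper does not state its actual weighting.

```python
# Illustrative group weights (assumed, not the paper's values).
GROUP_WEIGHTS = {"academic": 0.4, "technical": 0.35, "student": 0.25}

# Each reviewer group scores only the categories in its specialty (scale 1-5);
# the category assignments here are invented for the sketch.
scores = {
    "academic":  {"content quality": 5, "appropriateness": 4, "metadata": 4},
    "technical": {"accessibility": 4, "re-tasking & repurposing": 3, "usability": 4},
    "student":   {"motivation": 5, "usability": 5},
}

def reusability(score_table, weights):
    """Weighted mean of each group's average category score."""
    total = 0.0
    for group, cats in score_table.items():
        group_avg = sum(cats.values()) / len(cats)
        total += weights[group] * group_avg
    return round(total, 3)

print(reusability(scores, GROUP_WEIGHTS))  # -> 4.267
```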
Data classification is one of the most important tasks in data mining; it identifies to which category a new observation belongs on the basis of a training set. Preparing data before doing any data mining is an essential step to ensure the quality of the mined data. Different algorithms are used to solve classification problems. In this research four algorithms, namely Support Vector Machine (SVM), C5.0, K-Nearest Neighbor (KNN), and Recursive Partitioning and Regression Trees (rpart), are compared before and after applying two feature selection techniques: Wrapper and Filter. This comparative study is implemented using the R programming language. A direct-marketing-campaigns dataset from a banking institution is used to predict whether a client will subscribe to a term deposit. The dataset is composed of 4521 instances: 3521 instances (78%) as the training set and 1000 instances (22%) as the testing set. The results show that C5.0 is superior to the other algorithms before applying feature selection, and SVM is superior to the others after applying it.

Keywords: Classification, Feature Selection, Wrapper Technique, Filter Technique, Support Vector Machine (SVM), C5.0, K-Nearest Neighbor (KNN), Recursive Partitioning and Regression Trees (rpart).

I. INTRODUCTION

The problem of data classification has numerous applications in a wide variety of mining applications, because it attempts to learn the relationship between a set of feature variables and a target variable of interest. Excellent overviews on data classification may be found in the literature. Classification algorithms typically contain two phases: a training phase, in which a model is constructed from the training instances, and a testing phase, in which the model is used to assign a label to an unlabeled test instance [1]. Classification consists of predicting a certain outcome based on a given input. To predict the outcome, the algorithm processes a training set containing a set of attributes and the respective outcome, usually called the goal or prediction attribute, and tries to discover relationships between the attributes that make it possible to predict the outcome. The algorithm is then given a data set, called the prediction set, which contains the same set of attributes except for the prediction attribute, which is not yet known; it analyses the input and produces predicted instances. The prediction accuracy defines how "good" the algorithm is [2]. The four classifiers used in this paper are shown in Figure 1. Many irrelevant, noisy, or ambiguous attributes may be present in the data to be mined; they need to be removed because they affect the performance of the algorithms. Attribute selection methods are used to avoid overfitting, improve model performance, and provide faster and more cost-effective models [3]. The main purpose of the Feature Selection (FS) approach is to select a minimal and relevant feature subset for a given dataset while maintaining its original representation. FS not only reduces the dimensionality of the data but also enhances the performance of a classifier, so the task of FS is to search for the best possible feature subset for the problem to be solved [4]. This paper is organized as follows. Section 2 describes the four classification algorithms, Section 3 the FS techniques used, Section 4 our experimental methodology, and Section 5 the results. Finally, Section 6 provides conclusions and future work.
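A Filter technique, unlike a Wrapper, scores features independently of any classifier. The sketch below illustrates that distinction with a Pearson-correlation filter on an invented toy dataset; the paper's actual R implementation and chosen filter criterion are not reproduced here.

```python
import math
import statistics

# Toy dataset: rows of numeric features plus a binary class label.
# Feature f2 is deliberate noise; f0 and f1 separate the classes.
X = [
    [1.0, 2.0, 5.0], [1.2, 2.1, 1.0], [0.9, 1.9, 9.0],   # class 0
    [3.0, 4.0, 2.0], [3.1, 4.2, 8.0], [2.9, 3.9, 4.0],   # class 1
]
y = [0, 0, 0, 1, 1, 1]

def filter_select(X, y, k):
    """Filter FS: score each feature by |Pearson correlation| with the
    label, without consulting any classifier, and keep the top k."""
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx, my = statistics.mean(col), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in col))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        scores.append((abs(cov / (sx * sy)), j))
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:k])

print(filter_select(X, y, 2))  # -> [0, 1]: the noise feature f2 is dropped
```

A Wrapper would instead train the classifier on each candidate subset and keep the subset with the best validation accuracy, which is why it is slower but classifier-aware.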
Sentiment analysis, also called opinion mining, is the field of study that analyzes people's opinions, sentiments, evaluations, appraisals, attitudes, and emotions towards entities such as products, services, organizations, individuals, issues, events, and topics, and their attributes. Starting from the importance of sentiment analysis for individuals generally, and for large organizations specifically, we started digging in this paper. GraphLab was used to build the sentiment models. Several algorithms, including SVM, logistic regression, and boosted trees, were used along with text feature selection techniques to predict positive and negative sentiments. These classifiers were applied to a hotel-reviews dataset obtained from the TripAdvisor website to emulate real customer opinions. The results showed that the SVM classifier combined with the n-gram feature selection technique was superior to the others.
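The n-gram features that won in these experiments can be sketched in a few lines. The example sentence is invented; the point is that word bigrams keep short context, such as negation, that single-word features lose.

```python
def word_ngrams(text, n=2):
    """Word-level n-grams as a feature set; with n=2, 'not clean' survives
    as one feature, whereas a unigram model sees 'not' and 'clean' apart."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

print(sorted(word_ngrams("the room was not clean")))
# -> ['not clean', 'room was', 'the room', 'was not']
```

In a full pipeline each n-gram becomes one dimension of the feature vector fed to the SVM or other classifier.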
Social media and mobile commerce are changing the way organizations do business; social networking has become popular and has raised controversial questions about its profitability and future influence. This paper provides a general snapshot of social networking and mobile commerce, summarizes the benefits and limitations of social commerce, and describes mobile social commerce. It also focuses on understanding the relationship between m-commerce and e-commerce, discusses the current advantages and disadvantages of each, identifies different m-commerce applications, and studies security issues in online marketing and their effect on security and privacy. This paper is targeted at business and IT leaders considering social media and mobile applications in some or all of their business applications.
With online stores booming, more and more people today shop online. These networks are complex and involve massive data, giving very strong interest to a set of techniques developed for mining these graphs, which enable e-retailers to make better recommendations and comprehend buying patterns. Community detection is one of the fundamental applications that provide a solution for online-store networks, in disciplines where systems are often represented as graphs. It helps to understand and model the network structure; it provides a mechanism for executives to assess consumer opinion and use this information to improve products, customer service, and perception; and it can be useful in applications such as rating prediction, link and top-N recommendation, and trend analysis, as well as being the main provider engine for recommender systems. In the literature, the exponentially increasing computation time of this problem makes the quality of existing solutions limited and impractical, and most proposed approaches detect only disjoint communities. The paper first focuses on the implementation of a new clique-based overlapping community detection algorithm consisting of two phases: the first enumerates the maximal cliques, and the second detects the overlapping communities among the discovered maximal cliques using three different community scales, based on three different depth levels, to discover the largest communities and assure high node coverage. The second part of the work ranks the top-N nodes within the discovered communities in two ways. The work provides experimental results on the Amazon products co-purchasing network, one of the most popular online-store networks, with clustering coefficient and cluster density used to measure quality. The results show that our algorithm can extract meaningful communities at different scales from this network, rank the top-N nodes at these different scales, and reveal large-scale patterns in the interaction habits of customers.
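One simple way to rank top-N nodes inside a detected community is by within-community degree. The toy co-purchasing edges and communities below are invented for illustration; the paper's two actual ranking strategies are not specified in the abstract and are not reproduced here.

```python
from collections import Counter

# Toy co-purchasing edges and two overlapping communities (illustrative only).
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]
communities = [{1, 2, 3, 4}, {3, 4, 5, 6}]

def top_n(community, edges, n=2):
    """Rank nodes of one community by their degree counted only over
    edges whose endpoints both lie inside that community."""
    deg = Counter()
    for a, b in edges:
        if a in community and b in community:
            deg[a] += 1
            deg[b] += 1
    return [v for v, _ in deg.most_common(n)]

ranked = [top_n(c, edges) for c in communities]
print(ranked)  # node 3 dominates the first community, node 4 the second
```

Because the communities overlap (nodes 3 and 4 belong to both), the same product can rank highly in more than one community, which is exactly what overlapping detection allows.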
Research Interests:
— Data classification is one of the most important tasks in data mining, which identify to which categories a new observation belongs, on the basis of a training set. Preparing data before doing any data mining is essential step to ensure... more
— Data classification is one of the most important tasks in data mining, which identify to which categories a new observation belongs, on the basis of a training set. Preparing data before doing any data mining is essential step to ensure the quality of mined data. There are different algorithms used to solve classification problems. In this research four algorithms namely support vector machine (SVM), C5.0, K-nearest neighbor (KNN) and Recursive Partitioning and Regression Trees (rpart) are compared before and after applying two feature selection techniques. These techniques are Wrapper and Filter. This comparative study is implemented throughout using R programming language. Direct marketing campaigns dataset of banking institution is used to predict if the client will subscribe a term deposit or not. The dataset is composed of 4521 instances. 3521 instance as training set 78%, 1000 instance as testing set 22%. The results show that C5.0 is superior to other algorithms before implementing FS technique and SVM is superior to others after implementing FS. Keywords— Classification, Feature Selection, Wrapper Technique, Filter Technique, Support Vector Machine (SVM), C5.0, K-Nearest Neighbor (KNN), Recursive Partitioning and Regression Trees (Rpart). I. INTRODUCTION The problem of data classification has numerous applications in a wide variety of mining applications. This is because the problem attempts to learn the relationship between a set of feature variables and a target variable of interest. Excellent overviews on data classification may be found in Classification algorithms typically contain two phases. The first one is training phase in which a model is constructed from the training instances. The second is testing phase in which the model is used to assign a label to an unlabeled test instance[1]. Classification consists of predicting a certain outcome based on a given input. 
In order to predict the outcome, the algorithm processes a training set containing a set of attributes and the respective outcome, usually called goal or prediction attribute. The algorithm tries to discover relationships between the attributes that would make it possible to predict the outcome. Next the algorithm is given a data set, called prediction set, which contains the same set of attributes, except for the prediction attribute – not yet known. The algorithm analyses the input and produces predicted instances. The prediction accuracy defines how " good " the algorithm is [2]. The four classifiers used in this paper are shown in (figure 1). But many irrelevant, noisy or ambiguous attributes may be present in data to be mined. So they need to be removed because it affects the performance of algorithms. Attribute selection methods are used to avoid over fitting and improve model performance and to provide faster and more cost-effective models [3]. The main purpose of Feature Selection (FS) approach is to select a minimal and relevant feature subset for a given dataset and maintain its original representation. FS not only reduces the dimensionality of data but also enhance the performance of a classifier. So, the task of FS is to search for best possible feature subset depending on the problem to be solved [4]. This paper is organized as follows. Section 2 refers to the four algorithms to deal with the classification problem. Section 3 describes the used FS techniques. Section 4 demonstrates our experimental methodology then section 5 presents the results. Finally section 6 provides conclusion and future work.
—Sentiment analysis, also called opinion mining, is the field of study that analyzes people's opinions, sentiments, evaluations, appraisals, attitudes, and emotions towards entities such as products, services, organizations, individuals, issues, events, topics, and their attributes. Motivated by the importance of sentiment analysis for individuals in general, and for large organizations in particular, we investigate it in this paper. GraphLab was used to build the sentiment models. Several algorithms were used along with text feature selection techniques to predict positive and negative sentiments, including SVM, logistic regression, and boosted trees. These classifiers were applied to a hotel-reviews dataset obtained from the TripAdvisor website to emulate real customer opinions. The results showed that the SVM classifier combined with the n-grams feature selection technique was superior to the others.
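The winning configuration (an SVM over n-gram text features) can be sketched as follows. The paper itself used GraphLab; this is an illustrative scikit-learn analogue, and the toy reviews below are invented examples, not the TripAdvisor dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy labeled reviews standing in for the hotel-reviews dataset.
reviews = [
    "great hotel, friendly staff and a clean room",
    "lovely view, excellent breakfast",
    "dirty room and rude staff",
    "terrible service, would not stay again",
]
labels = ["pos", "pos", "neg", "neg"]

# Unigram + bigram counts as features, fed into a linear SVM.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(reviews, labels)

print(model.predict(["clean room and great staff"]))
```

Swapping `LinearSVC` for `LogisticRegression` or `GradientBoostingClassifier` reproduces the paper's other two classifier families within the same pipeline.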
International Journal of Advanced Networking and Applications (IJANA), Volume 08, Issue 01, July-August 2016, pp. 2997-3002.
International Journal of Engineering Research and Application (IJERA), Vol. 6, Issue 7 (Part 3), July 2016, pp. 63-68.
Volume 150 – No.12, September 2016, pp 44-47
A Word from Dr. Mona Nasr during the Coronavirus lockdown period
How to use Zoom for online meetings: a video for staff members and students
ICT-Learn 2019
Ramsis Hilton
26-27 September 2019

ITU/UNESCO Regional Digital Inclusion Week for the Arab States

13th International Forum for Smart Learning
Augmented Reality in Learning, on the 26th, 2:00 P.M. - 3:00 P.M.
A trophy (The Best Senior Level STEM Executive 2015) from the Meera Kaul Foundation. The foundation focuses on women and women-led enterprises in STEM (Science, Technology, Engineering, and Mathematics) in North America, Eastern Europe, and selected countries in the Middle East, Africa, China, and India.
Full fellowship from the Bibliotheca Alexandrina (BA) in cooperation with the German Academic Exchange Service, through DAAD Kairo, Alexandria, Egypt.
Full fellowship from the Bibliotheca Alexandrina (BA) in cooperation with the Arab Regional Office of The World Academy of Sciences for the advancement of science in developing countries (TWAS-ARO), Alexandria, Egypt.
Full fellowship from The Cyprus Institute for participating in the Third SESAME-LinkSCEEM Cross-sectional HPC Workshop, Cairo, Egypt.
Full fellowship from The Cyprus Institute for participating in the Second SESAME-LinkSCEEM Summer School, Allan, Jordan.
Full fellowship from The Cyprus Institute for participating in the Second LinkSCEEM/V-MUST, Alexandria, Egypt.
As the Internet of Things (IoT) paradigm becomes increasingly involved in our lives, many threats and attacks against IoT security and privacy arise. Left without extensive consideration, such security and privacy issues and challenges can threaten the very existence of the IoT. These issues stem from many causes, one of which is the lack of universally accepted IoT security techniques, methods, and guidelines. Such methods and techniques would greatly guide IoT developers and engineers and pave the way for developing and implementing secure IoT systems. Our contribution focuses on this objective: we propose a comprehensive IoT security and privacy framework based on the seven levels of the IoT reference architecture introduced by Cisco, in which a set of appropriate security techniques and guidelines is specified for each level.
Additionally, we identify several critical techniques that can be applied to block many possible attacks against the seven IoT levels.
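The shape of such a per-level framework can be sketched as a simple mapping. The seven level names below follow Cisco's IoT reference model; the techniques listed for each level are generic illustrative examples, not the specific set the paper's framework prescribes.

```python
# Illustrative mapping of example security techniques onto the seven levels
# of Cisco's IoT reference model. Technique lists are examples only.
IOT_SECURITY_GUIDELINES = {
    1: ("Physical Devices & Controllers", ["tamper resistance", "secure boot"]),
    2: ("Connectivity", ["TLS/DTLS", "mutual authentication"]),
    3: ("Edge (Fog) Computing", ["traffic filtering", "anomaly detection"]),
    4: ("Data Accumulation", ["encryption at rest", "access control"]),
    5: ("Data Abstraction", ["data anonymization", "integrity checks"]),
    6: ("Application", ["input validation", "secure APIs"]),
    7: ("Collaboration & Processes", ["user authentication", "security training"]),
}

for level, (name, techniques) in IOT_SECURITY_GUIDELINES.items():
    print(f"Level {level} ({name}): {', '.join(techniques)}")
```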
A data quality dimension is a term identifying a quality measure related to many data elements, including an attribute, record, table, or system, or to more abstract groupings such as a business unit, company, or product range. This paper presents a thorough analysis of three data quality dimensions: completeness, relevance, and duplication. It also covers the commonly used techniques for each dimension. Regarding completeness, predictive value imputation, distribution-based imputation, KNN, and further methods are investigated. Moreover, the relevance dimension is explored via the filter and wrapper approaches, rough set theory, hybrid feature selection, and other techniques. Duplication is investigated through many techniques, such as K-medoids, the standard duplicate elimination algorithm, online record matching, and sorted blocks.
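One of the completeness techniques surveyed, KNN imputation, fills each missing value from the most similar complete records. A minimal sketch using scikit-learn's `KNNImputer` on an invented toy table:

```python
import numpy as np
from sklearn.impute import KNNImputer

# A small attribute table with missing entries marked as np.nan
# (columns: age, income, years of service; values are invented).
records = np.array([
    [25.0, 50000.0,  3.0],
    [27.0,  np.nan,  3.0],
    [52.0, 98000.0, 10.0],
    [50.0, 95000.0,  np.nan],
])

# Each missing cell is replaced by the mean of that column over the
# 2 nearest complete neighbors of the incomplete record.
imputer = KNNImputer(n_neighbors=2)
completed = imputer.fit_transform(records)
print(completed)
```

Distribution-based imputation differs in that it draws replacements from a model of the attribute's distribution rather than from nearest neighbors.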
The 12th International Arab Conference on Information Technology (ACIT), Naif Arab University for Security Sciences