Prof. Mona Nasr
Helwan University, Faculty of Computers and Information, Faculty Member
- Research interests: Mobile Learning, Concept Maps, Artificial Intelligence, Software Reengineering, Mobile Banking, Software Engines, Software Engineering, Information Systems (Business Informatics), Blended and Mobile Learning, Pervasive Computing, Mobile Computing, Mobile Commerce, Mobility/Mobilities, Technology, Computer Science, Informatics, Communication, Agile Software Process Improvement, Agile Project Management, Cloud Computing, Information Systems, Information Systems Development and Management, IS Project Management, Enterprise Information Systems, Agile Software Development, Wireless Computing, and E-Commerce
- Prof. Nasr is a Professor in the Information Systems Department, Faculty of Computers and Artificial Intelligence, Helwan University. She has held many positions, including Chief Information Officer (CIO) of Helwan University (HU), Scientific Computing Center Manager at HU, Head of the Information Systems Department at the Faculty of Computers and Artificial Intelligence, HU, Vice Dean for Community Service and Environmental Development at FCAI-HU, and Vice Dean at the Canadian International College (CIC), El Sheikh Zayed Campus.
She received the M.Sc. degree in Information Systems from the Faculty of Computers and Artificial Intelligence, Helwan University, Egypt, in 2000, and the Ph.D. degree in Information Systems from the same faculty in 2006.
Prof. Nasr is a well-known speaker with a charismatic presence at academic conferences and seminars, university and public lectures, and radio and television programs in her area of expertise. She has published many articles in international journals and conferences and is an active reviewer for numerous international journals.
Prof. Nasr attended the Digital Transformation & e-Governance Course (16th Nov. to 9th Dec. 2021, Tallinn, Estonia), organized by the e-Governance Academy (eGA) and UNDP Lebanon in cooperation with the Estonian e-Governance Academy, sharing best practices on e-Governance and digitization in general.
Prof. Nasr attended the Teaching the Teacher (TTT) Program in Artificial Intelligence at EPITA, School of Engineering and Computer Science (EPITA, Paris, France), Nov. to Dec. 2020.
She is a member of many other societies, including IEDRC, IACSIT, MJC, IAOE, TWOWS/OWSD, CSTA, and ArabWIC ABI.
Prof. Nasr's awards:
Selected among the top 50 women achievers in the Middle East in 2021 supporting their organizations on the journey of business and digital transformation: IDC's list of Female Achievers 2021 in the category "CIO Pioneers" (Women Transforming Technology Summit, 7th Oct. 2021, Middle East), Dubai, UAE.
Selected among the Top 10 Women in Technology and Business, for female business and IT leaders who have positively impacted the business outcomes, work cultures, and levels of innovation required to drive significant improvements in financial performance, at the IDC CIO Summit Egypt, 14th June 2021.
African Women in Technology Role Model from AFCHIX, 2017
A trophy (The Best Senior Level STEM Executive 2015) from the Meera Kaul Foundation. The foundation focuses on women and women-led enterprises in STEM (Science, Technology, Engineering, and Mathematics) in North America, Eastern Europe, and selected countries in the Middle East, Africa, China, and India.
Full fellowship from the Bibliotheca Alexandrina (BA) in cooperation with the German Academic Exchange Service through DAAD Kairo, Alexandria, Egypt.
Full fellowship from the Bibliotheca Alexandrina (BA) in cooperation with the Arab Regional Office of the World Academy of Sciences for the advancement of science in developing countries (TWAS-ARO), Alexandria, Egypt.
Full fellowship from The Cyprus Institute for participating in the Third SESAME-LinkSCEEM Cross-sectional HPC Workshop, Cairo, Egypt.
Full fellowship from The Cyprus Institute for participating in the Second SESAME-LinkSCEEM Summer School, Allan, Jordan.
Full fellowship from The Cyprus Institute for participating in the Second LinkSCEEM/V-MUST, Alexandria, Egypt.
The Internet of Things (IoT) and cloud computing are evolving technologies in the information technology field. Merging the pervasive IoT technology with cloud computing is an innovative solution for better analytics and decision-making. Deployed IoT devices offload different types of data to the cloud, while cloud computing converges the infrastructure, links up the servers, analyzes the information obtained from the IoT devices, reinforces processing power, and offers huge storage capacity. However, this merging is prone to various cyber threats that affect the IoT-Cloud environment. Mutual authentication is considered the forefront mechanism against cyber-attacks, as the IoT-Cloud participants have to ensure the authenticity of each other and generate a session key for securing the exchanged traffic. While designing these mechanisms, the constrained nature of IoT devices must be taken into consideration. We previously proposed a novel lightweight protocol (Light-AHAKA) for authenticating IoT-Cloud elements and establishing a key agreement for encrypting the exchanged sensitive data. In this paper, the formal verification of Light-AHAKA is presented to prove and verify the correctness of our proposed protocol and to ensure that it is free from design flaws before the deployment phase. The verification is performed using two different approaches: the strand space model and the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool.
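As an illustration of the mutual-authentication-and-key-agreement pattern the protocol addresses, the sketch below shows a generic HMAC-based challenge-response between an IoT device and a cloud server sharing a pre-shared key. The message flow, function names, and key-derivation choice are illustrative assumptions, not the Light-AHAKA specification itself.

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch of HMAC-based mutual authentication and key agreement
# between an IoT device and a cloud server holding a pre-shared key (PSK).
# Names and message flow are illustrative, not the actual Light-AHAKA design.

def derive_session_key(psk: bytes, nonce_dev: bytes, nonce_srv: bytes) -> bytes:
    """Derive a session key from the pre-shared key and both nonces."""
    return hmac.new(psk, nonce_dev + nonce_srv, hashlib.sha256).digest()

def prove(psk: bytes, challenge: bytes) -> bytes:
    """Answer a challenge with an HMAC tag, proving knowledge of the PSK."""
    return hmac.new(psk, challenge, hashlib.sha256).digest()

def mutual_authenticate(psk: bytes) -> bytes:
    """Run a two-nonce challenge-response; return the shared session key."""
    nonce_dev = secrets.token_bytes(16)   # device -> server
    nonce_srv = secrets.token_bytes(16)   # server -> device
    # Each side proves it knows the PSK by tagging the peer's nonce.
    tag_dev = prove(psk, nonce_srv)       # device's response
    tag_srv = prove(psk, nonce_dev)       # server's response
    # Each side verifies the peer's tag in constant time.
    assert hmac.compare_digest(tag_dev, prove(psk, nonce_srv))
    assert hmac.compare_digest(tag_srv, prove(psk, nonce_dev))
    # Both sides can now derive the same session key for encrypting traffic.
    return derive_session_key(psk, nonce_dev, nonce_srv)

session_key = mutual_authenticate(b"pre-shared-device-key")
```

Because both tags and the session key depend on fresh nonces from each side, a replayed transcript fails verification and each run yields a distinct key.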
The Internet of Things (IoT) is a pervasive technology that grants authorized users the ability to communicate with sensors and devices. This technology connects millions of devices, exchanges sensitive information with users, and offloads classified information to the cloud, and it is evolving to encompass time-critical applications. In IoT time-critical applications, legitimate users may require access to real-time data directly from the IoT devices rather than requesting data stored in the cloud. These IoT devices are prone to distinct threats and security breaches. Authentication mechanisms are essential to control access to IoT devices in cloud computing, as authorized users and IoT devices should ensure the authenticity of each other and generate a session key for securing the exchanged traffic. As many IoT devices are resource-constrained, traditional security mechanisms are not appropriate for them, since they need considerable computational power and consume excessive energy. Cryptographic researchers are exerting a worthy effort to develop lightweight security mechanisms to cope with resource-constrained IoT systems. In this paper, we propose a novel lightweight protocol (Light-AHAKA) for authenticating IoT-cloud elements and establishing a key agreement for encrypting the exchanged sensitive data. A security analysis of Light-AHAKA is carried out to assure the protocol's immunity to different security attacks.
Advanced machine learning approaches are qualified for recognizing highly complex patterns in massive datasets. We provide a perspective technical survey of machine learning (ML) and deep learning (DL) approaches for genome analysis and their quickly rising applications to cancer, such as cancer diagnosis or the identification of cancer subtypes from omics input data. The survey discusses effective approaches in the fields of genomic regulation, pathogenicity, and variant calling, and presents ML's potential benefits across the several technological platforms involved in diagnosis, prognosis, and treatment. We concentrate on the most up-to-date knowledge of cancer classification models and targeted therapy, define how genetic mutations influence the responsiveness of targeted therapy, and highlight the related issues in this era of precision medicine. Finally, we discuss the limitations of the different approaches and promising directions for upcoming research in targeted therapy.
One of the modern technological techniques in practical application is discussed in this paper on a large scale: facial recognition technology. This technique is considered a means of technological security that keeps personal data away from the hands of snoopers and spies, and it has occupied the minds of developers in recent years, who have ensured its continuous development.
Online social networks (OSNs) have become essential ways for users to socially share information, feelings, and thoughts, and to communicate with others. Online social networks such as Twitter and Facebook are some of the most common OSNs among users. Users' behaviors on social networks aid researchers in detecting and understanding their online behaviors and personality traits. Personality detection is one of the new challenges in social networks. Machine learning techniques are used to build models for understanding personality, detecting personality traits, and classifying users into different kinds from user-generated content, based on different features and measures of psychological models such as the PEN (Psychoticism, Extraversion, and Neuroticism) model, the DISC (Dominance, Influence, Steadiness, and Compliance) model, and the Big Five model (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism), which is the most accepted model of personality. This survey discusses the existing works on psychological personality classification.
Colon cancer, also referred to as colorectal cancer, is a kind of cancer that starts with damage to the colon, the large intestine in the last section of the digestive tract. Elderly people typically suffer from colon cancer, but it may occur at any age. It normally starts as small, noncancerous (benign) masses of cells named polyps that form within the colon. After a period of time these polyps can turn into advanced malignant tumors that attack the human body, and some of these polyps can become colon cancers. So far, no concrete causes have been identified, and complete cancer treatment is very difficult for doctors in the medical field. Colon cancer often has no symptoms in its early stage, where it is curable if detected, but a colorectal cancer diagnosis in the final stage (stage IV) gives the disease the opportunity to spread to different parts of the body, makes it difficult to treat successfully, and leaves the person's chances of survival much lower. A false diagnosis of colorectal cancer means the wrong treatment for patients with long-term infections who are in fact suffering from colon cancer, which can cause the death of these patients. Cancer treatment also needs more time and a lot of money. This paper provides a comparative study of the methodologies and algorithms used in colon cancer diagnosis and detection, which can help in proposing a prediction of the risk levels of colon cancer disease using the Convolutional Neural Network (CNN) algorithm of deep learning.
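To make the core operation of a CNN concrete, the toy sketch below implements a 1-D convolution followed by a ReLU activation in pure Python. The kernel and input values are illustrative, not data from the paper; real CNNs for imaging stack many 2-D convolutions learned from data.

```python
# Minimal pure-Python sketch of the convolution operation at the heart of a
# CNN such as one used for cancer image classification. Toy data throughout.

def conv1d(signal, kernel):
    """Valid (no-padding) 1-D convolution: slide the kernel over the signal."""
    n, k = len(signal), len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(n - k + 1)
    ]

def relu(xs):
    """Rectified linear activation applied element-wise."""
    return [max(0.0, x) for x in xs]

# An edge-detecting kernel responds where neighbouring intensities differ,
# loosely analogous to how early CNN layers pick up tissue boundaries.
feature_map = relu(conv1d([0, 0, 1, 1, 1, 0], [-1.0, 1.0]))
```

Here the feature map is nonzero only at the rising edge of the input, which is exactly the locality that convolutional layers exploit.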
The Internet of Things (IoT) is a fundamental concept of a new technology that promises to be significant in various fields. IoT is a vision that allows things or objects equipped with sensors, actuators, and processors to talk and communicate with each other over the internet to achieve a meaningful goal. Unfortunately, one of the major challenges that affect IoT is data quality and uncertainty: as data volume increases, noise, inconsistency, and redundancy increase within the data and cause paramount issues for IoT technologies. Since IoT consists of a massive quantity of heterogeneous networked embedded devices that generate big data, it is very complex to compute and analyze such massive data. This paper therefore introduces a new model named NRDD-DBSCAN, based on the DBSCAN algorithm and using resilient distributed datasets (RDDs), to detect outliers that affect the data quality of IoT technologies. NRDD-DBSCAN has been applied to three different datasets of N dimensions (2-D, 3-D, and 25-D) and the results were promising. Finally, comparisons between NRDD-DBSCAN and previous approaches, such as the RDD-DBSCAN model and the plain DBSCAN algorithm, show that NRDD-DBSCAN overcomes the low-dimensionality restriction of RDD-DBSCAN as well as plain DBSCAN's inability to handle IoT-scale data. In conclusion, the proposed NRDD-DBSCAN model can detect the outliers that exist in N-dimensional datasets using resilient distributed datasets (RDDs), and it can enhance the quality of the data in IoT applications and technologies.
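The outlier notion underlying the model is DBSCAN's: points that are neither core points nor within reach of one are noise. A minimal single-machine sketch of that rule is below; the distributed RDD partitioning that NRDD-DBSCAN adds, and the parameter values, are omitted or illustrative.

```python
import math

# Hedged pure-Python sketch of DBSCAN-style noise (outlier) detection, the
# core idea that NRDD-DBSCAN distributes over Spark RDDs. eps and min_pts
# are illustrative toy parameters.

def dbscan_noise(points, eps, min_pts):
    """Return points that are neither core points nor within eps of one."""
    def neighbors(p):
        # Neighbourhood query; includes the point itself (distance 0).
        return [q for q in points if math.dist(p, q) <= eps]
    core = [p for p in points if len(neighbors(p)) >= min_pts]
    noise = []
    for p in points:
        # A point is noise if no core point lies within eps of it.
        if all(math.dist(p, c) > eps for c in core):
            noise.append(p)
    return noise

cluster = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
outlier = [(5.0, 5.0)]
print(dbscan_noise(cluster + outlier, eps=0.5, min_pts=3))  # [(5.0, 5.0)]
```

The brute-force neighbourhood query here is quadratic; the whole point of RDD-based variants is to partition this work across a cluster for large N-dimensional data.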
Solving an optimization task in any domain is a very challenging problem, especially when dealing with nonlinear problems and non-convex functions. Many meta-heuristic algorithms are very efficient at solving nonlinear functions. A meta-heuristic algorithm is a problem-independent technique that can be applied to a broad range of problems. In this experiment, several evolutionary algorithms are tested, evaluated, and compared with each other: the Genetic Algorithm, Differential Evolution, Particle Swarm Optimization, the Grey Wolf Optimizer, and Simulated Annealing. They are evaluated from many points of view, such as how each algorithm performs throughout the generations and how close its result is to the optimal result. Other points of evaluation are discussed in depth in later sections.
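As a flavour of one of the compared meta-heuristics, here is a self-contained Simulated Annealing run on the sphere benchmark. The cooling schedule, step size, and seed are arbitrary illustrative choices, not the paper's experimental settings.

```python
import math
import random

def sphere(x):
    """Convex benchmark f(x) = sum(x_i^2); global minimum 0 at the origin."""
    return sum(v * v for v in x)

def simulated_annealing(f, dim=2, steps=5000, t0=1.0, seed=42):
    """Minimize f by accepting worse moves with a temperature-decaying chance."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best, best_val = x[:], f(x)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9          # linear cooling schedule
        cand = [v + rng.gauss(0, 0.5) for v in x]   # random neighbour
        delta = f(cand) - f(x)
        # Accept improvements always; worse moves with probability e^(-delta/t).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if f(x) < best_val:
                best, best_val = x[:], f(x)
    return best, best_val

solution, value = simulated_annealing(sphere)
```

Tracking the best-so-far solution separately from the current one is what lets the algorithm explore uphill early (high temperature) without losing its best result.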
The Deep Web is an important topic of research. Because of deep web pages' complicated structure, extracting their content is a very challenging issue. In this paper a framework for efficiently discovering deep web data records is proposed. The proposed framework is able to crawl and fetch relevant pages related to a user's text query. To retrieve the relevant pages, this paper proposes a similarity method based on an improved weighting function (ITF-IDF). The framework utilizes a web page's visual features to obtain data records rather than analyzing the HTML source code. To accurately retrieve the data records, an approach called the layout tree is exploited. The proposed framework uses the Noise Filter (NSFilter) algorithm to eliminate noise such as headers, footers, ads, and unnecessary content. Data records are defined as visual blocks with a similar layout. To cluster the visual blocks with similar layouts, this paper proposes a method based on appearance similarity and a similar shape and coordinate (SSC) feature. The experimental results illustrate that the proposed framework outperforms previous data extraction works.
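The relevance-scoring step builds on classic TF-IDF weighting with cosine similarity; the sketch below shows that baseline in pure Python. The paper's improved ITF-IDF weighting itself is not reproduced here, and the documents are toy examples.

```python
import math
from collections import Counter

# Baseline TF-IDF + cosine similarity, the standard scheme the framework's
# improved ITF-IDF weighting builds on. Documents are illustrative toy text.

def tfidf_vectors(docs):
    """Return one {term: weight} dict per document using TF * IDF."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc.split()))
    vecs = []
    for doc in docs:
        tf = Counter(doc.split())
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["deep web data records", "deep web crawling", "stock market news"]
vecs = tfidf_vectors(docs)
```

A query vector built the same way can then rank fetched pages; pages sharing no weighted terms with the query score zero and are skipped by the crawler.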
Diabetes is a chronic defect and disturbance resulting from a metabolic breakdown in carbohydrate metabolism, and it has become a globally serious health problem. In general, the detection of diabetes in its early stages can have a significant impact on the treatment of diabetic patients, helping to drive out its relevant side effects. Machine learning is an emerging technology that provides highly important prognoses and a deeper understanding of the different clusterings of diseases such as diabetes. Because there is a lack of effective analysis tools to discover hidden relationships and trends in data, health information technology has emerged in the health care sector in a short period by utilizing Business Intelligence (BI), a data-driven Decision Support System. In this study, we propose a high-precision diagnostic analysis using the k-means clustering technique. In the first stage, noisy, uncertain, and inconsistent data were detected and removed from the dataset through preprocessing to prepare the data for a clustering model. Then, we apply the k-means technique to a community health diabetes-related indicators dataset to separate diabetic patients from healthy ones with highly accurate and reliable results.
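The clustering step can be sketched with a minimal k-means on one-dimensional toy readings; the data, k, and initialization below are illustrative stand-ins, not the study's community health indicators dataset.

```python
# Minimal pure-Python k-means in the spirit of the clustering above, run on
# toy 1-D glucose-like readings. Data and parameters are illustrative only.

def kmeans_1d(values, k=2, iters=20):
    """Cluster 1-D values into k groups by iterated centroid refinement."""
    step = max(1, len(values) // k)
    centroids = sorted(values)[::step][:k]   # spread-out initial centroids
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[idx].append(v)
        # Recompute each centroid as the mean of its group.
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups

readings = [82, 85, 90, 88, 180, 175, 190, 185]   # mg/dL, toy data
centroids, groups = kmeans_1d(readings, k=2)
```

On this toy input the two centroids settle near the "healthy" and "elevated" ranges, which is the separation the study seeks on real indicator data.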
Using Business Intelligence in the cloud is considered a key factor for success in various fields: in 2018, about 66 percent of organizations successful in BI were already using the cloud. 86% of Cloud BI adopters choose Amazon AWS as their first choice, 82% choose Microsoft Azure, 66% choose Google Cloud, and 36% identify IBM Bluemix as their preferred provider of cloud BI services. In recent years, both Business Intelligence and cloud computing have undergone dramatic changes and advancements, and the newest capabilities that these developments bring forth are introduced here. This paper introduces the latest technologies in the field of Cloud (SaaS) BI. The paper also shows that many of the current problems in Cloud (SaaS) BI can be solved to enhance the performance and increase the use and acceptance of this technology. Many of the key characteristics of Business Intelligence systems tend to complement those of cloud computing systems and vice versa. Therefore, when integrated properly, these two technologies can strengthen each other's advantages and eliminate each other's weaknesses.
This paper describes a Mobile Agents paradigm for tracking and tracing the effects of the Denial of Service security threat in a Mobile Agent System; an implementation of this paradigm has been developed entirely in the Java programming language. The proposed paradigm considers a range of techniques that provide a high degree of security during the mobile agent system's life cycle in its environment.
This paper highlights two main design objectives. First: the importance of including various supportive types of agents within a system, e.g., police agents, service agents, etc. Second: evaluation analysis and a number of checks to be done to trace the Mobile Agents if the provided services are denied along their path. The evaluation analysis covers detecting tolerance differences for the calculated agent's route before and during its journey, storing agent transactions, storing snapshots of agent state information, periodically checking agent status and task completeness, and lastly a guard agent that checks the changed variables of the migrated agent. While tracing and monitoring a Mobile Agent, the initiator node may destroy it and continue with another. In this paper a new paradigm is presented that detects and eliminates, with high probability, any degree of tampering within a reasonable amount of time, and also provides scalability of security administration.
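One of the checks listed above, comparing stored snapshots of agent state against the state observed after migration, can be sketched as a hash comparison. The class and field names below are hypothetical illustrations, not the paper's Java implementation.

```python
import hashlib
import json

# Hedged sketch of a guard-agent tamper check: store a snapshot (hash) of a
# mobile agent's state before migration and compare it on arrival. All names
# and fields here are illustrative, not the paper's actual classes.

def state_digest(agent_state: dict) -> str:
    """Deterministic SHA-256 snapshot of the agent's protected variables."""
    canonical = json.dumps(agent_state, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

class GuardAgent:
    """Keeps per-agent snapshots and flags agents whose state has changed."""
    def __init__(self):
        self.snapshots = {}

    def record(self, agent_id: str, state: dict):
        """Store a snapshot before the agent migrates."""
        self.snapshots[agent_id] = state_digest(state)

    def verify(self, agent_id: str, state: dict) -> bool:
        """True if the migrated agent's state matches the stored snapshot."""
        return self.snapshots.get(agent_id) == state_digest(state)

guard = GuardAgent()
state = {"route": ["node-a", "node-b"], "task": "collect-quotes"}
guard.record("agent-1", state)
tampered = dict(state, task="exfiltrate")
print(guard.verify("agent-1", state), guard.verify("agent-1", tampered))  # True False
```

A mismatch would signal the initiator node, which, as described above, may destroy the agent and continue with another.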
Organizations build their customer information data warehouses aiming to enhance the process of customer service, which depends on different data mining techniques. Most data mining techniques face a common problem: which most important attribute to set as the start node to begin with. To overcome this problem, K-MIAS is proposed as a methodology to select the K most important attributes that distinguish different customer types. The K-MIAS methodology consists of three phases. The first phase is data preparation, which prepares the data for the computations. The second phase is the K-MIAS algorithm, which ranks the quantification levels of each attribute with respect to all attributes to select the K most important ones, while the third phase visualizes the data, which helps with better data understanding and clarifies the results. In this paper, the K-MIAS methodology is tested on a dataset consisting of 1,000 instances from a trainees' questionnaire. The K-MIAS methodology selects the K most important attributes successfully, with interesting remarks and findings.
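To make the attribute-selection idea concrete, the sketch below ranks attributes by information gain against a class label and returns the top K. This is a generic stand-in for the quantification-level ranking, not the K-MIAS algorithm itself, and the trainee rows are toy data.

```python
import math
from collections import Counter

# Illustrative attribute-importance ranking via information gain. A generic
# stand-in for K-MIAS's quantification-level ranking, not the algorithm itself.

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction obtained by splitting the rows on one attribute."""
    base = entropy(labels)
    split = {}
    for row, lab in zip(rows, labels):
        split.setdefault(row[attr_index], []).append(lab)
    weighted = sum(len(ls) / len(labels) * entropy(ls) for ls in split.values())
    return base - weighted

def top_k_attributes(rows, labels, names, k):
    """Return the names of the k attributes with the highest gain."""
    gains = {name: information_gain(rows, labels, i)
             for i, name in enumerate(names)}
    return sorted(gains, key=gains.get, reverse=True)[:k]

# Toy trainee data: "attended" perfectly predicts the label, "city" does not.
rows = [("yes", "giza"), ("yes", "cairo"), ("no", "giza"), ("no", "cairo")]
labels = ["pass", "pass", "fail", "fail"]
print(top_k_attributes(rows, labels, ["attended", "city"], k=1))  # ['attended']
```

The selected attribute then serves as the start node the abstract describes, e.g. as the root of a decision tree over customer types.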
Human Resources Management (HRM) has become one of the essential interests of managers and decision makers in almost all types of businesses, which adopt plans for correctly discovering highly qualified employees. Accordingly, managements have become interested in the performance of these employees, especially to ensure that the appropriate person is allocated to the convenient job at the right time. From here, interest in the role of data mining (DM), whose objective is the discovery of knowledge from huge amounts of data, has been growing. In this paper, DM techniques were utilized to build a classification model for predicting employees' performance using a real dataset collected from the Ministry of Egyptian Civil Aviation (MOCA) through a questionnaire prepared for and distributed to 145 employees. Three main DM techniques were used for building the classification model and identifying the most effective factors that positively affect performance: the Decision Tree (DT), Naïve Bayes, and the Support Vector Machine (SVM). To get a highly accurate model, several experiments were executed based on these techniques as implemented in the WEKA tool, enabling decision makers and human resources professionals to predict and enhance the performance of their employees.
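One of the three techniques, Naïve Bayes over categorical questionnaire answers, can be sketched compactly in pure Python. The employee records and attribute values below are toy illustrations, not the MOCA dataset, and the smoothing choice is an assumption.

```python
from collections import Counter, defaultdict

# Small categorical Naive Bayes sketch, one of the three techniques the paper
# applies (alongside DT and SVM). Toy employee records, not the MOCA data.

def train_nb(rows, labels):
    """Count class priors and per-attribute conditional frequencies."""
    priors = Counter(labels)
    conds = defaultdict(Counter)          # keyed by (attr_index, label)
    for row, lab in zip(rows, labels):
        for i, v in enumerate(row):
            conds[(i, lab)][v] += 1
    return priors, conds

def predict_nb(priors, conds, row):
    """Pick the label maximizing P(label) * prod P(value | label)."""
    total = sum(priors.values())
    best, best_p = None, -1.0
    for lab, cnt in priors.items():
        p = cnt / total
        for i, v in enumerate(row):
            p *= (conds[(i, lab)][v] + 1) / (cnt + 2)   # add-one smoothing
        if p > best_p:
            best, best_p = lab, p
    return best

# Toy features: (training level, meets deadlines) -> performance label.
rows = [("high", "yes"), ("high", "yes"), ("low", "no"), ("low", "no")]
labels = ["good", "good", "poor", "poor"]
model = train_nb(rows, labels)
print(predict_nb(*model, ("high", "yes")))  # good
```

The conditional-independence assumption makes training a single counting pass, which is why Naive Bayes is a common baseline in WEKA-style experiments.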
The demand for healthcare IT and its analytics has increased in the last few years, to improve the quality of care (e.g., ensuring that patients receive the correct medication), which will help to improve the efficiency of clinical quality, safety, and operations.
The medical field is rich with information: there is a variety and abundance of data, but it remains untapped in a correct and effective manner to get the right knowledge. Therefore, the most serious challenge facing this area is the quality of service provided, which means making the diagnosis in a proper and timely manner and providing appropriate medications to patients, because poor diagnosing can lead to serious consequences, which is unacceptable. And because there is a lack of effective analysis tools to discover hidden relationships and trends in data, health information technology has emerged in the health care sector in a short period by utilizing Business Intelligence (BI), a data-driven Decision Support System. BI was developed from the 1990s to now and has gradually become one of the most important information systems applied in any sector. BI makes it possible to deal with huge amounts of data and extract useful knowledge to support decision making. Data mining (DM) is a kind of data processing technology that can be regarded as a part of the BI system, but it can also be considered an independent and integrated technology that can treat mass data and extract hidden relationships from it.
This introduction highlights the importance of applying business intelligence applications using data mining techniques to help medical professionals in the healthcare sector rapidly diagnose and predict the diseases of any patient, and also detect the disease's complications in the patient, which will decrease the overall cost of expenditure that the country pays. Briefly, this is the central research idea that addresses the motivation for doing this research.
Data preprocessing is a crucial step through which data can be cleaned of quality defects, including duplicates, missing values, irrelevant features, and outliers. This paper presents a multi-dimensional information quality framework that enhances the accuracy of business intelligence applications by eliminating quality issues in the input data. The results show that the framework enhances data quality and works effectively.
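The cleaning steps named in this abstract (catching duplicates, filling missing values, catching outliers) can be sketched in a few lines of plain Python. The records, the 0-120 plausible-age rule, and the processing order below are illustrative assumptions, not the paper's actual framework:

```python
from statistics import mean

# Hypothetical patient records illustrating three common quality defects:
# an exact duplicate, a missing age, and an implausible outlier.
records = [
    {"id": 1, "age": 34},
    {"id": 1, "age": 34},      # duplicate of record 1
    {"id": 2, "age": None},    # missing value
    {"id": 3, "age": 36},
    {"id": 4, "age": 400},     # outlier
    {"id": 5, "age": 31},
]

# 1) Catch duplicates: keep only the first occurrence of each record.
seen, cleaned = set(), []
for r in records:
    key = (r["id"], r["age"])
    if key not in seen:
        seen.add(key)
        cleaned.append(r)

# 2) Catch outliers with a simple domain rule (plausible ages: 0-120),
#    before imputation so the outlier cannot distort the fill value.
outlier_ids = [r["id"] for r in cleaned
               if r["age"] is not None and not 0 <= r["age"] <= 120]
cleaned = [r for r in cleaned if r["id"] not in outlier_ids]

# 3) Fill missing values with the mean of the remaining observed ages.
observed = [r["age"] for r in cleaned if r["age"] is not None]
for r in cleaned:
    if r["age"] is None:
        r["age"] = round(mean(observed), 1)

print(outlier_ids)
print([(r["id"], r["age"]) for r in cleaned])
```

Flagging outliers before imputation is deliberate: a mean computed over the raw data would be pulled toward the implausible value.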
Research Interests:
Many decisions are made daily based on simple mental processing. This way of decision making is suitable for simple personal daily issues, but when decisions concern general and sensitive sectors it is unacceptable. Nowadays, Geospatial Information Systems (GIS) help in making more accurate decisions in many sectors by drawing maps and visualizing accurate data so that the best option for a particular situation can be judged clearly. This paper proposes a framework that integrates Geospatial Information Systems with hybrid cloud computing so that they work together for greater benefit: applying cloud computing overcomes the flaws of desktop GIS, including the huge startup cost and limited storage capacity, and provides location-independent accessibility, so the GIS can be accessed from anywhere at any time. Hybrid cloud computing was chosen for the integration to gain elasticity and security in dealing with different types of data, both private and public. The integration is presented in three dimensions. The first is an architecture of seven segments that illustrates the main structure of the hybrid cloud GIS across a mix of private and public environments. The second is the types of participants and their workflow within the two environments. The last is a case study applying this integration in the health sector in Egypt.
Research Interests:
These days most people use social media sites like Facebook, Twitter, etc. to review, buy, and complain about products or services. Accordingly, most companies have changed from traditional CRM to social CRM (SCRM) to retain current customers, compete with others, and win new customers. Starting from the importance of customer reviews of products and services to companies, we began working on this paper. A sentiment analysis model was used to extract customers' opinions about a product or service, and manual analysis was then performed on the negative and positive reviews. The result of this research is a set of reports that help business decision makers enhance SCRM.
Research Interests:
Electronic services have become an important, integral part of information systems, supported by the term e-government. Many traditional business systems are now shifting to electronic systems amid the tremendous amount of information stored inside them. There is much research on business information systems and their importance and advantages, but transforming business information systems to gain benefit, especially in government services, is more difficult. This paper discusses the factors that affect the transformation of a business information system into an electronic inquiries system, using the information systems of the State Council of Egypt as a case study.
Research Interests:
In recent years, cloud computing has become a mainstream technology in the IT industry, offering software, platform, and infrastructure as a service over the internet on a global scale by centralizing storage, memory, and bandwidth. This new technology creates opportunities for different business operations and new business benefits, but using cloud computing also involves different risks. This paper attempts to identify cloud computing approaches, highlights its business opportunities, and helps cloud computing users analyze the risks and produce different solution approaches. It is targeted at business and IT leaders considering a move to the cloud for some or all of their business applications.
Research Interests:
As the health sector is one of the most important and sensitive sectors, decisions related to it are always critical. They must be built on accurate information in order to weigh the positives and negatives of each option and consider all the alternatives to determine which is best for a particular situation. The health sector ultimately faces basic challenges of operations, logistics, resource allocation, customers, and management. Since information can help overcome these hurdles, this paper examines the relation between cloud-based spatial analysis and health sector improvement. It presents the vital role of spatial analysis and health geography in improving health sector services, along with the significance of cloud-based GIS. Finally, it presents the use of cloud-based GIS for the distribution of hospitals in Egypt as a case study.
Research Interests:
Student advising is an important and time-consuming effort in academic life. Academic advising systems fill the gap between students and the academic routine by moving advising, complaining, evaluating, and suggesting from traditional channels to an automated one. The researcher surveyed the existing literature and found that many institutions have implemented computerized solutions to enhance their overall advising experience. In this paper the researcher introduces an automated mechanism for academic advising in the university system, presenting an overview of the development and implementation of a new model of an e-Academic Advising System as a web-based application. The proposed model gives staff and advisors access to follow up on student complaints and suggestions, registered students can complain, evaluate, and make suggestions about any subject, and the head of department receives KPI reports to follow up on his department. The aim of this paper is to implement a system that facilitates and assists academic advisors in their efforts to provide quality, accurate, and consistent advising services to their students, and to explore the design and implementation of a computerized tool that supports this process. The paper discusses the methodologies used in developing the Academic Advising System, shows that academic advising is a process more than a final product, and provides a technical vision for the system. The e-Academic Advising web application was developed and implemented with Ruby on Rails, a web framework that runs on the Ruby programming language, and PostgreSQL as the database engine.
Research Interests:
The incredible rise of online networks shows that these networks are complex and involve massive data, creating strong interest in the set of techniques developed for mining them. One of the fundamental applications is community detection, which helps to understand and model network structure; this has been a fundamental problem in several fields where systems are often represented as graphs, and it is of great importance in sociology, biology, and computer science. The problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. Community detection can be useful in various applications such as rating prediction, link and top-N recommendation, and trend analysis, and it can serve as the main engine for recommender systems. Several community detection approaches have been proposed and applied to many domains in the literature. This paper presents a comparative study of two existing community detection approaches and discusses some of their vital applications for online networks.
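As a flavour of the problem this abstract describes, here is a minimal, deterministic label-propagation sketch on a toy graph of two triangles joined by a single edge. It is an illustrative assumption, not one of the approaches the paper actually compares:

```python
from collections import Counter

def label_propagation(adj, max_iters=20):
    """Asynchronous label propagation: each node repeatedly adopts the most
    frequent label among its neighbours; ties go to the largest label,
    which makes this toy run deterministic."""
    labels = {n: n for n in adj}
    for _ in range(max_iters):
        changed = False
        for node in sorted(adj):
            counts = Counter(labels[nb] for nb in adj[node])
            top = max(counts.values())
            new = max(l for l, c in counts.items() if c == top)
            if new != labels[node]:
                labels[node] = new
                changed = True
        if not changed:          # converged: no label moved this pass
            break
    return labels

def communities(labels):
    """Group nodes sharing a final label into communities."""
    groups = {}
    for node, label in labels.items():
        groups.setdefault(label, set()).add(node)
    return {frozenset(g) for g in groups.values()}

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the single edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
found = communities(label_propagation(adj))
print(found)
```

The two triangles are recovered as separate communities because the bridge edge 2-3 never supplies a majority label across it.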
Research Interests:
Data classification is one of the most important tasks in data mining; it identifies the category to which a new observation belongs on the basis of a training set. Preparing data before any mining is an essential step to ensure the quality of the mined data. Different algorithms are used to solve classification problems. In this research, four algorithms, namely support vector machine (SVM), C5.0, K-nearest neighbor (KNN), and Recursive Partitioning and Regression Trees (rpart), are compared before and after applying two feature selection techniques, wrapper and filter. The comparative study is implemented in the R programming language. A direct marketing campaigns dataset from a banking institution is used to predict whether a client will subscribe to a term deposit. The dataset is composed of 4521 instances: 3521 instances (78%) as the training set and 1000 instances (22%) as the test set. The results show that C5.0 is superior to the other algorithms before implementing feature selection, and SVM is superior after implementing it.

Keywords: Classification, Feature Selection, Wrapper Technique, Filter Technique, Support Vector Machine (SVM), C5.0, K-Nearest Neighbor (KNN), Recursive Partitioning and Regression Trees (rpart).

I. INTRODUCTION
The problem of data classification has numerous applications in a wide variety of mining applications, because it attempts to learn the relationship between a set of feature variables and a target variable of interest. Excellent overviews of data classification may be found in the literature. Classification algorithms typically contain two phases: a training phase, in which a model is constructed from the training instances, and a testing phase, in which the model is used to assign a label to an unlabeled test instance [1]. Classification consists of predicting a certain outcome based on a given input.
To predict the outcome, the algorithm processes a training set containing a set of attributes and the respective outcome, usually called the goal or prediction attribute, and tries to discover relationships between the attributes that make it possible to predict the outcome. The algorithm is then given a data set, called the prediction set, which contains the same attributes except for the prediction attribute, which is not yet known; it analyzes the input and produces predicted instances. The prediction accuracy defines how "good" the algorithm is [2]. The four classifiers used in this paper are shown in Figure 1. Many irrelevant, noisy, or ambiguous attributes may be present in the data to be mined; they need to be removed because they affect the performance of the algorithms. Attribute selection methods are used to avoid overfitting, improve model performance, and provide faster and more cost-effective models [3]. The main purpose of the feature selection (FS) approach is to select a minimal, relevant feature subset for a given dataset while maintaining its original representation. FS not only reduces the dimensionality of the data but also enhances the performance of a classifier, so the task of FS is to search for the best possible feature subset for the problem to be solved [4]. This paper is organized as follows. Section 2 presents the four algorithms used for the classification problem, Section 3 describes the FS techniques, Section 4 demonstrates our experimental methodology, Section 5 presents the results, and Section 6 provides the conclusion and future work.
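The two-phase workflow described above (filter-style feature scoring on the training set, then classification of unlabeled test instances) can be sketched as follows. The tiny dataset, the class-mean-gap score, and the 1-NN classifier are illustrative assumptions, not the paper's bank-marketing dataset or its R implementation:

```python
# Training set: ([f0, f1], label) -- f0 is informative, f1 is noise.
train = [
    ([0.9, 7.1], 0), ([1.2, 2.3], 0), ([0.7, 5.5], 0),
    ([9.8, 6.0], 1), ([10.3, 1.9], 1), ([9.1, 4.4], 1),
]

def filter_scores(data):
    """Filter technique: score each feature by the gap between class means,
    independently of any classifier (wrappers would instead re-train the
    classifier on candidate subsets)."""
    n_features = len(data[0][0])
    scores = []
    for j in range(n_features):
        c0 = [x[j] for x, y in data if y == 0]
        c1 = [x[j] for x, y in data if y == 1]
        scores.append(abs(sum(c1) / len(c1) - sum(c0) / len(c0)))
    return scores

scores = filter_scores(train)
best = scores.index(max(scores))   # index of the selected feature

def predict_1nn(x, data, feat):
    """Testing phase: label a new instance with its nearest neighbour's label,
    using only the selected feature."""
    return min(data, key=lambda d: abs(d[0][feat] - x[feat]))[1]

test = [[1.0, 6.0], [9.5, 6.2]]
preds = [predict_1nn(x, train, best) for x in test]
print(best, preds)
```

Here the filter correctly ranks f0 (class means about 0.9 vs 9.7) above the noise feature f1, and the classifier labels both test instances from that single feature.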
Research Interests:
Sentiment analysis, also called opinion mining, is the field of study that analyzes people's opinions, sentiments, evaluations, appraisals, attitudes, and emotions toward entities such as products, services, organizations, individuals, issues, events, topics, and their attributes. Starting from the importance of sentiment analysis for individuals generally, and for large organizations more specifically, we began working on this paper. GraphLab was used to build the sentiment models. Several algorithms, including SVM, logistic regression, and boosted trees, were used along with text feature selection techniques to predict positive and negative sentiment. These classifiers were applied to a hotel reviews dataset obtained from the TripAdvisor website to emulate real customer opinions. The results showed that the SVM classifier combined with the n-grams feature selection technique was superior to the others.
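The n-gram feature idea this abstract reports as best can be sketched on a toy scale. A simple count-based linear scorer stands in for the SVM purely for brevity, and the training reviews are made-up assumptions, not the TripAdvisor dataset:

```python
from collections import Counter

def ngrams(text, n=2):
    """Word n-grams; n=2 (bigrams) keeps short negations like 'not clean'
    together, which unigrams would miss."""
    toks = text.lower().split()
    return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]

# Tiny illustrative training reviews (assumptions, not the real dataset).
positive = ["the room was very clean", "staff were very friendly and helpful"]
negative = ["the room was not clean", "staff were rude and not helpful"]

# Each bigram's weight = (count in positive) - (count in negative) reviews;
# a Counter returns 0 for unseen bigrams, so new text scores neutrally.
weights = Counter()
for r in positive:
    weights.update(ngrams(r))
for r in negative:
    weights.subtract(ngrams(r))

def sentiment(review):
    score = sum(weights[g] for g in ngrams(review))
    return "positive" if score > 0 else "negative"

print(sentiment("the staff were very friendly"))
print(sentiment("the room was not clean at all"))
```

A real SVM would learn the weights instead of counting them, but the feature pipeline (tokenize, extract n-grams, score linearly) is the same shape.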
Research Interests:
International Journal of Advanced Networking and Applications (IJANA), Volume: 08, Issue: 01, July-August, 2016, pp 2997-3002.
Research Interests:
A Word from Dr. Mona Nasr during the Coronavirus Lockdown Period
Research Interests:
How to Use Zoom Online Meetings: a video for staff members and students
Research Interests:
ICT-Learn 2019
Ramsis Hilton
26-27 September 2019
ITU/UNESCO Regional Digital Inclusion Week for the Arab States
13th International Forum for Smart Learning
Augmented Reality in Learning, on the 26th, 2:00 P.M. - 3:00 P.M.
Research Interests:
A trophy (The Best Senior Level STEM Executive 2015) from the Meera Kaul Foundation. The foundation focuses on women and women-led enterprises in STEM (Science, Technology, Engineering, and Mathematics) in North America, Eastern Europe, and selected countries in the Middle East, Africa, China, and India.
Full fellowship from the Bibliotheca Alexandrina (BA) in cooperation with the German Academic Exchange Service through DAAD Kairo, Alexandria, Egypt.
Full fellowship from the Bibliotheca Alexandrina (BA) in cooperation with the Arab Regional Office of The World Academy of Sciences for the advancement of science in developing countries (TWAS-ARO), Alexandria, Egypt.
Full fellowship from The Cyprus Institute for participating in the Third SESAME-LinkSCEEM Cross-sectional HPC Workshop, Cairo, Egypt.
Full fellowship from The Cyprus Institute for participating in the Second SESAME-LinkSCEEM Summer School, Allan, Jordan.
Full fellowship from The Cyprus Institute for participating in the Second LinkSCEEM/V-MUST, Alexandria, Egypt.
Data quality dimension is a term used to identify a quality measure related to many data elements, including attribute, record, table, system, or more abstract groupings such as business unit, company, or product range. This paper presents a thorough analysis of three data quality dimensions: completeness, relevance, and duplication. Besides, it covers all commonly used techniques for each dimension. Regarding completeness, predictive value imputation, distribution-based imputation, KNN, and other methods are investigated. Moreover, the relevance dimension is explored via filter and wrapper approaches, rough set theory, hybrid feature selection, and other techniques. Duplication is investigated through many techniques such as K-medoids, the standard duplicate elimination algorithm, online record matching, and sorted blocks.
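KNN imputation, one of the completeness techniques surveyed above, fills a missing value from the k most similar complete records. A minimal sketch, with illustrative rows and k, not taken from the paper:

```python
def knn_impute(rows, missing_idx, target_col, k=2):
    """Fill rows[missing_idx][target_col] with the average of that column in
    the k nearest complete rows, measuring Euclidean distance on the columns
    that are observed in the incomplete row."""
    target = rows[missing_idx]
    obs_cols = [j for j, v in enumerate(target) if v is not None]

    def dist(row):
        return sum((row[j] - target[j]) ** 2 for j in obs_cols) ** 0.5

    donors = [r for i, r in enumerate(rows)
              if i != missing_idx and r[target_col] is not None]
    donors.sort(key=dist)
    return sum(r[target_col] for r in donors[:k]) / k

# columns: [age, income, hours]; the third row is missing its income
rows = [
    [25, 30.0, 40],
    [27, 32.0, 38],
    [26, None, 39],
    [60, 90.0, 20],
]
rows[2][1] = knn_impute(rows, missing_idx=2, target_col=1)
print(rows[2])
```

Because distance is computed on age and hours, the two young full-time workers are the nearest donors, so the imputed income averages theirs rather than the distant high earner's.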