Across the world, millions of people use sign language as their main means of communicating with society, and they face daily obstacles with their families, teachers, neighbours and employers. According to the most recent World Health Organization statistics, 360 million people worldwide have disabling hearing loss (5.3% of the world's population), around 13 million of them in the Middle East. Hence, the development of automated systems capable of translating sign languages into words and sentences becomes a necessity. We propose a model that recognizes both static gestures, such as numbers and letters, and dynamic gestures, which involve movement in performing the signs. Additionally, we propose a segmentation method that splits a sequence of continuous signs in real time based on tracking the palm velocity; this makes it possible to translate not only pre-segmented signs but also continuous sentences. We use an affordable and compact device, the Leap Motion controller, which accurately detects and tracks the motion and position of the hands and fingers. The proposed model applies several machine learning algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Artificial Neural Network (ANN) and Dynamic Time Warping (DTW), on two different feature sets. This research advances Arabic Sign Language Recognition (ArSLR) and will increase the chance for Arabic hearing-impaired and deaf persons to communicate easily using Arabic Sign Language. The proposed model works as an interface between hearing-impaired persons and hearing persons who are not familiar with Arabic Sign Language, bridging the gap between them. The proposed model is applied to Arabic signs comprising 38 static gestures (28 letters and the numbers 1 to 10), 16 static words, and 20 dynamic gestures.
A feature selection process is applied, yielding two different feature sets. For static gestures, the KNN model dominates the other models on both the palm feature set and the bone feature set, with accuracies of 99% and 98% respectively. For dynamic gestures, the DTW model dominates the other models on both feature sets, with accuracies of 97.4% and 96.4% respectively.
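The role DTW plays for dynamic gestures can be illustrated with a minimal sketch (not the paper's implementation): classic dynamic time warping aligns two variable-length sequences, such as palm coordinates sampled over time, and returns an alignment cost, so a dynamic gesture can be labelled by its nearest training sequence. All function names and sample sequences below are illustrative assumptions.

```python
# Minimal dynamic time warping (DTW) distance between two 1-D sequences,
# e.g. palm x-coordinates sampled by the Leap Motion controller.
# Illustrative sketch only, not the authors' implementation.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(query, templates):
    """Nearest-neighbour gesture label under DTW distance."""
    return min(templates, key=lambda item: dtw_distance(query, item[1]))[0]

templates = [("wave", [0.0, 1.0, 0.0, -1.0, 0.0]),
             ("push", [0.0, 0.5, 1.0, 1.5, 2.0])]
print(classify([0.0, 0.9, 0.1, -1.1, 0.0], templates))  # a slightly noisy wave
```

Because DTW tolerates differences in speed and length between the query and the template, it suits dynamic gestures where signers perform the same sign at different paces.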
— This paper explores a method that analyzes Arabic text lexically, morphologically and semantically. The highly agglutinative nature of Arabic diminishes the effectiveness of the conventional Bag of Words (BoW) representation, which is insufficient to form a representative vector for large-scale social media content because it ignores possible relations between terms. The proposed work overcomes this limitation by incorporating different feature sets and performing a cascaded analysis that fundamentally comprises lexical, morphological and semantic analysis. ICA is used to handle Arabic morphological pluralism, and Arabic WordNet (AWN) is exploited to extract generic and semantic relations for the lexical units across the whole dataset. Moreover, specific feature extraction components are integrated to account for the linguistic characteristics of Arabic. Finally, we leverage standard social media features such as emoticons and smileys. On this basis, a system for automatic Emotion Detection (ED) and mood recognition was built to provide further sentiment insight and classification power. The optimal feature combination for each emotion was determined using a combination of Machine Learning (ML) and rule-based methods. Experimentally, the results revealed that incorporating multifaceted analysis is superior to the classical BoW representation, both in feature reduction (a 31% reduction) and in accuracy (F-measure increased up to 89%).
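The "standard social media features such as emoticons and smileys" component can be sketched very simply: count emoticons of each polarity and emit them as extra features alongside the linguistic ones. The emoticon inventory and feature names below are our assumptions, not taken from the paper.

```python
import re

# Illustrative extractor for the emoticon features the abstract mentions.
# The emoticon sets and feature names are assumptions for this sketch.
POSITIVE = {":)", ":-)", ":D", "^_^"}
NEGATIVE = {":(", ":-(", ":'("}

TOKEN = re.compile(r"\S+")

def emoticon_features(text):
    tokens = TOKEN.findall(text)
    return {
        "pos_emoticons": sum(t in POSITIVE for t in tokens),
        "neg_emoticons": sum(t in NEGATIVE for t in tokens),
    }

print(emoticon_features("great match :) :D but the referee :("))
# {'pos_emoticons': 2, 'neg_emoticons': 1}
```

Features of this kind complement the morphological and semantic analysis because emoticons carry sentiment signal that survives dialectal spelling variation.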
Due to the increasing popularity of social media platforms, the number of posts and messages is steadily increasing, and a huge amount of data is generated daily from the interactions between fans of these platforms. It has become extremely troublesome for subscribers to find the most relevant, interactive information. The aim of this work is to enable users to get a powerful brief of the comments without reading the entire list. This paper opens up a new field of short text summarization (STS) predicated on a hybrid ant colony optimization with a local search mechanism, called ACO-LS-STS, to produce an optimal or near-optimal summary. Initially, a graph coloring algorithm, called GC-ISTS, is employed to shrink the ants' solution space into small sets; the main purpose of the GC algorithm is to make the search process easier and faster and to prevent the ants from falling into local optima. First, dissimilar comments are assembled into the same color while preserving the information ratio of the original comment list. Subsequently, the ACO-LS-STS algorithm is activated: a novel technique that extracts the most interactive comments from each color in parallel. At the end, the best summary is picked from the best color. The problem is formalized as an optimization problem, using GC and ACO-LS to generate the optimal solution. The proposed algorithm was evaluated over a collection of Facebook messages with their associated comments; it was found to capture a good, guaranteed near-optimal solution and achieved notable performance in comparison with traditional document summarization algorithms.
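The graph-coloring pre-step can be sketched as follows: connect comments whose word overlap exceeds a threshold, then greedily color the graph so that each color class contains mutually dissimilar comments. The similarity measure (Jaccard overlap) and the threshold are illustrative assumptions, not the paper's exact choices.

```python
# Sketch of the graph-coloring pre-step: similar comments are linked by
# an edge, so each greedy color class holds mutually dissimilar comments.
# The similarity measure and threshold are illustrative assumptions.

def similar(a, b, threshold=0.5):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) >= threshold  # Jaccard overlap

def greedy_coloring(comments, threshold=0.5):
    colors = {}
    for i, c in enumerate(comments):
        taken = {colors[j] for j in range(i)
                 if similar(comments[j], c, threshold)}
        colors[i] = min(k for k in range(len(comments)) if k not in taken)
    return colors

comments = ["great post", "great post indeed", "totally disagree"]
print(greedy_coloring(comments))  # {0: 0, 1: 1, 2: 0}
```

Running the ants inside each small color class, rather than over all comments at once, is what shrinks the search space as the abstract describes.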
User-contributed comments (UCC) are one of the hallmarks of social media. Due to the high popularity of social media, it has become exceedingly difficult for users to find the most relevant, interactive information. The motivation behind this work is that users may want an efficacious brief understanding of the comments without reading them all. This paper opens up an unconventional field of comment summarization predicated on ant colony optimization combined with the Jensen–Shannon divergence (ACO-JSD). ACO-JSD is a proposed novel technique for extracting the most interactive comments from a huge number of concise comments. The problem is solved using ACO to generate the optimal solution, and the JSD model is employed to ensure the summary captures the essence of the original comments. First, an acyclic semi-graph is constructed under two constraints: (1) the longest comments are isolated from the graph; (2) the more similar two comments are, the greater the chance that their mutual connectivity is eliminated. Next, a feasible solution is constructed to select the high-quality summary. Finally, the proposed algorithm was evaluated over a collection of Facebook posts with their associated comments and obtained excellent performance in comparison with traditional document summarization algorithms. The computational results show the efficiency of the proposed algorithm, as well as its ability to find a good summary that is guaranteed to be near-optimal.
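The Jensen–Shannon divergence used to keep a candidate summary close to the source can be written out directly: it is the average KL divergence of each distribution from their midpoint, and it is symmetric and bounded in [0, 1] with base-2 logarithms. The word-distribution construction below is a minimal sketch; the paper's exact preprocessing is not specified here.

```python
from math import log2

# Jensen–Shannon divergence between two discrete word distributions,
# of the kind used to check that a summary preserves the essence of
# the original comments. Minimal illustrative sketch.

def distribution(text):
    words = text.split()
    return {w: words.count(w) / len(words) for w in set(words)}

def kl(p, q):
    return sum(pv * log2(pv / q[w]) for w, pv in p.items() if pv > 0)

def jsd(p, q):
    vocab = set(p) | set(q)
    p = {w: p.get(w, 0.0) for w in vocab}
    q = {w: q.get(w, 0.0) for w in vocab}
    m = {w: 0.5 * (p[w] + q[w]) for w in vocab}  # midpoint distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

src = distribution("good match good team")
summary = distribution("good match")
print(round(jsd(src, summary), 3))  # small value: the summary stays close
```

A summary whose word distribution minimizes JSD against the full comment list is, in this sense, the one that best preserves the original content.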
— Software is nowadays a critical component of our lives and everyday work activities. However, as the technological infrastructure of the modern world evolves, a great challenge arises in developing high-quality software systems of increasing size and complexity. Software engineers and researchers are striving to meet this challenge by developing and implementing software engineering methodologies able to deliver software products of high quality within budget and time constraints. The field of machine learning in software engineering has recently emerged to provide means for addressing, studying, analyzing, and understanding critical software development issues, and at the same time offers mature machine learning techniques such as artificial neural networks, Bayesian networks, decision trees, fuzzy logic, genetic algorithms, and rule induction. Machine learning algorithms have proven to be of great practical value to software engineering. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development tasks can be formulated as learning problems and approached in terms of learning algorithms. In this paper, we first look at the characteristics and applicability of some frequently used machine learning algorithms. We then present the application of machine learning in the different phases of software engineering: project planning, requirements analysis, design, implementation, testing and maintenance.
COMPARATIVE STUDY OF SOFTWARE ENGINEERING PROCESSES IN EGYPTIAN CMMI COMPANIES — The Egyptian government has paid special attention to the software industry, considering that Egypt has competitive advantages that make this emerging industry promising. The State has therefore supported Egyptian companies in adopting the Capability Maturity Model Integration (CMMI). Since 2009, more than thirty companies have obtained CMMI ratings at different levels. However, these companies lack a mechanism for exchanging experience and information among themselves, although they may be similar in the culture of their engineers and perhaps in the nature and size of their software projects. In this research we provide a survey to gauge the quality of the methods, tools and processes used in these CMMI-rated Egyptian companies, then analyze the results to reach recommendations aimed at enriching the software industry in Egypt.
— Software Process Improvement (SPI) projects incorporate organizational transition risks that cause many process improvement initiatives to fail. To mitigate these risks, an iterative and incremental approach called 'Process Increments' is used to manage the SPI project. In this paper, the Configuration Management process area is used as a case study to show the difference in improvement results when the 'Process Increments' approach is used; results are compared with similar projects that did not use an incremental approach. This approach shifts the focus from adopting new techniques to achieving added value for the organization, and shows excellent results in the effective and efficient implementation of Software Configuration Management. Through our proposed process increment model, we could reach a significant increase in the performance of software process improvement.
Cairo is experiencing traffic congestion that places it among the worst in the world. It is difficult, if not impossible, to solve the transportation problem outright because it is multi-dimensional, but it is worthwhile to reduce the waste of money and time caused by congestion. One way to accomplish this is to provide drivers and passengers with current traffic information throughout their trip. Travel time prediction is becoming increasingly important and is among the most valuable traffic information for both drivers and passengers. Since it is difficult to measure travel time directly, the present study estimates it from speed. In this paper we present a model-based approach for travel time prediction that provides the passenger or driver with the fastest routes depending on travel time. The proposed method uses Dezert-Smarandache Theory (DSmT) as a fusion technique and an Artificial Neural Network as the mining tool. The estimates are corroborated against actual values, and the results show that the model performs well and gives acceptable predictions.
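The reason speed can stand in for travel time is the elementary relation t = d / v applied per road segment and summed along the route. A minimal sketch (segment lengths and speeds are made up for illustration):

```python
# Estimate route travel time from per-segment lengths (km) and
# predicted speeds (km/h): t = d / v summed over segments.
# Segment values are illustrative, not measured data.

def travel_time_minutes(segments):
    """segments: iterable of (length_km, speed_kmh) pairs."""
    return sum(60.0 * d / v for d, v in segments)

route = [(2.0, 30.0), (5.0, 60.0), (1.5, 15.0)]
print(travel_time_minutes(route))  # 4 + 5 + 6 = 15.0 minutes
```

Given predicted speeds for each segment, the fastest route is simply the one minimizing this sum, which is what the model offers the driver or passenger.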
— Story Point is a relative measure heavily used for agile size estimation. The team decides how big a point is and, based on that size, determines how many points each work item is worth. In many organizations the use of story points for similar features can vary from one team to another, quite legitimately, based on the teams' sizes, skill sets and relative use of the tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery, which creates a challenge for CMMI organizations adopting Agile in software estimation and planning. In this paper, a process and methodology that guarantees relativity in software sizing while using agile story points is introduced. The proposed process and methodology are applied in a CMMI level-three company on different projects, so that the story point is used at the level of the organization, not the project. The performance of the sizing process is then measured, showing a significant improvement in sizing accuracy after adopting agile story points in CMMI organizations. To complete the estimation cycle, an improvement in effort estimation based on story points is also introduced, and its performance effect is measured.
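The final step of the estimation cycle, converting size in story points to effort, can be sketched with an organization-wide historical hours-per-point rate. The rate, item names and sizes below are assumptions for illustration, not figures from the paper.

```python
# Sketch of effort estimation from story points using a historical
# hours-per-point rate, in the spirit of completing the estimation
# cycle described above. All numbers here are illustrative assumptions.

def effort_hours(story_points, hours_per_point):
    return story_points * hours_per_point

backlog = {"login screen": 3, "report export": 5, "search": 8}
rate = 6.0  # assumed historical hours per story point for the organization
total = sum(effort_hours(p, rate) for p in backlog.values())
print(total)  # 16 points * 6 h/point = 96.0 hours
```

Using one organization-level rate rather than a per-team rate is exactly the kind of consistency a CMMI organization needs for comparable estimates across projects.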
— The Foreign Exchange market is a worldwide market for exchanging currencies, with a daily turnover of 3.98 trillion US dollars. With such a massive turnover, the probability of profit is very high; however, trading in such a massive market requires deep knowledge, skill and full commitment in order to achieve high profit. The purpose of this work is to design a smart agent that 1) acquires Foreign Exchange market prices, 2) pre-processes them, 3) predicts the future trend using a Genetic Programming approach and an Adaptive Neuro-Fuzzy Inference System, and 4) makes buy/sell decisions to maximize profitability with no human supervision.
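Step 4, turning a predicted trend into a buy/sell decision, can be illustrated with a toy rule. The paper's predictor uses Genetic Programming and ANFIS; here a simple moving-average crossover stands in as the trend signal, which is our substitution purely for illustration, not the authors' method.

```python
# Illustrative buy/sell decision from a trend signal. A moving-average
# crossover stands in for the paper's GP/ANFIS predictor (our
# substitution, not the authors' method).

def sma(prices, window):
    return sum(prices[-window:]) / window

def decide(prices, short=3, long=5):
    """'buy' when the short-term average rises above the long-term one."""
    if len(prices) < long:
        return "hold"
    if sma(prices, short) > sma(prices, long):
        return "buy"
    if sma(prices, short) < sma(prices, long):
        return "sell"
    return "hold"

uptrend = [1.10, 1.11, 1.12, 1.14, 1.16]
print(decide(uptrend))  # short SMA above long SMA -> 'buy'
```

Whatever model produces the trend, the agent's decision layer reduces to a rule of this shape, mapping the predicted direction to an order with no human in the loop.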
With the growth of web resources and the huge amount of information available, automatic summarization systems have become a necessity. Summarization is needed most when searching for information on the web, where the user targets a certain domain of interest through a query; in this case, domain-based summaries serve best. Despite plenty of research on domain-based summarization in English, there is a lack of it in Arabic due to the shortage of existing knowledge bases. In this paper we introduce query-based, single-document summarization of Arabic text using an existing Arabic-language thesaurus and an extracted knowledge base. We use an Arabic corpus to extract domain knowledge represented by topic-related concepts/keywords and the lexical relations among them. The user's query is expanded first using the Arabic WordNet thesaurus and then by adding the domain-specific knowledge base to the expansion. For the summarization dataset, the Essex Arabic Summaries Corpus was used; it contains many topic-based articles with multiple human summaries. Performance was enhanced when using our extracted knowledge base compared to using WordNet alone.
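The two-stage query expansion can be sketched as follows; the synonym and domain-term dictionaries here are illustrative stand-ins for the Arabic WordNet thesaurus and the corpus-extracted knowledge base.

```python
# Two-stage query expansion: first with thesaurus synonyms, then with
# domain-specific terms from a knowledge base. Both dictionaries are
# illustrative stand-ins (the paper uses Arabic WordNet and a
# corpus-extracted knowledge base).

SYNONYMS = {"car": ["vehicle", "automobile"]}
DOMAIN_TERMS = {"car": ["engine", "fuel"]}

def expand(query, *dictionaries):
    expanded = list(query)
    for word in query:
        for d in dictionaries:
            for term in d.get(word, []):
                if term not in expanded:
                    expanded.append(term)
    return expanded

query = ["car", "price"]
print(expand(query, SYNONYMS))                # thesaurus stage only
print(expand(query, SYNONYMS, DOMAIN_TERMS))  # plus domain knowledge
```

Sentences are then scored against the expanded query, so domain terms absent from the literal query can still pull relevant sentences into the summary.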
Encryption technology has developed quickly, and many image encryption methods are used to protect confidential image data from unauthorized access.
In this paper, we give a brief introduction to cryptography and then propose a technique for image encryption/decryption that exploits the nature of the fractional Fourier transform (FrFT) in signal analysis, based on the multi-order FrFT, without resorting to special-purpose encryption algorithms. The security of the method is assessed through several indicators: (i) the sensitivity of the proposed technique to the encryption key, since the original data cannot be recovered without this key; (ii) the complexity of the internal operations that form the proposed encryption and decryption processes; and (iii) statistical analysis, performed by computing histogram analyses. Compared with the traditional Fourier transform, a security system based on the fractional Fourier transform is protected by the FrFT orders, which can provide additional encryption keys and make the system harder to break. The key is formed by the combination of the FrFT orders and the matrix, and the encrypted image is obtained by summing the transforms of different orders. Numerical simulation results are given to demonstrate the proposed method.
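Transform-domain encryption of this kind can be shown in miniature. The paper uses multi-order fractional Fourier transforms; for a self-contained sketch we use the ordinary DFT (the order-1 special case of the FrFT) and a random phase mask as the secret key. This simplification is ours, not the authors' scheme.

```python
import cmath, random

# Transform-domain encryption in miniature: transform a signal, multiply
# its spectrum by a secret phase mask, and invert to decrypt. The plain
# DFT stands in for the paper's multi-order FrFT (our simplification).

def dft(x, inverse=False):
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(v * cmath.exp(s * 2j * cmath.pi * k * t / n)
               for t, v in enumerate(x)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def encrypt(signal, key):
    spectrum = dft(signal)
    return [v * cmath.exp(1j * p) for v, p in zip(spectrum, key)]

def decrypt(cipher, key):
    spectrum = [v * cmath.exp(-1j * p) for v, p in zip(cipher, key)]
    return [v.real for v in dft(spectrum, inverse=True)]

rng = random.Random(42)
signal = [1.0, 2.0, 3.0, 4.0]  # one row of pixel values, for illustration
key = [rng.uniform(0, 2 * cmath.pi) for _ in signal]
recovered = decrypt(encrypt(signal, key), key)
print([round(v, 6) for v in recovered])  # round-trip recovers the signal
```

In the paper's scheme the FrFT orders themselves join the key, so an attacker must recover both the orders and the mask, which is the source of the additional security over the ordinary Fourier transform.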
Abstract—Today, the number of users of social networks is increasing; millions of users share opinions on different aspects of life every day, which makes social networks rich sources of data for opinion mining and sentiment analysis. Users have also become more interested in following news pages on Facebook. Several posts, political ones for example, have thousands of user comments that agree or disagree with the post content, and such comments can be a good indicator of community opinion about the post. For politicians, marketers and decision makers, sentiment analysis is needed to know the percentages of users who agree, disagree or are neutral with respect to a post. This raised the need to analyze users' comments on Facebook. We focused on Arabic Facebook news pages for the task of sentiment analysis. We developed a corpus for sentiment analysis and opinion mining purposes, then used different machine learning algorithms (decision trees, Support Vector Machines, and Naive Bayes) to develop a sentiment analyzer. The performance of the system using each technique was evaluated and compared with the others.
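One of the algorithms compared, Naive Bayes, is compact enough to sketch in full: a multinomial model over word counts with Laplace smoothing. The toy training comments below are our invention, not the paper's corpus.

```python
from collections import Counter
from math import log

# Minimal multinomial Naive Bayes sentiment classifier over word counts,
# one of the algorithms the paper compares. The toy training comments
# are illustrative, not drawn from the paper's corpus.

def train(labelled):
    counts = {lab: Counter() for _, lab in labelled}
    priors = Counter(lab for _, lab in labelled)
    for text, lab in labelled:
        counts[lab].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def predict(text, model):
    counts, priors, vocab = model
    total = sum(priors.values())
    def score(lab):
        n = sum(counts[lab].values())
        s = log(priors[lab] / total)
        for w in text.split():
            if w in vocab:  # Laplace smoothing over the vocabulary
                s += log((counts[lab][w] + 1) / (n + len(vocab)))
        return s
    return max(priors, key=score)

data = [("great news agree", "pos"), ("love this", "pos"),
        ("bad decision disagree", "neg"), ("terrible news", "neg")]
model = train(data)
print(predict("agree great", model))  # 'pos'
```

Despite its independence assumption, this model is a strong baseline for short comments, which is why it appears in the paper's comparison alongside SVMs and decision trees.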
Sentiment analysis is an area that has received huge attention in recent years, but most systems and research efforts are tailored to English and other Indo-European languages, so the need for systems for other languages has increased. In this work, we focus on sentiment analysis of Arabic comments on Facebook. We collected a corpus for sentiment analysis and opinion mining purposes. The corpus is in Egyptian dialect, whereas all available natural language tools target Modern Standard Arabic (MSA); we therefore transform it from Egyptian Arabic dialect to MSA, making it possible to use available Arabic natural language processing tools such as a Part Of Speech Tagger (POST) and a stemmer. We then use Support Vector Machines to train and develop a sentiment analyzer. The performance of the system using the MSA and Egyptian Arabic dialect corpora is evaluated and compared.
With the growth of web resources and the huge amount of information available, automatic summarization systems have become a necessity. Summarization is needed most when searching for information on the web, where the user targets a certain domain of interest through a query; in this case, domain-based summaries serve best. Despite plenty of research on domain-based summarization in English, there is a lack of it in Arabic due to the shortage of existing knowledge bases. In this paper an Ontology-based Summarization System for Arabic Documents, OSSAD, is introduced. Domain knowledge is extracted from an Arabic corpus and represented by topic-related concepts/keywords and the lexical relations among them. The user's query is first expanded using the Arabic WordNet and then by adding the domain-specific knowledge base to the expansion. For summarization, the decision tree algorithm C4.5 is used, trained on a set of features extracted from the original documents. For the testing dataset, the Essex Arabic Summaries Corpus (EASC) was used. Recall-Oriented Understudy for Gisting Evaluation (ROUGE) was used to compare OSSAD summaries with the human summaries and with other automatic summarization systems, showing that the proposed approach achieves promising results.
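The ROUGE evaluation can be illustrated with its simplest variant, ROUGE-1 recall: the fraction of reference-summary unigrams that also appear in the system summary, with overlap counts clipped as in ROUGE. The example sentences are invented for illustration.

```python
from collections import Counter

# ROUGE-1 recall: clipped overlapping unigram count divided by the
# number of unigrams in the human reference summary. Minimal sketch of
# the kind of metric used to evaluate OSSAD; examples are invented.

def rouge1_recall(system, reference):
    sys_counts = Counter(system.split())
    ref_counts = Counter(reference.split())
    overlap = sum(min(c, sys_counts[w]) for w, c in ref_counts.items())
    return overlap / sum(ref_counts.values())

reference = "the minister announced a new plan"
system = "a new plan was announced today"
print(rouge1_recall(system, reference))  # 4 of 6 reference words overlap
```

Full ROUGE also reports bigram (ROUGE-2) and longest-common-subsequence (ROUGE-L) variants, but all follow this recall-against-human-reference pattern.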
Cloud computing has recently emerged as a new computing paradigm based on the concept of virtualization, with the goal of creating a shared and highly scalable computing infrastructure from aggregated physical resources to deliver seamless, on-demand provisioning of software, hardware and data as services. Universities typically have large amounts of computing resources to support instructional and research activities. This paper investigates the challenges of developing a Campus Cloud based on aggregating resources across multiple universities. The requirements model and the architecture model of this cloud environment are presented, and an implementation methodology using open-source cloud middleware is also discussed.
Across the world, several millions of people use sign language as their main way of communication... more Across the world, several millions of people use sign language as their main way of communication with their society, daily they face a lot of obstacles with their families, teachers, neighbours, employers. According to the most recent statistics of World Health Organization, there are 360 million persons in the world with disabling hearing loss i.e. (5.3% of the world's population), around 13 million in the Middle East. Hence, the development of automated systems capable of translating sign languages into words and sentences becomes a necessity. We propose a model to recognize both of static gestures like numbers, letters, ...etc and dynamic gestures which includes movement and motion in performing the signs. Additionally, we propose a segmentation method in order to segment a sequence of continuous signs in real time based on tracking the palm velocity and this is useful in translating not only pre-segmented signs but also continuous sentences. We use an affordable and compact device called Leap Motion controller, which detects and tracks the hands' and fingers' motion and position in an accurate manner. The proposed model applies several machine learning algorithms as Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Artificial Neural Network (ANN) and Dynamic Time Wrapping (DTW) depending on two different features sets. This research will increase the chance for the Arabic hearing-impaired and deaf persons to communicate easily using Arabic Sign language(ArSLR). The proposed model works as an interface between hearing-impaired and normal persons who are not familiar with Arabic sign language, overcomes the gap between them and it is also valuable for social respect. The proposed model is applied on Arabic signs with 38 static gestures (28 letters, numbers (1:10) and 16 static words) and 20 dynamic gestures. 
Features selection process is maintained and we get two different features sets. For static gestures, KNN model dominates other models for both of palm features set and bone features set with accuracy 99 and 98% respectively. For dynamic gestures, DTW model dominates other models for both palm features set and bone features set with accuracy 97.4% and 96.4% respectively.
— This paper explores a method that analyzes Arabic text lexically, morphologically and semantica... more — This paper explores a method that analyzes Arabic text lexically, morphologically and semantically. The highly agglutinative nature of Arabic diminishes the effectiveness of conventional Bag of Words (BoW) which considered insufficient to form a representative vector for large scale social media content as it ignores possible relations between terms. The proposed work overcomes this limitation by incorporating different feature sets and performing cascaded analysis that fundamentally contains lexical analysis, morphological analysis, and semantic analysis. ICA is used to handle Arabic morphological pluralism. AWN semantically is exploited to extract generic and semantic relations for the lexical units over all the dataset. Moreover, specific feature extraction components are integrated to account for the linguistic characteristics of Arabic. Finally, we can leverage from standard social media features such as emoticons and smileys. So, a system for automatic Emotion Detection (ED) and mood recognitions was built to provide further sentiment insight and classification power. The optimal feature combination for each of the different emotions was determined using a combination of Machine Learning (ML) and rule-based methods. Experimentally, the results revealed that incorporation of multifaceted analysis is superior to classical BoW representation, in terms of feature reduction (31% reduction percentage) and accuracy results (F-Measure was increased up to 89%).
Due to the increasing popularity of contents of social media platforms, the number of posts and m... more Due to the increasing popularity of contents of social media platforms, the number of posts and messages is steadily increasing. A huge amount of data is generated daily as an outcome of the interactions between fans of the networking platforms. It becomes extremely troublesome to find the most relevant, interactive information for the subscribers. The aim of this work is to enable the users to get a powerful brief of comments without reading the entire list. This paper opens up a new field of short text summarization (STS) predicated on a hybrid ant colony optimization coming with a mechanism of local search, called ACO-LS-STS, to produce an optimal or near-optimal summary. Initially, the graph coloring algorithm, called GC-ISTS, was employed before to shrink the solution area of ants to small sets. Evidently , the main purpose of using the GC algorithm is to make the search process more facilitated, faster and prevents the ants from falling into the local optimum. First, the dissimilar comments are assembled together into the same color, at the same time preserving the information ratio as for an original list of comment. Subsequently, activating the ACO-LS-STS algorithm, which is a novel technique concerning the extraction of the most interactive comments from each color in a parallel form. At the end, the best summary is picked from the best color. This problem is formalized as an optimization problem utilizing GC and ACO-LS to generate the optimal solution. Eventually, the proposed algorithm was evaluated and tested over a collection of Facebook messages with their associated comments. Indeed, it was found that the proposed algorithm has an ability to capture a good solution that is guaranteed to be near optimal and had realized notable performance in comparison with traditional document summarization algorithms.
User-contributed comments (UCC) are one of the signs of the social media. Due to the high popular... more User-contributed comments (UCC) are one of the signs of the social media. Due to the high popularity of social media, it becomes already exceedingly difficult to find the most relevant, interactive information for the users. The motivation behind this work is the fact that users may interest to get an effi-cacious brief understanding of comments without reading the entire comments. This paper opens up an unconventional field of comment's summarization predicated on Ant colony optimization mixed with Jensen–Shannon divergence (ACO-JSD). ACO-JSD is a proposed novel technique concerning the extraction the most interactive comments from the huge amount of concise comment's perspectives. This problem is unfastened utilizing ACO to generate the optimal solution. Moreover, the JSD model is employed to ensure a summary could capture the essence of the original comments. First, an acyclic semi-graph has been constructed under two constraints: (1) the longest comments will be isolated from the graph, (2) The more similarity between two comments, the greater the chance that mutual connectivity is eliminated. Next, a feasible solution is constructed to select the high-quality summarization. Finally, the proposed algorithm has been evaluated over a collection of Facebook posts with their associated comments and an excellent performance in comparison with traditional document summarization algorithms was obtained. Accordingly, the computational results show the efficiency of the proposed algorithm, as well as its ability to find a good summary that is guaranteed to be near-optimal.
— Software is nowadays a critical component of our lives and everyday-work working activities. Ho... more — Software is nowadays a critical component of our lives and everyday-work working activities. However, as the technological infrastructure of the modern world evolves a great challenge arises for developing high quality software systems with increasing size and complexity. Software engineers and researchers are striving to meet this challenge by developing and implementing software engineering methodologies able to deliver software products of high quality, within budget and time constraints. The field of machine learning in software engineering has recently emerged to provide means for addressing, studying, analyzing, and understanding critical software development issues and at the same time to offer mature machine learning techniques such as artificial neural network, Bayesian networks, decision trees, fuzzy logic, genetic algorithms, and rule induction. Machine learning algorithms have proven to be of great practical value to software engineering. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development tasks could be formulated as learning problems and approached in terms of learning algorithms. In this paper, we first take a look at the characteristics and applicability of some frequently utilized machine learning algorithms. We then present the application of machine learning in the different phases of software engineering that include project planning, requirements analysis, design, implementation, testing and maintenance.
COMPARATIVE STUDY OF SOFTWARE ENGINEERING PROCESSES IN EGYPTIAN CMMI COMPANIES. The Egyptian government has paid special attention to the software industry, considering that Egypt has competitive advantages that make this emerging industry a promising one. The State has therefore supported Egyptian companies in adopting the Capability Maturity Model Integration (CMMI) quality model. Since 2009, more than thirty companies have obtained CMMI ratings at different levels. However, these companies suffer from the lack of a mechanism for exchanging experience and information among themselves, although they may be similar in the culture of their engineers and perhaps in the nature and size of their software projects. In this research we provide a survey to gauge the quality of the methods, tools and processes used in these Egyptian CMMI-rated companies, and we analyze the results to reach recommendations aimed at enriching the software industry in Egypt.
Software Process Improvement (SPI) projects incorporate organizational transition risks which may cause many process improvement initiatives to fail. To mitigate these risks, an iterative and incremental approach called 'Process Increments' is used to manage the SPI project. In this paper, the Configuration Management process area is used as a case study to show the difference in improvement results when the 'Process Increments' approach is used. Results are compared with similar projects which did not use an incremental approach. This approach shifts the focus from adopting new techniques to achieving added value for the organization, and it shows excellent results in the effective and efficient implementation of Software Configuration Management. Through our proposed process increment model, we achieved a significant increase in the performance of the software process improvement.
Cairo is experiencing traffic congestion that places it among the worst in the world. Obviously, it is difficult, if not impossible, to solve the transportation problem outright because it is a multi-dimensional problem, but it is worthwhile to reduce the waste of money and the associated waste of time resulting from congestion. One way to accomplish this is to provide drivers and passengers with current traffic information throughout their trips. Travel time prediction is becoming increasingly important, and travel time is one of the most important pieces of traffic information for both drivers and passengers. It is difficult to measure travel time directly, so the present study estimates travel time from speed. In this paper we present a model-based approach for travel time prediction that provides passengers and drivers with the fastest routes based on the predicted travel time. The proposed method uses DSmT (Dezert-Smarandache Theory) as a fusion technique and an Artificial Neural Network as the mining tool. The estimates are corroborated against actual values, and the results show that the model performs well and gives acceptable predictions.
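The DSmT fusion and the neural network are not reproduced here; the snippet below only sketches the final step the abstract describes, deriving travel time from predicted speed and picking the fastest route. The routes and numbers are hypothetical.

```python
def route_travel_time(segments):
    """Sum per-segment travel times, where each segment's time is its
    length (km) divided by the predicted speed (km/h) on that segment."""
    return sum(length / speed for length, speed in segments)

# Hypothetical routes: lists of (length_km, predicted_speed_kmh) segments.
route_a = [(2.0, 20.0), (3.0, 30.0)]   # 0.10 h + 0.10 h = 0.20 h
route_b = [(4.0, 50.0), (1.0, 10.0)]   # 0.08 h + 0.10 h = 0.18 h

# The fastest route is the one with the smallest total predicted time.
fastest = min([route_a, route_b], key=route_travel_time)
```

In the paper's setting the per-segment speeds would come from the ANN predictions fused via DSmT rather than being fixed constants.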
Story Point is a relative measure heavily used for agile estimation of size. The team decides how big a point is and, based on that size, determines how many points each work item is worth. In many organizations, the use of story points for similar features can vary from one team to another, and successfully so, based on the teams' sizes, skill sets and relative use of this tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery. This creates a challenge for CMMI organizations adopting Agile in software estimation and planning. In this paper, a process and methodology that guarantee relativity in software sizing while using agile story points are introduced. The proposed process and methodology are applied in a CMMI level-three company on different projects; by that, the story point is used at the level of the organization, not the project. Then, the performance of the sizing process is measured to show a significant improvement in sizing accuracy after adopting the agile story point in CMMI organizations. To complete the estimation cycle, an improvement in story-point-based effort estimation is also introduced, and its performance effect is measured.
The Foreign Exchange market is a worldwide market for exchanging currencies, with a 3.98 trillion US dollar daily turnover. With such a massive turnover, the probability of profit is very high; however, trading in such a massive market requires deep knowledge, skill and full commitment in order to achieve high profit. The purpose of this work is to design a smart agent that 1) acquires Foreign Exchange market prices, 2) pre-processes them, 3) predicts the future trend using a Genetic Programming approach and an Adaptive Neuro-Fuzzy Inference System, and 4) makes a buy/sell decision to maximize profitability with no human supervision.
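As a rough sketch of the agent's acquire/predict/decide flow, the snippet below substitutes a plain moving-average crossover for the paper's Genetic Programming and ANFIS predictors; the crossover is a stand-in only, and the price series is hypothetical.

```python
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def decide(prices, fast=3, slow=6):
    """Toy stand-in for the paper's GP/ANFIS trend predictor: a
    moving-average crossover. Returns 'buy' when the short-term average
    is above the long-term average, 'sell' when below, else 'hold'."""
    if len(prices) < slow:
        return "hold"          # not enough history yet
    f, s = sma(prices, fast), sma(prices, slow)
    if f > s:
        return "buy"
    if f < s:
        return "sell"
    return "hold"

uptrend = [1.10, 1.11, 1.12, 1.14, 1.15, 1.17, 1.19]
decision = decide(uptrend)     # rising series, so the short SMA leads
```

The real agent would replace `decide` with the evolved GP expression or the trained ANFIS model while keeping the same loop shape.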
With the problem of increased web resources and the huge amount of information available, the necessity of automatic summarization systems appeared. Summarization is needed the most when searching for information on the web, where the user targets a certain domain of interest according to his query; in this case domain-based summaries serve best. Despite the existence of plenty of research work on domain-based summarization in English, there is a lack of such work in Arabic due to the shortage of existing knowledge bases. In this paper we introduce a query-based, single-document summarization approach for Arabic text using an existing Arabic language thesaurus and an extracted knowledge base. We use an Arabic corpus to extract domain knowledge represented by topic-related concepts/keywords and the lexical relations among them. The user's query is expanded once by using the Arabic WordNet thesaurus alone and then again by adding the domain-specific knowledge base to the expansion. For the summarization dataset, the Essex Arabic Summaries Corpus was used; it has many topic-based articles with multiple human summaries. The performance appeared to be enhanced when using our extracted knowledge base rather than the Arabic WordNet alone.
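The two-stage query expansion described above can be sketched as follows; the synonym tables are hypothetical English stand-ins for the Arabic WordNet and the extracted domain knowledge base (real entries would be Arabic terms).

```python
# Hypothetical lookup tables standing in for Arabic WordNet and the
# corpus-extracted domain knowledge base.
WORDNET = {"car": {"automobile", "vehicle"}}
DOMAIN_KB = {"car": {"engine", "sedan"}}

def expand_query(terms, *resources):
    """Expand each query term with related terms from every resource."""
    expanded = set(terms)
    for resource in resources:
        for term in terms:
            expanded |= resource.get(term, set())
    return expanded

q1 = expand_query(["car"], WORDNET)             # WordNet-only expansion
q2 = expand_query(["car"], WORDNET, DOMAIN_KB)  # plus the domain KB
```

The summarizer would then score sentences against the expanded term set, so the richer expansion `q2` is what lets domain-specific sentences surface.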
Encryption technology has developed quickly, and many image encryption methods have been used to protect confidential image data from unauthorized access.
In this paper, we give a brief introduction to cryptography and then propose a technique for image encryption/decryption based on the multi-order Fractional Fourier Transform (FrFT), exploiting the nature of the FrFT in signal analysis rather than special-purpose encryption algorithms, while taking the security of the method into account. In this research we identify three indicators to measure the security of the encryption technique: (i) sensitivity of the proposed technique to the encryption key, so that the original data can be recovered only in the presence of this key; (ii) the complexity of the internal operations that form the proposed encryption and decryption processes; and (iii) statistical analysis, performed by computing and comparing histograms. Compared with the traditional Fourier transform, a security system based on the fractional Fourier transform is protected by the FrFT order, which provides additional encryption keys and makes the cipher harder to break. The key is formed by combining the orders of the Fractional Fourier Transform and the matrix, and the encrypted image is obtained by summing the different orders. Numerical simulation results are given to demonstrate the proposed method.
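As a minimal sketch of order-based FrFT encryption, the snippet below uses one common discretization, the fractional power of the unitary DFT matrix computed by eigendecomposition; the paper's exact transform and key construction may differ, and the 8-sample signal stands in for an image row.

```python
import numpy as np

def frft_matrix(n, a):
    """Fractional power F^a of the unitary DFT matrix, one common
    discretization of the fractional Fourier transform.
    F = V D V^-1  =>  F^a = V D^a V^-1."""
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix
    vals, vecs = np.linalg.eig(F)
    return vecs @ np.diag(vals ** a) @ np.linalg.inv(vecs)

def encrypt(signal, orders):
    """Apply fractional transforms of secret orders; the orders are the key."""
    out = signal.astype(complex)
    for a in orders:
        out = frft_matrix(len(out), a) @ out
    return out

def decrypt(cipher, orders):
    """Undo each order in reverse with the negated exponent."""
    out = cipher
    for a in reversed(orders):
        out = frft_matrix(len(out), -a) @ out
    return out

key = (0.7, 1.3)                   # secret fractional orders
x = np.arange(8, dtype=float)      # stand-in for one image row
restored = decrypt(encrypt(x, key), key)
```

Because F^a composed with F^-a is the identity, decryption recovers the signal only when the exact fractional orders are supplied, which is the key-sensitivity property the abstract emphasizes.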
Today, the number of users of social networks is increasing. Millions of users share opinions on different aspects of life every day, so social networks are rich sources of data for opinion mining and sentiment analysis. Users have also become more interested in following news pages on Facebook. Several posts, political ones for example, have thousands of user comments that agree or disagree with the post content. Such comments can be a good indicator of the community's opinion about the post. For politicians, marketers and decision makers, sentiment analysis is required to know the percentages of users who agree with, disagree with, or are neutral towards a post. This raised the need to analyze users' comments on Facebook. We focused on Arabic Facebook news pages for the task of sentiment analysis. We developed a corpus for sentiment analysis and opinion mining purposes. Then, we used different machine learning algorithms (decision trees, support vector machines, and Naive Bayes) to develop a sentiment analyzer. The performance of the system using each technique was evaluated and compared with the others.
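Of the classifiers compared, Naive Bayes is the simplest to sketch from scratch. The snippet below is a minimal multinomial Naive Bayes with add-one smoothing; the toy English comments are hypothetical stand-ins for the Arabic corpus.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes with add-one smoothing."""
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)   # per-class word counts
        self.class_counts = Counter(labels)       # class priors
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        n, v = sum(self.class_counts.values()), len(self.vocab)
        best, best_lp = None, float("-inf")
        for label, count in self.class_counts.items():
            lp = math.log(count / n)              # log prior
            total = sum(self.word_counts[label].values())
            for w in text.lower().split():        # log likelihoods, smoothed
                lp += math.log((self.word_counts[label][w] + 1) / (total + v))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayes().fit(
    ["great decision i agree", "totally agree well done",
     "bad decision i disagree", "this is terrible i disagree"],
    ["pos", "pos", "neg", "neg"])
label = clf.predict("i agree well done")
```

In practice an off-the-shelf implementation would be used; the point here is only the shape of the per-class word statistics the paper's classifiers learn from the comment corpus.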
Sentiment analysis is an area that has received huge attention in recent years, but most systems and research are tailored to English and other Indo-European languages, so the need for building systems for other languages has increased. In this work, we focus on sentiment analysis of Arabic comments on Facebook. We collected a corpus for sentiment analysis and opinion mining purposes. The corpus is in the Egyptian dialect, whereas all available natural language tools target Modern Standard Arabic (MSA). We therefore transform the corpus from the Egyptian Arabic dialect to Modern Standard Arabic, which makes it possible to use the available Arabic natural language processing tools such as a Part Of Speech Tagger (POST) and a stemmer. After that, we use support vector machines to train and develop a sentiment analyzer. The performance of the system using the MSA and the Egyptian Arabic dialect corpora is evaluated and compared.
With the problem of increased web resources and the huge amount of information available, the necessity of automatic summarization systems appeared. Summarization is needed the most when searching for information on the web, where the user targets a certain domain of interest according to his query, so domain-based summaries serve best. Despite the existence of plenty of research work on domain-based summarization in English, there is a lack of such work in Arabic due to the shortage of existing knowledge bases. In this paper an Ontology-based Summarization System for Arabic Documents, OSSAD, is introduced. Domain knowledge is extracted from an Arabic corpus and represented by topic-related concepts/keywords and the lexical relations among them. The user's query is first expanded by using the Arabic WordNet and then by adding the domain-specific knowledge base to the expansion. For summarization, the decision tree algorithm (C4.5) is used, trained on a set of features extracted from the original documents. For the testing dataset, the Essex Arabic Summaries Corpus (EASC) was used. Recall-Oriented Understudy for Gisting Evaluation (ROUGE) was used to compare OSSAD summaries with the human summaries and with other automatic summarization systems, showing that the proposed approach achieves promising results.
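As an illustration of the ROUGE evaluation step, the snippet below computes ROUGE-1 recall, the unigram-overlap variant; the example sentences are ours, not from EASC.

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: the fraction of the reference summary's unigrams
    (counted with multiplicity) that also appear in the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

score = rouge1_recall("the system extracts key sentences",
                      "the system selects key sentences from the text")
```

Full ROUGE also reports precision, F-measure and n-gram variants such as ROUGE-2 and ROUGE-L, but recall over unigrams is the core quantity being averaged against the multiple human summaries.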
Cloud Computing has recently emerged as a new computing paradigm based on the concept of virtualization, with the goal of creating a shared and highly scalable computing infrastructure from aggregated physical resources to deliver seamless and on-demand provisioning of software, hardware, and data as services. Universities typically have large amounts of computing resources to support instructional and research activities. This paper investigates the challenges of developing a Campus Cloud based on aggregating resources in multiple universities. The requirements model and the architecture model of this cloud environment are presented. An implementation methodology using open source cloud middleware is also discussed.
Online blogs allow their users to write and read text-based posts known as "articles", and blogging has become one of the most commonly used forms of social networking. However, an important problem arises: the articles returned when searching for a topic phrase are sorted only by recency, not relevancy. This forces the user to manually read through the articles in order to understand what they primarily say about the particular topic. Some strategies have been developed for clustering English text, but Arabic text clustering is still an active research area. A major challenge in article clustering is the extremely high dimensionality. In this paper we propose a new method for feature reduction using stemming, the Arabic WordNet dictionary and Arabic diacritics, as well as a new method for measuring similarity that uses Arabic WordNet relations to enhance the accuracy of clustering.
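The idea of using lexical relations both to shrink the feature space and to sharpen similarity can be sketched as follows; the synonym map is a hypothetical English stand-in for Arabic WordNet relations.

```python
import math
from collections import Counter

# Hypothetical synonym map standing in for Arabic WordNet relations.
SYNONYMS = {"automobile": "car", "vehicle": "car"}

def features(text):
    """Map each word to a canonical synonym before counting, so that
    synonymous words collapse into one feature (dimensionality reduction)."""
    return Counter(SYNONYMS.get(w, w) for w in text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Without the synonym mapping these two texts would share fewer features.
sim = cosine(features("the car was fast"), features("the automobile was fast"))
```

A clustering algorithm such as k-means would then operate on these reduced, synonym-collapsed vectors, which is where the accuracy gain the abstract claims would come from.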